Generative-AI Apps & ChatGPT: Potential Risks and Mitigation Strategies


June 22, 2023 | Hacker News

Losing sleep over Generative-AI apps? You are not alone, and not without reason. According to the Astrix Security Research Group, the average mid-size organization already has 54 Generative-AI integrations into core systems such as Slack, GitHub, and Google Workspace, and this number is expected to keep growing. Read on to understand the potential risks and how to minimize them.

Book a Generative-AI Discovery session with an Astrix Security expert (free – no strings attached – no agents & no friction)

“Hey ChatGPT, review and optimize our source code”

“Hey Jasper.ai, email a summary of all our net new subscribers from the quarter”

“Hey Otter.ai, wrap up our Zoom board meeting”

In this era of financial chaos, businesses and employees alike are constantly looking for tools to automate work processes and increase efficiency and productivity by connecting third-party applications to core business systems such as Google Workspace, Slack, and GitHub via API keys, OAuth tokens, service accounts, and more. The rise of Generative-AI apps and GPT services has exacerbated this problem, with employees from all departments quickly adding the latest and greatest AI apps to their productivity arsenal, without the security team's knowledge.
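To make that exposure concrete, here is a minimal sketch, assuming a Google Workspace domain and pre-authorized admin credentials (setup omitted) with the admin.directory.user.security scope, that uses the Admin SDK Directory API's tokens.list call to enumerate the OAuth grants a user has handed to third-party apps. The user email is a placeholder:

```python
# A minimal sketch: list third-party OAuth grants for one Workspace user.
# Assumes `creds` is an authorized admin credential (setup omitted) with the
# https://www.googleapis.com/auth/admin.directory.user.security scope.
from googleapiclient.discovery import build

def list_oauth_grants(creds, user_email):
    service = build("admin", "directory_v1", credentials=creds)
    resp = service.tokens().list(userKey=user_email).execute()
    for token in resp.get("items", []):
        # displayText is the app's name; scopes shows what it can touch.
        print(token.get("displayText"), token.get("clientId"), token.get("scopes"))

# Example call (hypothetical user):
# list_oauth_grants(creds, "alice@example.com")
```

Running this per user gives a first-pass inventory of which apps, AI-powered or otherwise, already hold tokens into your environment.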

These range from engineering applications such as code review and optimization, to marketing, design, and sales applications such as content and video creation, image generation, and email automation. With ChatGPT being the fastest-growing application in history, and AI-powered apps being downloaded 1,506% more than last year, the security risks of using, and worse, connecting these often-unvetted applications to core business systems have led to sleepless nights for security leaders.

Gen-AI application connectivity to your organization's core systems

Gen-AI application risks

AI-driven applications present two main concerns for security leaders:

1. Data Sharing through apps like ChatGPT: AI's strength lies in data, but this strength can become a weakness if mismanaged. Employees may inadvertently share sensitive, business-critical information, including customer PII and intellectual property such as code. Such leaks can expose organizations to data breaches, competitive disadvantage, and compliance violations. And it's not a fairy tale – just ask Samsung.

Samsung Leaks and ChatGPT – a case for caution

Samsung reported three separate leaks of highly sensitive information by three employees using ChatGPT for productivity purposes. One employee shared confidential source code to check it for errors, another shared code for optimization, and a third shared a meeting recording to turn into notes for a presentation. All of this information can now be used by ChatGPT to train its models and may surface across the web.
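One practical mitigation is to scrub obvious secrets and PII from text before it ever reaches an external LLM. The sketch below illustrates the idea; the patterns and the redact function are illustrative assumptions, not an exhaustive filter:

```python
import re

# Illustrative patterns only -- a real deployment needs far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|ghp|xoxb)-[A-Za-z0-9_-]{10,}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches with a tag so the prompt stays useful but leak-free."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

prompt = "Review this config: api_key=sk-abc123def456ghij, owner bob@corp.com"
print(redact(prompt))
# -> Review this config: api_key=[REDACTED-API_KEY], owner [REDACTED-EMAIL]
```

A redaction gateway like this does not stop determined misuse, but it catches the accidental paste of a key or customer email before it leaves the building.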

2. Unverified Generative-AI Applications: Not all generative AI apps come from verified sources. Recent research by Astrix reveals that employees are increasingly connecting these AI-based applications (which usually have high-privilege access) to core systems such as GitHub, Salesforce, and others – raising significant security concerns.
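As a concrete way to check one such core system, the sketch below lists the GitHub App installations in an organization via the REST API's GET /orgs/{org}/installations endpoint, so unverified AI apps with broad permissions can be reviewed. The org name and token are placeholders, and the endpoint requires organization-admin access:

```python
# A minimal sketch: enumerate GitHub App installations in an org.
# ORG and TOKEN are placeholders; the token needs org admin rights.
import requests

ORG = "your-org"
TOKEN = "ghp_..."  # placeholder personal access token

resp = requests.get(
    f"https://api.github.com/orgs/{ORG}/installations",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    timeout=10,
)
resp.raise_for_status()
for inst in resp.json().get("installations", []):
    # app_slug identifies the app; permissions shows what it was granted.
    print(inst["app_slug"], inst["permissions"])
```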

Wide range of Generative AI applications

Book a Generative-AI Discovery session with an Astrix Security expert (free – no strings attached – no agents & no friction)

Real-life examples of risky Gen-AI integrations:

In the image below, you can see details from the Astrix platform about a risky Gen-AI integration connected to an organization's Google Workspace environment.

This Google Workspace integration, "GPT For Gmail", was developed by an untrusted developer and holds elevated permissions to the organization's Gmail account:

Among the scopes granted to the integration is "mail.all", which allows the third-party application to read, write, send, and delete email – highly sensitive privileges:

Information about the integration's vendor, which is untrusted:
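A quick way to surface grants like this yourself is sketched below on hypothetical data shaped like the Admin SDK tokens.list response from the earlier sketch. The assumption that "mail.all" maps to Gmail's full-access scope, https://mail.google.com/, is mine, not the article's:

```python
# A sketch over data shaped like the Admin SDK tokens.list response.
# Assumption: the "mail.all" permission corresponds to Gmail's
# full-access scope, https://mail.google.com/.
FULL_MAIL_SCOPE = "https://mail.google.com/"

tokens = [  # hypothetical sample data
    {"displayText": "GPT For Gmail", "scopes": [FULL_MAIL_SCOPE, "openid"]},
    {"displayText": "Calendar Sync",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]

risky = [t for t in tokens if FULL_MAIL_SCOPE in t.get("scopes", [])]
for t in risky:
    print(f"RISK: {t['displayText']} can read, write, send, and delete email")
```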

How Astrix helps minimize your AI risk

To safely navigate the exciting but complex AI landscape, security teams need strong non-human identity management: visibility into the third-party services your employees connect to core systems, control over those services' permissions, and the ability to properly evaluate the security risks they pose. With Astrix, you can now:

Astrix Connectivity Map
  • Get a complete inventory of all the AI tools your employees use and that access your core systems, and understand the risks associated with them.
  • Remove security bottlenecks with automated guardrails: understand the business value of each non-human connection, including usage rate (frequency, last maintenance, usage volume), connection owner, who in the company uses the integration, and market info.
  • Reduce your attack surface: ensure all AI-based non-human identities accessing your core systems have least-privileged access, and remove unused connections and untrusted application vendors.
  • Detect anomalous activity and remediate risk: Astrix analyzes and detects malicious behavior such as stolen tokens, internal application abuse, and untrusted vendors in real time via IP, user agent, and data access anomalies (a simplified sketch follows this list).
  • Fix it faster: Astrix eases the burden on your security team with automated remediation workflows and by instructing end users to resolve their security issues independently.
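To make the anomaly-detection idea concrete, here is a deliberately simplified sketch, not Astrix's actual method, that flags token activity coming from an IP or user agent outside a learned baseline. The baselines and events are made up:

```python
# A deliberately simplified illustration of token-anomaly flagging,
# not Astrix's actual detection logic. Baselines and events are made up.
baseline = {
    "token-123": {"ips": {"52.10.0.4"}, "agents": {"gpt-for-gmail/1.2"}},
}

events = [
    {"token": "token-123", "ip": "52.10.0.4", "agent": "gpt-for-gmail/1.2"},
    {"token": "token-123", "ip": "185.220.101.7", "agent": "curl/8.0"},
]

for event in events:
    known = baseline.get(event["token"], {"ips": set(), "agents": set()})
    if event["ip"] not in known["ips"] or event["agent"] not in known["agents"]:
        print(f"ALERT: anomalous use of {event['token']} "
              f"from {event['ip']} ({event['agent']})")
```

A production system would learn these baselines from traffic and score deviations rather than hard-match sets, but the principle is the same: a stolen token rarely looks like its legitimate owner.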

Book a Generative-AI Discovery session with an Astrix Security expert (free – no strings attached – no agents & no friction)

Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.


