How Generative AI Can Overcome SaaS Authentication Protocols — And Effective Ways To Prevent Other Major AI Risks in SaaS

Security and IT teams are routinely forced to adopt software before fully understanding the security risks. And AI tools are no exception.

Employees and business leaders alike are flocking to generative AI software and similar programs, often unaware of the major SaaS security vulnerabilities they are introducing into the enterprise. A February 2023 generative AI survey of 1,000 executives revealed that 49% of respondents were already using ChatGPT, and 30% planned to adopt the ubiquitous generative AI tool soon. Ninety-nine percent of those using ChatGPT claimed some form of cost savings, and 25% attested to cost reductions of $75,000 or more. Since the researchers conducted this survey only three months after ChatGPT’s general availability, usage of ChatGPT and other AI tools today is undoubtedly higher.

Security and risk teams are already overwhelmed protecting their SaaS estate (which has effectively become the operating system of the business) from common vulnerabilities such as misconfigurations and over-permissioned users. This leaves little bandwidth for assessing the AI tool threat landscape, the unsanctioned AI tools currently in use, and the implications for SaaS security.

With threats emerging both outside and within organizations, CISOs and their teams must understand the AI tool risks most relevant to SaaS systems — and how to mitigate them.

1 — Threat Actors Can Exploit Generative AI to Cheat SaaS Authentication Protocols

As ambitious employees find ways for AI tools to help them achieve more with less, cybercriminals do too. Using generative AI with malicious intent is inevitable, and it is already possible.

AI’s ability to impersonate humans convincingly makes weak SaaS authentication protocols especially vulnerable to hacking. According to Techopedia, threat actors can abuse generative AI to guess passwords, crack CAPTCHAs, and build more potent malware. Although these methods might sound limited in their attack range, the January 2023 CircleCI security breach was traced to a single engineer’s laptop being infected with malware.

Similarly, three leading technology academics recently proposed plausible hypotheses for generative AI executing phishing attacks:

“A hacker uses ChatGPT to generate personalized spear-phishing messages based on your company’s marketing materials and phishing messages that have been successful in the past. These succeed in fooling people who have been thoroughly trained in email awareness, because they look nothing like the messages they’ve been trained to detect.”

Bad actors will avoid the most fortified entry points — usually the SaaS platforms themselves — and instead target the more vulnerable side doors. They won’t bother with the latch and guard dog at the front door when they can slip in through an unlocked patio door around back.

Relying on authentication alone to keep SaaS data secure is not a viable option. In addition to implementing multi-factor authentication (MFA) and physical security keys, security and risk teams need ongoing visibility and monitoring of the entire SaaS perimeter, along with automated alerts for suspicious login activity.
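Automated alerting on suspicious login activity often starts with simple heuristics. The sketch below is a minimal illustration, not any vendor’s detection logic: it flags “impossible travel,” i.e. consecutive logins whose implied speed exceeds what commercial air travel could plausibly cover.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    user: str
    ts: float   # epoch seconds
    lat: float  # latitude of the source IP's geolocation
    lon: float  # longitude of the source IP's geolocation

def distance_km(a: Login, b: Login) -> float:
    # Great-circle distance between two login locations (haversine formula).
    la1, lo1, la2, lo2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((la2 - la1) / 2) ** 2 + cos(la1) * cos(la2) * sin((lo2 - lo1) / 2) ** 2
    return 6371 * 2 * asin(sqrt(h))

def impossible_travel(prev: Login, cur: Login, max_kmh: float = 900.0) -> bool:
    """Flag a login pair whose implied travel speed exceeds max_kmh."""
    hours = max((cur.ts - prev.ts) / 3600, 1e-6)  # guard against division by zero
    return distance_km(prev, cur) / hours > max_kmh
```

In practice a check like this runs over the stream of identity-provider login events and feeds the alerting pipeline, while MFA and physical keys limit the blast radius when an alert fires too late.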

This visibility is necessary not only to catch cybercriminals’ generative AI activity, but also to track employees’ connections of AI tools to SaaS platforms.

2 — Employees Connect Unlicensed AI Tools to SaaS Platforms Without Considering the Risks

Employees now rely on unapproved AI tools to make their jobs easier. After all, who wants to work harder when AI tools improve effectiveness and efficiency? Like any form of shadow IT, employee adoption of AI tools is driven by the best of intentions.

For example, an employee believes they can manage their time and tasks better, but trying to monitor and analyze their task management and meeting engagements feels like a huge undertaking. AI can easily perform such monitoring and analysis and provide recommendations almost instantly, giving employees the productivity boost they crave in no time. Signing up for an AI scheduling assistant is, from an end-user perspective, as simple and (seemingly) harmless as:

  • Sign up for a free trial or register with a credit card
  • Approve the AI tool’s Read/Write permission request
  • Connect the AI scheduling assistant to their company’s Gmail, Google Drive, and Slack accounts

However, this process creates an invisible conduit to the organization’s most sensitive data. The AI-to-SaaS connection inherits the user’s permission settings, allowing any hacker who manages to compromise the AI tool to move silently and laterally across the connected, legitimate SaaS systems. A hacker can access and exfiltrate data until the suspicious activity is noticed and acted upon, which can take anywhere from weeks to years.

AI tools, like most SaaS applications, use OAuth access tokens for ongoing connections to SaaS platforms. Once authorization is complete, the AI scheduling assistant’s token maintains consistent API-based communication with Gmail, Google Drive, and Slack accounts — all without requiring users to log in or authenticate periodically. A threat actor who can leverage these OAuth tokens has found the SaaS equivalent of a spare key “hidden” under the doormat.
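To make the mechanics concrete, here is a sketch of the OAuth 2.0 refresh-token exchange that keeps such a connection alive. The token endpoint shown is Google’s real one; the client credentials and token values are placeholders, and the helper only builds the request rather than sending it.

```python
from urllib.parse import urlencode

# Real Google OAuth 2.0 token endpoint; all credential values below are placeholders.
TOKEN_URL = "https://oauth2.googleapis.com/token"

def build_refresh_request(client_id: str, client_secret: str,
                          refresh_token: str) -> tuple[str, bytes]:
    """Build the POST an app sends to swap its long-lived refresh token
    for a fresh short-lived access token. No user interaction is needed,
    which is exactly why a leaked token is so valuable to an attacker."""
    body = urlencode({
        "grant_type": "refresh_token",
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
    }).encode()
    return TOKEN_URL, body
```

An app repeats this exchange silently whenever its access token expires, which is why revoking the underlying grant, not just one access token, is what actually severs an AI-to-SaaS connection.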

Figure 1: Illustration of an AI tool establishing an OAuth token connection with a major SaaS platform. Credit: AppOmni

Security and risk teams often lack the SaaS security tooling needed to monitor or control such an attack surface. Legacy tools such as cloud access security brokers (CASBs) and secure web gateways (SWGs) cannot detect or warn of AI-to-SaaS connectivity.

But this AI-to-SaaS connection isn’t the only way employees can accidentally expose sensitive data to the outside world.

3 — Sensitive Information Shared with Generative AI Tools Is Vulnerable to Leakage

Data that employees send to generative AI tools — often with the goal of speeding up work and improving its quality — could end up in the hands of the AI provider itself, the organization’s competitors, or the general public.

Because most generative AI tools are free and exist outside the organization’s technology stack, security and risk professionals have no oversight of or security controls over these tools. This is a growing concern among enterprises, and generative AI data leaks have already occurred.

A March 2023 incident accidentally allowed ChatGPT users to view other users’ chat titles and history in the website sidebar. This raised concerns not only about the leakage of sensitive organizational information but also about user identities being exposed and compromised. OpenAI, the developer of ChatGPT, subsequently announced the ability for users to turn off chat history. In theory, this option stops ChatGPT from sending data back to OpenAI for product improvement, but it requires employees to manage their own data retention settings. Even with this setting enabled, OpenAI retains conversations for 30 days and exercises the right to review them “for abuse” before they expire.

This bug and the fine print of data retention have not gone unnoticed. In May, Apple restricted employees from using ChatGPT over fears of confidential data leakage. While the tech giant took this stance as it builds its own generative AI tools, it joins companies like Amazon, Verizon, and JPMorgan Chase in the ban. Apple has also directed its developers to avoid GitHub Copilot, owned by top competitor Microsoft, for automating code.

Common generative AI use cases are fraught with data leak risks. Consider a product manager asking ChatGPT to make the messaging in a product roadmap document more engaging. That roadmap almost certainly contains information and product plans never meant for public consumption, let alone the eyes of competitors. A similar ChatGPT bug — which an organization’s IT team can neither anticipate nor fix — could result in serious data exposure.
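One partial mitigation is scrubbing prompts before they ever leave the organization. The sketch below is deliberately minimal; the regex patterns are illustrative assumptions, not a complete data-loss-prevention ruleset.

```python
import re

# Illustrative patterns only; a real DLP filter would cover far more
# (names, customer IDs, source code, roadmap keywords, and so on).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace obviously sensitive substrings with placeholders before
    the prompt is sent to an external generative AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

A proxy or browser extension applying this kind of filter catches careless pastes, though it cannot stop a determined employee and is no substitute for governance of the tools themselves.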

Generative AI on its own does not pose a SaaS security risk. But what is isolated today is connected tomorrow. Ambitious employees will naturally seek to extend the usefulness of unsanctioned generative AI tools by integrating them into SaaS applications. At the moment, a ChatGPT integration with Slack demands more work than the average Slack connection, but it is not a very high bar for a smart, motivated employee. Such an integration uses OAuth tokens exactly like the AI scheduling assistant example described above, exposing organizations to the same risks.

How Organizations Can Protect Their SaaS Environments from Significant AI Tool Risks

Organizations need guardrails for data governance of AI tools, especially for their SaaS environment. This requires comprehensive SaaS security tools and proactive cross-functional diplomacy.

Employees turn to unapproved AI tools largely because of the limitations of the sanctioned technology stack. The desire to boost productivity and improve quality is a virtue, not a vice. There is an unmet need, and CISOs and their teams must approach employees with an attitude of collaboration rather than condemnation.

Good-faith conversations with leaders and end users about their AI tool requests are critical to building trust and goodwill. At the same time, CISOs must convey legitimate security concerns and the potential consequences of risky AI behavior. Security leaders should think of themselves as accountants explaining how best to work within the tax code, not as IRS auditors seen as enforcers who care about nothing beyond compliance. Whether it’s putting the right security settings in place for a desired AI tool or sourcing viable alternatives, the most successful CISOs strive to help employees maximize their productivity.

Fully understanding and addressing AI tool risks requires a comprehensive and robust SaaS security posture management (SSPM) solution. SSPM gives security and risk practitioners the insight and visibility they need to navigate the ever-changing state of SaaS risk.

To strengthen authentication, security teams can use SSPM to enforce MFA across every SaaS application in the estate and monitor for configuration drift. SSPM enables security teams and SaaS application owners to implement best practices without needing to learn the ins and outs of every SaaS application setting and AI tool.
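At its core, configuration drift monitoring reduces to comparing live settings against an approved baseline. A minimal sketch, with hypothetical setting names rather than any specific platform’s:

```python
def config_drift(baseline: dict, current: dict) -> dict:
    """Return every setting whose live value deviates from the approved
    baseline, e.g. MFA silently toggled off or sharing opened up."""
    return {
        key: {"expected": expected, "actual": current.get(key)}
        for key, expected in baseline.items()
        if current.get(key) != expected
    }
```

Run per application on each polling cycle, any non-empty result becomes an alert for the application’s owner.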

The ability to inventory both unapproved and approved AI tools connected to the SaaS ecosystem reveals the most pressing risks to investigate. Continuous monitoring automatically notifies the security and risk team when a new AI connection is established. This visibility plays a critical role in reducing the attack surface and taking action when unsanctioned, insecure, and/or over-permissioned AI tools appear in the SaaS ecosystem.
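An inventory-and-flag step can be sketched as follows. The scope URIs are real Google OAuth scopes, while the app names, allowlist, and alert shape are illustrative assumptions rather than any SSPM product’s API:

```python
# Real Google OAuth scope URIs that grant broad read/write access;
# everything else in this sketch is illustrative.
WRITE_SCOPES = {
    "https://mail.google.com/",               # full Gmail access
    "https://www.googleapis.com/auth/drive",  # full Drive access
}

def flag_connections(grants: dict[str, set[str]],
                     allowlist: set[str]) -> list[tuple[str, str]]:
    """Return (app, reason) alerts for unsanctioned or over-permissioned
    OAuth grants discovered in the SaaS estate."""
    alerts = []
    for app, scopes in grants.items():
        if app not in allowlist:
            alerts.append((app, "unsanctioned app"))
        if scopes & WRITE_SCOPES:
            alerts.append((app, "write-level scopes granted"))
    return alerts
```

Feeding each newly observed grant through a check like this is what turns raw OAuth inventory into actionable alerts.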

Reliance on AI tools will almost certainly continue to spread rapidly, and outright bans are rarely enforceable. Instead, the most effective approach is a pragmatic combination: security leaders who share their colleagues’ goal of increasing productivity and reducing repetitive tasks, equipped with the right SSPM solution, can drastically reduce the risk of SaaS data exposure or breach.
