ChatGPT Data Protection Blind Spots and How the Security Team Can Solve Them

April 20, 2023 · Hacker News · Artificial Intelligence / Data Security

In the short time since their inception, ChatGPT and other generative AI platforms have rightfully earned a reputation as ultimate productivity drivers. However, the same technology that enables rapid production of high-quality text on demand can also expose sensitive corporate data. A recent incident, in which a Samsung software engineer pasted proprietary code into ChatGPT, shows how easily this tool can become a conduit for data leaks. This vulnerability presents a formidable challenge to security stakeholders, because none of the existing data protection tools can ensure that no sensitive data is exposed to ChatGPT. In this article, we'll explore this security challenge in detail and show how browser security solutions can address it, allowing organizations to fully realize ChatGPT's productivity potential without compromising data security.

ChatGPT data protection blind spots: How do you manage text insertion in the browser?

Whenever employees paste or type text into ChatGPT, that text is no longer governed by the company's data protection policies and tools. It doesn't matter whether the text was copied from a local data file, an online document, or another source. In fact, that's the problem: Data Leakage Prevention (DLP) solutions – from on-premises agents to CASBs – are all file-oriented. They apply policies to files based on their content, preventing actions such as modifying, downloading, or sharing them. This capability is of little use for ChatGPT data protection, because no files are involved. Instead, usage consists of pasting copied text or typing snippets directly into a web page – activity that falls outside the governance and control of any existing DLP product.

How browser security solutions prevent the use of insecure data in ChatGPT

LayerX launched its browser security platform for continuous monitoring, risk analysis, and real-time protection of browser sessions. Delivered as a browser extension, LayerX has granular visibility into every event that occurs within a session. This allows it to detect risky behavior and enforce policies that prevent predefined actions from taking place.

In the context of preventing sensitive data from being uploaded to ChatGPT, LayerX leverages this visibility to monitor attempted text-insertion events, such as 'paste' and 'type', within the ChatGPT tab. If the text content of a 'paste' event violates the company's data protection policy, LayerX blocks the action altogether.

To enable this capability, security teams using LayerX define the phrases or regular expressions they want to protect against exposure, then create a LayerX policy that is triggered whenever there is a match.
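LayerX's implementation is proprietary, but the general mechanism described above – matching pasted text against configured phrases or regular expressions inside a browser extension, before the text reaches the page – can be sketched generically. Everything below (the pattern list, the `violatesPolicy` function) is an illustrative assumption, not LayerX's actual code.

```typescript
// Illustrative sketch of regex-based paste blocking in a content script.
// The patterns and function names here are hypothetical examples.

// Hypothetical policy: phrases and regular expressions that must not
// be pasted into the monitored page.
const sensitivePatterns: RegExp[] = [
  /\bCONFIDENTIAL\b/i,                        // a protected phrase
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/,   // pasted key material
  /\b\d{3}-\d{2}-\d{4}\b/,                    // an SSN-shaped string
];

function violatesPolicy(text: string): boolean {
  // A paste violates policy if any configured pattern matches.
  return sensitivePatterns.some((re) => re.test(text));
}

// In a browser extension content script, the check would run in the
// capture phase, before the paste reaches the page, e.g.:
//
// document.addEventListener(
//   "paste",
//   (event: ClipboardEvent) => {
//     const text = event.clipboardData?.getData("text") ?? "";
//     if (violatesPolicy(text)) {
//       event.preventDefault(); // block the insertion entirely
//     }
//   },
//   true
// );
```

Running the check in the capture phase matters: it lets the extension cancel the event before the page's own handlers (or the browser's default insertion) ever see the clipboard contents.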

See what it looks like in action:

[Screenshot: Setting policies on the LayerX Dashboard]

[Screenshot: A user's attempt to paste sensitive information into ChatGPT is blocked by LayerX]

Additionally, organizations that wish to prevent their employees from using ChatGPT altogether can use LayerX to block access to the ChatGPT website or to other AI-based online text generators, including ChatGPT-like browser extensions.
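How LayerX implements site blocking is not public; one simple way an extension could gate access, shown purely as an illustration, is to match the destination hostname against a configurable blocklist of AI text-generator sites. The host list and function name below are assumptions for the sake of the example.

```typescript
// Illustrative only: a hostname blocklist check an extension could run
// before allowing navigation to a page.

// Hypothetical blocklist of AI text-generator sites.
const blockedHosts = new Set(["chat.openai.com", "chatgpt.com"]);

function isBlocked(url: string): boolean {
  try {
    const host = new URL(url).hostname;
    // Block the listed host itself and any of its subdomains.
    return [...blockedHosts].some(
      (h) => host === h || host.endsWith("." + h)
    );
  } catch {
    return false; // not a parseable URL; leave it to other checks
  }
}
```

Matching on the parsed hostname, rather than substring-searching the raw URL, avoids trivial bypasses such as `https://evil.example/?q=chat.openai.com`.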

Learn more about LayerX ChatGPT data protection here.

Use the LayerX browser security platform to get comprehensive SaaS protection

What makes LayerX the only solution that can effectively address the ChatGPT data protection gap is its placement in the browser itself, with real-time visibility and policy enforcement on live browser sessions. This approach also makes it an ideal solution for protecting against any cyberthreat that targets data or user activity in the browser, such as attacks on SaaS applications.

Users interact with SaaS applications via the browser, which makes it straightforward for LayerX to protect both the data within these applications and the applications themselves. It does so by applying the following types of policies to user activity during a web session:

Data protection policies: On top of standard file-oriented protections (preventing copying, sharing, downloading, etc.), LayerX provides the same granular protection as for ChatGPT. Once an organization defines which inputs are prohibited from being pasted, the same policy can be extended to prevent that data from being exposed to any web or SaaS destination.

Account compromise mitigation: LayerX monitors user activity across an organization's SaaS applications. The platform detects anomalous behavior or data interactions that indicate a user's account has been compromised. A LayerX policy can then terminate the session or disable the user's data interaction capabilities within the application.

Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.


