Artificial Intelligence News

New AI Tools Enable Cybercriminals to Launch Sophisticated Cyberattacks


July 15, 2023 | Artificial Intelligence / Cybercrime

With generative artificial intelligence (AI) becoming so popular these days, it’s perhaps no surprise that the technology has been repurposed by malicious actors for their own benefit, opening avenues to accelerate cybercrime.

According to findings from SlashNext, a new generative AI cybercrime tool called WormGPT is being advertised on underground forums as a way for adversaries to launch sophisticated phishing and business email compromise (BEC) attacks.

“This tool presents itself as a blackhat alternative to the GPT models, designed specifically for malicious activities,” said security researcher Daniel Kelley. “Cybercriminals can use such technology to automate the creation of highly convincing fake emails, personalized to the recipient, thus increasing the chances of success for the attack.”

The software’s author describes it as “the biggest enemy of the notorious ChatGPT” which “allows you to do all kinds of illegal things.”

In the hands of bad actors, tools like WormGPT can be a powerful weapon, especially as OpenAI’s ChatGPT and Google Bard are increasingly taking steps to combat the abuse of large language models (LLMs) to compose convincing phishing emails and produce malicious code.

“Bard’s anti-abuse restrictions in the realm of cybersecurity are significantly lower compared to those of ChatGPT,” Check Point said in a report this week. “Consequently, it is much easier to generate malicious content using Bard’s capabilities.”


Earlier this February, the Israeli cybersecurity firm revealed how cybercriminals are working around ChatGPT’s restrictions by taking advantage of its API, not to mention trading stolen premium accounts and selling brute-force software to hack into ChatGPT accounts by using huge lists of email addresses and passwords.

The fact that WormGPT operates without any ethical boundaries underscores the threat posed by generative AI, allowing even novice cybercriminals to launch attacks swiftly and at scale without having the technical skills to do so.


Making matters worse, threat actors are promoting “jailbreaks” for ChatGPT, engineering specialized prompts and inputs that are designed to manipulate the tool into generating output that could involve disclosing sensitive information, producing inappropriate content, or executing harmful code.

“Generative AI can create emails with impeccable grammar, making them seem legitimate and reducing the likelihood of being flagged as suspicious,” Kelley said.

“The use of generative AI democratizes the execution of sophisticated BEC attacks. Even attackers with limited skills can use this technology, making it an accessible tool for a wider spectrum of cybercriminals.”

The revelations come as researchers from Mithril Security “surgically” modified an open-source AI model known as GPT-J-6B to make it spread disinformation and uploaded it to a public repository, Hugging Face, from which it could be integrated into other applications, leading to what is called LLM supply chain poisoning.

The success of the technique, dubbed PoisonGPT, banks on the prerequisite that the lobotomized model be uploaded under a name that impersonates a known company, in this case, a typosquatted version of EleutherAI, the company behind GPT-J.
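The typosquatting angle is one place where downstream consumers of public model repositories can add a simple guard. The following is a minimal, hypothetical sketch (the allowlist and function names are illustrative, not part of any real tool): it flags publisher names that sit within a small edit distance of a trusted organization but do not match it exactly.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Example allowlist of trusted model publishers (illustrative only).
TRUSTED_ORGS = {"EleutherAI", "openai", "google"}

def looks_typosquatted(org: str, max_distance: int = 2) -> bool:
    """Flag names that are close to, but not exactly, a trusted org."""
    lowered = org.lower()
    for trusted in TRUSTED_ORGS:
        d = edit_distance(lowered, trusted.lower())
        if 0 < d <= max_distance:
            return True
    return False
```

For instance, a one-character variant of "EleutherAI" would be flagged, while the exact name and clearly unrelated names would pass. A real pipeline would combine a check like this with stricter provenance signals such as cryptographic signing of model weights.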

