Artificial Intelligence News

Will ChatGPT start making killer malware?


AI-pocalypse soon? As amazing as ChatGPT’s output is, should we also expect chatbots to dish out sophisticated malware?

ChatGPT didn’t write this article – I did. Nor did I ask it to answer the question in the title – I will. But I suspect that’s roughly what ChatGPT might say. Thankfully, a few grammatical errors remain to prove I’m not a robot. Then again, that’s just the sort of thing ChatGPT might do to look real.

Today’s hipster robotic technology is a fancy autoresponder good enough to produce homework answers, research papers, legal responses, medical diagnoses, and any number of other things that have passed the “smell test” when treated as if they were the work of a human. But will it add meaningfully to the hundreds of thousands of malware samples we see and process every day, or will its output be obviously bogus?

In the machine-to-machine duels that technorati have anticipated for years, ChatGPT seems a little “too good” not to be seen as a serious contender that might upset the opposing machines. With both attackers and defenders using the latest machine learning (ML) models, this clash was bound to happen.

Except that building a good antimalware engine takes more than robot-on-robot duels. Some human intervention is always necessary: we determined this years ago, to the chagrin of niche ML vendors who entered the marketing fray – all while muddying the waters by branding their niche ML products as “AI”.

While ML models have been used for rough triage front ends to more complex analyses, they fall short of being the big red “kill malware” button. Malware is not that simple.
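The kind of rough triage front end described above can be sketched in miniature. The example below is purely illustrative – the entropy feature, threshold, and routing labels are my own assumptions for the sketch, not any vendor’s actual pipeline – but it shows the basic idea: a cheap automated score decides which samples earn deeper (and ultimately human-assisted) analysis.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; packed or encrypted payloads score near 8."""
    if not data:
        return 0.0
    total = len(data)
    counts = Counter(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def triage(sample: bytes, entropy_threshold: float = 7.2) -> str:
    """Route a sample: high-entropy blobs get flagged for deeper analysis,
    everything else takes the cheap fast path. A real engine would combine
    many such features, not rely on one."""
    if shannon_entropy(sample) >= entropy_threshold:
        return "deep-analysis"
    return "fast-path"
```

A uniform byte distribution (entropy 8.0) would be routed to deeper analysis, while a repetitive, low-entropy buffer takes the fast path – which is exactly why such a front end is only a filter, not a “kill malware” button.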

But for sure, I’ve tapped a few ESET ML gurus myself and asked:

Q. How good is ChatGPT-generated malware – or is it even possible?

A. We’re not particularly close to “full AI generated malware”, though ChatGPT is pretty good at suggesting code, generating samples and code snippets, debugging and optimizing code, and even automating documentation.

Q. What about more advanced features?

A. We don’t know how good the obfuscation is. Some of the examples we have seen hook into a scripting language such as Python. We have also seen ChatGPT “reverse” the meaning of disassembled code fed to it from IDA Pro, which is interesting. Overall, it might be a useful tool to help a programmer, and perhaps that’s a first step toward building more fully featured malware – but not yet.

Q. How good is it right now?

A. ChatGPT is impressive, considering it’s a Large Language Model, and its capabilities are staggering even to the creators of such models. However, it is currently very shallow, makes mistakes, produces hallucinations (i.e., made-up answers), and cannot be relied upon for anything serious. But it seems to be progressing fast, judging by the swarm of technologists sticking their toes in the water.

Q. What can it do now – what is the “low-hanging fruit” for the platform?

A. For now, we see a few possible malicious uses and adaptations:

  • Out-phishing the phishers

If you think phishing has looked convincing in the past, just wait. Expect everything from mining more data sources and mashing them up seamlessly to spitting out custom-crafted emails that would be very hard to detect based on their content alone – with better click-through rates to match. And you won’t be able to dismiss a message over a careless language mistake; its command of your native language may be better than yours. Since many of the nastiest attacks begin with someone clicking on a link, the associated impact could be enormous.

  • Ransom negotiation automation

Smooth-talking ransomware operators may be rare, but adding a dash of ChatGPT to the communications can cut the workload of an attacker trying to appear legitimate during negotiations. It also means fewer mistakes that would let defenders home in on the operators’ true identity and location.

With ever more natural-sounding language generation, malicious scammers will sound as if they are from your area and have your best interests at heart. That is one of the first steps in a confidence scam: appearing more trustworthy by sounding like one of your own.

If any of this sounds like it’s far off in the future, don’t bet on it. It won’t all happen at once, but the villains will get a lot better. We will see whether the defense is up to the challenge.
