RSA Conference 2023 – How AI will infiltrate the world
As all things (misnamed) AI take the world’s biggest security event by storm, we rounded up some of the most touted use cases and applications on show.
OK, so there’s this ChatGPT thing layered on top of AI – well, not really; apparently even the practitioners responsible for some of the most impressive machine learning (ML)-based products don’t always stick to the basic terminology of their field…
At RSAC, fine academic distinctions tend to give way to marketing and economic considerations, and entire supporting ecosystems are being built to secure AI/ML, implement it, and manage it – no small task.
To be able to answer questions like “what is love?”, GPT-like systems collect disparate data points from a large number of sources and combine them into a rough, usable answer. Here are some of the applications the AI/ML folks at RSAC are hoping to help with:
- Is the job candidate legit, and telling the truth? Sifting through the social media clutter and cross-checking documents against candidates’ glowing self-descriptions isn’t realistic for HR departments short on time and struggling to get through the pile of resumes landing in their inboxes. Handing that pile to an ML system can sort the wheat from the chaff and provide a manager with a meaningfully vetted short list. Of course, one still has to wonder about the danger of bias in ML models that have been given biased input data to learn from, but as an imperfect tool it can still beat purely manual resume triage.
- Has your company’s development environment been compromised by a bad actor through one of your third parties? There’s no practical way to keep tabs on your entire development toolchain in real time for hacked tools that could expose you to all kinds of code issues, but maybe a reputable ML doo-dad can do that for you?
- Can deepfakes be detected, and how do you know if you’re seeing one? One of the startups at RSAC opened its pitch with a deepfaked video of its CEO saying that the company is very bad. The real CEO then asked the audience whether they could tell the difference; the answer was “hardly.” So if the “CEO” asks someone for a wire transfer, even if you see the video and hear the audio, can it be trusted? ML hopes to help find out. But because CEOs tend to have a public presence, it’s easier to train your fakes on real audio and video clips, making them that much more convincing.
- What happened to privacy in the AI world? Italy recently cracked down on the use of ChatGPT over privacy issues. One startup at RSAC offers a way to keep the data flowing to and from an ML model private by using some interesting coding techniques. It’s just one attempt at a much larger set of challenges inherent in the large language models that form the foundation of ML systems meaningful enough to be useful.
- Are you writing insecure code in the context of an ever-changing threat landscape? Even if your toolchain isn’t compromised, there’s still a host of new coding techniques that can prove insecure, especially when integrated with whatever cloud property mashups you may have. Refining code with ML-driven insights over time may help you avoid shipping code with embedded insecurities.
In an environment where GPT consoles have been unceremoniously unleashed on the masses with little scrutiny, and people have seen the power of the early models, it’s easy to imagine the fear and uncertainty over how sinister they can be. There will certainly be backlashes seeking to rein in the technology before it can do too much damage, but what exactly does that mean?
A powerful tool requires strong guards against misbehavior, but that doesn’t make it useless. There’s a moral imperative baked into the technology somewhere, and it has yet to be worked out in this context. Meanwhile, I’ll head over to one of the consoles and ask “What is love?”