New software can verify how much information AI really knows


April 04, 2023

(Nanowerk News) With increasing interest in generative artificial intelligence (AI) systems worldwide, researchers at the University of Surrey have created software that can verify how much information an AI has gleaned from an organization’s digital database (“Programming Semantics and Verification Techniques for AI-centric Programs”).

Surrey’s verification software can be used as part of a company’s online security protocol, helping organizations understand whether AI has learned too much or even accessed sensitive data.
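The article does not describe what such a check looks like in practice, but the gist can be sketched as a simple audit: compare the facts a system has demonstrably learned against a declared set of sensitive fields. The following Python sketch is purely illustrative; the names (SENSITIVE_FIELDS, audit_learned_facts) are invented here and are not part of Surrey’s software.

# Hypothetical privacy-audit sketch, not Surrey's actual tool: flag any
# overlap between what an AI system has learned and fields an organization
# has declared sensitive.

SENSITIVE_FIELDS = {"ssn", "diagnosis", "salary"}

def audit_learned_facts(learned_facts):
    """Return the sensitive fields the system should not have learned."""
    return set(learned_facts) & SENSITIVE_FIELDS

violations = audit_learned_facts({"department", "salary", "start_date"})
if violations:
    print(f"Privacy check failed: AI learned {sorted(violations)}")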

The software can also determine whether an AI has identified, and is able to exploit, flaws in software code. For example, in an online gaming context, it could flag an AI that has learned to always win at online poker by exploiting a coding error.

Dr Solofomampionona Fortunat Rajaona is a Research Fellow in formal verification of privacy at the University of Surrey and lead author of the paper. He said: “In many applications, AI systems interact with each other or with humans, such as a self-driving car on a highway or a hospital robot. Working out what intelligent AI data systems know is an ongoing problem that took us years to find a working solution for.

“Our verification software can infer how much an AI can learn from its interactions, whether it has enough knowledge to enable successful cooperation, and whether it knows so much that it would compromise privacy. Through the ability to verify what AI has learned, we can give organizations the confidence to safely unleash the power of AI in secure settings.”
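Inferring what an agent knows from its interactions is the territory of epistemic logic, where an agent knows a fact if that fact holds in every possible world consistent with the agent’s observations. As a rough illustration of that idea only (the toy model, variables, and function names below are invented for this sketch and are not taken from the Surrey paper):

from itertools import product

# Toy epistemic check: an agent "knows" a fact if the fact is true in
# every possible world consistent with what the agent has observed.

# A world assigns a value to each variable, e.g. fields of a patient record.
WORLDS = [
    {"diagnosis": d, "age_band": a}
    for d, a in product(["flu", "measles"], ["adult", "minor"])
]

def consistent(world, observations):
    """A world is consistent if it agrees with every observed variable."""
    return all(world[var] == val for var, val in observations.items())

def knows(observations, fact):
    """The agent knows `fact` iff it holds in all consistent worlds."""
    candidates = [w for w in WORLDS if consistent(w, observations)]
    return all(fact(w) for w in candidates)

# Having observed only the age band, the agent cannot know the diagnosis...
obs = {"age_band": "adult"}
print(knows(obs, lambda w: w["diagnosis"] == "measles"))  # False

# ...but once the diagnosis leaks into its observations, it does know it,
# which a privacy check could flag as the agent knowing too much.
obs = {"age_band": "adult", "diagnosis": "measles"}
print(knows(obs, lambda w: w["diagnosis"] == "measles"))  # True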

Surrey’s paper won the best paper award at the 25th International Symposium on Formal Methods.

Professor Adrian Hilton, Director of the Institute for People-Centred AI at the University of Surrey, said: “Over the last few months there has been a major surge in public and industry interest in generative AI models, driven by advances in large language models such as ChatGPT. Building tools that can verify the performance of generative AI is critical to supporting its safe and responsible deployment. This research is an important step towards safeguarding the privacy and integrity of the datasets used in training.”




