Artificial Intelligence News

Researchers explore the vulnerability of AI systems to online misinformation

University of Texas at Arlington researchers are working to enhance the security of natural language generation (NLG) systems, such as those used by ChatGPT, to prevent abuse and misuse that can enable the spread of misinformation online.

Shirin Nilizadeh, assistant professor in the Department of Computer Science and Engineering, has been awarded a five-year, $567,609 Faculty Early Career Development Program (CAREER) grant from the National Science Foundation (NSF) for her research. Understanding the vulnerability of artificial intelligence (AI) to online misinformation is an “important and timely issue to address,” she said.

“These systems have a complex architecture and are designed to learn from any information on the internet. An adversary may try to poison them with a collection of adversarial or fake information,” Nilizadeh said. “The system will learn that hostile information the same way it learns correct information. Adversaries may also exploit system vulnerabilities to generate malicious content. We first need to understand the vulnerabilities of these systems in order to develop detection and prevention techniques that increase their resilience to these attacks.”

The CAREER Award is NSF’s most prestigious award for junior faculty. Recipients are expected to be not only outstanding researchers but also outstanding teachers, integrating education and research at their home institutions.

Nilizadeh’s research will include a comprehensive survey of the types of attacks to which NLG systems are vulnerable and the creation of AI-based optimization methods to test systems against different attack models. She will also conduct in-depth analysis and characterization of the vulnerabilities that enable these attacks and develop defense methods to protect NLG systems.

This work will focus on two common natural language generation tasks: summarization and question answering. In summarization, the AI is given a set of articles and asked to summarize their contents. In question answering, the system is presented with a document, finds the answer to a question in that document, and generates a text answer.
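To make the two tasks concrete, here is a toy sketch in Python. These are deliberately simplistic extractive stand-ins, not the neural NLG systems studied in the research; the function names and logic are illustrative assumptions only.

```python
# Toy illustrations of the two NLG tasks described above.
# Real NLG systems (e.g., large language models) generate novel text;
# these stand-ins merely extract sentences, to show the input/output shape.

def summarize(articles, sentences_per_article=1):
    """Naive extractive summary: keep the first sentence(s) of each article."""
    summary = []
    for text in articles:
        sentences = [s.strip() for s in text.split(".") if s.strip()]
        summary.extend(sentences[:sentences_per_article])
    return ". ".join(summary) + "."

def answer_question(document, question):
    """Naive QA: return the document sentence sharing the most words
    with the question (a crude proxy for locating the answer span)."""
    q_words = set(question.lower().split())
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))

articles = [
    "NLG systems generate text from data. They are widely deployed.",
    "Adversaries can poison training data. This degrades output quality.",
]
print(summarize(articles))

doc = "The grant is from the NSF. It supports five years of research."
print(answer_question(doc, "Who is the grant from"))
```

A poisoning attack of the kind Nilizadeh describes would target the data these functions consume: corrupt the articles, and the summary faithfully repeats the corruption.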

Hong Jiang, chairman of the Department of Computer Science and Engineering, underscored the importance of Nilizadeh’s research.

“With large language models and text-generation systems revolutionizing how we interact with machines and enabling new applications in healthcare, robotics and more, serious concerns are being raised about how these powerful systems can be abused, manipulated, or turned into privacy and security threats,” Jiang said. “It is exactly this kind of threat that Dr. Nilizadeh’s CAREER Award addresses, by exploring new methods to increase system robustness so that abuse can be detected and mitigated, and end users can trust and explain the results these systems produce.”

