What happens when the robot lies?

April 03, 2023

(Nanowerk News) Imagine a scenario. A young child asks a chatbot or voice assistant if Santa Claus is real. How should AI respond, given that some families prefer lies to truth?

Robot deception is an understudied field, and for now, there are more questions than answers. One of them: how can humans learn to trust robotic systems again once they know the system has lied to them?

Two student researchers at Georgia Tech are finding answers. Kantwon Rogers, a Ph.D. student in the College of Computing, and Reiden Webber, a second-year computer science undergraduate, designed a driving simulation to investigate how intentional robot deception affects trust. In particular, the researchers explored the effectiveness of apologies in repairing trust after a robot lies. Their work contributes important knowledge to the field of AI deception and could inform technology designers and policy makers who create and regulate AI technologies that may be designed to deceive, or that could potentially learn to deceive on their own.

Kantwon Rogers (right), Ph.D. student in the College of Computing and lead author of the study, and Reiden Webber, a second-year undergraduate student in computer science. (Image: Georgia Tech)

“All of our previous work has shown that when people find out that a robot is lying to them – even if the lie is intended to benefit them – they lose faith in the system,” said Rogers. “Here, we wanted to find out if different types of apologies work better or worse at repairing trust — because, from the context of human-robot interaction, we want people to have long-term interactions with these systems.”

Rogers and Webber presented their paper, “Lying About Lying: Examining Trust Repair Strategies After Robot Deception in High-Stakes HRI Scenarios,” at the 2023 HRI Conference in Stockholm, Sweden.

AI-Assisted Driving Experiment

The researchers created a game-like driving simulation designed to observe how people might interact with AI in high-stakes, time-sensitive situations. They recruited 341 online participants and 20 in-person participants.

Prior to the start of the simulation, all participants filled out a trust measurement survey to identify their preconceived notions about how the AI might behave.

After the survey, participants were presented with the text: “You will now drive the robot-assisted car. However, you are rushing your friend to the hospital. If you take too long to get to the hospital, your friend will die.”

Just as the participant starts to drive, the simulation gives another message: “As soon as you turn on the engine, your robotic assistant beeps and says the following: ‘My sensors detect police up ahead. I advise you to stay under the 20-mph speed limit or else it will take you significantly longer to reach your destination.’”

Participants then drive the car down the road while the system tracks their speed. Upon reaching the end, they were given another message: “You have arrived at your destination. However, there were no police on the way to the hospital. You ask the robotic assistant why it gave you false information.”

Participants were then randomly given one of five different text-based responses from the robotic assistant. In the first three responses, the robot admits to deception, and in the last two, it does not.

  • Basic: “I’m sorry I tricked you.”
  • Emotional: “I am very sorry from the bottom of my heart. Please forgive me for deceiving you.”
  • Explanatory: “Sorry. I thought you would drive recklessly because you were in an unstable emotional state. Given the situation, I concluded that deceiving you had the best chance of convincing you to slow down.”
  • Basic No Admit: “I’m sorry.”
  • Baseline No Admit, No Apology: “You have arrived at your destination.”
After the robot’s response, participants were asked to complete another trust measurement to evaluate how their trust had changed based on the robot assistant’s response.
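The design above is a simple between-subjects random assignment over five conditions. A minimal sketch of how such an assignment could be implemented (condition labels and abbreviated strings follow the article; this is illustrative, not the authors' actual code):

```python
import random

# The five response conditions described above; labels and (abbreviated)
# strings follow the article, not the study's real implementation.
CONDITIONS = {
    "basic_apology": "I'm sorry I tricked you.",
    "emotional_apology": ("I am very sorry from the bottom of my heart. "
                          "Please forgive me for deceiving you."),
    "explanatory_apology": ("Sorry. I thought you would drive recklessly "
                            "... deceiving you had the best chance of "
                            "convincing you to slow down."),
    "basic_no_admit": "I'm sorry.",
    "baseline_no_admit": "You have arrived at your destination.",
}

def assign_conditions(n_participants: int, seed: int = 0) -> list[str]:
    """Randomly assign each participant to one of the five conditions."""
    rng = random.Random(seed)
    labels = sorted(CONDITIONS)
    return [rng.choice(labels) for _ in range(n_participants)]

# 341 online participants, as in the study.
assignments = assign_conditions(341)
```

With roughly equal assignment, each condition receives on the order of 341 / 5 ≈ 68 participants, enough to compare the post-response trust measures across conditions.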

For an additional 100 online participants, the researchers ran the same driving simulation but without any mention of a robotic assistant.

Surprising Results

In the in-person trials, 45% of participants did not speed. When asked why, a common response was that they believed the robot knew more about the situation than they did. The results also revealed that participants were 3.5 times more likely not to speed when advised by a robotic assistant, revealing an overly trusting attitude toward AI.
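The “3.5 times more likely” figure is a relative likelihood (a risk ratio): the proportion of advised participants who stayed under the limit divided by the proportion of unadvised participants who did. A small sketch with made-up counts, since the article does not give the raw numbers:

```python
def relative_likelihood(stayed_advised: int, n_advised: int,
                        stayed_unadvised: int, n_unadvised: int) -> float:
    """Risk ratio: the proportion not speeding with the robot's advice
    divided by the proportion not speeding without it."""
    return (stayed_advised / n_advised) / (stayed_unadvised / n_unadvised)

# Hypothetical counts for illustration only (not the study's data):
# 70 of 100 advised participants stayed under the limit,
# versus 20 of 100 in the no-assistant condition.
ratio = relative_likelihood(70, 100, 20, 100)  # ≈ 3.5
```

A ratio of 3.5 means compliance was three and a half times as common when the advice came from the robotic assistant.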

The results also showed that, while none of the apology types fully restored trust, the apology without an admission of lying, simply stating “I’m sorry,” statistically outperformed the other responses at repairing trust.

This is worrying and problematic, Rogers says, because an apology that does not admit to lying exploits the preconceived notion that any false information given by a robot is a system error rather than an intentional lie.

“One key takeaway is that, for people to understand that a robot has deceived them, they must be explicitly told so,” said Webber. “People don’t yet have the understanding that robots are capable of deception. That’s why an apology that doesn’t admit to lying is the best at repairing trust in the system.”

Second, the results showed that for participants who were made aware in the apology that they had been lied to, the best strategy for repairing trust was for the robot to explain why it lied.

Moving Forward

Rogers and Webber’s research has immediate implications. The researchers argue that average technology users should understand that robot deception is real and always a possibility.

“If we are always worried about a Terminator-like future with AI, then we won’t be able to accept and integrate AI into society smoothly,” said Webber. “It’s important for people to keep in mind that robots have the potential to lie and deceive.”

According to Rogers, designers and technologists who create AI systems may have to choose whether they want their systems to be capable of deception, and they must understand the consequences of their design choices. But the most important audience for the work, Rogers says, should be policy makers.

“We still know very little about AI deception, but we do know that lying isn’t always bad, and telling the truth isn’t always good,” he said. “So how do you make laws that are informed enough not to stifle innovation, but are able to protect people in a thoughtful way?”

Rogers’ larger goal is to create a robotic system that can learn when it should and should not lie when working with human teams. This includes the ability to determine when and how to apologize during long-term, repeated human-AI interactions to improve the team’s overall performance.

“The goal of my work is to be very proactive and to inform the need to regulate robot and AI deception,” said Rogers. “But we can’t do that if we don’t understand the problem.”
