
Trust and Deception: The Role of Apology in Human-Robot Interaction
Robot deception is an under-studied area with more questions than answers, especially when it comes to rebuilding trust in robotic systems after they have been caught lying. Two research students at Georgia Tech, Kantwon Rogers and Reiden Webber, are trying to find an answer to this problem by investigating how intentional robot deception affects trust and the effectiveness of an apology in repairing trust.
Rogers, Ph.D. student at the Computer College, explains:
“All of our previous work has shown that when people find out that a robot is lying to them – even if the lie is intended to benefit them – they lose faith in the system.”
The researchers aimed to determine whether different types of apologies are more effective at restoring trust in the context of human-robot interaction.
The AI Assisted Driving Experiment and Its Implications
The duo designed driving simulation experiments to study human-AI interactions in high-risk, time-sensitive situations. They recruited 341 online participants and 20 in-person participants. The simulation involves an AI-assisted driving scenario where the AI provides false information about the presence of the police on the way to the hospital. After the simulation, the AI gives one of five different text-based responses, including different types of apologies and non-apologies.
That results revealed that participants were 3.5 times more likely not to speed when suggested by a robotic assistant, indicating an overly trusting attitude towards AI. Neither type of apology completely restores trust, but a simple apology without a lying admission (“I’m sorry”) outperforms the other responses. This finding is problematic, as it exploits the preconceived notion that any false information provided by the robots is a system error rather than an intentional lie.
Reiden Weber points out:
“One of the key takeaways is, for people to understand that a robot has tricked them, they have to be told explicitly.”
When participants were made aware of deception in apologies, the best strategy for repairing trust was for the robot to explain why it lied.
Moving Forward: Implications for Users, Designers, and Policymakers
This research has implications for the average technology user, AI system designer, and policy maker. It is very important that people understand that robot scams are real and always a possibility. Designers and technologists must consider the consequences of creating an AI system that is capable of deception. Policymakers must take the lead in crafting laws that balance innovation and protecting the public.
Kantwon Rogers’ goal was to create a robotic system that could learn when to lie and when not to lie when working with a human team, and when and how to apologize during repetitive and prolonged human-AI interactions to improve team performance.
He stressed the importance of understanding and regulating robot and AI fraud, saying:
“My job goal is to be very proactive and inform the need to regulate robotic and AI fraud. But we can’t do that if we don’t understand the problem.”
This research makes an important knowledge contribution to the field of AI deception and offers valuable insights for technology designers and policy makers who create and regulate AI technologies that are capable of deceiving or have the potential to learn to deceive themselves.