(Nanowerk News) As AI becomes more and more realistic, our trust in the people with whom we communicate can be compromised. Researchers at the University of Gothenburg have examined how sophisticated AI systems affect our trust in the individuals with whom we interact.
In one scenario, a would-be scammer, believing he was calling an elderly man, instead connected to a computer system that communicated through pre-recorded loops. The fraudster spent considerable time on the attempted scam, patiently listening to the "man's" rather confusing and repetitive stories. Oskar Lindwall, a professor of communication at the University of Gothenburg, observed that it often takes a long time for people to realize they are interacting with a technical system.
He, in collaboration with informatics Professor Jonas Ivarsson, wrote an article exploring how individuals interpret and relate to situations in which one party may be an AI agent (Computer Supported Cooperative Work, “Suspicious Minds: Trust Issues and Conversational Agents”). The article highlights the negative consequences of harboring suspicions about others, such as the damage it can cause to relationships.
Ivarsson gives examples of romantic relationships where trust issues arose, leading to jealousy and an increased tendency to seek evidence of deception. The authors argue that not being able to completely trust the intentions and identity of the interlocutor can result in undue suspicion even when there is no reason for it.
Their study found that during interactions between two humans, some behaviors were interpreted as signs that one of them was actually a robot.
The researchers argue that prevailing design perspectives are driving the development of AI with increasingly human-like features. While this may be appealing in some contexts, it can also be problematic, particularly when it is unclear who you are communicating with. Ivarsson questions whether AI should have human-like voices at all, as they create a sense of intimacy and lead people to form impressions based on the voice alone.
In the case of would-be fraudsters who called out “older men,” the deception only came to light after a long time, which Lindwall and Ivarsson attribute to the believability of human voices and the assumption that confusing behavior is due to age. Once AI has a voice, we infer attributes such as gender, age, and socioeconomic background, making it more difficult to identify that we are interacting with a computer.
The researchers propose creating AI with well-functioning, fluent voices that are nonetheless clearly synthetic, thereby increasing transparency.
Communication with others involves not only deception but also relationship-building and shared meaning-making. Uncertainty about whether one is talking to a human or a computer affects this aspect of communication. While it may not matter in some situations, such as cognitive-behavioral therapy, other forms of therapy that require more human connection may be negatively affected.
Jonas Ivarsson and Oskar Lindwall analyzed data made available on YouTube. They studied three types of conversations along with audience reactions and comments. In the first type, a robot calls a person to book a hair appointment, unbeknownst to the person on the other end. In the second type, a person calls another person for the same purpose. In the third type, telemarketers are transferred to a computer system with pre-recorded speech.