AI is your friend. Just keep telling yourself that. If you want to have a good experience with the technology, that is.
Arizona State University Associate Professor Ed Finn co-authored a new study which found that priming statements about an AI system can shape people's actual experience of interacting with it, for good or bad.
Finn’s study, “Influencing human-AI interaction by priming beliefs about AI can increase perceived trustworthiness, empathy and effectiveness,” was recently published by Nature Machine Intelligence.
Finn and fellow researchers Pat Pataranutaporn, Ruby Liu and Pattie Maes examined how changing a person’s mental model of an AI system affects their interaction with the system, and the importance of how they are introduced to the new technology.
ASU News spoke to Finn, founding director of ASU’s Center for Science and the Imagination, and Pataranutaporn, a researcher at the Massachusetts Institute of Technology and an ASU alum, about this new user-friendly approach to AI.
Question: Why is trust between AI and its users so important?
Finn: One thing I find really compelling about this research is the way that mistrust breeds deeper mistrust in a conversation. When we start off on the wrong foot with one of these systems, it’s very hard to regain that trust. But if we approach all our interactions with AI suspiciously, we’re going to create antagonistic relationships — fighting and competing against these systems instead of figuring out how to work with them.
Every time humans try to set up some direct competition with machines, like playing chess or flying jet planes, we end up losing, because we’re thinking in zero-sum, us-against-them terms. We need to find ways to collaborate with AI in order to use these incredible tools to create a better future — and maybe to be better humans.
Q: What does this tell us about how people imagine AI?
Finn: Humans are always looking for stories in the world. We look at alien systems like AI and we try to find meaningful patterns, and because AI shows some signs of intelligence and agency, we anthropomorphize them. We all go into our interactions with tools like Siri or ChatGPT with prior beliefs about what those systems are and how they work. This study suggests that the stories we bring into these interactions directly influence how the interactions go, because there is a kind of feedback loop between humans responding to the AI and AI responding to the humans. And when you think about the stories we tell about AI right now, like the killer robot story or the super-intelligence story, they’re not very descriptive of what these systems actually do.
We need better metaphors, better stories, that can help us contend with the alien intelligence of AI as it really exists in the world — not human, not really superhuman either; just different.
Q: Tell us how you came to your study’s premise — that a simple statement can effectively lead users to perceive AI as more empathetic, trustworthy and better performing?
Pataranutaporn: The premise of our study ... was inspired by the concept of the placebo effect in medicine. Just as a simple sugar pill can produce significant improvements in patient outcomes through the power of belief, we wondered if a similar phenomenon could occur in human-AI interactions. Additionally, our research was influenced by science fiction narratives that explore the idea of AI companions and the potential for these technologies to develop human-like traits and emotions.
We wanted to investigate if priming users with certain statements about the AI's inner motives could shape their perceptions and interactions with the AI, ultimately influencing their trust and overall experience.
Q: You state in your paper there is an overestimation of AI’s capabilities, which impacts both AI and human behavior. What exactly did you mean by this?
Pataranutaporn: We found that a simple priming statement is effective in influencing the user's mental model of the inner motives of a conversational AI agent, with a positive and neutral primer strongly directing the user's beliefs. Users who imagined the AI having a caring motive had a strong tendency to see the AI as trustworthy, empathetic and better in performance. We also found that the effect of the priming and initial mental model is stronger in a more sophisticated AI model with more human-like behaviors. Finally, we demonstrated the feedback loop between the user's mental model and interaction with the AI, and the AI response, with each reinforcing the other.
Q: You also believe that most people perceive AI to be without feeling or empathy. How do you prove to them otherwise?
Pataranutaporn: Yes, many people perceive AI to be without feeling or empathy because they understand that AI systems are ultimately machines trained to perform a variety of tasks. They recognize that these systems lack the capacity for emotions or subjective experiences. However, our study suggests that people can still form mental models of an AI as empathetic or caring. This highlights the power of imagination and the influence it has on the user's perception of and interaction with AI.
Q: So, your belief is that if early adopters have a positive experience with AI, they are more likely to continue using it, and vice versa?
Pataranutaporn: I like to think of AI as the "mirror" of the user, which highlights the reciprocal relationship between user experience and their mental model of AI. Positive experiences can shape positive mental models, which, in turn, influence users' adoption and continued use of AI.