When is it OK for AI to lie?

ASU computer scientist sees lying as a natural progression for artificial intelligence

January 30, 2019

Artificial intelligence has captured the public imagination by besting chess grandmasters and one-upping game show contestants. On top of that is the Silicon Valley hype, not to mention the doomsday science-fiction scenarios of machines taking over humanity, like Skynet in "The Terminator" or the increasingly sentient robots of "Westworld."

But for Arizona State University computer scientist Subbarao Kambhampati, another big milestone of AI’s development will be its ability to lie — even the little white ones. He sees it as a natural progression as algorithms and neural networks grow ever more sophisticated.

“We always tell each other white lies,” said Kambhampati, a professor in the School of Computing, Informatics, and Decision Systems Engineering within the Ira A. Fulton Schools of Engineering. “It’s part of the societal lubricant. For example, if someone fixes you some food to eat, you are not supposed to say, ‘That sucks!’ Otherwise, the society falls apart. So the question, then, is: under what conditions are people willing to let AI systems tell them white lies?”

At this week’s Conference on Artificial Intelligence, Ethics and Society (AIES), Kambhampati, along with his former graduate student Tathagata Chakraborti, presented different scenarios exploring when — and if — it would be permissible for AI to lie, and how to keep humans in the loop if they do.

“First of all, this comes from our research, which says that once I have a mental model of you, I can manipulate it and tell you lies," Kambhampati said. "In fact, I joke in many of these talks that I was ecstatic when my kid could tell his first lies, since that is a sure sign of intelligence.

“Telling lies is a way to get you to believe in an alternate reality which can lead to personal gain or greater good. Since we are making these AI systems, we can control when they can and cannot fabricate, or essentially tell lies.”

The researchers designed a thought experiment to explore both human-human and human-AI interactions in an urban search-and-rescue scenario: searching all locations on a floor of an earthquake-damaged building. They enlisted 147 people through crowdsourcing on Amazon’s Mechanical Turk to survey how human reactions change between dealing with humans or AI.

Results of their thought experiments and surveys indicate that public perception is positive toward AI lying for the greater good. Fabrication, falsification and obfuscation of information can be used by an AI agent to achieve teaming performance that would otherwise not be possible.


“In this paper, we attempted to take the first steps towards understanding the state of the public consciousness on this topic,” Kambhampati said. “We got a sense of when people are willing to be told white lies.”

But the work poses several unresolved ethical and moral questions with regard to the design of autonomy in AI, which the research group will continue to explore.

At the end of the presentation, they discussed scenarios where white lies are considered acceptable, as in certain circumstances in the doctor-patient relationship. AI is already rapidly coming to the forefront in pathology and imaging diagnostics, such as interpretations of X-rays and CT scans for cancer.

But what will be the role of AI in the future of medicine? After all, the Hippocratic Decorum states: “Perform your medical duties calmly and adroitly, concealing most things from the patient while you are attending to him.”

Medical lies are usually told to reveal only as much truth as is good for the patient, especially when delivering bad news. This is done for the good of the patient, to help maintain a positive attitude even after a late-stage cancer diagnosis.

“The rationale here being that such information can demoralize the patient and impede their recovery,” Kambhampati wrote. “As we saw in the study, participants were open to deception or manipulation for greater good, especially for a robotic teammate.”

But there are also accepted deceptions, like the placebo effect. And what if it is a medical AI that is concealing information from a patient, or concealing the results of cancer imaging tests from a doctor?

The future of AI in medicine will also involve trust. And perhaps, there will be times when it makes sense for an AI to lie to a person.

“The doctor-patient relationship, and the intriguing roles of deception in it, does provide an invaluable starting point for conversation on the topic of greater good in human-AI interactions,” Kambhampati said.

In addition to the ASU talk, the AIES conference provided a platform for research and discussions from the perspectives of several disciplines to address the challenges of AI ethics within a societal context, featuring participation from experts in computing, ethics, philosophy, economics, psychology, law and politics.

“AI is evolving rapidly, and as a society we’re still trying to understand its impact — both in terms of its benefits and its unintended consequences,” said conference co-chair Vincent Conitzer, of Duke University. “The AIES conference was designed to include participation from different disciplines and corners of society, in order to offer a unique and informative look at where we stand with the development and the use of artificial intelligence.”

The AIES conference was chaired by a multidisciplinary program committee to ensure a diversity of topics. Conference sessions addressed algorithmic fairness, measurement and justice, autonomy and lethality, human-machine interaction and AI for social good, among other focuses.

AIES presenters and about 300 attendees included representatives from major technology and nontechnology companies, academic researchers, ethicists, philosophers and members of think tanks and the legal profession.

The proceedings of the conference will be published in the AAAI and ACM Digital Libraries.

The conference was sponsored by the Berkeley Existential Risk Initiative, DeepMind Ethics and Society, Google, National Science Foundation, IBM Research, Facebook, Amazon, PwC, Future of Life Institute and the Partnership on AI.

Joe Caspermeyer

Manager (natural sciences), Media Relations & Strategic Communications


ASU researchers identify role for inflammatory marker in cognitive decline tied to childhood abuse

January 30, 2019

An estimated 30 to 50 percent of adults experienced abuse or neglect when they were children. Such abuse can lead to physical and mental health problems and even cognitive deficits in adulthood.

In a paper published in the January 2019 issue of Annals of Behavioral Medicine, researchers in the Arizona State University Department of Psychology explored the physiological and psychological factors related to childhood abuse that might contribute to cognitive decline in adults.

“Abuse early in life is linked to a number of negative health outcomes in older age, including metabolic diseases, depressive symptoms and cognitive decline, prompting us to look at whether intervening physical and psychological health problems related to childhood abuse predict cognitive dysfunction of older adults,” said Mary Davis, professor of psychology and lead author on the study.

Understanding the physiological and psychological pathways by which childhood abuse affects adults can lead to interventions geared toward changing negative health outcomes.

Measuring the subtle effects of abuse, decades later

Childhood abuse can have a long-term influence on the stress responses in the brain and body, which could indirectly lead to cognitive decline decades later. The researchers tested whether early abuse predicted later markers of risk for metabolic diseases like diabetes, depressive symptoms and/or levels of inflammation in the body, and whether these risk factors were related to cognitive dysfunction in aging adults.

The participants were ethnically and socioeconomically diverse adults ages 40 to 65 currently living in the Phoenix metropolitan area. The research team had the participants complete a series of surveys about their childhood and their health experiences to date. The surveys included questions about whether the participants had experienced childhood abuse and whether they were currently experiencing or had experienced depressive symptoms or memory problems associated with cognitive decline.

The research team also visited each participant in their home to collect measurements of their physical health. The team measured the waist circumference, blood pressure and the height and weight of each participant, because these measurements can be used to predict whether someone has, or is at risk for, metabolic diseases like diabetes. The team also collected a blood sample from each participant, to measure the amount of a molecule called interleukin-6. This molecule is part of the immune system’s inflammatory response for illness and injury, but also can remain chronically elevated even after an illness or injury heals.

Blood levels of inflammation and depressive symptoms predicted cognitive decline

The survey the researchers used to measure cognitive decline is a simple and reliable tool called the Telephone Interview for Cognitive Status. It showed that about 5 percent of the participants had mild cognitive impairment, even though the participant age range was relatively young, from 40 to 65.

The researchers used a mathematical model to test whether childhood abuse was related to the risk for metabolic disease, depressive symptoms or blood levels of inflammation, and whether each of these risk factors was related to cognitive decline. Childhood abuse was related to higher levels of all three aspects of physical and psychological health. But it was interleukin-6 that explained the link between the experience of childhood abuse and poorer cognitive performance. Davis and the research team found the same pattern for depressive symptoms.
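The mediation logic described above can be illustrated with a short simulation. This is a purely hypothetical sketch, not the study's data or its actual statistical model: synthetic variables stand in for abuse exposure, interleukin-6 levels and cognitive scores, and ordinary least squares estimates the paths in a simple Baron–Kenny-style mediation analysis.

```python
# Illustrative mediation sketch: does a mediator (here standing in for
# IL-6) carry part of the effect of a predictor (childhood abuse) on an
# outcome (cognitive performance)? All variables and effect sizes are
# simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 500

abuse = rng.normal(size=n)                                  # predictor
il6 = 0.4 * abuse + rng.normal(size=n)                      # mediator
cognition = -0.5 * il6 - 0.1 * abuse + rng.normal(size=n)   # outcome

def ols_slope(x, y):
    """Slope of y regressed on x (with intercept) via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

a = ols_slope(abuse, il6)            # path a: abuse -> mediator
# path b: mediator -> outcome, controlling for abuse
X = np.column_stack([np.ones(n), il6, abuse])
beta, *_ = np.linalg.lstsq(X, cognition, rcond=None)
b = beta[1]

total = ols_slope(abuse, cognition)  # total effect of abuse on outcome
indirect = a * b                     # portion carried through the mediator
print(f"total effect: {total:.2f}, indirect via mediator: {indirect:.2f}")
```

In this toy setup, a nonzero indirect effect (`a * b`) alongside a nonzero total effect is the signature of mediation the study reports for interleukin-6 and depressive symptoms.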

“Interleukin-6 and depressive symptoms both served as mediators of the link between childhood abuse and poorer cognitive performance, suggesting that both physiological and psychological changes fuel the cognitive dysfunction we see in older adults who were abused as children,” said Davis, who runs ASU’s Emotion Regulation and Health lab. “These two mediators represent at least part of the reason people who are abused early in life suffer from cognitive decline.”

The amount of interleukin-6 measured from a participant’s blood was also related to whether they had depressive symptoms, suggesting that depressive symptoms and levels of inflammation in the body might feed into each other.

The risk for metabolic diseases was also associated with childhood abuse but did not predict cognitive decline. About 37 percent of the participants met the criteria for having metabolic disease, a percentage that is comparable to the disease rate in the general population. Because the age range of the participants was 40 to 65 years, Davis said the effects of metabolic diseases on cognitive performance might emerge as individuals age into older adulthood.

“It is easy to discount the enduring impact of childhood abuse, but lots of evidence suggests that it is not ‘over and done,’” Davis said. “Childhood abuse can have longstanding and subtle implications for how people age and can show up decades later physiologically, psychologically and in terms of cognitive processing.”

Kathryn Lemery-Chalfant and Linda Luecken of the ASU Department of Psychology also contributed to the study along with Ellen Yeung from the University of Missouri and Michael Irwin from the University of California, Los Angeles. The late Alex Zautra, ASU Foundation Professor of clinical health and psychology, also contributed to the study.

The study was funded by the National Institute on Aging, the Eunice Kennedy Shriver National Institute of Child Health and Human Development, and the National Institute on Alcohol Abuse and Alcoholism. The content is solely the responsibility of the authors and does not necessarily reflect the official views of the National Institutes of Health.

Science writer, Psychology Department