The ethical costs of advances in AI


A maroon trolley car floating on a flat ASU gold background

AI-generated images by Alex Davis/ASU


Editor's note: This feature article is part of our “AI is everywhere ... now what?” special project exploring the potential (and potential pitfalls) of artificial intelligence in our lives. Explore more topics and takes on the project page.

Recent advances in artificial intelligence have come onto the scene with a flurry of excitement. Like a newborn baby, the technology arrived accompanied by grand ideas for its future. But that excitement has been tempered by ethical issues that pervade many aspects of its use.

Issues around privacy, bias, employment, surveillance, security and more. 

“Basically every time we innovate, we solve a problem and we create new problems,” said Andrew Maynard, professor of advanced technology transitions in Arizona State University’s School for the Future of Innovation in Society.

Of course, the most extreme ethical concern is the potential threat that some say AI can present to human existence. 

“This comes from the idea that AI can become super intelligent and super aware to the point that it decides for itself what sort of world it wants, what sort of future it wants and becomes so powerful that it is beyond human control,” Maynard said.

“So now we've got the risk that if this AI decides that it wants a world where humans don't flourish, it has the ability to create that. This is one of those theories that I don't fully agree with. But that's the big fear that people like Elon Musk talk about.”

Musk sounded the alarm in November, prior to the landmark AI Safety Summit in the United Kingdom, saying, “Artificial intelligence could pose an existential risk if it becomes antihuman.”

“I don't think that's a serious risk,” Maynard said. “But it's important that some people think about it.”

ASU Regents and Foundation Professor Gary Marchant is one of the people doing that thinking.

Marchant was co-principal investigator of a $10,000 grant from the Future of Life Institute to The Hastings Center that ran from 2017 to 2019 and focused on how to prevent AI from exterminating humans.

Two-thirds of the 30 leading AI experts at the grant meeting agreed that this kind of threat was inevitable. Only one-third thought it would never happen.

“A lot of people there were convinced that it is going to exterminate us at some point if we don’t do something,” said Marchant, faculty director for the Center for Law, Science and Innovation at the Sandra Day O'Connor College of Law at ASU. “That was the first time in my life I ever got scared of technology.”

The U.S. State Department came out with a report in March saying that rapidly evolving AI can create national security risks of “catastrophic events up to and including events that would lead to human extinction.”

But just how did we arrive at this existential point in AI’s evolution?

Very quickly, it seems.

A maroon toy rocket ship

And we're off …

It is common knowledge that new technology needs to be tweaked; it evolves over time through an ongoing process of refinement. (The first mobile phone, for example, weighed 4 pounds, compared with today’s 6-ounce counterparts.)

But in some respects, by the time AI came out of the gate, it may already have been too late for some serious and necessary modifications.

For example, large language models such as ChatGPT and other forms of generative AI had access to vast archives of content — everything from books by bestselling authors to the classics, along with an extensive collection of artistic works. Access, it must be noted, that was rarely approved by the content creators.

Drawing on this data from around the world, these models went to work. Now people are sorting through the ethical debris as creators grapple with the reality that a craft that took a lifetime to perfect might now be replicated in a matter of moments.

Marchant has been studying what he calls the “pacing problem” of technology for many years. The idea is that technology is advancing too fast for government regulation to keep pace.

AI takes this problem to a new, unforeseen level.

“It is basically the pacing problem on steroids,” said Marchant, lead organizer for ASU's Governance of Emerging Technologies and Science conference.

“AI makes it much worse because it is moving so much faster than any other technologies. Every week there's a new development, and its impact is profound — not just on a particular industry but across the entire economy.”

The implications are so much more extreme and far-reaching that they are beyond our grasp, he said.

“It’s going to change everything. It's impacting careers, relationships and more,” he said. “People are bonding with machines. A million people proposed marriage to Alexa (the virtual assistant technology) a year ago. Machines are becoming our friends, our lovers, our counselors — our everything.”

A maroon cup of tea with a crack in it

Intentional and unintentional consequences

A big debate in the AI ethics community is whether to focus on short-term ethical risks like bias and privacy, or on the issue of extinction. ASU experts say they are both priorities.

Short-term ethical risks created by AI have been divided into two categories — those that are intentional and those that are unintentional.

Unintentional ethical issues include things like societal risks and AI accidents.

For example, facial recognition technology is less accurate in identifying people of color, and language translation systems can associate some languages with certain genders, stereotypes or education levels.

Data security and privacy also fall under this category. AI uses large quantities of personal and sensitive data, which can be exploited.

In 2017, the locations of military bases in covert areas were revealed by data collected from fitness devices as soldiers ran the same routes each day.

“People are trying to do the right thing, but things go wrong anyway,” Maynard said.

Intentional ethical dangers include psychological manipulation, weaponization, mass cyberattacks, disinformation campaigns and more.

“There are so many ways in which people can use AI to cause harm,” Maynard said.

AI-generated voice scams are being used to imitate family members. The technology can fool parents into believing their child is in serious trouble and in need of money.

Misinformation can also be spread. During elections, for example, AI can generate targeted emails, texts and videos that impersonate candidates and deceive voters.

Maynard is particularly concerned about deepfakes.

“There are platforms where photos of young people are used to artificially create nudes — that are then circulated,” he explained. “These are the sorts of issues that really worry me.”

Other issues are more ambiguous — such as the deployment of AI on the battlefield, raising the question of what constitutes ethical behavior in the context of war.

While AI technology has been a game changer for the military, especially in the use of combat or fighter drones, it has also raised concerns — including machines capable of killing without human authorization and the detachment of soldiers from the consequences of war.

“This is the problem with ethics,” Maynard said. “You can have two sides that will argue from a basis of deep values that they are ethically right, but they will be completely opposed. AI is creating these sorts of conversations.”

A maroon scale of justice with ASU gold accents

The problem with solutions

One of the problems with finding solutions to AI-generated issues is that the ethics community can’t calculate the full extent of the problem. 

“We're now getting a technology which is so powerful and going so fast that we're really on the cusp of creating problems that we don't understand — so we can't work out how to address them,” Maynard said.

Katina Michael calls that “algorithmic fallout.”

“The negative potential of AI is invisible,” says Michael, a professor in ASU’s School of Computing and Augmented Intelligence. “And we can't measure it. We can't see it, and we often don't know what it is, except we know that harm can occur. ... To an extent, we really can't understand it until the aftereffects are seen and people start reporting them.”

But what we do understand is this:

The U.S. State Department’s recent report says that time is running out for the government to stave off the potential damage that could be caused by AI. 

Laws are slow to develop, and even those that are in place — for example, to address deepfakes — are hard to police. There is also the risk that laws can overreach and inhibit innovation.

During the past 100 years, there was an assumption that with good regulations and policies in place, technology could be kept in check. AI is breaking all the rules, experts say, meaning that how we thought about developing responsible technologies in the past is unlikely to work with AI.

What will work is to ensure that ethics are embedded in the development process, according to Michael.

“It’s easy to point to operationalizing ethics at three levels — in the collection of data, in the design and in the implementation of an AI-enabled digital transformation,” she said.

"If you abide by laws, you abide by human rights, you abide by international standards, you abide by organizational policies and human ethics protocols as a designer and developer, you're not going to have an awful outcome."

Collaboration in the design process is another essential element in developing ethical AI.

“If you use ethically aligned design principles, what you're saying is, 'I've gone to the users and other stakeholders, and I've asked them to be co-designers,'” Michael said.

The issue of consent is also a critical piece in the implementation of responsible technology.

Michael says there are 40 years of digital data, including 40 billion images, that can be scraped from the internet. And while some of it is being used for good — for example, to solve cold cases — publicly available data is fundamentally being incorporated without consent.

The data that is collected must be clean and gathered transparently, Michael said, and we need to right any past wrongs.

“Data is all about consent,” she said.

Marchant says that because of the pacing problem, the government is simply too slow to take on all of the issues already unfolding. They must be addressed through governance by multiple organizations.

That governance is done by the media, he said, and academia has a huge role to play as well, but it cannot be left to the government alone.

Industry standards are already being put in place, and that will continue. 

Michael says that as industry conventions or soft laws are put in place, they will become the precedent for hard laws.

“Attempts to predict the possible pitfalls of a bigger, better and badder AI are challenging,” she said. “Because of that inability to see beyond the horizon of risks — as well as the benefits — from the ever-evolving AI technology, a framework of global response practices must be put in place.”

What the future holds in addressing ethical problems is difficult to predict.

"People have been discussing AI ethics for a long time," said Maynard, who attended one of the first major conferences on the topic in 2017. The initial assumption was that a solid ethical foundation would ensure AI prioritizes human interests.

"But we've found that ethics alone are insufficient," he said. "While ethics help define right and wrong, they don't necessarily lead to the creation of safe and responsible AI. That's why the focus is shifting towards responsible AI, a concept gaining traction in the U.S. government, major companies and globally.

"However, current approaches seem inadequate. AI development is rapidly outpacing our ability to responsibly manage it. The challenge remains: How do we keep up? … All we can do is be agile and responsive in trying to ensure the benefits far outweigh the risks.”

