Q&A: How to build AI that enhances human and planetary well-being
Graphic by Jason Drees/ASU
While an AI revolution has been quietly in the making for decades, the lightning-fast emergence of large language models like ChatGPT has taken the world by storm.
Beyond their everyday use for writing essays, planning vacations or summarizing reams of text, the new technologies promise to revolutionize nearly every facet of the human experience.
On the scientific front, such systems are already guiding autonomous machines that mimic insect intelligence, improving the diagnoses of brain aneurysms, modeling the interiors of black holes and potentially deciphering the vocalizations of whales.
But experts say there are also ethical concerns and dilemmas to address, ranging from built-in biases to unprecedented energy use.
Steven Hartman is the founding executive director of BRIDGES, a UNESCO-anchored global coalition for humanities-led sustainability science within UNESCO’s Management of Social Transformations program, based at Arizona State University’s Julie Ann Wrigley Global Futures Laboratory. UNESCO’s Recommendation on the Ethics of AI, adopted in 2021, established the first global standard on AI ethics.
Recently, Hartman participated in the third UNESCO Global Forum on the Ethics of AI, titled "Enabling Ethical AI for Present and Future Generations in a Time of Heightened Global Insecurity."
ASU News sat down with him to discuss some of the concerns raised by the exponentially advancing technology and how societies might protect themselves from potentially harmful consequences.
Note: Answers have been edited for length and/or clarity.
Question: Corporations wield enormous power to shape policy in their favor. How realistic is it to bring other voices — civil society, consumers, educators — into the conversation in ways that can truly influence the decisions shaping our environmental future?
Answer: It’s challenging, but essential. We can’t rely solely on nation-states or multilateral agreements; the private sector, civil society and consumers all have roles to play. The marketplace can send powerful signals through what people choose to support, and there’s broad public backing for climate and environmental action. But lasting change also depends on education, giving societies the capacity to understand the stakes and demand constructive action so these conversations rise above the culture wars.
Q: AI is moving rapidly into the classroom. How do you see this shaping education, and what are the risks and opportunities we should be paying attention to?
A: One thing that's clear is that education is already changing because of AI. Obviously, there is a risk if these technologies are adopted uncritically, given the growing role that AI and generative AI are playing in society and our schools. However, I think these challenges also provide valuable teaching moments that can foster the development of critical faculties. But it's difficult, because the technology is constantly accelerating and evolving.
Q: What are the environmental concerns associated with the rapid expansion of AI-related infrastructure?
A: One of the key issues is the impact of data centers on global energy consumption. For example, Peter Schlosser (who gave the keynote for the event we held on June 24) cited a projection that we’re rapidly approaching a point where 5% of all global energy use will be consumed by data centers alone. And these facilities are only going to proliferate. So where does that leave us?
I think it’s crucial to assess the energy demands of AI and the data centers that power it — especially how much of that energy comes from renewable sources. Right now, most of it doesn’t. An AI system that is largely powered by nonrenewable energy is simply unsustainable. That’s a global concern, not just an issue for the U.S. to solve. The effects are particularly acute in water-stressed regions, where the impact can be much more dramatic — even staggering. Over 40% of U.S. data centers are in regions facing high or extreme water scarcity, including areas like Phoenix and other parts of the Southwest, and by 2027, AI demand alone is projected to drive global water withdrawal to 1.1–1.74 trillion gallons annually — more than the total annual water use of the U.K. (https://www.apmresearchlab.org/10x/data-centers-resource).
Q: As AI systems begin to take on more decision-making power — potentially even in high-stakes domains like defense — what concerns you most about this shift?
A: One thing that concerns me is the degree to which human beings are withdrawing from necessary oversight. We're also not doing enough to prepare wider groups of people to critically assess the reliability of AI-generated outputs — like summaries from ChatGPT. That lack of critical engagement poses a real risk.
Large language models can sound highly convincing, even when they're producing hallucinated or false information. When we rely on them uncritically, especially in areas like journalism or health care, the consequences can be serious.
Q: Despite the risks, what areas give you the most hope for how AI could benefit society or the environment in the long run?
A: There’s real potential for AI to serve the public good, if it’s directed in a constructive, ethical way. That includes promoting human dignity, protecting knowledge diversity and ensuring equitable access, especially around traditional and Indigenous knowledge. But it has to be done on those communities’ terms, or it risks continuing a pattern of extraction and exploitation.
AI could also help us better monitor environmental conditions and respond more quickly to signs of danger. And creatively, we may see entirely new hybrid forms emerge — a fusion of human ingenuity and AI output that we can’t even fully imagine yet. But none of this will happen on its own — it requires deliberate, sustained engagement, considerable reflexivity and creative effort.
Read the key priorities and takeaways from the UNESCO Global Forum on the Ethics of AI.
Steven Hartman is also involved in an initiative called the Integrated History and Future of People on Earth Research Network, or IHOPE. In his own words:
“IHOPE is focused on past cases of resilience and collapse, and what lessons can be drawn from them. It brings together historical disciplines like archaeology, anthropology, historical anthropology and literary studies.
In fact, under the influence of leading researchers such as anthropologist Carole Crumley, IHOPE served as a prototype for the kind of interdisciplinary community that gained momentum and helped form the basis for the BRIDGES model, which was later advanced within UNESCO and the U.N. system.
BRIDGES is now the first humanities-driven sustainability science program operating at a global scale, nested within UNESCO’s intergovernmental Management of Social Transformations program.”