Lincoln Center for Applied Ethics announces new director

June 21, 2023

Gaymon Bennett, associate professor in the School of Historical, Philosophical and Religious Studies, believes that the study of ethics is best exhibited not just through the subject matter of research, but through embodied, collective practice. This perspective is at the core of his work as the new director of the Arizona State University Lincoln Center for Applied Ethics.

Bennett has served as the associate director of the Lincoln Center since 2020, supporting the center’s move to the humanities under The College of Liberal Arts and Sciences and its shift in focus to the topics of humane technology and ethical innovation.

“We are fortunate indeed to have an ethicist with so deep and varied a background at the helm of the Lincoln Center for Applied Ethics,” said Jeffrey Cohen, dean of humanities. “I have worked with Dr. Bennett closely over the years and have come to admire his brilliance, his field-crossing knowledge and his good heart. Dr. Bennett is a one-of-a-kind scholar-practitioner who will further and deepen the mission of a center — the work of which has never been so urgent.”

Bennett earned PhDs in cultural anthropology and philosophical theology from UC Berkeley, with a research focus on the impacts of modernity on contemporary experiences of science and religion. His connections within ASU are far-reaching, with appointments in the Institute for the Future of Innovation in Society, the Center for the Study of Religion and Conflict and the Center for Jewish Studies.

Bennett’s work interfaces with both the humanities and tech innovation, and has interrogated the culture and politics of knowledge production by exploring shared conceptual frameworks and collaborative empirical inquiry. During his tenure at ASU, Bennett’s work has been supported by grants from the John Templeton Foundation, the Templeton Religion Trust, the American Council of Learned Societies, the U.S.-Israel Binational Science Foundation and others.

His emphasis on collaboration and experimental practice as keys to ethics is demonstrated throughout the work of the Lincoln Center. The center’s renewed vision for ethical innovation in 2020 coincided with the inception of its design studio model: a process at the heart of the center that seeks to rethink how research gets done, how findings get shared and who gets to participate. This research model consists of co-designed movements of thought, guided by personal experiences, shared stories and an ethics of mutual care, taken up by a diverse group of participants who range from scholars and industry leaders to graduate students and community members.

Bennett’s appointment comes as the current director, Elizabeth Langland, retires after a 16-year career at ASU. Langland previously served as vice provost at the West campus, and has held appointments at the New College of Interdisciplinary Arts and Sciences, The College of Liberal Arts and Sciences and the Institute for Humanities Research. She was named director of the Lincoln Center in 2020.

“Gaymon came into the Lincoln Center as associate director when I became director, and he has always been a full partner in all our initiatives,” Langland said. “He is uniquely qualified now to lead the center as its new director, and I am delighted he has agreed to do so.”

Langland and Bennett’s collaborative working relationship began at the IHR’s Future of Humane Technology Symposium in Washington, D.C., in 2019. Under their leadership, the Lincoln Center has refined the design studio toolkit through multiple case studies and has also released projects including the Human Tech Oracle Deck.

With Bennett now at the helm, the center will continue to expand on its implementation of the design studio model while also pursuing new opportunities in teaching, research and the facilitation of ethical inquiry. One of these key initiatives involves research on responsible artificial intelligence innovation, sponsored by the National Humanities Center, which has led to the development of a class on the human impacts of AI that will be taught by Bennett and the center’s research program manager, Erica O’Neil.

“The team at the Lincoln Center has struck a chord in this new moment at the culmination of collective efforts at ASU to interrogate issues around collaborative research and playful experimentation,” Bennett said. “In the humanities, there is a longstanding tradition of the lone genius of scholarly excellence, which has shaped our collective sense of what good work is supposed to look like. There is nothing wrong with specialization and deep learning. But when it comes at the expense of our ability to work inventively with people of other walks of life — including scholarly and scientific walks of life — then our strength becomes an impediment. Learning to reimagine and remake that process, to turn toward new modes of creativity and iterative exploration, requires the very pleasurable work of undoing those habits.

“At the Lincoln Center, we’re asking: What does it mean to make ethics not just part of the scene, but to rethink the pursuit of innovation in ways that improve our collective living?”

Karina Fitzgerald

Communications program coordinator, Lincoln Center for Applied Ethics


Disruption 2.0: Event focuses on dilemmas around AI in new creativity landscape

June 21, 2023

Writers, filmmakers, lawyers and thought leaders met in June at the ASU California Center in Los Angeles to try to answer some of the critical dilemmas posed by the rise of generative artificial intelligence and its impact on the entertainment industry.

Disruption 2.0, a collaboration between ASU’s narrative and emerging media master's program, Creative Commons and EQTY Lab, was held at the historic Herald Examiner building.

Jonathan Dotan leads the discussion about generative AI during the Disruption 2.0 event at the ASU California Center. Photo by Charles Anderson

“As more countries are beginning to draft legislation around AI and the Writers Guild of America strikes, partially around the threat that AI can pose, now is the time for substantive dialogue and conversation around creator rights directly impacted by AI and its secondary effects,” EQTY Lab founder Jonathan Dotan said.

Dotan said that lessons on the current rise of AI could be drawn from history — particularly the impact of loom technology on the textile industry during the first Industrial Revolution.

“The arrival of textiles into Great Britain brought a huge amount of social change. … The technology of the loom was the forebear for a huge amount of social change that spanned for centuries to come," Dotan said.

In the early 19th century, workers feared they would lose their jobs to the technology; however, at that time organized labor was illegal. That fear was also about the distribution of fair profit — a thread that continues today through the writers’ strikes.

“Again, we are responding to change and figuring out how to use technology,” Dotan said.

Dotan described AI as a system of applied statistics that brings in vast amounts of data to create answers based on probabilities: “As it amasses more data … it is able to complete statistically better answers.”

Due to this, he said AI was not sentient. Instead, he described it as an “overconfident teenager” that “knows everything and knows nothing." 

Its uses are vast — from language models like ChatGPT, which was trained on large parts of the internet, to Midjourney, which is able to create original art in seconds.

But these uses raise the need to protect the rights of artists for the works that AI might draw from. While one option would be shutting down the open-source nature of the models, that doesn't seem feasible.

“If we shut it down then that might be the end of the open internet as we know it,” Dotan said. “The risk is that we overcorrect. … Think about how the cathedrals of stone gave way to the printing press, which created a whole universe of media and creativity.”

Dotan said that creators needed “a better story” to tell about their response to AI: “We have sufficient funds and technology to address some of these problems. We need new tools and processes. It’s time to create.”

(From left) EQTY Lab founder Jonathan Dotan leads a panel discussion at ASU California Center about the impact of generative AI on the film industry with filmmaker Aaron Zelman and Writers Guild of America members John Rogers and John Lopez. Photo by Charles Anderson

Dotan then led a panel discussion with writers about the industry's challenges and opportunities from AI, which was particularly timely as the WGA writers’ strike entered its seventh week.

As part of the strike, WGA is requesting that the Alliance of Motion Picture and Television Producers (AMPTP) ban the use of AI for writing and rewriting any source material, as well as its use as a source material of its own, and that no AI material be trained on WGA writers' work.

John Rogers from the WGA said concerns about plagiarism were real — even if it was hard to track down the original wrongdoing.

“We do it with money, we call it money laundering; and it's still a crime and you chase it down," Rogers said.

Filmmaker Aaron Zelman cut his teeth working on "Law & Order," but said people told him the formulaic show would be easy for AI to write. He likened that notion to the reception of his first episode.

“After that, I ran into someone at the coffee shop, and they said that the dialogue wasn’t quite there. Maybe that’s what an AI 'Law & Order' would look like. ... Everyone on the show would say it's too facile, too rote. But this is an exponential curve here of learning that is happening.”

Zelman added that one of the biggest fears about AI was not that it was artificially intelligent but that it was artificially “creative,” which “spooked us."

John Lopez of the WGA said there was a creative process that might be lost if writers or studios looked increasingly to AI to do the job.

“I got to learn as a writer as I wrote crappy box office reports. We have automated one of the most valuable parts of being human, which is learning," Lopez said.

When Zelman asked ChatGPT how to become a better writer, it told him that it could make him more efficient.

“What if I don’t want the process to be more efficient? I’m looking for a creative process," Zelman said. "In that first draft, you discover something different that you never thought of and that becomes the thing.”

Rogers said mistakes are style.

“They might suck right now, but when you shine them up, they will be the things that make you great. You have to give people a chance to fail, and if you don’t it won’t be original or interesting.”

The Disruption 2.0 event also featured panels on music and art, as well as an open conversation with the general counsel for OpenAI.

Strategic communications manager, Media Relations and Strategic Communications