ASU workgroup addresses ethical questions about the use of AI in higher ed


A robotic arm reaches from the left of the image to touch a tattooed human arm reaching from the right, a nod to Michelangelo's "The Creation of Adam."

Photo credit: Cottonbro Studio

As artificial intelligence becomes increasingly ubiquitous in everyday life, the AI and Ethics Workgroup at Arizona State University's Lincoln Center for Applied Ethics is working to establish ethical guidelines and frameworks for the deployment of AI technologies.

Composed of experts from a variety of fields, the workgroup is dedicated to navigating the complex ethical challenges arising from rapid advancements in AI. The group published its first white paper earlier this month, focusing on the use of AI tools in higher education.

The workgroup’s co-chairs are Sarah Florini, associate director of the Lincoln Center and an associate professor of film and media studies, and Nicholas Proferes, an associate professor in ASU’s School of Social and Behavioral Sciences.

Florini and Proferes shared some insights into their workgroup’s research process and their publication, “AI and Higher Education: Questions and Projections.”

Note: Answers have been edited lightly for length and/or clarity.

Question: Why is it critical that academics begin asking questions about AI in higher ed? Why now?

Florini: Historically, when technology has radically reshaped our lives, it has had as much to do with cultural processes as with technological innovation. New technologies don’t have inherent, predetermined outcomes. Every new technology is accompanied by a period where, as a society, we negotiate its meaning, its value and how it might fit into our world. We’re in this moment right now with AI. And it is crucial that those of us in higher education be active participants in shaping the future of AI in a way that is beneficial to our students, our community and society (at) large.

Proferes: As we sit on the cusp of more widespread AI adoption in higher ed, we have a brief window where we can take a step back and be (reflective) — not just about AI, but also about a lot of practices in academe. This is a moment for us to collectively ask: “What do we value in the educational enterprise, and how can we use AI, with moral wisdom, to help us achieve those values?” As a scholar of technology and society, I think it’s incredibly important for us to think through and understand the tradeoffs that any new technology presents, and to understand that there are always tradeoffs.

Q: What do the stories about AI in popular culture and the news media tell us about the way these emerging technologies are being adopted and used?

Florini: I think they tell us more about our cultural (imagination) than about real-world adoption and use. Popular culture and the news media are key terrains for negotiating how new technologies will be understood and integrated into our lives. They can work to naturalize particular ways of thinking about these technologies. And that doesn’t always match the reality of day-to-day use. For example, there is growing evidence suggesting that AI might actually create more work instead of reducing it. Despite utopian and dystopian sci-fi narratives or news media predictions about how AI will revolutionize every aspect of society, the on-the-ground reality is often more nuanced and far more mundane.

Proferes: Rather than focus on specific success stories or doomsday stories about AI, I’d argue we need to pay attention to the function that stories about technology have in our lives. Stories about technology help create a kind of order for our world. They do this by enabling and limiting ways of “seeing” a given technology’s role in society and its future possibilities. Stories help shape how a given technology becomes part of our systems of goals, values and meaning. In a way, they serve as a kind of collective sensemaking guide. I think it’s really important that folks critically engage stories about AI and think about how these stories might shape our dispositions, particularly in higher ed.

Q: What can educators and institutions start doing today to instill more responsible, ethical adoption of AI-related technologies?

Florini: Get involved and participate in the conversations surrounding these technologies. We all need to be part of the efforts to shape how they will be integrated into colleges and universities. The terrain around AI is shifting quickly, and there are many stakeholders with diverging opinions about the best course of action. We all need to be developing a critical understanding of these technologies and contributing to the process of determining how they align with our values.

Proferes: Have conversations with your community. Not just your peers, but with every stakeholder who might be impacted. Create spaces for that dialogue. Map out the collective core values you want to achieve with the technology, and then develop policies and procedures to support them.

But also, be willing to revisit these conversations. Too often in tech development, ethics is treated as a checkbox rather than an ongoing process of reflection and consideration. Living wisely with technology requires phronesis, or practical wisdom. That’s something gained over time through practice, not a one-and-done deal.

Q: How does the workgroup plan to contribute to future discussions of AI at ASU?

Florini: In addition to several more white papers exploring AI and higher education, we hope to facilitate and anchor conversations about AI and ethics. Not only does ASU strive to innovate, but it is also deeply committed to doing so in principled and ethical ways. Being at the forefront of AI and higher education, ASU is uniquely poised to grapple with the hard questions of when and how to use these technologies, to determine best practices, and to create a model for more effective and principled use of AI at universities and colleges.

Proferes: In addition to the white paper we just published, we have a number of different white papers in the works that, we hope, will serve as tools to help further conversations about what ethical implementation of AI can look like, not just here at ASU, but across higher ed more broadly. ASU is so far ahead of the curve in so many ways, and we think that the conversations we are having here will also start showing up elsewhere.
