ASU workgroup addresses ethical questions about the use of AI in higher ed


A robotic arm reaches from the left of the image to touch a tattooed human arm reaching from the right, a nod to Michelangelo's "The Creation of Adam."

Photo credit: Cottonbro Studio

As artificial intelligence becomes increasingly ubiquitous in our everyday lives, the AI and Ethics Workgroup at Arizona State University's Lincoln Center for Applied Ethics is working to establish ethical guidelines and frameworks for the deployment of AI technologies.

Composed of experts from a variety of fields, the workgroup is dedicated to navigating the complex ethical challenges arising from rapid advancements in AI. Earlier this month, the group published its first white paper, which focuses on the use of AI tools in higher education.

The workgroup’s co-chairs are Sarah Florini, the associate director of the Lincoln Center and an associate professor of film and media studies, and Nicholas Proferes, an associate professor in ASU’s School of Social and Behavioral Sciences.

Florini and Proferes shared some insights into their workgroup’s research process and their publication, “AI and Higher Education: Questions and Projections.”

Note: Answers have been edited lightly for length and/or clarity.

Question: Why is it critical that academics begin asking questions about AI in higher ed? Why now?

Florini: Historically, when technology has radically reshaped our lives, it has had as much to do with cultural processes as with technological innovation. New technologies don’t have inherent, predetermined outcomes. Every new technology is accompanied by a period where, as a society, we negotiate its meaning, its value and how it might fit into our world. We’re in this moment right now with AI. And it is crucial that those of us in higher education be active participants in shaping the future of AI in a way that is beneficial to our students, our community and society at large.

Proferes: As we sit on the cusp of more widespread AI adoption in higher ed, we have a brief window where we can take a step back and be reflective — not just about AI, but also about many practices in academia. This is a moment for us to collectively ask: “What do we value in the educational enterprise, and how can we use AI, with moral wisdom, to help us achieve those values?” As a scholar of technology and society, I think it’s incredibly important for us to think through and understand the tradeoffs that any new technology presents, and to understand that there are always tradeoffs.

Q: What do the stories about AI in popular culture and the news media tell us about the way these emerging technologies are being adopted and used?

Florini: I think they tell us more about our cultural imagination than about real-world adoption and use. Popular culture and the news media are key terrains for negotiating how new technologies will be understood and integrated into our lives. They can work to naturalize particular ways of thinking about these technologies. And that doesn’t always match the reality of day-to-day use. For example, there is growing evidence suggesting that AI might actually create more work instead of reducing it. Despite utopian and dystopian sci-fi narratives or news media predictions about how AI will revolutionize every aspect of society, the on-the-ground reality is often more nuanced and far more mundane.

Proferes: Rather than focus on specific success stories or doomsday stories about AI, I’d argue we need to pay attention to the function that stories about technology have in our lives. Stories about technology help create a kind of order for our world. They do this by enabling and limiting ways of “seeing” a given technology’s role in society and its future possibilities. Stories help shape how a given technology becomes part of our systems of goals, values and meaning. In a way, they serve as a kind of collective sensemaking guide. I think it’s really important that folks critically engage stories about AI and think about how these stories might shape our dispositions, particularly in higher ed.

Q: What can educators and institutions start doing today to instill more responsible, ethical adoption of AI-related technologies?

Florini: Get involved and participate in the conversations surrounding these technologies. We all need to be part of the efforts to shape how they will be integrated into colleges and universities. The terrain around AI is moving quickly, and there are many stakeholders with diverging opinions about the best course of action. We all need to be developing a critical understanding of these technologies and contributing to the process of determining how they align with our values.

Proferes: Have conversations with your community. Not just your peers, but with every stakeholder who might be impacted. Create spaces for that dialogue. Map out the collective core values you want to achieve with the technology, and then develop policies and procedures that help support them.

But also, be willing to revisit these conversations. Very often with tech development, ethics is treated as a checkbox, rather than an ongoing process of reflection and consideration. Living wisely with technology requires phronesis, or practical wisdom. That’s something that’s gained over time through practice. Not a one-and-done deal.

Q: How does the workgroup plan to contribute to future discussions of AI at ASU?

Florini: In addition to several more white papers exploring AI and higher education, we hope to facilitate and anchor conversations about AI and ethics. Not only does ASU strive to innovate, but it is also deeply committed to doing so in principled and ethical ways. Being at the forefront of AI and higher education, ASU is uniquely poised to grapple with the hard questions of when and how to use these technologies, to determine best practices, and to create a model for more effective and principled use of AI at universities and colleges.

Proferes: In addition to the white paper we just published, we have a number of different white papers in the works that, we hope, will serve as tools to help further conversations about what ethical implementation of AI can look like, not just here at ASU, but across higher ed more broadly. ASU is so far ahead of the curve in so many ways, and we think that the conversations we are having here will also start showing up elsewhere.
