'Escaping Flatland': Faculty advocate for AI literacy in higher education


In their white paper "Escaping Flatland: Understanding Multi-Dimensional Potentials of AI Literacies in College Research and Writing," a group of ASU faculty is advancing national conversations on what it means to be “AI literate.” The paper's title borrows the concept of "escaping Flatland" from Edwin Abbott’s novella to highlight how a simplistic, flat perspective on literacy limits understanding. Photo courtesy of Pixabay

A group of Arizona State University scholars is advancing national conversations on what it means to be “AI literate.” 

Their recent white paper, "Escaping Flatland: Understanding Multi-Dimensional Potentials of AI Literacies in College Research and Writing," is part of a series from the Lincoln Center for Applied Ethics’ AI and ethics work group. 

The work group is a collaborative forum dedicated to navigating the complex ethical challenges that arise from rapid advancements in artificial intelligence. Drawing expertise from tech ethics, data science and critical technology studies, the group aims to establish ethical guidelines and frameworks for the responsible development and deployment of AI technologies.

The paper was co-authored by:

  • Michael Simeone, associate research professor, School of Complex Adaptive Systems.

  • Marisa Duarte, associate professor, School of Social Transformation.

  • Nicholas Proferes, associate professor, School of Social and Behavioral Sciences.

  • Michael Stancliff, associate professor, School of Humanities, Arts and Cultural Studies.

  • Alexander Halavais, associate professor, School of Social and Behavioral Sciences.

  • Sarah Florini, associate director, Lincoln Center for Applied Ethics; associate professor of film and media studies.

  • Shawn Walker, assistant professor, School of Social and Behavioral Sciences.

  • Jaime Kirtz, assistant professor, School of Art, Media and Engineering.

Below, four of the co-authors share insights about why AI literacy matters, what risks they see for students and educators, and how this research invites colleagues at ASU and beyond to join the conversation.

From left: Michael Simeone, Nicholas Proferes, Marisa Duarte and Michael Stancliff. Courtesy photos

Question: What is your role in this project and how did you become involved?

Michael Stancliff: I direct the writing program on the West Valley campus and New College. Working with first-year students, I became deeply involved in considering the use and ethics of AI. A central assertion of our white paper is that we need to develop critical and culturally aware literacy around AI, helping students understand its limitations, biases and potential impacts, while also recognizing how it can be used as a tool. My contribution was bringing a classroom-oriented perspective to the paper, connecting pedagogy with broader knowledge creation.

Michael Simeone: This paper started by responding to the way the term "literacy" was being applied to AI. Literacy is not just a skill; it has historical, cultural and disciplinary dimensions. We borrowed the concept of "escaping Flatland" from Edwin Abbott’s novella to highlight how a simplistic or flat perspective on literacy limits understanding. We want students and scholars to move beyond basic skill acquisition to a richer, more nuanced engagement with AI, seeing it as a medium for expression, creation and critical thinking.

Nicholas Proferes: It’s not enough to know how to use AI; literacy also involves knowing when and why to use it, and understanding the broader social systems in which AI exists. Libraries, for example, aren’t neutral; they're cultural artifacts. AI similarly flattens knowledge if we treat it only as a linear tool. Escaping that flatland means helping students grasp the context, history and social implications of information.

Marisa Duarte: Our white paper is part of a larger research agenda. We are looking at AI in higher education from three perspectives: disciplinary training, technical knowledge and critical philosophical reflection. This isn’t just for students; it’s for colleagues across ASU — an invitation to consider the ethical, cognitive and pedagogical implications of AI. Literacy is about more than skills; it’s about how people create, interpret and protect knowledge responsibly.

Q: What are some of the risks or concerns with AI use in education?

Stancliff: Writing is a process of creating knowledge. If students use AI in a task-oriented, production-focused way, they risk "cognitive laziness," flattening knowledge and expression. AI can help students, but it can also de-skill them if they bypass the problem-solving and critical thinking essential to learning.

Proferes: A key part of literacy is practical wisdom: knowing when it’s appropriate to use AI and when it’s not. For example, AI shouldn’t replace human empathy or interaction in situations where authentic voice matters, such as counseling a peer or engaging with patients.

Duarte: We must consider political, legal and ethical implications. Students should understand how their work may be used or misused, and how AI impacts privacy, intellectual property and even environmental resources. AI literacy isn’t just about using tools; it’s about understanding consequences and engaging ethically.

Q: How can AI literacy be integrated into learning and teaching?

Simeone: AI training isn’t about mastering a tool; it’s about creativity, analysis and evaluation. Users should understand possible futures, consider ethical implications and think critically about outputs.

Stancliff: It’s vital that students retain their original voice and perspective. Tools like AI should augment learning, not flatten discourse or replace critical thinking. The white paper encourages teaching methods that preserve cognitive engagement, problem-solving and expression.

Proferes: A holistic view of literacy also prepares students to navigate safety, risk and responsibility. Skills are part of it, but knowing how to assess the appropriateness, reliability and ethical dimensions of AI use is central.

Duarte: ASU is an early adopter, and this work helps set an example. By documenting our approach and producing white papers, we’re sharing guidance for colleagues and the broader academic community on how to responsibly integrate AI into education.

Q: What’s the broader goal of your research on AI literacy?

Simeone: We want to start a conversation. AI literacy is a multi-part project encompassing ethics, creativity, evaluation and trust. Our goal isn’t to define it rigidly, but to stimulate thought and inquiry so students, teachers and scholars are empowered to engage critically with AI in all its dimensions.

Duarte: Ultimately, AI literacy is about ensuring that technologies empower rather than diminish human potential. We aim to cultivate a culture where students and educators understand the social, ethical and practical dimensions of AI and use it thoughtfully and responsibly.

Proferes: When we teach literacy beyond skills, we produce a greater good: students are better equipped to understand context, consequences and responsible usage, which benefits society at large.
