Understanding how our perception of AI affects its use


Students enrolled in ASU Professor Kyle Jensen’s advanced English course use generative artificial intelligence tools to assist their writing process — from brainstorming ideas to editing drafts. Photo by Samantha Chow/ASU

Editor's note: This expert Q&A is part of our “AI is everywhere ... now what?” special project exploring the potential (and potential pitfalls) of artificial intelligence in our lives. Explore more topics and takes on the project page.

Professor Kyle Jensen, Arizona State University’s director of Writing Programs, believes that by understanding how AI works, both his students and the world at large can better recognize the unconscious biases that shape how we interact with the world.

“As a scholar of rhetoric, I'm very concerned that the stories that we're telling about generative AI are actually working against our ability to become curious about how it's working,” Jensen said.

“What I don't want to have happen is for the technology to take off in really exciting directions and have a majority of people say, ‘Wow, that really caught us off guard.’ I think it's the responsibility of public educators to ensure that doesn't happen.”

Students enrolled in Jensen’s advanced English course use generative artificial intelligence tools to assist their writing process — from brainstorming ideas to editing drafts.

“These technologies are not altogether that threatening if we are genuinely committed to working closely with our students, learning their voice, reading multiple drafts and cultivating a trusting, collaborative relationship,” Jensen said.

Below, Jensen speaks to how AI is impacting the field of humanities and what excites him most about its future.

Question: What is the focus of your research at ASU?  

Answer: I am a professor of rhetoric, which means that I study why people argue the way that they do. I specialize in modern rhetorical theory, which means that I study how arguments that fall just below our conscious awareness persuade us to think, act or feel a certain way.

Modern rhetorical theory is a helpful framework for understanding generative AI because it gives us tools for explaining why people might be anxious, fearful or optimistic about the technology even when they don't know very much about it. Obviously, the way that a person perceives generative AI will affect the way that they use it.

So my research is exploring different ways to think about and interact with generative AI by making our nonconscious perceptions about it the subject of conscious reflection.

Q: How is AI changing research within the field of humanities?

A: It's probably too early to tell given where the technology is at the moment. But there have been some initial returns that seem promising.

For example, generative AI is encouraging historians, critics and philosophers to reflect on how we have defined human intelligence in relation to computing technologies, which is helping us develop a more precise definition of what makes human intelligence unique and worthy of investigation.

Humanists are also incorporating generative AI into their classrooms to expand how students conceptualize the work of reading and writing. And there are many humanists who are reasonably concerned about the ethical uses of generative AI — its reliance on datasets that reinforce race and gender stereotypes, its application in areas such as surveillance and policing, and its relationship to intellectual property laws.

So I think humanities scholars are well positioned to contribute meaningfully to discussions about AI as it continues to evolve.

Q: How is this technology shaping the real-world application of humanities?

A: Humanities scholars, like every scholar in the university, want their students to be prepared for 21st-century workplaces. So giving them access to generative AI applications and helping them build a working knowledge of how the applications function relative to their major is important. But humanities scholars don’t stop at application.

We want our students to develop critical thinking strategies that help them address the implications of generative AI use relative to abstract topics such as “What makes us unique as humans?” and more concrete topics such as “Does this generative AI application create output that compromises core social values such as accessibility, equity and justice?”

If so, how can we create new applications that are more consistent with our stated social values or develop policies that are responsive to and curtail its limitations and dangers?

Q: What opportunities related to AI in humanities most excite you?

A: I enjoy working on complicated problems that require me to collaborate with scholars from other disciplines. Obviously, generative AI poses many complicated problems and demands many different forms of expertise. So I’ve spent a lot of time over the last two years learning from colleagues at ASU who know a lot more than I do about generative AI.

These conversations are already leading to new interdisciplinary initiatives that redefine what it means to create knowledge and develop public-facing products using generative AI applications. These initiatives rely on enterprise partnerships, which means that we will get to see how our intellectual work addresses the social problems that we care most about. If that’s not exciting, I don’t know what is.

Q: What challenges related to AI in humanities need to be addressed?

A: The hard truth is that very few people understand how generative AI works, and the existing explanations of its operations leave a lot to be desired. Part of the problem is that generative AI is really difficult to explain, much less understand. Another part of the problem is that generative AI applications look a lot like familiar applications such as search engines or word processing software. So users reasonably expect generative AI to function the same way as these familiar technologies. But it does not.

The challenge is to slow down the decision-making process so that we learn how this technology functions. At the same time, university leaders must immediately build infrastructure that helps faculty collaborate in a manner that is consistent with the university’s overall mission. Fortunately, ASU leaders such as Ron Broglio, Jeffrey Cohen, Elizabeth Reilly, Allison Hall, Kyle Bowen, Danielle McNamara, Anne Jones, Lev Gonick and Nancy Gonzales have been meeting this challenge successfully.

Q: Why is ASU important to the successful development of AI related to humanities?

A: First, ASU’s mission ensures that generative AI initiatives will be developed with attention to access, equity and scale. Increasing the number of people who understand how generative AI functions will lead to richer and more equitable conversations about its applications. Humanities faculty are committed to such important work.

Second, ASU’s commitment to developing enterprise partnerships ensures that our faculty, staff and students have access to generative AI applications ahead of many other universities. Being at the leading edge of conversations about generative AI is energizing and rewarding, and ASU has done a good job of ensuring that humanities faculty have a seat at the table.

Third, ASU is committed to redefining what a 21st-century education may be. So humanities faculty, staff and students are well positioned to adapt our research and teaching methods as this technology develops in exciting new directions.
