The pros and cons of ChatGPT



Editor’s note: This story is featured in the 2023 year in review.

If you ask ChatGPT, a chatbot that was launched by OpenAI last fall, how it feels about being used by students to cheat on school assignments, here’s what it will spit out. 

“As an AI language model, I do not have feelings, emotions or personal opinions. I am a machine learning model trained to respond to user inputs based on patterns in the data I was trained on. The use of ChatGPT or any other AI model for cheating purposes goes against the principles of academic integrity and is not in line with the intended purpose of these models.” 

And yet much of the outcry over ChatGPT is about its ethical uses or misuses. 


ASU News talked to Arizona State University Assistant Professor of English Jacob Greene to get his take on this topic. Greene's research focuses on the rhetorical potential of emerging forms of location-based digital writing, such as augmented reality and GPS-guided audio tours. 

Greene says that, like any technology, it has pros and cons. He recently used the chatbot as a teaching tool in his writing class and discussed it on his website.

Here's what he had to say.

Portrait of Jacob Greene

Question: Based on the comments in your blog, you seem to see this new technology in a positive light. Or is it more of a resignation? 

Answer: It's basically like any other writing or communication technology. It can be used in both positive and negative ways. I think it's inevitably going to become a part of the way writers write in any context.

Microsoft invested $10 billion in the company that created ChatGPT. So it's likely that this software, or some version of it, will be integrated into Microsoft Word or Google Docs in the coming years or maybe even months.

Q: You decided to teach this technology in class, and surprisingly, most of your writing students had not used it. Why be so straightforward in your approach to ChatGPT?

A: I just taught a whole class about ChatGPT and its implications for writers — what it means for the future of how we communicate. What might this offer us as writers in terms of maybe offloading the kind of writing that we don't necessarily see as particularly creative or expressive, but rather mundane and repetitive? It allows us to focus on other elements of our writing or communication that we would rather focus on. 

I found it really fascinating and was surprised at how many of my students hadn't really heard of it yet. So I may have unwittingly offered a lot of them a means to circumvent some assignments, but we'll see. 

Q: Most of the buzz around ChatGPT is from an ethical standpoint. Understandably, educators regard the use of AI content generators as cheating and therefore an obstacle to learning.

A: ChatGPT could clearly be used to cheat, but so can Google. I mean, if you want to know the answer to a question on your history exam, you can type it into Google and you’ll get results on the first page. ChatGPT really isn't revolutionary to me in that regard. It's just a cleaner interface for doing something that Google pretty much does already. But it organizes the information a little better and maybe makes it easier.

... A common assignment in an English class might be to create a rhetorical analysis of Martin Luther King's “I Have a Dream” speech. So you could type that into ChatGPT and it'll produce a fairly basic but passable rhetorical analysis of that famous speech. And the reason it can do it so well is because there are thousands of rhetorical analyses of King’s speech online already.

So I think it's basically doing a lot of that stuff. Its real innovation is its ability to create unique prose that doesn't exist anywhere else online. It can generate its own sentences, its own syntax and vocabulary, and do it in different voices and styles.

It has its limitations. It feels kind of robotic in what it can generate. But you know, it is impressive stylistically, the different things it can do. It can write a summation of Aristotle's philosophies in the style of a haiku or a poem or things like that. So, yeah, it's interesting.

Q: There has been an outcry of ethical concerns about ChatGPT — what are yours?

A: There will always be concerns about ethics for AI — especially when it comes to the kind of material that it's trained on and whether it can be used for reinforcing particular racist ideologies, gendered misogyny or things like that. I think the bigger concern is how this could be used to further the spread of misinformation, particularly if it can now be cloaked in very scientific, professional-sounding discourse that people can generate instantly. That could definitely be a concern.

In my class today, we went to the website for ChatGPT and there was a disclaimer — something like, “We discourage the use of this tool for things like plagiarism or creating hate speech.” So I think there are certain words that you can't put in there. 

At one point, I was just kind of playing around with it, and I said, “Write a negative review of a dog-grooming business.” It spent some time thinking, and eventually it popped up with this kind of boilerplate text: “It would be unethical for me to produce a negative review of a business.”

So it refused to do it. Wow. So it does have some kind of ethical responses — there are some ethical safeguards put in there.

Q: How will this change the way we think about writing?

A: If writing is a skill that we come to value in society, I think this tool forces us to reckon with what is the nature of that skill. Is that skill just taking this information and putting it in this certain grammatical order that we've all agreed on? Is this the correct way to write, this kind of standard, robotic style of writing? Or is writing something else?

And that I think is an important conversation to have because I do see value in writing as a skill and as a creative expressive act.

I think the use of writing as an assessment tool will also have to change in relation to ChatGPT. Writing was usually evidence of thinking about the relationship between different ideas and how those ideas were constructed, analyzed and synthesized, and how sources were incorporated as evidence for an argument. Those are all valuable things that we need as communicators and humans. But these tools might make it easier to just pass generated writing off as our own. So the use of writing to assess thinking or critical thinking or knowledge attainment could definitely get trickier.

Q: You delineate situations in which the use of AI may be acceptable and other situations where it wouldn’t. Can you explain? 

A: For me, it's very different to say, “Here's a memoir I wrote, or a book,” when it was actually generated by AI, as opposed to a cover letter or a professionally written text, such as a grant, that was generated by AI.

Q: How are you using it in your class?

A: For their first essay, I'm requiring (students) to use ChatGPT to write it. It's 1,000 words and they need to use the AI tool to write at least some portion of it. They can use as much or as little as they’d like. And then they also have to write a separate short reflection ... completely original, on the experience of using it as a technology. And so we'll see how that goes. It's a bit of an experiment. I have a feeling that, because it is an upper-level class, a lot of them are going to use this technology and realize its limitations fairly quickly because they're all writers, but maybe not. Maybe some of them will find a use for it. But I figured, you know, if you can't beat ’em, join ’em.

