ASU experts explore national security risks of ChatGPT

August 15, 2023

When the editor in chief of a Georgia daily newswire turned to ChatGPT to research a Second Amendment lawsuit pending in Washington, D.C., the chatbot delivered a newsworthy nugget.

ChatGPT, a natural language processing tool driven by AI technology, stated that the legal complaint accused a syndicated radio talk show and podcast host of defrauding and embezzling funds from the Second Amendment Foundation. It seemed scandalous, but ChatGPT was simply making up facts that sounded convincing, a phenomenon known as “hallucination.” The editor in chief never published the fabricated information but did share it with the talk show host, who filed a defamation lawsuit against ChatGPT’s developer, OpenAI.

In New York, a federal judge imposed fines on two attorneys and a law firm for submitting fictitious legal research generated by ChatGPT in an aviation injury claim. The judge said the lawyers and their firm “abandoned their responsibilities when they submitted nonexistent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”

These are just two recent examples of how ChatGPT, the internet’s favorite plaything since its debut in November, poses a threat to our reputations, jobs, privacy — and even truth itself. Arizona State University experts say another threat looms large but often escapes our notice: how ChatGPT and other chatbots pose a risk to national security.

A real game changer

“ChatGPT is a real game changer. This is the first time in human history that we are facing a democratic process with the 2024 election with an incredibly powerful artificial intelligence potentially being used to undermine the process in ways that we have never ever seen before,” says Andrew Maynard, a scientist, author and professor in ASU’s School for the Future of Innovation in Society, a unit of the College of Global Futures.

Maynard’s work focuses on how society can successfully transition to a future in which transformative technologies have the power to fundamentally change every aspect of life. He writes about emerging technologies and responsible innovation in "The Future of Being Human" on Substack.

“People can use generative AI to create content that looks legitimate and human sourced. That’s one side of things,” he says. “On the other side of things is social or human hacking. How can you use persuasive content to get underneath people’s defense mechanisms and critical thinking and nudge them in certain ways?

“This is what skilled manipulators and skilled politicians do: They use rhetoric in a way that makes it far harder for us to engage in critical thinking, and far easier to just go with the flow of their ideas. They do that through an incredibly clever use of language that plays to our internal biases. And now we’ve taught machines how to manipulate how we think, feel and behave in a way that has never been done before."

With hundreds, if not thousands, of chatbots feeding content into social media and news outlets, that adds up to an enormous amount of political persuasion in the run-up to the 2024 election.

The holy grail of disinformation research

“The holy grail of disinformation research is to not only detect manipulation, but also intent. It’s at the heart of a lot of national security questions,” says Joshua Garland, associate research professor and interim director at ASU’s Center on Narrative, Disinformation and Strategic Influence (NDSI), part of the Global Security Initiative.

NDSI conducts research on strategic communication, influence, data analytics and more to generate actionable insights, tools and methodologies for security practitioners to help them navigate today’s (dis)information age. One example is the Semantic Forensics (SemaFor) program, funded by the U.S. Defense Advanced Research Projects Agency (DARPA), which aims to create innovative technologies to detect, attribute and characterize disinformation that can threaten our national security and daily lives.

ASU is participating in the SemaFor program as part of a federal contract with Kitware Inc., an international software research and development company. Their project, Semantic Information Defender (SID), aims to produce new falsified-media detection technology. The multi-algorithm system will ingest significant amounts of media data, detect falsified media, attribute where it came from and characterize malicious disinformation.

“Disinformation is a direct threat to U.S. democracy because it creates polarization and a lack of shared reality between citizens. This will most likely be severely exacerbated by generative AI technologies like large language models,” says Garland.

The disinformation and polarization surrounding the topic of climate change could also worsen.

“The Department of Defense has recognized climate change as a national security threat,” he says. “So, if you have AI producing false information and exacerbating misperceptions about climate policy, that’s a threat to national security.”

Garland adds that the technology’s climate impact goes beyond disinformation.

“It’s really interesting to look at the actual climate impact of these popular large language models,” he says.

Programs like OpenAI’s ChatGPT and Google’s Bard are energy intensive, requiring massive server farms to supply the computing power needed to train them. Cooling those data centers consumes vast amounts of water as well.

Researchers from the University of California, Riverside and the University of Texas at Arlington published AI water consumption estimates in a preprint paper titled “Making AI Less ‘Thirsty.’” The authors reported that training GPT-3 alone required 185,000 gallons of water, or about a third of the water needed to fill an Olympic-sized swimming pool. Using these numbers, they estimated that ChatGPT consumes the equivalent of a standard 16.9-ounce water bottle for every 20 to 50 questions it answers. Given the chatbot’s unprecedented popularity, researchers like Garland fear it could take a troubling toll on water supplies amid historic droughts in the U.S.
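As a back-of-the-envelope check on those figures, the arithmetic works out as sketched below. The Olympic pool volume of roughly 660,000 gallons is an outside assumption, not a number from the paper, and the per-question range simply restates the paper's bottle estimate in different units.

```python
# Rough sanity check of the water figures cited above.
training_water_gal = 185_000   # reported estimate for training GPT-3
olympic_pool_gal = 660_000     # assumed Olympic pool volume (~2.5 million liters)

pool_fraction = training_water_gal / olympic_pool_gal
print(f"Training water as a fraction of an Olympic pool: {pool_fraction:.2f}")
# roughly 0.28, consistent with "about a third"

bottle_oz = 16.9                    # standard single-serve water bottle (~500 ml)
queries_low, queries_high = 20, 50  # questions answered per bottle, per the paper
oz_per_query_high = bottle_oz / queries_low
oz_per_query_low = bottle_oz / queries_high
print(f"Implied water per question: {oz_per_query_low:.2f} to {oz_per_query_high:.2f} oz")
```

At millions of questions per day, even a fraction of an ounce per question compounds quickly, which is the core of the researchers' concern.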

The promise (and pitfalls) of rapid adoption

“Right now, we are seeing rapid adoption of an incredibly sophisticated technology, and there’s a significant disconnect between the people who have developed this technology and the people who are using it. Whenever this sort of thing happens, there are usually substantial security implications,” says Nadya Bliss, executive director of ASU’s Global Security Initiative, who also serves as chair of the DARPA Information Science and Technology Study Group.

She says ChatGPT could be exploited to craft phishing emails and messages that target unsuspecting victims, tricking them into revealing sensitive information or installing malware. The technology can produce a high volume of such emails, each harder to detect than a typical phishing attempt.

“There’s the potential to accelerate and at the same time reduce the cost of rather sophisticated phishing attacks,” Bliss says.

ChatGPT also poses a cybersecurity threat through its ability to rapidly generate malicious code that could enable attackers to create and deploy new threats faster, outpacing the development of security countermeasures. The generated code could be rapidly updated to dodge detection by traditional antivirus software and signature-based detection mechanisms.

Turning ChatGPT into a force for good

At GSI’s Center for Human, Artificial Intelligence, and Robot Teaming (CHART), Director Nancy Cooke and her team explore the potential legal and ethical issues that arise as robots and AI are assigned increasing autonomy. They also study how teams of humans and synthetic agents can work together effectively, from communicating verbally and nonverbally to engendering the appropriate level of human trust in AI.

“In my experiments where I bring participants into the lab, train them on a task and tell them they will be working with AI, in many cases the participants trust the AI too much,” she says. “When the AI starts making mistakes, participants often think, ‘I just don’t know what I’m doing because I’m new to this task. AI must be better than I am.’”

If people suspect AI can perform tasks better than they can, it’s only natural for professionals such as computer programmers, financial advisers, writers and others to fear for their job security.

While it is likely — if not inevitable — that ChatGPT will wipe out jobs, Cooke believes it is possible for humans to be empowered by AI, not threatened by it. She gives the example of playing chess.

“Creating teams that are half human and half machine (known as ‘centaurs’) happens in chess, where you have a pretty good chess player matched with a pretty good chess program, and together they beat the most famous grandmaster, Garry Kasparov, as well as the very best chess program. By using two different kinds of intelligences, you can take the best of what AI has to offer and team it with the best of what humans have to offer. At CHART, we call it making humans with superhuman capabilities,” says Cooke.

For technology to transform us into humans with superhuman capabilities, Cooke says we first need to build guardrails to ensure it works on our side.

“What if we were to regulate it such that developers of AI would need to produce a report card telling us in what ways the technology would be good and bad for human well-being?” she asks.

Developing ChatGPT literacy

“Trust but verify,” the phrase made popular by Ronald Reagan during his presidency, rings especially true today with the growing popularity of ChatGPT. After reaching more than 100 million active users within two months of its launch, ChatGPT became the fastest-growing consumer application in history, according to a UBS study.

Bliss recommends maintaining a healthy dose of skepticism when using this application that is more glib than accurate. Triggering emotion is at the heart of successful disinformation campaigns, and Bliss suggests pausing if you read something that triggers a strong reaction.

“If I read something that makes me feel sad, happy or angry, I will usually go back and research to see if there is a reliable source that has a story on a similar topic,” she says. “I’m a big fan of checking sources and making sure those sources are reliable.”

To help people get better results from chatbots, Maynard teaches a new ASU Online course called Basic Prompt Engineering with ChatGPT: Introduction. The course is open to students in any major and, despite its name, is not really about engineering. Maynard says it is like driver’s ed for ChatGPT users.

“Having a car is great, but having people driving them without knowing the rules of the road or basic driving skills doesn’t lead to safe roads,” he says. “It’s the same with ChatGPT. The more people understand how to use it in safe and responsible ways, the more likely we’ll see the benefits of it.”

ASU’s Global Security Initiative is partially supported by Arizona’s Technology and Research Initiative Fund (TRIF). TRIF investment has enabled hands-on training for tens of thousands of students across Arizona’s universities, thousands of scientific discoveries and patented technologies, and hundreds of new startup companies. Publicly supported through voter approval, TRIF is an essential resource for growing Arizona’s economy and providing opportunities for Arizona residents to work, learn and thrive.

Written by Lori Baker

Top ASU stories you might have missed over summer break

August 15, 2023

While summer is a time of rest and relaxation for many students, not everything slows down at Arizona State University over the summer break.

As we welcome new and returning students to ASU for the fall semester, here are some of the top stories you might have missed this summer.

1. Introducing the world to the first outdoor sweating, breathing and walking manikin

On the far northeast corner of ASU’s Tempe campus lives ANDI, the world’s first indoor-outdoor breathing, sweating and walking thermal manikin. ANDI can mimic the thermal functions of the human body and has 35 different surface areas that are all individually controlled with temperature sensors, heat flux sensors and pores that bead sweat. His purpose? Measuring the effects of extreme heat on human health. 

ANDI has also been featured by multiple news organizations this summer, including AZCentral, Popular Science and Good Morning America.

2. ASU announces launch of new medical school, ASU Health initiative

The Arizona Board of Regents has asked ASU to expand medical education in Arizona by launching a new medical school. The new ASU School of Medicine and Advanced Medical Engineering will integrate clinical medicine, biomedical science and engineering.

The new school headlines ASU Health, a “learning health ecosystem” being created by the university to accelerate and focus its health-related efforts to tackle the state’s urgent health care needs, now and into the future. Dr. Sherine Gabriel — whose resume includes an extensive list of leadership positions in medicine and academia — will lead ASU Health as its executive vice president.

3. ASU retains No. 1 US spot in Times Higher Education ranking

The internationally respected Times Higher Education Impact Rankings recognized the university as the No. 1 institution in the United States and sixth in the world for addressing the United Nations Sustainable Development Goals, or SDGs.

The annual ranking assesses universities’ impact across 17 specific goals aimed at achieving a better world by 2030.

4. ASU joins prestigious Association of American Universities

This June, the university was selected to join the prestigious Association of American Universities (AAU), which comprises the nation’s elite research universities.

Members of AAU, including stalwart private universities like Harvard, Stanford, MIT and Johns Hopkins, as well as leading public universities like UCLA, the University of Washington, the University of Wisconsin-Madison and the University of Michigan, collectively help shape policy for higher education, science and innovation; promote best practices in undergraduate and graduate education; and strengthen the contributions of leading research universities to American society.

5. Study shows social companionship improves your dog’s health

The largest survey and data compilation of its kind, which includes more than 21,000 owners, has revealed the social determinants that may be tied to healthier aging for pet dogs. Among them, the dog’s social support network proved to have the greatest influence on better health outcomes.

“This does show that, like many social animals — including humans — having more social companions can be really important for the dog’s health,” said PhD student Bri McCoy, who worked on the study.

6. Jacob Moore moves to new executive post

Jacob Moore, the former associate vice president of tribal relations in the Office of Government and Community Affairs, is the new vice president and special advisor to the president on American Indian affairs.

Moore’s goal is to build upon the university’s previous work to make higher education more accessible for American Indian/Indigenous students and strengthen the university’s engagement with tribal nations and communities.

"I am grateful for the opportunity to be of service to ASU and to Indigenous students and communities,” said Moore, who is Lakota, Dakota, Akimel O’odham and Tohono O’odham.

7. ASU to play important role in new Starbucks Costa Rica lab

Arizona State University will work with Starbucks as early as this fall to offer educational programming for select ASU students and Starbucks partners at a new sustainability learning and innovation lab at Hacienda Alsacia — the company’s global agronomy headquarters for research and development located in Costa Rica.

The first wave at the farm will include study abroad opportunities for students tied to existing ASU degree programs such as sustainability, sustainable food systems, global agribusiness and environmental and resource management.

8. ASU, Applied Materials to create Materials-to-Fab Center at ASU Research Park

More than $270 million in corporate and state investment will create a world-class shared research, development and prototyping facility — the Materials-to-Fab (MTF) Center — in the university’s MacroTechnology Works building at ASU Research Park.

The center will be designed to accelerate the transfer of innovations from ideation to fab prototype by bringing Applied Materials’ state-of-the-art semiconductor manufacturing equipment into a collaborative environment where ASU and Applied Materials can work with industry partners, startups, government entities and academic institutions. 

9. ASU's Léon Marchand swimming toward Olympic gold

Léon Marchand, who at 18 years old was already a world-class swimmer and had competed in the 2020 Olympics, came to ASU so he could be coached by Bob Bowman — the man who coached Michael Phelps.

This summer he competed in the 2023 World Aquatics Championships, where he broke a record set by Phelps, and when the 2024 Summer Olympics are held in Paris, in his native France, he could become the face of the games.

Learn more about Marchand, who is known among his friends as much for his humility as for his swimming prowess.

10. Sun Devil Athletics partners with Mountain America Credit Union, will join Big 12 Conference

ASU and Sun Devil Athletics have entered a multiyear naming-rights partnership with Mountain America Credit Union, forming one of the most dynamic deals of its kind in college athletics. The 15-year partnership — the most significant in the athletics department's history — includes ASU’s football stadium, which will now be called Mountain America Stadium, Home of the ASU Sun Devils.

In other sports news, it was also announced that Arizona State University, the University of Arizona and the University of Utah will join the Big 12 Conference in 2024, positioning the universities and their student-athletes for increased stability and success. The move will be effective on Aug. 2, 2024.  

Top photo: Biomedical sciences student Tori Morgan studies from her hammock on the Tempe campus. Photo by Charlie Leight/ASU News