
Drowning in disinformation

How can we stop the deluge of disinformation flooding the internet and social media?


A new white paper co-authored by Nadya Bliss, executive director of the Global Security Initiative at Arizona State University, outlines a clear agenda for research on disinformation that could help inform a national response. Illustration by Shireen Dooling/Arizona State University

December 18, 2020

The use and spread of disinformation — false or misleading information intended to deceive people — is being amplified and accelerated at an alarming rate on the internet via social media.

Within the U.S., this has quickly eroded trust in institutions that serve as the bedrock of our society, such as science, the media, and government, to the point that we can’t even agree on basic facts.

In a white paper for the Computing Research Association’s Computing Community Consortium, a group of researchers from Arizona State University, Columbia University, the Santa Fe Institute and the University of Colorado outlines steps to begin dealing with the disinformation problem.

Disinformation is often used to create confusion and dismantle trust in traditionally trustworthy organizations. One obvious example today is the labeling of COVID-19 as a “hoax,” which has led many people to dismiss it as a real threat to their health and to forgo the precautions needed to prevent and contain its spread.

“Disinformation and the poisoned information environment we’re all swimming in needs to be a national priority,” said Nadya Bliss, executive director of the Global Security Initiative at ASU. “Our white paper outlines a clear agenda for research on the topic that could help inform a national response driven by the public and private sectors together.”

One of the CRA’s main goals is to explore how computing research can help support national priorities.

“Within the past few months, we’ve seen other large-scale disinformation about elections and the democratic process in terms of the validity, legality and security of mail-in ballots, fraudulent voting, rigged elections, dead people voting, supercomputers changing votes, etc.,” said co-author Joshua Garland, an Applied Complexity Fellow at the Santa Fe Institute. “And there are many other examples surrounding migrants, vaccines and climate change.”

Disinformation is an existential threat to democracy and society, points out Elizabeth Bradley, a professor of computer science at the University of Colorado.

“We technologists created many of the tools being used by disinformation creators and circulators — the internet, social media, etc. — and it’s incumbent upon us to think about solutions,” Bradley said.

The researchers emphasize that countering disinformation requires addressing both its supply and its demand.

“On the supply side, we need to develop better methods for detecting and isolating or at least mitigating disinformation before it spreads,” Bliss said. “On the demand side, we need improved efforts to educate the citizenry so people are less susceptible to believing and spreading disinformation.”

Purveyors of disinformation are excellent at manipulating human emotions — they create content that is meant to seem believable while triggering an emotional response. As an individual, the best thing you can do to stop the spread of disinformation is to be sure you aren’t part of the problem. If you’re online and see a post that outrages you, Bliss cautions you to take a moment to think before sharing it.

The researchers say the challenge of combatting disinformation requires a comprehensive response that goes far beyond computing research and includes education, psychology, journalism and other disciplines.

“There's a tremendous need to understand how data-empowered algorithms are impacting our reality and the offline world,” said co-author Chris Wiggins, an associate professor of applied mathematics at Columbia University’s School of Engineering and Applied Science and the chief data scientist at The New York Times. “Just like for any other complex system, addressing this will require interacting with the system — here the information ecosystem — in a way that respects ethical concerns for rights, harms and justice.”

Story by Arizona State University, Computing Research Association’s Computing Community Consortium, the Santa Fe Institute, University of Colorado and Columbia University.
