When facts aren’t enough
Skeptik, a browser-based tool powered by artificial intelligence, identifies and explains logical fallacies in online news articles. Graphic by Andrea Heser/ASU
In the age of viral headlines and endless scrolling, misinformation travels faster than the truth. Even careful readers can be swayed by stories that sound factual but twist logic in subtle ways that quietly distort reality while never quite crossing the line into a lie.
That’s where Skeptik comes in.
Developed by a cross-disciplinary team from Arizona State University, Skeptik is a new browser-based tool designed to help readers recognize these hidden flaws. The system was created by researchers from the School of Computing and Augmented Intelligence, part of the Ira A. Fulton Schools of Engineering, and ASU’s Center for Strategic Communication. It combines the large language models that power modern artificial intelligence, or AI, chatbots with human communication theory to automatically identify and explain logical fallacies in online news articles.
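To make that idea concrete, here is a minimal sketch of how such an LLM-based check might be prompted. The prompt wording, output format and batching are illustrative assumptions, not Skeptik’s actual implementation; only the three fallacy names, which appear later in this article, come from the source.

```python
# Hypothetical sketch of an LLM-based fallacy check, assuming a
# chat-style model endpoint. Not Skeptik's actual prompt or schema.

# Three of the nine fallacy types the article names; the full list
# used by Skeptik is not given here.
FALLACY_TYPES = ["cherry-picking", "false cause", "red herring"]

PROMPT_TEMPLATE = """You analyze news text for logical fallacies.
For each numbered sentence below, decide whether it may contain one of
these fallacies: {fallacies}.
Respond in JSON: [{{"sentence_index": int, "fallacy": str or null, "explanation": str}}].
Be cautious: flag a sentence only when the reasoning pattern is clearly present.

Sentences:
{sentences}
"""

def build_fallacy_prompt(sentences: list[str]) -> str:
    """Format the detection prompt for a batch of article sentences."""
    numbered = "\n".join(f"{i}: {s}" for i, s in enumerate(sentences))
    return PROMPT_TEMPLATE.format(
        fallacies=", ".join(FALLACY_TYPES),
        sentences=numbered,
    )

# The resulting prompt would be sent to any chat-style LLM endpoint,
# and the JSON response parsed into per-sentence annotations.
```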
“Our goal isn’t to tell people what to think,” says Fan Lei, a researcher who led the project until he received his computer science doctoral degree from the Fulton Schools in 2025. “It’s to help them see how an argument is built, where it’s solid and where it might be taking shortcuts. We want to empower readers to think critically, not passively consume information.”
Seeing through the spin
Traditional fact-checking can verify whether a claim is true, but it often misses the deeper structure of persuasion and the rhetorical sleights of hand that make falsehoods seem reasonable. The Skeptik framework fills that gap by scanning news text for logical inconsistencies and marking suspect sentences directly within the article.
Readers can then click to reveal brief explanations and external evidence, or start a live chat with an AI model for deeper clarification. In the system’s prototype, each fallacy type is color-coded and linked to an interactive sidebar. A vague statement, for instance, might appear underlined in purple, while a red line could flag a straw man argument. Hovering reveals a short explanation, and clicking opens multilayered “intervention” panels that guide readers through progressively deeper insights.
The first layer offers a simple clarification, explaining why the reasoning may be misleading. The second layer provides supporting evidence and counterarguments to help readers evaluate the claim more critically. The third layer offers proactive context, anticipating similar misinformation patterns before the reader encounters them again.
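The color-coded underlines and three-layer panels suggest a simple annotation data model. The sketch below is a hypothetical reconstruction; the class names, fields and offsets are assumptions based on the description above, not Skeptik’s published schema.

```python
from dataclasses import dataclass

# Hypothetical data model for one flagged sentence; names and fields
# are assumptions based on the article's description.

@dataclass
class Intervention:
    """One layer of the progressively deeper 'intervention' panel."""
    level: int  # 1 = clarification, 2 = evidence, 3 = proactive context
    text: str

@dataclass
class FallacyAnnotation:
    fallacy_type: str           # e.g. "vagueness" or "straw man"
    color: str                  # underline color shown in the article view
    start: int                  # character offsets of the flagged span
    end: int
    layers: list[Intervention]  # the three intervention layers, in order

# Example: a vague claim underlined in purple, as in the article.
example = FallacyAnnotation(
    fallacy_type="vagueness",
    color="purple",
    start=120,
    end=187,
    layers=[
        Intervention(1, "Why the reasoning may be misleading."),
        Intervention(2, "Supporting evidence and counterarguments."),
        Intervention(3, "Context on similar misinformation patterns."),
    ],
)
```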
“People don’t always fall for misinformation because they’re careless,” Lei says. “They fall for it because persuasive writing often feels logical. We wanted to give readers a way to pause and ask, ‘Does this conclusion really follow from the evidence?’”
AI that helps us think for ourselves
The project draws its conceptual roots from inoculation theory, a communication framework suggesting that exposure to small doses of misinformation paired with explanations of why they’re wrong can build resistance to larger falsehoods later. That insight came from co-author Steve Corman, a professor emeritus in the Hugh Downs School of Human Communication and longtime expert in strategic communication and narrative analysis.
The research team designed Skeptik’s AI to work like a conversational assistant, not an all-knowing judge. When it flags a potential fallacy, it phrases the finding cautiously, displaying messages like “This may be an example of X,” leaving the final judgment to the reader.
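In code, that cautious framing is almost trivial; the helper below is hypothetical and simply mirrors the article’s example wording.

```python
def hedge_finding(fallacy_type: str) -> str:
    """Phrase a detection cautiously, leaving judgment to the reader.

    Hypothetical helper; the template mirrors the article's example.
    """
    return f"This may be an example of {fallacy_type}."

print(hedge_finding("false cause"))
# -> This may be an example of false cause.
```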
That balance between automation and human interpretation reflects one of Ross Maciejewski’s key leadership priorities. Maciejewski is a professor of computer science and engineering as well as director of the School of Computing and Augmented Intelligence. He oversaw the students’ work on Skeptik and encouraged them to design systems that amplify human reasoning rather than replace it.
“Good visualization and AI tools should help people think more clearly, not take the thinking away,” Maciejewski says. “Skeptik embodies that philosophy. It’s a system built to foster critical awareness.”
Maciejewski, a national leader in data visualization and human-centered AI, has long championed projects that bridge computation and social impact. During his tenure, he has expanded collaborations to tackle challenges at the intersection of technology, policy and public communication.
Bridging code and conversation
That interdisciplinary spirit is central to Skeptik’s success. While the engineering team built the framework to run smoothly in the browser, the communication scholars helped ensure that the explanations made sense to everyday readers.
“Working with Professor Corman was eye-opening,” says Lei. “He helped us translate technical detection results into meaningful guidance that resonates with how people read, argue and form opinions. It’s a true human-AI collaboration.”
The system currently detects nine common fallacy types, including cherry-picking, false cause and red herring, and then links them visually to the text. In a series of case studies, Skeptik successfully identified misleading reasoning in articles about climate change, election processes and public health, often providing context that clarified complex issues without sensationalism.
When tested against the Ad Fontes Media dataset, which rates thousands of news outlets on bias and reliability, Skeptik revealed a striking trend: The more biased a source, the more logical fallacies its articles contained. The findings suggest that fallacy detection could one day complement existing fact-checking metrics, offering a more nuanced measure of news credibility.
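The trend described can be checked with a standard correlation test. The sketch below assumes per-article fallacy counts and Ad Fontes bias scores are already in hand; the numbers are illustrative, and Pearson correlation is one reasonable choice of statistic, not necessarily the one the team used.

```python
from scipy.stats import pearsonr

# Hypothetical inputs, one entry per article: bias_scores would come
# from Ad Fontes Media ratings, fallacy_counts from Skeptik's
# detections. The values below are illustrative only.
bias_scores = [2.1, 5.4, 9.8, 14.3, 20.0]
fallacy_counts = [1, 2, 4, 6, 9]

r, p_value = pearsonr(bias_scores, fallacy_counts)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
# A positive r would reflect the trend the article reports: more
# biased sources tend to contain more logical fallacies per article.
```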
“There’s a real need for tools like Skeptik, ones that don’t tell people what to think but how to think about media content more critically,” Corman says. “We intentionally designed it not to say, ‘This statement is wrong,’ but rather, ‘Here’s why you should think about this statement more carefully.’”
A future of smarter reading
For the researchers, Skeptik is less about policing and more about cultivating intellectual curiosity.
“In an age of information overload, it’s easy to get cynical,” says Lei, who is now a postdoctoral scholar at the University of Waterloo in Ontario, Canada. “But if we can build tools that make critical reading more interactive, and even a little fun, we can restore some of that trust between journalists and audiences.”
The team envisions Skeptik as an open, evolving framework. Future versions could integrate visualizations to show how an article’s logical structure flows, or allow crowdsourced annotations where readers collaborate to refine fallacy detection. The researchers also hope to adapt the system for classrooms, where students can learn to analyze bias and rhetoric through hands-on engagement.
For Maciejewski, this project reflects the growing role of AI in addressing societal challenges.
“This kind of research captures the best of what the Fulton Schools stands for,” he says. “It’s technically sophisticated but also deeply human. We’re building technology that doesn’t just process information. It helps people make sense of it.”