ASU researcher shares insights on the role of social media in the Jan. 6 insurrection


U.S. Capitol building with police caution tape in front of it.

Candace Rondeaux lives 10 blocks from the U.S. Capitol in Washington, D.C. Days before the Jan. 6 insurrection, she remembers noticing an uptick in tourists sporting MAGA hats and pro-Trump signs. But even if she hadn’t been physically in the thick of it, she would have known something was afoot.

Rondeaux, a professor of practice at the School of Politics and Global Studies and a senior fellow with the Center on the Future of War at Arizona State University, is an active member of a community of digital forensic experts, and as early as a week before Jan. 6, 2021, there were rumblings among them that something was about to go down.

Rondeaux and the roughly 200-300 other U.S. researchers in the community had been closely observing the activity on right-wing sites like Parler, as well as more mainstream sites like Facebook and YouTube, and what they saw alarmed them.

“We were seeing signs online that something much more serious than everybody else was predicting might happen,” Rondeaux said.

While there was nothing she or the rest of the researchers could do to stop it, the information they were able to capture from content posted online about the participants and how they coordinated their plan to storm the Capitol was invaluable. Now Rondeaux directs Future Frontlines, a public intelligence service for next-generation security and democratic resilience, where she and her colleagues hope to use that data to answer questions that could help predict similar future events and possibly even curtail the arguably negative influence of social media sites on American democracy.

In an event hosted by the Center on the Future of War as part of a series of events featuring faculty from the ASU Online Master of Arts in Global Security at the School of Politics and Global Studies, Rondeaux shared more about her work and answered several questions from the audience, ranging from whose responsibility it is to regulate social media to the role of analysts like Rondeaux in working with law enforcement to preempt events like the Jan. 6 insurrection.

Editor’s note: The following Q&A portion of the event has been edited for length and clarity. You can watch the entire presentation and Q&A session on YouTube.

Candace Rondeaux

Question: Public safety and social norms were enforced by government regulation through the FCC for generations in the media world. Clearly, the FCC is not getting involved in something like this at the same level. So where does governance happen in the information space today, and are Amazon, Apple, Microsoft and Google now responsible for that?

Answer: I think governance is becoming much like customary law. When we think about international law, we have the law that’s on the books — so, we’ve got the Geneva Conventions, we’ve got all kinds of treaties. And then we think about how decisions are made by different judicial venues. In a lot of ways, that’s kind of where we are with governance of internet technologies and network communication technologies, where you’ve got different venues and some of the big stakeholders happen to dominate in those venues.

So a good example, obviously, is the big five, right? Amazon, Apple, Google and so forth. They not only have a huge influence on the governance of platforms, but they're also vertically integrated. And so they also have influence on the architecture — the literal physical architecture — and then the logic of how that architecture is managed to create information flows. So they're kind of unique. They're like big whales out there in the ocean of customary law, because they're so outsized and their influence runs from top to bottom throughout the entire network communication system.

But then you also have these other governance mechanisms, like the International Telecommunication Union, which is kind of the standard setter for network technologies and communications generally — which used to mean telegraph and telephone lines and that sort of thing but today, of course, also encompasses the internet. And that body, of course, is in this kind of tug of war between some of the big state players: Russia, China, the United States and a few others — India, Brazil. … Those are also very significant, because they do have a certain kind of prowess. And obviously the European Union is another factor here; they're setting standards. And so there are kind of multiple layers, I would say.

And so, when we're thinking about governance, we often talk about it in the context of content moderation, but that's kind of small ball. Really, content moderation is like the last resort, or like, not even the tip of the spear. I'd say it's like the way back end of the spear. Because it's really about rules of the road for the design of the architectural technologies that are part of a governance mechanism, as well as rules of the road for algorithms, and how they're deployed, and what they can collect and the ways in which they can be used to serve as a platform. So there’s lots of different moving parts in the ecosystem.

Q: Based on what you’ve found so far, is it your sense that Parler might have been perceived by certain groups as more trustworthy, or maybe even private, and so gave them a sense of freedom to express extreme views?

A: One of our current running theories is that — by the way, I should mention that Parler has sued Amazon in court. Initially it was in federal court, but now it's back in King County court in Washington state, where Amazon, of course, is headquartered. And that case is still going forward, so a lot of what I'm telling you here is coming from the argumentation in the brief that was filed back in January and has evolved since March.

So Parler argues that its business model is no different than Facebook's. That it was good at content management and that it did do a good job of warning the FBI, which, of course, it did; the company wrote letters saying, we're worried about what we're seeing in our content online. But I think one of our current theories of the case is that, because Parler took a really interesting route in terms of its business model — not directly selling its user data to advertisers — that then raises the question: Well, how were they making money? What was the revenue structure for this company? And the only thing that we can kind of surmise, or at least the thing that sort of raised our curiosity, is the fact that Parler was so easy to hack. It had such low security barriers. And I should say that all that information was publicly available, so it's not quite hacking. It's not like a breach in the illegal sense; it's more sort of what's publicly available out there. So were there design features that were intentional, where the so-called key to Parler's users was actually never inserted into the door lock, basically? That is to say, the whole model is that of course Parler doesn't have to sell its information directly to advertisers, because it's already freely available. So it is possible that an average consumer … is thinking, "Well, it probably has the same protections as Twitter and Facebook. When I sign up, it's got all these kinds of default protections." Well, it turns out it didn't really at all.
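To make the kind of weakness Rondeaux describes concrete: when a platform serves public posts from sequential, unauthenticated identifiers, anyone can enumerate its content with a few lines of code. The sketch below is purely illustrative; the endpoint, URL and response format are invented for this example and are not Parler's actual API.

# Illustrative only: enumerating world-readable posts behind sequential IDs.
# The endpoint below is hypothetical; it is not Parler's real API.
import requests

BASE_URL = "https://social-example.invalid/api/v1/posts/"  # hypothetical

def enumerate_public_posts(start_id, count):
    """Fetch posts by walking sequential IDs; note that no login,
    token or rate limit stands in the way."""
    posts = []
    for post_id in range(start_id, start_id + count):
        resp = requests.get(BASE_URL + str(post_id), timeout=10)
        if resp.status_code == 200:  # post exists and is world-readable
            posts.append(resp.json())
    return posts

A platform that wanted to close this door would use unguessable identifiers, require authentication for bulk access and rate-limit requests; the absence of those barriers is what the "key never inserted into the door lock" metaphor points to.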

Q: Are analysts like yourself involved in the efforts by law enforcement to preempt these types of events? Do you have insight into the types of methods law enforcement officers use to preempt or react to these events?

A: The answer is no. We don't do that. We are very, very keen to stay neutral. Of course, we are happy to talk to members of the government or law enforcement about what we're doing and give them a sense of our insights, as much as we would members of the public. We're very careful, though, about maintaining our neutrality and our objectivity. And we want to be careful about — when we talk about public intelligence, we're really talking about public interest intelligence. So we're not here to serve the purposes of a government that may or may not have a good idea of how to govern at any given time. And I mean that not just in the U.S. context, but in other contexts as well. For example, in conflict areas, you have to be really careful about how you interact. As to the work that law enforcement is doing, my perception is that it's not very advanced, and that they're probably in the same place as most researchers when it comes to thinking about all the different transects of information you would need to assemble the kind of data that could support some sort of predictive capability and then intervene. I don't see that evolving yet, although I know those conversations are going on. It's certainly not there at the federal level, and who knows what the local level looks like.

Q: Do you have thoughts regarding how the line might be drawn between freedom of speech and the ease with which mis- and disinformation can be spread with these social media platforms, and who enforces that line?

A: I actually think that question is a red herring. I think what we're learning from the whistleblower documentation on Facebook is that content moderation is clearly a very subjective exercise. I mean, it's people who are making decisions about what is bad and what is good, and developing systems of a sort for doing that and communicating that to each other internally and externally. If we're going to use Facebook as the example, because I think it's a prime one, you have people somewhere in California making lots of decisions about people and how they communicate somewhere in Myanmar or Bangladesh or Afghanistan or Nigeria. And those people in California don't speak Bengali. They don't speak Burmese. And so they're kind of subcontracting out their own content-moderation responsibilities to people whom they have no ability to check as far as whether or not the models and standards being applied in California can be applied in the context of Myanmar. And there are a lot of reasons why you can't do that.

We really pay attention to the platform architecture. It's really key. It's not about the content. Yes, the content is problematic. But toxicity gets rewarded in our current business model. We know this from Facebook and from all the leaks. And it happens to be true on every single one of these platforms that outrage engagement and performative behavior is what is rewarded. So Marshall McLuhan, of course, talked about this very famously (when he said) the medium is the message. And the medium here is a very performative media. You don't go online to talk about what you had for breakfast or to share your kitten photos because you're a very private person. You go online because you want to share that information in a way that shows you're part of a community, which is a performance. And violence is also very performative when it's political. And so, weirdly, there's a lot more work than I think these big tech companies and providers, and frankly, the Federal Communications Commission need to do to think about, OK, well, how do we attenuate the use of these algorithms and this reward system so that we're not rewarding violence, and we're not rewarding hate speech? I'm less concerned about what’s said and much more concerned about the architecture of how it is pushed around.

Q: This question is related to a series that we've created here through the MA in global security with professors like Candace: The ASU Center on the Future of War's Weaponized Narrative Initiative. It was one of the very first important voices in the discourse about the weaponization of the modern information sphere. In January, on the fifth anniversary of its launch, we hosted one of the initiative's co-founders, Professor Brad Allenby, for an event discussing the current state of the information domain and the role it plays in global security, domestic order and, indeed, the Western liberal democratic model in general, which you didn't discuss much, Candace. I know there's some concern about social order in the information domain and how Jan. 6 actually portends some trouble in that area. Do you have thoughts on that that you want to share?

A: I do. Thank you for asking that. I think it's a really important point. I don't want to over-focus on technology and what data tells us in this kind of myopic way. We should be really concerned about what we saw on Jan. 6 and what the data is telling us about what we saw. Because I really do think one of the impacts, of course, of de-platforming Parler, and also of this unresolved tension around how to govern social media platforms, is that it leaves us in a very vulnerable position in the United States, but also obviously in other countries of the world that are also trying to govern their populations. There's a pretty good chance that we're going to see more of these types of influence campaigns.

In fact, I would say one observation that's really key: Just after 2016, we were talking about Russian interference and the kinds of tools, techniques and tactics that were used by the Internet Research Agency and by the SVR and the GRU, two intelligence agencies in Russia. And a lot of us got a good schooling in all that and in how Russia did what it did. There has been some learning in some of these far-right communities from that experience, and it was directly applied in the 2020 elections. And there was plenty of evidence to show that there was a borrowing of tools, tactics and techniques. And that's what's frightening: When American citizens begin to turn these tools on their own people and then external actors like China and Russia decide they have a dog in the fight — then the amplification prospects become much more frightening.

Top photo courtesy of iStock/Getty Images
