There is a pandemic. State and federal authorities clash over the timing, policies and efforts to check its spread. Hot spots arise in cities and states with lesser restrictions. Casualties rise to levels so shocking they are culturally unacceptable.
Federal troops intervene, limiting interstate travel. Food insecurity arises in states that can’t grow their own food. Governors and state-first groups bristle.
Hit a little too close to home? That was the point of this scenario, lifted from a new report by Arizona State University’s Threatcasting Lab.
The lab’s mission is to imagine future problems and solutions so plans can be prepared for them. The latest report — on digital weapons of mass destabilization — was commissioned by the Defense Threat Reduction Agency, a Department of Defense agency tasked with countering weapons of mass destruction like chemical, biological, radiological and nuclear weapons.
However, those are all kinetic weapons. The threats covered in the report would emanate mostly from the internet and anything connected to it. The specter of a hacked electrical grid or dam has been discussed for years.
For example, in one scenario, industries and government increasingly adopt automated systems, like smart cities and the Internet of Things, to the point where they outpace security systems and analog fail-safes. Regulation and coordinated security fail. A reduction in grocery stores means more dependence on just-in-time resources and food delivery. An external hack of food storage and distribution then creates a sudden food shortage.
To come up with these scenarios, the lab assembles a wide range of participants from academia, the military and subject experts, as well as science fiction writers and other thinkers. The lab focuses on four areas: sponsored research for clients like the military or supply chain trade groups, reports, building an academic base, and classified work.
Lab director and futurist Brian David Johnson, a professor of practice at ASU's School for the Future of Innovation in Society, sat down with ASU News to discuss threatcasting and the lab's latest report.
Question: This report very much has a “ripped from today's headlines” flavor.
Answer: That's the whole point of threatcasting as a methodology, right? It not only looks at possible and potential threats, but it's an applied methodology, which is really the difference. It says not only, "OK, we want to look at the future of weapons of mass destruction," but we also say, "Who are we doing this for? And what are they going to do with it?" This was originally done for the Defense Threat Reduction Agency and the United States Air Force with the idea that it should be a kind of road map for engagement, because this is a whole-of-nation problem.
Q: So you're coming up with things that they wouldn't necessarily think of?
A: That's why they come. That's our role, right? As a large public university, we can physically get people together. This was before the pandemic. We can convene people from government, military, private industry, trade associations and academia, bringing in students. DTRA is really good at radiological WMDs, but they had no concept of this whole notion of a cyber-aided or digital WMD. And they were trying to figure out: Who would they need to partner with? What would that look like? They knew that it was much more complicated, but they were just scratching the surface of it.
Q: When it comes to cyberattacks, the usual picture is hackers taking out the electric company or the controls on a dam, not hitting civilian computers. And you look at situations like that.
A: Yes. Part of what we were trying to do is come up with a framework so that DTRA or anybody can approach this problem in a different way. Oftentimes when people talk about cyberattacks, they talk about a "digital 9/11" or a "digital Pearl Harbor." What we put forward is to say, well, if you're thinking about that, you're thinking about it wrong. You're thinking about it like a 20th-century attack, a kinetic attack where somebody dropped bombs. It's really different. And if we don't change how we're thinking about these possible attacks, there's a good chance that these things are already happening now with digital weapons of mass destruction. We're not there yet, but it can happen to us quickly if we're looking in the wrong place.
Q: We had the toilet paper scarcity. Now imagine somebody somehow reaching everyone and telling them there's no food.
A: That's one of the findings that we saw playing out, or beginning to play out, in 2020 and 2021: this notion that you could have a weapon of mass destabilization. The central point of the whole report is to say that killing lots of people or damaging the earth is not the goal. The goal of an adversary is to gain strategic advantage. And so what we put forward is to say: That means it's a weapon of mass destabilization. And now we all understand what a destabilizing event feels like. We've all been through one. Now imagine somebody being able to take that mass destabilization and unleash it through digital means, and what that could do. Imagine somebody being able to use information to do that.
Q: It was a few years ago that somebody in Hawaii mistakenly sent out an emergency alert that a North Korean missile was incoming.
A: That was a mistake. Somebody just had an accident, and those sorts of things happen all the time. We were really looking at the possible futures of somebody being able to weaponize something like that. … We've really begun to see what these digital weapons can do. We've seen it not only online, but also in the Western wildfires, where we had disinformation and misinformation attacks claiming that antifa started them. And you're starting to see it being used. Part of the idea is to then think about what happens when that scale gets quite large.
Q: In the 20th century, state actors largely tried to leave civilians out of it. We had precision-guided munitions in the Iraq War and so on. Now it's a complete shift: The population is the target.
A: Yeah, they're using the population. Some of the other reports we did on the future of information warfare look at reflexive control: getting your adversary to do something they wouldn't normally do. And you're watching it happen in the social structure. You know, one of the things that we said is that certain adversaries don't really care what you believe in. They don't really care if you're on the right or the left. If you're anti, if you're pro — they don't care. Part of what they're trying to do is just create chaos, to create that destabilization. And that destabilization is really the point, so that they can then gain an advantage.
Q: What were your solutions to deal with that?
A: No. 1 is to reframe it. This isn't a 20th-century kinetic fight; it's a 21st-century fight that lives primarily in the digital space, and its goal is destabilization. So with that in mind, who would (the government fighting against this) partner with? Given that there are many things they wouldn't be able to do, they will have to partner with the private sector. They're going to have to work with academia to figure out how to do so — that "whole of nation" way of thinking. Before, it was their job to guard against nuclear weapons of mass destruction. And this is a very different thing than that.
Q: Did you get any reaction to the report from DTRA folks after it was released?
A: They really liked it. The whole goal of threatcasting is to come up with a range of possible and potential threats that are quite wide — sometimes right on the edge of the impossible. But ultimately our output in this report is there to help decision-makers and leaders make decisions, whether that be policy decisions, investment, research decisions, partnership decisions. What the threat lab does when we do these types of reports is to not say we're predicting the future. That's not what we're trying to do. We're trying to create tools so that decision-makers can look out at an uncertain and shifting future and make the best-informed decisions possible.
Q: Twenty years ago, we never imagined a civilian airliner being hijacked and used as a weapon. It's your job to imagine something like that.
A: That's why we live in the dark spaces, and why we operate on the edge of the possible. We really need to get out as far as we can. We're using social science and technical research and cultural history and economics and trend data. Everything's founded in research, you know, academic research and industry research, but that's why we go out 10 years to really get to those edges, to find the new and the novel.