Anticipating how technology can be manipulated to cause harm


Photo illustration courtesy of Pexels.com

Rapid advancements in new technologies are often embraced by society because they improve efficiency, improve outcomes and make our lives easier. But with each evolution comes an equal and opposite consequence.

Hackers seeking a ransom shut down the systems used to track patient care at a hospital north of Detroit. An employee of a large corporation was tricked into paying out $25 million to scammers using deepfake technology to pose as the chief financial officer. Foreign actors infiltrated an internet-connected controller at a water system in Pennsylvania, forcing the municipality to take the system offline and manage it manually.

These are all real-world examples of how new technology can be manipulated or can simply go awry.

Arizona State University’s Nadya Bliss is heavily engaged in national efforts around technology research, design and development, often discussing ways to more deliberately anticipate potential harms and mitigate their consequences before they take root in society. 

She co-authored a white paper on this topic titled “Addressing the Unforeseen Harms of Technology” with colleagues from the Computing Community Consortium, or CCC, a national group of computing experts from academia, industry and government advancing computing research in societally responsible ways. 

Bliss became chair of the CCC on July 1, in addition to her role as executive director of ASU’s Global Security Initiative. She spoke to ASU News about the white paper, her role on the CCC and national security challenges.

Nadya Bliss. ASU photo

Question: New technologies have changed the world, both for better and for worse. Can you give me some examples of unforeseen harms of technology from the last decade or so? Should we have anticipated these? And what is the research community doing to get better at identifying these harms before they take root in society?

Answer: Some of the most consequential security challenges of today stem from new technologies that became broadly available at relatively little expense and have been manipulated by bad actors for harmful purposes. 

For example, connecting critical infrastructure to the internet was aimed at improving efficiency and security, but it ended up leaving pipelines, hospitals and electrical grids vulnerable to attacks like ransomware. Social media was developed as an online space for creating connections, but it did not take long for it to fundamentally alter how we consume information, and for people to figure out how to manipulate its algorithms to spread misleading or false information to further their agendas. And some people thought handing decisions over to automated tools driven by artificial intelligence would reduce bias, that machines would be truly neutral parties, when in fact those tools simply reflect the biases of their creators and have in practice often exacerbated inequity and unfairness.

I do think some of these should have been anticipated and, to be fair, some absolutely were. Unfortunately, not enough people listened.

To anticipate these potential harms, though, developers of new technologies need to stretch beyond their comfort zones and bring experts from other fields into the design and development process: psychology, anthropology, journalism, the humanities, political science and more. And as the CCC white paper notes, they also need to engage with the communities that may be affected by the technology. The research community is making some progress on both fronts, but more can and should be done.

Q: This white paper is from the Computing Community Consortium. You just assumed the role of chair of this national organization. What is it? Why is it important, and why is it producing reports like this?

A: I’ve been formally involved with the CCC for seven years now, first as a member, then as an executive committee member, then as vice chair and now as chair, and I have seen firsthand the impact and reach this organization has. It is a genuinely unique organization: it plays a leading role in shaping the direction of computing research, brings experts from different sectors together to envision what’s next in computing and how to get there, and often drives discussions about how to advance computing technology responsibly, ethically and with the goal of addressing pressing societal challenges. In many ways, it serves as a channel for the computing research community to voice ambitions, concerns and challenges at the intersection of computing research and national priorities.

This paper on addressing the unforeseen harms of technology is just one example. A number of years ago, the CCC worked with the Association for the Advancement of Artificial Intelligence to lead a national initiative to produce a roadmap for AI research, one that has informed and shaped a national research agenda. The CCC also regularly brings leading experts from computing and other disciplines together to respond to governmental requests for information (RFIs) on relevant computing research topics; we are currently working on one related to artificial intelligence for defense applications, for example. The CCC’s bread and butter, though, is its visioning efforts: facilitated discussions among experts aimed at identifying the next opportunities, priorities and challenges in computing, and at how we as a country, across the public, private and educational sectors, can achieve the outcomes we want and continue leading the world in computing research and innovation.

Q: The paper makes a distinction between unforeseen and unforeseeable consequences. How do you define each, and what is the difference between them?

A: This is an important distinction, and one the paper frames as "willfully unforeseen" impacts vs. "justifiably unforeseen" impacts. 

A willfully unforeseen impact is a harm that could have been considered at the design phase, given the necessary time, resources, thought or care, but was not, for any number of reasons: incentive structures, the race to be first to market, disciplinary blinders and so on.

By justifiably unforeseen, we mean those impacts that could not reasonably have been predicted, even with intensive up-front efforts to identify potential security flaws. The world is a complex, messy place that usually defies prediction, and it would be unreasonable to expect any group or organization to predict every possible way people could manipulate a new technology. But we should be honest with ourselves and recognize when a harm could have been foreseen and mitigated with a little more effort.

Q: As you look into your crystal ball and try to predict the world a decade from now, what are some potential security issues or negative consequences from technology that will be prevalent then and that we should be discussing more now?

A: Well, there is no such thing as a crystal ball. But we absolutely can make reasonable assessments of emerging technology anchored in experience.  

First is the need for greater understanding of and education around generative AI. Even experts often have trouble explaining how and why these systems work, yet they are being rapidly deployed across application domains. There are awesome benefits to this technology, but as it becomes part of our everyday toolkits, we need to consider both its limitations and its potential vulnerabilities.

Second, as we make progress in quantum computing, we need to keep in mind its impact on encryption technology; at a certain point, modern-day security practices could become obsolete. Continued research into post-quantum encryption is critical to avoid the worst-case scenario here.
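To make the encryption concern concrete, the following toy Python sketch of textbook RSA (illustrative only, and not drawn from the white paper) shows why today's public-key encryption rests on the hardness of factoring, the very problem Shor's algorithm would let a large quantum computer solve efficiently:

```python
# A minimal, purely illustrative sketch: textbook RSA with toy numbers.
# This is NOT real cryptography; it only shows that public-key encryption
# rests on the difficulty of factoring n, the problem Shor's algorithm
# would let a large quantum computer solve efficiently.

from math import gcd

# Two tiny primes stand in for the enormous primes used in practice.
p, q = 61, 53
n = p * q                 # public modulus (3233)
phi = (p - 1) * (q - 1)   # Euler's totient of n, kept secret

e = 17                    # public exponent, must be coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)       # private exponent: modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
assert recovered == message

# Anyone who can factor n back into p and q can recompute phi and d,
# breaking the scheme. Classical computers cannot factor 2048-bit moduli
# in any reasonable time; a large fault-tolerant quantum computer running
# Shor's algorithm could, which is why post-quantum schemes (for example,
# lattice-based ones) are under active development and standardization.
```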

And third is climate change, an area where computing research can play both an exacerbating and a mitigating role. For example, the large language models that power generative AI consume a lot of energy, yet at the same time they could conceivably help identify more efficient energy practices in sectors like agriculture or transportation.

Ultimately, though, with any new invention there is always a risk of nefarious use. People are the key to mitigating these unforeseen impacts, and I do not mean only researchers. This needs to be a whole-of-society effort that includes public education campaigns, explanatory journalism and the development of critical thinking skills. These are all things that both ASU and the CCC are exceptional at: bringing diverse groups of people together to look at a problem from different angles and work toward a solution that is safe, secure, inclusive and broadly beneficial.

Q: In addition to your leadership of the CCC, you are on a slew of other national committees and boards, all focused on technology and security. Why is this an important topic to you, and why is it important for ASU to have a voice in these national conversations?

A: Technology is advancing at a rapid pace and is incredibly intertwined with every aspect of our lives. Yet security often remains a secondary consideration behind capabilities, despite all the examples we have of unforeseen harms stemming at least partly from new technologies. 

This is to me one of the most consequential debates taking place in our society today — the ethos of "move fast and break things" vs. "let’s take a beat and think about what exactly we might be breaking and what kind of damage that would cause."

We need to prioritize security alongside capabilities, and that goal is really at the core of my national service commitments — whether we are discussing the future of computing research, impacts of climate change on national security, new research directions for the Department of Defense or how to incentivize more inclusivity in national security research.
