
When it comes to human and AI interfaces, ASU professor says ethical questions abound


July 23, 2019

Since Neuralink’s public launch in 2017, there’s been much speculation about the focus of the company’s research. Its press event on July 17 was the first time we’ve had a glimpse of what the research team has been working on. 

Neuralink has been developing brain-computer interface technology that uses flexible threads inserted into the brain by an innovative robotic system. The concept has potential applications for people with physical or neurological conditions, and the implantation could perhaps one day be performed via less invasive laser surgery, much as LASIK surgery is now.

While the process has been tested only in lab animals, Elon Musk and the company’s investors hope human applications are next. ASU Now sat down with Andrew Maynard, director of the Risk Innovation Lab and professor with the School for the Future of Innovation in Society, to discuss the announcement and what the future holds for emerging neurotechnology like Neuralink’s.

Andrew Maynard

Q: Was Elon Musk’s Neuralink announcement an exciting or a daunting development for emerging tech?

A: In founding Neuralink, Elon Musk was inspired by science fiction writers like Iain M. Banks and the technologies they speculated about, which enabled complete integration between the brain and powerful cybersystems. The company set out to use cutting-edge technology to accelerate the development of such devices, in part by bringing new talent and ideas to the field of brain-computer interfaces. It’s quite telling that, in its recruitment drives, Neuralink boldly stated, “No neuroscience experience is required.”

Last week’s announcement was the first chance most of us have had to see what this disruptive tech approach has been leading to. And technically, the results are impressive. Neuralink has taken cutting-edge research in materials science, robotics and neurosurgery, and applied the Musk “secret sauce” to advance what is possible far faster than conventional approaches probably could have.

The result is a system for implanting a large number of ultrafine electrodes into the brain, under robotic control, to read and write neurological activity at an unprecedented resolution. So far, the technology is only being used on animals — it will be some time before it’s approved for human trials. But the intent is for this to be a first step toward transforming how we create deep interfaces between the brain and machines, including artificial intelligence.

If it succeeds, the technology could lead to massive strides in how we understand and address neurological diseases — this is currently one of the main thrusts of Neuralink’s research. But it also opens the door to wider uses, including two-way connections between people’s brains and the internet. With these capabilities come substantial ethical and social concerns that need to be addressed well ahead of time if the technology is to be developed and used responsibly.

Q: What is “neuroethics” and why should we care about it?

A: Neuroethics deals with the ethical issues that increasingly sophisticated brain technologies raise. Like other areas of science and technology ethics, it sets out to understand and navigate the tension between what we can do as science and technology advance, and what we should do, and perhaps should not do.

Part of the challenge is that emerging neural capabilities such as Neuralink’s interface are extending the boundaries of what is possible so fast that we have no easy answers to the question of how far is too far, or even who should decide such things. This is made trickier by the tremendous health benefits that could come out of responsible innovation in this field. With well-designed interfaces, it may be possible to counter the effects of debilitating brain disorders like Parkinson’s disease and epilepsy. Yet the chances of encountering unintended consequences as the technology is developed are extremely high — especially since the more we discover about how the brain works, the more we realize how little we understand.

Neuroethics extends well beyond the “do good” and “do no harm” we expect of medical science, though, encompassing how we might use neurological technologies in other areas. For instance, how do we decide what the appropriate boundaries are for developing brain-computer interfaces that connect us to our smartphones, enable a new level of online gaming, or even enhance our intelligence? We also need to think seriously about potential issues around addictive behavior, or even “brain-jacking,” with future iterations of the technology. These are questions that have deep social as well as health-related ramifications. And like many ethical questions, there are no clear-cut answers.

If the technologies being developed by Neuralink and other companies reach their hoped-for potential, there are likely to be profound implications for how we live our lives. These include how “indentured” we become to whoever owns and controls the tech, and how secure the technology is. But they also touch on the divide between who has access to technologies that could give them a significant advantage in life, and who does not.

It’s this potential for brain technologies to affect each and every one of us in the future that makes their responsible and ethical development something all of us should care about. At the end of the day, if we want to see the benefits of the technology without having to fix the consequences of messy mistakes, we all have a responsibility to be part of the conversation around what is ethical, and what is not.

Q: How can we make sure that ethics are part of the development process for new and emerging technologies?

A: Technology ethics is a hot topic these days — especially as artificial intelligence and associated technologies become increasingly powerful. As a result, an increasing number of companies are realizing that, in order to succeed, they need to take ethical and responsible development seriously. And yet this is challenging — most entrepreneurs and innovators aren’t taught how to practically integrate ethics and responsible innovation into what they do.

This is an area we’re working on with startups and others in the ASU Risk Innovation Accelerator. Recognizing that innovators are stuck between a rock and a hard place as they balance what they aspire to with the realities of keeping a business afloat, we’re developing tools and approaches that help them navigate an increasingly complex landscape of hidden and blindsiding risks — what we refer to as “orphan risks.”

However, there are many more ways of helping ensure that ethical and responsible approaches are integrated into new technologies, including those that companies like Neuralink are developing. These include listening to and collaborating with experts who understand the interplay between technology and society — including those in places like the ASU School for the Future of Innovation in Society — and developing an awareness of and responsiveness to social aspirations and concerns. But they also require tech companies to proactively work with organizations such as the World Economic Forum and others that are developing new approaches to responsible, beneficial and profitable innovation.

Beyond what companies can and should be doing, though, we all have a responsibility to come up to speed on emerging technologies and how they might impact our lives, so that we can help ensure they benefit society rather than harm it. And one place to start is to read up on emerging trends and the challenges and opportunities they present — a topic that my recent book "Films from the Future" focuses heavily on.

Q: Isn’t it risky to have human brains interfacing with computers directly?

A: Anything involving inserting probes into the human brain comes with risks, and those risks only increase as we develop the ability to control as well as read how neurons are operating. Because Neuralink is pushing us into such new territory, it’s not yet clear what the nature and magnitude of these risks are, which is all the more reason to proceed with caution.

To get a sense of what we might expect, though, consider the neurologist Phil Kennedy, who in 2014 had a neural probe of his own design implanted in his own brain. Things didn’t go that well: he temporarily lost the ability to speak.

Kennedy’s experiments on himself were groundbreaking, but they also showed how easy it is to adversely interfere with the brain when we start to push and prod it. When it comes to two-way interfaces with computers, though, the potential risks escalate. This is where we enter deeply uncharted territory, as it’s not clear what happens when our brains come under the influence of machines and apps, or how these capabilities might disrupt social norms and behaviors. As my colleague Katina Michael explored in a recent TEDxASU talk, the social and personal risks associated with brain implants go far beyond their potential health impacts.

Q: Do advancements like Neuralink and other AI-based technologies blur the line between science fiction and science fact?

A: They absolutely do. Of course, we have to be careful that we don’t get caught up in the alternative reality and hype that so often seems to accompany these technologies — many AI-based technologies are more limited in what they can achieve than it sometimes seems, and we’re certainly not heading for an AI Armageddon anytime soon. And yet what we’re now beginning to achieve at the intersection of machine learning, neuroscience, robotics and other areas is truly mind-blowing — to the extent that a mere 5 to 10 years ago it would have seemed like science fiction.

Interestingly, this is leading to quite novel connections between science reality and science fiction, as writers are inspired by emerging technologies, and technologists are in turn inspired by science fiction writers — just as Elon Musk drew on Iain M. Banks’ “neural lace” in founding Neuralink. We seem to be entering a creative feedback loop where it’s becoming ever easier to translate the creativity of science fiction into science fact.

Of course, if we do this without thinking about the consequences, it’s a recipe for disaster. However, as I explore in the book "Films from the Future," because science fiction is often about the relationship between technology and people, we can learn a surprising amount about developing new technologies ethically and responsibly by exploring sci-fi movies — including the types of technology being developed by Neuralink. As long, that is, as we’re guided by experts who are adept at navigating the fine line between reality and fiction.
