
New ASU arts curriculum will teach philosophy of artificial intelligence
July 23, 2019

ASU program will combine creation of technology with literature, sociology

Editor's note: This story is being highlighted in ASU Now's year in review. Read more top stories from 2019.

Artificial intelligence algorithms have become pervasive in daily life, but should they be? And what are the drawbacks and advantages of using machine learning?

Several Arizona State University faculty members have won a grant from the National Endowment for the Humanities to create a new curriculum that will challenge students to think about these complex issues while they’re learning how to create the technology.

The grant is funding a yearlong process for the School of Arts, Media and Engineering to create the new program, which will be a concentration within the existing Bachelor of Arts in digital culture. The school is housed in both the Herberger Institute for Design and the Arts and the Ira A. Fulton Schools of Engineering.

Suren Jayasuriya, an assistant professor with a joint appointment in the School of Arts, Media and Engineering and the School of Electrical, Computer and Energy Engineering, is the project director for the grant and said the program will be integrative.

“You could go to a history or philosophy department or English lit department and learn about artificial intelligence. Or you could go to a computer science department and learn how to build AI,” he said.

“But this program is unique in trying to bring it together.”

Jayasuriya works in computer vision, the application of artificial intelligence to visual media.

“It’s things like how computers recognize objects and images, how they analyze images to understand object shapes, location and what an object’s utility is,” he said.

“It’s being used for self-driving car technology.”
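
As a rough illustration of what that kind of object recognition involves, here is a minimal sketch using a pretrained image classifier. It is hypothetical and not material from the ASU program; the torchvision model and the "photo.jpg" file name are assumptions for the example.

```python
# Hypothetical sketch: label the main object in an image with a pretrained
# classifier. Assumes torch/torchvision are installed and a local file named
# photo.jpg exists; this is not part of the ASU curriculum.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT            # pretrained ImageNet weights
model = models.resnet18(weights=weights).eval()

preprocess = weights.transforms()                     # resize/normalize for the model
image = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = torch.softmax(model(image), dim=1)

top_prob, top_idx = probs.max(dim=1)
labels = weights.meta["categories"]                   # human-readable class names
print(f"Predicted: {labels[top_idx.item()]} ({top_prob.item():.1%})")
```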

Jayasuriya also has an interest in philosophy and literature and co-taught the “Prototyping Dreams” course with Ed Finn, associate professor in the School of Arts, Media and Engineering and the Department of English and founding director of the Center for Science and the Imagination.

“So we came up with this idea of developing a curriculum for the AME program that meets this dual need of both teaching students about the underlying technology, like what is and isn’t possible, but also the social-cultural knowledge behind AI,” Jayasuriya said.

In the grant proposal, the reading list for the courses includes classics such as Isaac Asimov’s “I, Robot” and content that builds on ASU’s Frankenstein Bicentennial Project to highlight creativity and responsibility. Besides Jayasuriya and Finn, the other faculty involved in creating the new program are Pavan Turaga, associate professor in the schools of Arts, Media and Engineering and Electrical, Computer and Energy Engineering and director of the Geometric Media Lab, and Xin Wei Sha, professor and director of the School of Arts, Media and Engineering and director of the Synthesis Center.

Jayasuriya answered some questions from ASU Now about creating the new program.

Question: In the grant proposal to the National Endowment for the Humanities, the team mentions the “anxiety and fear” around artificial intelligence, and specifically cites the “Terminator” movies. Do you hope to address these negative perceptions?

Answer: There are a couple of tropes that get amplified.

I’m not saying that it’s positive or negative, but we noticed this storytelling and we wanted to develop a curriculum that gives voices to some of those stories but also other stories.

Basically for the National Endowment for the Humanities proposal, part of my goal was, if you want to have humanities students for the 21st century who are going to deal with these technologies in their society and their workplaces and their lives, how do you effectively train them for this emerging domain?

Q: How much technology will the students learn?

A: We want to introduce them to some basic technology so they can creatively think and design in AI spaces, but it’s not necessarily the goal for them to build a state-of-the-art AI system, although we wouldn’t discourage that.

Each course will be designed to bring them up to speed on programming, data handling and other things you would need to deal with AI tech, but the focus is to give them a broader education.

Q: So the graduates won’t be the people who are doing the programming but will be the people working with the programmers?

A: That’s one option. Or digital journalism or marketing. Even human resources will be affected by AI technologies. A lot of HR companies are using AI systems to help with recruitment and intake and you will need knowledge of how that system is working and of its hidden biases.

Q: What kind of biases?

A: It’s interesting to talk about how AI reflects social inequities.

It can be pernicious. AI systems learn from large data sets, and those data sets reflect the social inequities in society, so the networks implicitly learn these biases. Think of AI systems for determining insurance rates.

So “Terminator” and robots are the stuff of popular narratives, but some of these other narratives are less well known and affect the world just as much.
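
As a toy illustration of the mechanism Jayasuriya describes (all numbers below are invented; this is not any real insurer's data or the program's course material), a simple model fit to "historical" rates that were partly driven by group membership will faithfully learn to charge by group.

```python
# Toy, made-up illustration of inherited bias: the training data encode a
# group-based surcharge, so the fitted model learns to reproduce it.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
risk = rng.normal(size=n)                    # a "legitimate" risk factor
group = rng.integers(0, 2, size=n)           # 0/1 group membership
# Historical rates: driven by risk, but group 1 was also charged ~15 more.
rate = 100 + 20 * risk + 15 * group + rng.normal(scale=5, size=n)

model = LinearRegression().fit(np.column_stack([risk, group]), rate)
print("risk coefficient:", round(model.coef_[0], 1))    # ~20, as expected
print("group coefficient:", round(model.coef_[1], 1))   # ~15, the inherited bias
```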

Q: What’s the Prototyping Dreams course?

A: Prototyping Dreams is a required four-week module in our digital culture program.

Ed Finn and I co-taught it, focusing on prototype development to help with storytelling. One module we did was on minds and machines, and we had the students build a working chatbot in the Python programming language, starting from scratch.

We read works from Descartes and John Searle, and we read about the Turing Test and something called the Chinese Room Argument.

We had students reflect on AI in society and at the end we did a gallery exhibition at the Tempe Center for the Arts and people came and got to interact with the students’ chatbots.

So we’re going to build that out into a full course extending beyond chatbots to visual media and other kinds of algorithms.
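
For readers wondering what "from scratch" can mean at this level, a minimal rule-based chatbot in Python might look something like the sketch below. It is hypothetical, not the students' actual Prototyping Dreams code.

```python
# A minimal rule-based chatbot sketch in plain Python, in the spirit of a
# "from scratch" classroom exercise. Hypothetical; not the actual assignment.
import random

RULES = {
    "hello": ["Hi there!", "Hello! What's on your mind?"],
    "robot": ["Do you think machines can understand, or only simulate understanding?"],
    "dream": ["Tell me more about that dream."],
}
DEFAULT = ["Interesting. Go on.", "Why do you say that?"]

def respond(message: str) -> str:
    """Return a canned reply whose keyword appears in the message."""
    text = message.lower()
    for keyword, replies in RULES.items():
        if keyword in text:
            return random.choice(replies)
    return random.choice(DEFAULT)

if __name__ == "__main__":
    print("Chatbot ready. Type 'quit' to exit.")
    while True:
        user = input("> ")
        if user.strip().lower() == "quit":
            break
        print(respond(user))
```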

Q: What are faculty in the School of Arts, Media and Engineering interested in?

A: They’re interested in new technologies and how they can be used for media arts and sciences. That takes various forms: visual media such as images, projections, video feedback, and virtual and augmented reality systems.

It could be audio. A lot of our faculty build new types of musical instruments and musical audio interfaces.

I’ve taught courses on computational cameras — rethinking what a camera is. That means building new types of cameras that can see underwater, around corners, or through fog or smoke, seeing things that are not generally visible, by using new types of optics, signal processing and AI.

I’ve taught Understanding Activity, a course designed around creating experiential media systems that interact with and give feedback to a user through audio, visuals and touch.

What holds AME together is the use of technology in the media arts and sciences, and the reasons why you would do that. We think about creative practice, design skills, philosophy and the stories we tell using that technology.

Top image by Pixabay

Mary Beth Faller

Reporter, ASU News

480-727-4503

 

When it comes to human and AI interfaces, ASU professor says ethical questions abound

July 23, 2019

Since Neuralink’s public launch in 2017, there’s been much speculation about the focus of the company’s research. Its press event on July 17 was the first time we’ve had a glimpse of what the research team has been working on. 

Neuralink has been developing brain-computer interface technology that uses flexible threads inserted into the brain by an innovative robotic system. The concept has potential applications for people with physical or neurological conditions, and the procedure might one day be performed via less invasive laser surgery, much as LASIK surgery is now.

While the process has been tested only in lab animals, Elon Musk and his investors are hoping human applications are next. ASU Now sat down with Andrew Maynard, director of the Risk Innovation Lab and professor in the School for the Future of Innovation in Society, to discuss the announcement and what the future holds for emerging neurotechnology like Neuralink’s.


Question: Was Elon Musk’s Neuralink announcement an exciting or a daunting development for emerging tech?

Answer: In founding Neuralink, Elon Musk was inspired by science fiction writers like Iain M. Banks and the technologies they speculated about that enabled complete integration between the brain and powerful cybersystems. The company set out to use cutting-edge technology to accelerate the development of such devices, in part by bringing new talent and ideas to the field of brain-computer interfaces. It’s quite telling that, in its recruitment drives, Neuralink boldly stated, “No neuroscience experience is required.”

Tuesday’s announcement was the first chance most of us have had to see what this disruptive tech approach has been leading to. And technically, the results are impressive. Neuralink has taken cutting-edge research in materials science, robotics and neurosurgery and brought the Musk “secret sauce” to it to advance what is possible far faster than conventional approaches probably could have.

The result is a system for implanting a large number of ultrafine electrodes into the brain, under robotic control, to read and write neurological activity at an unprecedented resolution. So far, the technology is only being used on animals — it will be some time before it’s approved for human trials. But the intent is for this to be a first step toward transforming how we create deep interfaces between the brain and machines, including artificial intelligence.

If it succeeds, the technology could lead to massive strides in how we understand and address neurological diseases — this is currently one of the main thrusts of Neuralink’s research. But it also opens the door to wider uses, including two-way connections between people’s brains and the internet. With these capabilities come substantial ethical and social concerns that need to be addressed well ahead of time if the technology is to be developed and used responsibly.

Q: What is “neuroethics” and why should we care about it?

A: Neuroethics deals with the ethical issues that increasingly sophisticated brain technologies raise. Like other areas of science and technology ethics, it sets out to understand and navigate the tension between what we can do as science and technology advance, and what we should do — and perhaps, should not do.

Part of the challenge is that emerging neural capabilities such as Neuralink’s interface are extending the boundaries of what is possible so fast that we have no easy answers to the question of how far is too far, or even who should decide such things. This is made trickier by the tremendous health benefits that could come out of responsible innovation in this field. With well-designed interfaces, it may be possible to counter the effects of debilitating brain disorders like Parkinson’s disease, epilepsy and others. Yet the chances of encountering unintended consequences as the technology is developed are extremely high — especially as the more we discover about how the brain works, the more we realize how little we understand.

Neuroethics extends well beyond the “do good” and “do no harm” we expect of medical science, though, and encompasses how we might use neurological technologies in other areas. For instance, how do we decide what the appropriate boundaries are for developing brain-computer interfaces that connect us to our smartphone, or enable users to engage in a new level of online gaming, or even to enhance their intelligence? We also need to think seriously about potential issues around addictive behavior, or even “brain-jacking” with future iterations of the technology. These are questions that have deep social as well as health-related ramifications. And like many ethical questions, there are no clear-cut answers.

If the technologies being developed by Neuralink and other companies reach their hoped-for potential, there are likely to be profound implications for how we live our lives. These include how “indentured” we become to whoever owns and controls the tech, and how secure the technology is. But they also touch on the divide between who has access to technologies that could give them a significant advantage in life, and who does not.

It’s this ability for brain technologies to affect each and every one of us in the future that makes their responsible and ethical development worth caring about by all of us. At the end of the day, if we want to see the benefits of the technology without having to fix the consequences of messy mistakes, we all have a responsibility to be part of the conversation around what is ethical, and what is not.  

Q: How can we make sure that ethics are part of the development process for new and emerging technologies?

A: Technology ethics is a hot topic these days — especially as artificial intelligence and associated technologies become increasingly powerful. As a result, an increasing number of companies are realizing that, in order to succeed, they need to take ethical and responsible development seriously. And yet this is challenging — most entrepreneurs and innovators aren’t taught how to practically integrate ethics and responsible innovation into what they do.

This is an area we’re working on with startups and others in the ASU Risk Innovation Accelerator. Recognizing that innovators are stuck between a rock and a hard place as they balance what they aspire to with the realities of keeping a business afloat, we’re developing tools and approaches that help them navigate an increasingly complex landscape of hidden and blindsiding risks — what we refer to as “orphan risks.”

However, there are many more ways of helping ensure ethical and responsible approaches to innovation are integrated into new technologies, including the technologies that companies like Neuralink are developing. These include listening to and collaborating with experts who understand the interplay between technology and society — including those in places like the ASU School for the Future of Innovation in Society — and by developing an awareness of and responsiveness to social aspirations and concerns. But they also require tech companies to proactively work with organizations such as the World Economic Forum and others that are developing new approaches to responsible, beneficial and profitable innovation.

Beyond what companies can and should be doing though, we all have a responsibility to come up to speed on emerging technologies and how they might impact our lives, so that we can be a part of ensuring they benefit society rather than cause more harm than good. And one place to start is to read up on emerging trends and the challenges and opportunities they present — a topic that my recent book "Films from the Future" focused heavily on.

Q: Isn’t it risky to have human brains interfacing with computers directly?

A: Anything involving inserting probes into the human brain comes with risks, and those risks only increase as we develop the ability to control as well as read how neurons are operating. Because Neuralink is pushing us into such new territory, it’s not yet clear what the nature and magnitude of these risks are, which is all the more reason to proceed with caution.

To get a sense of what we might expect, though, in 2014 the neurologist Phil Kennedy experimented on himself with a neural probe of his own design. Things didn’t go that well, as he temporarily lost the ability to speak.

Kennedy’s experiments on himself were groundbreaking, but they also showed how easy it is to adversely interfere with our brain when we start to push and prod it. When it comes to two-way interfaces with computers though, the potential risks escalate. This is where we enter deeply uncharted territory, as it’s not clear what happens when our brains come under the influence of machines and apps, or how these capabilities might disrupt social norms and behaviors. As my colleague Katina Michael explored in a recent TEDxASU talk, the social and personal risks associated with brain implants go far beyond their potential health impacts.

Q: Do advancements like Neuralink and other AI-based technologies blur the line between science fiction and science fact?

A: They absolutely do. Of course, we have to be careful that we don’t get caught up in the alternative reality and hype that so often seems to accompany these technologies — many AI-based technologies are more limited in what they can achieve than it sometimes seems, and we’re certainly not heading for an AI Armageddon anytime soon. And yet, what we’re now beginning to achieve at the intersection between machine learning, neuroscience, robotics, and other areas, is truly mind-blowing — to the extent that a mere 5 to 10 years ago, it would have seemed like science fiction.

Interestingly, this is leading to quite novel connections between science reality and science fiction, as writers are inspired by emerging technologies, and technologists are in turn inspired by science fiction writers — just as Elon Musk was inspired by Iain M. Banks and his “neural lace” when founding Neuralink. We seem to be entering a creative feedback loop where it’s becoming ever easier to translate the creativity of science fiction into science fact.

Of course, if we do this without thinking about the consequences, it’s a recipe for disaster. However, as I explore in the book "Films from the Future," because science fiction is often about the relationship between technology and people, we can learn a surprising amount about developing new technologies ethically and responsibly by exploring sci-fi movies — including the types of technology being developed by Neuralink. As long, that is, as we’re guided along by experts who are adept at navigating the thin line between reality and fiction.

Executive Director, Marketing and Communications, ASU Law

480-727-6193