Why seeing robots in pop culture is important


What was the first robot you ever encountered? (Or maybe “who” is more apt, if less technically accurate – more on that later.) If you’re a boomer, it might have been the Jetsons’ helpful if obsolete maid, Rosie. If you’re a millennial, maybe it was the decidedly more terrifying red-eyed Terminator.

Whatever (or whomever) it was, you most likely encountered it in popular culture.

“There's something about robots that just tickles the childlike wonder in us. Something about encountering this thing that seems like it has agency but is in reality a machine,” said Lance Gharavi, an associate professor in the School of Music, Dance and Theatre at ASU’s Herberger Institute for Design and the Arts.

Gharavi, whose work focuses on the intersections of art and science, is currently at work on two projects as an affiliate faculty member of the Center for Human, Artificial Intelligence, and Robot Teaming. One, titled “Robotopolis,” is essentially a test bed for running experiments with autonomous vehicles, while the other involves teaming robots up with humans to perform tasks. Both have an element of performance, something Gharavi believes is inherent to apparently intelligent machines.

“Robots are theater,” he said.

In fact, the word “robot” was coined not in a lab or an engineering facility, but by the Czech writer Karel Čapek in his 1920 science fiction play “R.U.R.” (short for “Rossumovi Univerzální Roboti” or, in English, “Rossum's Universal Robots”).

The idea of a machine that performs work was nothing new then – the history of automatons stretches back to the ancient Greeks – and stories like “Frankenstein” and that of the Jewish golem, which attribute sentience to creatures created by humans, already populated humankind’s mythological canon. Still, Čapek’s “R.U.R.” is often credited as one of the first stories in modern consciousness to imagine machines as humanlike, and thereby to begin grappling with some of the more complex questions surrounding the emerging technology that we’re familiar with today.

“It is said that the function of art is to hold a mirror up to nature,” Gharavi said. “Robots sort of serve as a kind of mirror for us, almost like a fun house mirror, because they don't mirror us exactly. But they do throw into relief the things that make us human.”

Stories about robots, said Ed Finn, founding director of ASU’s Center for Science and the Imagination, tap into “our anxieties about what it means to be intelligent, what it means to be a human, what it means to be a worker, what it means to be a master and a slave … what it means to other. They are ways of creating an artificial face in order to confront our own ideas about who we are: our own ideas about personhood.”

(Perhaps tellingly, “R.U.R.” concludes by indulging humankind’s now widely held fear of a robot rebellion that results in our extinction.)

"I really like Wall-E," Finn said. "I like robots that don't try to be human and that create their own ideas of personhood."

Finn also serves as the academic director of Future Tense, a partnership between ASU, New America and Slate Magazine that frequently publishes sci-fi stories with titles like “The State Machine,” which imagines a future where the government is run entirely by – you guessed it – machines.

Since Čapek’s “R.U.R.,” humanlike robots have proliferated in popular culture, from the sexualized “Maria” of Fritz Lang’s seminal “Metropolis” to the insidiously charming “Sonny” of “I, Robot” to the wisecracking, cigar-smoking “Bender” of “Futurama.”

“It’s important to have stories that explore the relationship between scientific creativity and responsibility,” Finn said. And there are a few stories about robots that we tend to tell over and over again.

There’s the story of the killer robot (“The Terminator,” “Ex Machina,” “I, Robot”), in which humans are always opening Pandora’s box and finding themselves unprepared for what comes out. There’s the story of the robot as girlfriend (“Her,” “Ex Machina” again), in which humans address the fear that robots will become indistinguishable from us. And then there’s the “God story.”

“In the God story, we create these super intelligent beings that are so much more advanced than we are that they effectively become omniscient and omnipotent, and we end up replacing our old gods with new gods that we've created,” Finn said. “I think we actually need to be telling new, more grounded and realistic stories about the near future and AI.”

Certainly, as robots become increasingly intelligent, there’s no shortage of concerns to explore: issues of privacy, access, trust, influence and authenticity are all on the table.

“I worry that in many realms of our progress right now, our technical reach extends beyond our ethical grasp,” Finn said.

For evidence of that, we need look no further than the phones in our pockets, which literally track our every move, and the various apps and social media platforms they play host to, which are practically sprinting toward the point when they will be able to pull off the staggeringly impressive feat of accurately assessing our moods and predicting our behaviors.

Katina Michael, a professor in both ASU’s School for the Future of Innovation in Society and School of Computing, Informatics and Decision Systems Engineering, calls it “uberveillance”: “the purported ability to know the ‘who,’ ‘where’ and ‘what’ condition someone or something is in.”

“One cannot pass by the Arthur C. Clarke classic, ‘2001: A Space Odyssey,’” Michael said. “HAL 9000 says, ‘I'm sorry, Dave. I'm afraid I can't do that.’ It is the ‘override’ moment that we can learn from critically on the future perils of technologies with potential unintended consequences.”

After all, when Apple’s Siri and Amazon’s Alexa are listening to us all day, they probably get a pretty good idea of what we’re all about. But both Michael and Finn caution that it’s important to manage our expectations of what emerging technologies are capable of.

“I love all the early ads for Siri where she was having these really lifelike conversations with celebrities like Samuel L. Jackson,” Finn said. “But if you've ever tried to have a conversation with Siri, you know it doesn't go that well. … If you treat Siri like a person, you're missing the things that Siri is actually capable of doing.”

Humans are now at a point where the biological is merging with the technical, and Michael, whose research and writing have looked at the potential of implantable devices for medical and nonmedical applications, believes that the biggest ethical questions and concerns regarding emerging technologies today have to do with the promise of technologies that will alleviate social injustices.

“To that end,” she said, “the techno-myth that promises to end suffering — through robotic assistive tech — or to end pain, in the case of robotic implant devices that stimulate parts of the body and brain, or to offer solutions that are touted as a panacea, for example, hiring a robot to look after the autistic or the elderly for care” also brings up “questions related to human rights, questions related to responsibility and accountability and the ethics of care. Building up artificial intelligence as being something that it is not, is perilous to people in need, creating false hopes, when a vast majority of solutions are not approved by health insurance providers and are unaffordable.”

Expanding further on that thought, Michael added, “We want to build brain computer interfaces that are complex, yet the majority of the world’s disabled persons who are missing a limb or are unnecessarily turning blind (suffer from) a lack of resources and do not have basic prosthetics or operative procedures toward prevention. The inequality question needs to be broached.”

The fact that humans are so trusting of intelligent technology as to be willing to implant it in our bodies, let our Roombas run amok while we’re not home and believe utterly what our Facebook feed is telling us speaks to how much we take it for granted. And when we do that, we run the risk of allowing ourselves to be detrimentally influenced by it.

“We outsource so much of our cognition and our memory to these systems already, and we don't often pause to think about what we're paying for the services that we're getting,” Finn said. “When you think about Google or Apple or Amazon or Facebook, these platforms provide all of these incredible tools, but they're not doing that as a public service. They're doing that as part of an economy where we are the products that they're selling to other people.”

Michael also has an affinity for the Daleks of “Doctor Who,” a fictional extraterrestrial race of mutants who want to exterminate all other life forms and pronounce that “resistance is futile.” “I don’t agree with the Dalek; I think resistance is not futile. But it’s not even about resistance, it’s about co-designing solutions that citizenry want and need,” Michael said.

But fear not, gentle humans – Finn, while prudently wary, is also optimistic, and he has some wise words of comfort for us all.

“A lot of people in the technology community are starting to recognize that what they're doing is not just solving technical challenges,” he said. “They’re moving farther and farther into the social and cultural realm, and they’re realizing that their work has challenges and consequences that can't just be addressed with technical fixes. So my optimistic side sees that realization slowly dawning and percolating through more and more levels of society in the tech world and beyond, and I’m hoping people on the policy and governmental sides of things will start catching up and say, ‘OK, we have to create new structures of regulation to contend with these challenges.’

“This is an area where I think science fiction is incredibly helpful, because it lets us work through the ethical and social dimensions of these problems before we've actually brought them into reality, and it gives everybody a shared vocabulary so we can do that work together. You don't have to have a PhD in AI to have a real conversation about it, because you can read a science fiction story or watch a movie and begin to have these conversations. We need to keep doing that work, and we need to bring more diverse voices into the conversation, because if we just create all these tools and we don't have the conversation about how we should use them, we're going to set ourselves up for disaster.”

