
Why seeing robots in pop culture is important

January 22, 2021

3 ASU experts on humans' fascination with stories of our machine-based counterparts and what we can learn from them

What was the first robot you ever encountered? (Or maybe who is more apt, if less technically accurate – more on that later.) If you’re a boomer, it might have been the Jetsons' helpful if obsolete maid, Rosie. If you’re a millennial, maybe it was the decidedly more terrifying red-eyed Terminator.

Whatever (or whoever) it was, chances are you first encountered it in popular culture.

“There's something about robots that just tickles the childlike wonder in us. Something about encountering this thing that seems like it has agency but is in reality a machine,” said Lance Gharavi, an associate professor in the School of Music, Dance and Theatre at ASU’s Herberger Institute for Design and the Arts.

Gharavi, whose work focuses on the intersections of art and science, is currently at work on two projects as an affiliate faculty member of the Center for Human, Artificial Intelligence, and Robot Teaming. One, titled “Robotopolis,” is essentially a test bed for running experiments with autonomous vehicles, while the other involves teaming robots up with humans to perform tasks. Both have an element of performance, something Gharavi believes is inherent to apparently intelligent machines.

“Robots are theater,” he said.

In fact, the word “robot” was coined not in a lab or an engineering facility, but by the Czech writer Karel Čapek in his 1920 science fiction play “R.U.R.” (short for “Rossumovi Univerzální Roboti” or, in English, “Rossum's Universal Robots”).

The idea of a machine that performs work was nothing new then – the history of automatons stretches back to the ancient Greeks – and stories like “Frankenstein” and that of the Jewish golem, which attribute sentience to creatures created by humans, already populated humankind’s mythological canon. Still, Čapek’s “R.U.R.” is often credited as one of the first stories in modern consciousness to imagine machines as humanlike, and thereby to begin grappling with some of the more complex questions surrounding the technology that we’re familiar with today.

“It is said that the function of art is to hold a mirror up to nature,” Gharavi said. “Robots sort of serve as a kind of mirror for us, almost like a fun house mirror, because they don't mirror us exactly. But they do throw into relief the things that make us human.”

Stories about robots, said Ed Finn, founding director of ASU’s Center for Science and the Imagination, tap into “our anxieties about what it means to be intelligent, what it means to be a human, what it means to be a worker, what it means to be a master and a slave … what it means to other. They are ways of creating an artificial face in order to confront our own ideas about who we are: our own ideas about personhood.”

(Perhaps tellingly, “R.U.R.” concludes by indulging humankind’s now widely held fear of a robot rebellion that results in our extinction.)

"I really like Wall-E," Finn said. "I like robots that don't try to be human and that create their own ideas of personhood."

Finn also serves as the academic director of Future Tense, a partnership between ASU, New America and Slate Magazine that frequently publishes sci-fi stories with titles like “The State Machine,” which imagines a future where the government is run entirely by – you guessed it – machines.

Since Čapek’s “R.U.R.,” humanlike robots have proliferated in popular culture, from the sexualized “Maria” of Fritz Lang’s seminal “Metropolis” to the insidiously charming “Sonny” of “I, Robot” to the wisecracking, cigar-smoking “Bender” of “Futurama.”

“It’s important to have stories that explore the relationship between scientific creativity and responsibility,” Finn said. And there are a few stories that we tend to tell over and over again about robots.

There’s the story of the killer robot (“The Terminator,” “Ex Machina,” “I, Robot”), in which humans are always opening Pandora’s box and finding themselves unprepared for what comes out. There’s the story of the robot as girlfriend (“Her,” “Ex Machina” again), in which humans address the fear that robots will become indistinguishable from us. And then there’s the “God story.”

“In the God story, we create these super intelligent beings that are so much more advanced than we are that they effectively become omniscient and omnipotent, and we end up replacing our old gods with new gods that we've created,” Finn said. “I think we actually need to be telling new, more grounded and realistic stories about the near future and AI.”

Certainly, as robots become increasingly intelligent, there’s no shortage of concerns to explore: issues of privacy, access, trust, influence and authenticity are all on the table.

“I worry that in many realms of our progress right now, our technical reach extends beyond our ethical grasp,” Finn said.

For evidence of that, we need look no further than the phones in our pockets, which literally track our every move, and the apps and social media platforms they host, which are sprinting toward the staggeringly impressive feat of accurately assessing our moods and predicting our behaviors.

Katina Michael, a professor in both ASU’s School for the Future of Innovation in Society and School of Computing, Informatics and Decision Systems Engineering, calls it “uberveillance”: “the purported ability to know the ‘who,’ ‘where’ and ‘what’ condition someone or something is in.”

“One cannot pass by the Arthur C. Clarke classic, ‘2001: A Space Odyssey,’” Michael said. “HAL 9000 says, ‘I'm sorry, Dave. I'm afraid I can't do that.’ It is the ‘override’ moment that we can learn from critically on the future perils of technologies with potential unintended consequences.”

After all, when Apple’s Siri and Amazon’s Alexa are listening to us all day, they probably get a pretty good idea of what we’re all about. But both Michael and Finn caution that it’s important to manage our expectations of what emerging technologies are capable of.

“I love all the early ads for Siri where she was having these really lifelike conversations with celebrities like Samuel L. Jackson,” Finn said. “But if you've ever tried to have a conversation with Siri, you know it doesn't go that well. … If you treat Siri like a person, you're missing the things that Siri is actually capable of doing.”

Humans are now at a point where the biological is merging with the technical, and Michael, whose research and writing has looked at the potential of implantable devices for medical and nonmedical applications, believes that the biggest ethical questions and concerns regarding emerging technologies today have to do with the promise of technologies that will alleviate social injustices.

“To that end,” she said, “the techno-myth that promises to end suffering — through robotic assistive tech — or to end pain, in the case of robotic implant devices that stimulate parts of the body and brain, or to offer solutions that are touted as a panacea, for example, hiring a robot to look after the autistic or the elderly for care” also brings up “questions related to human rights, questions related to responsibility and accountability and the ethics of care. Building up artificial intelligence as being something that it is not, is perilous to people in need, creating false hopes, when a vast majority of solutions are not approved by health insurance providers and are unaffordable.”

Expanding further on that thought, Michael added, “We want to build brain computer interfaces that are complex, yet the majority of the world’s disabled persons who are missing a limb or are unnecessarily turning blind (suffer from) a lack of resources and do not have basic prosthetics or operative procedures toward prevention. The inequality question needs to be broached.”

The fact that humans are so trusting of intelligent technology as to be willing to implant it in our bodies, let our Roombas run amok while we’re not home and believe utterly what our Facebook feed is telling us speaks to how much we take it for granted. And when we do that, we run the risk of allowing ourselves to be detrimentally influenced by it.

“We outsource so much of our cognition and our memory to these systems already, and we don't often pause to think about what we're paying for the services that we're getting,” Finn said. “When you think about Google or Apple or Amazon or Facebook, these platforms provide all of these incredible tools, but they're not doing that as a public service. They're doing that as part of an economy where we are the products that they're selling to other people.”

Michael also has an affinity for the Daleks of “Doctor Who,” a fictional extraterrestrial race of mutants bent on exterminating all other life forms, who pronounce that “resistance is futile.” “I don’t agree with the Dalek; I think resistance is not futile. But it’s not even about resistance, it’s about co-designing solutions that citizenry want and need,” Michael said.

But fear not, gentle humans – Finn, while prudently wary, is also optimistic, and he has some wise words of comfort for us all.

“A lot of people in the technology community are starting to recognize that what they're doing is not just solving technical challenges,” he said. “They’re moving farther and farther into the social and cultural realm, and they’re realizing that their work has challenges and consequences that can't just be addressed with technical fixes. So my optimistic side sees that realization slowly dawning and percolating through more and more levels of society in the tech world and beyond, and I’m hoping people on the policy and governmental sides of things will start catching up and say, ‘OK, we have to create new structures of regulation to contend with these challenges.’

“This is an area where I think science fiction is incredibly helpful, because it lets us work through the ethical and social dimensions of these problems before we've actually brought them into reality, and it gives everybody a shared vocabulary so we can do that work together. You don't have to have a PhD in AI to have a real conversation about it, because you can read a science fiction story or watch a movie and begin to have these conversations. We need to keep doing that work, and we need to bring more diverse voices into the conversation, because if we just create all these tools and we don't have the conversation about how we should use them, we're going to set ourselves up for disaster.”

MORE

Dancing, vacuuming, learning: What's next for robots and their creators?

ASU on the cutting edge of robotics

Top photos courtesy of Twentieth Century Fox, A24 and TriStar Pictures. All GIFs courtesy of GIPHY.

 

Dancing, vacuuming, learning: What's next for robots and their creators?

January 22, 2021

100 years after the term 'robot' was coined, ASU roboticists discuss the state of the field and where it's headed in the future

Robotics may be the strangest of the hard sciences. 

It’s incredibly old: Chinese artisans made humanoid figures that could sing and dance as early as the 10th century B.C. Alexandrian engineers built automata used in theater and religious ceremonies, as well as toys for the rich. Arab engineers took Greek robotic science and added practical applications: in the 12th century, a Muslim engineer named Ismail Al-Jazari built an automaton waitress that could serve drinks.

It’s incredibly hard: Making a humanoid machine that can think and work on its own is a long way off. People will be living on Mars long before we see a robot anything like what we see in the movies. Roboticists joke that rocket science is a piece of cake compared to robotics, and they’re right. The Abraham Lincoln Audio-Animatronic at Disneyland that recites the Gettysburg Address is basically where the field is — and that’s been around since 1964.

It challenges philosophy: Robotics asks fundamental questions about the nature of humanity. Roboticists have to teach their machines things humans don’t even teach children, because children learn a lot on their own through observation — something robots can't do.

One hundred years ago this week, a play by Czech writer Karel Čapek called "R.U.R." debuted on Jan. 25. It introduced the word “robot” to the world. The drama takes place in a factory where artificial humans are manufactured. (The robots revolt, as they do so often in popular culture.)

In this story, we will look at where robots are now, what the field’s future is and the obstacles that lie in the way. Arizona State University has a multitude of people working in 25 labs on facets of robotics including artificial intelligence, rehabilitation, exoskeletons, foldable robots, collective behavior, human-robot collaboration, brain-machine interfaces and more. 

They are mechanical engineers, electrical engineers, computer scientists, applied psychologists and human systems engineers. Ask most of them and they will say it all overlaps. 

Back before the Arab, Greek and Chinese automata, robots and working machines appeared in ancient mythology. People have been captivated by the idea of creating mechanical facsimiles of themselves for a very long time. Why?

Heni Ben Amor directs the Interactive Robotics Laboratory at ASU. Originally he wanted to be an animator, creating lifelike characters on screen. But in grad school he worked with robots. “I went in a completely different direction,” he said. 

Now he studies artificial intelligence, machine learning, human-robot interaction, robot vision and automatic motor skill acquisition. He has led several international and national projects as a principal investigator, including projects for NASA, the National Science Foundation, Intel, Daimler-Benz, Bosch, Toyota and Honda. Two years ago he received the NSF CAREER Award — the organization's most prestigious award in support of early-career faculty. Six assistant professors and one associate professor at ASU, all in robotics, have won the award.

“I think humans have this intrinsic motivation of actually creating stuff,” Ben Amor said. “And for a lot of us creating something that's similar to us, like another human being, is really kind of the hallmark of what we can achieve. … Certainly this drive to create something that's similar to us seems to be innate. It's not something that we came up with in the last couple hundred years.”

Video by Deanna Dent/ASU Now

Ben Amor speculates it’s the same impulse that led early man to paint the walls in caves. 

“The Czech play is a milestone in a long arc of development,” he said, citing the early builders of automata. “So the marking of the word is not necessarily the beginning. It's kind of a sort of focusing, bringing all of these ideas, narrowing them down to these five letters. And I think that's what it's about. It kind of focused everything to these five letters and then people took those five letters and created this plethora of many, many different ideas out of that again. So, it's kind of conceptualized it, made it a little bit clearer that there's a domain behind this.”

A few weeks ago, the internet was captivated by a video of robots created by Boston Dynamics dancing to The Contours’ “Do You Love Me?” It was impressive, showcasing robot dexterity and how far physical systems have come. Microprocessors and computing power have grown by leaps and bounds. The video has nearly 27 million views.

But while they’re impressive, the Boston Dynamics robots are essentially sophisticated marionettes. Fundamentally, they’re no different from Al-Jazari’s 12th-century waitress or Walt Disney’s Abe Lincoln. They can each do only one thing.

Here’s what they can’t do: Wash a sink of dirty dishes containing fine china plates, crystal stemware and a lasagna pan caked in baked-on crud without breaking most of it. They can’t go into any house and fix a water leak. They could deliver a package, but not walk up to the front door and let you know it arrived. And they can’t make any decisions on their own.

People and robots

“The field of robotics in general is still in its infancy and in particular human robot interaction is still at a very early stage,” said Wenlong Zhang, an assistant professor of systems engineering and director of the Robotics and Intelligent Systems Laboratory. Zhang and his team work on design and control of advanced robotic systems, including wearable assistive robots, soft robots and unmanned aerial vehicles. They also work on the problem of human-robot interaction. 

Right now, most working robots sit in factories doing repetitive tasks all day long. They don’t have much intelligence. If you’ve ever seen an industrial robot at work, you know it’s nothing you’d want to get close to while it’s operating. 

“For many of these conventional robots, you literally have to tell them the trajectory and the many new applications that we're looking at,” Zhang said. “It's not possible. … Robots need to have not only high precision and power, but also need to be adaptive. They need to be inherently safe. They need to understand the intent of their users. So human robot interaction is basically a very big umbrella that covers these topics.”

Machine intelligence is a broad field with a lot of people working on it. The ultimate goal is to make sure robots can be reliable partners. The crux of the problem lies in teaching machines things parents don’t even teach children explicitly, things children learn on their own, like speaking or picking up objects.

“That's why many researchers, especially in psychology and machine learning, are trying to understand how infants learn and they are trying to apply these to robots,” Zhang said. “This is a challenging problem. … Robots now work with humans, and humans are inherently adaptive and stochastic. … Let's say if we have this conversation again, I probably am not going to say the exact same thing. … If you think about robots being your partner, they have to understand this.”

hugging robot

Computer science master’s degree student Michael Drolet gives the "hugging robot" a hug in the Interactive Robotics Lab on Oct. 23, 2020. Heni Ben Amor’s lab focuses on machine learning, human-robot interaction, grasping manipulation and robot autonomy with the hugging robot "learning" to interact with humans. Photo by Deanna Dent/ASU Now 

How can I help you?

Ask Ben Amor how long it takes to program a robot.

“Six years.”

Six years?

“First you have to get the PhD,” he said.

Remember early personal computers? They were a bear to set up and operate. The early internet wasn’t much better: this video player didn’t work with that browser and vice versa. No one surfed the net; they kind of stumbled around in it. Today personal computers (and tablets and smartphones) are a breeze. You don’t have to know how they work.

But with robots and artificial intelligence today, you have to be a computer scientist. Sometimes even that’s not enough.

Siddharth Srivastava is an assistant professor and director of the Autonomous Agents and Intelligent Robots lab. Srivastava and his team research ways to compute the behavior of autonomous agents, going from theoretical formulations to executable systems. They use mathematical logic, probability theory, machine learning and notions of state and action abstractions.

“Undergrads who join our lab have to go through one or two years of training before they can actually change what a robot does,” Srivastava said. “In fact we often have undergraduate students — or let's say first-year master's students who've already done their undergraduate program in computer science, it takes them about a year after knowing all that, to just start configuring the robot on their own for a new task. It's easy to hit replay and make it repeat whatever it has been programmed to do. But that's not the promise of AI, right? The promise of AI is that you can tell it what you want and it will do that stuff for you in a safe way. … To give that instruction, you actually need to have a lot of expertise.”

Srivastava tackles two lines of research in his lab. 

First, how can autonomy be made more efficient? Say you have a household robot and you need to give it a new task. What kind of algorithms do you need to make it more adept at that?

In the second, you have an AI robot or an autonomous assistant.  You don’t know much about it. (Srivastava expects this will be the vast majority of users.) “How do you interact with it?” he said. “How do you change what it's doing? How do you rectify it? How do you understand what it's doing?”

What he has found is that in both dimensions some common mathematical tools dealing with abstraction and hierarchies — and how they are used — help solve those problems.

“So on the computation side, we are looking at how to use hierarchical abstractions to make it easier for AI systems to autonomously solve complex problems,” he said.

Frying an egg isn’t too tough for a robot to pull off, but it would have serious issues if you just asked it to make you breakfast. Where does it start? What does it do first?

Subbarao Kambhampati has been researching robot task planning his entire career. He is a computer scientist with expertise in artificial intelligence, automated planning and information integration. For the past 10 years he has been working on how robots can work with humans. 

When you book a flight online, the bot you use (which Kambhampati calls "a piece of code living on the sidewalk on the internet") will offer you a range of times and prices. What if it did more than that? What if it anticipated what you want and helped you, instead of just taking a request? 

“So in general, my work is how to get these robots, our bots, to be our partners, rather than just our tools,” he said. 

Teaching robots to teach themselves, via many simulations, is streamlining task planning. Children do that by picking up everything they see around them. Eventually they figure out: This is heavy, that is smooth, these break. No one specifically teaches them these things. 

The Boston Dynamics robots didn’t learn to jump by jumping a million times on their own; that kind of self-taught, trial-and-error learning is what Kambhampati is talking about.

“Much of what changed in AI in general in the last 10 years is these tacit knowledge tasks that we didn't know how to tell robots, because basically anything that we do would only be an approximation, and it's much better somehow to get the robot to learn by itself by just trial and error and so much of manipulation right now, like grasping,” Kambhampati said. “So this is actually an interesting change in the research. Much more robotics work now is essentially focused on learning techniques and object manipulation.” 

Tom Sugar Lab

Professor Tom Sugar works with doctoral student Jason Olson and shows how the "Sparky" robotic ankle works on July 20, 2018. Sugar and his students work on robotics that enhance human movements and work as tools to help rehabilitation for those who have suffered stroke or injury. Photo by Deanna Dent/ASU Now 

Enhancing people with robotics

Tom Sugar is a roboticist and director of the Human Machine Integration Lab, which works on robotic orthoses, prostheses and wearable robots for enhanced mobility. To answer your question, Sugar says no, there will never be an Iron Man suit; it would be too heavy and too expensive.

But the next 10 years will see a boom in human augmentation. Exoskeletons will assist warehouse workers.

“In my realm, we're trying to build robotic devices to assist people,” Sugar said. “In logistics, if you have to lift, or if you have to push your arms above your head, and you've got shoulder injury or neck injury or back injury, we're trying to build devices to make the work easier and less fatiguing.”

Sugar sees robots as tools. Horses made people travel farther and faster, then they were replaced by cars. Robotics is going to make life easier by assisting people with artificial intelligence. Picture an airplane mechanic working on an engine while wearing something like Google Glass.

“It'll pop up information and tell you, ‘Hey, you know, if this is wrong, why don't you check these five things?’” Sugar said. “And then the human being can make an assessment on those things.”

Artificial intelligence can beat people in chess, but pair a human with AI and they’re both going to be better. 

“Plus the human being can make smarter and better choices," Sugar said. "And I think that's the positive outcome that we're going to see in the future.”

Looking ahead

What will the future of robots be like – something that’s universally accessible to everyone, or something tailored to specific people or classes of users?

“I think what we might see is more task-specific rather than person-specific, because people have a huge variance in their limits, capabilities, preferences and things like that,” Srivastava said. “But there are fewer tasks where it's easy to conform a robot to a task. For instance, robot vacuum cleaners. They are not personalized, but they are specific to the task of vacuuming.”

One of the things the AI community has been doing intuitively is making AI systems safe, efficient and easy to use. The missing element has been the operator class. Srivastava said what need to be designed are systems that are easy to learn how to use.

“Who is it easy to use for?” he asked. “You could perfectly reasonably say that an AI system is used in a very narrow situation. So the operators who are qualified to use it are only people with this level of certification. That's fine in some cases, but I think what we need to do is design systems that are not just easy to use, but easy to learn how to use. … What this would allow you to do is, if you have an AI system and an inexperienced user, then the AI system would be able to train the user on the fly in a way that it keeps the operations safe. So I think that is a good objective for us as we go forward.”

MORE

ASU on the cutting edge of robotics

Why seeing robots in pop culture is important

Top photo: The Kuka robotic arm goes through its paces at the Innovation Hub on the Polytechnic campus on May 17, 2018. Photo by Charlie Leight/ASU Now

Scott Seckel

Reporter, ASU News

480-727-4502