
ASU on the cutting edge of robotics

January 22, 2021

Take a look at some of the exciting work being done in Sun Devil robotics labs

Twenty-five faculty members at Arizona State University are researching the cutting edge of robotics. Seven of them have won the NSF CAREER award, the National Science Foundation's most prestigious award in support of early-career faculty. 

It's safe to say that ASU is poised to make great strides in the field.

“We have talent,” exoskeleton roboticist Tom Sugar said.

Some of their research has gone out into the world and become reality, like a prosthetic hand that can feel. Some of it, like a flying swarm of tiny robots, is a long way off. 

Here are some highlights from inside ASU's robotics labs:

Enhancing quality of life

Daniel Aukes, an assistant professor at the Polytechnic campus, is leading the Kaiteki Project.

About 50 million people in the U.S. have disabilities that hamper their mobility. Shopping in stores, strolling around neighborhoods and crossing an airport unassisted are things many of us take for granted but that people with disabilities often find difficult.

Aukes and four other roboticists are working on wearable, foldable robotic exoskeletons to enhance quality of life. The devices would fit around knees, elbows, waists and other areas. The team’s approach is threefold: provide alternate ways to carry loads, reconfigure the device for different activities and learn with the wearer to improve usability. Using machine learning, a device would learn about its wearer and guide and train them for a symbiotic experience.

Making the work easier

In his Human Machine Integration Lab, Sugar has created exoskeletons that cool bodies in extreme heat, make runners faster, ease the load for backpackers and climb walls like Spider-Man.

In addition to working on the Kaiteki Project with Aukes, Sugar is working on an exoskeleton for the U.S. Air Force for pushing and lifting. He also has a grant to build an elbow exoskeleton for people who can’t use their hands and elbows because of severe nerve injuries in the shoulder. 

“In my realm, we're trying to build robotic devices to help people, to assist people,” Sugar said. “We're trying to build devices to make the work less fatiguing, easier.”

Robot swarms

Spring Berman is an associate professor of aerospace and mechanical engineering in the School for Engineering of Matter, Transport and Energy and is graduate faculty for Computer Science and Exploration Systems Design. One of her research areas is creating swarms of tiny robots.

They could be used for search and rescue, mapping and a host of other applications. Aspects she and her team have investigated include collective transport, collision-free navigation, swarm herding, and mapping and estimation. 

Photo: Spring Berman, whose research in ASU’s Ira A. Fulton Schools of Engineering focuses on the modeling, analysis, control and optimization of multi-robot systems, poses with bio-inspired robots.

Human-robot interface

James Abbas doesn’t consider himself a roboticist. An associate professor in the School of Biological and Health Systems Engineering, he is director of the Center for Adaptive Neural Systems. He sees his work as using robot-like technology to interface with the human nervous system. 

For example, in spring 2011, a 41-year-old Florida real estate investor named Jason Little was in a rollover accident on the interstate. His left arm was trapped outside the vehicle, hopelessly mangled. Seven years later, he was matched with a prosthesis created by Abbas and Professor Ranu Jung, chair of the Department of Biomedical Engineering at Florida International University. When he touched his wife using the prosthesis, it was the first time since the accident he had felt her with his left hand.

The prosthesis is called the Neural-Enabled Prosthetic Hand system. Wires implanted in the upper arm connect to a neurostimulator implanted in Little’s shoulder that contains a radio frequency coil and a magnet. When Little dons the arm, he positions an external coil over the implant, where the magnet holds it in place.

“It basically is sensing two things,” Abbas said. “One is the force on the thumb, and the other is how much it’s open.”

Abbas has funding to recruit more amputees to the center and to run a second site at Walter Reed, the national military medical center in Washington, D.C.

“They're going to be recruiting military members with amputation and they'll be getting the system,” Abbas said. “So we're excited about that. We are going to be working with some people that have lost both hands and then providing sensation on one of their prostheses. For someone in that situation who really doesn't have any sensation at all with either of their hands, the importance of getting that back is going to be much more meaningful and much more useful for them because their baseline is so low in terms of what they can do.”

Robots in rehab

Stroke is one of the leading causes of long-term disabilities in the United States, affecting about 6.5 million Americans. With a decrease in stroke mortality and an increasingly aging population, the number of people requiring rehabilitation following stroke is projected to increase, creating a significant societal need to improve the effectiveness of stroke rehabilitation services.

Hyunglae Lee, an assistant professor of aerospace and mechanical engineering, is developing a framework called Transparent Robotic-Aided Rehabilitation, or TRAIN, and applying it to exercise therapy to significantly improve the effectiveness of robot-aided rehabilitation.

Robot-aided rehabilitation has been increasingly utilized to support clinicians in providing high-intensity and repetitive exercise therapy for stroke survivors, and many studies have demonstrated its effectiveness over conventional therapy.



Top photo: A demonstration of the electromyography motion capture system in the Hyunglae Lee robotics lab on the Tempe campus in 2018. Photo by Deanna Dent/ASU Now

Erik Wirtanen contributed to this article

Scott Seckel

Reporter , ASU News


Why seeing robots in pop culture is important

January 22, 2021

3 ASU experts on humans' fascination with stories of our machine-based counterparts and what we can learn from them

What was the first robot you ever encountered? (Or maybe “who” is more apt, if less technically accurate – more on that later.) If you’re a boomer, it might have been the Jetsons' helpful if obsolete maid, Rosie. If you’re a millennial, maybe it was the decidedly more terrifying red-eyed Terminator.

Whatever (or whomever) it was, you most likely encountered it in popular culture.

“There's something about robots that just tickles the childlike wonder in us. Something about encountering this thing that seems like it has agency but is in reality a machine,” said Lance Gharavi, an associate professor in the School of Music, Dance and Theatre at ASU’s Herberger Institute for Design and the Arts.

Gharavi, whose work focuses on the intersections of art and science, is currently at work on two projects as an affiliate faculty member of the Center for Human, Artificial Intelligence, and Robot Teaming. One, titled “Robotopolis,” is essentially a test bed for running experiments with autonomous vehicles, while the other involves teaming robots up with humans to perform tasks. Both have an element of performance, something Gharavi believes is inherent to apparently intelligent machines.

“Robots are theater,” he said.

In fact, the word “robot” was coined not in a lab or an engineering facility, but by the Czech writer Karel Čapek in his 1920 science fiction play “R.U.R.” (short for “Rossumovi Univerzální Roboti” or, in English, “Rossum's Universal Robots”).

While the idea of a machine that performs work was nothing new then – the history of automatons stretches back to the ancient Greeks – and stories like “Frankenstein” and that of the Jewish golem, which attribute sentience to creatures created by humans, already populated humankind’s mythological canon, Čapek’s “R.U.R.” is often credited as one of the first stories in modern consciousness to imagine machines as humanlike, and thereby begin to grapple with some of the more complex questions surrounding the emerging technology that we’re familiar with today.

“It is said that the function of art is to hold a mirror up to nature,” Gharavi said. “Robots sort of serve as a kind of mirror for us, almost like a fun house mirror, because they don't mirror us exactly. But they do throw into relief the things that make us human.”

Stories about robots, said Ed Finn, founding director of ASU’s Center for Science and the Imagination, tap into “our anxieties about what it means to be intelligent, what it means to be a human, what it means to be a worker, what it means to be a master and a slave … what it means to other. They are ways of creating an artificial face in order to confront our own ideas about who we are: our own ideas about personhood.”

(Perhaps tellingly, “R.U.R.” concludes by indulging humankind’s now widely held fear of a robot rebellion that results in our extinction.)

"I really like Wall-E," Finn said. "I like robots that don't try to be human and that create their own ideas of personhood."

Finn also serves as the academic director of Future Tense, a partnership between ASU, New America and Slate Magazine that frequently publishes sci-fi stories with titles like “The State Machine,” which imagines a future where the government is run entirely by – you guessed it – machines.

Since Čapek’s “R.U.R.,” humanlike robots have proliferated in popular culture, from the sexualized “Maria” of Fritz Lang’s seminal “Metropolis” to the insidiously charming “Sonny” of “I, Robot” to the wisecracking, cigar-smoking “Bender” of “Futurama.”

“It’s important to have stories that explore the relationship between scientific creativity and responsibility,” Finn said. And there are a few stories that we tend to tell over and over again about robots.

There’s the story of the killer robot (“The Terminator,” “Ex Machina,” “I, Robot”), in which humans are always opening Pandora’s box and finding themselves unprepared for what comes out. There’s the story of the robot as girlfriend (“Her,” “Ex Machina” again), in which humans address the fear that robots will become indistinguishable from us. And then there’s the “God story.”

“In the God story, we create these super intelligent beings that are so much more advanced than we are that they effectively become omniscient and omnipotent, and we end up replacing our old gods with new gods that we've created,” Finn said. “I think we actually need to be telling new, more grounded and realistic stories about the near future and AI.”

Certainly, as robots become increasingly intelligent, there’s no shortage of concerns to explore: issues of privacy, access, trust, influence and authenticity are all on the table.

“I worry that in many realms of our progress right now, our technical reach extends beyond our ethical grasp,” Finn said.

For evidence of that, we need look no further than the phones in our pockets, which literally track our every move, and the various apps and social media platforms they play host to, which are practically sprinting toward the point when they will be able to pull off the staggeringly impressive feat of accurately assessing our moods and predicting our behaviors.

Katina Michael, a professor in both ASU’s School for the Future of Innovation in Society and School of Computing, Informatics and Decision Systems Engineering, calls it “uberveillance”: “the purported ability to know the ‘who,’ ‘where’ and ‘what’ condition someone or something is in.”

“One cannot pass by the Arthur C. Clarke classic, ‘2001: A Space Odyssey,’” Michael said. “HAL 9000 says, ‘I'm sorry, Dave. I'm afraid I can't do that.’ It is the ‘override’ moment that we can learn from critically on the future perils of technologies with potential unintended consequences.”

After all, when iPhone’s Siri and Amazon’s Alexa are listening to us all day, they probably get a pretty good idea of what we’re all about. But both Michael and Finn caution that it’s important to manage our expectations of what emerging technologies are capable of.

“I love all the early ads for Siri where she was having these really lifelike conversations with celebrities like Samuel L. Jackson,” Finn said. “But if you've ever tried to have a conversation with Siri, you know it doesn't go that well. … If you treat Siri like a person, you're missing the things that Siri is actually capable of doing.”

Humans are now at a point where the biological is merging with the technical, and Michael, whose research and writing has looked at the potential of implantable devices for medical and nonmedical applications, believes that the biggest ethical questions and concerns regarding emerging technologies today have to do with the promise of technologies that will alleviate social injustices.

“To that end,” she said, “the techno-myth that promises to end suffering — through robotic assistive tech — or to end pain, in the case of robotic implant devices that stimulate parts of the body and brain, or to offer solutions that are touted as a panacea, for example, hiring a robot to look after the autistic or the elderly for care” also brings up “questions related to human rights, questions related to responsibility and accountability and the ethics of care. Building up artificial intelligence as being something that it is not, is perilous to people in need, creating false hopes, when a vast majority of solutions are not approved by health insurance providers and are unaffordable.”

Expanding further on that thought, Michael added, “We want to build brain computer interfaces that are complex, yet the majority of the world’s disabled persons who are missing a limb or are unnecessarily turning blind (suffer from) a lack of resources and do not have basic prosthetics or operative procedures toward prevention. The inequality question needs to be broached.”

The fact that humans are so trusting of intelligent technology as to be willing to implant it in our bodies, let our Roombas run amok while we’re not home and believe utterly what our Facebook feed is telling us speaks to how much we take it for granted. And when we do that, we run the risk of allowing ourselves to be detrimentally influenced by it.

“We outsource so much of our cognition and our memory to these systems already, and we don't often pause to think about what we're paying for the services that we're getting,” Finn said. “When you think about Google or Apple or Amazon or Facebook, these platforms provide all of these incredible tools, but they're not doing that as a public service. They're doing that as part of an economy where we are the products that they're selling to other people.”

Michael also has an affinity for the “Doctor Who” Daleks, a fictional extraterrestrial race of mutants who want to exterminate all other life forms and pronounce that resistance is futile. “I don’t agree with the Dalek; I think resistance is not futile. But it’s not even about resistance, it’s about co-designing solutions that citizenry want and need,” Michael said.

But fear not, gentle humans – Finn, while prudently wary, is also optimistic, and he has some wise words of comfort for us all.

“A lot of people in the technology community are starting to recognize that what they're doing is not just solving technical challenges,” he said. “They’re moving farther and farther into the social and cultural realm, and they’re realizing that their work has challenges and consequences that can't just be addressed with technical fixes. So my optimistic side sees that realization slowly dawning and percolating through more and more levels of society in the tech world and beyond, and I’m hoping people on the policy and governmental sides of things will start catching up and say, ‘OK, we have to create new structures of regulation to contend with these challenges.’

“This is an area where I think science fiction is incredibly helpful, because it lets us work through the ethical and social dimensions of these problems before we've actually brought them into reality, and it gives everybody a shared vocabulary so we can do that work together. You don't have to have a PhD in AI to have a real conversation about it, because you can read a science fiction story or watch a movie and begin to have these conversations. We need to keep doing that work, and we need to bring more diverse voices into the conversation, because if we just create all these tools and we don't have the conversation about how we should use them, we're going to set ourselves up for disaster.”



Top photos courtesy of Twentieth Century Fox, A24 and TriStar Pictures. All GIFs courtesy of GIPHY.