
AI taught itself to beat us at our own game — what does it mean?

Q&A with ASU computer science professor provides a glimpse into the future

November 03, 2017

Smart just got beaten by smarter, and it taught itself.

Two weeks ago Google DeepMind announced that the artificial intelligence program AlphaGo Zero soundly beat all previous versions of AlphaGo in the ancient Chinese board game Go, teaching itself to become the best Go player ever, human or machine, in just 40 days. 

Previous versions of AlphaGo were trained on thousands of human amateur and professional games of Go to learn what humans required 3,000 years to master. AlphaGo Zero had only the rules of Go to work with, mastering the game without human assistance by playing itself.

Some experts believe the victory moves the needle on AI, pushing forth a new AI-driven industrial revolution, while others are worried about a robot uprising that will threaten people’s jobs and security.

Arizona State University's Subbarao Kambhampati, a professor of computer science in the Ira A. Fulton Schools of Engineering, works in artificial intelligence and focuses on planning and decision-making, especially in the context of human-machine collaboration. As president of the Association for the Advancement of Artificial Intelligence, Kambhampati provides a glimpse into the future in this Q&A with ASU Now.


Subbarao Kambhampati

Question: What does the ability of AlphaGo Zero to teach itself to play the game of Go at a superhuman level tell us about the state of artificial intelligence?

Answer: AlphaGo Zero is an impressive technical achievement, inasmuch as it learns the game of Go purely from self-play, without any human intervention. It is, however, still an example of narrow AI. While we romanticize ability in Go and chess as a sign of high intellect, the games don't actually have that much in common with the real world. For example, there aren't many real-world scenarios or tasks for which unlimited self-play with a perfect simulator is feasible, and AlphaGo Zero depends on exactly that.

Because of this, human learning is forced to be much more parsimonious with examples, depending instead on background knowledge accumulated over a lifetime to analyze single examples more closely.
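The self-play idea can be illustrated on a much smaller game than Go. The sketch below is a hypothetical example, not AlphaGo Zero's actual algorithm (which combines tree search with deep neural networks); it learns the game of Nim purely by playing against itself, given only the rules and the final win/loss signal:

```python
import random

def self_play_train(pile=21, episodes=20000, eps=0.2, seed=0):
    """Learn Nim (players alternately take 1-3 stones; taking the last
    stone wins) purely from self-play, with no human examples."""
    rng = random.Random(seed)
    Q = {}  # Q[(stones_left, take)] = estimated value of that move

    def best(stones):
        moves = range(1, min(3, stones) + 1)
        return max(moves, key=lambda a: Q.get((stones, a), 0.0))

    for _ in range(episodes):
        stones, history = pile, []
        while stones > 0:
            # Epsilon-greedy: mostly play the current best move, sometimes explore.
            if rng.random() < eps:
                a = rng.randint(1, min(3, stones))
            else:
                a = best(stones)
            history.append((stones, a))
            stones -= a
        # Whoever took the last stone won; credit the moves backwards,
        # flipping the sign each ply because the two players alternate.
        result = 1.0
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + 0.1 * (result - old)
            result = -result
    return Q, best
```

After training, the greedy policy has discovered, for example, that taking all three stones from a pile of three wins immediately, despite never being shown a human game. The contrast with Go is instructive: Nim's "simulator" is a single subtraction, perfectly accurate and infinitely repeatable, which is precisely what most real-world tasks lack.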

Q: What do we as humans have to gain in creating intelligence smarter than us?

A: I think we have always worked on creating machines better than us in narrow spheres. We don't compete with calculators at arithmetic, or with cranes at lifting weights. We found ways to augment our overall abilities with the help of these specialized superhuman machines. So it goes with intelligent systems specialized to narrow areas. For example, image recognition systems that can read radiology images better than humans can be used to improve the diagnostic capabilities of human doctors.

Q: Experts seem to disagree on whether AI is dangerous to the economy and to mankind. What is your perspective?

A: I find the “AI as a threat to humankind” arguments advanced by the likes of Elon Musk and Nick Bostrom rather far-fetched. These "Terminator" scenarios often distract attention from the more important discussions we need to have about the effects of increased autonomy and automation on our society.

It is increasingly clear that AI will have a big impact on many types of routine jobs, whether they are blue collar or white collar. We need to educate the public about this and provide retraining opportunities for those affected by the job displacement.

Q: Is there something we can do to develop AI technology that will work with us rather than replace or threaten us?

A: For much of its history, AI has focused on autonomy and on surpassing humans in various tasks, rather than on the far more important, if less glamorous, goal of cooperation and collaboration with humans. I believe that AI research should focus a lot more on human-aware AI systems that are designed from the ground up to collaborate with us. After all, it is this ability we have of working together, rather than that of playing a game of Go, that is the true hallmark of our intelligence (even if we tend to take it for granted).

To do this well, AI systems need to learn and use mental models of the humans they work with, and take aspects of social and emotional intelligence much more seriously. I joke that these aspects have been the Rodney Dangerfield of AI research — they weren't given much respect. This is why human-aware AI is the main focus of our research group at ASU.

Top photo illustration courtesy of Pixabay.
