A leap of progress for energy-efficient intelligent computing

ASU engineer part of team exploring solutions to memory and energy challenges

December 26, 2018

According to Andrew Ng, a pioneer in machine learning and co-founder of Google Brain, artificial intelligence will have a transformational impact on the world similar to the electricity revolution. From self-driving vehicles to smart biomedical devices, nearly all future technologies will incorporate algorithms that enable machines to think and act more like humans. 

These powerful deep-learning algorithms require an immense amount of energy to manipulate, store and transport the data that will make artificial intelligence work. But current computing systems are not designed to support such large network models. Thus, innovative solutions are critical to overcoming performance-energy and hardware challenges, especially for memory.

The need for sustainable computing platforms has motivated Jae-sun Seo and Shimeng Yu, faculty members at Arizona State University and Georgia Tech, respectively, to explore emerging memory technologies that will enable parallel neural computing for artificial intelligence.

Neural computing for artificial intelligence will have profound impacts on the future, including in autonomous transportation, finance, surveillance and personalized health monitoring.

More sustainable computing platforms will also help bring artificial intelligence to power- and area-constrained mobile and wearable devices, without relying on banks of central and graphics processing units.

Seo and Yu’s collaboration began in ASU’s Ira A. Fulton Schools of Engineering, where Yu was a faculty member before joining Georgia Tech. Jeehwan Kim with the Massachusetts Institute of Technology and Saibal Mukhopadhyay with Georgia Tech round out their research team.


With ASU as the lead institution, the team received a three-year, $1.8 million grant from the National Science Foundation and the Semiconductor Research Corporation as part of the program called Energy-Efficient Computing: from Devices to Architectures, or E2CDA. The program supports revolutionary approaches to minimize the energy impacts of processing, storing and moving data.

“Overall, this project proposes a major shift in the current design of resistive random-access memories, also known as RRAMs, to achieve real-time processing power for artificial intelligence,” said Seo, an assistant professor in the School of Electrical, Computer and Energy Engineering, one of the six Fulton Schools.

The team wants to explore RRAM as an artificial intelligence computing platform — a departure from the way these applications are currently powered.

To accelerate deep-learning algorithms, many application-specific integrated circuit designs are being developed in CMOS, or complementary metal-oxide semiconductor, technology. CMOS is the leading semiconductor technology and is used in most of today’s computer microchips.

However, on-chip memory such as static random-access memory, or SRAM, has been shown to be a bottleneck in bringing artificial intelligence technology to small-form-factor, energy-constrained portable hardware systems, for two key reasons: area and energy.

For custom application-specific integrated circuit chips — which are used to power many artificial intelligence applications — cost limits the silicon space where information can be stored and restricts the amount of on-chip memory that can be integrated.

Considering that state-of-the-art neural networks, such as ResNet, require storage of millions of weights, the amount of on-chip SRAM is typically not sufficient, and an off-chip main memory is required. Moving data to and from the off-chip memory incurs substantial energy and delay.
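To illustrate the scale of the problem, a rough back-of-the-envelope calculation (not from the article; the parameter count is an approximate public figure for ResNet-50) shows how quickly weight storage outgrows typical on-chip SRAM budgets:

```python
# Illustrative estimate: why millions of weights overwhelm on-chip SRAM.
# ResNet-50 has roughly 25.6 million weights (approximate figure).
def weight_storage_mb(num_weights, bits_per_weight):
    """Return the storage needed for the weights, in megabytes."""
    return num_weights * bits_per_weight / 8 / 1e6

resnet50_weights = 25.6e6
fp32 = weight_storage_mb(resnet50_weights, 32)  # 32-bit floating point
int8 = weight_storage_mb(resnet50_weights, 8)   # 8-bit quantized

print(f"FP32: {fp32:.1f} MB, INT8: {int8:.1f} MB")
# Even the 8-bit model far exceeds the few megabytes of SRAM typical of a
# mobile accelerator, forcing costly off-chip memory traffic.
```

Even aggressive quantization leaves the model tens of megabytes in size, which is why higher-density on-chip storage is so attractive.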

SRAM is also volatile, meaning it requires power to retain its data. As such, it consumes static power and energy even when there is no activity.

Emerging non-volatile memory devices, whose cells can be roughly 10 times smaller than SRAM cells, have been proposed as a higher-density alternative for on-chip weight storage. Since the density is much higher, more weights can be stored on-chip, eliminating or substantially reducing energy-hungry off-chip memory communication.

Because non-volatile memory devices do not require a power supply to maintain the storage, static energy is also dramatically reduced. RRAM is a special subset of non-volatile memory devices that can store multiple levels of resistance states and could naturally emulate a synaptic device in neural networks, but it still has limitations for practical large-scale artificial intelligence computing tasks.
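The idea of RRAM cells emulating synapses can be sketched with a simple numerical model. The following is a hypothetical illustration (the conductance range and number of resistance levels are assumptions, not the team's design): weights are stored as cell conductances, input voltages drive the crossbar rows, and by Ohm's law plus Kirchhoff's current law each column current is a dot product computed in a single analog step.

```python
# Hypothetical sketch of RRAM crossbar in-memory computing.
LEVELS = 8                   # multi-level resistance states per cell (assumed)
G_MIN, G_MAX = 1e-6, 1e-4    # conductance range in siemens (assumed)

def program(weight):
    """Quantize a weight in [0, 1] to one of LEVELS conductance states."""
    code = round(weight * (LEVELS - 1))
    return G_MIN + code / (LEVELS - 1) * (G_MAX - G_MIN)

def crossbar_mac(weight_matrix, voltages):
    """Column currents of the crossbar: a matrix-vector multiply in memory.

    Each column current is sum_i G[i][j] * V[i], i.e. Ohm's law summed by
    Kirchhoff's current law -- no data movement between memory and logic.
    """
    G = [[program(w) for w in row] for row in weight_matrix]
    cols = len(weight_matrix[0])
    return [sum(G[i][j] * voltages[i] for i in range(len(voltages)))
            for j in range(cols)]

currents = crossbar_mac([[0.9, 0.1], [0.4, 0.7]], [0.2, 0.1])
print(currents)  # microampere-scale column currents
```

Because the multiply-accumulate happens where the weights are stored, the costly memory-to-processor shuttling of conventional architectures disappears.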

“We’ll perform innovative and interdisciplinary research to address the limitations in today’s RRAM-based neural computing,” said Seo. “Our work will make a leap of progress toward energy-efficient intelligent computing.”

State-of-the-art neural networks require storage of millions of weights, while conventional SRAM storage is costly and consumes static power. Jae-sun Seo and Shimeng Yu’s E2CDA project investigates binary RRAM devices in the near term and analog RRAM devices in the long term as the storage and computing platform for artificial intelligence. Diagram courtesy of Jae-sun Seo

Seo and Yu’s research will address limitations in RRAMs by investigating novel technologies from device to architecture. Using a staged approach, in the near term, with today’s binary resistive devices (high and low resistance), the researchers will first develop new RRAM array designs that allow effective representation and parallel in-memory computation with positive and negative weights.
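One common way to represent signed weights with binary (high/low resistance) cells is a differential two-cell encoding; the sketch below is a hypothetical illustration of that general technique (conductance values assumed), not the specific array design the team will develop:

```python
# Hypothetical differential encoding of signed weights with binary RRAM cells.
G_LOW, G_HIGH = 1e-6, 1e-4   # conductances of the two binary states (assumed)

def encode(sign):
    """Map a ternary weight (-1, 0, +1) to a (plus, minus) cell pair."""
    if sign > 0:
        return (G_HIGH, G_LOW)   # positive: "plus" cell in low resistance
    if sign < 0:
        return (G_LOW, G_HIGH)   # negative: "minus" cell in low resistance
    return (G_LOW, G_LOW)

def column_output(weights, voltages):
    """Subtract the two column currents to recover a signed dot product."""
    i_plus = sum(encode(w)[0] * v for w, v in zip(weights, voltages))
    i_minus = sum(encode(w)[1] * v for w, v in zip(weights, voltages))
    return i_plus - i_minus

out = column_output([+1, -1, +1], [0.1, 0.1, 0.2])
print(out)  # proportional to (+0.1 - 0.1 + 0.2) = 0.2
```

Subtracting the paired column currents yields a result proportional to the signed weighted sum, even though each individual cell can only hold a non-negative conductance.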

Over the longer term, the team will investigate a novel epitaxial resistive device, or EpiRAM, to overcome the material limitations that have held back RRAM arrays as an artificial intelligence computing platform. Preliminary EpiRAM results have shown many near-ideal properties, including suppressed variability and high endurance. These material- and device-level synapse innovations will be integrated with circuit- and architecture-level techniques toward an RRAM-based artificial intelligence computing platform.

“With vertical innovations across material, device, circuit and architecture,” said Seo, “we’ll pursue the tremendous potential and research needs toward energy-efficient artificial intelligence in resource-constrained hardware systems.”

The team envisions new materials- and device-based intelligent hardware that will enable a variety of applications with profound impacts.

“For instance, the intelligent information processing on power-efficient mobile platforms will boost new markets like smart drones and autonomous robots,” said Seo. “Furthermore, a self-learning chip that learns in near real-time and consumes very little power can be integrated into smart biomedical devices, truly personalizing health care.”

Amanda Stoneman

Senior Marketing Content Specialist, EdPlus


ASU School of Music to host prestigious Bösendorfer and Yamaha USasu International Piano Competitions

December 26, 2018

Forty-three exceptional pianists from around the world will come together at the Arizona State University School of Music on Jan. 13-20 to participate in the ninth Bösendorfer and Yamaha USasu International Piano Competitions.

The Bösendorfer and Yamaha USasu International Piano Competitions are recognized as among the top piano competitions in the world, with prizes including more than $50,000 in cash awards, engagements with The Phoenix Symphony, a recital in Merkin Hall in the Kaufman Music Center in New York, and other recital performance opportunities for the top winners.

Dedicated to the discovery and encouragement of young artists, the competitions are committed to promoting outstanding artists and providing them with solo and orchestral performance opportunities around the globe.

“Our competition has become one of the leading piano competitions in the world today, alongside the Van Cliburn, Leeds and Arthur Rubinstein competitions,” said Baruch Meir, president and artistic director of the competition and associate professor of piano in the ASU School of Music. “Many of our competition winners have gone on to develop major careers. We are proud to assist these young pianists in achieving their dreams while focusing the musical world’s attention on Arizona. Our selected competitors come from some of the world’s leading music institutions, including Juilliard, Yale, Shanghai Conservatory and the Royal College of Music, as well as ASU.”

The competitions are a collaboration of the ASU School of Music, The Phoenix Symphony and the Arizona Young Artist Committee. A total of 280 pianists from 35 countries applied to the 2019 competition, with 43 selected to perform in the semifinal and final rounds. All competition rounds are open to the public.

The opening gala for the competition, at 7:30 p.m. Jan. 13 in ASU’s Katzin Hall, welcomes guest pianist Sofya Gulyak, who won first prize and the Princess Mary Gold Medal at the 16th Leeds International Piano Competition in England.

All solo performances of the Bösendorfer Competition (ages 19-32) will be held at Katzin Hall in the School of Music on Jan. 14, 15 and 17. The final round will be held at the Mesa Arts Center in the Ikeda Theater at 2 p.m. Jan. 20, with finalists playing a concerto with The Phoenix Symphony under conductor Matthew Kasper. The announcement of the winners and the presentation of medals and Bösendorfer awards will immediately follow the performance.

The semifinal and final rounds for the Yamaha Senior and Junior competition will take place Jan. 16 and 18 in Katzin Hall at the ASU School of Music. The winners’ recital and awards ceremony will take place at 7:30 p.m. Jan. 19 in Katzin Hall.

Tickets for all Yamaha and Bösendorfer live solo performances can be purchased at https://pianocompetition.music.asu.edu/tickets. For a complete schedule of all competition events, visit music.asu.edu/events.

There will be a question-and-answer session in ASU’s Katzin Hall from 10:30 a.m.-noon Jan. 19, where the audience can interact with members of the jury. This year’s jury includes Sofya Gulyak, Leeds International Piano Competition gold medalist; Faina Lushtak, Steinway Artist and professor of music and piano performance at Tulane University; Asaf Zohar, Tel Aviv University professor, Israeli pianist and pedagogue; Zhe Tang, vice dean and piano professor at the Shanghai Conservatory of Music; Robert Hamilton, internationally renowned pianist, recording artist and ASU professor; and Baruch Meir, ASU associate professor of piano and Bösendorfer Concert Artist.

This year’s competition includes three new awards — the Mary Jane Trunzo Audience Favorite award of $1,500, with the recipient being selected during the semifinal round by the audience from among the eight Bösendorfer participants; a stipend of $1,500 for the Bösendorfer competition winner, as well as an invitation for a solo recital performance by the Oracle Piano Society of Arizona; and the Menahem Zohar memorial awards of $250, which will be awarded to the most outstanding artistic performances of a classical work in the Yamaha Senior or Junior competition.

“We are pleased to host an international competition of the caliber of the Bösendorfer and Yamaha USasu International Piano Competition at the ASU School of Music,” said Heather Landes, director of the ASU School of Music. “The competitions serve as a springboard for the development of the next generation of young artists and provide us with a reminder of the transformative power of music.”

For more information about the competition, visit pianocompetition.music.asu.edu or contact the competition office by email at pianocompetition@asu.edu or phone at 480-965-8740.

Lynne MacDonald

Communications Specialist, School of Music