ASU research expands artificial intelligence knowledge

The School of Computing and Augmented Intelligence, part of the Ira A. Fulton Schools of Engineering, is a national leader in artificial intelligence, with its undergraduate program ranked No. 23 by U.S. News & World Report in 2022. Photo courtesy DeepMind on Unsplash

As artificial intelligence research evolves, new advances and technologies regularly make national headlines. In the School of Computing and Augmented Intelligence, part of the Ira A. Fulton Schools of Engineering at Arizona State University, many faculty members are among the AI experts and thought leaders broadening this field.

One of these faculty members is Subbarao Kambhampati, a professor of computer science and global AI thought leader. Kambhampati led a discussion on generative AI at the inaugural School of Computing and Augmented Intelligence Faculty Colloquium, where he described the origins, current status and many implications of this rapidly evolving technology. He explored tools such as ChatGPT, DALL-E and Whisper, and the impact they are having on creativity.

YooJung Choi, an assistant professor of computer science, is also making contributions to the AI field. She is researching probabilistic modeling, an essential component of AI that explores uncertainty in models’ knowledge by explicitly representing it as a probability distribution. Acknowledging uncertainty in these models helps humans build trust in AI technologies.

“For our research, we introduce discrimination patterns, or examples of when AI algorithms show bias,” Choi said. “We show that a large number of these patterns may exist in a probabilistic model and then propose efficient, exact and approximate discrimination pattern miners to find and remove them from probabilistic circuits.”

Her research aims to provide efficient and easy-to-understand auditing of AI models to help verify their fairness, or lack of bias. She and her team can then suggest better algorithms for removing these discrimination patterns to create fairer models.

Choi hopes this research will be used to identify and eliminate discrimination patterns early in the development of probabilistic AI models, allowing researchers to create fairer models from the start.
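For readers unfamiliar with the idea, the sketch below shows roughly what a discrimination pattern looks like in a simple probabilistic model: conditioning a decision on a sensitive attribute shifts its probability beyond a threshold. It is a brute-force toy illustration with a made-up distribution and hypothetical variable names, not the efficient circuit-based miners Choi’s team proposes.

```python
# Toy illustration of a "discrimination pattern": adding a sensitive
# attribute to the evidence shifts the model's decision probability by
# more than a threshold. The joint distribution and names are invented.
from itertools import product

# Joint distribution P(decision, sensitive, context) over binary variables.
joint = {(d, s, c): 1 / 8 for d, s, c in product([0, 1], repeat=3)}
joint[(1, 1, 1)] += 0.10   # skew the distribution so a bias appears
joint[(0, 0, 1)] += 0.10
total = sum(joint.values())
joint = {k: v / total for k, v in joint.items()}

def prob(decision=None, sensitive=None, context=None):
    """Sum the joint table over any unspecified variables."""
    return sum(p for (d, s, c), p in joint.items()
               if (decision is None or d == decision)
               and (sensitive is None or s == sensitive)
               and (context is None or c == context))

def find_discrimination_patterns(threshold=0.05):
    """Brute-force miner: flag (sensitive, context) pairs where knowing
    the sensitive attribute moves P(decision = 1) past the threshold."""
    patterns = []
    for s, c in product([0, 1], repeat=2):
        p_with = prob(decision=1, sensitive=s, context=c) / prob(sensitive=s, context=c)
        p_without = prob(decision=1, context=c) / prob(context=c)
        if abs(p_with - p_without) > threshold:
            patterns.append((s, c, round(p_with - p_without, 3)))
    return patterns

print(find_discrimination_patterns())  # e.g. [(0, 1, -0.143), (1, 1, 0.143)]
```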

“Our school’s exceptional faculty members are constantly striving to innovate in the AI field with dynamic research,” said Ross Maciejewski, school director and a professor of computer science. “Their passion has positioned our school as a national leader in AI and allows us to witness critical advancement in this area firsthand.”

The school is also exploring action languages — specifically, a new language named mA* that is under development by Chitta Baral, a professor of computer science. Action languages in AI describe commands and instructions for machines and analyze how they can carry out requests.

“We’re working to develop a foundation for reasoning about actions in a multiagent scenario, where an agent may perform actions not just to achieve an objective, but also to deceive other agents,” Baral said.

He and his research team are investigating how the mA* action language can support the capabilities of a multiagent domain, in which multiple agents can make decisions at once rather than a single agent making one decision at a time.

The team’s goal in developing this language is to take a first step toward creating scalable and efficient automated reasoning and planning systems in multiagent domains.
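As a rough illustration of the action-language idea, the sketch below represents actions as preconditions and effects and steps two hypothetical agents through a tiny shared world. It is a simplified stand-in for the general concept, not the mA* language itself, and every agent, action and fluent name in it is invented.

```python
# Minimal action-language-style model: a state is a set of facts
# ("fluents"), and an action has preconditions plus add/delete effects.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    agent: str
    preconditions: frozenset            # fluents that must hold
    add: frozenset = frozenset()        # fluents made true
    delete: frozenset = frozenset()     # fluents made false

def apply(state, action):
    """Return the successor state, or leave the state unchanged if the
    action's preconditions are not satisfied."""
    if not action.preconditions <= state:
        return state
    return (state - action.delete) | action.add

# Two hypothetical agents acting in sequence on a shared world state.
state = frozenset({"door_locked", "alice_has_key"})
plan = [
    Action("unlock_door", "alice",
           preconditions=frozenset({"door_locked", "alice_has_key"}),
           add=frozenset({"door_open"}), delete=frozenset({"door_locked"})),
    Action("enter_room", "bob",
           preconditions=frozenset({"door_open"}),
           add=frozenset({"bob_in_room"})),
]
for action in plan:
    state = apply(state, action)
    print(action.agent, action.name, "->", sorted(state))
```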

Empowering the next generation

In addition to faculty, ASU students are key contributors to leading AI research. Computer science graduate students Kaize Ding and Yancheng Wang are working closely with Yingzhen Yang, an assistant professor of computer science, and Huan Liu, a Regents Professor of computer science, to conduct research on graph contrastive learning, or GCL, a technique for learning generalizable graph representations by contrasting augmented views of an input graph. In computer science, a graph is a set of data points, called nodes, connected to one another by links called edges.

This technique is used to improve the performance of self-supervised representation learning of graph neural networks, or GNNs, which are a family of deep learning models designed for graph-structured data.
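As a minimal picture of what graph contrastive learning involves, the sketch below encodes two randomly augmented views of a toy graph with a small one-layer GNN and applies an InfoNCE-style loss that pulls each node’s two embeddings together. It is a generic illustration under simplifying assumptions, not the S3-CL framework described in the next paragraph, and it assumes PyTorch is installed.

```python
# Generic graph contrastive learning sketch: contrast node embeddings
# from two augmented views of the same toy graph.
import torch
import torch.nn.functional as F

def drop_edges(adj, drop_prob=0.2):
    """Randomly remove edges to create an augmented view of the graph."""
    mask = (torch.rand_like(adj) > drop_prob).float()
    return adj * mask

class OneLayerGNN(torch.nn.Module):
    """Average each node's neighborhood features, then project them."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        adj = adj + torch.eye(adj.size(0))            # add self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return torch.relu(self.linear((adj @ x) / deg))

def contrastive_loss(z1, z2, temperature=0.5):
    """InfoNCE-style loss: node i in view 1 should match node i in view 2."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

# Toy graph: 5 nodes with random features and random edges.
x = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
encoder = OneLayerGNN(8, 16)

z1 = encoder(x, drop_edges(adj))   # view 1
z2 = encoder(x, drop_edges(adj))   # view 2
loss = contrastive_loss(z1, z2)
loss.backward()                    # gradients for self-supervised training
print(float(loss))
```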

The team is developing a framework — called Simple Neural Networks with Structural and Semantic Contrastive Learning, or S3-CL — to address the limitations of unsupervised GCL and better capture global knowledge within a graph. The new framework has been shown to outperform other unsupervised GCL methods.

Ivan Zvonkov, an incoming doctoral student who will join computer science Assistant Professor Hannah Kerner’s lab in the fall, also leads research that uses machine learning and remote sensing data to produce predictive maps of geographic regions. His work with Kerner extends into a project with NASA Harvest, in which this mapping is used to inform Indigenous farmers in Maui County, Hawaii, and help combat local food insecurity.
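At a high level, this kind of mapping can be thought of as training a classifier on labeled pixels and then predicting a label for every pixel in a region. The toy sketch below shows that idea with random stand-in data and scikit-learn; it is not the NASA Harvest pipeline, and the features, labels and numbers are all hypothetical.

```python
# Toy "map from pixels" sketch: fit a classifier on labeled pixel
# features, then predict a label for every pixel in a small region.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Pretend training data: per-pixel spectral features with known labels
# (e.g., 1 = cropland, 0 = not cropland) gathered from field surveys.
X_train = rng.random((500, 4))                 # 4 spectral bands per pixel
y_train = (X_train[:, 0] > 0.5).astype(int)    # synthetic ground truth

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# "Map" a 10 x 10 region by predicting a label for each of its pixels.
region_pixels = rng.random((10 * 10, 4))
predicted_map = model.predict(region_pixels).reshape(10, 10)
print(predicted_map)
```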

Leading scientific exchange

One of the forums for sharing innovative research in the AI field is the Association for the Advancement of Artificial Intelligence, or AAAI, conference, which fosters discussion between researchers, practitioners, scientists, students and engineers spanning an array of AI disciplines.

The 2023 AAAI conference took place in Washington, D.C., and included presentations from the aforementioned faculty members and students.

Kambhampati spoke at the conference’s Bridge: AI and Law meeting, where he discussed the need for “explainability” and transparency in AI technologies. Additionally, he co-chaired the New Faculty Highlights program, which spotlights promising AI professionals early in their careers, such as Choi, who was recognized in the session.

In addition to his involvement, Kambhampati’s students presented four research papers at the Representation Learning for Responsible Human-Centric AI workshop and one at the Artificial Intelligence for Cyber Security workshop.

Paulo Shakarian, an associate professor of computer science, collaborated with Baral in creating a half-day tutorial session. The researchers showcased advances in neurosymbolic reasoning, or NSR, an emerging field of AI that combines ideas from computational logic and deep learning.

“Some people think that NSR is going to be an important part of achieving artificial general intelligence,” said Shakarian, who presented the mini course with colleagues from Argentina’s Universidad Nacional del Sur and the U.S. Defense Advanced Research Projects Agency, or DARPA.

The tutorial session aimed to educate researchers looking to understand the current landscape of NSR research and attract those looking to apply it in areas such as natural language processing and verification.

Participants explored an overview of NSR frameworks, neurosymbolic approaches for deduction, ways of combining NSR with logic, and the applications, challenges and opportunities that this field faces.
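The snippet below gives a minimal flavor of that combination: stand-in neural models supply soft, uncertain “facts,” and a symbolic rule combines them. It is an illustration only, not any of the frameworks covered in the tutorial, and all names and numbers are invented.

```python
# Minimal neurosymbolic flavor: neural scores feed a symbolic rule.
import math

def neural_score(features, weights, bias):
    """Stand-in for a trained neural model: a confidence in [0, 1]."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1 / (1 + math.exp(-z))

# Soft "facts" produced by (pretend) neural perception models.
is_vehicle = neural_score([0.9, 0.2], [2.0, -1.0], 0.1)
is_moving = neural_score([0.7, 0.8], [1.5, 1.0], -0.5)

# Symbolic rule: traffic(x) :- vehicle(x), moving(x).
# The scores are combined with a product t-norm, a common fuzzy-logic choice.
traffic = is_vehicle * is_moving
print(f"vehicle={is_vehicle:.2f}, moving={is_moving:.2f}, traffic={traffic:.2f}")
```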

“AAAI is one of the top, if not the top, scientific conference in AI,” Shakarian said, “so it was quite an honor to hold a session to present our tutorial there.”
