12 million images later, Mars starts to make sense
A new AI model from an ASU lab helps scientists analyze the Red Planet at planetary scale
By Kelly deVos, ASU News
April 30, 2026
Mars has been photographed to death. Orbiters have mapped it at high resolution and low, in visible light and in infrared. Scientists are drowning in data, and the problem isn’t seeing Mars anymore. It’s understanding it.
That’s where Mirali Purohit comes in.
Purohit, a computer science doctoral student in the School of Computing and Augmented Intelligence, part of the Ira A. Fulton Schools of Engineering at Arizona State University, spends her days wrangling a planet’s worth of images into something coherent.
By the time Purohit arrived at ASU in fall 2022, she already knew exactly where she wanted to be. She joined the Kerner Lab, where she conducts research under the supervision of Hannah Kerner, a Schmidt Sciences AI2050 Early Career Fellow and an emerging leader in applications of artificial intelligence, or AI, designed to serve the public good.
“I knew I wanted to do something in the planetary sciences, something outside Earth,” Purohit says. “If we can explore the moon and really see Mars, we can determine what is actually happening there.”
The pairing would lead to the Mars Orbital Model, or MOMO, the first foundation model built specifically for the Red Planet.
If that sounds abstract, the problem it tackles is not. Mars is one of the most heavily imaged objects in the solar system. Orbiters from NASA and other space agencies have been circling it for decades, capturing everything from microscopic rock textures to continent-scale landscapes to thermal signatures invisible to the human eye. The result is a fragmented deluge: different sensors, different resolutions, different wavelengths, all describing the same planet in incompatible ways.
Until now, scientists have faced two imperfect choices: adapting AI models trained on everyday objects like cats, dogs, chairs and tables, or using models built for Earth imagery dominated by forests, oceans and cities. Both approaches fall short because Mars data is fundamentally different from these datasets, limiting how well the models can transfer what they have learned. Custom-built models, meanwhile, are slow and expensive.
Remodeling the Red Planet
The idea behind MOMO was to build one model that can do it all.
Purohit worked with a team that trained MOMO on roughly 12 million Mars images, painstakingly assembled from multiple instruments and missions. The scale alone was daunting, but the process of getting there was even more so. Unlike Earth observation, which benefits from mature data pipelines, software and other resources, Mars research still runs on ad hoc systems and scattered archives.
“We realized that we don’t have the infrastructure for Mars that we have for Earth observation, and we were lacking pipelines, libraries and packages,” Purohit says. “I handled much of the work myself, with guidance from experts. We started with about 40 million samples, but after extensive filtering and cleaning, that number was reduced to roughly 12 million high-quality samples.”
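In code, that kind of cleanup might look something like the sketch below. It is a minimal illustration, not the lab’s actual pipeline; the file layout, thresholds and function names are all assumptions made for the example.

```python
# Hypothetical sketch of the kind of quality filtering described above.
# Thresholds, file layout and names are illustrative, not from MOMO.
import hashlib
from pathlib import Path

import numpy as np
from PIL import Image


def passes_quality_checks(tile: np.ndarray) -> bool:
    """Reject tiles that are blank, saturated or nearly featureless."""
    mean = tile.mean()
    if mean < 5 or mean > 250:  # mostly black or mostly white
        return False
    if tile.std() < 2.0:  # almost no contrast: featureless or corrupted
        return False
    return True


def filter_tiles(tile_dir: str) -> list[Path]:
    """Keep unique tiles that pass the checks; drop exact duplicates."""
    kept, seen = [], set()
    for path in sorted(Path(tile_dir).glob("*.png")):
        tile = np.asarray(Image.open(path).convert("L"))
        digest = hashlib.sha256(tile.tobytes()).hexdigest()
        if digest in seen:
            continue  # exact duplicate of a tile we already kept
        if passes_quality_checks(tile):
            seen.add(digest)
            kept.append(path)
    return kept
```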
What emerged is something closer to a general-purpose “brain” for Mars. Instead of forcing all data into a single format, the team trained separate models on different types of imagery, letting each learn its own representation. Then they merged them into a unified system. The result is a model that can move fluidly between scales, from identifying tiny boulders to mapping vast geological features like landslides.
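To make that design concrete, here is a simplified sketch: one small encoder per imagery type, each projecting into a shared embedding space so downstream tasks can draw on any modality. The layers, sizes and names are assumptions for illustration, not the published MOMO architecture.

```python
# Illustrative only: per-modality encoders feeding a shared embedding space.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Toy encoder for one imagery type (e.g., visible or thermal)."""

    def __init__(self, in_channels: int, embed_dim: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dimensions
        )
        self.project = nn.Linear(32, embed_dim)  # map into shared space

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.project(self.features(x).flatten(1))


# One encoder per data type; channel counts here are placeholders.
encoders = nn.ModuleDict({
    "visible": ModalityEncoder(in_channels=3),
    "thermal": ModalityEncoder(in_channels=1),
})

tiles = torch.randn(8, 3, 128, 128)      # a batch of visible-light tiles
embeddings = encoders["visible"](tiles)  # shape: (8, 256), shared space
```

Keeping the encoders separate lets each imagery type learn features suited to its own instrument before the representations are merged, which is the flexibility the team describes.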
That flexibility matters because Mars, despite its reputation as a barren desert, is surprisingly complex.
“We tend to think of Mars as blank, but it has a lot more diversity because of its history and geology,” Purohit says.
In one region, cone-shaped geological features might signal past water activity; a few kilometers away, those same features can look entirely different. Models trained on one region often fail in another.
MOMO begins to solve that problem by learning from the planet as a whole. Feed it an image, and it can detect craters, map landslides, identify frost and spot boulders. Some tasks, like identifying atmospheric dust, are easy. Others, such as picking out tiny, pixel-scale boulders, still push the limits.
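For a single task, using a model like this might look roughly like the sketch below. The loading function and crater head are placeholders invented for the example; the story does not describe MOMO’s actual interface.

```python
# Hypothetical use of a pretrained planetary encoder for one task.
# load_pretrained_encoder() is a stand-in, not a real MOMO API call.
import torch
import torch.nn as nn


def load_pretrained_encoder(embed_dim: int = 256) -> nn.Module:
    # Stand-in for loading released foundation-model weights.
    return nn.Sequential(
        nn.Conv2d(1, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(32, embed_dim),
    )


encoder = load_pretrained_encoder()
crater_head = nn.Linear(256, 2)  # small task head: no crater vs. crater

tile = torch.randn(1, 1, 128, 128)  # one grayscale orbital tile
with torch.no_grad():               # reuse the encoder without fine-tuning
    logits = crater_head(encoder(tile))
print("crater" if logits.argmax(dim=1).item() == 1 else "no crater")
```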
Still, across benchmarks, MOMO consistently outperforms previous approaches, especially on detailed surface mapping. It doesn’t just see Mars. By capturing features across the entire planet, it helps scientists piece together Mars’ geological history, possibly revealing signs of past water or even life.
From closed labs to open worlds
The goal is bigger than a single AI model or even a single planet. By turning massive, fragmented datasets into something scientists can effectively use, MOMO points toward a future where planetary science happens at scale, accelerating discovery across worlds.
The Kerner Lab and collaborators plan to release not only the model, but the roughly 12 million high-quality images it was trained on, lowering the barrier for researchers everywhere to study Mars without building tools from scratch.
For Purohit, it’s only the beginning. Next, she wants to connect orbital data with rover imagery, stitching together large-scale views of Mars with the tiny patches explored on the ground. In the short term, she’s preparing to defend her doctoral dissertation this summer and will likely continue the work as a postdoctoral researcher. Long term, she wants to take models like MOMO out of the lab and into the real world, where they can continuously process data, adapt and improve.
And if the opportunity ever arises to see Mars up close?
“Oh yeah,” she says, laughing. “My answer is yes. I would go. Why would I not?”
This story originally appeared on ASU News.