The rise of human-machine teams: Q&A with Jamie Gorman
Director of CHART shares his optimism about human-robot collaboration designed with psychology and social science in mind
Jamie Gorman recalls taking a psychology course as an undergrad that made him think, “Wow, this is really cool because this is science, but it's about questions that might not really have one right answer. It’s about humans.”
Today, Gorman’s career focuses on applying science to those human questions as director of the Center for Human, Artificial Intelligence and Robot Teaming (CHART) at Arizona State University. Part of ASU’s Global Security Initiative, CHART unites psychology, social science, computer science and robotics to develop more effective human-machine teams.
CHART doesn't just study AI, robots or people in isolation; it studies the whole system. That new science is a key innovation that falls squarely in line with GSI's vision.
“The research is high stakes, but it's also fun and innovative,” Gorman said. “That's the culture of GSI. They appreciate having a clear and innovative vision for what you want to do in this area.”
In this Q&A, Gorman discusses CHART’s challenges and successes, and how he envisions the future of human-machine teamwork.
Question: Why is this work important to society?
Answer: The AI and robotics we're seeing now are often developed in closed laboratories, cut off from the human element that will actually work with these machines. For example, different users will use an AI, like ChatGPT, in different ways and have different experiences. Those experiences can be productive and exciting, or they might be disappointing and frustrating, because the human interaction component was not taken into consideration at the outset. That's what CHART is trying to address. The center looks at computer science, but it also considers psychology and social science to understand how humans and machines work together as a system. We want to develop technologies that team effectively with humans.
Q: In recognition of GSI’s 10-year anniversary, what is something you consider one of the center’s biggest successes?
A: Some of the areas where we are having the most impact are training, instruction and education. For example, we develop team measurement systems that are essentially sensors that can detect things like where people are looking, their heart rate or what they're saying to each other. These team measurement systems can help military instructors, for example, increase the pace and scale of training and make more effective use of their training resources. We do things in K-12 education as well. In a lot of these situations, one instructor or one teacher is responsible for observing learning across a large number of students or trainees. What we want to do is develop technologies that these teachers and instructors can team with to expand their capability to provide effective instruction at that scale.
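To make that concrete, here is a minimal sketch of the kind of triage such a system might compute from its sensor streams. All names, fields and thresholds below are illustrative assumptions, not CHART's actual software:

```python
# A minimal, hypothetical sketch of a team measurement pipeline.
# Field names and thresholds are illustrative, not CHART's actual system.
from dataclasses import dataclass

@dataclass
class TraineeSample:
    trainee_id: str
    gaze_on_task: bool      # e.g., from an eye tracker
    heart_rate_bpm: float   # e.g., from a wearable sensor
    spoke_recently: bool    # e.g., from speech-activity detection

def flag_for_instructor(samples: list[TraineeSample],
                        hr_threshold: float = 110.0) -> list[str]:
    """Return trainee IDs an instructor may want to check on first."""
    flags = []
    for s in samples:
        possibly_overloaded = s.heart_rate_bpm > hr_threshold
        possibly_disengaged = not s.gaze_on_task and not s.spoke_recently
        if possibly_overloaded or possibly_disengaged:
            flags.append(s.trainee_id)
    return flags

# One instructor, many trainees: the system narrows attention
# rather than replacing the instructor's judgment.
print(flag_for_instructor([
    TraineeSample("t1", gaze_on_task=True, heart_rate_bpm=82, spoke_recently=True),
    TraineeSample("t2", gaze_on_task=False, heart_rate_bpm=125, spoke_recently=False),
]))  # -> ['t2']
```

The point of a sketch like this is the division of labor: the machine watches everything all the time, and the human instructor decides what the flags mean.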
Q: How are students involved in the center’s research?
A: We have over 75 affiliates. That is primarily faculty, but a good chunk of them are student affiliates, including graduate students as well as undergrads. CHART also has a number of activities students can get involved in. We typically hold one to two workshops per year that allow students, faculty and researchers from outside of ASU — like Department of Defense funders — to get together for one to two days to exchange ideas, talk about how they do research and develop new research directions that are at the center of this new science of human-AI-robot teaming.
Last spring we held a workshop on human-animal-machine teaming, led by Heather Lum, a CHART affiliate and assistant professor in human systems engineering, where we brought together experts from within and outside ASU. Think about search and rescue dogs working with humans, combined with human-machine teaming. At that workshop, we divided into groups composed of external researchers, ASU faculty and graduate students, and each group wrote a white paper about a novel application for human-animal-machine teaming. I think the students found it especially rewarding to see how research develops and to get an experience they just can't get inside a classroom.
Q: If someone gave your center $100 million, what would you do with it?
A: I'm a professor of human systems engineering, so this answer might be a little biased towards my area, but I would establish our center as the place where we're developing novel measures for the interactive experience of humans working with these technologies. The interactive experience is central. That's the real-time experience of actually working with the technology, whether you're standing, sitting or driving. If we can measure those experiences and become the place known for that, that's a new way of doing human factors for user-centered design. We need an approach that combines the unique personal experience of a user with system performance outcomes to optimize the system at the level of human-machine teaming.
Think about driving in a car that has a lot of automation in it. You're holding the steering wheel and it’s kind of jerking in your hands and you have this state where you're not fighting it, but it's a little frustrating. That's the interactive experience I'm talking about. If we can get right to that point of unique user experience and system performance, we can do things like compute the influence that the vehicle is having on the situation versus the influence the human is having on the situation and try to harmonize that interactive state in real time. That's the vision.
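As a toy illustration of that influence idea, suppose the driver's and the automation's steering contributions can be logged separately, as they can in many drive-by-wire testbeds. One could then estimate each side's share of the corrective effort and nudge the automation toward balance. This is a sketch of the concept Gorman describes, not his published method:

```python
import numpy as np

# Toy illustration of the "influence" idea (a sketch of the concept,
# not a published CHART method): estimate how much of the recent
# steering effort came from the automation versus the driver, then
# nudge the automation's gain toward a balanced interactive state.

def automation_share(driver_torque: np.ndarray, auto_torque: np.ndarray) -> float:
    """Fraction of total corrective effort contributed by the automation."""
    total = np.abs(driver_torque).sum() + np.abs(auto_torque).sum()
    return float(np.abs(auto_torque).sum() / total) if total > 0 else 0.5

def harmonize(gain: float, share: float, target: float = 0.5,
              rate: float = 0.1) -> float:
    """Move the automation gain toward the target share of influence."""
    return float(np.clip(gain - rate * (share - target), 0.0, 1.0))

# If the car is dominating (share well above 0.5), back off its gain
# slightly in the next control window instead of "fighting" the driver.
driver = np.array([0.1, 0.2, 0.1])   # recent driver torque samples
auto = np.array([0.8, 0.9, 0.7])     # recent automation torque samples
share = automation_share(driver, auto)
print(round(share, 2), round(harmonize(0.8, share), 3))  # 0.86 0.764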
Q: What are three of the more pressing challenges in your field? How is your center addressing them, if applicable?
A: For me, the most pressing challenge is failing to take the human into consideration in the design and development of emerging technologies. The way we are addressing that is by incorporating humans early and often in the development of these systems. We use the word “stakeholders” to talk about the developers and the industries, but stakeholders are also people like instructors, teachers or drivers who have to use these technologies in their everyday lives. Those types of stakeholders have been neglected in the development of these technologies.
We use a lot of qualitative methods, like interviewing instructors and teachers as we develop our metrics and technologies, to really understand what they need and want from these systems from the outset, in addition to what the funders and developers want. Another thing we do is a lot of human subjects research. At ASU there's a participant pool where students can come participate in experiments, or we can pay people to participate. We can simulate emerging technologies and have humans interact with them to see what's effective, safe, reasonable and satisfying. So human subjects research is key, and in it we try to incorporate, as much as possible, realistic situations or scenarios that a user would encounter.
For example, in a recent study we had a human participant working with an AI agent. This was an AI pilot and a human photographer, and they had to work together with a navigator, which could be human or AI, to photograph reconnaissance targets. At various points throughout these 40-minute missions, we injected moments where the AI system would overly anticipate what the user wanted to do, fail to understand what the user wanted to do, or disrupt their communication channels, to provide a realistic context for human behavior with these systems. We introduce these realistic scenarios to get as much insight as possible into how they impact the user's experience and satisfaction from the outset.
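That design amounts to a scripted timeline of perturbations, or "injects." Here is a minimal sketch of how such a schedule might drive a simulation; the event names and times are invented for illustration, not taken from the actual study:

```python
# Hypothetical inject schedule for a 40-minute mission; the events mirror
# the kinds of disruptions described above, but names/times are invented.
INJECTS = {
    8: "ai_over_anticipates_user_intent",
    17: "ai_misunderstands_user_request",
    29: "comms_channel_disrupted",
}

def run_mission(duration_min: int = 40) -> None:
    for minute in range(duration_min):
        if minute in INJECTS:
            print(f"t={minute:02d} min  inject: {INJECTS[minute]}")
        # ...one step of the normal reconnaissance-photography task here...

run_mission()
```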
Q: What about the future? What are emerging challenges you foresee in your field and how is your center preparing for them?
A: One challenge is that human subjects research is expensive and takes a long time. Some government agencies might be moving away from human subjects research toward more simulation studies. I think that's a big challenge, and I think it's a mistake. There's something unique about a human's experience and context-sensitive behavior that I don't think can be simulated on a computer. One way of addressing that challenge is to make the experiments as rich as possible in representing the realistic experiences users could have, so that we don't have to collect as much human subjects data.
Another challenge is developing measures and metrics for human-machine teaming, because very few people take the perspective we take at CHART of trying to incorporate the states of the machine as well as the states of the human into team metrics. Most approaches will build a model or metric for the machines and assess the system that way, or build a metric about the user's mental model of the machine and assess the system that way. But what we're measuring, at its core, is human-machine teaming.
Q: Can you tell me about your center’s partnerships and how they propel your research?
A: There’s a project called the Space Challenge that encapsulates our novel approach to understanding human, AI and robot teaming for addressing pressing societal challenges. In the Space Challenge, we worked with astronauts, including Cady Coleman. We interviewed astronauts and talked to space geologists to develop space-teaming scenarios focused on exploration and mining operations on the moon and Mars to restore Earth’s energy supply. It was really cool teaming with astronauts and space geologists, and it led to a novel approach for measuring system resilience in futuristic space-teaming scenarios across human operators and robots. This research was recently published with Nancy Cooke and her lab, as well as several of my students.
Q: Is there anything on the horizon you’re optimistic about? Either positive trends in the field or solutions-focused work your center is doing that give you hope?
A: I'm most optimistic about the various projects that involve human-machine teaming in classrooms and DoD training environments. We have a number of projects we're working on. One is with the Air Force Research Lab, where we are developing the Team Dynamics Measurement System. It's already being piloted in the U.S. Air Force School of Aerospace Medicine's Critical Care Air Transport Team training. It's a computer program with an interface that works with an instructor and gives them feedback by observing the communications of these military teams and assessing the quality of communication. This is something an instructor can work with to analyze a lot of data in a short amount of time, and it enables them to teach more, and more effectively.
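One way to imagine scoring communication quality automatically, purely as an illustration (the article does not detail the system's actual metrics), is closed-loop communication: the rate at which callouts receive readbacks.

```python
# Illustrative only: score "closed-loop" communication, i.e., how often a
# callout is acknowledged with a readback. The real Team Dynamics
# Measurement System's metrics are not detailed in this article.
def closed_loop_rate(utterances: list[tuple[str, str]]) -> float:
    """utterances: (speaker, kind) pairs, kind in {'callout', 'readback'}."""
    closed = callouts = pending = 0
    for _speaker, kind in utterances:
        if kind == "callout":
            callouts += 1
            pending += 1
        elif kind == "readback" and pending > 0:
            pending -= 1
            closed += 1
    return closed / callouts if callouts else 1.0

print(closed_loop_rate([
    ("lead", "callout"), ("medic", "readback"), ("lead", "callout"),
]))  # 0.5: one of two callouts was acknowledged
```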
Another is with DARPA, where we're building a biobehavioral team dynamics measurement system. We're using sensor suites that do neurophysiological sensing like EEG, as well as eye tracking, ECG, skin conductance, respiration and communication. It's a big sensor set, but it's not unlike the sensor set on the dashboard of a new car. As part of the system, we're also developing a team coordination dynamics computer, the key innovation that will allow us to measure interactive states in any DoD training environment in real time. It works with an AI agent to optimize performance prediction along different dimensions valued by stakeholders such as military leadership, soldiers and training staff.
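For a flavor of what one interactive-state signal could look like, here is a sketch of physiological synchrony between two teammates, computed as a windowed correlation of heart-rate streams. This is an assumption-laden illustration, not the DARPA system itself, which fuses many more channels:

```python
import numpy as np

# Sketch of one possible interactive-state signal: rolling physiological
# synchrony between two teammates. The actual system fuses many channels
# (EEG, eye tracking, ECG, skin conductance, respiration, communication).
def windowed_synchrony(hr_a: np.ndarray, hr_b: np.ndarray,
                       win: int = 30) -> np.ndarray:
    """Rolling Pearson correlation of two heart-rate streams."""
    out = []
    for t in range(win, len(hr_a) + 1):
        a, b = hr_a[t - win:t], hr_b[t - win:t]
        out.append(np.corrcoef(a, b)[0, 1])
    return np.array(out)

# Simulated streams that share a common trend, plus individual noise.
rng = np.random.default_rng(0)
trend = np.cumsum(rng.normal(size=120))
sync = windowed_synchrony(trend + rng.normal(size=120),
                          trend + rng.normal(size=120))
print(round(float(sync[-1]), 2))  # values near 1 suggest coordinated states
```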