Unlocking the potential of AI for homeland security


Adam Cox, director in the Office of Strategy and Policy at the Department of Homeland Security Science and Technology Directorate, notes how as AI becomes more prolific, infrastructure will become more vulnerable. Photo courtesy Hager Sharp

“How can we think differently to do what we're doing now cheaper, more efficiently, more effectively?”

Adam Cox, director in the Office of Strategy and Policy at the Department of Homeland Security Science and Technology Directorate, opened a pivotal discussion with a question that set the tone for this year’s annual meeting of the Center for Accelerating Operational Efficiency, led by Arizona State University.

The Center for Accelerating Operational Efficiency, or CAOE, brought together key figures from the DHS alongside leading artificial intelligence researchers to address the pressing challenges hindering operational effectiveness in a series of panel sessions and presentations.

Together, they underscored the compelling potential of AI-driven solutions in bolstering homeland security operations.

AI-driven solutions for government agencies

In the ever-evolving landscape of national security, the convergence of cutting-edge technology and strategic foresight has become paramount. As part of ASU’s Global Security Initiative, the CAOE’s annual meeting provided a platform for insightful discussions, highlighting both the promises and challenges that lie ahead in harnessing AI to safeguard our nation.

“FEMA seems to be dealing with the world being both on fire and flooding at the same time and having to respond to these events and give out assistance. CISA is assessing the threats due to the use of AI in critical infrastructure,” Cox said.

Cox underlined the escalating threats to national security and the imperative for innovation to keep pace with evolving mission demands. He then delved into various facets of AI integration within DHS, with a keen emphasis on practical applications and the successes the department has already seen with digital tools.

The Department of Homeland Security has been actively exploring ways to enhance current operations, such as the tasks performed by Transportation Security Officers, or TSOs, who scrutinize multiple images on screens. This tedious approach has prompted Cox to pose exploratory questions, such as “What if they didn't have to look at the screen except to resolve alarms?”

From TSO tasks to border crossing management to cyber threat detection, he echoed the urgent need for AI-driven solutions to streamline operations and bolster efficiency while underscoring the ethical and responsible use considerations intertwined with AI deployment. He advocated for transparent and reliable frameworks to uphold integrity in governmental applications.

James Sung, acting deputy chief data officer for the Office of Intelligence and Analysis (I&A), shed light on governance, trust and workforce readiness for AI adoption in federal agencies. Responsible AI development emerged as a focal point, with Sung stressing the importance of ethical considerations, cybersecurity, privacy, civil rights and civil liberties safeguards, and analytic standards in AI deployment. Amid concerns surrounding trust in these new technologies, the dialogue underscored the critical need for a skilled workforce equipped to navigate the intricacies of AI-driven applications. DHS’s AI working groups are weighing a range of questions to determine what work is needed to address these concerns.

“What processes and policies (do) we have to adjust now that a lot of these AI tools are coming about, and what do we have to change? What are the things that we need to do differently, now that some of these tools are available and coming online?”

Those are just a few of the questions Sung shared.

Additionally, evacuation planning emerged as a critical area in which AI could play a pivotal role. Despite challenges in funding and support, researchers emphasized the potential of AI and the Internet of Things to shorten emergency response times.

As researchers chimed in with questions and comments, the conversation expanded to encompass AI bias and limitations, particularly in medical imaging. Concerns were raised regarding adaptive adversaries in AI training, highlighting the imperative for ongoing research and development to address emergent threats.

In the realm of law enforcement, AI emerges as a double-edged sword, promising operational efficiencies while necessitating careful consideration of societal impacts.

The opening AI panel session wrapped up with a clear overarching message: AI holds immense potential to revolutionize homeland security operations. However, its successful integration hinges upon a multifaceted approach encompassing ethical governance, robust cybersecurity measures and a skilled workforce adept at navigating the complexities of AI-driven technologies.

Countering misinformation in the era of large language models

In one presentation, ASU Regents Professor and senior Global Futures scientist Huan Liu outlined the perils posed by disinformation in the digital age, the focus of his current research within CAOE. Liu detailed disinformation’s negative impacts on democracy and public health, shedding light on the urgent need for countermeasures.

As the presentation unfolded, Liu delved into the psychological underpinnings of information consumption, emphasizing the importance of understanding human vulnerabilities. Interdisciplinary collaboration emerged as a recurring theme as Liu continued to highlight its pivotal role in advancing research efforts.

Throughout the session, Liu walked attendees through a range of innovative developments, from linguistic feature analysis to neural models and expert knowledge integration, highlighting a diverse array of tools poised to revolutionize the fight against misinformation.

“But you cannot rely on logic alone. You have to use all the tools in your toolbox, basically,” Liu said.

Liu wrapped up his presentation by stressing the need for ethical endeavors, a message that resonated strongly with the audience, particularly in an era dominated by tech giants. He echoes this call to action in his classrooms, urging students to seek meaningful work aligned with societal good and their own interests.

Looking ahead, the journey toward realizing the full potential of AI for homeland security will undoubtedly be fraught with challenges. Yet, as evidenced by the discussions and innovative solutions put forth by CAOE experts, the future holds promise for a safer and more secure nation empowered by the transformative capabilities of AI harnessed for good.

“We still need to work hard to tame disinformation,” Liu added. “We need to collaboratively search for practical methods for new challenges.”
