Christopher Collins, with the Office of the Under Secretary of Defense for Research and Engineering, moderates the summit's morning panel. Photo credit Mike Morgan
The inaugural University of Maryland (UMD) MATRIX Lab Autonomy Summit brought together leaders, experts, and professionals to explore the most pressing issues surrounding autonomous systems and artificial intelligence (AI).
On November 14, more than 200 attendees representing dozens of organizations across industry, government, and academia gathered at the USMSM SMART Building in California, Maryland. The day’s presentations, panels, and breakout sessions explored building trust in autonomous systems, bridging gaps between technology development and workforce readiness, and accelerating the innovative use of autonomous technologies across multiple domains. These discussions are now being compiled into actionable takeaways.
The MATRIX Lab organized the event with the support of local government and nonprofit organizations, as well as several UMD Clark School of Engineering departments and units. The lab is led by Dr. Reza Ghodssi (ECE/ISR/Fischell Institute), the UMD Herbert Rabin Distinguished Chair in Engineering and the MATRIX Lab Executive Director of Research and Innovation.
“It was worth the time and effort to bring world-class expertise to the region,” Dr. Ghodssi said. “By connecting leaders and experts across multiple sectors, we were able to discuss current and developing issues related to AI and autonomy, identify gaps in the ecosystem, and develop solutions to close those gaps. We will be sharing those takeaways with our partners, so we can all work together to improve and advance this critical field.”
Major Themes
Autonomous Systems Are Critical to National Defense
Maynard Holliday, with the Office of the Under Secretary of Defense for Research and Engineering, delivers the summit's morning keynote address. Photo credit Mike Morgan
In the morning keynote address, Mr. Maynard Holliday discussed the Department of Defense’s (DoD) use of autonomous systems. Mr. Holliday is the Assistant Secretary of Defense for Critical Technologies in the Office of the Under Secretary of Defense for Research and Engineering.
His presentation was titled, “Criticality of Emerging Technologies in Defense, Our Nation and the World Today.” In it, he discussed how adding autonomy and AI to systems can make them safer, more reliable, and more cost-effective. Mr. Holliday cited the DoD’s autonomous helicopters and AI-enabled uncrewed undersea vehicles as examples. He also outlined the need to consider ethics and address trust issues when adopting advanced technologies.
“In the defense sector, AI helps us enhance security and anticipate threats. We want to innovate these technologies to further support us, but as we make progress, we need to ensure the tech is serving humanity in an ethical way,” Mr. Holliday said. “If it’s not trustworthy and transparent, no one will want to use it.”
AI/Autonomy Keeps Humans Out of Harm’s Way
Kingsley Fregene, Director of Technology Integration at Lockheed Martin, delivers the summit's afternoon keynote address. Photo credit Mike Morgan
In his afternoon keynote address, Dr. Kingsley Fregene, the Director of Technology Integration at Lockheed Martin, discussed how humans can use AI/Autonomy to stay safe. His presentation was titled, “AI and Autonomy After Next.” Dr. Fregene mentioned how these technologies can be deployed across domains, including land, air, sea, and space, to detect dangers and run missions autonomously so people aren’t exposed to threats.
He also projected the steps to get to these human-AI teams: in State 1, more AI is deployed on traditional systems and platforms; in State 2, large-scale intelligent agents execute distributed Observe-Orient-Decide-Act loops at the tactical edge; in State 3, collaborative human-AI teams undergo high-confidence Test, Evaluation, Validation and Verification; and in State 4, teams of composable humans and trustworthy AI agents work together.
“It’s important to work toward human-AI teaming, and critical to do so at a pace that is safe and sustainable,” Dr. Fregene said. “Having discussions like these with such a large, diverse audience helps us make progress while addressing both current and emerging concerns and issues.”
People Won’t Use AI/Autonomy if They Don’t Trust It
Throughout the day, speakers, panelists, and attendees spoke on the issue of trusting autonomous systems. Many participants agreed that humans would need to monitor these systems until the systems are deemed reliable, and that robust testing and validation would help increase trust. They also believed people will put more trust in the decisions AI makes if the decision-making process is transparent.
Collaboration is Key
University of Maryland faculty members Dinesh Manocha and Ming Lin speak with summit morning keynote speaker Maynard Holliday. Photo credit Mike Morgan
Industry, government, and academia all have roles to play in advancing AI and autonomous technologies. Academia educates the future workforce and advances cutting-edge research; industry realizes the research vision and builds the technologies; and government then deploys and adopts them. Collaborative research across all three sectors is critical.
The Autonomy Summit’s breakout sessions offered a unique opportunity to hear different perspectives on topics like trusting AI, operationalizing autonomy, and educating the future workforce. Dr. Ming Lin, the Dr. Barry Mersky and Capital One E-Nnovate Endowed Professor at the University of Maryland, moderated one of these sessions.
“It’s critically important for academic institutions to include industry and government in conversations about autonomy and AI research and development,” said Dr. Lin. “Collaboration through events like the summit is essential to advancing capabilities and transforming critical areas, because together we can tackle complex challenges and drive innovation.”
Conclusion and Next Steps
The summit laid out what’s next for the rapidly evolving fields of AI and autonomy and addressed the critical challenges of trust and ethics, establishing a strong foundation for continued collaborative, innovative research among industry, government, and academia.
The MATRIX Lab team is developing a report summarizing the summit and outlining an action plan for assessing and ensuring trust in autonomous systems, closing the gap between education and the AI/Autonomy workforce, and scaling the development of autonomous systems while maintaining ethical and safety standards. This report will be distributed to MATRIX Lab partners in research, education, workforce development, and policymaking.
The MATRIX Lab Autonomy Summit was sponsored by BlueHalo, the University of Maryland Clark School of Engineering and several of its departments and units, and the St. Mary's County Department of Economic Development. See all of the sponsors here: https://autonomy-work-summit.umd.edu/
December 9, 2024