Artificial intelligence concept. Brain over a circuit board. HUD future technology digital background. (Getty Images)

WASHINGTON — The Defense Department has created a new task force to better understand how generative artificial intelligence (AI) tools, like large language models, can bolster its innovation efforts, while also being used responsibly, the department announced today. 

“The establishment of Task Force Lima underlines the Department of Defense’s unwavering commitment to leading the charge in AI innovation,” Deputy Secretary Kathleen Hicks said in a statement. “As we navigate the transformative power of generative AI, our focus remains steadfast on ensuring national security, minimizing risks, and responsibly integrating these technologies. The future of defense is not just about adopting cutting-edge technologies, but doing so with foresight, responsibility, and a deep understanding of the broader implications for our nation.”

Task Force Lima will be led by the Pentagon's Chief Digital and AI Office (CDAO) and "will assess, synchronize, and employ generative AI capabilities across the DoD, ensuring the Department remains at the forefront of cutting-edge technologies while safeguarding national security," according to a DoD announcement.

While not a new technology, generative AI has skyrocketed in popularity in the tech space over the past few months thanks to accessible tools like ChatGPT. The technology is trained on huge data sets and can generate audio, text, images and other types of content. Now, DoD is looking at ways generative AI can be used for intelligence gathering and future warfighting.

But while the technology can offer new opportunities for DoD, the department must also consider the risks the technology can expose it to. A memorandum [PDF] from Hicks announcing Task Force Lima says the department aims to use the technology responsibly, and the task force will make policy recommendations on how to do so. 

The CDAO will work alongside the offices of the under secretaries of defense for policy, research and engineering, acquisition and sustainment, and intelligence and security, as well as the office of the DoD chief information officer; each office has specifically assigned roles under the task force.

“The DoD has an imperative to responsibly pursue the adoption of generative AI models while identifying proper protective measures and mitigating national security risks that may result from issues such as poorly managed training data,” Craig Martell, the Pentagon’s chief digital and AI officer, said in the announcement. “We must also consider the extent to which our adversaries will employ this technology and seek to disrupt our own use of AI-based solutions.”

How Generative AI Could Work For DoD

According to the memorandum, lessons learned from the task force will inform the DoD’s Responsible AI Working Council. The document further outlines specific goals and intended outputs from the task force, with timelines. For example, by the second quarter of fiscal 2024, Task Force Lima is expected to provide an assessment of and recommendations on large language model and generative AI use cases, followed by a plan for governance and oversight in the following quarter.

In May, Martell warned of the downsides of generative AI, saying it could become the perfect tool for disinformation. “Yeah, I’m scared to death. That’s my opinion,” he said at the AFCEA TechNet Cyber conference in Baltimore.

Still, the Pentagon is set on finding ways to best use the technology. In April, Maynard Holliday, DoD’s deputy chief technology officer for critical technologies, told Breaking Defense in an interview that the department would host a first-of-its-kind conference on “trusted AI and autonomy” which would, in part, look at the perils and potential of generative AI.

“We’ll definitely be talking about… how going forward we would mitigate the hallucinatory tendencies of LLMs and what we could be doing with respect to making those results of queries to LLMs more trusted,” Holliday said.

Meanwhile, the military services themselves have already started exploring how generative AI can aid them in the future. In June, Air Force Secretary Frank Kendall announced that he had asked the service’s scientific advisory board to study the potential impacts of the technology.

“I’ve asked my Scientific Advisory Board to do two things really with AI. One was to take a look at the generative AI technologies like ChatGPT and think about the military applications of them, to put a small team together to do that fairly quickly,” Kendall said. “But also to put together a more permanent, AI focused group that will look at the collection of AI technologies, quote, unquote, and help us understand them and figuring out how to bring them in as quickly as possible.”