Defense Department’s Artificial Intelligence Strategy Prioritizes Deployment Speed
The Pentagon aims to adopt artificial intelligence systems at scale through an agile approach that prioritizes rapid and responsible implementation of the emerging technology, according to guidance published Friday.
The Department of Defense strategy calls for the enterprisewide accelerated deployment of new AI tools while emphasizing continuous experimentation and iterative feedback loops between developers, users, and test and evaluation experts. AI and enhanced data and analytics programs can provide the Pentagon with key advantages, the strategy says, including superior battlespace awareness, adaptive force planning and more efficient enterprise business operations.
The strategy says that the increased use of AI technologies “will introduce technical vulnerabilities” and other risks that “will be managed not by flawless forecasting, but by continuous deployment powered by campaigns of learning.” It prioritizes transparency, knowledge sharing and “early and ongoing real-world feedback.”
The strategy tasks all DOD components with identifying clear leaders for data-related transformation projects as part of an effort to strengthen accountability, and it proposes enterprise-level governance initiatives that prioritize cybersecurity, data management and responsible AI use.
The department’s Chief Digital and Artificial Intelligence Office spearheaded the guidance, which supersedes a 2018 AI strategy and a 2020 data strategy. Craig Martell, the Defense Department’s chief digital and AI officer, said in a statement that the strategy “prioritizes an agile approach to adoption by focusing on the fundamentals of speed, agility, responsibility and learning.”
Defense established the office in 2022 to accelerate the adoption of AI and deliver scalable AI solutions that safeguard against current and emerging threats. The agency is expected to publish an implementation plan for the new strategy, though it did not specify an exact timeline and did not immediately respond to a request for comment.
The new guidance follows an AI executive order the White House published in October that directs the developers of advanced AI models to report their safety test results to the federal government. President Joe Biden invoked the Defense Production Act in mandating the new disclosure requirements and established an AI Safety and Security Board to help ensure the responsible use of AI across government agencies.
Members of the Group of Seven industrialized democracies in October published a voluntary AI code of conduct that calls for enhanced public reporting around capabilities and risks and aims to synchronize global AI regulations with technical standards. AI and security experts told Information Security Media Group that both measures represented significant starting points for the U.S. and international governments to begin collaborating on the safe and secure deployment of AI technologies.