Presentation
DAGR, SHIELD, and the DoD responsible AI toolkit
10 June 2024
Rachel Rajaseelan
Abstract
The CDAO's Responsible AI team focuses on operationalizing the DoD AI Ethical Principles, sustaining the DoD's tactical edge through concrete actions, processes, and tools. This presentation provides a deep dive into a key piece of the DoD's approach to Responsible AI: the Responsible AI Toolkit. The Toolkit is a voluntary process through which AI projects can identify, track, and mitigate RAI-related issues (and capitalize on RAI-related opportunities for innovation) via the use of tailorable and modular assessments, tools, and artifacts. The Toolkit rests on the twin pillars of the SHIELD Assessment and the Defense AI Guide on Risk (DAGR), which holistically address AI risk. The Toolkit enables risk management, traceability, and assurance of responsible AI practice, development, and use.
Conference Presentation
© (2024) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Rachel Rajaseelan "DAGR, SHIELD, and the DoD responsible AI toolkit", Proc. SPIE 13051, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications VI, 130510M (10 June 2024); https://doi.org/10.1117/12.3029875