Presentation + Paper
Instructive artificial intelligence (AI) for human training, assistance, and explainability
6 June 2022
Abstract
We propose a novel approach to explainable AI (XAI) based on the concept of "instruction" from neural networks. In this case study, we demonstrate in a human-subject experiment how a superhuman neural network might instruct human trainees as an alternative to traditional approaches to XAI. Specifically, an AI examines human actions and calculates variations on the human strategy that lead to better performance. Experiments with a JHU/APL-developed AI player for the cooperative card game Hanabi suggest that this technique makes unique contributions to explainability while improving human performance. One area of focus for Instructive AI is the significant discrepancies that can arise between a human's actual strategy and the strategy they profess to use. This inaccurate self-assessment presents a barrier for XAI, since explanations of an AI's strategy may not be properly understood or implemented by human recipients. As an alternative, we propose a method of translating insights from an AI into corrections to human decision making. With neural networks, this allows a direct calculation of the changes in network weights needed to improve the human strategy so that it better emulates an AI that outperforms humans on certain metrics. Subject to constraints (e.g., sparsity), these weight changes can be interpreted as recommended changes to human strategy (e.g., "value A more, and value B less"). Instructions from an AI such as these serve not only to help humans perform better at tasks, but also to help them better understand, anticipate, and correct the actions of an AI.
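To make the weight-correction idea concrete, here is a minimal sketch in Python. It is not the authors' implementation: it assumes a simple linear scoring model over invented Hanabi-style features standing in for both a fitted model of the human's strategy and the expert AI, and it uses proximal gradient descent with an L1 (sparsity) penalty to find a sparse correction to the human model's weights. All feature names, weight values, and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hand-crafted features a Hanabi player might weigh when
# scoring candidate actions. The names are illustrative, not from the paper.
feature_names = ["hint_value", "discard_risk", "play_safety",
                 "card_age", "partner_clue", "deck_count"]
n_features = len(feature_names)

# Toy linear "policies": each scores an action as a weighted sum of features.
w_human = rng.normal(size=n_features)              # fitted model of the human's strategy
w_expert = w_human + np.array([1.5, -1.2, 0.1,     # expert differs mainly in how it
                               0.05, -0.02, 0.03])  # values the first two features

def action_scores(w, X):
    """Score candidate actions (rows of X) under weight vector w."""
    return X @ w

# Sampled game states: each row is the feature vector of one candidate action.
X = rng.normal(size=(200, n_features))

# Find a sparse correction `delta` so that (w_human + delta) reproduces the
# expert's action scores. The L1 penalty zeroes out small changes so only a
# few interpretable weight corrections survive.
delta = np.zeros(n_features)
lr, l1 = 0.01, 0.05
for _ in range(2000):
    err = action_scores(w_human + delta, X) - action_scores(w_expert, X)
    grad = X.T @ err / len(X)          # gradient of the squared-error loss
    delta -= lr * grad
    # Proximal step for the L1 penalty: soft-threshold delta toward zero.
    delta = np.sign(delta) * np.maximum(np.abs(delta) - lr * l1, 0.0)

# Translate the surviving weight changes into "value A more / value B less" advice.
for name, d in sorted(zip(feature_names, delta), key=lambda p: -abs(p[1])):
    if abs(d) > 0.1:
        print(f"Value '{name}' {'more' if d > 0 else 'less'} (delta = {d:+.2f})")
```

In the paper's setting the human model and the expert are neural networks rather than linear scorers, but the recipe sketched here is the same: compute the change in the human model's weights that moves its outputs toward the expert's, and constrain that change (here via sparsity) so it remains readable as advice.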
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Nicholas Kantack, Nina Cohen, Nathan Bos, Corey Lowman, James Everett, and Timothy Endres "Instructive artificial intelligence (AI) for human training, assistance, and explainability", Proc. SPIE 12113, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications IV, 1211308 (6 June 2022); https://doi.org/10.1117/12.2618616
KEYWORDS: Artificial intelligence, Evolutionary algorithms, Neural networks, Human subjects, Systems modeling, Human-computer interaction, Machine learning