Affiliation:
1. Oregon State University, Corvallis, OR
Abstract
Explainable AI is growing in importance as AI pervades modern society, but few have studied how explainable AI can directly support people trying to assess an AI agent. Without a rigorous process, people may approach assessment in ad hoc ways, leading to the possibility of wide variations in assessments of the same agent due only to variations in their processes. After-Action Review (AAR) is a method some military organizations use to assess human agents, and it has been validated in many domains. Drawing upon this strategy, we derived an After-Action Review for AI (AAR/AI) to organize the ways people assess reinforcement learning agents in a sequential decision-making environment. We then investigated what AAR/AI brought to human assessors in two qualitative studies. The first investigated AAR/AI to gather formative information; the second built upon those results and also varied the type of explanation (model-free vs. model-based) used in the AAR/AI process. Among the results were the following: (1) participants reported that AAR/AI helped them organize their thoughts and think logically about the agent, (2) AAR/AI encouraged participants to reason about the agent from a wide range of perspectives, and (3) participants were able to leverage AAR/AI with the model-based explanations to falsify the agent's predictions.
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Human-Computer Interaction
Cited by: 7 articles.