Author:
Divya Srivastava, J. Mason Lilly, Karen M. Feigh
Abstract
AI-advised decision making is a form of human-autonomy teaming in which an AI recommender system suggests a solution to a human operator, who is responsible for the final decision. This work examines the importance of judgement and shared situation awareness between humans and automated agents when the two interact through a recommender system. We propose manipulating both human judgement and shared situation awareness by providing the human decision maker with the relevant information that the automated agent (AI), in the form of a recommender system, uses to generate possible courses of action. This paper presents the results of a two-phase between-subjects study in which participants and a recommender system jointly make a high-stakes decision. We varied the amount of relevant information available to the participant, the assessment technique applied to the proposed solution, and the reliability of the recommender system. Findings indicate that this technique of supporting the human's judgement and establishing shared situation awareness is effective in (1) boosting the human decision maker's situation awareness and task performance, (2) calibrating their trust in AI teammates, and (3) reducing overreliance on an AI partner. Additionally, participants were able to pinpoint the limitations and boundaries of the AI partner's capabilities, discerning situations in which the AI's recommendations could be trusted from those in which they should not rely on the AI's advice. This work proposes and validates a model-agnostic way to provide transparency into recommender systems that can support the human decision maker and lead to improved team performance.
Funder
Sandia National Laboratories
Publisher
Springer Science and Business Media LLC