Affiliation:
1. Sonalysts, Inc. – Human-Autonomy Interaction Laboratory
2. University of Maryland – School of Public Policy
Abstract
Artificial Intelligence (AI) is often viewed as the means by which the intelligence community will cope with the increasing amount of information available to it. Trust is a complex, dynamic phenomenon that drives the adoption (or disuse) of technology. We conducted a naturalistic study with intelligence professionals (planners, collectors, analysts, etc.) to understand the dynamics of their trust in AI systems. We found that, on a long enough time scale, trust in AI self-repaired after incidents in which trust was lost, usually based merely on the assumption that the AI had improved since participants last interacted with it. Similarly, trust in AI continued to increase over time after incidents in which trust was gained. We termed this general trend “buoyant trust in AI”: trust in AI tends to rise over time, regardless of previous interactions with the system. Key findings are discussed, along with possible directions for future research.
Subject
General Medicine, General Chemistry
Cited by
3 articles.