Affiliation:
1. George Mason University
2. Perceptronics Solutions, Inc.
Abstract
When interacting with complex systems, the manner in which an operator trusts automation influences system performance. Recent studies have demonstrated that people tend to apply trust broadly rather than exhibiting calibrated, component-specific trust in each part of a system (e.g., Keller & Rice, 2010). While this system-wide trust effect has been established for basic situations such as judging gauges, it has not been studied in realistic settings such as collaboration with autonomous agents in a multi-agent system. This study used a multiple-UAV control simulation to explore how people apply trust to multiple autonomous agents in a supervisory control setting. Participants interacted with four UAVs that used automated target recognition (ATR) systems to identify targets as enemy or friendly. When one of the autonomous agents was inaccurate and performance information was provided, participants 1) were less accurate, 2) were more likely to verify the ATR's determination, 3) spent more time verifying images, and 4) rated the other systems as less trustworthy even though those systems were 100% accurate. These findings support previous work demonstrating the prevalence of system-wide trust and expand the conditions under which system-wide trust strategies are applied. This work suggests that multi-agent systems should provide carefully designed cues and training to mitigate the system-wide trust effect.
Cited by
27 articles.