Affiliation:
1. Monash University, Caulfield East, Victoria, Australia
Abstract
User trust plays a key role in determining whether users rely on autonomous computer applications, and it will be central to the acceptance of emerging AI applications such as optimisation. Two important factors known to affect trust are system transparency, i.e., how well the user understands how the system works, and system performance. In the case of optimisation, however, it is difficult for the end-user to understand the underlying algorithms or to judge the quality of the returned solution. Through two controlled user studies, we explore whether users can better calibrate their trust in the system when: (a) they are given feedback on the system's operation in the form of visualisations of intermediate solutions and their quality; (b) they can interactively explore the solution space by modifying the solution returned by the system. We found that showing intermediate solutions can lead to over-trust, while interactive exploration leads to more accurately calibrated trust.
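To make condition (a) concrete, the kind of feedback described, an optimiser exposing its intermediate solutions together with their quality, can be pictured with a minimal Python sketch. This is an illustrative sketch only, assuming a toy objective and a simple hill-climbing search; the names (quality, hill_climb) and the objective are hypothetical and do not come from the study's actual system.

    import random

    def quality(solution):
        # Toy objective: negative squared distance from a hidden optimum.
        # (Hypothetical stand-in for a real optimisation problem.)
        return -sum((x - 3.0) ** 2 for x in solution)

    def hill_climb(dims=2, steps=200, seed=0):
        """Hill climber that yields each intermediate solution with its
        quality, so a user interface could visualise progress over time."""
        rng = random.Random(seed)
        current = [rng.uniform(-10, 10) for _ in range(dims)]
        best_q = quality(current)
        yield list(current), best_q          # initial solution
        for _ in range(steps):
            candidate = [x + rng.gauss(0, 0.5) for x in current]
            q = quality(candidate)
            if q > best_q:
                current, best_q = candidate, q
                yield list(current), best_q  # expose each improvement

    if __name__ == "__main__":
        for step, (sol, q) in enumerate(hill_climb()):
            print(f"step {step:3d}  quality {q:8.3f}  solution {sol}")

Streaming (solution, quality) pairs rather than only the final answer is what lets a user watch the search converge, which is the feedback mechanism the first study condition evaluates.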
Funder
ICT Centre for Excellence Program
Publisher
Association for Computing Machinery (ACM)
Subject
Human-Computer Interaction
References
68 articles.
Cited by
7 articles.