Authors
Christian Lebiere, Leslie M. Blaha, Corey K. Fallon, Brett Jefferson
Abstract
Trust calibration for a human–machine team is the process by which a human adjusts their expectations of the automation's reliability and trustworthiness; adaptive support for trust calibration is needed to engender appropriate reliance on automation. Herein, we leverage an instance-based learning ACT-R cognitive model of the decisions to obtain and rely on an automated assistant for visual search in an unmanned aerial vehicle (UAV) interface. The cognitive model closely matches human performance on predictive power statistics measuring reliance decisions, and it yields an internal estimate of automation reliability that mirrors human subjective ratings. The model is able to predict the effect on human trust in automation of various potential disruptions, such as environmental changes or particular classes of adversarial intrusions. Finally, we consider how model predictions could be used to improve automation transparency in ways that account for human cognitive biases, in order to optimize the bidirectional interaction between human and machine by supporting trust calibration. The implications of our findings for the design of reliable and trustworthy automation are discussed.
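The instance-based learning mechanism the abstract invokes can be sketched compactly. The Python below is an illustrative toy, not the authors' ACT-R model: it stores experienced automation outcomes as memory instances, computes ACT-R-style base-level activations with noise, blends them into an internal reliability estimate, and relies on the aid when that estimate clears a threshold. The class name IBLRelianceAgent and all parameter values (decay, noise, temperature, reliance threshold, and the 70% aid reliability in the demo) are assumptions chosen for demonstration.

# Illustrative sketch (not the authors' ACT-R implementation): a minimal
# instance-based learning (IBL) agent that stores experienced automation
# outcomes, forms a blended internal estimate of the aid's reliability,
# and relies on the aid when that estimate clears a threshold.
import math
import random

class IBLRelianceAgent:
    def __init__(self, decay=0.5, noise=0.25, temperature=0.35,
                 reliance_threshold=0.6, seed=None):
        self.decay = decay                    # base-level decay d (assumed)
        self.noise = noise                    # activation noise s (assumed)
        self.temperature = temperature        # blending temperature (assumed)
        self.reliance_threshold = reliance_threshold
        self.instances = {}                   # outcome value -> timestamps
        self.time = 0
        self.rng = random.Random(seed)

    def _activation(self, timestamps, now):
        # ACT-R base-level activation with logistic noise:
        # A = ln(sum_j (now - t_j)^(-d)) + eps
        base = math.log(sum((now - t) ** -self.decay for t in timestamps))
        u = min(max(self.rng.random(), 1e-9), 1.0 - 1e-9)
        return base + self.noise * math.log((1.0 - u) / u)

    def observe(self, automation_correct):
        # Record one experienced outcome (1.0 = aid correct, 0.0 = error).
        self.time += 1
        outcome = 1.0 if automation_correct else 0.0
        self.instances.setdefault(outcome, []).append(self.time)

    def reliability_estimate(self):
        # Blended value: activation-weighted average of stored outcomes,
        # read out as the model's internal estimate of aid reliability.
        if not self.instances:
            return 0.5  # uninformed prior before any experience (assumed)
        now = self.time + 1  # estimate strictly after the last observation
        weights = {o: math.exp(self._activation(ts, now) / self.temperature)
                   for o, ts in self.instances.items()}
        total = sum(weights.values())
        return sum(o * w for o, w in weights.items()) / total

    def rely(self):
        # Rely on the automated aid when the estimate clears the threshold.
        return self.reliability_estimate() >= self.reliance_threshold

if __name__ == "__main__":
    # Simulate experience with a 70%-reliable aid (assumed rate).
    agent = IBLRelianceAgent(seed=1)
    for _ in range(100):
        agent.observe(agent.rng.random() < 0.70)
    print(f"estimated reliability: {agent.reliability_estimate():.2f}")
    print(f"rely on the aid: {agent.rely()}")

Because base-level activation decays with time, recent experience weighs more heavily in the blended estimate, which is one way such a model can track reliability shifts after the environmental changes or adversarial intrusions the abstract mentions.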
Funder
Air Force Office of Scientific Research
Subject
Artificial Intelligence, Computer Science Applications
Cited by
16 articles.