Affiliation:
1. Naval Postgraduate School, Monterey, CA, USA
2. Arizona State University, Tempe, AZ, USA
Abstract
AI is set to take over tasks within the decision space that have traditionally been reserved for humans. As a result, human decision-makers interacting with AI systems must rationalize AI outputs, and they may have difficulty forming trust in such AI-generated information. Although a variety of analytical methods have provided insights into human trust in AI, a more comprehensive understanding of trust may be gained from generative theories that capture its temporal evolution. An open-system modeling approach, which represents trust as a function of time through a single probability distribution, can therefore potentially improve models of human trust in an AI system. The results of this study could inform machine behaviors that help steer a human's preferences toward a more Bayesian-optimal rationality, which is useful in stressful decision-making scenarios.
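As a minimal illustration of the open-system idea (an assumption for exposition, not the paper's actual model), trust can be sketched as the excited-state population of a two-level quantum system evolving under a Lindblad master equation: coherent dynamics model deliberation between distrust and trust, while dissipation models environmental pressure toward distrust, yielding a single probability distribution over time. The parameters `Omega` and `gamma` below are hypothetical.

```python
import numpy as np

# Hypothetical parameters: Omega drives deliberation between distrust and
# trust; gamma is a dissipation rate relaxing the state toward distrust.
Omega, gamma, dt, steps = 1.0, 0.3, 0.001, 5000

sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * Omega * sx                                            # coherent deliberation
L = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], dtype=complex)  # decay |trust> -> |distrust>

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # start in full distrust

def lindblad_step(rho):
    """One Euler step of d(rho)/dt = -i[H, rho] + L rho L^† - (1/2){L^†L, rho}."""
    comm = -1j * (H @ rho - rho @ H)
    diss = (L @ rho @ L.conj().T
            - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return rho + dt * (comm + diss)

p_trust = []
for _ in range(steps):
    rho = lindblad_step(rho)
    p_trust.append(rho[1, 1].real)  # probability of the "trust" state at time t

# p_trust traces a single probability distribution over time: damped
# oscillations settling toward a steady-state level of trust.
```

The damped oscillation is the qualitative signature that distinguishes this generative, time-evolving picture from a static trust score.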
Subject
General Medicine, General Chemistry
References: 35 articles.
1. Quantum structure in cognition
2. Quantum Mechanics and Human Decision Making
3. A formulation of computational trust based on quantum decision theory
4. Bisantz, A., Llinas, J., Seong, Y., Finger, R., & Jian, J. Y. (2000). Empirical investigations of trust-related systems vulnerabilities in aided, adversarial decision making. State University of New York at Buffalo, Center of Multisource Information Fusion. https://apps.dtic.mil/sti/citations/ADA389378
5. Humans and hardware