Affiliations:
1. Department of Human-Centred Digitalization, Institute for Energy Technology, Halden, Norway
2. Department of Humans and Automation, Institute for Energy Technology, Halden, Norway
Abstract
There is increasing interest in the use of artificial intelligence (AI) to improve organizational decision-making. However, research indicates that people’s trust in and choice to rely on “AI decision aids” can be tenuous. In the present paper, we connect research on trust in AI with Mayer, Davis, and Schoorman’s (1995) model of organizational trust to elaborate a conceptual model of trust, perceived risk, and reliance on AI decision aids at work. Drawing from the trust in technology, trust in automation, and decision support systems literatures, we redefine central concepts in Mayer et al.’s (1995) model, expand the model to include new, relevant constructs (like perceived control over an AI decision aid), and refine propositions about the relationships expected in this context. The conceptual model put forward presents a framework that can help researchers studying trust in and reliance on AI decision aids develop their research models, build systematically on each other’s research, and contribute to a more cohesive understanding of the phenomenon. Our paper concludes with five next steps to take research on the topic forward.
Subject
Organizational Behavior and Human Resource Management, Applied Psychology, Arts and Humanities (miscellaneous)
References: 82 articles.
1. Alan A., Costanza E., Fischer J., Ramchurn S., Rodden T., Jennings N. R. (2014, May 5–9). A field study of human-agent interaction for electricity tariff switching. 13th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), Paris, France.
2. The importance of the assurance that “humans are still in the decision loop” for public trust in artificial intelligence: Evidence from an online experiment
3. Bahner J. E., Elepfandt M. F., Manzey D. (2008, September 22–26). Misuse of diagnostic aids in process control: The effects of automation misses on complacency and automation bias. Human Factors and Ergonomics Society 52nd Annual Meeting, Los Angeles, CA. https://doi.org/10.1177/154193120805201906
4. Expanding the Technology Acceptance Model with the Inclusion of Trust, Social Influence, and Health Valuation to Determine the Predictors of German Users’ Willingness to Continue using a Fitness App: A Structural Equation Modeling Approach
5. The Influence of Task Load and Automation Trust on Deception Detection
Cited by
22 articles.