Abstract
This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is exercised by AI practitioners through the vehicle of an AI application. I conclude with four critical questions based on the discretionary account to determine if trust in particular AI applications is sound, and a brief discussion of the possibility that the main roles of the physician could be replaced by AI.
Publisher
Springer Science and Business Media LLC
Subject
Library and Information Sciences, Computer Science Applications
Cited by
15 articles.