Abstract
Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks, such as detecting diabetic retinopathy from retinal images, predicting hospital readmissions, and aiding in the discovery of new drugs. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI systems can be relied on, and are capable of reliability, but cannot be trusted, and are not capable of trustworthiness. Insofar as patients are required to rely on AI systems for their medical decision-making, there is potential for this to produce a deficit of trust in relationships in clinical practice.
Subject
Health Policy, Arts and Humanities (miscellaneous), Issues, Ethics and Legal Aspects, Health (Social Science)
References
29 articles.
1. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016;316:2402–10.
2. Caruana R, Lou Y, Gehrke J, et al. Intelligible models for healthcare: predicting pneumonia risk and hospital 30-day readmission. Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, 2015:1721–30.
3. Ramsundar B, Kearnes S, Riley P, et al. Massively multitask networks for drug discovery. arXiv 2015.
4. Topol EJ . Deep medicine: how artificial intelligence can make healthcare human again. New York, NY: Basic Books, 2019.
5. Nundy S. Promoting trust between patients and physicians in the era of artificial intelligence. JAMA 2019.
Cited by
94 articles.