Abstract
I analyse an argument, recently put forward by Rosalind McDougall in the Journal of Medical Ethics, according to which medical artificial intelligence (AI) represents a threat to patient autonomy. The argument uses the case of IBM Watson for Oncology to claim that such technologies risk disregarding the individual values and wishes of patients. I identify three problems with the argument: (1) it conflates AI with machine learning; (2) it overlooks machine learning’s potential for personalised medicine through big data; (3) it fails to distinguish between evidence-based advice and decision-making within healthcare. I conclude that how much, and which, tasks we should delegate to machine learning and other technologies within healthcare and beyond is indeed a crucial question of our time, but that answering it requires carefully analysing and properly distinguishing between the different systems and the different tasks delegated to them.
Subject
Health Policy; Arts and Humanities (miscellaneous); Issues, ethics and legal aspects; Health (social science)
References (10 articles)
1. Bostrom N. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014.
2. Killer Robots.
3. Lin P. Why ethics matters for autonomous cars. In: Autonomous Driving. Berlin, Heidelberg: Springer, 2016: 69–85.
4. McDougall R. Computer knows best? The need for value-flexibility in medical AI. Journal of Medical Ethics.
5. Domingos P. The Master Algorithm. Basic Books, 2015.
Cited by 28 articles.