Abstract
This manuscript draws on the moral norms arising from nuanced accounts of epistemic (in)justice and of social identity in relational autonomy to normatively assess and articulate the ethical problems associated with using AI in patient care, in light of the black box problem. The article also describes how black-boxed AI may be used within the healthcare system, and it highlights what needs to happen to align AI with the moral norms it draws on. Deeper thinking about the impact of AI on the human experience, from backgrounds beyond decolonial scholarship and relational autonomy, is needed to appreciate any other barriers that may exist. Future studies can take up this task.
Funder
University of the Witwatersrand
Publisher
Springer Science and Business Media LLC