Authors:
Tahri Sqalli Mohammed, Aslonov Begali, Gafurov Mukhammadjon, Nurmatov Shokhrukhbek
Abstract
The increasing use of artificial intelligence (AI) in healthcare has brought about numerous ethical considerations that call for reflection. Humanizing AI in medical training is crucial to ensure that the design and deployment of its algorithms align with ethical principles and promote equitable healthcare outcomes for both medical practitioner trainees and patients. This perspective article provides an ethical framework for responsibly designing AI systems in medical training, drawing on our own past research in the fields of electrocardiogram interpretation training and e-health wearable devices. The article proposes five pillars of responsible design: transparency, fairness and justice, safety and wellbeing, accountability, and collaboration. The transparency pillar highlights the crucial role of maintaining the explainability of AI algorithms, while the fairness and justice pillar emphasizes addressing biases in healthcare data and designing models that prioritize equitable medical training outcomes. The safety and wellbeing pillar, in turn, emphasizes the need to prioritize patient safety and wellbeing in AI model design, whether for training or simulation purposes, and the accountability pillar calls for establishing clear lines of responsibility and liability for AI-derived decisions. Finally, the collaboration pillar emphasizes interdisciplinary collaboration among stakeholders, including physicians, data scientists, patients, and educators. The proposed framework thus provides a practical guide for designing and deploying AI in medicine generally, and in medical training specifically, in a responsible and ethical manner.
Cited by
16 articles.