Responsible artificial intelligence in clinical decision support systems requires good science: Lessons learned from an international roundtable discussion (Preprint)
Authors:
Pfisterer Kaylen, Saha Shumit, Fossat Yan, Grossman Maura R, Wong Alexander, Yadollahi Azadeh, Wang Bo, Garg Animesh, Dinu Larisa, Kale Dimitra, Leppin Corinna, Oldham Melissa, Taylor Madison, Connell Ian, Garnett Claire, Pham Quynh
Abstract
In healthcare, where increasing efficiency is essential to meeting the demands of scale, there is immense opportunity to incorporate advances in artificial intelligence (AI). In healthcare especially, however, these technologies must be designed to be both effective and ethical. Our objective in a multidisciplinary, international roundtable discussion (Canada, United States, United Kingdom) was to identify concepts, perspectives, and considerations for AI systems in healthcare settings that are designed, developed, and deployed with good intention to empower patients and healthcare providers in a safe, trustworthy, and ethical way. We refer to this notion as responsible AI (RAI). First, we discuss the role and opportunity of AI in supporting collaborative healthcare (clinicians and patients working together) and increasing specialist capacity. Second, we outline the risks and ramifications of poorly implemented AI, including bias, the implications of predictors used to support diagnosis, and privacy and security considerations. Third, we discuss how these risks can be mitigated by conducting “good science”: ensuring representative data, probing for annotation bias, and engaging biostatisticians. We also outline the need to evaluate fit for purpose through transdisciplinary collaboration, addressing explainability, fairness, interpretability, and transparency, as well as the role of standards, auditing, and regulatory considerations. Finally, we detail four criteria outlining the determinants, considerations, and rationale for developing RAI. These determinants and considerations are meant to position new AI-powered healthcare technologies for responsible design that supports acceptability, appropriateness, feasibility, and adoption. Future work should expand on additional factors and monitor the success of responsible AI implementations to validate these criteria.
Publisher
JMIR Publications Inc.