Abstract
This paper motivates institutional epistemic trust as an important ethical consideration informing the responsible development and implementation of artificial intelligence (AI) technologies in healthcare (hereafter, AI-inclusivity). Drawing on recent literature on epistemic trust and public trust in science, we begin by examining the conditions under which we can have institutional epistemic trust in AI-inclusive healthcare systems and their members as providers of medical information and advice. In particular, we argue that institutional epistemic trust in AI-inclusive healthcare depends, in part, on the reliability of AI-inclusive medical practices and programs; the knowledge and understanding of these practices among the different stakeholders involved; their effects on the epistemic and communicative duties and burdens of medical professionals; and, finally, their interaction and alignment with the public’s ethical values and interests, as well as with the background sociopolitical conditions within which AI-inclusive healthcare systems are embedded. To assess the applicability of these conditions, we explore a recent proposal for AI-inclusivity within the Dutch Newborn Screening Program. In doing so, we illustrate the importance, scope, and potential challenges of fostering and maintaining institutional epistemic trust in a context where generating, assessing, and providing reliable and timely screening results for genetic risk is a high priority. Finally, to motivate the wider relevance of our discussion and case study, we conclude with suggestions for strategies, interventions, and measures to support AI-inclusivity in healthcare more broadly.
Publisher
Cambridge University Press (CUP)