Affiliation:
1. Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford OX1 3JS, United Kingdom
Abstract
This article examines whether the EU Medical Device Regulation (MDR) adequately addresses the novel risks of AI-based medical devices (AIaMDs), focusing on AI medical imaging tools. It addresses two questions: first, does the MDR effectively deal with issues of adaptability, autonomy, bias, opacity, and the need for trustworthiness of AIaMDs? Second, does the manufacturer’s translation of the MDR’s requirements close the gap between an AIaMD’s expected benefit and its actual clinical utility, assessing device safety and effectiveness beyond the narrow performance of algorithms? While the first question has previously received attention in scholarly literature on regulatory and policy tensions surrounding AIaMDs generally, and in work on future technical standard setting, the second has been comparatively overlooked. We argue that effective regulation of AIaMDs requires framing notions of patient safety and benefit within the manufacturer’s articulation of the device’s intended use, as well as reconciling tensions concerning (i) patient safety and knowledge gaps surrounding fairness, (ii) trustworthiness and device effectiveness, (iii) the assessment of clinical performance, and (iv) performance updates. Future guidance needs to focus on the importance of translated benefits, including nuanced risk framing and examining how the limitations of AIaMDs inform the intended purpose statement under the MDR.
Funder
Wellcome Trust
Sloan Foundation
Department of Health and Social Care
Luminate Group
Trustworthiness Auditing for AI
Governance of Emerging Technologies
Oxford Internet Institute
University of Oxford
Publisher
Oxford University Press (OUP)
Cited by
2 articles.