Abstract
Technological trends point to Artificial Intelligence (AI) as a crucial tool in healthcare, but its development must respect human rights and ethical standards to ensure robustness and safety. Although general good practices are available, health AI developers lack a practical guide for constructing trustworthy AI. We introduce a development framework to serve as a reference guideline for creating trustworthy AI systems in health. The framework provides an extensible Trustworthy AI matrix that classifies technical methods addressing the EU guideline for Trustworthy AI requirements (privacy and data governance; diversity, non-discrimination and fairness; transparency; and technical robustness and safety) across the AI lifecycle stages (data preparation; model development; deployment and use; and model management). The matrix is complemented with generic but customizable example code pipelines for the different requirements, implemented in Python with state-of-the-art AI techniques. An accompanying checklist helps validate the application of the different methods to new problems. The framework is validated on two representative open datasets and is provided as Open Source to the scientific and development community. The presented framework offers health AI developers a theoretical development guideline with practical examples, aiming to ensure the development of robust and safe health AI and Clinical Decision Support Systems. GitHub repository: https://github.com/bdslab-upv/trustworthy-ai
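The Trustworthy AI matrix described above pairs each EU requirement with each lifecycle stage. A minimal sketch of such a structure in Python is shown below; the cell contents and the `methods_for` helper are illustrative assumptions, not the paper's actual catalogue of methods.

```python
# Hypothetical sketch of a requirements-by-lifecycle-stages matrix.
# Rows: EU Trustworthy AI requirements named in the abstract.
# Columns: AI lifecycle stages named in the abstract.
REQUIREMENTS = [
    "privacy and data governance",
    "diversity, non-discrimination and fairness",
    "transparency",
    "technical robustness and safety",
]
STAGES = [
    "data preparation",
    "model development",
    "deployment and use",
    "model management",
]

# Each cell holds a list of technical methods; the entries below are
# illustrative examples only, not the framework's real classification.
matrix = {(req, stage): [] for req in REQUIREMENTS for stage in STAGES}
matrix[("privacy and data governance", "data preparation")] = ["k-anonymisation"]
matrix[("transparency", "model development")] = ["SHAP feature attributions"]


def methods_for(requirement: str, stage: str) -> list:
    """Look up candidate methods for one requirement at one lifecycle stage."""
    return matrix[(requirement, stage)]


print(methods_for("transparency", "model development"))
```

Keeping the matrix as a plain mapping keyed by (requirement, stage) makes it extensible, in the spirit of the abstract: new methods, requirements, or stages can be added without restructuring existing cells.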
Publisher
Cold Spring Harbor Laboratory