Affiliation:
1. District General Hospital of Førde, Førde, Norway
2. Faculty of Medicine, The University of Bergen, Bergen, Norway
3. Western Norway University of Applied Sciences, Bergen, Norway
4. Weill Cornell Medicine, New York, New York, USA
Abstract
Artificial intelligence (AI), specifically machine learning (ML), is adept at identifying patterns and insights in the vast amounts of data generated by routine outcome monitoring (ROM) and clinical feedback during treatment. When applied to patient feedback data, AI/ML models can assist clinicians in predicting treatment outcomes. Common reasons for clinician resistance to integrating data‐driven decision‐support tools into clinical practice include concerns about the reliability, relevance and usefulness of the technology, coupled with perceived conflicts between data‐driven recommendations and clinical judgement. While AI/ML‐based tools might be precise in guiding treatment decisions, it might not be possible to realise their potential at present, owing to implementation, acceptability and ethical concerns. In this article, we outline the concept of eXplainable AI (XAI), a potential solution to these concerns. XAI refers to a form of AI designed to articulate its purpose, rationale and decision‐making process in a manner that is comprehensible to humans. The key to this approach is that end‐users see a clear and understandable pathway from input data to recommendations. We use real Norse Feedback data to present an AI/ML example demonstrating one use case for XAI. Furthermore, we discuss key learning points that we will employ in future XAI implementations.