Affiliation:
1. Imperial College London
2. Imperial College
Abstract
Background: Efforts to responsibly guide clinicians in incorporating AI recommendations and explanations into day-to-day practice have thus far neglected decisions outside of diagnosis, where there is no gold standard to compare against. We assess how clinicians' decisions may be influenced by additional information more broadly, and how this influence is modified by either the source of the information (human peers or AI) or the presence or absence of an AI explanation (XAI, here using simple feature importance).
Methods: We conducted a human-AI interaction study with ICU doctors using a modified between-subjects design. In each of 16 trials, doctors were presented on a computer with a patient case and prompted to prescribe continuous doses of IV fluid and vasopressor. We used a multi-factorial experimental design with four arms, with each clinician experiencing all four arms on different subsets of our 24 patients. The four arms were (i) baseline (control), (ii) a peer scenario showing doses prescribed by other clinicians, (iii) an AI suggestion, and (iv) an XAI suggestion.
Findings: Among 86 ICU doctors we had four key findings. First, additional information (peer, AI or XAI) strongly influenced prescriptions (significantly for AI, but not for peers), yet XAI had no greater influence than AI alone. Second, inter-clinician prescription variability was affected differentially according to whether the recommendation (peer, AI or XAI) was higher or lower than what subjects in the baseline arm prescribed. Third, neither attitudes to AI nor clinical experience correlated with the influence of AI on decisions. Fourth, there was no correlation between how useful doctors self-reported finding the XAI and whether the XAI actually influenced their prescriptions.
Interpretation: Taken together, our findings in a comparatively large clinical expert population raise important questions about the meaning and design of medical XAI systems. Specifically, we show that the marginal impact of XAI as currently formulated is low in a medical population. We also cast doubt on the utility of self-reports as a valid metric for assessing XAI in clinical experts, compared with our more objective behavioural response paradigm. Further work in this area could look to higher-fidelity and more granular markers that assess the natural behaviour of clinicians as they interact with decision support tools.
Publisher
Research Square Platform LLC
Cited by
1 article.