Abstract
Background
Artificial Intelligence (AI) will have unintended consequences for radiology. When a radiologist misses an abnormality on an image, their liability may differ according to whether an AI system also missed that abnormality.
Methods
U.S. adults viewed a vignette describing a radiologist being sued for missing a brain bleed (N=652) or cancer (N=682). Participants were randomized to one of five conditions. In four conditions, they were told an AI system was used: the AI either agreed with the radiologist, also failing to find the pathology (AI agree), or found the pathology (AI disagree). In the AI agree+FOR condition, the AI agreed with the radiologist and an AI false omission rate (FOR) of 1% was presented. In the AI disagree+FDR condition, the AI disagreed and an AI false discovery rate (FDR) of 50% was presented. The fifth condition was a no AI control. Otherwise, the vignettes were identical. Participants indicated whether the radiologist met their duty of care, as a proxy for whether they would side with the defense (radiologist) or the plaintiff at trial.
Results
Participants were more likely to side with the plaintiff in the AI disagree vs. AI agree condition (brain bleed: 72.9% vs. 50.0%, p=0.0054; cancer: 78.7% vs. 63.5%, p=0.00365) and in the AI disagree vs. no AI condition (brain bleed: 72.9% vs. 56.3%, p=0.0054; cancer: 78.7% vs. 65.2%, p=0.00895). Participants were less likely to side with the plaintiff when the FDR or FOR was provided: AI disagree vs. AI disagree+FDR (brain bleed: 72.9% vs. 48.8%, p=0.00005; cancer: 78.7% vs. 73.1%, p=0.1507), and AI agree vs. AI agree+FOR (brain bleed: 50.0% vs. 34.0%, p=0.0044; cancer: 63.5% vs. 56.4%, p=0.1085).
Discussion
Radiologists who fail to find an abnormality are viewed as more culpable when an AI system they used detected the abnormality. Presenting participants with AI accuracy data decreased perceived liability. These findings have relevance for courtroom proceedings.
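For context, the two error rates shown to participants have standard confusion-matrix definitions; the abstract does not spell these out, so the formulas below assume the usual convention:

\[
\mathrm{FOR} = \frac{\mathrm{FN}}{\mathrm{FN} + \mathrm{TN}}, \qquad \mathrm{FDR} = \frac{\mathrm{FP}}{\mathrm{FP} + \mathrm{TP}}
\]

Under these definitions, a 1% FOR means 1 in 100 of the AI's "no finding" calls misses true pathology, and a 50% FDR means half of the AI's flagged findings are false alarms.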
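The abstract reports only condition-level percentages and p-values, not raw counts or the exact statistical test (or any multiplicity correction) used. As a minimal sketch, a comparison such as AI disagree vs. AI agree could be run as a chi-square test on a 2x2 table; the counts below are hypothetical, chosen only to roughly match the reported percentages at an assumed n=130 per condition, and are not the study's data.

# Minimal sketch (not the paper's analysis): chi-square comparison of
# "sided with plaintiff" rates between two conditions.
from scipy.stats import chi2_contingency

# Rows: condition; columns: [sided with plaintiff, sided with radiologist]
table = [
    [95, 35],   # AI disagree: 95/130 = 73.1% plaintiff (hypothetical counts)
    [65, 65],   # AI agree:    65/130 = 50.0% plaintiff (hypothetical counts)
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4g}")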
Publisher
Cold Spring Harbor Laboratory