Abstract
Background
Large language models (LLMs) show promise in radiological diagnosis, but their performance may be affected by the context in which cases are presented.

Purpose
To investigate how providing information about prior probabilities influences the diagnostic performance of an LLM on radiological quiz cases.

Materials and Methods
We analyzed 322 consecutive cases from Radiology's "Diagnosis Please" quiz using Claude 3.5 Sonnet under three conditions: without context (Condition 1), informed that the cases were quiz cases (Condition 2), and presented as primary care cases (Condition 3). Diagnostic accuracy was compared using McNemar's test.

Results
Overall accuracy was significantly higher in Condition 2 than in Condition 1 (70.2% vs. 64.9%, p=0.029). Conversely, accuracy was significantly lower in Condition 3 than in Condition 2 (59.9% vs. 70.2%, p<0.001).

Conclusion
Providing context about prior probabilities significantly affects the diagnostic performance of the LLM on radiological cases. This suggests that LLMs may incorporate Bayesian-like principles in their diagnostic approach, highlighting the potential for optimizing an LLM's performance in clinical settings by providing relevant contextual information.

Key Results
- The LLM's overall accuracy improved from 64.9% to 70.2% when it was informed of the quiz-case nature of the material (p=0.029).
- The LLM's overall accuracy decreased to 59.9% when cases were presented with an incorrect primary care context (p<0.001).
- The results suggest that LLMs may use Bayesian-like principles in diagnostic reasoning, similar to human radiologists.

Summary Statement
Providing context about prior probabilities significantly influences an LLM's diagnostic performance on radiological cases, suggesting potential for optimizing LLM use in clinical practice through contextual information.
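The paired comparisons above use McNemar's test, which considers only the discordant cases (those answered correctly under one condition but not the other). A minimal sketch of the exact (binomial) form in Python follows; the discordant counts used in the example are hypothetical, since the abstract reports only the accuracy rates and p-values:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact McNemar p-value from paired discordant counts.

    b: cases correct under condition A but not condition B
    c: cases correct under condition B but not condition A
    Under the null hypothesis of no difference, b follows a
    binomial distribution Bin(b + c, 0.5).
    """
    n = b + c
    k = min(b, c)
    # Two-sided p-value: double the smaller one-sided tail, capped at 1.
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(p, 1.0)

# Hypothetical discordant counts for illustration (NOT from the paper):
# 10 cases correct only without context, 25 correct only with quiz context.
print(mcnemar_exact(10, 25))
```

Concordant cases (correct or incorrect under both conditions) carry no information about the difference between conditions, which is why the test ignores them.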
Publisher
Cold Spring Harbor Laboratory
Cited by
1 article.