Author:
Oami Takehiko, Okada Yohei, Nakada Taka-aki
Abstract
Background
Systematic reviews require extensive time and effort to manually extract and synthesize data from numerous screened studies. This study aims to investigate the ability of large language models (LLMs) to automate data extraction with high accuracy and minimal bias, using the clinical questions (CQs) of the Japanese Clinical Practice Guidelines for Management of Sepsis and Septic Shock (J-SSCG) 2024. The study will evaluate the accuracy of three LLMs and optimize their command prompts to enhance extraction accuracy.
Methods
This prospective study will objectively evaluate the accuracy and reliability of data extracted by three LLMs (GPT-4 Turbo, Claude 3, and Gemini 1.5 Pro) from the literature selected during the systematic review process for J-SSCG 2024. Errors will be assessed in detail according to predefined criteria to guide further improvement. Additionally, the time to complete each task will be measured and compared among the three LLMs. Following the primary analysis, we will optimize the original command prompt by integrating prompt engineering techniques in the secondary analysis.
Trial registration
This research has been registered with the University Hospital Medical Information Network Clinical Trials Registry (UMIN-CTR) [UMIN000054461].
Conflicts of interest
All authors declare that they have no conflicts of interest.
Publisher
Cold Spring Harbor Laboratory