Abstract
Introduction
Accurate identification of study designs and risk of bias (RoB) assessment are crucial for evidence synthesis in research. However, mislabelling of case-control studies (CCS) is prevalent, leading to downgraded quality of evidence. Large Language Models (LLMs), a form of artificial intelligence, have shown impressive performance in various medical tasks, but their utility in categorising study designs and assessing RoB needs further exploration. This study will evaluate the performance of four publicly available LLMs (ChatGPT-3.5, ChatGPT-4, Claude 3 Sonnet, Claude 3 Opus) in accurately identifying CCS designs in the neurosurgical literature. Secondly, we will assess human-LLM interrater agreement for RoB assessment of true CCS.

Methods
We identified thirty-four top-ranking neurosurgery-focused journals and searched them on PubMed/MEDLINE for manuscripts reported as CCS in the title/abstract. Human reviewers will independently assess study designs and RoB using the Newcastle-Ottawa Scale. The methods sections/full-text articles will be provided to the LLMs to determine study designs and assess RoB. Cohen's kappa will be used to evaluate human-human, human-LLM and LLM-LLM interrater agreement. Logistic regression will be used to assess study characteristics affecting performance. A p-value < 0.05 at a 95% confidence level will be considered statistically significant.

Conclusion
If human-LLM agreement is high, LLMs could become valuable teaching and quality assurance tools for critical appraisal in neurosurgery and other medical fields. This study will contribute to the validation of LLMs for specialised scientific tasks in evidence synthesis, potentially leading to reduced review costs, faster completion, standardisation, and fewer errors in evidence synthesis.
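As a brief illustration of the planned agreement analysis (not part of the protocol itself), the sketch below shows how Cohen's kappa could be computed for a single human-LLM rater pair; the labels are hypothetical and the use of scikit-learn is an assumption for demonstration only.

```python
# Minimal sketch: Cohen's kappa for human-LLM agreement on study-design labels.
# Labels are hypothetical: 1 = true case-control study, 0 = other design.
from sklearn.metrics import cohen_kappa_score

human_labels = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
llm_labels   = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(human_labels, llm_labels)
print(f"Human-LLM Cohen's kappa: {kappa:.2f}")
```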
Publisher
Cold Spring Harbor Laboratory