Abstract
Importance: Spin is a common form of biased reporting that misrepresents study results in publications as more positive than an objective assessment would indicate, but its prevalence in psychiatric journals is unknown.

Objective: To apply a large language model to characterize the extent to which original reports of pharmacologic and non-pharmacologic interventions in psychiatric journals reflect spin.

Design: We identified abstracts from studies published between 2013 and 2023 in 3 high-impact psychiatric journals describing randomized trials or meta-analyses of interventions.

Main Outcome and Measure: Presence or absence of spin, estimated by a large language model (GPT-4 Turbo, gpt-4-turbo-2024-04-09) and validated against gold-standard abstracts with and without spin.

Results: Among 663 abstracts, 296 (44.6%) exhibited possible or probable spin: 230 of 529 (43.5%) randomized trials and 66 of 134 (49.3%) meta-analyses; by intervention type, 148 of 310 (47.7%) for medication, 107 of 238 (45.0%) for psychotherapy, and 41 of 115 (35.7%) for other interventions. In a multivariable logistic regression model, reports of randomized trials, reports of interventions other than medication or psychotherapy, and more recently published reports were less likely to exhibit spin.

Conclusions and Relevance: A substantial subset of psychiatric intervention abstracts in high-impact journals may present results in a potentially misleading way, with the potential to influence clinical practice. The success of automating spin detection with large language models may facilitate identification and revision of spin in future publications.
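The abstract does not specify the prompt or scoring rubric used with the model; the following is a minimal sketch of how such a classification could be run, assuming the OpenAI chat completions API and a three-level label (no spin / possible spin / probable spin) inferred from the reported categories. The prompt text and the helper name classify_spin are illustrative, not the authors' protocol.

```python
# Minimal sketch of LLM-based spin classification (not the authors' exact protocol).
# Assumes the openai>=1.0 Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "You are assessing the abstract of a randomized trial or meta-analysis of a "
    "psychiatric intervention. 'Spin' means the results are presented as more "
    "positive than an objective reading of the primary outcome supports. "
    "Reply with exactly one label: NO_SPIN, POSSIBLE_SPIN, or PROBABLE_SPIN."
)

def classify_spin(abstract_text: str) -> str:
    """Return a spin label for one abstract (hypothetical rubric)."""
    response = client.chat.completions.create(
        model="gpt-4-turbo-2024-04-09",  # snapshot named in the abstract
        temperature=0,                   # favor reproducible labels
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": abstract_text},
        ],
    )
    return response.choices[0].message.content.strip()
```

Validation against gold-standard abstracts, as described above, would amount to running such a classifier over a labeled set and measuring agreement with the known spin/no-spin labels.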
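The multivariable model relating study characteristics to spin is not detailed in the abstract; a plausible formulation, assuming hypothetical column names and covariate coding, is sketched below with statsmodels.

```python
# Illustrative multivariable logistic regression of spin on abstract features.
# Column names and file are hypothetical; the paper's exact coding is not given here.
import pandas as pd
import statsmodels.formula.api as smf

# One row per abstract: spin (0/1), design ("rct"/"meta"),
# intervention ("medication"/"psychotherapy"/"other"), year (publication year).
df = pd.read_csv("abstract_labels.csv")

model = smf.logit("spin ~ C(design) + C(intervention) + year", data=df).fit()
print(model.summary())  # odds ratios can be obtained by exponentiating model.params
```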
Publisher: Cold Spring Harbor Laboratory