Abstract
Objectives
To assess the reporting quality of missing data in economic evaluations conducted alongside pragmatic randomized controlled trials (pRCTs).
Design
Cross-sectional survey.
Setting
Data were extracted from PubMed and OVID (Embase, CENTRAL, the HTA database, and NHS EED) from January 1, 2010, to April 24, 2022. Economic evaluations conducted alongside pRCTs were included; secondary analyses, abstracts, comments, letters, notes, editorials, protocols, subgroup analyses, pilot and feasibility trials, post-hoc analyses, and reviews were excluded. Two groups of two independent reviewers identified the relevant articles, and data were extracted by three groups of two reviewers.
Main outcome measures
Descriptive analyses were performed to assess the characteristics of the included studies, the extent of missingness, and the handling of missing data.
Results
A total of 715 studies were identified, of which 152 met the inclusion criteria. Overall, 113 articles reported missing data, 119 reported missing costs, and 132 reported missing effects. More than 50% (58/113) of the articles reported the proportion or quantity of overall missingness, and 64.71% and 54.55% reported the proportion of missing costs and missing effects, respectively. Only 3.45% of studies reported overall missingness below 5%, and the corresponding proportions for missing costs and missing effects were both below 10% (5.26% and 8.45%, respectively). Among the 58 studies that reported the proportion of missing data, overall missingness was 30.22%, and the median proportion of missing costs was slightly higher than that of missing effects (30.92% vs. 27.78%). Regarding the handling of missing data, 56 studies (36.84%) conducted a sensitivity analysis on the methods used to handle missing data; of these, 12.50% reported the missing data mechanism and 83.93% examined the handling methods.
Conclusions
Insufficient description and reporting of missing data, together with a high proportion of missing data in pRCT-based economic evaluations, could reduce the reliability and generalizability of conclusions and mislead decision-making. Future research should account for the anticipated proportion of missing data when determining sample size, and should improve the transparency and evidence quality of economic evaluations conducted alongside pragmatic trials.