Abstract
Background
Mathematical models and empirical epidemiologic studies (e.g., randomized and observational studies) are complementary tools but may produce conflicting results for a given research question. We used sensitivity analyses and bias analyses to explore such discrepancies in a study of the indirect effects of influenza vaccination.

Methods
We fit an age-structured, deterministic, compartmental model to estimate the indirect effects of a school-based influenza vaccination program in California that was evaluated in a previous matched cohort study. To understand discrepancies in their results, we used 1) a model with parameters constrained so that projections matched the cohort study; and 2) probabilistic bias analyses to identify potential biases (e.g., outcome misclassification due to incomplete influenza testing) that, if corrected, would align the empirical results with the mathematical model.

Results
The indirect effect estimate (% reduction in influenza hospitalization among older adults in the intervention vs. control site) was 22.3% (95% CI 7.6% – 37.1%) in the cohort study but only 1.6% (95% Bayesian credible interval 0.4% – 4.4%) in the mathematical model. The constrained mathematical model aligned with the cohort study only when pre-existing immunity among school-age children and older adults was substantially lower. Conversely, bias-corrected empirical estimates aligned with the mathematical model estimates only if influenza testing rates were 15–23% lower in the intervention vs. comparison site.

Conclusions
Sensitivity and bias analyses can shed light on why the results of mathematical models and empirical epidemiologic studies differ for the same research question and, in turn, can improve study and model design.
Publisher
Cold Spring Harbor Laboratory