The Use of Expert Elicitation among Computational Modeling Studies in Health Research: A Systematic Review

Authors:

Christopher J. Cadham (1), Marie Knoll (2), Luz María Sánchez-Romero (2), K. Michael Cummings (3), Clifford E. Douglas (1,4), Alex Liber (2), David Mendez (1), Rafael Meza (5), Ritesh Mistry (6), Aylin Sertkaya (7), Nargiz Travis (1,2), David T. Levy (2)

Affiliations:

1. Department of Health Management and Policy, University of Michigan, School of Public Health, Ann Arbor, MI, USA

2. Georgetown University, Lombardi Comprehensive Cancer Center, Washington, DC, USA

3. Department of Psychiatry & Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA

4. University of Michigan, Tobacco Research Network, Ann Arbor, MI, USA

5. Department of Epidemiology, University of Michigan School of Public Health, Ann Arbor, MI, USA

6. Department of Health Behavior and Health Education, University of Michigan School of Public Health, Ann Arbor, MI, USA

7. Eastern Research Group, Inc., Lexington, MA, USA

Abstract

Background: Expert elicitation (EE) has been used across disciplines to estimate input parameters for computational modeling research when information is sparse or conflicting.

Objectives: We conducted a systematic review to compare EE methods used to generate model input parameters in health research.

Data Sources: PubMed and Web of Science.

Study Eligibility: Modeling studies that reported the use of EE as the source of model input probabilities were included if they were published in English before June 2021 and reported health outcomes.

Data Abstraction and Synthesis: Studies were classified as using "formal" EE methods if they explicitly reported details of their elicitation process. Those that stated use of expert opinion but provided limited information were classified as using "indeterminate" methods. For both groups, we abstracted citation details, study design, modeling methodology, a description of elicited parameters, and elicitation methods. Comparisons were made between elicitation methods.

Study Appraisal: Studies that conducted a formal EE were appraised on the reporting quality of the EE. Quality appraisal was not conducted for studies with indeterminate methods.

Results: The search identified 1,520 articles, of which 152 were included. Of the included studies, 40 were classified as using formal EE methods and 112 as using indeterminate methods. Most studies were cost-effectiveness analyses (77.6%). Forty-seven indeterminate-method studies provided no information on how estimates were generated. Among formal EEs, the average reporting quality score was 9 of 16.

Limitations: Elicitations on nonhealth topics and those reported only in the gray literature were not included.

Conclusions: We found poor reporting of EE methods in modeling studies, making it difficult to discern meaningful differences in approaches. Improved quality standards for EEs would strengthen the validity and replicability of computational models.
Highlights:

- We find extensive use of expert elicitation for the development of model input parameters, but most studies do not provide adequate details of their elicitation methods.
- This lack of reporting hinders broader discussion of the merits and challenges of using expert elicitation for model input parameter development.
- There is a need to establish expert elicitation best practices and reporting guidelines.

Funder

National Cancer Institute

Publisher

SAGE Publications

Subject

Health Policy

Cited by 1 article.
