A review of measurement practice in studies of clinical decision support systems 1998–2017

Authors:

Scott Philip J1, Brown Angela W1, Adedeji Taiwo1, Wyatt Jeremy C2, Georgiou Andrew3 (ORCID), Eisenstein Eric L4, Friedman Charles P5

Affiliation:

1. Centre for Healthcare Modelling and Informatics, University of Portsmouth, Portsmouth, UK

2. Wessex Institute of Health Research, University of Southampton, Southampton, UK

3. Australian Institute of Health Innovation, Macquarie University, Sydney, Australia

4. Duke Clinical Research Institute, Duke University Medical Center, Durham, North Carolina, USA

5. Schools of Medicine, Information and Public Health, University of Michigan, Ann Arbor, Michigan, USA

Abstract

Objective: To assess measurement practice in clinical decision support evaluation studies.

Materials and Methods: We identified empirical studies evaluating clinical decision support systems published from 1998 to 2017. We reviewed titles, abstracts, and full paper contents for evidence of attention to measurement validity, reliability, or reuse. We used Friedman and Wyatt's typology to categorize the studies.

Results: There were 391 studies that met the inclusion criteria. Study types in this cohort were primarily field user effect studies (n = 210) or problem impact studies (n = 150). Of those, 280 studies (72%) had no evidence of attention to measurement methodology, and 111 (28%) had some evidence, with 33 (8%) offering validity evidence, 45 (12%) offering reliability evidence, and 61 (16%) reporting measurement artefact reuse.

Discussion: Only 5 studies offered validity assessment within the study itself. Valid measures were predominantly observed in problem impact studies, with the majority of measures being clinical or patient-reported outcomes whose validity was established elsewhere.

Conclusion: Measurement methodology is frequently ignored in empirical studies of clinical decision support systems, and particularly so in field user effect studies. Authors may in fact be attending to measurement considerations without reporting this, or may be employing methods of unknown validity and reliability in their studies. In the latter case, reported study results may be biased and effect sizes misleading. We argue that replication studies to strengthen the evidence base require greater attention to measurement practice in health informatics research.

Publisher

Oxford University Press (OUP)

Subject

Health Informatics

Cited by 13 articles.
