OrderRex clinical user testing: a randomized trial of recommender system decision support on simulated cases

Authors:

Kumar, Andre¹ (ORCID); Aikens, Rachael C²,³; Hom, Jason¹; Shieh, Lisa¹; Chiang, Jonathan⁴; Morales, David⁵; Saini, Divya⁵; Musen, Mark⁴; Baiocchi, Michael⁶; Altman, Russ⁷; Goldstein, Mary K⁸,⁹; Asch, Steven¹⁰,¹¹; Chen, Jonathan H¹,⁴

Affiliation:

1. Division of Hospital Medicine, Department of Medicine, Stanford University, Stanford, California, USA

2. Program in Biomedical Informatics, Stanford University, Stanford, California, USA

3. Department of Statistics, Stanford University, Stanford, California, USA

4. Department of Medicine, Center for Biomedical Informatics Research, Stanford University, Stanford, California, USA

5. Department of Computer Science, Stanford University, Stanford, California, USA

6. Department of Epidemiology and Public Health, Stanford University, Stanford, California, USA

7. Departments of Bioengineering, Genetics, Medicine, and Biomedical Data Science, Stanford University, Stanford, California, USA

8. Geriatrics Research Education and Clinical Center, Veteran Affairs Palo Alto Health Care System, Palo Alto, California, USA

9. Primary Care and Outcomes Research (PCOR), Department of Medicine, Stanford University, Stanford, California, USA

10. Primary Care and Population Health, Department of Medicine, Stanford University, Stanford, California, USA

11. Center for Innovation to Implementation, Veteran Affairs Palo Alto Health Care System, Palo Alto, California, USA

Abstract

Objective: To assess the usability and usefulness of a machine learning-based order recommender system applied to simulated clinical cases.

Materials and Methods: Forty-three physicians entered orders for 5 simulated clinical cases using a clinical order entry interface, with or without access to a previously developed automated order recommender system. Cases were randomly allocated to the recommender system in a 3:2 ratio. A panel of clinicians scored whether the orders placed were clinically appropriate. The primary outcome was the difference in clinical appropriateness scores. Secondary outcomes included the total number of orders, case completion time, and survey responses.

Results: Clinical appropriateness scores per order were comparable for cases randomized to the order recommender system (mean difference −0.11 score per order, 95% CI: [−0.41, 0.20]). Physicians using the recommender placed more orders (median 16 vs 15 orders; incidence rate ratio 1.09, 95% CI: [1.01, 1.17]). Case completion times were comparable with the recommender system. Order suggestions generated by the recommender system were more likely to match physician needs than standard manual search options. Physicians used recommender suggestions in 98% of available cases, and approximately 95% of participants agreed the system would be useful for their workflows.

Discussion: User testing with a simulated electronic medical record interface can assess the value of machine learning and clinical decision support tools for clinician usability and acceptance before live deployment.

Conclusions: Clinicians can use and accept machine-learned clinical order recommendations integrated into an electronic order entry interface in a simulated setting. The clinical appropriateness of orders entered was comparable even when supported by automated recommendations.
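For readers unfamiliar with how an incidence rate ratio (IRR) such as the reported 1.09 for order counts is typically obtained, the sketch below shows one common approach: a Poisson regression of per-case order counts on arm assignment, with the exponentiated coefficient giving the IRR. This is a minimal illustration only; the data frame, column names, and model choice are assumptions for demonstration and are not the authors' actual analysis code or dataset.

```python
# Illustrative sketch: estimating an incidence rate ratio (IRR) for order
# counts between study arms with a Poisson model. All data below are
# hypothetical placeholders, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical per-case data: number of orders placed and whether the case
# was randomized to the recommender arm (1) or the control arm (0).
df = pd.DataFrame({
    "orders":      [16, 18, 15, 17, 14, 15, 13, 16, 14, 15],
    "recommender": [1,  1,  1,  1,  1,  0,  0,  0,  0,  0],
})

# Poisson regression of order counts on arm assignment.
model = smf.glm("orders ~ recommender", data=df,
                family=sm.families.Poisson()).fit()

# Exponentiating the arm coefficient and its confidence bounds yields the
# incidence rate ratio and its 95% CI.
irr = np.exp(model.params["recommender"])
irr_ci = np.exp(model.conf_int().loc["recommender"])
print(f"IRR = {irr:.2f}, 95% CI = [{irr_ci[0]:.2f}, {irr_ci[1]:.2f}]")
```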

Funder

NIH

National Institute of Environmental Health Sciences

the Gordon and Betty Moore Foundation

Stanford Human-Centered Artificial Intelligence Seed

University Healthcare Alliance and Packard Children’s Health Alliance clinics

Publisher

Oxford University Press (OUP)

Subject

Health Informatics


Cited by 15 articles.