Online physician ratings fail to predict actual performance on measures of quality, value, and peer review

Authors:

Timothy J. Daskivich 1,2; Justin Houman 1; Garth Fuller 2,3; Jeanne T. Black 4; Hyung L. Kim 1; Brennan Spiegel 2,3,5

Affiliations:

1. Division of Urology, Cedars-Sinai Medical Center, Los Angeles, CA, USA

2. Cedars-Sinai Center for Outcomes Research and Education (CS-CORE), Cedars-Sinai Medical Center, Los Angeles, CA, USA

3. Department of Medicine, Division of Health Services Research, Cedars-Sinai Health System, Los Angeles, CA, USA

4. Resource and Outcomes Management Department, Cedars-Sinai Health System, Los Angeles, CA, USA

5. Department of Health Policy and Management, UCLA Fielding School of Public Health, Los Angeles, CA, USA

Abstract

Objective

Patients use online consumer ratings to identify high-performing physicians, but it is unclear whether ratings are valid measures of clinical performance. We sought to determine whether online ratings of specialist physicians from 5 platforms predict quality of care, value of care, and peer-assessed physician performance.

Materials and Methods

We conducted an observational study of 78 physicians representing 8 medical and surgical specialties. We assessed the association of consumer ratings with specialty-specific performance scores (metrics including adherence to Choosing Wisely measures, 30-day readmissions, length of stay, and adjusted cost of care), primary care physician peer-review scores, and administrator peer-review scores.

Results

Across ratings platforms, multivariable models showed no significant association between mean consumer ratings and specialty-specific performance scores (β-coefficient range, −0.04 to 0.04), primary care physician scores (β-coefficient range, −0.01 to 0.3), or administrator scores (β-coefficient range, −0.2 to 0.1). There was no association between ratings and score subdomains addressing quality or value-based care. Among physicians in the lowest quartile of specialty-specific performance scores, only 5%–32% had consumer ratings in the lowest quartile across platforms. Ratings were consistent across platforms; a physician's score on one platform significantly predicted his/her score on another in 5 of 10 comparisons.

Discussion

Online ratings of specialist physicians do not predict objective measures of quality of care or peer assessment of clinical performance. Scores are consistent across platforms, suggesting that they jointly measure a latent construct that is unrelated to performance.

Conclusion

Online consumer ratings should not be used in isolation to select physicians, given their poor association with clinical performance.
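To make the analysis design concrete, below is a minimal Python sketch of the abstract's two core comparisons: a multivariable regression of a performance score on mean consumer rating, and the quartile-concordance check. This is not the authors' code; the synthetic data, covariates, and all variable names are illustrative assumptions, chosen only to show the shape of the analysis on a cohort of 78 physicians across 8 specialties.

    # Illustrative sketch only; data are synthetic and uncorrelated by
    # construction, mirroring the paper's null finding. Covariates and
    # variable names are assumptions, not the study's actual model.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 78  # study cohort size: 78 physicians

    df = pd.DataFrame({
        "mean_rating": rng.uniform(2.5, 5.0, n),       # 1-5 star scale
        "performance_score": rng.normal(0, 1, n),      # standardized score
        "specialty": rng.choice(list("ABCDEFGH"), n),  # 8 specialties
        "years_in_practice": rng.integers(3, 35, n),   # assumed covariate
    })

    # Multivariable model: performance ~ rating, adjusted for covariates.
    # The coefficient on mean_rating corresponds to the reported beta.
    model = smf.ols(
        "performance_score ~ mean_rating + years_in_practice + C(specialty)",
        data=df,
    ).fit()
    print(model.params["mean_rating"], model.pvalues["mean_rating"])

    # Quartile concordance: among physicians in the lowest performance
    # quartile, what fraction also fall in the lowest rating quartile?
    df["perf_q"] = pd.qcut(df["performance_score"], 4, labels=False)
    df["rating_q"] = pd.qcut(df["mean_rating"], 4, labels=False)
    low_perf = df[df["perf_q"] == 0]
    print((low_perf["rating_q"] == 0).mean())

Under the study's finding, the rating coefficient is near zero and non-significant, and the concordance fraction is low (the paper reports 5%–32% across platforms); on random synthetic data it will likewise hover near the chance level of 25%.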

Publisher

Oxford University Press (OUP)

Subject

Health Informatics

Cited by 59 articles.
