Planning and Reporting Effective Web-Based RAND/UCLA Appropriateness Method Panels: Literature Review and Preliminary Recommendations

Authors:

Jordan B Sparks, Mandi L Klamerus, Tanner J Caverly, Sarah E Skurla, Timothy P Hofer, Eve A Kerr, Steven J Bernstein, Laura J Damschroder

Abstract

Background: The RAND/UCLA Appropriateness Method (RAM), a variant of the Delphi Method, was developed to synthesize existing evidence and elicit the clinical judgement of medical experts on the appropriate treatment of specific clinical presentations. Technological advances now allow researchers to conduct expert panels on the internet, offering a cost-effective and convenient alternative to the traditional RAM. For example, the Department of Veterans Affairs recently used a web-based RAM to validate clinical recommendations for de-intensifying routine primary care services. A substantial literature describes and tests various aspects of the traditional RAM in health research; yet comparatively little is known about how researchers implement web-based expert panels.

Objective: The objectives of this study are twofold: (1) to understand how the web-based RAM process is currently used and reported in health research and (2) to provide preliminary reporting guidance for researchers to improve the transparency and reproducibility of reporting practices.

Methods: The PubMed database was searched to identify studies published between 2009 and 2019 that used a web-based RAM to measure the appropriateness of medical care. Methodological data from each article were abstracted. The following categories were assessed: composition and characteristics of the web-based expert panels, characteristics of panel procedures, results, and panel satisfaction and engagement.

Results: Of the 12 studies meeting the eligibility criteria and reviewed, only 42% (5/12) implemented the full RAM process, with the remaining studies opting for a partial approach. Among those studies reporting, the median number of participants at first rating was 42. While 92% (11/12) of studies involved clinicians, 50% (6/12) involved multiple stakeholder types. Our review revealed that the studies failed to report on critical aspects of the RAM process. For example, no studies reported response rates with the denominator of previous rounds, 42% (5/12) did not provide panelists with feedback between rating periods, 50% (6/12) either did not have or did not report on the panel discussion period, and 25% (3/12) did not report on quality measures to assess aspects of the panel process (eg, satisfaction with the process).

Conclusions: Conducting web-based RAM panels will continue to be an appealing option for researchers seeking a safe, efficient, and democratic process of expert agreement. Our literature review uncovered inconsistent reporting frameworks and insufficient detail to evaluate study outcomes. We provide preliminary recommendations for reporting that are both timely and important for producing replicable, high-quality findings. The need for reporting standards is especially critical given that more people may prefer to participate in web-based rather than in-person panels due to the ongoing COVID-19 pandemic.

Publisher

JMIR Publications Inc.

Subject

Health Informatics
