BACKGROUND
Unwarranted variability in clinical practice remains a challenging problem, leading to poor outcomes for patients and low-value care for providers, payers, and patients.
OBJECTIVE
In this study, we introduced a novel tool, QualityIQ, and determined the extent to which it helps primary care physicians align care decisions with the latest best practices included in the Merit-Based Incentive Payment System (MIPS).
METHODS
We developed the fully automated QualityIQ patient simulation platform with real-time evidence-based feedback and gamified peer benchmarking. Each case included workup, diagnosis, and management questions with explicit evidence-based scoring criteria. We recruited practicing primary care physicians across the United States via the web and conducted a cross-sectional study of their clinical decisions, randomizing participants to continuing medical education (CME) and non-CME study arms. Physicians “cared” for 8 weekly cases that covered typical primary care scenarios. We measured participation rates, changes in quality scores (including MIPS scores), self-reported practice change, and physician satisfaction with the tool. The primary outcomes were evidence-based care scores within each case, adherence to MIPS measures, and variation in clinical decision-making among the primary care providers caring for the same patient.
RESULTS
We found strong, scalable engagement with the tool: 75% of participants (61 non-CME and 59 CME) completed at least 6 of the 8 cases. We saw significant improvement in evidence-based clinical decisions across multiple conditions, such as diabetes (+8.3%, <i>P</i><.001) and osteoarthritis (+7.6%, <i>P</i>=.003), and with MIPS-related quality measures, such as diabetes eye examinations (+22%, <i>P</i><.001), depression screening (+11%, <i>P</i><.001), and asthma medications (+33%, <i>P</i><.001). Although CME availability did not increase enrollment in the study, participants who were offered CME credits were more likely to complete at least 6 of the 8 cases.
CONCLUSIONS
Although CME availability did not prove to be important, the short, clinically detailed case simulations with real-time feedback and gamified peer benchmarking did lead to significant improvements in evidence-based care decisions among practicing physicians.
CLINICALTRIAL
ClinicalTrials.gov NCT03800901; https://clinicaltrials.gov/ct2/show/NCT03800901