Affiliation:
1. From the Center for Quality and Safety, Department of Surgery, and Institute for Health Policy, Massachusetts General Hospital, and Harvard Medical School (D.M.S.), and Department of Health Care Policy, Harvard Medical School, and the Department of Biostatistics, Harvard School of Public Health (S.T.N.), Boston, Mass.
Abstract
Background—
A frequent challenge in outcomes research is the comparison of rates from different populations. One common example with substantial health policy implications involves the determination and comparison of hospital outcomes. The concept of “risk-adjusted” outcomes is frequently misunderstood, particularly when it is used to justify the direct comparison of performance at 2 specific institutions.
Methods and Results—
Data from 14 Massachusetts hospitals were analyzed for 4393 adults undergoing isolated coronary artery bypass graft surgery in 2003. Mortality estimates were adjusted using clinical data prospectively collected by hospital personnel and submitted to a data coordinating center designated by the state. The primary outcome was hospital-specific, risk-standardized, 30-day all-cause mortality after surgery. Propensity scores were used to assess the comparability of case mix (covariate balance) for each Massachusetts hospital relative to the pool of patients undergoing coronary artery bypass graft surgery at the remaining hospitals and for selected pairwise comparisons. Using hierarchical logistic regression, we indirectly standardized the mortality rate of each hospital using its expected rate. Predictive cross-validation was used to avoid underidentification of true outlying hospitals. Overall, there was sufficient overlap between the case mix of each hospital and that of all other Massachusetts hospitals to justify comparison of individual hospital performance with that of the remaining hospitals. As expected, some pairwise hospital comparisons indicated lack of comparability. This finding illustrates the fallacy of assuming that risk adjustment per se is sufficient to permit direct side-by-side comparison of healthcare providers. In some instances, such analyses may be facilitated by the use of propensity scores to improve covariate balance between institutions and to justify such comparisons.
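The indirect standardization step described above can be illustrated with a minimal sketch. This is not the paper's hierarchical Bayesian model with predictive cross-validation; it is only the basic observed-to-expected calculation, assuming patient-level predicted risks have already been obtained from a risk model fit to all hospitals. Function and variable names are hypothetical.

```python
# Minimal sketch of indirect standardization for hospital mortality.
# Assumes p_expected holds each patient's predicted death probability
# from a risk model fit to the pooled data (the "average provider").
import numpy as np

def risk_standardized_rates(hospital_ids, observed_death, p_expected):
    """For each hospital, compute RSMR = (O / E) * overall rate, where
    O is the hospital's observed deaths and E is the sum of its
    patients' model-predicted risks."""
    hospital_ids = np.asarray(hospital_ids)
    observed_death = np.asarray(observed_death, dtype=float)
    p_expected = np.asarray(p_expected, dtype=float)
    overall_rate = observed_death.mean()  # statewide crude mortality
    rates = {}
    for h in np.unique(hospital_ids):
        mask = hospital_ids == h
        O = observed_death[mask].sum()   # observed deaths at hospital h
        E = p_expected[mask].sum()       # expected deaths given case mix
        rates[h] = (O / E) * overall_rate
    return rates
```

A hospital whose observed deaths equal its expected deaths (O/E = 1) receives exactly the overall rate; O/E above or below 1 scales the rate up or down for that hospital's specific case mix.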
Conclusions—
Risk-adjusted outcomes, commonly the focus of public report cards, have a specific interpretation. Using indirect standardization, these outcomes reflect a provider’s performance for its specific case mix relative to the expected performance of an average provider for that same case mix. Unless study design or post hoc adjustments have resulted in reasonable overlap of case-mix distributions, such risk-adjusted outcomes should not be used to directly compare one institution with another.
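The overlap requirement stated above can be checked empirically before any pairwise comparison. The sketch below, which assumes a simple logistic propensity model and an illustrative common-support diagnostic (not the specific balance checks used in the study), estimates each patient's probability of belonging to one of the two hospitals and reports the fraction of patients inside the region where the two propensity distributions overlap.

```python
# Hedged sketch: case-mix overlap check via propensity scores.
# The gradient-descent fitter is a stand-in for any logistic fitter;
# the common-support criterion is an illustrative assumption.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_propensity(X, hospital, lr=0.1, n_iter=2000):
    """Fit P(hospital = 1 | X) by gradient descent on the logistic
    log-likelihood; returns each patient's propensity score."""
    X1 = np.column_stack([np.ones(len(X)), X])  # add intercept
    w = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        grad = X1.T @ (sigmoid(X1 @ w) - hospital) / len(X)
        w -= lr * grad
    return sigmoid(X1 @ w)

def common_support(ps, hospital):
    """Fraction of all patients whose propensity score falls in the
    interval where both hospitals' scores overlap."""
    lo = max(ps[hospital == 1].min(), ps[hospital == 0].min())
    hi = min(ps[hospital == 1].max(), ps[hospital == 0].max())
    inside = (ps >= lo) & (ps <= hi)
    return inside.mean()
```

A common-support fraction near 1 suggests comparable case mixes; a small fraction signals the lack of overlap that, per the conclusions above, makes a direct side-by-side comparison of the two institutions inappropriate.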
Publisher
Ovid Technologies (Wolters Kluwer Health)
Subject
Physiology (medical), Cardiology and Cardiovascular Medicine
Cited by
126 articles.