Reliability adjustment, a novel technique for quantifying and removing statistical “noise” from quality rankings, is becoming more widely used outside surgery. We sought to evaluate its impact on hospital outcomes assessed with the American College of Surgeons' National Surgical Quality Improvement Program (ACS-NSQIP).
We used prospective clinical data from the ACS-NSQIP to identify all patients undergoing colon resection in 2007 (181 hospitals; 18,455 patients). We first used standard NSQIP techniques to generate risk-adjusted mortality and morbidity rates for each hospital. We then used hierarchical logistic regression models to adjust these rates for reliability with empirical Bayes techniques. To evaluate the impact of reliability adjustment, we first estimated the extent to which hospital-level variation was reduced. We then compared hospital mortality and morbidity rankings and outlier status before and after reliability adjustment.
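The empirical Bayes step described above can be sketched as follows. This is an illustrative simplification, not the study's actual model: the overall rate, signal (between-hospital) variance, and caseloads below are invented, and the binomial noise-variance formula stands in for the variance estimates a fitted hierarchical model would supply.

```python
# Illustrative sketch of empirical Bayes reliability adjustment; the rates,
# variances, and caseloads below are invented, not taken from the study.

def noise_var(p, n):
    """Sampling (noise) variance of an observed rate: p(1 - p)/n for a binomial."""
    return p * (1 - p) / n

def reliability_adjust(observed, overall, signal_var, noise):
    """Shrink an observed hospital rate toward the overall mean.

    reliability = signal / (signal + noise); a rate estimated from few cases
    (high noise) is pulled strongly toward the mean, a stable one barely moves.
    """
    reliability = signal_var / (signal_var + noise)
    return overall + reliability * (observed - overall)

overall = 0.042      # overall risk-adjusted mortality rate (assumed)
signal_var = 1e-4    # between-hospital (signal) variance (assumed)

# Same extreme observed mortality rate, two different caseloads:
small = reliability_adjust(0.078, overall, signal_var, noise_var(overall, 40))
large = reliability_adjust(0.078, overall, signal_var, noise_var(overall, 400))
print(f"40 cases: {small:.4f}, 400 cases: {large:.4f}")
```

The low-volume hospital's apparent 7.8% mortality is mostly noise and is shrunk close to the overall mean, while the high-volume hospital retains more of its observed rate — the mechanism behind the reduced variation reported below.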
Reliability adjustment greatly diminished apparent variation in hospital outcomes. For risk-adjusted mortality, there was a 6-fold difference before (1.4%–7.8%) and less than a 2-fold difference after (3.2%–5.7%) reliability adjustment. For risk-adjusted morbidity, there was a 2-fold difference before (18.0%–38.2%) and a 1.5-fold difference after (20.8%–34.8%). Reliability adjustment also had a large impact on hospital mortality and morbidity rankings. For example, with rankings based on mortality, 44% (16 hospitals) of the “best” hospitals (top 20%) were reclassified after reliability adjustment. Similarly, 22% (8 hospitals) of the “worst” hospitals (bottom 20%) were reclassified.
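The ranking comparison above can be illustrated with simulated data. This is a hypothetical sketch, not the study's analysis: the 181 hospital rates and caseloads are randomly generated, and shrinkage uses the same simplified binomial noise model as the abstract describes in spirit.

```python
# Hypothetical sketch of the quintile-reclassification comparison; hospital
# rates and caseloads are simulated, not the study's data.
import random

random.seed(0)
n_hosp = 181
rates = [random.uniform(0.014, 0.078) for _ in range(n_hosp)]
caseloads = [random.randint(20, 400) for _ in range(n_hosp)]
mean_rate = sum(rates) / n_hosp
signal_var = 1e-4  # assumed between-hospital variance

# Reliability-adjust each hospital; shrinkage strength depends on caseload.
adjusted = []
for r, c in zip(rates, caseloads):
    noise = r * (1 - r) / c
    rel = signal_var / (signal_var + noise)
    adjusted.append(mean_rate + rel * (r - mean_rate))

def top_quintile(values):
    """Indices of the 'best' hospitals: lowest 20% of mortality rates."""
    k = n_hosp // 5
    return set(sorted(range(n_hosp), key=values.__getitem__)[:k])

moved = top_quintile(rates) - top_quintile(adjusted)
print(f"{len(moved)} of {n_hosp // 5} top-quintile hospitals reclassified")
```

Because shrinkage varies with caseload, adjustment reorders hospitals rather than merely compressing the range, so some hospitals leave the top quintile — the same reclassification phenomenon the study reports.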
Reliability adjustment reduces variation due to statistical noise and yields more accurate estimates of risk-adjusted hospital outcomes. Given the risk of misclassifying hospitals and surgeons when standard approaches are used, this technique should be considered when reporting surgical outcomes.