The origins of public performance reporting (PPR) in health care can be traced to 2 events. The first was the publication of annual mortality rate reports for 17 groups of medical and surgical patients by the US Health Care Financing Administration between 1986 and 1992.1 The second was concern within the New York State Department of Health regarding the substantial variation in in-hospital mortality rates for coronary artery bypass graft (CABG) surgery around that time, which led to the publication of risk-adjusted mortality data for all 28 hospitals performing CABG in that State.1 Other States and professional bodies followed, and PPR is now well established in the United States. New York, Pennsylvania, and Massachusetts have subsequently added reporting of percutaneous coronary intervention (PCI). New York, Pennsylvania, and New Jersey report CABG mortality rates for hospitals and individual surgeons.
The establishment of these cardiac registries and PPR arrangements aimed to improve the quality of care in hospitals by providing incentives for hospitals and surgeons to improve performance, and by empowering patients to make informed decisions when selecting a hospital or a surgeon. Earlier studies found that PPR of CABG and PCI outcomes was associated with quality improvement activities among hospitals and surgeons,2,3 and with a reduction in mortality.4,5 However, PPR remains controversial, with some later studies reporting unintended consequences including risk aversion and denial of care to high-risk patients, for example, declining to operate6,7 or referring patients out of State.8 Other studies found no such effects.5 Perhaps as a result, New York began excluding high-risk patients with cardiogenic shock from its analysis of mortality rates in 2006.1
A considerable literature on the various effects of PPR on CABG and PCI now exists, suggesting the need for a systematic review and meta-analysis. However, while several systematic reviews have summarized research on PPR across health care generally, none have focused exclusively on CABG and PCI (Appendix A, Supplemental Digital Content 1, http://links.lww.com/MLR/B624).9–16 Only Campanella et al15 conducted a relevant meta-analysis. None properly distinguished the type of impact studied: performance (quality) effects versus selection (use of health services) effects (Appendix B, Supplemental Digital Content 1, http://links.lww.com/MLR/B624). The aim of this study is to undertake a systematic review and meta-analysis of the impacts of PPR on health service quality, plus any subsequent changes in usage of the health services whose quality indicators have been publicly reported. The topic areas are PPR and its impacts on market share, mortality, and patient mix associated with treatments involving the use of CABG and PCI.
Data Sources and Searches
Six databases were searched for articles published from their inception dates to April 16, 2015: Medline; Embase; PsycINFO; the Cumulative Index to Nursing and Allied Health Literature; Evidence-Based Medicine Reviews; and EconLit. Our search strategy was based on the study by Ketelaar et al,13 which covered: randomized controlled trials (RCTs); cluster RCTs; quasirandomized trials; cluster quasirandomized trials; interrupted time series studies; and controlled before-after studies. We extended our search to include cross-sectional designs where these conformed to the Meta-analysis of Observational Studies in Epidemiology guidelines.17 Search terms were amended with the assistance of a librarian (Appendix C, Supplemental Digital Content 1, http://links.lww.com/MLR/B624). Results of searches were downloaded into EndNote X7.
The search strategy was later extended because, when comparing the output of our search with that of other systematic reviews, particularly Campanella et al,15 it became clear that a number of studies had been conducted by nonepidemiologists (eg, health economists) whose study designs did not use standard epidemiological descriptors. A second search of the databases was conducted on November 14, 2016 to include: experimental study; nonrandomized study; observational cohort; time trend; and comparative study. Reference lists of previous systematic reviews on PPR were also screened for articles.12,13,15,16,18–20 Articles published before 2000 were not included because the practice of PPR before then was significantly different, owing to the widespread adoption of online PPR in the 2000s and the growth of PCI as a substitute for CABG in the mid-1990s, which may have had the added effect of changing the overall population receiving CABG in the 2000s.21,22
Articles were included if: (1) they examined the effect of PPR on outcomes among health care purchasers, providers, or consumers; and (2) the study design was observational or experimental. Articles were excluded if: (1) performance reporting was not publicly disclosed; (2) they reported hypothetical choices; (3) the study design was qualitative; (4) they were published in languages other than English; (5) they were published before the year 2000; (6) pay-for-performance effects were not disaggregated from PPR; or (7) they involved long-term care.
Two authors independently screened articles at the title and abstract level for relevance. The remaining articles were screened at the full-text level. A screening guide adapted from a previous study was used (Appendix D, Supplemental Digital Content 1, http://links.lww.com/MLR/B624).23 Discrepancies between authors were discussed between them and, if they remained unresolved, a third author made the final decision. Studies were grouped according to (a) the type of provider/service whose performance was being publicly reported and (b) the type of impact of PPR (an improvement in performance, or a selection/change in health service usage by provider or consumer).
Data Extraction and Quality Assessment
The following information was extracted from the articles: authors, year of publication, country, study design, study population, sample size, type of PPR data, outcome measures, statistical analysis, and findings. A risk of bias assessment was then made. The methodological quality of observational studies was assessed with the Newcastle-Ottawa Scale (NOS)24 and of RCTs with the Cochrane Collaboration's tool.25 Both tools are commonly used in systematic reviews26,27 and previous evaluation studies have shown satisfactory psychometric properties.28,29 The NOS uses a star system to evaluate the quality of studies based on 3 domains: the selection of the study groups; the comparability of the groups; and the ascertainment of the exposure or outcome of interest (Appendix E, Supplemental Digital Content 1, http://links.lww.com/MLR/B624). The Cochrane Collaboration's tool uses 6 domains to evaluate the quality of RCTs: selection bias; performance bias; detection bias; attrition bias; selective reporting; and other sources of bias. These methods of bias assessment ensure that the only studies included are those that adjusted for potentially confounding variables, for example, changes over time in population risk levels resulting from risk-averse practices, which can affect mortality and other outcomes.
Data Synthesis and Analysis
Effect size estimates were extracted from the studies where possible by one author and reviewed by a second author. Pooled random-effects estimates were calculated using Comprehensive Meta-Analysis software version 3 (Biostat, Englewood, NJ). Studies that did not report appropriate or sufficient data were not included in the meta-analysis but were retained in the systematic review. A random-effects model was selected to account for the heterogeneity of the measures across the studies. Heterogeneity was assessed with the I2 statistic, which describes the percentage of total variation across studies that is due to heterogeneity rather than chance; a value of 0% indicates no observed heterogeneity, and larger values indicate increasing heterogeneity. I2 values of 25%, 50%, and 75% correspond to low, moderate, and high levels of heterogeneity, respectively.30 Publication bias was assessed with the Egger test, a statistical test that detects asymmetry in a funnel plot, where the null hypothesis denotes no publication bias (symmetry) and the alternative hypothesis indicates publication bias (asymmetry).31
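The pooling and heterogeneity quantities referenced above follow standard meta-analytic formulas. As a sketch, assuming the common DerSimonian-Laird estimator (the software's exact estimator is not stated here), for k studies with effect estimates \(\hat\theta_i\) (eg, log odds ratios) and within-study variances \(v_i\):

```latex
% Fixed-effect weights, weighted mean, and Cochran's Q
w_i = \frac{1}{v_i}, \qquad
\bar\theta = \frac{\sum_i w_i \hat\theta_i}{\sum_i w_i}, \qquad
Q = \sum_{i=1}^{k} w_i \left(\hat\theta_i - \bar\theta\right)^2

% Between-study variance (DerSimonian-Laird)
\hat\tau^2 = \max\left(0,\ \frac{Q - (k-1)}{\sum_i w_i - \sum_i w_i^2 \big/ \sum_i w_i}\right)

% Random-effects weights and pooled estimate
w_i^{\ast} = \frac{1}{v_i + \hat\tau^2}, \qquad
\hat\theta_{\mathrm{RE}} = \frac{\sum_i w_i^{\ast} \hat\theta_i}{\sum_i w_i^{\ast}}

% I^2: percentage of variation due to heterogeneity rather than chance
I^2 = \max\left(0,\ \frac{Q - (k-1)}{Q}\right) \times 100\%
```

The Egger test then regresses the standardized effects \(\hat\theta_i/\sqrt{v_i}\) on precision \(1/\sqrt{v_i}\); an intercept significantly different from zero indicates funnel plot asymmetry.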
Study Selection and Quality Assessment
We identified 5961 records through our database searches and searches of previous reviews (Fig. 1). Following title and abstract screening, 5875 records were excluded, leaving 86 articles for full-text screening. Of these, 32 articles were excluded, including 11 deemed low quality. An additional 5 articles were identified by hand search and 1 from the EconLit search. We included 60 articles in our synthesis. These were categorized into 4 groups: (1) CABG and PCI; (2) health plans; (3) hospital performance; and (4) physician performance. Results for the latter 3 will be reported elsewhere. We found 22 studies examining the impact of public reporting of CABG and PCI performance data on market share, mortality, and patient mix outcomes. All studies were rated as moderate quality.
Characteristics of the included CABG and PCI studies are described in Tables 1–3 by outcome type. There were 13 CABG studies, 6 PCI studies, and 3 studies that included both CABG and PCI samples. All studies were published between 2003 and 2016. In total, 21 studies were published in academic journals and 1 was a PhD dissertation. Studies were conducted in the United States (n=19), Canada (n=1), Italy (n=1), and the UK (n=1). Study designs included non-RCT quasiexperiment (n=12), before and after (n=6), retrospective cohort (n=2), and time series (n=2). The total sample size across all studies, excluding 2 that did not report sample size,32,33 was 4,201,388 participants. The sample size per study ranged from 545 to 967,882. The most common type of PPR was report cards (n=16). Outcomes examined included market share (n=6), mortality (n=19), and patient mix outcomes (n=14). The total number of outcomes does not equal the total number of studies because many studies examined >1 outcome.
Effects of PPR on Market Share (CABG)
Six of the 13 CABG studies examined the effects of report cards on hospitals' market share.32–37 Romano et al34 and Shukla35 reported an increase in mean market share for low-mortality outlier hospitals and a decrease for high-mortality outlier hospitals postrelease of report cards. Dranove and Sfekas36 found that, while high-ranking hospitals in New York reported no effect of report card scores on market share, hospitals with "negative news" in the original report experienced a decrease in market share. Jha and Epstein33 and Wang et al37 reported no effect of report cards on market share for either high-performing or low-performing hospitals. Romano and Zhou32 reported only transient effects.
However, Jha and Epstein33 and Wang et al37 reported that a higher proportion of poorly performing surgeons had retired postrelease of the report cards. The former found that >20% of surgeons in the bottom quartile (ie, those with high risk-adjusted mortality rates) stopped practicing CABG surgery within 2 years of the release of the reports, compared with 5% of surgeons in the top quartile. No meta-analysis was conducted as we were only able to extract data from 2 studies.33,34
Effects of PPR on Mortality (CABG)
Ten of the 13 CABG studies examined the effect of PPR on mortality.33–35,38–44 Definitions of mortality varied across the studies: operative mortality (n=3)34,38,39; in-hospital mortality (n=4)35,42–44; 30-day mortality (n=1)40; mortality within 1 year of admission (n=1)41; and mortality undefined (n=1).33 Five studies reported no changes in mortality rates.34,35,38,40,42 In contrast, Li et al,39 Hannan et al,43 Jha and Epstein,33 Dranove et al,41 and Chou et al44 found a significant reduction in mortality rates following the dissemination of report cards. Jha and Epstein33 did so by reporting changes in risk-adjusted mortality rates for high-performing and low-performing hospitals and surgeons after the introduction of report cards. Dranove et al41 concluded that the decline in 12-month mortality rates, which was observed for a CABG population but not for an acute myocardial infarction (AMI) population and was only partly risk-adjusted, was due to a shift in the patient mix of CABG procedures toward healthier patients rather than to report cards. Chou et al44 reported a 5% to 10% reduction in mortality in more competitive hospital markets.
A meta-analysis was conducted on 5 of the 8 short-term mortality studies (Fig. 2A), as we were unable to extract data from 3 studies.35,40,44 The result of the random-effects meta-analysis indicated that PPR was associated with reduced short-term mortality; however, this was not statistically significant [odds ratio (OR)=0.86; 95% confidence interval (CI)=0.71-1.04; P=0.11]. Substantial heterogeneity was observed between effect sizes (I2=91.52%). The result of the Egger test was not statistically significant (P=0.33).
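For illustration, the random-effects pooling behind an estimate of this kind can be sketched in a few lines. This is a minimal DerSimonian-Laird implementation with hypothetical study-level odds ratios and standard errors, not the actual study data or the Comprehensive Meta-Analysis algorithm:

```python
import math

def random_effects_pool(ors, ses):
    """Pool odds ratios on the log scale with a DerSimonian-Laird
    random-effects model; returns pooled OR, 95% CI, and I^2 (%)."""
    y = [math.log(o) for o in ors]           # log odds ratios
    w = [1.0 / s ** 2 for s in ses]          # fixed-effect (inverse-variance) weights
    k = len(y)
    theta_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect mean
    q = sum(wi * (yi - theta_fixed) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)       # between-study variance
    w_star = [1.0 / (s ** 2 + tau2) for s in ses]
    theta_re = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se_re = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return (math.exp(theta_re),
            math.exp(theta_re - 1.96 * se_re),
            math.exp(theta_re + 1.96 * se_re),
            i2)

# Hypothetical study-level ORs and standard errors (illustrative only)
or_pooled, ci_lo, ci_hi, i2 = random_effects_pool(
    ors=[0.70, 0.95, 0.80, 1.10, 0.75],
    ses=[0.10, 0.08, 0.12, 0.09, 0.15],
)
```

Working on the log scale keeps the pooled estimate symmetric before back-transforming to an OR; the between-study variance inflates each study's weight denominator, which is what distinguishes the random-effects model from a fixed-effect analysis.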
Effects of PPR on Mortality (PCI)
All 6 PCI studies examined the effect of PPR on mortality: 30-day mortality (n=2)45,46 or in-hospital mortality (n=4).47–50 A further 3 studies examined in-hospital mortality for combined CABG/PCI samples.51–53 Three studies reported no differences in 30-day mortality among AMI patients undergoing PCI in States with and without PPR.45,46,50 In contrast, Waldo et al,47 McCabe et al,48 and Boyden et al49 reported lower in-hospital mortality rates for AMI patients treated in States with PPR relative to States without PPR. Waldo et al47 also reported a higher in-hospital mortality rate among AMI patients who did not undergo PCI in States with PPR, compared with States without PPR. These different outcomes in the PCI and non-PCI populations, each determined after appropriate risk adjustment, were attributed by the authors to different risk-severity levels in the 2 populations as a result of risk-averse practices among surgeons. McCabe et al48 also concluded that risk-averse practices among surgeons were responsible for their findings; however, they reported only unadjusted observed mortality rates (alongside adjusted predicted mortality rates).
Among AMI patients with cardiogenic shock, Apolito et al51 reported no difference in in-hospital mortality for those treated with PCI/CABG, but an increase in mortality among those not revascularized, in New York compared with States without PPR. In contrast, McCabe et al52 reported a lower in-hospital mortality rate for patients with cardiogenic shock undergoing PCI/CABG in New York compared with States without PPR, following the 2006 change in New York's PPR of mortality rates that excluded AMI patients with cardiogenic shock from analysis. Bangalore et al53 reported a lower in-hospital mortality rate over time for patients with cardiogenic shock undergoing PCI in both New York and Michigan (a non-PPR State), but no differences between the States at each timepoint.
A meta-analysis was conducted on 5 short-term mortality studies (Fig. 2B), excluding McCabe et al48 (data could not be extracted) and the studies of AMI patients with cardiogenic shock.51–53 The result of the random-effects meta-analysis indicated that PPR was associated with reduced short-term mortality, but this was not statistically significant (OR=0.86; 95% CI=0.71-1.05; P=0.15). Substantial heterogeneity was observed between effect sizes (I2=87.33%). The result of the Egger test was not statistically significant (P=0.92).
Effects of PPR on Patient Mix (CABG)
Six of the 13 CABG studies examined the effect of report cards on patient mix.34,37,39,41–43 An additional 2 studies focused solely on CABG among AMI patients with cardiogenic shock.52,53 Romano et al34 concluded that the release of hospital performance reports in California was associated with increased volume at low-mortality hospitals, and may have reduced referrals of high-risk patients to high-mortality hospitals. Dranove et al41 reported that illness severity fell in patients receiving CABG following report card release; however, the authors also found that the proportion of severe cases of AMI in teaching hospitals increased in States with PPR. In contrast, 4 studies37,39,42,43 reported no changes in overall patient case mix, concluding there was no decrease in access for high-risk patients receiving CABG surgery. Hannan et al43 reported a higher proportion of high-risk patients undergoing CABG surgery in States with PPR than in the rest of the country.
Among AMI patients with cardiogenic shock, McCabe et al52 and Bangalore et al53 reported no change in the proportion of patients who underwent CABG in New York after the exclusion of cardiogenic shock from PPR on mortality rates in 2006. Given the various measures of patient mix across the studies, no meta-analysis was conducted.
Effects of PPR on Patient Mix (PCI)
Five of the 6 PCI studies examined the effect of PPR on AMI patient mix.45,47–50 Three additional studies investigated the impact of PPR on patient mix: 1 comprised both PCI and CABG populations51 and 2 focused solely on AMI patients with cardiogenic shock.52,53 All 5 PCI studies found differences in AMI patient mix, reporting that high-risk patients were less likely to be treated with PCI in States with PPR, compared with States without PPR.45,47–50 Similarly, among AMI patients with cardiogenic shock, Apolito et al51 reported that high-risk patients with cardiogenic shock in New York were less likely to undergo CABG/PCI treatment than those in States without PPR. McCabe et al52 and Bangalore et al53 found a substantial increase in PCI being performed following the exclusion of patients with cardiogenic shock from PPR of mortality rates in New York. However, the overall rate of PCI performed in New York remained much lower than in States without PPR. Given the different definitions of patient mix, no meta-analysis was conducted.
Findings varied across outcome types and procedures. For short-term mortality and CABG, 5 of the 10 studies reported a reduction in mortality.33,39,41,43,44 Meta-analysis of a subset of studies indicated a near significant decline (OR=0.86; 95% CI=0.71-1.04; P=0.11). For short-term mortality and PCI, 3 of the 6 studies47–49 reported a reduction in mortality. Meta-analysis of a subset of studies likewise indicated a near significant decline (OR=0.86; 95% CI=0.71-1.05; P=0.15).
For market share and CABG, the results provided some evidence of effect: 3 of the 6 studies indicated an increase in market share for low-mortality hospitals,32,34,35 2 of which also showed a decrease in market share for high-mortality hospitals.32,34 Two studies reported withdrawal from practice by poorly performing surgeons.33,37 For patient mix and PCI, the results provided moderate evidence, with 5 of the 6 studies reporting a change in mix.45,47–50 In 4 studies, the change was toward PCI in patients with reduced severity of disease.
Our mortality findings, although not statistically significant, are similar to those previously reported. Campanella et al15 reported that PPR was associated with reduced mortality for cardiovascular diseases [risk ratio, 0.83 (95% CI, 0.77-0.91; P<0.0001; I2=95%)] based on a meta-analysis of 6 studies. The difference is likely due to: (1) 7 additional studies, not considered by Campanella et al,15 being included in our meta-analyses; (2) our stratification of treatments (CABG vs. PCI); (3) our inclusion of only studies focused on short-term mortality; (4) our restriction of the time period to 2000 onwards; (5) our consideration of study quality; and (6) the inability to extract data from Guru et al40 for meta-analysis (though it was retained in the review). Totten et al14 reported that 8 of 13 studies of cardiac reporting programs found small declines in mortality.
Although there is some evidence of PPR reducing mortality rates in CABG/PCI-treated patients, evidence for major effects, >25 years after the introduction of PPR, does not exist. This is a matter of some concern. Akin to previous reviews, we have identified both positive and negative effects of PPR. The former include some movement of patients from high-mortality to low-mortality hospitals and the withdrawal/retirement of low-performing surgeons. The latter include risk-averse practices toward PCI patients by their doctors. However, risk aversion should not explain the reduction in mortality rates in the treated population in studies with proper risk adjustment. The significance of risk aversion may be complex: detrimental for AMI patients with cardiogenic shock (a group now largely avoided in PPR terms in New York, as these cases are no longer subject to PPR), but beneficial for patients with multivessel coronary artery disease and/or concomitant diabetes mellitus.54 Perhaps the reduction in mortality rates may be attributed to surgeons wanting to maintain or improve their reputation by ceasing to perform inadvisable procedures on potentially unsalvageable patients. This requires further study. Other system impacts could also be further researched (eg, the workforce impacts of the withdrawal of low-performing surgeons).
PPR practice could also be improved. Wasfy et al55 have argued for a shift in PPR focus from procedures to disease-based population health. Positive impacts on patients undergoing the relevant procedures may obscure negative effects that occur, as an indirect consequence, in patients not undergoing those procedures. The results of Waldo et al,47 while only 1 study, attest to this: they reported higher mortality rates among patients with AMI not treated with PCI in States with PPR, compared with States without PPR. Therefore, PPR effects on patients not treated by CABG/PCI also require further study.
Wasfy et al55 have argued that better measures of outcomes are desirable. Mortality, while easy to measure and to understand, may not be the best measure of quality, as it is of low frequency and differences may not discriminate well between provider groups. Other outcome measures, such as postprocedure angina, revascularizations performed, and process measures, may be desirable. Publicly reporting these outcomes can drive improvement in the delivery of care as providers identify underperforming areas. For consumers, transparency and accountability of providers can increase awareness, trust, and confidence in the health system, and support health care decision-making.
Hannan1 has argued for improvements in the completeness, accuracy, and risk-adjustment of rates. This is necessary both to overcome "gaming the system" and to build confidence in the results. Further auditing of hospital-reported results may be necessary. Both Hannan1 and Wasfy et al55 agree that the use of large administrative databases should be avoided, as these do not properly record clinical data such as diagnoses and risk factors.
Both Hannan1 and Wasfy et al55 argued that the involvement of multiple-constituency stakeholders including experts, providers, and consumers is desirable in the development of PPR systems to promote public acceptance, use, and impact. Finally, PPR is only 1 quality assurance approach and is perhaps best undertaken in conjunction with other approaches such as Continuous Quality Improvement, Pay-for-Performance, and Evidence-based/clinical guidelines.
We did not include articles published pre-2000. This means that impacts of PPR, particularly positive impacts, could have occurred in the 1990s without being detected by our review. The search did not include studies in languages other than English, gray literature, or qualitative studies. Studies that did not explicitly describe their research design may also have been missed. Results of the meta-analyses should be interpreted with caution, as only a subset of studies was suitable for meta-analysis. In addition, there were high levels of heterogeneity, likely due to the small number of studies and the inclusion of various study designs.30 Subgroup analysis was not possible, particularly to examine PPR effects at different times within this 16-year period. This would have been beneficial, as pairs of studies conducted in the same States (CABG in California: Romano et al34 and Li et al39; PCI in New York compared with Michigan: Moscucci et al50 and Boyden et al49) showed somewhat more positive PPR effects in the study conducted later. The study periods of Moscucci et al50 and Boyden et al49 were 13 years apart (1998–1999 vs. 2011–2012). Romano et al34 and Li et al39 were closer together (1997–2002 and 2003–2006, respectively); however, hospital participation was voluntary in the former period and mandated in the latter. The literature has overwhelmingly been derived from 1 country and 1 health system (the United States). It should also be noted that meta-analyses, being oriented to average effects, are insensitive to differences in study results arising from differences in context and from minor methodological differences between individual studies. These should be further studied.
The authors thank Dr Stuart McLennan who conducted the first search, Dr Angela Nicholas and Andrea Timothy for screening the titles and abstracts from the first search, and Jim Berryman for assisting in the search strategies.
1. Hannan EL. Public reporting of cardiac data: pros, cons, and lessons for the future. In: Barach PR, Jacobs JP, Lipshultz SE, Laussen PC, eds. Pediatric and Congenital Cardiac Care. London: Springer; 2015:467–477.
2. Dziuban SW, McIlduff JB, Miller SJ, et al. How a New York cardiac surgery program uses outcomes data. Ann Thorac Surg. 1994;58:1871–1876.
3. Chassin MR. Achieving and sustaining improved quality: lessons from New York State and cardiac surgery. Health Aff. 2002;21:40–51.
4. Hannan EL, Kilburn H, Racz M, et al. Improving the outcomes of coronary artery bypass surgery in New York State. JAMA. 1994;271:761–766.
5. Peterson ED, DeLong ER, Jollis JG, et al. The effects of New York’s bypass surgery provider profiling on access to care and patient outcomes in the elderly. J Am Coll Cardiol. 1998;32:993–999.
6. Burack JH, Impellizzeri P, Homel P, et al. Public reporting of surgical mortality: a survey of New York State cardiothoracic surgeons. Ann Thorac Surg. 1999;68:1195–1200.
7. Narins CR, Dozier AM, Ling FS, et al. The influence of public reporting of outcome data on medical decision making by physicians. Arch Intern Med. 2005;165:83–87.
8. Omoigui NA, Miller DP, Brown KJ, et al. Outmigration for coronary bypass surgery in an era of public dissemination of clinical outcomes. Circulation. 1996;93:27–33.
9. Marshall MN, Shekelle P, Leatherman S, et al. The public release of performance data: what do we expect to gain? A review of the evidence. JAMA. 2000;283:1866–1874.
10. Schauffler HH, Mordavsky JK. Consumer reports in health care: do they make a difference? Annu Rev Public Health. 2001;22:69–89.
11. Fung CH, Lim Y-W, Mattke S, et al. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008;148:111–123.
12. Faber M, Bosch M, Wollersheim H, et al. Public reporting in health care: how do consumers use quality-of-care information?: a systematic review. Med Care. 2009;47:1–8.
13. Ketelaar NA, Faber MJ, Flottorp S, et al. Public release of performance data in changing the behaviour of healthcare consumers, professionals or organisations. Cochrane Libr. 2011;11:1–62.
14. Totten AM, Wagner J, Tiwari A, et al. Closing the quality gap: revisiting the state of the science (vol. 5: public reporting as a quality improvement strategy). Evid Rep Technol Assess. 2012;2085:1.
15. Campanella P, Vukovic V, Parente P, et al. The impact of public reporting on clinical outcomes: a systematic review and meta-analysis. BMC Health Serv Res. 2016;16:296–309.
16. Berger ZD, Joy SM, Hutfless S, et al. Can public reporting impact patient outcomes and disparities? A systematic review. Patient Educ Couns. 2013;93:480–487.
17. Stroup DF, Berlin JA, Morton SC, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. JAMA. 2000;283:2008–2012.
18. Mukamel DB, Haeder SF, Weimer DL. Top-down and bottom-up approaches to health care quality: the impacts of regulation and report cards. Annu Rev Public Health. 2014;35:477–497.
19. Chen J. Public reporting of health system performance: review of evidence on impact on patients, providers and healthcare organisations: an evidence check rapid review brokered by the Sax Institute for the Bureau of Health Information. 2010. Available at: www.saxinstitute.org.au. Accessed May 11, 2017.
20. Pearse J, Mazevska D. The impact of public disclosure of health performance data on effectiveness and efficiency: an evidence check rapid review brokered by the Sax Institute. 2010. Available at: www.saxinstitute.org.au. Accessed May 11, 2017.
21. Ulrich MR, Brock DM, Ziskind AA. Analysis of trends in coronary artery bypass grafting and percutaneous coronary intervention rates in Washington state from 1987 to 2001. Am J Cardiol. 2003;92:836–839.
22. Harlan BJ. Statewide reporting of coronary artery surgery results: a view from California. J Thorac Cardiovasc Surg. 2001;121:409–417.
23. Paradies Y, Ben J, Denson N, et al. Racism as a determinant of health: a systematic review and meta-analysis. PloS One. 2015;10:e0138511.
24. Wells G, Shea B, O'Connell D, et al. The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses. 2014. Available at: www.ohri.ca/programs/clinical_epidemiology/oxford.asp. Accessed May 11, 2017.
25. Higgins JP, Altman DG, Gøtzsche PC, et al. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928–d5936.
26. Quigley JM, Thompson JC, Halfpenny NJ, et al. Critical appraisal of nonrandomized studies—a review of recommended and commonly used tools. J Eval Clin Pract. 2018. doi:10.1111/jep.12889. [Epub ahead of print].
27. Zeng X, Zhang Y, Kwong JSW, et al. The methodological quality assessment tools for preclinical and clinical studies, systematic review and meta-analysis, and clinical practice guideline: a systematic review. J Evid Based Med. 2015;8:2–10.
28. Savović J, Weeks L, Sterne JA, et al. Evaluation of the Cochrane Collaboration’s tool for assessing the risk of bias in randomized trials: focus groups, online survey, proposed recommendations and their implementation. Syst Rev. 2014;3:37–48.
29. Luchini C, Stubbs B, Solmi M, et al. Assessing the quality of studies in meta-analyses: advantages and limitations of the Newcastle Ottawa Scale. World J Meta-Anal. 2017;5:80–84.
30. Higgins JPT, Thompson SG, Deeks JJ, et al. Measuring inconsistency in meta-analyses. BMJ. 2003;327:557–560.
31. Sedgwick P. Meta-analyses: how to read a funnel plot. BMJ. 2013;346:346–347.
32. Romano PS, Zhou H. Do well-publicized risk-adjusted outcomes reports affect hospital volume? Med Care. 2004;42:367–377.
33. Jha AK, Epstein AM. The predictive accuracy of the New York State coronary artery bypass surgery report-card system. Health Aff. 2006;25:844–855.
34. Romano PS, Marcin JP, Dai JJ, et al. Impact of public reporting of coronary artery bypass graft surgery performance data on market share, mortality, and patient selection. Med Care. 2011;49:1118–1125.
35. Shukla M. Long-term Impact of Coronary Artery Bypass Graft Surgery (CABG) Report Cards on CABG Mortality and Provider Market Share and Volume [PhD dissertation]. USA: The Faculty of the School of Public Health and Health Services, The George Washington University; 2013.
36. Dranove D, Sfekas A. Start spreading the news: a structural estimate of the effects of New York hospital report cards. J Health Econ. 2008;27:1201–1207.
37. Wang J, Hockenberry J, Chou S-Y, et al. Do bad report cards have consequences? Impacts of publicly reported provider quality information on the CABG market in Pennsylvania. J Health Econ. 2011;30:392–407.
38. Khan OA, Iyengar S, Pontefract DE, et al. Impact of surgeon-specific data reporting on surgical training. Ann R Coll Surg Engl. 2007;89:796–798.
39. Li Z, Carlisle DM, Marcin JP, et al. Impact of public reporting on access to coronary artery bypass surgery: the California Outcomes Reporting Program. Ann Thorac Surg. 2010;89:1131–1138.
40. Guru V, Fremes SE, Naylor CD, et al. Public versus private institutional performance reporting: what is mandatory for quality improvement? Am Heart J. 2006;152:573–578.
41. Dranove D, Kessler D, McClellan M, et al. Is more information better? The effects of “report cards” on health care providers. J Polit Econ. 2003;111:555–588.
42. Chen Y, Meinecke J. Do healthcare report cards cause providers to select patients and raise quality of care? Health Econ. 2012;21(S1):33–55.
43. Hannan EL, Sarrazin MSV, Doran DR, et al. Provider profiling and quality improvement efforts in coronary artery bypass graft surgery: the effect on short-term mortality among Medicare beneficiaries. Med Care. 2003;41:1164–1172.
44. Chou S-Y, Deily ME, Li S, et al. Competition and the impact of online hospital report cards. J Health Econ. 2014;34:42–58.
45. Joynt KE, Blumenthal DM, Orav EJ, et al. Association of public reporting for percutaneous coronary intervention with utilization and outcomes among Medicare beneficiaries with acute myocardial infarction. JAMA. 2012;308:1460–1468.
46. Renzi C, Asta F, Fusco D, et al. Does public reporting improve the quality of hospital care for acute myocardial infarction? Results from a regional outcome evaluation program in Italy. Int J Qual Health Care. 2014;26:223–230.
47. Waldo SW, McCabe JM, O’Brien C, et al. Association between public reporting of outcomes with procedural management and mortality for patients with acute myocardial infarction. J Am Coll Cardiol. 2015;65:1119–1126.
48. McCabe JM, Joynt KE, Welt FG, et al. Impact of public reporting and outlier status identification on percutaneous coronary intervention case selection in Massachusetts. JACC Cardiovasc Interv. 2013;6:625–630.
49. Boyden TF, Joynt KE, McCoy L, et al. Collaborative quality improvement vs public reporting for percutaneous coronary intervention: a comparison of percutaneous coronary intervention in New York vs Michigan. Am Heart J. 2015;170:1227–1233.
50. Moscucci M, Eagle KA, Share D, et al. Public reporting and case selection for percutaneous coronary interventions: an analysis from two large multicenter percutaneous coronary intervention databases. J Am Coll Cardiol. 2005;45:1759–1765.
51. Apolito RA, Greenberg MA, Menegus MA, et al. Impact of the New York State Cardiac Surgery and Percutaneous Coronary Intervention Reporting System on the management of patients with acute myocardial infarction complicated by cardiogenic shock. Am Heart J. 2008;155:267–273.
52. McCabe JM, Waldo SW, Kennedy KF, et al. Treatment and outcomes of acute myocardial infarction complicated by shock after public reporting policy changes in New York. JAMA Cardiol. 2016;1:648–654.
53. Bangalore S, Guo Y, Xu J, et al. Rates of invasive management of cardiogenic shock in New York before and after exclusion from public reporting. JAMA Cardiol. 2016;1:640–647.
54. Boden WE, Mancini GBJ. CABG for complex CAD: when will evidence-based practice align with evidence-based medicine? J Am Coll Cardiol. 2016;67:56–58.
55. Wasfy JH, Borden WB, Secemsky EA, et al. Public reporting in cardiovascular medicine. Circulation. 2015;131:1518–1527.