Academic Medicine: February 2009 - Volume 84 - Issue 2
doi: 10.1097/ACM.0b013e3181939034
Institutional Issues

Can Hospital Rankings Measure Clinical and Educational Quality?

Philibert, Ingrid PhD, MBA

Author Information

Dr. Philibert is senior vice president, Field Activities, Accreditation Council for Graduate Medical Education, Chicago, Illinois.

Correspondence should be addressed to Dr. Philibert, Accreditation Council for Graduate Medical Education, 515 North State Street, Suite 2000, Chicago, IL 60610; telephone: (312) 755-5003; e-mail: (iphilibert@acgme.org).

Abstract

Background: A relative dearth of relevant data hampers efforts to demonstrate a link between educational and clinical quality and may preclude residency applicants from identifying programs with the best clinical outcomes. Existing clinical rankings could fill this gap if they are based on sound judgments about quality.

Method: To explore the potential of the U.S. News & World Report “America’s Best Hospitals” clinical rankings in measuring the quality of clinical and learning environments, the author systematically reviewed the U.S. and Canadian literature for 1975 through 2007 regarding quality indicators and teaching hospitals. Individual data elements of the rankings were examined to assess the extent to which they included accepted measures of clinical performance.

Results: A total of 187 articles met the inclusion criteria of addressing clinical quality criteria relevant to the rankings and quality assessment in teaching hospitals. Statistical examination of the data underlying the rankings and of their relationship with measures of educational and clinical quality showed that the rankings are largely based on institutional “prestige.” Ranked clinical programs and institutions consistently outperform their counterparts on available indices, suggesting that the data elements underlying the rankings may provide valid assessments of the quality of care in educational settings.

Conclusions: Data elements in the rankings can be used to assess clinical and, to a lesser extent, educational quality, but the number of ranked specialties and institutions is too small to have a widespread effect on clinical or educational quality unless ranked institutions serve as sites for the development, study, and dissemination of best practices.

In 1997, the Accreditation Council for Graduate Medical Education (ACGME) introduced the Outcome Project as a long-term initiative to increase the emphasis on educational outcomes in the accreditation of residency education programs. The Outcome Project entails moving from standards that define structure and educational processes to accreditation based on programs’ achievements in producing competent graduates.1 In Phase Three, which began in July 2006, the Outcome Project also sought to demonstrate a relationship between educational and clinical quality, highlighting the importance of the attributes of the learning environment to residents’ professional development and attainment of competence for independent practice.2 Despite a growing emphasis on performance improvement, as of 2008 the ACGME’s accreditation model focuses on compliance with minimum standards, and the GME program outcomes shared with residency applicants and the public are limited to programs’ accreditation status and the interval between reviews.3 Although this information indicates a program’s “substantial compliance,” it does not detail the nature and extent of possible problems or indicate whether a program provides a superior clinical or educational environment. With valid information on clinical quality, applicants could make more informed decisions about which GME programs to consider, instead of relying on information shared by the programs themselves, existing sources that lack outcomes data, or word of mouth.

A question important to Phase Three of the Outcome Project, as well as to the medical community and the general public, is whether programs with better clinical outcomes produce superior graduates. A comprehensive review of 52 articles on factors affecting resident performance found few clear associations and identified a major gap in the literature on the factors that affect resident performance.4 Current residency program quality assessment uses National Resident Matching Program (NRMP) performance and graduates’ performance on American Board of Medical Specialties certifying examinations as proxies for residents’ global educational outcomes (or attainment of competence). However, these measures do not predict clinical performance,5 and they are influenced to some extent by programs’ selectivity and graduates’ general cognitive abilities.6 For prestigious, highly selective programs that accept some of the strongest applicants, these measures may overstate the programs’ contributions to preparing competent physicians.

The “America’s Best Hospitals” Rankings

Consumer or third-party rankings exist for a range of domains, from purchasing a car to selecting a college.7,8 These rankings assign quantitative and/or qualitative measures to attributes, weighting multiple criteria to create a list that orders entities from top (best or most desirable) to bottom (worst or least desirable). One ranking, “America’s Best Hospitals,” released since 1990 by U.S. News & World Report, has received public attention rare for a quality measure. Despite critiques that they lack validity,9,10 the “America’s Best Hospitals” rankings are valued by institutions and enjoy greater public recognition than more “scientific” sources of information on clinical quality available to patients and referring physicians.11,12 As a potential source of data on the quality of residency programs, the rankings have advantages, including public availability and presentation of data for individual clinical specialties—the unit of analysis of interest to applicants.

The “America’s Best Hospitals” rankings encompass 16 specialties and multidisciplinary domains, such as cancer and heart care.13 All specialty rankings include a reputation score derived from a survey of board-certified physicians in each specialty who are asked to list the five “best” hospitals in their field.13 In four specialties (Ophthalmology, Psychiatry, Rehabilitation, and Rheumatology), the rankings are based solely on this reputation score. In six specialties and subspecialties (Ear, Nose, and Throat; Endocrinology; Gynecology; Neurology and Neurosurgery; Orthopaedic Surgery; and Urology); in Geriatrics and Pediatrics, beginning in 2007; and in five multidisciplinary domains (Cancer; Digestive Disorders; Heart Care and Surgery; Kidney Disease; and Respiratory Disorders), the rankings also incorporate information on program structure, process, and outcomes, in keeping with established frameworks for measuring quality.14 This information comes from multiple sources and emphasizes technology, multidisciplinary approaches to care, and accreditation and external recognition, such as designation as a cancer center or “Nurse Magnet Hospital”; clinical outcomes are assessed via a case-mix-adjusted mortality index.13

The information on patient outcomes, institutional designations, and technical capabilities is aggregated into a composite index that weights structure, process, and outcomes.13 In 2007, Geriatrics and Pediatrics, which previously had used only reputation scores to determine hospital rankings, began to use clinical data as well. For Pediatrics, this produced a ranking of children’s hospitals indistinguishable from that of earlier years.15 U.S. News & World Report also publishes an institutional “Honor Roll” of hospitals that rank at least two standard deviations above the mean in six or more specialties; the Honor Roll encompassed 14 hospitals in 2006 and 18 in 2007.
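The aggregation can be pictured with a brief sketch. The code below is illustrative only and is not the published U.S. News/RTI methodology: it assumes z-standardized reputation, mortality, and structure components combined with equal weights, applied to made-up values for a few hypothetical hospitals.

```python
import pandas as pd

# Hypothetical specialty-level inputs for four hospitals (illustrative values only).
hospitals = pd.DataFrame({
    "hospital": ["A", "B", "C", "D"],
    "reputation": [12.4, 3.1, 0.8, 0.2],          # % of surveyed physicians naming the hospital
    "mortality_index": [0.78, 0.92, 1.05, 1.10],  # case-mix adjusted; lower is better
    "structure_score": [0.9, 0.7, 0.6, 0.4],      # volume, technology, nurse staffing, etc.
})

def zscore(series):
    """Standardize a column so the components are on a comparable scale."""
    return (series - series.mean()) / series.std(ddof=0)

# Equal-weight composite: reputation and structure count positively,
# mortality counts negatively (lower mortality raises the score).
hospitals["composite"] = (
    zscore(hospitals["reputation"])
    - zscore(hospitals["mortality_index"])
    + zscore(hospitals["structure_score"])
) / 3

print(hospitals.sort_values("composite", ascending=False))
```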

In this review of the literature, I explore the utility of the “America’s Best Hospitals” clinical rankings as indicators of the educational quality of GME programs. The aim is to determine whether these existing rankings are based on sound judgments about quality and could fill the knowledge gap about the connection between educational and clinical quality.

Method

Literature review

I searched the literature for evidence that the data elements included in the “America’s Best Hospitals” rankings are valid indicators of health care quality. The search focused on terms related to clinical quality and outcomes, alone and in combination, and was limited to publications from the United States and Canada from 1975 through 2007. I examined the reference lists of retrieved articles for additional potential sources.

A total of 938 articles relating to quality indicators and teaching hospitals met the initial search criteria. I reviewed the abstracts of 517 articles and the full text of selected articles. Of these, 187 met the inclusion criteria of addressing clinical quality measures relevant to the rankings and assessing quality in teaching hospitals. For selected areas, such as the relationship between clinical volume and mortality or between nurse staffing and outcomes, the literature is vast, and space constraints made it impossible to include all articles in the review; selection therefore focused on seminal articles and meta-analyses. I used a standard data extraction form that collected information on study design, sample size, and relevant outcomes.

Statistical analysis of the elements included in the rankings

The second analysis examined the data elements in the “America’s Best Hospitals” rankings to assess the extent to which they included accepted measures of clinical performance. The rankings are based on a summed score in which reputation, mortality, and process and structure attributes are said to receive equal weight.13 To explore the effect of individual items on the rankings, I analyzed the six specialty rankings whose structure, process, and outcome indicators could be matched with core residency programs, assessing the contribution of individual variables to the score that determined rank. I entered four variables into a linear regression model in SAS: reputation, mortality, volume, and nursing index (registered nurse [RN] FTEs divided by average daily census). The rankings’ Technology Index, Nurse Magnet Hospital status, Patient/Community Service score, and Trauma Center Designation were not used, both because of overlapping measures (e.g., RN staffing is a criterion for Nurse Magnet Hospital designation) and because ranked institutions were similar on many of these indicators.
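As a rough illustration of this model, the sketch below fits an ordinary least squares regression of a ranking score on the four predictors. It uses simulated data and Python’s statsmodels rather than SAS, so the variable values, coefficients, and the simulated dominance of reputation are assumptions for illustration, not the published results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 50  # hypothetical number of ranked programs in one specialty

# Simulated stand-ins for the four predictors used in the article's model.
df = pd.DataFrame({
    "reputation": rng.gamma(2.0, 2.0, n),        # % of physicians naming the hospital
    "mortality": rng.normal(0.9, 0.1, n),        # case-mix-adjusted mortality index
    "volume": rng.lognormal(6.0, 0.5, n),        # specialty discharges
    "nursing_index": rng.normal(1.5, 0.3, n),    # RN FTEs / average daily census
})
# Simulated ranking score, constructed so that reputation dominates (as the article reports).
df["score"] = (3.0 * df["reputation"] - 5.0 * df["mortality"]
               + 0.001 * df["volume"] + rng.normal(0, 1, n))

X = sm.add_constant(df[["reputation", "mortality", "volume", "nursing_index"]])
model = sm.OLS(df["score"], X).fit()
print(model.summary())  # coefficients and p-values for each predictor

# Variance inflation factors as a simple multicollinearity check.
vif = {col: variance_inflation_factor(X.values, i)
       for i, col in enumerate(X.columns) if col != "const"}
print(vif)
```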

Results

Literature review

Detailed results of the literature review are shown in Table 1, which presents data for 30 of the 187 articles relevant to assessing the validity and utility of the rankings and/or the data elements included in them. Three studies reported that hospitals included in the “America’s Best Hospitals” rankings have lower risk-adjusted mortality than comparison institutions.16–18 Studies of the relationship among clinical volume, mortality, and quality showed that hospitals with higher volumes had better mortality outcomes, a relationship supported across individual studies, meta-analyses, and systematic reviews.19–24 The association was stronger for surgical procedures and particularly strong for cardiac surgery.25 However, newer studies with better risk adjustment produced smaller estimates of the positive effect of volume on mortality.26,27 Individual surgeons’ operative volumes also were associated with better outcomes, an effect independent of and additional to the effect of institutional volume.28,29

Table 1

Virtually all hospitals in the rankings are teaching hospitals. One likely reason is the inclusion criteria, which specify membership in the Council of Teaching Hospitals or the presence of cutting-edge technology; both of these criteria favor teaching hospitals.13 In addition, the physician survey that produces the reputation score asks respondents to “list the five hospitals (and/or affiliated medical schools) [italics added] in the United States that you believe provide the best care for patients with the most serious and difficult medical problems.” This may suggest to respondents that a medical school affiliation is desirable. Studies exploring whether teaching hospitals have better outcomes than their nonteaching counterparts have shown lower mortality in teaching hospitals, but some also found higher costs,30–33 and in one study teaching hospitals performed less well in the areas of prevention and health counseling than did nonteaching hospitals.34 Two other criteria in the rankings—third-party recognition and designation35–38 and higher nurse staffing ratios39,40—also were associated with lower patient mortality after adjusting for patient and other characteristics.

One systematic study has shown that only 55% of care delivered across all hospitals conforms to accepted best practice,41 and recent systems for assessing hospital quality consider adherence to accepted clinical guidelines that experts have deemed important for avoiding unnecessary variation and for increasing the quality and efficiency of care.42,43 Two studies reported that the improved outcomes in ranked hospitals may relate to greater adherence to treatment guidelines.17,18 However, a study comparing the rankings with quality ratings based on guideline adherence (the Medicare program’s Hospital Compare system) found that fewer than 50% of the hospitals ranked for cardiac diagnoses and only 15% of the hospitals ranked for respiratory disorders also had top-quartile Hospital Compare scores for these diagnoses. Further, only one third of “America’s Best Hospitals” Honor Roll institutions placed in the top quartile of the Hospital Compare system.44 The University Healthsystem Consortium (UHC) included two Honor Roll institutions in its ranking of the five top-performing hospitals.45 The other institutions in the UHC ranking were not represented in the “America’s Best Hospitals” institutional Honor Roll and were represented only to a limited degree in the individual specialty rankings.

Relationship among the data elements in the rankings

Table 2 presents the analysis of the contribution of each explanatory variable for the six chosen specialties in the rankings. For two indicators, reputation and mortality, significance is at the .001 level or below across all specialties. Volume is significant for all specialties except Otolaryngology, and nursing FTEs are significant only for heart care. Tests for multicollinearity were negative. The partial r2 values for the explanatory variables suggest that, despite statistical significance, the added contribution of variables beyond reputation is small and that rank is determined primarily by reputation.

Table 2

Table 3 shows the contribution of the explanatory variables in a model that omits reputation, using the maximum R2 improvement method (SAS), which seeks the “best” model for each number of predictors. This showed that between 9% and 41% of the ranking scores are explained by the remaining variables, with volume making the largest contribution in all specialties and only one other variable (the nursing index for Neurology and Neurological Surgery) approaching significance at the .05 level. This confirmed critiques that the reputation component of the rankings overshadows other data that may offer more valid information on clinical quality.9,10
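The maximum R2 improvement method is, in essence, a guided search over predictor subsets. The sketch below is an exhaustive best-subset analogue in Python, not the SAS procedure used for the published analysis; the commented example call assumes the hypothetical data frame from the earlier regression sketch, with reputation deliberately excluded from the candidate predictors.

```python
from itertools import combinations

import statsmodels.api as sm

def best_subset_r2(df, outcome, candidates):
    """For each subset size, fit every combination of candidate predictors and
    keep the subset with the highest R^2 (a rough analogue of SAS MAXR selection)."""
    results = {}
    for k in range(1, len(candidates) + 1):
        best = None
        for subset in combinations(candidates, k):
            X = sm.add_constant(df[list(subset)])
            r2 = sm.OLS(df[outcome], X).fit().rsquared
            if best is None or r2 > best[1]:
                best = (subset, r2)
        results[k] = best  # (best subset of size k, its R^2)
    return results

# Using the hypothetical data frame from the regression sketch above, with
# reputation omitted from the candidate set:
# print(best_subset_r2(df, "score", ["mortality", "volume", "nursing_index"]))
```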

Table 3
Relevance to an improved learning environment

To explore the link between clinical and educational quality, the analysis assessed the performance of ranked programs on accepted measures of educational quality, including accreditation performance, NRMP performance, and, to the extent available, graduates’ board certification performance. For accreditation performance, residency programs in the ACGME’s database of accredited programs were matched to the “America’s Best Hospitals” rankings, producing a sample of 390 accredited programs in institutions ranked in the clinical specialty represented by the program. Analysis of their accreditation performance showed a much lower rate of adverse actions (less than 1%, compared with 8% for all ACGME-accredited programs). In addition, the interval between accreditation reviews, which reflects the ACGME’s and the review committee’s confidence in compliance and educational quality, is significantly longer for ranked programs (P = .02). Analysis of ranked programs’ performance in the 2006 and 2007 resident matches showed that all had high fill rates, but it was not possible to distinguish their performance from that of a sizable share of nonranked programs. In addition, match performance reflects programs’ recruiting success, not their educational achievements.
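A comparison of adverse-action rates like this one could be run as a standard two-proportion test, sketched below. The article does not specify the test it used, and the raw counts here are assumptions chosen only to approximate the reported rates (less than 1% of 390 ranked programs versus roughly 8% of all accredited programs).

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Assumed counts for illustration: 3 adverse actions among 390 ranked programs
# versus 640 among a hypothetical 8,000 accredited programs (about 8%).
adverse_actions = np.array([3, 640])
program_counts = np.array([390, 8000])

z_stat, p_value = proportions_ztest(adverse_actions, program_counts)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```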

American Board of Medical Specialties (ABMS) certification examination pass rates offer valid information about educational outcomes, albeit in a limited range that emphasizes medical knowledge. To date, only two ABMS member boards (the American Board of Internal Medicine and the American Board of Pediatrics) have released pass rate data for individual programs, and Pediatrics is the only specialty in the “America’s Best Hospitals” rankings for which this information is available. Analysis showed that pediatrics programs in ranked institutions outperformed the national average, with an ABMS certification pass rate of 93.8% compared with a national average of 78.9%; the percentage of ranked programs’ graduates taking the examination also was slightly higher (94.9% versus 91.1%). To assess this relationship in more detail for the 25 pediatrics programs represented in the rankings, the analysis examined the relationship between programs’ place in the 2006 rankings and their rank by 2005–2006 ABMS certification pass rate, using the Kruskal-Wallis test, a nonparametric statistic for examining groups in which the actual observations are replaced by their ranks.46 The value of 0.422 on the Kruskal-Wallis test suggests a small positive relationship between the two rankings, despite the range restriction in the data.
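For readers who want to reproduce a rank-based comparison of this kind, the sketch below computes a Spearman rank correlation on hypothetical ranks for 25 programs. This is a substitute for, not a replication of, the Kruskal-Wallis approach reported above, and the data are invented solely to illustrate the mechanics.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical ranks for 25 pediatrics programs: position in the 2006 specialty
# ranking versus rank by 2005-2006 board certification pass rate (illustrative only).
rng = np.random.default_rng(1)
ranking_2006 = np.arange(1, 26)
pass_rate_rank = ranking_2006 + rng.integers(-6, 7, size=25)  # noisy but positively related

rho, p_value = spearmanr(ranking_2006, pass_rate_rank)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
```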

Clinical “centers of excellence” (in which groups of highly trained, specialized experts focus on a particular clinical area) for cancer, cardiac care, and digestive disorders in ranked hospitals may be important for the learning environment they create. The high degree of physician specialization likely found in these centers has been shown to improve performance in diagnosis and care,47 although this advantage appears limited to complex cases.48 Among the reasons are that experts care for patients with similar diseases, work in settings with better equipment, and have more knowledgeable colleagues, all of which increase performance feedback.49 These factors also could contribute to a better learning environment in ranked institutions. If ranked institutions adhere more closely to guidelines that promote efficient, high-quality care, this may be relevant to educational outcomes, because residents seem to adopt the practice attributes of their educational setting, as suggested by the relatively poorer performance on examination questions about appropriate use of medical resources among residents who train in environments with few constraints on resource use.50

Discussion

The analyses showed that institutions included in the “America’s Best Hospitals” rankings outperformed comparison institutions, both on clinical dimensions and on the limited reported attributes linked to the learning environment for resident physicians. However, the rankings have three limitations that may reduce their utility for judgments about clinical or educational quality. First, ranked institutions’ greater renown may allow them to be more selective in recruiting residents, so that better educational outcomes result from this selectivity rather than from a superior learning environment provided by the residency programs in these institutions. Second, the peer judgments used to determine reputation scores may be based on the prestige of the institutions in which the services reside, suggesting that the specialty rankings and their reaggregation into an institutional Honor Roll may be confounded. Finally, the rankings are limited to specialties that emphasize specialized services over prevention and health maintenance, and the value placed on cutting-edge technology may encourage hospitals to invest in such innovations, contributing to overuse and added burden on the health care system.

One limitation of this systematic review is the lack of primary studies in many areas relating to the quality measures underlying the rankings. The available literature allows only a tantalizing glimpse at the relationship between improved clinical and educational outcomes, and it does not truly answer the question of how they coexist and interact in teaching institutions. Research is needed to more fully explore the attributes of quality pertaining to the structure and process of care and how these affect outcomes. To enhance the usefulness of the rankings in assessing clinical and educational excellence, the rankings would need to incorporate additional valid quality indicators. Possible indicators for which some information is presently available include factors affecting mortality, such as medical errors,51,52 and information on efforts to design safe systems of care.53–55 From an educational perspective, the rankings do not include information on teamwork, reflective practice, or efforts to use data to improve clinical quality—concepts considered vital to the education of physicians and other health professionals.56,57

As they are currently constituted and used, the rankings permit only a partial view of the relationship between clinical and educational quality, and they do not answer the question of how the two coexist and interact in teaching settings. The rankings’ most serious limitation for the purpose of improving clinical and educational quality in the United States is that the number of ranked programs and institutions is too small to affect care or the professional development of physicians broadly—unless these institutions perform an added role as sites for the development, study, and dissemination of best practices. That this may already be occurring is shown by articles on quality improvement work originating from several Honor Roll institutions, relating both to clinical performance58–61 and to enhancing the learning environment for physicians and other health professionals.62–64 Expanding this work to a larger number of institutions and to the education of health professionals is likely to have a more profound effect on health care quality for Americans than would efforts to develop the ideal quality ranking.

It is encouraging for programs and institutions wishing to improve their performance that the dearth of comparative information need not be an impediment to using available data to improve clinical care and education, either independently or in collaboration with partners. Moving beyond these efforts to a national dataset on clinical and educational quality will require research to refine measures of educational achievement and of performance in practice, and to expand the small body of evidence suggesting that institutions with a better care environment produce graduates who are superior on measures of competence for practice. Such work would also realize the Outcome Project’s aim of finding ways to effectively link quality in resident education and practice.

References

1 Batalden P, Leach D, Swing S, Dreyfus H, Dreyfus S. General competencies and accreditation in graduate medical education. Health Aff (Millwood). 2002;21:103–111.

2 ACGME Outcome Project. Timeline—working guidelines. Available at: (http://www.acgme.org/outcome/project/timeline/TIMELINE_index_frame.htm). Accessed October 7, 2008.

3 Accreditation Council for Graduate Medical Education. Understanding the Difference Between Accreditation, Licensure and Certification. Available at: (http://www.acgme.org/acWebsite/RRC_140/140_20UnderstandingtheDifference.pdf). Accessed October 7, 2008.

4 Mitchell M, Srinivasan M, West DC, et al. Factors affecting resident performance: Development of a theoretical model and a focused literature review. Acad Med. 2005;80:376–389.

5 Borowitz SM, Saulsbury FT, Wilson WG. Information collected during the residency match process does not predict clinical performance. Arch Pediatr Adolesc Med. 2000;154:256–260.

6 Boyse TD, Patterson SK, Cohan RH, et al. Does medical school performance predict radiology resident performance? Acad Radiol. 2002;9:437–445.

7 Clarke M. Quantifying quality: What can the U.S. News and World Report rankings tell us about the quality of higher education? Educ Policy Anal Arch. March 2002;10(16). Available at: (http://epaa.asu.edu/epaa/v10n16). Accessed October 7, 2008.

8 Chen Y, Xie J. Third-party product review and firm marketing strategy. Mark Sci. 2005;24:218–240.

9 Green J, Wintfeld N, Krasner M, Wells C. In search of America’s best hospitals. The promise and reality of quality assessment. JAMA. 1997;277:1152–1155.

10 McGaghie WC, Thompson JA. America’s best medical schools: A critique of the U.S. News & World Report rankings. Acad Med. 2001;76:985–992.

11 Schneider EC, Epstein AM. Use of public performance reports. JAMA. 1998;279:1638–1642.

12 Hannan EL, Stone CC, Biddle TL, DeBuono BA. Public release of cardiac surgery outcomes data in New York: What do New York state cardiologists think of it? Am Heart J. 1997;134:1120–1128.

13 McFarlane E, Murphy J, Olmsted MG, Drozd EM, Hill C. America’s Best Hospitals. 2007 Methodology. Research Triangle Park, NC: RTI International; July 2007.

14 Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q. 1966;44(suppl):166–206.

15 U.S. News & World Report. Best Children’s Hospitals: General Pediatrics. Available at: (http://health.usnews.com/usnews/health/best-hospitals/search.php?spec=ihqpeds&). Accessed October 7, 2008.

16 Wang OJ, Wang Y, Lichtman JH, et al. “America’s Best Hospitals” in the treatment of acute myocardial infarction. Arch Intern Med. 2007;167:1345–1351.

17 Chen J, Radford MJ, Wang Y, Marciniak TA, Krumholz HM. Do “America’s Best Hospitals” perform better for acute myocardial infarction? N Engl J Med. 1999;340:286–292.

18 Williams SC, Koss RG, Morton DJ, Loeb JM. Performance of top-ranked heart care hospitals on evidence-based process measures. Circulation. 2006;114:558–564.

19 Flood AB, Scott WR, Ewy W. Does practice make perfect? Part I: The relation between hospital volume and outcomes for selected diagnostic categories. Med Care. 1984;22:98–114.

20 Birkmeyer JD, Siewers AE, Finlayson EV, et al. Hospital volume and surgical mortality in the United States. N Engl J Med. 2002;346:1128–1137.

21 Allareddy V, Allareddy V, Konety BR. Specificity of procedure volume and in-hospital mortality association. Ann Surg. 2007;246:135–139.

22 Barker FG 2nd, Amin-Hanjani S, Butler WE, Ogilvy CS, Carter BS. In-hospital mortality and morbidity after surgical treatment of unruptured intracranial aneurysms in the United States, 1996–2000: The effect of hospital and surgeon volume. Neurosurgery. 2003;52:995–1007.

23 Shackley P, Slack R, Booth A, Michaels J. Is there a positive volume–outcome relationship in peripheral vascular surgery? Results of a systematic review. Eur J Vasc Endovasc Surg. 2000;20:326–335.

24 Halm EA, Lee C, Chassin MR. Is volume related to outcome in health care? A systematic review and methodologic critique of the literature. Ann Intern Med. 2002;137:511–520.

25 Peterson ED, Coombs LP, DeLong ER, Haan CK, Ferguson TB. Procedural volume as a marker of quality for CABG surgery. JAMA. 2004;291:195–201.

26 Epstein AJ, Rathore SS, Krumholz HM, Volpp KG. Volume-based referral for cardiovascular procedures in the United States: A cross-sectional regression analysis. BMC Health Serv Res. 2005;5:42.

27 Rogowski JA, Horbar JD, Staiger DO, et al. Indirect vs. direct hospital quality indicators for very low-birth-weight infants. JAMA. 2004;291:202–209.

28 Begg CB, Riedel ER, Bach PB, et al. Variations in morbidity after radical prostatectomy. N Engl J Med. 2002;346:1138–1144.

29 Schrag D, Panageas KS, Riedel E, et al. Hospital and surgeon procedure volume as predictors of outcome following rectal cancer resection. Ann Surg. 2002;236:583–592.

30 Taylor DH Jr, Whellan DJ, Sloan FA. Effects of admission to a teaching hospital on the cost and quality of care for Medicare beneficiaries. N Engl J Med. 1999;340:293–299.

31 Rosenthal GE, Harper DL, Quinn LM, Cooper GS. Severity-adjusted mortality and length of stay in teaching and nonteaching hospitals. Results of a regional study. JAMA. 1997;278:485–490.

32 Polanczyk CA, Lane A, Coburn M, et al. Hospital outcomes in major teaching, minor teaching, and nonteaching hospitals in New York State. Am J Med. 2002;112:255–261.

33 Zimmerman JE, Shortell SM, Knaus WA, et al. Value and cost of teaching hospitals: A prospective, multicenter, inception cohort study. Crit Care Med. 1993;21:1432–1442.

34 Landon BE, Normand SL, Lessler A, et al. Quality of care for the treatment of acute medical conditions in US hospitals. Arch Intern Med. 2006;166:2511–2517.

35 Sampalis JS, Lavoie A, Boukas S, et al. Trauma center designation: Initial impact on trauma-related mortality. J Trauma. 1995;39:232–237.

36 Aiken LH, Smith HL, Lake ET. Lower Medicare mortality among a set of hospitals known for good nursing care. Med Care. 1994;32:771–787.

37 Birkmeyer NJ, Goodney PP, Stukel TA, Hillner BE, Birkmeyer JD. Do cancer centers designated by the National Cancer Institute have better surgical outcomes? Cancer. 2005;103:435–441.

38 Demetriades D, Martin M, Salim A, et al. Relationship between American College of Surgeons trauma center designation and mortality in patients with severe trauma (injury severity score >15). J Am Coll Surg. 2006;202:212–215.

39 Kane RL, Shamliyan TA, Mueller C, Duval S, Wilt TJ. The association of registered nurse staffing levels and patient outcomes: Systematic review and meta-analysis. Med Care. 2007;45:1195–1204.

40 Needleman J, Buerhaus P, Mattke S, Stewart M, Zelevinsky K. Nurse-staffing levels and quality of care in hospitals. N Engl J Med. 2002;346:1415–1422.

41 Asch SM, Kerr EA, Keesey J, et al. Who is at greatest risk for receiving poor-quality health care? N Engl J Med. 2006;354:1147–1156.

42 Atkins D, Fink K, Slutsky J; Agency for Healthcare Research and Quality; North American Evidence-based Practice Centers. Better information for better health care: The Evidence-based Practice Center program and the Agency for Healthcare Research and Quality. Ann Intern Med. 2005;142:1035–1041.

43 Lohr KN. Rating the strength of scientific evidence: Relevance for quality improvement programs. Int J Qual Health Care. 2004;16:9–18.

44 Halasyamani LK, Davis MM. Conflicting measures of hospital quality: Ratings from “Hospital Compare” versus “Best Hospitals.” J Hosp Med. 2007;2:128–134.

45 University Healthsystem Consortium. Recognized as Top Performers in the 2007 UHC Quality and Accountability Ranking. Available at: (http://public.uhc.edu/publicweb/About/Resources/QATopPerf_press_Oct07.pdf). Accessed October 7, 2008.

46 Sheskin DJ. Handbook of Parametric and Nonparametric Statistical Procedures. 3rd ed. Boca Raton, Fla: Chapman & Hall/CRC; 2004.

47 Esserman L, Cowley H, Eberle C, et al. Improving the accuracy of mammography: Volume and outcome relationships. J Natl Cancer Inst. 2002;94:369–375.

48 Simpson SA, Gilhooly KJ. Diagnostic thinking processes: Evidence from a constructive interaction study of electrocardiogram (ECG) interpretation. Appl Cogn Psychol. 1997;11:543–554.

49 Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004;79(10 suppl):S70–S81.

50 Fisher ES. The paradox of plenty: Implications for performance measurement and pay for performance. Manag Care. 2006;15(10 suppl 8):3–8.

51 Singh H, Thomas EJ, Petersen LA, Studdert DM. Medical errors involving trainees: A study of closed malpractice claims from 5 insurers. Arch Intern Med. 2007;167:2030–2036.

52 Singh H, Naik AD, Rao R, Petersen LA. Reducing diagnostic errors through effective communication: Harnessing the power of information technology. J Gen Intern Med. 2008;23:489–494.

53 Glance LG, Osler TM, Mukamel DB, Dick AW. Effect of complications on mortality after coronary artery bypass grafting surgery: Evidence from New York State. J Thorac Cardiovasc Surg. 2007;134:53–58.

54 Silber JH, Rosenbaum PR, Williams SV, Ross RN, Schwartz JS. The relationship between choice of outcome measure and hospital rank in general surgical procedures: Implications for quality assessment. Int J Qual Health Care. 1997;9:193–200.

55 Karsh BT, Holden RJ, Alper SJ, Or CK. A human factors engineering paradigm for patient safety: Designing to support the performance of the healthcare professional. Qual Saf Health Care. 2006;15(suppl 1):i59–i65.

56 Davies HTO, Nutley SM, Mannion R. Organisational quality culture and quality of health care. Qual Health Care. 2000;9:111–119.

57 Galbraith RM, Holtman MC, Clyman SG. Use of assessment to reinforce patient safety as a habit. Qual Saf Health Care. 2006;15(suppl 1):i30–i33.

58 Berenholtz SM, Pronovost PJ, Lipsett PA, et al. Eliminating catheter-related bloodstream infections in the intensive care unit. Crit Care Med. 2004;32:2014–2020.

59 Johnson T, Currie G, Keill P, et al. NewYork-Presbyterian Hospital: Translating innovation into practice. Jt Comm J Qual Patient Saf. 2005;31:554–560.

60 Pronovost PJ, Berenholtz SM, Goeschel CA, et al. Creating high reliability in health care organizations. Health Serv Res. 2006;41:1599–1617.

61 Pronovost PJ, King J, Holzmueller CG, et al. A web-based tool for the Comprehensive Unit-based Safety Program (CUSP). Jt Comm J Qual Patient Saf. 2006;32:119–129.

62 Lypson ML, Frohna JG, Gruppen LD, Woolliscroft JO. Assessing residents’ competencies at baseline: Identifying the gaps. Acad Med. 2004;79:564–570.

63 Gordon JA, Wilkerson WM, Shaffer DW, Armstrong EG. “Practicing” medicine without risk: Students’ and educators’ responses to high-fidelity patient simulation. Acad Med. 2001;76:469–472.

64 Afessa B, Kennedy CC, Klarich KW, et al. Introduction of a 14-hour work shift model for housestaff in the medical ICU. Chest. 2005;128:3910–3915.

© 2009 Association of American Medical Colleges
