Academic Medicine: June 2012 - Volume 87 - Issue 6
doi: 10.1097/ACM.0b013e318253676a
Institutional Issues

Contemporary Performance of U.S. Teaching and Nonteaching Hospitals

Shahian, David M. MD; Nordberg, Paul MS; Meyer, Gregg S. MD, MSc; Blanchfield, Bonnie B. CPA, ScD; Mort, Elizabeth A. MD, MPH; Torchiana, David F. MD; Normand, Sharon-Lise T. PhD


Author Information

Dr. Shahian is professor of surgery, Harvard Medical School, and associate medical director, Center for Quality and Safety, Massachusetts General Hospital, Boston, Massachusetts.

Mr. Nordberg is senior consultant for performance improvement, Center for Quality and Safety, Massachusetts General Hospital, Boston, Massachusetts.

Dr. Meyer is senior vice president, Center for Quality and Safety, Massachusetts General Hospital, and associate professor of medicine, Harvard Medical School, Boston, Massachusetts.

Dr. Blanchfield is senior research scientist, Institute for Technology Assessment and Massachusetts General Hospital Physician Organization, Massachusetts General Hospital, Boston, Massachusetts.

Dr. Mort is vice president, Center for Quality and Safety, Massachusetts General Hospital, and instructor in medicine, Harvard Medical School, Boston, Massachusetts.

Dr. Torchiana is president, Massachusetts General Hospital Physician Organization, Massachusetts General Hospital, and associate professor of surgery, Harvard Medical School, Boston, Massachusetts.

Dr. Normand is professor of health care policy, Harvard Medical School, and professor of biostatistics, Harvard School of Public Health, Boston, Massachusetts.

Correspondence should be addressed to Dr. Shahian, Center for Quality and Safety, Massachusetts General Hospital, 55 Fruit St., Boston, MA 02114; telephone: (617) 643-4335; fax: (617) 726-4304; e-mail: dshahian@partners.org.

Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A88 and http://links.lww.com/ACADMED/A89.


Abstract

Purpose: To compare the performance of U.S. teaching and nonteaching hospitals using a portfolio of contemporary, publicly reported metrics.

Method: The authors classified acute care general hospitals filing a Medicare Institutional Cost Report according to teaching intensity: nonteaching, teaching, or Council of Teaching Hospitals member. They compared aggregate results across categories for Hospital Compare process compliance, mortality, and readmission rates (acute myocardial infarction [AMI], heart failure, pneumonia); Surgical Care Improvement Project (SCIP) performance; compliance with Leapfrog standards; patient experience; patient services and key technologies; safety (computerized physician order entry, intensive care unit staffing, National Quality Forum safe practices, hospital-acquired conditions); and cost/resource utilization (Medicare-adjusted expense per case; Leapfrog efficiency and resource use standards).

Results: Availability of patient services and advanced technologies was associated with teaching intensity (P < .0001), as were most hospital safety metrics. Teaching intensity was favorably associated with SCIP performance, AMI and heart failure process scores, and mortality (P < .0001). It was unfavorably associated with higher AMI and pneumonia readmission rates (P < .0001) and lower scores for individual patient satisfaction measures. Costs per case were similar (P = .4194) across hospital categories after correction for federally allowed adjustments (case mix, wages, and low-income patient care).

Conclusions: Teaching hospitals offer advanced clinical capabilities, educate the next generation of providers, care for disadvantaged urban populations, and are leaders in health care research and innovation. However, many stakeholders may be unaware of an additional value—relatively higher quality and safety in many areas, with similar adjusted costs.

The role of teaching hospitals in the U.S. health care system has long been the subject of study and debate.1,2 Although the research and educational contributions of such institutions are self-evident, some have questioned whether their overall performance and value to our health care system justify their higher unadjusted costs.3 Current health care expenditures are unsustainable, and policy makers increasingly stress that referral and reimbursement strategies should be linked to quality. In this context, many acknowledge the unique capability of major teaching hospitals to care for the most complex medical and surgical conditions, but their performance superiority for treating more common conditions is less widely acknowledged.4,5 Numerous studies have addressed these issues, but most have focused on single end points such as mortality and were based on data from the 1980s and 1990s. Since then, our health care system has changed in many ways, including an increasing emphasis on performance measurement and public reporting.

We evaluated U.S. hospital performance data using a contemporary, broad portfolio of publicly reported quality metrics, many of which were unavailable for prior studies. Our goal was to assess whether there were significant differences in various domains of performance across the spectrum of hospital teaching intensity (Council of Teaching Hospitals [COTH] members, non-COTH teaching hospitals, and nonteaching hospitals).


Method

Unit of analysis

We defined a hospital as an organization with a unique Medicare identification number filing a Medicare Institutional Cost Report (ICR) and identifying itself as a short-term acute care hospital not under federal control. We linked multiple outcomes to each hospital using various data sources.

Data sources

We selected measures for our analysis based on their public availability and widespread use by providers and consumers. No one performance rating system is perfect, and each exhibits varying degrees of construct validity and reliability. Furthermore, many of the original data that we aggregated are derived from administrative sources, with the inherent limitations of such data, including those of risk adjustment.

We used financial and statistical data from the Medicare ICRs6 from fiscal year (FY) 2008 (using the Centers for Medicare and Medicaid Services [CMS] definition of October 1 to September 30) or, in a few instances, from the most recent year available at the time of our analyses (57 from FY 2006; 833 from FY 2007; 3,913 from FY 2008; and 6 from FY 2009). We also used CMS hospital-specific average case mix indices7 to adjust for variations in the severity and complexity of patients across hospitals and the resulting intensity of services required to treat those patients. In addition, we used geographical wage indices from the Medicare Payment Advisory Commission (MedPAC)8 to standardize wages across hospitals for regional differences in labor costs.

We compiled data on CMS hospital quality indicators from the Hospital Compare database, as of July 2009.8 These data included process-of-care information and Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) patient experience reports from January to December 2008, and outcomes measures from July 2005 to June 2008. From 2008 results prepared and published by the Leapfrog Group, we extracted relevant data on quality of care, resource allocation, and efficiency.9 Finally, we obtained data from the 2007 American Hospital Association (AHA) annual survey that characterized hospitals’ clinical technologies, patient services, and electronic health record (EHR) implementation. We categorized these data using the Key Technologies and Patient Services classification system developed by U.S. News & World Report (USNWR) and RTI International for their published hospital rankings.10

Categorization of teaching intensity

We categorized the hospitals in our analyses by teaching intensity using a common classification scheme11,12: COTH members (often referred to as major teaching hospitals or academic medical centers), non-COTH teaching hospitals (self-reported teaching status on Medicare ICRs, often referred to as minor teaching hospitals), and nonteaching hospitals.

Others have categorized hospitals by their ratio of interns and residents to beds (IRB).13 As a sensitivity analysis, we explored the differences between results derived from our classification scheme and this alternative approach by selectively reanalyzing some of our process measure data using IRB ratios of >0.25 for major teaching hospitals, 0.10 to 0.25 for minor teaching hospitals, and <0.10 for nonteaching hospitals.
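
To make the alternative classification concrete, the following minimal sketch (in Python; the function name and its use are illustrative assumptions, not code from the study) maps an IRB ratio onto the three sensitivity-analysis categories described above:

def classify_by_irb(irb_ratio: float) -> str:
    # Assign a teaching-intensity category from the ratio of interns
    # and residents to beds (IRB), using the thresholds named above.
    if irb_ratio > 0.25:
        return "major teaching"    # IRB > 0.25
    if irb_ratio >= 0.10:
        return "minor teaching"    # 0.10 <= IRB <= 0.25
    return "nonteaching"           # IRB < 0.10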

Economic considerations

The structural framework for our resource utilization analyses parallels the Medicare prospective payment reimbursement model and the approach MedPAC has used in similar analyses.14 We first identified total inpatient costs for routine and ancillary services, as reported on the Medicare ICRs. We quantified the percentages of these total costs that are attributable to specific factors recognized in the MedPAC analyses,14,15 including those for case mix, patient transfers, outliers, geographical wage differences, and disproportionate share hospitals (DSHs), which care for indigent patients (see Supplemental Digital Appendix 1, http://links.lww.com/ACADMED/A88, for a complete description of our analysis).

For our subsequent analyses, we accepted MedPAC’s implicit policy assumption that the additional revenue provided by the Medicare payment system fairly reflects the additional costs to hospitals related to these factors and should thus be subsidized. We recognize that others may not agree with this approach, and we can neither validate nor disprove its reasonableness. However, we believe it is logical to expect that hospitals caring for a higher-intensity case mix population would incur higher costs. Similarly, hospitals located in areas with higher wage indices would have higher personnel costs. The higher costs associated with DSH patients may include both unmeasured severity of illness indicators (those not accounted for by case mix alone) and underpayment for services. Following MedPAC’s example, we also separated out the cost of interest on capital expenditures to isolate possibly irrelevant differences in borrowing expenses related to asset levels.
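
The following sketch illustrates the spirit of this standardization. It is a simplified, hypothetical Python rendering, not the study's actual computation: the parameter names and the labor-related share are assumptions, and the full analysis also handled transfers, outliers, DSH payments, and capital interest (see Supplemental Digital Appendix 1).

def standardized_cost_per_case(total_inpatient_cost: float,
                               discharges: int,
                               case_mix_index: float,
                               wage_index: float,
                               labor_share: float = 0.68) -> float:
    # Cost per discharge, before any adjustment.
    cost_per_case = total_inpatient_cost / discharges
    # Deflate only the labor-related portion of costs by the area wage index.
    wage_adjusted = cost_per_case / (labor_share * wage_index + (1 - labor_share))
    # Express the result per unit of case mix to remove severity/complexity differences.
    return wage_adjusted / case_mix_index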

To supplement these federal data, we also stratified the 2008 Leapfrog survey results by teaching intensity for a number of outcomes including resource use (length of stay and readmission) and care efficiency for acute myocardial infarction (AMI), percutaneous coronary interventions (PCI), pneumonia, and coronary artery bypass grafting surgery (CABG).

Measuring hospital performance

We first assessed structural measures thought to be associated with high-quality care. These included our adaptation of the USNWR Key Technologies and Patient Services categories as applied to hospitals that provided data to the 2007 AHA annual survey. These indicators encompassed some advanced technologies that are necessary regionally but not required at every hospital, as well as many patient services, such as palliative care, translation, and patient-controlled analgesia, which would apply to most acute care hospitals. We also incorporated structural measure information from the 2007 AHA EHR adoption survey, which categorizes the degree of EHR implementation, including computerized physician order entry (CPOE), decision support, and physician use of EHRs to order laboratory tests. From the American College of Surgeons, we determined hospital trauma center designation, which categorizes an emergency department’s capabilities into three levels (Level 1 being the most advanced). Finally, we obtained the results for other structural measures as reported in the 2008 Leapfrog Group survey, including procedural volume (a quality surrogate for some conditions and procedures), CPOE implementation, and intensive care unit (ICU) staffing (24/7 intensivist availability, on-site during daytime and by phone at night).

Next, we determined compliance with evidence-based processes of care for common medical conditions and surgical procedures using 2008 Hospital Compare and Leapfrog Group survey data. Similarly, we analyzed outcomes using Hospital Compare 30-day risk-adjusted mortality and readmission rates for AMI, heart failure, and pneumonia, and Leapfrog Group results for a number of common procedures, including abdominal aneurysm repair, aortic valve replacement, bariatric surgery, CABG, esophagectomy, pancreatectomy, and PCI.

We evaluated safety using structure, process, and outcomes indicators from the 2008 Leapfrog summary report, including survey data (CPOE implementation, ICU staffing, and compliance with 17 of 34 National Quality Forum safe practices), empirical data (hospital-acquired infections, injuries, and pressure ulcers), and policy implementation data (nonbilling for “never” events). We quantified educational activities using the Medicare ICRs (e.g., IRB ratios, direct and indirect medical education expenses). Research activity (e.g., grants, funding amounts, publications) was difficult to assess and compare because of the inconsistent attribution of research sponsorship (hospital versus medical school). Because of this methodological obstacle, and given that teaching hospitals are widely recognized as the leaders in medical research, we did not include research activity as a performance metric.

Finally, we measured patient satisfaction using the 2008 HCAHPS survey results.

Data linking methodology

We linked performance data from each source for each hospital through the use of Medicare identification numbers. The Leapfrog survey results came from a subset of 1,281 hospitals reporting to the Leapfrog Group in 2008 (26.6% of the total of 4,809 in this study; see Supplemental Digital Table 1, http://links.lww.com/ACADMED/A89). Using a variety of strategies, we were able to link 1,117 of the Leapfrog hospitals (23.2% of the total hospitals in this study) to other data sources.
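
As an illustration of this step, the sketch below (Python/pandas; file names and column names are hypothetical) joins per-hospital extracts on the Medicare provider identifier:

import pandas as pd

# Hypothetical per-hospital extracts keyed by Medicare provider number.
icr = pd.read_csv("medicare_icr.csv", dtype={"provider_id": str})
compare = pd.read_csv("hospital_compare.csv", dtype={"provider_id": str})
leapfrog = pd.read_csv("leapfrog_2008.csv", dtype={"provider_id": str})

# Left joins keep all ICR hospitals; Leapfrog fields remain missing for the
# roughly three-quarters of hospitals that did not report to Leapfrog.
linked = (icr.merge(compare, on="provider_id", how="left")
             .merge(leapfrog, on="provider_id", how="left"))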

Data analyses

When our data sources reported aggregate results only as percentages or similar scores, we weighted the analyses by volume when possible, as indicated in the table footnotes. We tested the significance of performance trends across levels of teaching intensity using the Mantel-Haenszel chi-square test for categorical variables16 and the Pearson correlation coefficient for continuous data, with hospital categories ordered 1, 2, 3. All statistical tests were two-tailed. We used SAS Version 9.1 (SAS Institute, Cary, North Carolina) for all statistical analyses.
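
For readers who wish to reproduce the trend test, the sketch below implements the Mantel-Haenszel chi-square test for linear trend across the three ordered categories, using the identity chi-square = (N - 1)r^2, where r is the Pearson correlation between the category score and the binary outcome.16 This is an illustrative Python rendering (the authors used SAS), and the example counts are invented.

from scipy.stats import chi2, pearsonr

def mh_trend_test(successes, totals, scores=(1, 2, 3)):
    # Mantel-Haenszel chi-square test for linear trend (1 df):
    # chi2 = (N - 1) * r**2, with r the Pearson correlation between
    # the ordered category score and the 0/1 outcome.
    x, y = [], []
    for score, s, n in zip(scores, successes, totals):
        x.extend([score] * n)
        y.extend([1] * s + [0] * (n - s))
    r, _ = pearsonr(x, y)
    stat = (len(x) - 1) * r ** 2
    return stat, chi2.sf(stat, df=1)

# Invented example: process-measure compliance counts in nonteaching,
# non-COTH teaching, and COTH hospitals, in that order.
stat, p = mh_trend_test(successes=[800, 450, 300], totals=[1000, 520, 330])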


Results

Hospital characteristics

Using data from FY 2006–2009 Medicare ICRs, we identified 4,809 eligible nonfederal acute care hospitals. In Supplemental Digital Table 1 (http://links.lww.com/ACADMED/A89), we summarize our data sources and include the number of hospitals that were available for each comparison. Although multiple data sources were available for many hospitals, the number of hospitals with information available from every major source (Medicare case mix indices, the Hospital Compare database, the Leapfrog database, and the AHA database) was small (229 of 4,809; 4.76%).

Compared with nonteaching hospitals, COTH and non-COTH teaching hospitals are larger and more often located in urban areas with higher wage indices (see Table 1). Virtually no teaching hospitals (COTH or non-COTH) are rural primary care or critical access hospitals, whereas almost a third of nonteaching hospitals are designated as such. Nonteaching hospitals are six times more likely than COTH members to be for-profit institutions. Both COTH and non-COTH teaching hospitals had higher average case mix indices, indicating that they care for more complex patients than nonteaching hospitals. Most Level 1 trauma centers and transplant centers are located in COTH teaching hospitals. Consistent with previous national analyses,17 the substantial local and regional economic impact of COTH hospitals is evident from their large average number of staff and total salaries.

Structural indicators of quality

The percentage of hospitals offering specific clinical services and technologies increased progressively from nonteaching to non-COTH teaching hospitals to COTH members (see Supplemental Digital Table 2, http://links.lww.com/ACADMED/A89). Although some of these capabilities are advanced and not required at every hospital, such as specialized diagnostic radiology and robotic surgery, others serve more basic needs including translation services, pain management, and end-of-life care. According to Leapfrog survey results, teaching hospitals, particularly COTH members, more often met ICU staffing standards and had implemented CPOE systems (see Supplemental Digital Table 3, http://links.lww.com/ACADMED/A89), and the 2007 AHA EHR adoption survey corroborated the latter findings (results available on request). Finally, COTH members most often met Leapfrog volume standards for selected surgical procedures.

Process and outcomes measures

Teaching hospitals outperformed nonteaching hospitals in most, but not all, publicly reported Hospital Compare metrics (see Table 2). For AMI, heart failure, and Surgical Care Improvement Project process measures, teaching status consistently correlated with higher process measure compliance, whereas for pneumonia the results were mixed. Teaching status (particularly COTH membership) was favorably associated with the volume-weighted averages of 30-day risk-standardized mortality but was unfavorably associated with higher readmission rates for AMI and pneumonia. Leapfrog Group scores for resource use (length of stay and readmissions) were variable across conditions and procedures, in some instances favoring teaching hospitals and, in others, nonteaching hospitals (see Supplemental Digital Table 3, http://links.lww.com/ACADMED/A89). However, compared with nonteaching hospitals, teaching hospitals were more often fully compliant with Leapfrog standards for six major surgical procedures (see Supplemental Digital Table 3, http://links.lww.com/ACADMED/A89).

Safety practices and adverse events

Teaching hospitals fully complied with recommended safety practices more often than nonteaching hospitals did, as indicated by the frequency of selected adverse events and by a subset of the National Quality Forum safe practices9 (process and structure) reported in the Leapfrog survey (see Supplemental Digital Table 3, http://links.lww.com/ACADMED/A89). The Leapfrog hospital-acquired infection standard was met more often by both types of teaching hospitals, but full compliance with the pressure ulcer reduction standard occurred less frequently at COTH-member hospitals.

Patient-reported experience

We found that nonteaching hospitals significantly outscored teaching hospitals in all HCAHPS domains except in overall willingness to recommend, for which our results did not demonstrate a significant trend across levels of teaching intensity (see Table 2).

Costs

Unadjusted cost of care per inpatient discharge was substantially higher for COTH members (see Table 1). However, after we adjusted cost for outlier cases, geographic wage differences, case mix differences, and the cost of caring for indigent and underserved populations, the differences between groups were statistically nonsignificant.

Alternative classification of teaching intensity

We reanalyzed select process measure data by classifying according to the IRB ratio (results available on request). We found that some hospitals shifted from the non-COTH teaching hospital category to the nonteaching hospital category and, infrequently, to the highest teaching intensity category (IRB > 0.25). Despite these shifts, we found that the directional trends in process measure scores were generally similar to those of our primary analyses.


Discussion

Advocates for teaching hospitals emphasize their contributions to medical education, research, and innovation as well as their unique social missions.1 They have the capability to care for the most complex patients,1,2,18,19 and, because they are typically located in urban environments, they provide a considerable proportion of the care to indigent and underserved populations.1,2,20,21 Teaching hospitals must maintain costly, advanced technologies to provide specialized patient services that not all hospitals can reproduce but that are necessary as a regional “standby” resource (e.g., for transplants, major burns, or catastrophic trauma).2 Also, teaching hospitals offer an increased breadth of clinician expertise that reflects the diversity and volume of the complex conditions that they treat and the procedures that they perform. For all these reasons, teaching hospitals, especially major academic centers, are typically the preferred referral centers for patients too complex or ill to be cared for at other institutions.

Many previous studies have demonstrated the superior performance of teaching hospitals on indicators such as short-, intermediate-, and long-term mortality,3,4,11,18,20,22–30 processes of care,11,28–33 and equitable treatment quality.34 In addition, teaching hospitals have achieved better results for some complex surgical procedures, such as esophagectomy, pancreatectomy, and lung resection,12,35 but not necessarily for more common, less complex procedures, such as hysterectomy.36 Teaching hospitals may be better equipped to “rescue” critically ill patients who experience life-threatening complications37–40 because of their structural advantages (e.g., ICU and monitoring capabilities, experienced staff, IT capacity, and on-site, off-hours care coverage).

Although these specialized capabilities are widely acknowledged, some have questioned whether teaching hospitals provide demonstrably better care for the majority of routine conditions and, consequently, whether their higher costs and reimbursement levels are justified.3,5 Some studies have shown only small or inconsistent differences between teaching and nonteaching hospitals in mortality,41–44 use of evidence-based processes,43,45,46 and the frequency of adverse events and complications.40,47 Some investigations even suggest that adverse events occur more frequently at urban teaching hospitals. However, these results may reflect unmeasured severity of illness and a greater number of surgical interventions at teaching hospitals, both of which are associated with an increased risk of complications.48–50 Brennan and colleagues51 found a higher overall rate of adverse events at teaching hospitals, yet they attributed a smaller proportion of those adverse events to negligence.

Our data affirm that teaching hospitals provide care for the indigent and underserved, for complex patients, and for those with severe conditions and injuries that transcend the capabilities of most hospitals. They have more often fully implemented important safety practices, such as Leapfrog ICU staffing and CPOE standards, the National Quality Forum safe practices included in the Leapfrog Group survey results, and Surgical Care Improvement Project measures reported by Hospital Compare.

According to our data, major teaching hospitals often provide a statistically significantly higher level of clinical performance, as assessed by more consistent adherence to evidence-based processes of care and lower mortality rates. These results pertain not just to the complex diagnoses and critically ill patients for whom major teaching hospitals might be expected to excel, but also to some of the more common conditions and procedures that are treated at most acute care hospitals.

Our results indicate that major teaching hospitals did not have uniformly better performance. For example, nonteaching hospitals had lower readmission rates for AMI and pneumonia according to Hospital Compare data, and they were more often fully compliant with Leapfrog resource use standards for AMI and pneumonia (shorter lengths of stay and lower readmission rates). These poorer results for teaching hospitals may reflect suboptimal coordination of care between academic referral centers and local primary care providers, and they afford an opportunity for improving interprovider communication and care transitions. However, patients who are referred to major teaching hospitals may also have unmeasured severity markers, such as transfer status52 or other subtle biases that are not always accounted for in risk models but that may impact the performance scores of teaching hospitals.

Our financial analyses confirmed that unadjusted costs are higher at teaching hospitals, especially at COTH members. However, these differences between teaching and nonteaching hospitals are attributable to excess costs of care that national policy recognizes as legitimately related to case mix index, geographic wage differentials, outlier costs, and care for low-income (DSH) patients.18 Because teaching hospitals are more commonly in high-cost-of-living urban settings and care for a higher proportion of disadvantaged and complex patients, their costs are necessarily higher than those of nonteaching hospitals.

Similar to the role of teaching hospitals in serving indigent urban populations, nonteaching hospitals often fulfill a parallel role for historically underserved rural patients, as providers of rural primary and critical access care. Analogous to the additional federal reimbursement payments for which urban teaching hospitals are often eligible, nonteaching critical access hospitals also receive special federal reimbursement payments (typically 101% of their costs).

Our data indicated that nonteaching hospitals consistently outscored teaching hospitals in most HCAHPS domains, except in overall willingness to recommend, for which our results did not demonstrate a consistent trend across categories of teaching intensity. Most53–55 but not all56 previous studies report similar findings. These observations may reflect the larger size and urban location of teaching hospitals; higher patient expectations for such hospitals; greater number of doctors (including trainees) involved with a patient’s care; and suboptimal patient care plan coordination between residents and attending physicians on some services. Previous studies show that patients at referral centers are often less satisfied with physician–patient continuity, care coordination, and communication.19 In our study, teaching hospitals’ lower scores on all individual patient experience measures suggest opportunities for improvement, including the study of factors associated with the superior scores at nonteaching and smaller teaching hospitals. However, the fact that patients are still likely to recommend large teaching hospitals suggests that current measures capture only some aspects of care that are important to patients. Future research may identify other characteristics of care at teaching hospitals, perhaps less tangible than those captured by current patient satisfaction metrics, which are apparently valued by patients.

Limitations

Our classification scheme represents only one of many potential ways to categorize teaching intensity. Although other methods have been described,13 our approach is generally consistent with previous work in this area.3,11,20 Although the use of IRB ratios may seem a more quantitative and precise way to characterize teaching intensity, we observed in separate analyses that this approach can sometimes produce confusing results, mainly because it is tied to Medicare reimbursement policies and caps. Our primary interest in this study was to characterize the academic environment of an institution, which may include the presence of trainees (full-time or rotating), medical school affiliation, or other close relationships with a major academic center (e.g., shared staff or common care practices). We believe that our method is at least as valid for this purpose as the IRB approach, which is based solely on the number of trainees and is used primarily to determine reimbursement.

We acknowledge that other hospital features, such as urban location and size, may confound the relationship of teaching status to hospital performance.

Some of our data are based on surveys, which have their own inherent caveats of selection and response bias. For example, Leapfrog respondents may represent a nonrandom subset of U.S. acute care hospitals that are particularly interested in quality and safety.57 Nonetheless, our Leapfrog analyses included a large number of both teaching and nonteaching hospitals, and external evidence suggests that voluntary reporting has not biased Leapfrog survey results.58

Our aggregate analyses describe only the average performance for each hospital type. Some nonteaching hospitals are exceptionally high performing, and some teaching hospitals are underperforming. Each type of hospital has its own strengths and serves a unique and critical role in its community.

Finally, many intangible attributes of both teaching and nonteaching hospitals cannot be quantified and directly compared, and thus we could not include them in our analyses.

Conclusions

Teaching hospitals, particularly major academic medical centers, offer unique and advanced clinical capabilities, train the next generation of health care providers, and are the major locus for medical research and innovation. However, the association of teaching intensity with performance on a broad portfolio of publicly reported metrics, including those related to more common conditions and procedures, has not been studied using contemporary data. We found that for many but not all quality and safety domains, performance was favorably associated with teaching intensity. Furthermore, when adjusted according to Medicare guidelines, costs were not significantly different across the spectrum of teaching intensity. These findings should be considered in any health reform discussions because they demonstrate a value of teaching hospitals that may not be apparent to all stakeholders—relatively higher performance in many areas, with similar adjusted costs.

Funding/Support: Internal funding, Massachusetts General Hospital Center for Quality and Safety.

Other disclosures: All authors of this report are employed by or associated with academic medical centers or medical schools.

Ethical approval: Expedited review (category 5) and approval by Partners/Massachusetts General Hospital institutional review board under Minimal Risk criterion.


References

1. Blumenthal D, Campbell EG, Weissman JS. The social missions of academic health centers. N Engl J Med. 1997;337:1550–1553

2. Commonwealth Fund Task Force on Academic Health Centers. Envisioning the Future of Academic Medical Centers. New York, NY: Commonwealth Fund; 2003

3. Ayanian JZ, Weissman JS. Teaching hospitals and quality of care: a review of the literature. Milbank Q. 2002;80:569–593, v

4. Ayanian JZ, Weissman JS, Chasan-Taber S, Epstein AM. Quality of care for two common illnesses in teaching and nonteaching hospitals. Health Aff (Millwood). 1998;17:194–205

5. Bombardieri M. Are the elite academic hospitals always a patient’s best choice? Boston Globe. December 28, 2008:A14

6. Centers for Medicare and Medicaid Services. Healthcare Cost Report Information System (HCRIS). http://www.cms.hhs.gov/CostReports/. Accessed February 22, 2012.

7. Centers for Medicare and Medicaid Services. FY 2009 Final Rule Case Mix Index. http://www.cms.hhs.gov/AcuteInpatientPPS/FFD/list.asp#TopOfPage. Accessed February 22, 2012.

8. Centers for Medicare and Medicaid Services. Hospital Compare Database. www.hospitalcompare.hhs.gov/. Accessed February 22, 2012.

9. Leapfrog Group. 2008 Leapfrog survey results by region. http://www.leapfroggroup.org/. Accessed July 6, 2009 [Access now requires additional payment].

10. McFarlane E, Murphy J, Olmsted MG, Drozd EM, Hill C. 2009 Methodology: “America’s Best Hospitals.” http://static.usnews.com/documents/health/2009-best-hospitals-methodology.pdf?s_cid=related-links:TOP. Accessed February 22, 2012.

11. Allison JJ, Kiefe CI, Weissman NW, et al. Relationship of hospital teaching status with quality of care and mortality for Medicare patients with acute MI. JAMA. 2000;284:1256–1262

12. Dimick JB, Cowan JA Jr, Colletti LM, Upchurch GR Jr. Hospital teaching status and outcomes of complex surgical procedures in the United States. Arch Surg. 2004;139:137–141

13. Volpp KG, Rosen AK, Rosenbaum PR, Romano PS, Even-Shoshan O, Wang Y, et al. Mortality among hospitalized Medicare beneficiaries in the first 2 years following ACGME resident duty hour reform. JAMA. 2007;298:975–983

14. Medicare Payment Advisory Commission. Report to the Congress: Medicare Payment Policy. Washington, DC: Medicare Payment Advisory Commission; 2009

15. Medicare Payment Advisory Commission. Additional technical information on constructing a compensation index from BLS data. In: Report to the Congress: Promoting Greater Efficiency in Medicare. Washington, DC: Medicare Payment Advisory Commission; 2007

16. Mantel N. Chi-square tests with one degree of freedom; extensions of the Mantel-Haenszel procedure. J Am Stat Assoc. 1963;58:690–700

17. Association of American Medical Colleges. The Economic Impact of AAMC-Member Medical Schools and Teaching Hospitals. http://ahsc-ntf.org/docs/AHSCs/Reports/The%20Economic%20Impact.pdf. Accessed February 22, 2012.

18. Taylor DH Jr, Whellan DJ, Sloan FA. Effects of admission to a teaching hospital on the cost and quality of care for Medicare beneficiaries. N Engl J Med. 1999;340:293–299

19. Kassirer JP. Hospitals, heal yourselves. N Engl J Med. 1999;340:309–310

20. Rosenthal GE, Harper DL, Quinn LM, Cooper GS. Severity-adjusted mortality and length of stay in teaching and nonteaching hospitals. Results of a regional study. JAMA. 1997;278:485–490

21. Werner RM, Goldman LE, Dudley RA. Comparison of change in quality of care between safety-net and non-safety-net hospitals. JAMA. 2008;299:2180–2187

22. Yuan Z, Cooper GS, Einstadter D, Cebul RD, Rimm AA. The association between hospital type and mortality and length of stay: a study of 16.9 million hospitalized Medicare beneficiaries. Med Care. 2000;38:231–245

23. Polanczyk CA, Lane A, Coburn M, Philbin EF, Dec GW, DiSalvo TG. Hospital outcomes in major teaching, minor teaching, and nonteaching hospitals in New York state. Am J Med. 2002;112:255–261

24. Hartz AJ, Krakauer H, Kuhn EM, et al. Hospital characteristics and mortality rates. N Engl J Med. 1989;321:1720–1725

25. Kahn KL, Pearson ML, Harrison ER, Desmond KA, Rogers WH, Rubenstein LV, et al. Health care for black and poor hospitalized Medicare patients. JAMA. 1994;271:1169–1174

26. Keeler EB, Rubenstein LV, Kahn KL, et al. Hospital characteristics and quality of care. JAMA. 1992;268:1709–1714

27. Kuhn EM, Hartz AJ, Krakauer H, Bailey RC, Rimm AA. The relationship of hospital ownership and teaching status to 30- and 180-day adjusted mortality rates. Med Care. 1994;32:1098–1108

28. Kupersmith J. Quality of care in teaching hospitals: a literature review. Acad Med. 2005;80:458–466

29. Chen J, Radford MJ, Wang Y, Marciniak TA, Krumholz HM. Do “America’s Best Hospitals” perform better for acute myocardial infarction? N Engl J Med. 1999;340:286–292

30. Gutierrez JC, Hurley JD, Housri N, Perez EA, Byrne MM, Koniaris LG. Are many community hospitals undertreating breast cancer? Lessons from 24,834 patients. Ann Surg. 2008;248:154–162

31. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals––the Hospital Quality Alliance program. N Engl J Med. 2005;353:265–274

32. Patel MR, Chen AY, Roe MT, et al. A comparison of acute coronary syndrome care at academic and nonacademic hospitals. Am J Med. 2007;120:40–46

33. Landon BE, Normand SL, Lessler A, et al. Quality of care for the treatment of acute medical conditions in US hospitals. Arch Intern Med. 2006;166:2511–2517

34. Goldman LE, Vittinghoff E, Dudley RA. Quality of care in hospitals with a high percent of Medicaid patients. Med Care. 2007;45:579–583

35. Meguid RA, Brooke BS, Chang DC, Sherwood JT, Brock MV, Yang SC. Are surgical outcomes for lung cancer resections improved at teaching hospitals? Ann Thorac Surg. 2008;85:1015–1024

36. Juillard C, Lashoher A, Sewell CA, Uddin S, Griffith JG, Chang DC. A national analysis of the relationship between hospital volume, academic center status, and surgical outcomes for abdominal hysterectomy done for leiomyoma. J Am Coll Surg. 2009;208:599–606

37. Silber JH, Rosenbaum PR, Schwartz JS, Ross RN, Williams SV. Evaluation of the complication rate as a measure of quality of care in coronary artery bypass graft surgery. JAMA. 1995;274:317–323

38. Silber JH, Rosenbaum PR, Romano PS, et al. Hospital teaching intensity, patient race, and surgical outcomes. Arch Surg. 2009;144:113–120

39. Silber JH, Williams SV, Krakauer H, Schwartz JS. Hospital and patient characteristics associated with death after surgery. A study of adverse occurrence and failure to rescue. Med Care. 1992;30:615–629

40. Thornlow DK, Stukenborg GJ. The association between hospital characteristics and rates of preventable complications and adverse events. Med Care. 2006;44:265–269

41. Papanikolaou PN, Christidi GD, Ioannidis JP. Patient outcomes with teaching versus nonteaching healthcare: a systematic review. PLoS Med. 2006;3:e341

42. Iezzoni LI. Major teaching hospitals defying Darwin. JAMA. 1997;278:520

43. Fonarow GC, Yancy CW, Heywood JT. Adherence to heart failure quality-of-care indicators in US hospitals: analysis of the ADHERE registry. Arch Intern Med. 2005;165:1469–1477

44. Whittle J, Lin CJ, Lave JR, et al. Relationship of provider characteristics to outcomes, process, and costs of care for community-acquired pneumonia. Med Care. 1998;36:977–987

45. Vogeli C, Kang R, Landrum MB, Hasnain-Wynia R, Weissman JS. Quality of care provided to individual patients in US hospitals: results from an analysis of national Hospital Quality Alliance data. Med Care. 2009;47:591–599

46. Knapp RM. Quality and safety performance in teaching hospitals. Am Surg. 2006;72:1051–1054

47. Vartak S, Ward MM, Vaughn TE. Do postoperative complications vary by hospital teaching status? Med Care. 2008;46:25–32

48. Romano PS, Geppert JJ, Davies S, Miller MR, Elixhauser A, McDonald KM. A national profile of patient safety in U.S. hospitals. Health Aff (Millwood). 2003;22:154–166

49. Miller MR, Elixhauser A, Zhan C, Meyer GS. Patient safety indicators: Using administrative data to identify potential patient safety concerns. Health Serv Res. 2001;36:110–132

50. Iezzoni LI, Daley J, Heeren T, et al. Using administrative data to screen hospitals for high complication rates. Inquiry. 1994;31:40–55

51. Brennan TA, Hebert LE, Laird NM, et al. Hospital characteristics associated with adverse events and substandard care. JAMA. 1991;265:3265–3269

52. Rosenberg AL, Hofer TP, Strachan C, Watts CM, Hayward RA. Accepting critically ill transfer patients: adverse effect on a referral center’s outcome and benchmark measures. Ann Intern Med. 2003;138:882–890

53. Sjetne IS, Veenstra M, Stavem K. The effect of hospital size and teaching status on patient experiences with hospital care: a multilevel analysis. Med Care. 2007;45:252–258

54. Finkelstein BS, Singh J, Silvers JB, Neuhauser D, Rosenthal GE. Patient and hospital characteristics associated with patient assessments of hospital obstetrical care. Med Care. 1998;36:AS68–AS78

55. Co JP, Ferris TG, Marino BL, Homer CJ, Perrin JM. Are hospital characteristics associated with parental views of pediatric inpatient care quality? Pediatrics. 2003;111:308–314

56. Jha AK, Orav EJ, Zheng J, Epstein AM. Patients’ perception of hospital care in the United States. N Engl J Med. 2008;359:1921–1931

57. Leapfrog Group. The Leapfrog Group Factsheet. http://www.leapfroggroup.org/about_us/leapfrog-factsheet. Accessed February 22, 2012.

58. Ghaferi AA, Osborne NH, Dimick JB. Does voluntary reporting bias hospital quality rankings? J Surg Res. 2009;161:190–194


© 2012 Association of American Medical Colleges
