Academic Medicine: April 2013 - Volume 88 - Issue 4
doi: 10.1097/ACM.0b013e3182858673
Research Reports

Does Admission to a Teaching Hospital Affect Acute Myocardial Infarction Survival?

Navathe, Amol S. MD, PhD; Silber, Jeffrey H. MD, PhD; Zhu, Jingsan MBA; Volpp, Kevin G. MD, PhD

Author Information

Dr. Navathe is clinical fellow, Harvard Medical School, Boston, Massachusetts, and adjunct fellow, Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, Pennsylvania.

Dr. Silber is professor of pediatrics and anesthesiology and critical care, Perelman School of Medicine, professor of health care management, Wharton School, University of Pennsylvania, and director, Center for Outcomes Research, Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania.

Mr. Zhu is assistant director of data analytics, Center for Health Incentives and Behavioral Economics, Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, Pennsylvania.

Dr. Volpp is professor of medicine, Perelman School of Medicine, professor of health care management, Wharton School, and director, Center for Health Incentives and Behavioral Economics, Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, Pennsylvania.

Correspondence should be addressed to Dr. Navathe, Brigham and Women’s Hospital and Harvard Medical School, 75 Francis St., Room PBB-B4, Boston, MA 02115; telephone: (617) 732-5500; e-mail: anavathe@partners.org.

Abstract

Purpose: Previous studies have found that teaching hospitals produce better acute myocardial infarction (AMI) outcomes than nonteaching hospitals. However, these analyses generally excluded patients transferred out of nonteaching hospitals and did not study outcomes by patient risk level. The objective of this study was to determine whether admission to a teaching hospital was associated with greater survival after accounting for patient transfers and patient severity.

Method: This observational study used logistic models to examine the association between hospital teaching status and 30-day mortality of AMI patients, adjusting for patient comorbidities and common time trends. The sample included 1,309,554 Medicare patients admitted from 1996 to 2004 to 3,761 acute care hospitals for AMI. The primary outcome was 30-day all-cause, all-location mortality.

Results: Mortality was slightly lower in minor teaching hospitals compared with nonteaching hospitals (odds ratio [OR] 0.97; 95% confidence interval [CI] 0.95–0.99) but not different between major teaching and nonteaching hospitals (OR 1.01; 95% CI 0.96–1.03). The odds of mortality in minor teaching hospitals decreased 4.2% relative to nonteaching hospitals during the seven-year period (OR from 0.98 to 0.94). There was no consistent pattern of association between teaching status and patient severity.

Conclusions: After correctly accounting for the ability of nonteaching hospitals to appropriately transfer patients in need of different care, there was no survival benefit on average for initial admission to a teaching hospital for AMI. Furthermore, higher-risk patients did not benefit from initial admission to teaching hospitals.

Admission to a teaching hospital is generally considered to confer a survival advantage compared with admission to a nonteaching hospital, based on previous studies demonstrating better outcomes for common, high-severity conditions in teaching hospitals.1–6 Specifically, for acute myocardial infarction (AMI), the literature has consistently shown survival advantages from admission to teaching hospitals.1,5,6

We reexamined the conventional wisdom that hospital teaching status, measured as the ratio of residents to hospital beds, is associated with lower mortality. Teaching status has been of interest as a marker for quality of care because it is a highly observable hospital characteristic; for example, affiliation with a university or medical school is often featured in hospital marketing.7 Furthermore, administrators, educators, and policy makers have a strong interest in ensuring that patient care is not compromised by the educational mission of teaching hospitals. We reexamined teaching intensity as a marker for better outcomes for two reasons: (1) methodological issues cast doubt on the validity of conclusions from prior studies and the resulting conventional wisdom,8,9 and (2) the degree to which previous findings hold true may have changed over time.

First, recent literature suggests that the association between teaching intensity and better patient outcomes may have been spuriously driven by exclusion of patients transferred between hospitals.8,9 These patients make up a select group who tend to be of lower severity than the average AMI patient.9,10 Because transferring patients to another hospital is more common among nonteaching than teaching hospitals, the exclusion of transfer patients may have influenced the selection of patients studied in nonteaching hospitals and could have led to biased results. Furthermore, recent studies establishing the Centers for Medicare and Medicaid Services (CMS) hospital ranking methodology and examining factors associated with better AMI outcomes have endorsed the inclusion of transfers and assigning of outcomes to index hospitals (i.e., the first hospital to which a patient is admitted) in such measures.9,11 We hypothesized that the inclusion of transferred patients and assignment to index hospitals would decrease the survival advantage conferred by admission to a teaching hospital.

Second, the association between teaching intensity and quality of care may have changed over time. In particular, rapid technological advances and their widespread adoption led to steady improvement in mortality rates for AMI patients, providing an opportunity for nonteaching hospitals to narrow the gap in quality of care with teaching hospitals.12,13 We hypothesized that the gap between teaching and nonteaching hospitals in AMI mortality would decrease over time.

Finally, we attempted to extend the literature by examining the degree to which outcomes varied in more versus less teaching-intensive hospitals for patients of differing severity. Through these analyses, we tested the hypothesis that the most ill patients would fare best at teaching hospitals.

As financial pressures on hospitals increase, including policy changes that affect teaching hospitals exclusively (e.g., resident duty hour restrictions), it is important to reexamine the relative quality of care at teaching hospitals.14,15 Furthermore, as technological advances have become more widespread, the previous drivers of differences in outcomes across hospital characteristics may have changed. Because teaching status is an easily observable characteristic that has been correlated with better outcomes in the past, we reexamined this issue with teaching status as a marker, consistent with prior literature, rather than investigating a causal relationship.

Method

Data

We used data from the Medicare Provider Analysis and Review (MEDPAR)16 files for the academic years 1996 to 2004 (July 1, 1996 to June 30, 2004). We used the sample from 1996 as out-of-sample data to enable risk stratification, and we performed outcomes analysis on the 1997–2004 data. MEDPAR contains data on each inpatient admission for all Medicare enrollees; includes demographic information, transfer status, principal diagnosis, and up to 10 secondary diagnoses (comorbidities) per admission; and is linked to data on deaths after hospital discharge. We chose AMI because, as an emergent condition, patient selection (the nonrandom matching of patients to hospitals based on unobservable characteristics of both patients and hospitals) is less likely to be a confounding factor than it may be for other conditions. We selected admissions for new-onset AMIs, identified by an International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9)17 principal discharge diagnosis code of 410.X1 (AMI). We selected 1996–2004 as the study period to correspond with periods from relevant previous studies1,2,4–6 and to include a time period during which AMI care and decisions to transfer patients to other hospitals for specific interventions were rapidly evolving into patterns of care that persist today.12,13,18 Data from the Medicare cost reports19 provided information on the number of residents and beds for each hospital.
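
For concreteness, the cohort definition can be sketched in code. The pandas fragment below is illustrative only; the DataFrame and column names (medpar, principal_dx) are hypothetical stand-ins, and the authors' processing was done in SAS.

```python
import pandas as pd

def select_new_onset_ami(medpar: pd.DataFrame) -> pd.DataFrame:
    """Keep admissions whose ICD-9 principal diagnosis is 410.X1, i.e.,
    five-character codes '410' + any fourth digit + '1' (new-onset AMI).
    'medpar' and 'principal_dx' are hypothetical names."""
    return medpar[medpar["principal_dx"].str.fullmatch(r"410\d1")]
```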

Study sample

Our initial patient sample for outcomes analysis included data from all 5,253 hospitals that had data for the years 1997–2004, and 2,720,818 admissions of Medicare patients who had a principal diagnosis of AMI (ICD-9 code 410.X1). For patients who had multiple AMI admissions during the study years, we randomly selected one admission for inclusion in the analysis, leading to the exclusion of 356,131 admissions and 17 hospitals.

Of the remaining patient sample, we excluded patients for the following reasons: age younger than 66 or older than 90 (n = 407,979 patients and 35 hospitals); enrolled in a health maintenance organization (n = 181,549 patients and 10 hospitals); admission dates after dates of death (n = 248 patients and 0 hospitals); transferred in from unknown index hospitals (n = 175,472 patients and 60 hospitals); “discharged alive in less than two days” (n = 24,506 patients and 7 hospitals); and hospitals that did not report data in all years from 1996 to 2004 (n = 62,795 patients and 1,335 hospitals). We excluded the patients discharged alive less than two days after the admission date because they were unlikely to represent true AMIs; the standard of care for AMIs included a length of stay beyond 48 hours, so these patients most likely represented coding errors. We excluded 28 hospitals (9,298 patients) because of excessive variation in the resident-to-bed (RB) ratio from year to year, and for an additional 110 hospitals we imputed RB ratio values for single-year outliers by averaging across other years.
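
The exclusion cascade above maps naturally to code. The following is a minimal sketch under assumed column names (patient_id, age, hmo_enrolled, admit_date, death_date, discharge_date, and so on), none of which are the authors' actual variables.

```python
import pandas as pd

def apply_exclusions(df: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """Apply the sample restrictions described above; all column names
    are hypothetical stand-ins for MEDPAR fields."""
    # keep one randomly selected admission per patient with multiple AMIs
    df = df.groupby("patient_id").sample(n=1, random_state=seed)
    df = df[df["age"].between(66, 90)]            # ages 66-90 only
    df = df[~df["hmo_enrolled"]]                  # drop HMO enrollees
    # drop records whose admission date falls after the date of death
    died = df["death_date"].notna()
    df = df[~died | (df["admit_date"] <= df["death_date"])]
    df = df[~df["transferred_from_unknown"]]      # unknown index hospital
    # likely coding errors: discharged alive in under two days
    los = (df["discharge_date"] - df["admit_date"]).dt.days
    return df[~((los < 2) & ~df["died_in_hospital"])]
```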

The final study sample for outcomes analysis included data on 1,309,554 admissions and 3,761 hospitals for the main study period July 1, 1997 to June 30, 2004. We separated a subsample of 193,286 patients from 3,671 hospitals in the period from July 1, 1996 to June 30, 1997 as out-of-sample data to compute a risk score used in the patient severity analysis. The University of Pennsylvania and Children’s Hospital of Philadelphia institutional review boards approved this study.

Measure of teaching intensity

We divided hospitals into three categories of teaching intensity based on their RB ratio, with a higher ratio of residents to beds reflecting greater teaching intensity, in accordance with the previous literature. Hospitals were categorized as “major teaching” if the RB ratio was greater than or equal to 0.25, “minor teaching” if the RB ratio was above 0 and below 0.25, and “nonteaching” if the RB ratio was 0 (i.e., no residents).2,20 We followed definitions from prior studies to enable direct comparison of results as well as to use standard definitions of categories of teaching intensity accepted in the literature.
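
Stated as code, the categorization is a simple cutoff rule; this sketch uses names of our own choosing.

```python
def teaching_category(rb_ratio: float) -> str:
    """Map a hospital's resident-to-bed (RB) ratio to the three
    teaching-intensity categories defined in the text."""
    if rb_ratio >= 0.25:
        return "major teaching"
    if rb_ratio > 0:
        return "minor teaching"   # 0 < RB ratio < 0.25
    return "nonteaching"          # RB ratio = 0
```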

Outcome measures and risk adjustment

The outcome variable was all-location mortality (as opposed to in-hospital mortality) within 30 days after hospital admission because it is less prone to bias from differences in the style of practice concerning length of stay.21 We followed the risk-adjustment strategy employed in several studies and controlled for patient severity using 27 Elixhauser comorbidities, including a six-month look back to enrich risk adjustment.22–30 This approach has outperformed alternatives and is validated for use with administrative data.31,32 We also included controls for age, gender, whether the patient had been admitted for a previous AMI in the prior six months, the total number of previous admissions in the prior six months, whether the patient had undergone either a coronary artery bypass graft surgery or percutaneous transluminal coronary angioplasty procedure, and anatomic location of AMI (anterior, inferior, subendocardial, other). We also calculated unadjusted models (no patient-level risk adjustment) for comparison.
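
The covariate list above can be written out as a regression specification. The sketch below uses statsmodels formula notation with hypothetical variable names (died_30d, elix_1 through elix_27, ami_site, and so on); the authors fit their models in SAS.

```python
# 27 Elixhauser comorbidity indicators, abbreviated here as elix_1..elix_27
elixhauser = " + ".join(f"elix_{i}" for i in range(1, 28))

risk_adjusted_formula = (
    "died_30d ~ C(teaching_status) + age + C(female)"
    " + prior_ami_6mo + n_admits_6mo + cabg_or_ptca"
    " + C(ami_site) + " + elixhauser  # anterior/inferior/subendocardial/other
)
unadjusted_formula = "died_30d ~ C(teaching_status)"  # no patient-level adjustment
```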

We created an index of patient severity—a “risk score”—to evaluate the interaction between teaching intensity and severity using a logistic patient risk model on out-of-sample data to avoid bias from a generated regressor.33–35
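
A sketch of that out-of-sample step, assuming the hypothetical formula and data names above: the risk model is estimated on the held-out 1996 year and only then used to score the 1997–2004 analysis sample.

```python
import statsmodels.formula.api as smf

def out_of_sample_risk_score(df_1996, df_main, risk_formula):
    """Fit the patient risk model on 1996 data only, then score the
    1997-2004 sample; the predictions serve as the 'risk score'."""
    risk_model = smf.logit(risk_formula, data=df_1996).fit(disp=False)
    return risk_model.predict(df_main)  # predicted 30-day mortality risk
```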

Patients transferred between hospitals

An important aspect of our analysis was our approach to patients transferred out of an index hospital, and we explored three ways in which transferred patients could be treated: (1) exclude transferred patients from the sample and therefore exclude their outcomes from both index and receiving hospitals,1–6 (2) include transferred patients and assign their outcomes to receiving hospitals, and (3) include transferred patients and assign their outcomes to index hospitals.9,11,23–30 In our primary analyses we followed the third method—all subsequent care and outcomes were considered the responsibility of the index hospital because the prompt initiation of appropriate care is a highly significant predictor of AMI outcomes and because subsequent clinical decisions would follow at least in part from these initial decisions.8,9,11,36–39
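
The three attribution rules can be stated compactly. This sketch uses hypothetical column names (transferred_out, index_hospital_id, receiving_hospital_id) and is not the authors' implementation.

```python
import pandas as pd

def attribute_outcomes(df: pd.DataFrame, method: str = "index") -> pd.DataFrame:
    """Return the analysis sample under one of the three transfer-handling
    rules; outcomes are credited to the 'attributed_hospital' column."""
    df = df.copy()
    if method == "exclude":    # rule 1: drop transferred patients entirely
        return df[~df["transferred_out"]]
    if method == "receiving":  # rule 2: credit the receiving hospital
        df["attributed_hospital"] = df["receiving_hospital_id"].where(
            df["transferred_out"], df["index_hospital_id"])
        return df
    # rule 3 (primary analysis): credit the index hospital of admission
    df["attributed_hospital"] = df["index_hospital_id"]
    return df
```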

Our rationale follows that of Krumholz and colleagues,11 who assigned transferred patients to the index hospital in developing the CMS 30-day mortality measure used in ranking hospitals. They note that doing so captures the component of quality reflected in deciding when and to which hospital to transfer a patient, with the quality of care at the receiving hospital being a consequence of the index hospital’s decision making.11 This approach was also endorsed and used by Kosseim and colleagues9 in finding that proper medical management and timely decisions to transfer patients contributed to better AMI outcomes.

Although much prior literature has explicitly aimed to evaluate teaching status as a marker for quality of care, previous studies that have examined the relationship between teaching status and mortality have generally excluded patients transferred to other hospitals1,5 or made no mention of transfer status.2–4,6 Thus, for comparison, we performed analyses using the first and second methods (excluding transferred patients from the sample and therefore excluding their outcomes from both index and receiving hospitals, and including transferred patients and assigning their outcomes to receiving hospitals).

Statistical analyses

We contrasted analyses by whether patients transferred out of hospitals were excluded from the analysis (which allows for comparability with previous studies), had their results assigned to the receiving hospital, or had their results assigned to the index hospital of admission (primary analysis). We then evaluated how the relationship between teaching intensity and mortality changed over time and examined how the relationship differed for patients of varying severity. These analyses tested the hypotheses that (1) including transferred patients would diminish differences in outcomes between teaching and nonteaching hospitals, (2) mortality rates for teaching and nonteaching hospitals would converge over time, and (3) teaching-hospital outcomes would be superior for sicker patients.

We used patient-level logistic regression to evaluate the association between mortality and teaching intensity. Logistic models controlled for patient characteristics, but not hospital characteristics other than teaching status, because our intent was to evaluate the use of teaching status as a marker for hospital outcomes. We performed a likelihood ratio chi-square test for divergent trends over time by comparing the overall fit of a model without any year controls with a model with interaction terms between each year and the RB ratio.
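
The trend test can be sketched as follows, again with hypothetical variable names: fit the model with and without year-by-RB-ratio interaction terms and refer twice the difference in log likelihoods to a chi-square distribution.

```python
from scipy import stats
import statsmodels.formula.api as smf

def lr_test_divergent_trends(df, base_formula):
    """Likelihood ratio chi-square test: does adding interaction terms
    between each year and the RB ratio significantly improve fit?"""
    restricted = smf.logit(base_formula, data=df).fit(disp=False)
    full = smf.logit(base_formula + " + C(year):rb_ratio", data=df).fit(disp=False)
    lr_stat = 2 * (full.llf - restricted.llf)
    dof = full.df_model - restricted.df_model
    return lr_stat, stats.chi2.sf(lr_stat, dof)
```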

Our final set of analyses evaluated how different levels of patient severity influenced the observed teaching status–mortality trends. We ran the logistic model on three groups defined by risk-score deciles: patients in the three highest deciles of risk formed the high-risk group, those in the middle four deciles the medium-risk group, and those in the bottom three deciles the low-risk group.
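
The decile grouping maps directly to code; a minimal sketch:

```python
import pandas as pd

def risk_group(scores: pd.Series) -> pd.Series:
    """Bottom three risk-score deciles = low risk, middle four = medium,
    top three = high, as defined in the text."""
    decile = pd.qcut(scores, 10, labels=False)  # decile codes 0..9
    return pd.cut(decile, bins=[-1, 2, 6, 9], labels=["low", "medium", "high"])
```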

Standard errors used the Huber–White correction to reflect possible dependence among patients clustered within hospitals.40,41 Analyses were performed with SAS version 9.1.3 (SAS Institute Inc., Cary, North Carolina), using the GENMOD procedure for logistic regressions.
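
The authors used SAS; a rough statsmodels analogue, offered only as a sketch, fits the logistic model with Huber-White standard errors clustered on a hypothetical hospital_id column:

```python
import statsmodels.formula.api as smf

def fit_clustered_logit(df, formula):
    """Logistic regression with Huber-White standard errors that allow
    for dependence among patients within the same hospital."""
    return smf.logit(formula, data=df).fit(
        disp=False,
        cov_type="cluster",
        cov_kwds={"groups": df["hospital_id"]},  # hypothetical column
    )
```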

Results

Nonteaching hospitals made up 70% (n = 2,633) of hospitals and treated 54% (n = 707,159) of the AMI patients in the sample when including patients who were transferred out of the index hospital (see Table 1). Unadjusted mortality within 30 days of hospital admission was highest for nonteaching hospitals at 18.6% and was lower for teaching hospitals, with a mortality rate of 17.3% for minor teaching hospitals and 17.5% for major teaching hospitals (23.0%, 18.8%, and 18.5%, respectively, when excluding patients transferred out). Unadjusted mortality was nearly identical between nonteaching, minor teaching, and major teaching hospitals for the patients who were transferred out to other hospitals.


Patients who were transferred out to a second hospital disproportionately came from nonteaching hospitals. Thirty-one percent (n = 218,512) of patients initially admitted to a nonteaching hospital were transferred out compared with 14% (n = 61,876) of minor teaching hospital patients and 10% (n = 14,405) of patients initially admitted to major teaching hospitals.

The percentage of patients with each type of AMI, the percentage with multiple comorbidities, and the average severity scores were very similar for hospitals of differing teaching intensity (see Table 1). Exclusion of transferred patients increased the average severity score among remaining patients, indicating lower average severity for transferred patients. Furthermore, the average severity scores for the transferred patients were significantly lower than the scores for those patients who were not transferred, regardless of the hospital’s teaching intensity category.

In adjusted analyses, we first replicated analyses performed in previous studies comparing odds of mortality in major and minor teaching hospitals with odds of mortality in nonteaching hospitals with patient-level risk adjustment.1,5,6,20 Excluding transferred patients, as done in the prior literature, resulted in strong associations between teaching status and lower mortality (see Table 2). Similarly, assigning outcomes from transferred patients to the receiving hospitals showed significantly lower odds of mortality in major and minor teaching hospitals versus nonteaching hospitals. When assigning transferred patients to the index hospital, however, there was little association between teaching status and lower mortality.


Figure 1 illustrates the pattern of association between teaching status and risk-adjusted AMI mortality over the years of the study. There were significant changes in the association between teaching intensity and mortality over time (P = .01), with the odds of mortality at minor teaching hospitals relative to the odds of mortality at nonteaching hospitals changing from 0.98 to 0.94 and gaining statistical significance over the seven-year period. The odds of mortality at major teaching hospitals were not statistically different from the odds of mortality at nonteaching hospitals, though the magnitude changed from 1.01 to 0.96—a 4.1% reduction. Furthermore, the odds of mortality at major teaching hospitals did not change significantly relative to the odds of mortality at minor teaching hospitals during the study period.


Adjusted analyses did not indicate a systematic pattern between patient risk scores and differences in mortality between teaching and nonteaching hospitals when including transferred patients and assigning their outcomes to the index hospital (see Table 3). Low-risk and medium-risk patients fared better at minor teaching hospitals versus nonteaching hospitals, but there was no estimated difference for high-risk patients admitted to minor teaching hospitals versus high-risk patients admitted to nonteaching hospitals. Patient outcomes did not vary by risk group in major teaching hospitals versus nonteaching hospitals.


Discussion and Conclusions

Using seven years of data on all Medicare patients admitted with AMI, we found that the previously observed associations between lower mortality and hospital teaching status were highly sensitive to the exclusion of patients transferred out of hospitals. We also found that these associations depend on the hospital to which patient outcomes are attributed when transferred patients are included in analyses. When we excluded transferred patients or assigned their outcomes to receiving hospitals, we observed a relationship between hospital teaching status and lower mortality that was consistent with the prior literature. However, when we included transferred patients and assigned their outcomes to the index hospital, the beneficial association between teaching intensity and patient outcomes diminished sharply for minor teaching hospitals and disappeared altogether for major teaching hospitals. Given the emerging consensus that patient outcomes should be assigned to the index hospital and that transfer patients should be included in these types of analyses,8,9,11 this result suggests that the relationship between teaching status and better outcomes may not be as robust as suggested by the prior literature on this topic.1–6

Previous studies of outcomes in AMI patients have suggested that the odds of mortality are 9% lower in minor teaching hospitals and 20% to 25% lower in major teaching hospitals than the odds of mortality in nonteaching hospitals.1,6 However, it is possible that the differences we observed in the association between teaching status and mortality based on how transfers are handled could extend to previous studies as well. None of the previous studies we considered on teaching status and mortality explicitly mentioned inclusion of transferred patients or the attribution of their outcomes to the index hospital of admission. The one study that may have included transferred patients (the text does not indicate this explicitly but suggests inclusion)6 found results qualitatively similar to ours, with mortality of coronary disease patients in major and minor teaching hospitals no different from the mortality of these patients in nonteaching hospitals (odds ratio [OR] 0.76, 95% confidence interval [CI] 0.55–1.07; OR 0.81, 95% CI 0.58–1.13, respectively).6 In contrast, in two studies that explicitly exclude patients transferred out of an index hospital, Allison and colleagues1 and Rosenthal and colleagues5 found better outcomes for teaching hospitals with magnitudes consistent with those we observed in our analyses that replicated the approach of excluding transferred patients. These findings are also consistent with earlier studies.3,4 Bradley and colleagues,8 in a study that included transferred patients and found little correlation between process measures and AMI mortality, suggested that the exclusion of transferred patients in the prior literature may underlie the contrasting results between their study and previous studies that had found significant correlations between mortality and teaching status. This suggests that the previously reported association between teaching status and lower mortality may be much smaller than had been reported—and may not even exist.

From a methodological standpoint, it seems most logical to include transferred patients and to assign their outcomes to the hospital of index admission9,11,23–30,42,43 because the quality and speed of the initial workup for AMI patients are major determinants of patient outcomes.37–39 AMI patients who are transferred to other hospitals compose a very select portion of the patient population—they must survive long enough to be transferred and are less sick on average.9,10 Excluding these transferred patients altogether leads to biased estimates: a disproportionate number of smaller, nonteaching hospitals appear to have higher mortality estimates. Assigning transferred patients’ outcomes to the receiving hospital is unappealing because the receiving hospital would be held accountable for care delivered at the index hospital, care which it neither initiated nor had control over. The alternative of assigning the outcomes to the index hospital is favored because although it introduces imprecision in measurement, the index hospital controls initiation of care, the timeliness of the diagnostic workup and treatments, and the decision to transfer the patient. This clinical judgment includes the timing of the transfer and the hospital that receives the transfer. Hence, the downstream care of the patient is determined, at least in part, by index-hospital medical decision making.

Because inclusion of transferred patients reduced differences in mortality between teaching and nonteaching hospitals, one reasonable conclusion is that the nonteaching hospitals successfully transferred patients who benefited from more advanced care when indicated, such that these hospitals produced similar outcomes to the teaching hospitals’ regardless of the underlying severity of patients. Although these results do not necessarily suggest that equivalent skills and facilities are available at nonteaching and teaching hospitals, they do suggest that the survival rates are similar regardless of where a patient was initially admitted.

Our longitudinal analysis suggested that AMI mortality improved slightly faster at teaching hospitals than at nonteaching hospitals, in contrast to our hypothesis. The only previous study to examine the degree to which outcomes changed over time in more versus less teaching-intensive hospitals found that the quality difference between teaching hospitals and small nonteaching hospitals narrowed during the early 1980s, with differences in risk-adjusted mortality rates reducing by 0.7%.4 This differs from the results we report here, but it may reflect differences in the relative rate of improvement in outcomes for teaching and nonteaching hospitals in different time periods.

Previous studies have not examined the relationship between patient severity and patient outcomes in hospitals of differing teaching intensity. Our findings do not support an association between patient severity and hospital teaching status in AMI outcomes. This suggests that nonteaching hospitals are able to handle high-risk patients as well as they handle lower-risk patients. Further work should be conducted to examine whether nonteaching hospitals fare as well with more severely ill patients for diseases besides AMI.

Our study has a number of limitations. First, although AMI patients tend to be admitted emergently to the nearest hospital, which makes measures of hospital outcomes less susceptible to self-selection bias than measures for elective conditions, there could be bias in either direction if our risk adjustment failed to account for unmeasured aspects of patient severity that were systematically different in teaching and nonteaching hospitals. Second, because teaching hospitals often engage in clinical research as a by-product of the medical care they provide, it is conceivable that teaching hospitals record comorbid conditions more carefully than do some nonteaching hospitals, possibly introducing bias into analyses that adjust for such conditions. Third, our study faces the limitations in risk adjustment associated with use of an administrative dataset, in that our risk-adjustment model can account only for patient characteristics represented by codes in billing data and excludes detailed clinical data available only in medical records. Fourth, to enable comparisons with previously published work, we limited our data collection period to 1996–2004. It is possible that the associations and trends we observed did not continue beyond our study period into more recent years; however, this is unlikely because recent studies have found that trends and general patterns of care continued into the years immediately following our study period.16,44 Finally, we used teaching intensity as a marker for all hospital characteristics found in teaching hospitals, so any observed associations reflect differences in a variety of technological capabilities and staffing that are inherent to being a teaching hospital, as opposed to teaching status per se.

In an era of increasing efforts toward public reporting of outcomes, the results of this study suggest that the teaching intensity of the hospital where patients are admitted may be less important to survival from AMI than had previously been thought, even when considering underlying patient severity. Understanding the degree to which the findings of this study apply to other conditions is important in determining the applicability of teaching intensity as a marker of better patient outcomes and in considering implications for further policies that may disproportionately affect teaching hospitals.

Acknowledgments: The authors wish to acknowledge Orit Even-Shoshan for administering the research program.

Funding/Support: This work was funded by grant R01 HL082637 from the National Heart, Lung, and Blood Institute.

Other disclosures: None.

Ethical approval: Approval was obtained from the University of Pennsylvania and Children’s Hospital of Philadelphia institutional review boards.

References

1. Allison JJ, Kiefe CI, Weissman NW, et al. Relationship of hospital teaching status with quality of care and mortality for Medicare patients with acute MI. JAMA. 2000;284:1256–1262

2. Ayanian JZ, Weissman JS, Chasan-Taber S, Epstein AM. Quality of care for two common illnesses in teaching and nonteaching hospitals. Health Aff (Millwood). 1998;17:194–205

3. Hartz AJ, Krakauer H, Kuhn EM, et al. Hospital characteristics and mortality rates. N Engl J Med. 1989;321:1720–1725

4. Keeler EB, Rubenstein LV, Kahn KL, et al. Hospital characteristics and quality of care. JAMA. 1992;268:1709–1714

5. Rosenthal GE, Harper DL, Quinn LM, Cooper GS. Severity-adjusted mortality and length of stay in teaching and nonteaching hospitals. Results of a regional study. JAMA. 1997;278:485–490

6. Taylor DH Jr, Whellan DJ, Sloan FA. Effects of admission to a teaching hospital on the cost and quality of care for Medicare beneficiaries. N Engl J Med. 1999;340:293–299

7. Langabeer JR, Napiewocki J. Competitive Business Strategy for Teaching Hospitals. Westport, Conn: Greenwood Publishing Group; 2000:115–120

8. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: Correlation among process measures and relationship with short-term mortality. JAMA. 2006;296:72–78

9. Kosseim M, Mayo NE, Scott S, et al. Ranking hospitals according to acute myocardial infarction mortality: Should transfers be included? Med Care. 2006;44:664–670

10. Mehta RH, Stalhandske EJ, McCargar PA, Ruane TJ, Eagle KA. Elderly patients at highest risk with acute myocardial infarction are more frequently transferred from community hospitals to tertiary centers: Reality or myth? Am Heart J. 1999;138(4 pt 1):688–695

11. Krumholz HM, Normand SL, Spertus JA, Shahian DM, Bradley EH. Measuring performance for treating heart attacks and heart failure: The case for outcomes measurement. Health Aff (Millwood). 2007;26:75–85

12. McNamara RL, Herrin J, Bradley EH, et al; NRMI Investigators. Hospital improvement in time to reperfusion in patients with acute myocardial infarction, 1999 to 2002. J Am Coll Cardiol. 2006;47:45–51

13. Rogers WJ, Canto JG, Lambrew CT, et al. Temporal trends in the treatment of over 1.5 million patients with myocardial infarction in the US from 1990 through 1999: The National Registry of Myocardial Infarction 1, 2 and 3. J Am Coll Cardiol. 2000;36:2056–2063

14. H.R. 3590. Patient Protection and Affordable Care Act. One Hundred Eleventh Congress of the United States of America; 2010

15. Focus on Health Reform: Summary of New Health Reform Law. Menlo Park, Calif: Henry J. Kaiser Family Foundation; 2010

16. Centers for Medicare and Medicaid Services. MEDPAR. http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/MedicareFeeforSvcPartsAB/MEDPAR.html. Accessed December 28, 2012

17. World Health Organization. International Classification of Diseases. http://www.who.int/classifications/icd/en/. Accessed December 28, 2012

18. Roe MT, Messenger JC, Weintraub WS, et al. Treatments, trends, and outcomes of acute myocardial infarction and percutaneous coronary intervention. J Am Coll Cardiol. 2010;56:254–263

19. Centers for Medicare and Medicaid Services. Cost reports. http://www.cms.gov/Research-Statistics-Data-and-Systems/Files-for-Order/CostReports/index.html?redirect=/costreports. Accessed December 28, 2012

20. Ayanian JZ, Weissman JS. Teaching hospitals and quality of care: A review of the literature. Milbank Q. 2002;80:569–593, v

21. Iezzoni LI. Risk Adjustment for Measuring Health Outcomes. Chicago, Ill: Health Administration Press; 2003

22. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36:8–27

23. Volpp KG, Rosen AK, Rosenbaum PR, et al. Mortality among hospitalized Medicare beneficiaries in the first 2 years following ACGME resident duty hour reform. JAMA. 2007;298:975–983

24. Volpp KG, Rosen AK, Rosenbaum PR, et al. Mortality among patients in VA hospitals in the first 2 years following ACGME resident duty hour reform. JAMA. 2007;298:984–992

25. Volpp KG, Rosen AK, Rosenbaum PR, et al. Did duty hour reform lead to better outcomes among the highest risk patients? J Gen Intern Med. 2009;24:1149–1155

26. Silber JH, Rosenbaum PR, Rosen AK, et al. Prolonged hospital stay and the resident duty hour rules of 2003. Med Care. 2009;47:1191–1200

27. Rosen AK, Loveland SA, Romano PS, et al. Effects of resident duty hour reform on surgical and procedural patient safety indicators among hospitalized Veterans Health Administration and Medicare patients. Med Care. 2009;47:723–731

28. Press MJ, Silber JH, Rosen AK, et al. The impact of resident duty hour reform on hospital readmission rates among Medicare beneficiaries. J Gen Intern Med. 2011;26:405–411

29. Navathe AS, Volpp KG, Konetzka RT, et al. A longitudinal analysis of the impact of hospital service line profitability on the likelihood of readmission. Med Care Res Rev. 2012;69:414–431

30. Navathe AS, Silber JH, Small DS, et al. Teaching hospital financial status and patient outcomes following ACGME resident duty hour reform [published online ahead of print August 2, 2012]. Health Serv Res. 2012. doi:10.1111/j.1475-6773.2012.01453.x

31. Stukenborg GJ, Wagner DP, Connors AF Jr. Comparison of the performance of two comorbidity measures, with and without information from prior hospitalizations. Med Care. 2001;39:727–739

32. Southern DA, Quan H, Ghali WA. Comparison of the Elixhauser and Charlson/Deyo methods of comorbidity measurement in administrative data. Med Care. 2004;42:355–360

33. Pagan A. Econometric issues in the analysis of regressions with generated regressors. Int Econ Rev (Philadelphia). 1984;25:221–247

34. Wooldridge JM. Econometric Analysis of Cross Section and Panel Data. Cambridge, Mass: MIT Press; 2002

35. Hansen BB. The prognostic analogue of the propensity score. Biometrika. 2008;95:481–488

36. Krumholz HM, Chen J, Wang Y, Radford MJ, Chen YT, Marciniak TA. Comparing AMI mortality among hospitals in patients 65 years of age and older: Evaluating methods of risk adjustment. Circulation. 1999;99:2986–2992

37. Canto JG, Shlipak MG, Rogers WJ, et al. Prevalence, clinical characteristics, and mortality among patients with myocardial infarction presenting without chest pain. JAMA. 2000;283:3223–3229

38. Ryan TJ, Anderson JL, Antman EM, et al. ACC/AHA guidelines for the management of patients with acute myocardial infarction. A report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Committee on Management of Acute Myocardial Infarction). J Am Coll Cardiol. 1996;28:1328–1428

39. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113:1683–1692

40. Huber PJ. The behavior of maximum likelihood estimates under nonstandard conditions. In: Le Cam LM, Neyman J, eds. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability. Berkeley, Calif: University of California Press; 1967:221–233

41. White H. Maximum likelihood estimation of misspecified models. Econometrica. 1982;50:1–25

42. Romano PS, Luft HS, Rainwater J. Third Report of the California Hospital Outcomes Project (1997): Report on Heart Attack, 1991–1993. Volume 2. Reports for the California Office of Statewide Health Planning and Development, Center for Healthcare Policy and Research, UC Davis. Sacramento, Calif: Office of Statewide Health Planning and Development; 1997

43. Tu JV, Austin PC, Walld R, Roos L, Agras J, McDonald KM. Development and validation of the Ontario acute myocardial infarction mortality prediction rules. J Am Coll Cardiol. 2001;37:992–997

44. Krumholz HM, Wang Y, Chen J, et al. Reduction in acute myocardial infarction mortality in the United States: Risk-standardized mortality rates from 1995–2006. JAMA. 2009;302:767–773

© 2013 Association of American Medical Colleges
