Academic Medicine, May 2005 - Volume 80 - Issue 5
Research Report

Quality of Care in Teaching Hospitals: A Literature Review

Kupersmith, Joel MD


Author Information

Dr. Kupersmith is a Petersdorf Scholar-in-Residence, the Association of American Medical Colleges, Washington, DC.

Correspondence should be addressed to Dr. Kupersmith, Room 535, Association of American Medical Colleges, 2450 N Street NW, Washington, DC 20037; telephone: (202) 828-0686; e-mail: 〈jkupersmith@aamc.org〉.

Abstract

Purpose: To compare the quality of care in teaching hospitals with that in nonteaching hospitals.

Method: By performing a literature review via PubMed, the author identified and surveyed 23 studies that compared the quality of care in teaching hospitals with that in nonteaching hospitals. The studies were published from 1989 to 2004 and, in all but one case, dealt exclusively with U.S. hospitals.

Results: In the majority of the studies reviewed, the teaching hospitals had better quality measures than did the nonteaching hospitals. Process measures were significantly better in teaching hospitals in seven of the eight studies in which such measures were examined, and equal in the remaining study. Risk-adjusted mortality was lower in teaching hospitals in nine of the 15 studies using that measure, not significantly different in five, and significantly lower in nonteaching hospitals in one study (in pediatric intensive care units, even though the teaching hospitals had a better process of care). Among nonmortality outcomes, teaching hospitals were better in one study using such measures; there were no significant differences in five other such studies. Major teaching hospitals had more favorable outcome end points than did minor teaching hospitals in eight studies in which they were compared. Among the studies using clinical data for process analysis or risk adjustment, teaching hospitals had better process measures in all six such studies and lower adjusted mortality in five of the seven studies in which that measure was used.

Conclusions: Overall, the favorable results in teaching hospitals extended over a range of locations, conditions, and populations, including routine as well as complex conditions. However, the quality measured in these studies was not at target levels across the spectrum of hospitals. There needs to be a continuous and determined effort for improvement in all institutions. It is to be hoped that teaching hospitals will take the lead not only in continuously improving their own quality, but also in developing and evaluating ever-improving methods of quality assessment.

Teaching hospitals are an important component of the health care system. They are vital sites of education and research and bring considerable and varied expertise to bear on clinical care. However, it has not been clear to everyone that they do in fact deliver care of higher quality than that delivered by nonteaching hospitals. This applies not only to the care of more complex and sophisticated illnesses, where better quality may be expected, but also to the care of more “routine” conditions.

In this article, I report information from published studies that compare the quality of care in teaching hospitals with that in nonteaching hospitals. Twenty-three studies, performed from 1989 to 2004, were reviewed.1–23 All but one of the studies dealt exclusively with U.S. hospitals. Studies were identified by a PubMed review of the literature on this topic published from 1985 to 2004 and were chosen if they presented appropriate data.

Characteristics of the Studies

Setting.

In all instances, the settings studied were hospital inpatient settings, with additional outpatient follow-up of as long as 2 years.

Selection of institutions studied.

For four of the studies, the investigators selected the institutions randomly1–3,23; for four others, they selected the institutions comprehensively.4,11,15,22 The institutions in the other 15 reports were selected for study by some form of voluntary recruitment (nine)5,12–14,16,18–21 or by some combination of the above methods (six).6–10,17,23

Definition of “teaching hospital.”

The most commonly used definition of a teaching hospital was that of the American Hospital Association, i.e., a hospital that is a member of the Council of Teaching Hospitals (COTH)4,7,9,10,12,14–16,21 (membership requires an affiliation agreement with a medical school and at least four approved residency programs, at least two of which must be in medicine, surgery, obstetrics-gynecology, pediatrics, family practice, or psychiatry) or that has a resident/bed ratio of ≥.25.1,2,6,11,17–19,22 In some instances, teaching hospitals were defined simply by the presence of residents in training5,8,20; by ownership by, or affiliation with, a medical school3; by the presence of medical student clerkships13; or by other criteria (major medical school affiliation with five or more residencies, or with medical students rotating through the intensive care unit23). Often they were further distinguished as “major” and “minor” teaching hospitals by their resident/bed ratios (cutoffs ranging from .097 to >.27)1,2,20 or by being “affiliates”3 or non-COTH hospitals with residencies.12,16,21
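
To make these classification rules concrete, here is a minimal Python sketch of one such rule. It is purely illustrative: the function name, the default 0.25 major-teaching cutoff, and the fallback rule that any residents imply minor teaching status are assumptions for this example, since the studies cited above each applied their own criteria.

```python
def teaching_status(coth_member: bool, residents: int, beds: int,
                    major_cutoff: float = 0.25) -> str:
    """Hypothetical classification; the reviewed studies each used their own criteria.

    A hospital counts as major teaching if it is a COTH member or exceeds the
    resident/bed cutoff (the studies above used cutoffs from about .097 to >.27),
    minor teaching if it trains any residents at all, and nonteaching otherwise.
    """
    ratio = residents / beds if beds else 0.0
    if coth_member or ratio >= major_cutoff:
        return "major teaching"
    if residents > 0:
        return "minor teaching"
    return "nonteaching"

print(teaching_status(coth_member=True, residents=300, beds=800))   # major teaching
print(teaching_status(coth_member=False, residents=30, beds=600))   # minor teaching
print(teaching_status(coth_member=False, residents=0, beds=200))    # nonteaching
```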

Measures used.

Quality measures (or indicators) in general use are of three types: structural, process, and outcomes.24 Structural measures describe buildings, equipment, and personnel characteristics.24,25 Process measures relate to encounters between patients and physicians, other providers, or health care workers, that is, to what is done to a patient. They are further divided into implicit (quality assessed via clinician review) and explicit (automated judgments against preset criteria, e.g., that blood pressure should be measured at a routine adult physical examination).25 Outcomes measures refer to the impact of care on health status, that is, to what happens to a patient. (See the Discussion for comments on the merits of each measure.)

Since teaching status is itself a structural characteristic, other structural characteristics, when represented, were generally treated as separate independent variables or were accounted for in the multivariate analyses in the studies reviewed. Process analysis was performed in eight studies1,2,8,9,12,13,22,23 and was further subdivided into implicit and explicit processes in two.2,8 Measures included use of medications, monitoring, consultations, and procedures. Outcomes analysis was performed in 21 studies.1,3,4,8,10–23 The predominant outcome measure was mortality (15 studies),1,4,6–8,10,12–14,16–18,20,22,23 but outcomes also included preventable adverse events (two studies),3,21 a survey of patient safety,15 patient satisfaction,5 and postsurgical complications.11,17 Both process and outcomes analyses were performed in six of the studies.1,8,12,13,22,23

Administrative versus clinical data.

Administrative data are provided by billing claims and include various demographics, primary diagnosis, mortality, location in the hospital, presence of comorbidity and/or complications (based on other diagnoses coded), and, in the case of ambulatory patients billed separately, laboratory data.26 Clinical data are provided via chart review by clinicians. Administrative data were used in ten of the studies reviewed,4,6,7,10–12,15,17,19,22 clinical review in another ten,2,3,5,9,13,14,16,18,21,23 and a combination of the two in three.1,8,20

Insurance.

Medicare patients and data formed the basis of the studies in nine instances,1,2,6–10,20,22 while 15 were all-payer.3–5,11–19,21–23

Location.

The geographic locations of the hospitals varied greatly. All but one study focused on U.S. hospitals only, either national in scope1,6,7,10,13,15,18,20,23 or confined to between one and six states.2–5,8,11,12,16,17,19,21,22 One study included hospitals both in the United States and in other countries.14

Conditions studied.

When individual conditions were studied, the most common were cardiovascular, comprising acute myocardial infarction (MI) and congestive heart failure (CHF).1,2,8 Other conditions were cerebrovascular accident (CVA),8 chronic lung disease, pneumonia,2,8 hip fracture,8 AIDS,4 and postoperative events.11,18,19 A number of studies included all medical and surgical diagnoses in a particular setting.6,7,9,10 Generally, the more routine conditions were represented in the studies.

Risk adjustment.

Since teaching hospitals are known to care for a sicker group of patients, risk adjustment is highly pertinent. Without it, increased mortality related to intrinsic disease in sicker patients may be mistaken for mortality related to the level of care. Risk adjustment was performed via clinical data in ten studies,1–3,8,9,13,14,16,18,23 via administrative data in ten,4,6,7,9,10,12,15,17,19,20,22 via both in one,21 and in modified form in one.5 In one study, there was no risk adjustment.11

Adjustments were generally made on the basis of severity of illness, comorbidity, and demographics such as age, sex, and socioeconomic status. In administrative data, location within the hospital, such as the intensive care unit (ICU), and coded comorbidity were important elements (see the Discussion for a description of the limitations of administrative data).
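
To illustrate the mechanics of the risk adjustment these studies describe, the sketch below simulates hypothetical patient-level data, fits a mortality model on risk factors (age, severity, comorbidity), and compares observed with expected mortality by hospital group (indirect standardization). All variable names, coefficients, and data are assumptions invented for the example; none reflect any actual study.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.integers(40, 95, n),
    "severity": rng.random(n),            # e.g., a severity-of-illness score
    "comorbidity": rng.integers(0, 5, n), # count of coded comorbidities
    "teaching": rng.integers(0, 2, n),    # 1 = teaching hospital
})
# Simulated deaths depend only on patient risk factors, not on teaching status.
logit = -6 + 0.05 * df["age"] + 2.0 * df["severity"] + 0.3 * df["comorbidity"]
df["died"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

risk_factors = ["age", "severity", "comorbidity"]
model = LogisticRegression(max_iter=1000).fit(df[risk_factors], df["died"])
df["expected"] = model.predict_proba(df[risk_factors])[:, 1]

# Observed-to-expected mortality by teaching status: a ratio near 1.0 means
# mortality is about what the case mix predicts; below 1.0, better than expected.
summary = df.groupby("teaching").agg(observed=("died", "mean"),
                                     expected=("expected", "mean"))
summary["o_e_ratio"] = summary["observed"] / summary["expected"]
print(summary)
```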

Other characteristics studied.

In addition to teaching hospital status, other hospital and patient characteristics were analyzed, including ownership, size, volume, rural or urban location, and credentialing procedures.3–11,13–20,22,23

Findings

Quality measures overall.

In the majority of the studies reviewed, the teaching hospitals had better quality measures than did the nonteaching hospitals (see Tables 1 and 2).

Process measures.

Process measures were significantly better in teaching hospitals in seven1,2,8,9,12,13,23 of the eight studies in which process measures were examined (see Table 1). These included clinical studies on pharmacologic therapy in acute MI,1 Peer Review Organization analysis,9 monitoring and therapy in pediatric13 and adult23 ICUs, and use of invasive cardiac procedures in patients with acute MI, CHF, and CVA.12 In addition, the teaching hospitals had better overall results in two assessments of both implicit and explicit measures, including a wide spectrum of physician and nurse cognitive measures as well as diagnostic and therapeutic interventions.2,8 The one process study with generally equal results among hospitals examined the care of patients with community-acquired pneumonia.22

Outcomes measures—adjusted mortality.

Teaching hospitals had significantly better risk-adjusted mortality in nine1,7,8,10,12,16,18,20,23 of the 15 studies in which adjusted mortality was examined. Included were patients with MI, CHF, pneumonia, CVA, obstructive lung disease, hip fracture, and gastrointestinal hemorrhage,1,8,12,16,20 coronary artery bypass graft patients,18 ICU patients,23 and in some instances all Medicare or other diagnoses.7,10 In five of the 15 adjusted mortality studies reviewed, there were no significant differences in this measure between teaching and nonteaching hospitals; these studies concerned AIDS, Medicare patients, low-birth-weight infants, acute MI, and pneumonia.4,6,14,17,22

Adjusted mortality was significantly lower in nonteaching hospitals in one of the 15 studies, which involved patients in pediatric ICUs. In this study, the teaching hospitals had better processes of care (see Table 1); unexpectedly, other factors ordinarily associated with better quality (especially in high-tech ICU settings), such as volume, children’s hospital location, and unit coordination, had no effect on mortality. However, mortality was diminished when pediatric intensivists were used.13 Post hoc analysis suggested that the presence of residents was associated with increased mortality.

Outcomes—other end points.

Six of the 23 studies reviewed reported on nonmortality outcomes. Two examined combined postoperative end points: in one (reporting nonadjusted mortality, MI, and CVA), there was no significant difference among hospitals,11 and in the other (reporting adjusted mortality and surgical and medical complications in hospitals that volunteered their data), the overall results were unclear; the end point was either not significantly different or higher in teaching hospitals in each of three postoperative states.19 Two of the six studies examined preventable adverse events: teaching hospitals had fewer such events in one3 and significantly fewer drug complications, but not fewer overall adverse events, in the other.21 One study showed no significant differences between hospitals in a patient satisfaction survey on obstetrical services.5 Finally, a study of safety indicators in patients with all-payer insurance had variable results: individual indicators favored both urban teaching and nonteaching hospitals, although more of the indicators favored the nonteaching hospitals; no statistical analysis was performed.15

Studies in which both process and outcomes were evaluated.

In six of the above studies, both outcome and process measures were analyzed, and the two were consistent with each other in five.1,8,12,22,23 All but one of these studies (in which there was no significant difference)22 favored teaching hospitals; these findings imply, but certainly do not assure, a causal relationship between the process measures and mortality. In one of these studies, the mortality differences were almost eliminated when adjustments were made for process measures related to medication (receipt of aspirin, angiotensin-converting enzyme inhibitors, or beta blockers), strongly implying that better process contributed to the better outcomes.1 The one exception to the concurrence of process and outcomes measures was the pediatric ICU study noted above, in which better process measures in the teaching hospitals (higher use of intraarterial and urinary catheters, mechanical ventilation, and vasoactive drugs, although not of vital signs) did not match outcomes.13
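
The sketch below illustrates, on invented data, the type of analysis just described (it is not the actual analysis of Allison et al.1): a logistic model of mortality on teaching status is refit with process-of-care covariates added, and the shrinkage of the teaching coefficient toward zero is what suggests that the mortality advantage runs through better process. All variable names, prevalences, and effect sizes are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
teaching = rng.integers(0, 2, n)
# Assume teaching hospitals deliver the process measures more often (illustrative).
aspirin = (rng.random(n) < 0.6 + 0.2 * teaching).astype(int)
beta_blocker = (rng.random(n) < 0.5 + 0.2 * teaching).astype(int)
# Simulated mortality depends only on the process measures, not on teaching per se.
p_death = 1 / (1 + np.exp(-(-1.5 - 0.4 * aspirin - 0.4 * beta_blocker)))
died = (rng.random(n) < p_death).astype(int)
df = pd.DataFrame({"teaching": teaching, "aspirin": aspirin,
                   "beta_blocker": beta_blocker, "died": died})

unadjusted = smf.logit("died ~ teaching", data=df).fit(disp=0)
adjusted = smf.logit("died ~ teaching + aspirin + beta_blocker", data=df).fit(disp=0)
print("teaching coefficient, unadjusted:", round(unadjusted.params["teaching"], 3))
print("teaching coefficient, adjusted:  ", round(adjusted.params["teaching"], 3))
```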

Major versus minor teaching hospitals.

In seven of the eight studies in which major and minor teaching hospitals were compared, the major teaching hospitals had better outcome end points than did the minor, “other,” or “affiliated” teaching hospitals.1–3,12,16,20,21 In three of these studies, the end-point gradient rose stepwise from nonteaching to minor or affiliated teaching hospitals to major teaching hospitals.1–3 In another study, the resident/bed cutoff above which teaching hospitals had better outcomes was .062.8

Findings in studies using clinical data.

One methodologic limitation noted by many authors is the use of administrative data, especially for risk adjustment (see below). Restricting attention to studies that used clinical data for process analysis or risk adjustment (see Table 2) leaves 12 of the 23 studies reviewed. In these, teaching hospitals had better process measures in all six1,2,8,9,13,23 of the studies using such measures; lower adjusted mortality in five1,8,16,18,23 of the seven1,8,13,14,16,18,23 studies using that measure; fewer preventable adverse events in one study3 and no significant difference in preventable adverse events in another21; and no significant difference in patient satisfaction in a further study.5

Other observations.

As expected, where structural features were examined, they were better at the teaching hospitals than at the nonteaching hospitals. Such features included the percentage of specialty-board-certified physicians, nurse/patient ratios, and high-tech equipment. Severity of illness was also higher in teaching hospitals. Risk-adjusted length of stay was greater in teaching hospitals in one study,23 variable (depending on diagnosis) in another,12 and shorter in a third.16 Adjusted costs were higher in teaching hospitals in three studies.12,20,23

Consistent with previous studies, hospitals with higher volume, higher occupancy rates, a greater proportion of board-certified physicians, and greater technical sophistication generally7–9,11,17,18 but not always14 had better process and outcome measures regardless of teaching status. However, like teaching hospitals, they were not favored in the one patient satisfaction survey.5

Discussion

In the findings presented above, the bulk of the studies reviewed show unquestionably that quality of care is better in teaching hospitals. Process of care, the most sensitive and precise measure of quality,25,28 was better in teaching hospitals in all but one study, and in that study the hospitals being compared had similar results. Adjusted mortality was more favorable in teaching hospitals in nine studies, higher in one, and not significantly different in five others. When studies that used strictly administrative data or had no risk adjustment are excluded, all six studies using process analyses and five of the seven using adjusted mortality analyses favored teaching hospitals.

However, it should also be noted that the measures of quality are not at target levels across the entire spectrum of hospitals. Defects in the hospital system as a whole have been plainly conveyed, for example, in the Institute of Medicine’s landmark reports on quality, medical errors, and patient safety.29,30 It is clear that there needs to be a continuous and determined effort for improvement in all types of hospitals. In addition, it should be noted that quality may not be and probably is not uniform within institutions.

Regarding the more favorable findings in teaching hospitals, overall these findings extended over a range of locations, conditions, and populations. Some have thought that teaching hospitals may offer higher quality for complex conditions, where their attributes have added value, but not for routine conditions. However, substantial data favored teaching hospitals across a spectrum of diseases that included both highly specialized and routine care (see Table 1).

Three studies were in the much-discussed area of patient safety and preventable adverse events. Two regional studies of preventable adverse events showed either an advantage for teaching hospitals3 or no significant difference.21 One very extensive national study using administrative data to examine 20 patient safety indicators found that some of these indicators were at lower (i.e., less adverse) levels and more of them were at higher levels in urban teaching hospitals than in nonteaching hospitals (urban or rural).15 Although this study may have been limited by the better documentation of adverse events in teaching hospitals, by the use of administrative risk adjustment, and by the lack of statistical analysis, the issue warrants, and will no doubt receive, further observation.

Note should also be made of the two studies in pediatric and neonatal intensive care populations (one of which was a recent update14 of a previous study31 with similar results), which showed no better14 or worse13 outcomes for teaching hospitals, for higher-volume hospitals, or for those with better unit coordination. These two studies dealt with complex patient care, where these hospital attributes are considered to have the advantage. In one of the studies, better process of care in the teaching hospitals did not match outcomes.13 Further studies of care in this age group are needed.

Limitations of the studies
General limitations.

Studies on quality of care, including those reviewed here, are by their nature observational, usually retrospective, and usually characterized by nonrandom selection. Therefore, they may not be free of bias and do not have the rigor of randomized controlled trials, nor are they designed to be such trials.32 Rather, they are designed to determine whether the present state of medical knowledge has been applied as well as it could have been.

Specifically for the present review, it should be noted that the results of the studies reviewed have merely been added together. The totals in various categories are not in any way equivalent to a formal meta-analysis, a technique that would not be suited to the studies reviewed. Also, selection of studies was carried out by reviewing the literature found on PubMed; it is possible that some studies in which teaching hospital status represented a minor component were not detected. Another review of this subject, which was published in 2002 and includes many of the studies examined here,33 is suggested for further reading.

Generalizability.

An important question is whether data chosen in a particular study are generalizable.32 Many quality-of-care analyses reviewed were limited to a location (one state or a modest group of states),2–5,8,11,12,16,17,19,21,22 a specific population, specific diseases, or a specific type of insurance such as Medicare. In fact, few of the studies gave a comprehensive view of quality. Regional differences in care have been well noted and studied,34 and the fact that outcomes for different conditions in the same study may vary16 highlights the problem of generalizability. However, the fact that the studies reviewed covered a gamut of locations, conditions, and populations suggests that the conclusions may be generalizable.

Administrative versus clinical data.

Administrative (claims) data were widely used in the studies reviewed (15 of them). Advantages of this type of data include low cost, ease of acquisition, ease of follow-up over long periods of time, and, apart from coding issues, absence of reporting errors.35,36 However, while highly useful and yielding considerable otherwise unobtainable data for research and quality assessment, administrative data are not sensitive enough to detect many refinements of clinical care. Severity of illness, for one, is difficult to obtain, and functional status is almost impossible.37,38 As noted above, Table 2 includes only those studies using clinical data for process analysis or risk adjustment.

In contrast, clinical data from chart abstraction yield a more complete picture. These data are much more extensive and detailed but require review by an abstracter. Information collected for billing, although subject to coding errors,36 is directly part of the care of patients and thus carries a strong incentive for accuracy in that it is used to collect proper reimbursement. Chart abstraction, however, is not part of direct patient care and must always include checks for accuracy and precision.

Risk adjustment.

Risk adjustment, which is most important for teaching hospitals because they take care of sicker and more vulnerable patients, is problematic whether it is based on administrative or clinical data. Administrative data are especially problematic and may not have enough depth: the data depend on the aggressiveness of coding, do not ordinarily include physiologic measurements,38 and do not easily record differences in disease stage. Also, the impact of comorbidity (important in assessments based on administrative data) may be influenced by which particular comorbidities are used.39 Further, there is no way to distinguish between coded comorbidities present on hospital admission, which are the appropriate measures of intrinsic risk, and codes added during hospitalization, which may not represent intrinsic risk per se.

It is also important to note that risk adjustment may be relevant for other measures besides mortality. Sicker patients, who have more opportunity for preventable severe adverse events than others, may also display lower patient satisfaction.40 In a study of this measure on an obstetrical service, patients with better health status (as measured by the SF-36 questionnaire) gave their care a better grade.5

Process versus outcomes data.

While outcome measures would seem more meaningful, convincing, and aligned with health system goals than process measures, an important stumbling block is that they may not align with quality closely enough to reflect it reliably. Not everyone with a poor process of care suffers a poor outcome.41 For example, if a patient with acute MI is sent home from the emergency unit without thrombolytic or other proper treatment but survives, the fact remains that the patient did not receive an appropriate quality of care.25 In addition, deaths in a particular study may not occur frequently enough to detect differences (beta error). Process measures are generally considered more sensitive and can be more closely related to quality,25 but a rigorous observation of an outcome measure can be very convincing.
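
A small, self-contained calculation can make the beta-error point concrete; the event rates and sample sizes below are hypothetical and not drawn from the studies reviewed. The sketch approximates the power of a two-sided test comparing in-hospital mortality between two groups of hospitals.

```python
from math import sqrt
from statistics import NormalDist

def power_two_proportions(p1: float, p2: float, n_per_group: int,
                          alpha: float = 0.05) -> float:
    """Normal-approximation power for a two-sided test of two proportions."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    pooled = (p1 + p2) / 2
    se_null = sqrt(2 * pooled * (1 - pooled) / n_per_group)
    se_alt = sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    z = (abs(p1 - p2) - z_alpha * se_null) / se_alt
    return NormalDist().cdf(z)

# Hypothetical example: 8% versus 6% in-hospital mortality.
for n in (500, 2000, 8000):
    print(n, round(power_two_proportions(0.08, 0.06, n), 2))
```

With a 2-percentage-point absolute difference in this example, power remains below the conventional 80% level until each group contains several thousand patients, which is the sense in which mortality end points can be insensitive to real differences in care.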

Future considerations

There are many improved ways by which quality of care might be analyzed, and new technology and new concepts will, it is hoped, place the discipline of quality assessment on a new level. First, the general use of electronic medical records (EMRs) should make much more accurate and complete clinical data readily available. If appropriate EMR-generated data become more extensive and accessible, research on quality of care may become more commonplace and improve dramatically (subject, of course, to privacy regulations and other factors). One result could be that quality-of-care studies extend more to pediatric and adult pre-Medicare age groups and to ambulatory care.

Also important, and stimulated in part by concern over patient safety, are new and expanding aspects of quality assessment that go beyond the traditional evaluations of the doctor-patient contact. Quality of care should now also be concerned with organizational attributes, system process issues, and the general workings of teams, according to models that often derive from other industries.

Other approaches might include research on more precise assessment of patient risk, studies linking outcomes to specific process measures, new concepts of patient-centered care, methodologies deriving from effectiveness and health services research, and studies of the behaviors associated with the best possible patient outcomes. It is important that teaching hospitals take the lead not only in continuously improving their own quality but also in developing and evaluating ever-improving methods of quality assessment.

Note added in proof: While this manuscript was in press, two possibly relevant works appeared. In a study on Medicare patients, Fisher et al. (Health Affairs Web posting 〈http://content.healthaffairs.org/cgi/reprint/hlthaff.var.19v1〉) showed that there were differences among COTH hospitals in intensity of care of acute myocardial infarction, colorectal cancer and hip fracture. In a Web site research brief on privately insured patients using administrative data, Kane et al. showed that Massachusetts COTH hospitals and “community” hospitals (including some minor teaching hospitals) had about the same postoperative complication rates (see 〈http://www.pioneerinstitute.org/pdf/kane-web.pdf〉).

References

1Allison JJ, Kiefe CI, Weissman NW, et al. Relationship of hospital teaching status with quality of care and mortality for Medicare patients with acute MI. JAMA. 2000;284:1256–62.

2Ayanian JZ, Weissman JS, Chasan-Taber S, Epstein AM. Quality of care for two common illnesses in teaching and nonteaching hospitals. Health Aff (Millwood). 1998;17:194–205.

3Brennan TA, Hebert LE, Laird NM, et al. Hospital characteristics associated with adverse events and substandard care. JAMA. 1991;265:3265–9.

4Cunningham WE, Tisnado DM, Lui HH, Nakazono TT, Carlisle DM. The effect of hospital experience on mortality among patients hospitalized with acquired immunodeficiency syndrome in California. Am J Med. 1999;107:137–43.

5Finkelstein BS, Singh J, Silvers JB, Neuhauser D, Rosenthal GE. Patient and hospital characteristics associated with patient assessments of hospital obstetrical care. Med Care. 1998;36:AS68–78.

6Fleming ST, McMahon LF Jr, Desharnais SI, Chesney JD, Wroblewski RT. The measurement of mortality. A risk-adjusted variable time window approach. Med Care. 1991;29:815–28.

7Hartz AJ, Krakauer H, Kuhn EM, et al. Hospital characteristics and mortality rates. N Engl J Med. 1989;321:1720–5.

8Keeler EB, Rubenstein LV, Kahn KL, et al. Hospital characteristics and quality of care. JAMA. 1992;268:1709–14.

9Kuhn EM, Hartz AJ, Gottlieb MS, Rimm AA. The relationship of hospital characteristics and the results of peer review in six large states. Med Care. 1991;29:1028–38.

10Kuhn EM, Hartz AJ, Krakauer H, Bailey RC, Rimm AA. The relationship of hospital ownership and teaching status to 30- and 180-day adjusted mortality rates. Med Care. 1994;32:1098–108.

11Pearce WH, Parker MA, Feinglass J, Ujiki M, Manheim LM. The importance of surgeon volume and training in outcomes for vascular surgical procedures. J Vasc Surg. 1999;29:768–76; discussion 777–8.

12Polanczyk CA, Lane A, Coburn M, Philbin EF, Dec GW, DiSalvo TG. Hospital outcomes in major teaching, minor teaching, and nonteaching hospitals in New York state. Am J Med. 2002;112:255–61.

13Pollack MM, Cuerdon TT, Patel KM, Ruttimann UE, Getson PR, Levetown M. Impact of quality-of-care factors on pediatric intensive care unit mortality. JAMA. 1994;272:941–6.

14Rogowski JA, Horbar JD, Staiger DO, Kenny M, Carpenter J, Geppert J. Indirect vs direct hospital quality indicators for very low-birth-weight infants. JAMA. 2004;291:202–9.

15Romano PS, Geppert JJ, Davies S, Miller MR, Elixhauser A, McDonald KM. A national profile of patient safety in U.S. hospitals. Health Aff (Millwood). 2003;22:154–66.

16Rosenthal GE, Harper DL, Quinn LM, Cooper GS. Severity-adjusted mortality and length of stay in teaching and nonteaching hospitals. Results of a regional study. JAMA. 1997;278:485–90.

17Schultz MA, van Servellen G, Litwin MS, McLaughlin EJ, Uman GC. Can hospital structural and financial characteristics explain variations in mortality caused by acute myocardial infarction? Appl Nurs Res. 1999;12:210–4.

18Silber JH, Rosenbaum PR, Schwartz JS, Ross RN, Williams SV. Evaluation of the complication rate as a measure of quality of care in coronary artery bypass graft surgery. JAMA. 1995;274:317–23.

19Sloan FA, Conover CJ, Provenzale D. Hospital credentialing and quality of care. Soc Sci Med. 2000;50:77–88.

20Taylor DH Jr, Whellan DJ, Sloan FA. Effects of admission to a teaching hospital on the cost and quality of care for Medicare beneficiaries. N Engl J Med. 1999;340:293–9.

21Thomas EJ, Orav EJ, Brennan TA. Hospital ownership and preventable adverse events. J Gen Intern Med. 2000;15:211–9.

22Whittle J, Lin CJ, Lave JR, et al. Relationship of provider characteristics to outcomes, process, and costs of care for community-acquired pneumonia. Med Care. 1998;36:977–87.

23Zimmerman JE, Shortell SM, Knaus WA, et al. Value and cost of teaching hospitals: a prospective, multicenter, inception cohort study. Crit Care Med. 1993;21:1432–42.

24Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q. 1966;44(7 suppl):166–206.

25Brook RH, McGlynn EA, Cleary PD. Quality of health care. Part 2: measuring quality of care. N Engl J Med. 1996;335:966–70.

26Brook RH, Lohr KN. Monitoring quality of care in the Medicare program. Two proposed systems. JAMA. 1987;258:3138–41.

27Butler J, Weingarten JP Jr, Weddle JA, Jain MK. Differences among hospitals in delivery of care for heart failure. J Healthc Qual. 2003;25:4–10; quiz 11, 39.

28Brook RH, McGlynn EA, Shekelle PG. Defining and measuring quality of care: a perspective from US researchers. Int J Qual Health Care. 2000;12:281–95.

29Institute of Medicine. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press, 2000.

30Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press, 2001.

31Horbar JD, Badger GJ, Lewit EM, Rogowski J, Shiono PH. Hospital and patient characteristics associated with variation in 28-day mortality rates for very low birth weight infants. Vermont Oxford Network. Pediatrics. 1997;99:149–56.

32Starfield B. Quality-of-care research: internal elegance and external relevance. JAMA. 1998;280:1006–8.

33Ayanian JZ, Weissman JS. Teaching hospitals and quality of care: a review of the literature. Milbank Q. 2002;80:569–93.

34Normand ST, Glickman ME, Sharma RG, McNeil BJ. Using admission characteristics to predict short-term mortality from myocardial infarction in elderly patients. Results from the Cooperative Cardiovascular Project. JAMA. 1996;275:1322–8.

35Wennberg JE, Roos N, Sola L, Schori A, Jaffe R. Use of claims data systems to evaluate health care outcomes. Mortality and reoperation following prostatectomy. JAMA. 1987;257:933–6.

36Iezzoni LI, Daley J, Heeren T, et al. Using administrative data to screen hospitals for high complication rates. Inquiry. 1994;31:40–55.

37Jollis JG, Ancukiewicz M, DeLong ER, Pryor DB, Muhlbaier LH, Mark DB. Discordance of databases designed for claims payment versus clinical information systems. Implications for outcomes research. Ann Intern Med. 1993;119:844–50.

38Pine M, Norusis M, Jones B, Rosenthal GE. Predictions of hospital mortality rates: a comparison of data sources. Ann Intern Med. 1997;126:347–54.

39Shapiro MF, Park RE, Keesey J, Brook RH. The effect of alternative case-mix adjustments on mortality differences between municipal and voluntary hospitals in New York City. Health Serv Res. 1994;29:95–112.

40Cleary PD, McNeil BJ. Patient satisfaction as an indicator of quality care. Inquiry. 1988;25:25–36.

41Kahn KL, Rogers WH, Rubenstein LV, et al. Measuring quality of care with explicit process criteria before and after implementation of the DRG-based prospective payment system. JAMA. 1990;264:1969–73.


© 2005 Association of American Medical Colleges
