Norcini, John J. PhD; Kimball, Harry R. MD; Lipner, Rebecca S. PhD
Over the past decade, the specialty board certification of physicians has become increasingly important to patients, managed care organizations, and health care employers.1 It is one of the few objective indicators of overall competence, and it is commonly used to indicate a physician's ability to deliver high-quality patient care. Information about certification is collected by nearly all organizations that confer credentials on physicians, and it is currently included in the Health Plan Employer Data and Information Set (HEDIS) criteria reported annually by managed care organizations to the National Committee for Quality Assurance. Nonetheless, some have questioned whether “having boards” is useful, and they point to a lack of evidence relating a physician's certification status to the medical outcomes of his or her patients. Only one study in internal medicine has examined the relationship between certification and actual patient outcomes, and it found that certified internists offered increased preventive services.2 Many other studies have documented associations between certification and medical school experiences,3,4 graduate training experiences,5–7 practice volume and experience,8–10 and other measures of competence.11–13
Recent studies have shown differences in the outcomes of patients with acute myocardial infarction (AMI) according to the self-designated specialty status of their physicians. Using Medicare patients, Jollis and colleagues14 reported that, after adjusting for clinical severity, patients admitted to the hospital by self-designated cardiologists were 12% less likely to die within one year than were patients admitted by self-designated primary care physicians. Similarly, using data collected by the Pennsylvania Health Care Cost Containment Council (PHC4), Nash and colleagues15 reported lower in-hospital mortality rates (adjusted for severity) among patients of self-designated cardiologists. A more extensive analysis of the PHC4 data by Casale et al. confirmed and extended these results.16 However, these studies relied on self-reported specialization, and none of them verified information about the training or certification status of physicians.
The purpose of this study was to determine whether differences existed in the outcomes of patients with acute myocardial infarction depending on whether or not the attending physician was certified as either a family practitioner, an internist, or a cardiologist. We analyzed a combination of the PHC415–18 data set and the American Medical Association's (AMA's) master file. The PHC4 collected information on all admissions for AMI in Pennsylvania in 1993, and the AMA master file contains information on the specialty board certification status of physicians and their years of graduation from medical school. Specifically, our study compared certified and self-designated family practitioners, internists, and cardiologists with respect to the illness of their patients. In addition, we looked at the in-hospital mortality rates (adjusted for severity of illness) for patients of these physicians; the characteristics of the hospitals the physicians worked in; and the physicians' own characteristics, including time since graduation from medical school and number of AMIs they each treated in 1993.
The Pennsylvania Health Care Cost Containment Council is a state agency mandated to collect and report health care data. As part of a special project, hospitals in Pennsylvania were required to give the council detailed administrative and clinical information concerning all cases where AMI was the principal diagnosis and the initial episode of care during 1993. Data were collected on admission and from the charts for 40,684 hospitalizations.
An attending physician was assigned to patients by the hospital and its doctors; physicians self-reported their specializations. The PHC4 recorded hospital characteristics, such as the availability of advanced cardiac care (percutaneous transluminal coronary recanalization and coronary bypass surgery); the primary payer for each patient; and hospital location, which ranged from 100% rural to 100% urban. Previous research demonstrated that patients treated for AMI in rural or mostly urban hospitals (75% or greater) had a higher rate of mortality.16 We coded the PHC4 location data so that patients treated in rural or mostly urban hospitals were assigned a 0, and all other patients were assigned a 1. After collecting the data, the PHC4 offered physicians and hospitals the opportunity to correct patient-level data, add diagnoses and procedures, request case exclusions, and confirm physician attribution.
Of the 40,684 hospitalizations, 1,677 (4.1%) were excluded from our analysis for a variety of administrative and clinical reasons (e.g., hospital closure; patients who refused treatment; clinical complexity, including anoxic brain damage, metastatic cancer, extensive trauma, and cardiac transplant). In addition, our analysis included only patients who were admitted directly to a hospital for AMI, thus excluding the 8,292 cases that began as transfers from other acute-care facilities.
To calculate the probability of death for each patient hospitalized with AMI, the PHC4 identified possible predictors of in-hospital mortality by reviewing the literature and seeking advice from the Clinical Advisory Panel (an ad hoc committee made up of surgeons, cardiologists, generalists, and researchers that gave advice to the council) and the Technical Advisory Group (a standing committee of the council). Based on this input, the PHC4 then calculated and cross-validated expected in-hospital mortality using a backwards stepwise logistic regression model.18 In addition to MediQual's Atlas™ Admission Severity Group (a measure of clinical instability that summarizes 23 variables), the PHC4 used age, age squared, admission type and source, cardiac dysrhythmias, cardiogenic shock, cardiomyopathy, conduction disorders, diabetes, dialysis, gender, heart failure, hypertension with or without complications, infarct site, malignant neoplasm, payer, prior coronary artery bypass grafting surgery, and renal failure to construct models of potential risk-adjustment factors. Of these, hypertension with or without complications, heart failure, age squared, malignant neoplasm, and admission source and type added nothing to the prediction of mortality over and above the other variables, and they were not used in the final model.
The AMA master file contains the specialty board certification status and year of graduation from medical school for all physicians. We matched these data with the PHC4's patient data set using the individual physicians' license numbers, resulting in the identification of 4,863 physicians. Certified or self-designated surgeons (n = 114), physicians not identifying a specialty (n = 69), physicians whose certification status did not match their self-designated specialties (n = 89), and physicians whose years of graduation from medical school were missing (n = 45) were excluded. The 4,546 remaining physicians analyzed in this study managed 28,756 patients' hospitalizations for AMI.
For each physician, patient volume was determined by summing the number of hospitalizations for AMI he or she managed in Pennsylvania in 1993. The number of years since graduation from medical school was calculated by subtracting a physician's date of graduation from 1993.
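The two derived physician-level variables amount to a simple aggregation over hospitalization records. As a sketch (the records and column names below are hypothetical):

```python
import pandas as pd

# Hypothetical records: one row per AMI hospitalization managed in 1993.
hosp = pd.DataFrame({
    "physician_id": [101, 101, 102, 102, 102, 103],
    "grad_year":    [1975, 1975, 1988, 1988, 1988, 1990],
})

per_doc = hosp.groupby("physician_id").agg(
    volume=("physician_id", "size"),    # AMI hospitalizations managed in 1993
    grad_year=("grad_year", "first"),
)
per_doc["years_since_grad"] = 1993 - per_doc["grad_year"]
print(per_doc)
```

Physician 102, with three hospitalizations, gets a volume of 3; physician 101, who graduated in 1975, gets 18 years since graduation.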
To compare certified and self-designated family practitioners, internists, and cardiologists with respect to the illness of their patients, we analyzed each of the variables used to predict mortality in the PHC4 model. In addition, we examined data related to hospitals' and physicians' characteristics. Likelihood-ratio chi-square tests and F-tests were used to determine whether there were statistically significant differences in AMI patients' severity of illness based on the physicians' specialization and certification.
These simple tests of differences do not allow joint consideration of the factors that bear on mortality, nor do they permit adjustment for the influence of several variables at the same time. Consequently, we fitted a linear regression model to the data (a curve-estimation procedure confirmed the appropriateness of a linear model for this data set). We fitted the model with the generalized estimating equations (GEE) method, clustering patients within physician. GEE is suitable for a repeated-measures design (i.e., correlated observations) with a binary outcome. In our study, the outcome measure, mortality, is binary, and patients treated by the same physician are not independent in the statistical sense. An analysis done at the level of patients without regard to this clustering would overemphasize the contribution of high-volume physicians.
In the model, the dependent measure was mortality. The independent variables were:
▪ probability of death (using the mortality equation developed by PHC4),
▪ availability of advanced cardiac care,
▪ hospital location,
▪ physician volume,
▪ number of years since the physician graduated from medical school,
▪ whether the physician was a family practitioner,
▪ whether the physician was an internist, and
▪ whether the physician was certified.
Table 1 displays the characteristics of patients, hospitals, and physicians according to the self-designated specialties (family medicine, internal medicine, or cardiology) and certification status of the attending physicians. Likelihood-ratio chi-square tests and F-tests revealed statistically significant differences among specialties and certification statuses for all of the variables except the presence of cardiomyopathy. Compared with the other specialties, the cardiologists had patients who were younger, more often men, and less severely ill as indicated by the Admission Severity Group and predicted mortality. In addition, cardiologists' patients were less often on Medicare and less often treated in hospitals located in rural or mostly urban communities. Not surprisingly, the cardiologists handled a greater volume of AMIs than did the primary care physicians.
As a group, non-certified physicians, regardless of specialty, had graduated from medical school before certified physicians, and they treated more of their patients in hospitals that lacked advanced cardiac care. Within each specialty, certified physicians' patients had slightly higher predicted rates of mortality, but they had lower actual rates of mortality.
To adjust for the influence exerted by several variables at the same time, we fitted a regression model to the data using GEE. Table 2 presents the results. We were reassured to find that predicted mortality (all measures of severity of illness, as well as payer) had the strongest relationship with a patient's death.
Treatment in a facility offering advanced cardiac care was not significantly associated with a patient's mortality, but the location of the hospital was. When all other variables were held constant, treatment in a hospital located outside a rural or mostly urban community was associated with 17% lower mortality.
The characteristics of individual physicians were also important, and all were statistically significant. Again, when all other variables were held constant, every additional 16 cases of AMI a physician treated (i.e., an increase in the volume of patients seen by a physician) was associated with a 10% decrease in mortality. Conversely, we found a 0.5% increase in AMI patients' mortality for every year since the physician had graduated from medical school. In terms of a physician's specialty, treatment by either a family practitioner or an internist (as compared with a cardiologist) was associated with a 25–26% increase in AMI patients' mortality. Finally, certification was also significantly related to mortality: when all other variables were held constant, certification was associated with a 15% reduction in AMI patients' mortality.
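To see how such adjusted effects might combine for a single physician profile, one can multiply the reported relative changes. Treating the percentages as multiplicative relative risks is an assumption made here for illustration only, and the profile itself is hypothetical.

```python
# Hypothetical profile: certified, treats 32 more AMI cases than the
# comparison physician, graduated 10 years earlier. Multiplicative
# combination is an illustrative assumption, not the study's stated model.
relative_mortality = 1.0
relative_mortality *= 1 - 0.15           # certified: 15% lower mortality
relative_mortality *= (1 - 0.10) ** 2    # 32 extra cases = two 16-case steps
relative_mortality *= (1 + 0.005) ** 10  # 10 more years since graduation
print(round(relative_mortality, 3))
```

Under these assumptions the profile's expected mortality is roughly 72% of the comparison physician's, with the experience advantage only slightly offset by the extra years since graduation.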
The purpose of this study was to determine whether there were differences in the outcomes of patients admitted with AMI according to whether their attending physicians were certified or self-designated family practitioners, internists, and cardiologists. We found that lower patient mortality from AMI was associated with treatment by an attending physician who was a cardiologist, cared for larger numbers of patients, was closer to his or her year of graduation from medical school, and was certified. Previous studies have shown that patient volume is predictive of reduced patient mortality, and this result was replicated here.16,20,21 Likewise, our findings seem to corroborate previous work that has shown that specialization, proximity to graduation from medical school, and hospital location are related to attending physicians' performance.2,8,16,22 The unique contribution of this study is that our findings show that an attending physician's certification status is also associated with AMI patients' mortality, even when taking into account many other variables.
To get a sense of the magnitude of the effects, we calculated what would have happened if certified doctors had treated all of the study's approximately 30,000 hospitalized AMI patients. We estimate that 481 fewer in-hospital deaths could be expected when compared with treatment of all patients by non-certified physicians. Similarly, cardiologists were associated with significantly better mortality results among AMI patients than were primary care physicians. If cardiologists had treated all of the study's approximately 30,000 patients, we estimate that 802 fewer in-hospital deaths could be expected when compared with treatment of all patients by primary care doctors.
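The arithmetic behind a counterfactual estimate of this kind is straightforward. The sketch below uses an assumed baseline mortality rate for illustration; it is not a figure taken from the study, so the output does not reproduce the 481 or 802 reported above.

```python
# Back-of-envelope counterfactual: expected deaths avoided if all patients
# were treated by certified physicians. The baseline mortality rate is an
# illustrative assumption, not a figure from the study.
n_patients = 30_000
noncert_mortality = 0.10      # assumed in-hospital mortality rate
relative_reduction = 0.15     # 15% reduction associated with certification

deaths_noncert = n_patients * noncert_mortality
deaths_cert = deaths_noncert * (1 - relative_reduction)
print(int(deaths_noncert - deaths_cert))  # prints 450
```

The same calculation with the study's actual severity-adjusted rates, and with the cardiologist-versus-primary-care contrast in place of certification, yields the 481 and 802 figures reported above.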
The methods used by PHC4 in assembling the data have many strengths, and their work was conducted with considerable care. However, there are a variety of issues having to do with data collection that could potentially influence the findings of this study.23 PHC4 verified the data, providing hospitals and physicians the opportunity to make corrections. Nonetheless, the processes of identifying a single physician as the attending of record varied among institutions, and it is possible that more than one doctor may have contributed to clinical outcomes in some instances. However, where this occurs, its effect is to obscure differences among physicians, thereby working against the ability to make distinctions based on specialization and certification status.
In addition, there are limitations in the risk-adjustment procedures the PHC4 used, including the inability to fully distinguish between complications and coexisting conditions, variations in coding, and categorization of the Admissions Severity Group score.23 Some of these limitations, like that imposed by the categorization of the Admissions Severity Group score, apply equally to all groups and mask differences among physicians. However, others may be confounded by a physician's certification status or specialization and, to the degree that they are, these limitations have the potential to affect the findings reported here.
In any retrospective study such as this, not all of the factors that lead to a particular patient's outcome can be captured. For instance, PHC4 collected no information about why a given patient was attended by a certain physician, the length of their acquaintance, or the quality of their relationship. Although the major causes of mortality were captured, it is not possible to rule out these other uncontrolled factors.
Despite these limitations, our study's findings have a number of implications. First, the findings support the validity of the certification processes of specialty boards. In our study, certification was associated with lower mortality, irrespective of a physician's specialty, even after taking account of severity of illness, hospital characteristics, patient volume, and years since graduation from medical school. In other studies, certification has been shown to correlate with a variety of educational and practice experiences, and this study provides evidence that it is associated with better clinical outcomes for patients with AMI as well.2–13 This is not surprising, because to become certified physicians must satisfactorily complete accredited training and pass rigorous examinations. Future studies should focus on the relative contributions of the quality and duration of training and examination performance to this outcome.
Second, the results of our study underscore the importance of considering certification, as opposed to self-designation, in the process of assigning credentials to physicians. In our study, approximately one fourth of the patients' hospitalizations were managed by self-designated but uncertified physicians, and a similar proportion of all doctors, regardless of specialty, were uncertified. This relatively large group of caregivers had higher rates of mortality among their AMI patients. Although certification should not be used as the sole marker of competence, this study's findings show that it should be an important consideration in making decisions about credentials.
Third, we were surprised to find that, in this study of patients with AMI, the cardiologists managed less severely ill patients than did the generalists. One possible explanation is that younger patients with more treatable underlying conditions found their way to cardiologists. However, this result requires further study.
Fourth, our findings demonstrate the need for caution when using physicians' self-designations of their specialties to compare the outcomes of their patients. More primary care physicians than cardiologists are uncertified, so contrasting the two groups will overstate the magnitude of their differences. Future studies should consider certification status when making such comparisons.
Fifth, certification is associated with the quality of the medical schools physicians attend3,4 as well as a variety of graduate experiences,5–7 including faculty-resident ratio and length of training. The findings of our study suggest that these aspects of the educational process may be associated with patients' outcomes as well. Certification could serve as a useful intermediate outcome, until it is possible to collect enough clinical data to permit direct comparisons between educational processes and patients' outcomes.
Finally, our study may have implications for policy development. Lower mortality rates among patients with AMI might be obtained by limiting their treatment to those physicians who are certified, are relatively recent graduates from medical school, and have considerable experience with this condition. Not surprisingly, certified cardiologists best fit this description, followed by certified internists and family practitioners. Of course, the exclusive use of certified cardiologists or any other group would require balancing patients' mortality rates against other factors, such as the availability of certified doctors, the costs of additional training, the fragmentation of care, and the outcomes of patients with comorbid conditions.
1. Grambling A. Health plans want to know: are you certified? Managed Care. May 1994:39–41.
2. Ramsey PG, Carline JD, Inui TS, Larson EB, LoGerfo JP, Wenrich MD. Predictive validity of certification by the American Board of Internal Medicine. Ann Intern Med. 1989;110:719–26.
3. Norcini JJ, Shea JA, Webster GD, Benson JA Jr. Predictors of the performance of foreign medical graduates on the 1982 certifying examination in internal medicine. JAMA. 1986;256:3367–70.
4. Shea JA, Norcini JJ, Day SC, Webster GD, Benson JA Jr. Performance of U.S. citizen Caribbean medical school graduates on the American Board of Internal Medicine certifying examinations, 1984–1987. Int J Teach Learn Med. 1989;1:10–5.
5. Norcini JJ, Fletcher SW, Quimby BB, Shea JA. Performance of women candidates on the American Board of Internal Medicine certifying examination, 1973–1982. Ann Intern Med. 1985;102:115–8.
6. Norcini JJ, Grosso LJ, Shea JA, Webster GD. The relationship between features of residency training and ABIM certifying examination performance. J Gen Intern Med. 1987;2:330–6.
7. Norcini JJ. Indicators of the educational effectiveness of subspecialty training programs. Acad Med. 1995;70:512–6.
8. Norcini JJ, Lipner RS, Benson JA Jr, Webster GD. An analysis of the knowledge base of practicing internists as measured by the 1980 recertification examination. Ann Intern Med. 1985;102:385–9.
9. Shea JA, Norcini JJ, Baranowski RA, Langdon LO, Popp RL. A comparison of video and print formats in the assessment of skill in interpreting cardiovascular motion studies. Eval Health Prof. 1992;15:325–40.
10. Steel K, Norcini JJ, Brummel-Smith K, Erwin D, Markson L. The first certifying examination in geriatric medicine. J Am Geriatr Soc. 1988;37:1188–91.
11. Norcini JJ, Webster GD, Grosso LJ, Blank LL, Benson JA Jr. Ratings of clinical competence and performance of certification examination. J Med Educ. 1987;62:457–62.
12. Norcini JJ, Meskauskas JA, Langdon LO, Webster GD. An evaluation of a computer simulation in the assessment of physician competence. Eval Health Prof. 1986;9:286–304.
13. Day SC, Norcini JJ, Diserens D, et al. The validity of an essay test of clinical judgment. Acad Med. 1990;65(9 suppl):S39–S40.
14. Jollis JG, DeLong ER, Peterson ED, et al. Outcome of acute myocardial infarction according to the specialty of the admitting physician. N Engl J Med. 1996;335:1880–7.
15. Nash IS, Nash DB, Fuster V. Do cardiologists do it better? J Am Coll Cardiol. 1997;29:475–8.
16. Casale PN, Jones JL, Wolf MS, Pey Y, Eby LM. Patients treated by cardiologists have lower in-house mortality for acute myocardial infarction. J Am Coll Cardiol. 1998;32:885–9.
17. Focus on Heart Attack in Pennsylvania: The Technical Report—1993 Part B. Harrisburg, PA: Pennsylvania Health Care Cost Containment Council, 1996.
18. Focus on Heart Attack in Pennsylvania: The Technical Report—1993 Part A. Harrisburg, PA: Pennsylvania Health Care Cost Containment Council, 1996.
19. Luft HS, Romano PS. Chance, continuity, and change in hospital mortality rates: coronary artery bypass graft patients in California hospitals, 1983 to 1989. JAMA. 1993;270:331–7.
20. Flood AB, Scott WR, Ewy W. Does practice make perfect? Part I: the relations between hospital volume and outcomes for selected diagnostic categories. Med Care. 1984;22:98–114.
21. Showstack JA, Rosenfeld KE, Garnick DW, Luft HS, Schaffarzick RW, Fowles J. Association of volume with outcome of coronary artery bypass graft surgery: scheduled vs non-scheduled operations. JAMA. 1987;257:785–9.
22. Day SC, Norcini J, Webster GD, Viner ED, Chirico AM. The effect of changes in medical knowledge on examination at the time of recertification. Proc Annu Conf Res Med Educ. 1988;27:139–44.
23. Jollis JG, Romano PS. Pennsylvania's focus on heart attack—grading the scorecard. N Engl J Med. 1998;338:983–7.