In recent years, managed care has become an increasingly prevalent approach to the delivery of health care. The overwhelming majority of physicians (92%) have at least one managed care contract.1 As of July 1, 2000, more than 80 million people were enrolled in managed care plans in the United States,2 and only 8% of Americans with employer-sponsored health insurance coverage continued to have traditional indemnity insurance.3
Previous studies have suggested that graduating medical students, residents, and young physicians do not feel adequately prepared to practice in the new, rapidly evolving health care environment.4–7 In addition, a recent study found that at medical colleges, many deans, faculty, and residents have negative attitudes about managed care, and that these attitudes may have a substantial effect on their students. This same study raised the issue of whether medical schools can, in fact, provide adequate instruction in managed care practice.8
These findings, coupled with the belief that managed care will continue to be the predominant form of health care delivery in the United States for the foreseeable future, have resulted in appeals for better preparation of medical students for work in this environment.4,6,9–11 A number of publications have defined specific managed care skills and competencies that should be included in the medical school curriculum.4,9,10,12–15 Recently, medical schools have begun to address perceived deficiencies in their curricula, adding new courses in population health, evidence-based medicine, cost-effectiveness, and the organization and financing of medical care.16 The percentage of schools using managed care settings for students' clinical rotations has increased; in 1996–1997, only 18% of medical schools reported that all students had experience in a managed care setting, compared with 31.5% for 1998–1999.17
The purpose of our study was to examine recent medical school graduates' perceptions of the adequacy of instruction in managed care and in 11 curricular content areas (CCAs) considered by many authorities to be a necessary part of a managed care curriculum. We also wanted to better understand whether medical students perceived these CCAs as relevant to managed care. A third objective was to determine the extent to which students' perceptions of the adequacy of instruction were related to the degrees of managed care penetration in the locations of their respective medical schools.
We obtained data from the Association of American Medical Colleges' (AAMC) 1999 Medical School Graduation Questionnaire (GQ). The GQ is administered to graduating fourth-year students at all U.S. medical schools near the end of the senior year. In 1999, the GQ was administered via the Internet, with a total of 12,734 students responding, for an overall response rate of 80%. We chose 1999 because this was the first year that an item on adequacy of instruction in managed care was specifically included in the GQ.
Students' ratings of adequacy of instruction. The 1999 GQ contained a set of questions intended to assess students' perceptions of the adequacy of their instruction. For specific CCAs, students were asked “Do you believe that the amount of time devoted to your instruction in each of the following areas was inadequate, appropriate, or excessive?” For our study, we focused on 12 CCAs from the GQ that were representative of the content areas often cited as necessary for practicing effectively in a managed care environment (see Table 1).4,10,12–15
Managed care penetration. We derived our estimates of managed care penetration from data in a 1998 national health maintenance organization (HMO) census conducted and reported by InterStudy.18 Each medical school was matched with the nearest metropolitan statistical area (MSA) for which information about managed care penetration was available. For two schools that could not be matched with an MSA within 30 miles, we averaged managed care penetration statistics for the MSAs in each school's respective state, and used that average as the value for that school.
We conducted student-level and school-level analyses. For both, respondents were deleted if they had missing values on any of the 12 CCAs that were the focus of our study (a total of 65 students). We did not include the three medical schools in Puerto Rico (a total of 231 students), because data for managed care penetration were not available from InterStudy18 for Puerto Rico.
Overall frequencies of responses were calculated for all CCAs at the student level. In addition, responses to the managed care CCA were cross-tabulated with responses to each of the 11 other CCAs, and the corresponding polychoric correlations were calculated and tested for statistical significance. The polychoric correlation is appropriate when the intent is to describe the relationship between two ordinal variables with more than two levels.19 Response options were coded 1 = inadequate, 2 = appropriate, and 3 = excessive.
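As an illustration of this coding and cross-tabulation, the sketch below pairs hypothetical ratings for two CCAs, builds the 3 × 3 contingency table, and computes an ordinary Pearson correlation on the 1–3 codes. This is only a simplified stand-in: the polychoric correlation proper is estimated by maximum likelihood under an assumed latent bivariate-normal distribution (as implemented in specialized software such as PRELIS19) and generally exceeds the Pearson value computed on the raw codes. All data here are invented.

```python
from collections import Counter

# Ordinal coding used in the study: 1 = inadequate, 2 = appropriate, 3 = excessive.
# Hypothetical paired ratings for two CCAs (e.g., managed care vs. cost control).
managed_care = [1, 1, 2, 1, 2, 3, 1, 2, 2, 1]
cost_control = [1, 2, 2, 1, 2, 3, 1, 1, 2, 1]

def cross_tabulate(x, y):
    """3x3 contingency table of paired ordinal responses (rows = x, cols = y)."""
    counts = Counter(zip(x, y))
    return [[counts[(i, j)] for j in (1, 2, 3)] for i in (1, 2, 3)]

def pearson(x, y):
    """Ordinary Pearson r on the 1-3 codes -- a crude stand-in for the
    polychoric correlation, which instead treats the ordinal categories
    as a discretized latent bivariate-normal variable."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

table = cross_tabulate(managed_care, cost_control)
r = pearson(managed_care, cost_control)  # high, as ratings largely agree
```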
For the school-level analyses, we omitted schools if estimated response rates were less than 25%, because such low response rates might be expected to yield biased or unstable estimates of proportions for these schools. Thus, we omitted five schools (with a total of 73 students) from the school-level analyses. The total number of schools in the school-level analyses was 116.
For each school, the proportion of students who indicated that instruction was inadequate was calculated for each item. Pearson product-moment correlations between the proportion of students who rated instruction in managed care as inadequate and the proportion who rated instruction as inadequate in each of the other 11 CCAs were then calculated and tested for statistical significance.
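The per-school aggregation described here can be sketched as follows; the school labels and response data are hypothetical:

```python
from collections import defaultdict

# Hypothetical (school, rating) pairs; 1 = inadequate, 2 = appropriate, 3 = excessive.
responses = [
    ("A", 1), ("A", 1), ("A", 2), ("A", 3),
    ("B", 2), ("B", 2), ("B", 1),
]

# Group student-level responses by school.
by_school = defaultdict(list)
for school, rating in responses:
    by_school[school].append(rating)

# For each school, the proportion of respondents rating instruction
# inadequate (code 1) -- the school-level quantity that is then correlated
# across CCAs and with managed care penetration.
prop_inadequate = {
    school: sum(1 for r in ratings if r == 1) / len(ratings)
    for school, ratings in by_school.items()
}
```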
To examine the relationship between managed care penetration and students' ratings of instruction in the identified CCAs, we also calculated Pearson product-moment correlations between managed care penetration and the proportion rating instruction as inadequate.
A p value of .00147 was set for statistical significance. We obtained this value by setting the desired overall significance level at .05 and then applying the Bonferroni correction for the number of statistics to be evaluated.
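The Bonferroni arithmetic can be reproduced directly. The count of 34 tests below is our inference from the analyses described above (11 polychoric correlations, 11 school-level correlations, and 12 penetration correlations); the text does not state the count explicitly.

```python
# Bonferroni correction: divide the desired familywise alpha by the number
# of tests. The count of 34 is inferred from the analyses described in the
# text (11 + 11 + 12 correlations), not stated there explicitly.
overall_alpha = 0.05
n_tests = 11 + 11 + 12
per_test_alpha = overall_alpha / n_tests
print(round(per_test_alpha, 5))  # 0.00147
```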
Of the 12,438 GQ respondents in the student-level analyses, 42.9% were women, 66.9% were white, 19.5% were Asian/Pacific Islander, 7.1% were black (non-Hispanic), 5.2% were Hispanic, and .8% were American Indian or Alaska Native. These percentages were similar to those published for all 1999 medical school graduates: 42.6% were women; 18.3% were Asian/Pacific Islander; 7.7% were black (non-Hispanic); 6.7% were Hispanic, and .8% were American Indian or Alaska Native.17
The five schools we omitted from the school-level analyses due to their low response rates had a mean managed care penetration of 20.2, compared with a mean of 34.7 for the schools that we included in the analyses. The omitted schools also tended to have smaller class sizes; average class size of the entering class in 1996 was 89 for the omitted schools, 132 for the 116 schools we included in our study.
Inadequacy of Instruction—Student-level Results
The percentages of respondents who indicated that instruction in managed care and related CCAs was inadequate are shown in Table 1. Sixty percent of 1999 graduates felt that they had received inadequate instruction in managed care. In addition, a majority of the graduates rated instruction in practice management, quality assurance, medical care cost control, and cost-effective medical practice as inadequate. More than one third indicated that instruction in risk management had been inadequate. Smaller percentages of graduates rated instruction inadequate in health promotion and disease prevention and the five CCAs related to clinical decision making and clinical care (care of ambulatory patients, primary care, management of disease, teamwork with other health professionals, and evidence-based medicine).
The polychoric correlations between the responses to the managed care and other CCAs are also presented in Table 1. All of the polychoric correlations were highly statistically significant, as would be expected given the very large number of respondents used in our analysis. The content areas with the largest correlations with managed care were medical care cost control (.73), practice management (.70), cost-effective medical practice (.69), and quality assurance (.66). These high correlations reflect the degree of overlap of students' ratings of instruction in these content areas; for instance, students who reported that instruction was inadequate in managed care also tended to report that instruction was inadequate in medical care cost control.
Inadequacy of Instruction—School-level Results
Results of our school-level analysis are presented in Table 2 and Table 3, and are consistent with the results of the student-level analysis. The means and standard deviations of the proportion of respondents in a school who indicated that instruction was inadequate for each CCA are reported for completeness, but, as expected, are almost identical to the percentages in the student-level analysis.
The correlations between the proportion of respondents in a school who rated instruction in managed care as inadequate and the proportion who rated instruction inadequate in the 11 other CCAs are shown in Table 2. These correlations differ in magnitude from the polychoric correlations computed on the student-level data, but the pattern is very similar. The CCAs with the highest correlations with the managed care CCA were medical care cost control (.85), practice management (.81), cost-effective medical practice (.77), and quality assurance in medicine (.72). All of these correlations were statistically significant (p < .001). Correlations between responses to the managed care CCA and those related to clinical decision making/clinical care and population-based medicine did not reach statistical significance.
Instruction and Managed Care Penetration
The correlations between managed care penetration and ratings of inadequacy of instruction are presented in Table 3. Only instruction in managed care was significantly correlated with managed care penetration. This correlation was negative, indicating that in schools in locations with higher managed care penetration, students were less likely to judge instruction in managed care as inadequate.
We analyzed more than 12,000 medical students' responses to the 1999 GQ and found that a high percentage of these students believed they had not received adequate instruction in managed care. Of the CCAs related to managed care in our study, students appeared to feel that instruction was weakest in the specific areas of practice management, medical care cost control, cost-effective medical practice, and quality assurance in medicine. More than half of all respondents reported that instruction in these areas was inadequate, and more than one third (39%) indicated that instruction was inadequate in all five areas considered under the heading practice management. New curricula relating to managed care education were instituted at many medical schools during the years this cohort was enrolled, and the use of managed care settings for clinical rotations has increased. Students in the class of 1999 were among the first to experience recent curricular changes and had greater access to managed care sites than earlier classes had. However, as such changes continue, we must monitor the perceptions of students in subsequent classes, as they are likely to experience an even greater emphasis on managed care topics throughout their medical school careers.
The correlations between ratings of managed care and the related CCAs are informative. At one level, these correlations help to identify the specific areas where increased curricular attention is warranted. At the same time, they provide insight into how graduating medical students define managed care. In general, our results suggest that students consider instruction related to cost control, cost-effectiveness, quality assurance, and practice management to be highly related to managed care. Ratings of instruction in population-based medicine and clinical decision making/clinical care are related to ratings of managed care instruction, but to a lesser degree. Students who felt that they had received sufficient preparation in clinical decision making and clinical care did not necessarily feel that they had received adequate instruction in managed care. Unfortunately, the design of our study did not allow us to determine whether students did not consider these CCAs to be related to managed care, or whether they considered them related but sufficiently covered in the curriculum. Thus, while some experts have stressed that the skills necessary to practice successfully in the managed care environment include a number of clinical skills,20 this is not necessarily how students define managed care.
The finding that managed care penetration was related to ratings of adequacy of instruction in managed care may be due to greater access to managed care settings for clinical experiences, and a concomitant increase in perceived knowledge of managed care, in locations with high penetration. Alternatively, schools in locations with greater managed care penetration may provide more information relevant to managed care as part of the curriculum, in response to the local environment. Thus, students in locations with high managed care penetration may provide more positive ratings as a result of an increased presence of managed care in the curriculum, as well as increased exposure to the managed care environment. Unfortunately, we did not have access to the detailed curricular information needed to evaluate the relative effects of curriculum and exposure.
The fact that the correlations between managed care penetration and the other CCAs evaluated were not significant suggests that, while managed care penetration does affect instruction or perception of instruction in managed care, it does not necessarily have a similar effect on related CCAs that many authorities feel should be included in a managed care curriculum. Thus, while students in locations with high managed care penetration are more likely to report that instruction in managed care, per se, is adequate, ratings of adequacy of instruction in related CCAs do not improve commensurately.
We should acknowledge a number of our study's limitations. First, our analyses were based on graduating students' reports of the adequacy of instruction, and not on direct measures of actual instruction. While there is evidence that students' perceptions are valid measures of educational effectiveness,21–28 such ratings are indirect measures.
A second potential limitation has to do with the relationship between quality and quantity. The wording on the GQ asked students to rate the amount of time devoted to instruction, but it is likely that students responded in terms of quality as well. In addition, the terms used in the GQ were not defined. While this is common practice in survey research, it is possible that different groups of students (e.g., students from different schools) interpreted the terms differently.
As in any survey research study, a response rate of less than 100% allows for potential bias in the results. However, the relatively high response rate, the large sample size, and the comparability of the respondents' demographics to those of the entire class of graduating medical students suggest that non-response bias is unlikely to have substantially affected our results.
Finally, we do not know whether surveys of residents and practicing physicians would yield comparable results. From an educational perspective, it is reasonable to survey graduating medical students, as they are in the best position to provide timely feedback on the curriculum. However, perceptions of “adequacy” may change as a function of experience, and these same students might view their undergraduate education differently after greater exposure to practice. Related to the issue of changes in perception over time is the issue of whether certain CCAs should be addressed at the undergraduate level. Medical students' ratings of adequacy of instruction are based on their perceptions at the time of graduation and, therefore, reflect a perceived, and not necessarily an actual, inadequacy. For instance, while many graduates felt that they had not received adequate instruction in cost control, this topic may be more appropriate for residency than for medical school.
In spite of these limitations, our study has important implications for medical education. Our results suggest a number of CCAs that medical schools may wish to target in future efforts to develop curricula in managed care education. Students' responses to the GQ showed they believed they had not received adequate instruction in managed care, or in practice management and the economic aspects of health care delivery. In addition, the responses suggest that the medical students defined managed care in terms of managing costs, rather than managing health care, or developing population-based approaches to the delivery of health care. Thus, in the managed care curricula, medical schools should clarify and emphasize the links between the economic aspects of health care delivery and the principles of primary care, disease prevention, interdisciplinary team care, evidence-based medicine, and population-based health.
1. Gonzales ML, Zhang P (eds). Physician Marketplace Statistics 1997/98. Chicago, IL: American Medical Association, 1998.
2. Competitive Edge HMO Directory. Version 11.1. St. Paul, MN: InterStudy Publications, 2001.
3. Dudley R, Luft H. Managed care in transition. N Engl J Med. 2001;344:1087–92.
4. Meyer GS, Potter A, Gary N. A national survey to define a new core curriculum to prepare physicians for managed care practice. Acad Med. 1997;72:669–76.
5. Campbell EG, Weissman JS, Ausiello J, Wyatt S, Blumenthal D. Understanding the relationship between market competition and students' ratings of the managed care content of their undergraduate medical education. Acad Med. 2001;76:51–9.
6. Cantor JC, Baker LC, Hughes RG. Preparedness for practice. Young physicians' views of their professional education. JAMA. 1993;270:1035–40.
7. Finocchio LJ, Bailiff PJ, Grant RW, O'Neil EH. Professional competencies in the changing health care system: physicians' views on the importance and adequacy of formal training in medical school. Acad Med. 1995;70:1023–8.
8. Simon SR, Pan RJ, Sullivan AM, et al. Views of managed care—a survey of students, residents, faculty, and deans at medical schools in the United States. N Engl J Med. 1999;340:928–36.
9. Cohen JJ. Educational mandates from managed care. Acad Med. 1995;70:381.
10. Lurie N. Preparing physicians for practice in managed care environments. Acad Med. 1996;71:1044–9.
11. Reid WM, Hostetler RM, Webb SC, Cimino PC. Time to put managed care into medical and public health education. Acad Med. 1995;70:662–4.
12. Jacobs MO, Mott PD. Physician characteristics and training emphasis considered desirable by leaders of HMOs. J Med Educ. 1987;62:725–31.
13. Krause KC. Educating for patient care in the 21st century. Fam Med. 1995;27:354–6.
14. Veloski J, Barzansky B, Nash DB, Bastacky S, Stevens DP. Medical student education in managed care settings: beyond HMOs. JAMA. 1996;276:667–71.
15. Yedidia MJ, Gillespie CC, Moore GT. Specific clinical competencies for managing care: views of residency directors and managed care medical directors. JAMA. 2000;284:1093–8.
16. Whitcomb ME, Anderson MB. Transformation of medical students' education: work in progress and continuing challenges. Acad Med. 1999;74:1076–9.
17. Barzansky B, Jonas H, Etzel S. Educational programs in U.S. medical schools 1998–1999. JAMA. 1999;282:840–6.
18. The Competitive Edge Part III: Regional Market Analysis. Vol. 8.1. St. Paul, MN: InterStudy Publications, 1998:170.
19. Joreskog K, Sorbom D. Prelis2: User's Reference Guide. Chicago, IL: Scientific Software, 1996.
20. Blumenthal D, Thier SO. Managed care and medical education: the new fundamentals. JAMA. 1996;276:725–7.
21. McKeachie WJ. Student ratings: the validity of use. Am Psychol. 1997;52:1218–25.
22. Scriven M. Student Ratings Offer Useful Input to Teacher Evaluation. Washington, DC: Office of Educational Research and Improvement, 1995.
23. Ripley RM. Student evaluations of professors: are they of value? J Med Educ. 1975;50:951–8.
24. Marsh HW. Validity of students' evaluations of college teaching: a multitrait-multimethod analysis. J Educ Psychol. 1982;74:264–79.
25. Prosser M, Trigwell K. Student evaluation of teaching and courses: student study strategies as a criterion of validity. Higher Educ. 1990;20:135–42.
26. Harrison PD, Ryan JM, Moore PS. College students' self-insights and common implicit theories in ratings of teaching effectiveness. J Educ Psychol. 1996;88:775–82.
27. Cohen PA. Synthesizing research results on teacher evaluation using meta-analytic procedures. Paper presented at the Annual Meeting of the American Psychological Association. Washington, DC, Aug. 23–27, 1982. (ERIC Document Reproduction Service, No ED223646).
28. Hawkins RE, Sumption KF, Gaglione MM, Holmboe ES. The in-training examination in internal medicine: resident perceptions and lack of correlation between resident scores and faculty predictions of resident performance. Am J Med. 1999;106:206–9.
© 2002 Association of American Medical Colleges