Measuring Residents' Care Management Knowledge

How Are We Doing?

WILLIAMS, BRENT C.; KACHUR, ELIZABETH; FROHNA, JOHN G.; HALPERN, RALPH; JENSEN, JAMES; YEDIDIA, MICHAEL

Section Editor(s): Williams, Reed G. PhD

PAPERS: EVALUATION METHODS: WHAT DO WE KNOW?

Correspondence: Brent C. Williams, MD, MPH, 300 N. Ingalls Building, Room 7E16, The University of Michigan, Ann Arbor, MI 48109-0429.

The challenge of training resident physicians to practice in the health care system of the 21st century has been recognized by a wide range of organizations, including professional societies, government and private agencies, and accrediting bodies.1,2,3,4,5 The profound curricular changes necessary to train physicians have been termed the managed care curriculum and encompass a wide array of competencies, including health care systems organization and finance, evidence-based medicine, and patient-physician communication. A term that better reflects the range of competencies necessary for practicing in the “new” health care environment is the care management curriculum. The content of the care management curriculum was elaborated during the 1990s by at least nine major groups or organizations.6

Also during the 1990s, numerous medical schools and residency programs implemented educational programs in care management. Many new and innovative educational programs were made possible through two efforts—Partnerships for Quality Education (PQE), 〈http://www.pqe.org〉, which focuses on residency training, and the Undergraduate Medical Education for the 21st Century (UME-21) project,5 which focuses on medical schools. Curricular reform to incorporate this content is now expected of all schools and programs, as the Association of American Medical Colleges and the Accreditation Council for Graduate Medical Education (ACGME) have both defined expanded sets of competencies as goals for physician training.

Despite substantial progress in developing and implementing care management curricula in undergraduate and graduate medical education, reliable and valid methods to assess resident performance in care management have not been published or disseminated. Yet learner assessment is an essential component of any newly developed educational program,7 serving to motivate learners, to provide them with feedback, and to furnish a basis for evaluating the educational program.7,8

The purpose of our study was to determine the content and quality of written care management assessment instruments being used in residency programs in the United States.

Method

Instrument Selection

Existing instruments to measure residents' care management knowledge were identified through several methods. First, a literature search identified articles reporting results of residents' care management knowledge assessment, and authors of relevant published articles were contacted. Second, e-mail requests were sent to participating PQE and UME-21 sites requesting copies of instruments for potential inclusion in the study. Finally, leaders of PQE and UME-21 and medical educators and program directors throughout the country were contacted to identify relevant assessment instruments of which they were aware.

Written instruments were solicited that (a) assessed knowledge in non-clinical domains related to the practice of medicine, (b) included at least some closed-ended (multiple-choice and matching) questions, and (c) were used or intended for use among resident physicians. Instruments focusing solely on measuring residents' attitudes towards managed care or used exclusively among medical students were not solicited.

Content Classification

The content of each item from the instruments was classified according to care management domains. The classification system was taken from a recent synthesis of nine managed care curricula,6 and included ten domains and 59 subdomains. The ten domains were (1) Healthcare Systems Overview, (2) Population-based Care, (3) Quality Measurement and Improvement, (4) Medical Management, (5) Preventive Care, (6) Physician-Patient Communication, (7) Ethics, (8) Teamwork and Collaboration, (9) Information Management and Technology, and (10) Practice Management. For example, illustrative subdomains under Healthcare Systems Overview include health system organization, practice structures, and delivery systems; illustrative subdomains under Quality Measurement and Improvement include methods for measuring performance and assessing quality of care.
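
To make the structure of the classification system concrete, the sketch below represents the ten domains as a simple lookup table in Python. Only the illustrative subdomains mentioned above are filled in; the remaining entries are left empty because the full set of 59 subdomains, drawn from the synthesis by Halpern et al.,6 is not reproduced here.

```python
# Illustrative sketch of the ten-domain classification system.
# Subdomain lists are incomplete: only the examples cited in the text
# are shown, not the full 59 subdomains from the source synthesis.
CARE_MANAGEMENT_DOMAINS = {
    "Healthcare Systems Overview": [
        "Health system organization",
        "Practice structures",
        "Delivery systems",
    ],
    "Population-based Care": [],
    "Quality Measurement and Improvement": [
        "Methods for measuring performance",
        "Assessing quality of care",
    ],
    "Medical Management": ["Evidence-based Medicine"],
    "Preventive Care": [],
    "Physician-Patient Communication": [],
    "Ethics": [],
    "Teamwork and Collaboration": [],
    "Information Management and Technology": [],
    "Practice Management": [],
}
```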

Items were classified into domains and subdomains in several steps. First, two authors serving as primary content reviewers (BW and JF) independently classified a sample of 35 items chosen to represent the range of domains, question types, and instruments. Second, these two investigators jointly refined the classification system to capture item content and defined a classification protocol. The two primary content reviewers then independently classified all remaining items. Finally, two additional content reviewers (EK and MY) each independently classified a separate 20% random sample of items, stratified by instrument, according to the classification protocol. Agreement was measured using two-way kappas comparing the ten-class domain and 59-class subdomain classifications between each pair of reviewers.
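
For readers unfamiliar with the agreement statistic, a two-way (Cohen's) kappa compares two reviewers' classifications of the same items while correcting for chance agreement. The minimal sketch below uses hypothetical domain labels, not the study data, and assumes the scikit-learn library is available.

```python
# Minimal sketch of a two-way kappa calculation on hypothetical
# domain classifications from two reviewers (not the study data).
from sklearn.metrics import cohen_kappa_score

reviewer_1 = [
    "Healthcare Systems Overview", "Medical Management",
    "Quality Measurement and Improvement", "Ethics", "Medical Management",
]
reviewer_2 = [
    "Healthcare Systems Overview", "Medical Management",
    "Quality Measurement and Improvement", "Teamwork and Collaboration",
    "Medical Management",
]

# Cohen's kappa adjusts observed agreement for the agreement expected
# by chance; 1.0 indicates perfect agreement, 0 chance-level agreement.
kappa = cohen_kappa_score(reviewer_1, reviewer_2)
print(f"Two-way kappa: {kappa:.2f}")
```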

Quality Review

To measure the quality of each item, 25 nationally recognized experts were recruited, including education evaluators, clinician educators, program directors, managed care executives, resident physicians, and a representative of the ACGME (see Appendix A). Participants attended a one-and-one-half-day workshop to review the content and quality of the managed care assessment items.

Prior to the workshop, each participant rated the quality of a subset of items according to three pre-specified criteria: (a) relevance to clinical practice, both now and in five years; (b) cognitive level (recall versus application, with higher quality associated with application items); and (c) format, following guidelines and examples provided by the National Board of Medical Examiners.9 Each item was reviewed by two participants (termed quality reviewers), who rated its overall quality based on these criteria. Ratings for each criterion, as well as for overall quality, were recorded on a five-point scale. Quality reviewers were limited to participants with ongoing experience in clinical education or education evaluation; participants holding primarily administrative positions were excluded.

At the workshop, participants met in small groups of six to eight members to further review each item and reach consensus on its quality. Each group included a mix of clinician-educators, education evaluators, and managed care medical directors. The content classification performed before the workshop showed that the total number of items was large and that the majority came from a single domain, Healthcare Systems Overview. To reduce the number of items to be reviewed, items from the Healthcare Systems Overview domain were not subject to small-group review; high-quality items from this domain were instead arbitrarily defined as those for which both quality reviewers rated the overall quality as high (4 or 5 on the five-point scale). Each item from the remaining domains was briefly presented in the small-group setting by its two quality reviewers, followed by an open discussion using the same review criteria. Items were then classified by consensus as high- (few or no changes recommended), medium- (possibly usable with revisions), or low- (use not recommended) quality.
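
The decision rules described above can be summarized in a short sketch. The function below assumes each item is represented as a dictionary with hypothetical field names (the item's domain, its two pre-workshop overall ratings, and, where applicable, its small-group consensus label); it illustrates the study's definitions and is not the instrument used.

```python
# Sketch of the quality-designation rules described above.
# Field names (domain, rating_1, rating_2, consensus) are hypothetical.

def quality_designation(item: dict) -> str:
    """Apply the study's quality rules to a single item."""
    if item["domain"] == "Healthcare Systems Overview":
        # Not reviewed in small groups: counted as high quality only if
        # both pre-workshop reviewers rated overall quality 4 or 5.
        both_high = item["rating_1"] >= 4 and item["rating_2"] >= 4
        return "high" if both_high else "not high"
    # All other domains: small-group consensus of high (few or no
    # changes), medium (usable with revisions), or low (not recommended).
    return item["consensus"]

example = {"domain": "Medical Management",
           "rating_1": 4, "rating_2": 5, "consensus": "high"}
print(quality_designation(example))  # -> high
```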

Results

Instrument Selection

All authors who were contacted consented to including their instruments in the review. A total of 12 instruments containing 318 items were identified from ten organizations (see Appendix B). Eight items were inadvertently dropped, for a final pool of 310 items. The median number (range) of items from each instrument was 23 (9, 52). After the review and analyses, two instruments with a total of 45 items were revealed to have been used among medical students only, but these were retained in the analyses. No additional instruments for residents were identified in conversations and presentations among medical educators.

Content Classification

Preliminary review of 35 items by the two content reviewers showed that all items could be classified within the ten-domain classification system. Two pairs of subdomains whose content could not be distinguished were collapsed. Agreement between the two primary content reviewers in classifying items into domains was substantial (kappa = .79), and agreement for subdomains was moderate (kappa = .55). Most disagreements between the two primary content reviewers occurred among the subdomains of the Healthcare Systems Overview domain. Agreement between the secondary reviewers and each of the primary reviewers in classifying items into domains was substantial (kappas = .65–.77), and agreement for subdomains was moderate (kappas = .41–.50).

Item Content

The majority of items (180 items; 58%) were from the Healthcare Systems Overview domain (see Table 1). A total of 101 of the remaining 130 items (33% of the total) related to two domains: Medical Management and Quality Measurement and Improvement. Of the 69 items in the Medical Management domain, 39 (56%) related to a single subdomain, Evidence-based Medicine.

TABLE 1

Item Quality

Of the 180 Healthcare Systems Overview items, only 17 (9%) were rated as high quality by the two pre-workshop reviewers (see Table 1). After small-group review of the remaining 130 items, 17 (13%) were rated as high-, 39 (30%) as medium-, and 74 (57%) as low-quality (see Table 1). Nearly all (14 of 17) of the high-quality items in domains other than Healthcare Systems Overview related to a single subdomain of the Medical Management domain, Evidence-based Medicine. The remaining three high-quality items were from the Quality Measurement and Improvement domain.
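
As a simple check, the percentages reported above follow directly from the item counts; the brief tally below reproduces them, using the counts taken from the text.

```python
# Quality designations for the 130 items outside the Healthcare
# Systems Overview domain, using the counts reported in the text.
consensus_counts = {"high": 17, "medium": 39, "low": 74}
total = sum(consensus_counts.values())  # 130

for label, n in consensus_counts.items():
    print(f"{label}: {n}/{total} = {n / total:.0%}")
```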

Although the reasons for quality ratings were not formally recorded for every item, the entire group discussed the items to identify general patterns of content and quality. Common reasons for medium- or low-quality ratings were poor format and time-sensitive content. Poorly formatted questions lacked a clear stem or included a poorly constructed set of response choices. Time-sensitive items, found mainly in the Healthcare Systems Overview domain, covered structural features of the health care system (e.g., organizational models or payment mechanisms) that are likely to change or be replaced by other models over time.

Discussion

Assessment methods and instruments reflect curricular priorities,10 motivate learners,8 and provide data to evaluate and justify educational programs.7 Among alternative assessment methods, written assessment plays a vital role in measuring physicians' competence. Written tests reliably measure knowledge and can be uniformly administered to large numbers of learners at relatively low cost. Written test performance correlates highly with subsequent written test performance,11 but is more weakly correlated with measures of clinical performance.12

Among residency programs that are using written instruments to measure residents' knowledge of the care management curriculum, current assessment focuses on a very narrow range of competency areas, and few items are of high quality. The dominant areas currently assessed using written instruments are Healthcare Systems Overview, Evidence-based Medicine, and Quality Measurement and Improvement. Although our study was not designed to measure the proportion of residency programs using written assessment instruments for the care management curriculum, with over 40,000 residents training in family medicine, internal medicine, and pediatrics programs in the United States, the percentage is probably low.

This study provides evidence that during the 1990s residency curricular reform to prepare learners to practice in the evolving health care system emphasized structural features, organizations, and payment mechanisms in health care. Examples of this type of content include distinctions among staff-model and group health maintenance organizations (HMOs), independent practice associations (IPAs), and preferred provider organizations (PPOs), and familiarity with key organizations in health care, such as the Centers for Medicare and Medicaid Services and the National Committee for Quality Assurance (NCQA). While understanding these concepts is important to clinical practice, they are most valuable as background to other, more applied content areas,6 and are quite likely to change over time. Our findings also reflect the recent emphasis in residency training on evidence-based medicine, which focuses on clinical epidemiology, critical appraisal skills, and scientific study design.

Less attention has been paid by educators and academic leaders, however, to assessing residents' mastery of areas such as population-based care, case management, teamwork, and clinical information management. These content areas relate directly to daily practice in the complex and interdependent health care system of the 21st century but are probably under-assessed. The rapid growth of medical knowledge and technology, specialization, the complexity of health care delivery systems, and the need for rapid mobilization and communication of information across sites and providers necessitate mastery of a broader range of content areas than are evidently captured in existing “managed care” curriculum assessment methods.

The major limitation of our study is that it may have omitted some written instruments for residency care management curricula. However, the uniformity of findings among the instruments we surveyed makes it unlikely that this “snapshot” of assessment instruments is substantially misleading.

The development of high-quality written instruments to measure a range of domains in the care management curriculum would be useful to motivate learners and provide them with feedback about their knowledge, to evaluate the effectiveness of care management curricula among groups of learners, and to guide curricular reform in residency programs.

References

1. Association of American Medical Colleges. Contemporary issues in medicine—medical informatics and population health: Report II of the Medical School Objectives Project. Acad Med. 1999;74:130–41.
2. Council on Graduate Medical Education Resource Paper: Preparing Learners for Practice in a Managed Care Environment. Rockville, MD: Health Resources and Services Administration, September 1997.
3. O'Neil EH and the Pew Health Professions Commission. Recreating Health Professional Practice for a New Century. The Fourth Report of the Pew Health Professions Commission. San Francisco, CA: Pew Health Professions Commission, 1998.
4. Accreditation Council for Graduate Medical Education (ACGME) General Competencies. Version 1.3. September 28, 1999. Chicago, IL: ACGME, 2000, 〈http://www.acgme.org〉.
5. Rabinowitz HK, Babbott D, Bastacky S, et al. Innovative approaches to educating medical students for practice in a changing health care environment: the National UME-21 Project. Acad Med. 2001;76:587–97.
6. Halpern R, Lee MY, Boulter PR, Phillips RR. A synthesis of nine major reports on physicians' competencies for the emerging practice environment. Acad Med. 2001;76:606–15.
7. Kern DE, Thomas PA, Howard DM, Bass EB. Step 6: Evaluation and feedback. Chapter 7 in: Curriculum Development for Medical Education: A Six-step Approach. Baltimore, MD: Johns Hopkins University Press, 1998:70–5.
8. Rowntree D. The purposes of assessment. Chapter 2 in: Assessing Students: How Shall We Know Them? East Brunswick, NJ: Nichols, 1995:22–7.
9. Case SM, Swanson DB. Constructing Written Test Questions for the Basic and Clinical Sciences. 3rd ed. Philadelphia, PA: National Board of Medical Examiners, 2000. [Available at 〈http://www.nbme.org〉].
10. Linn RL, Gronlund NE. Instructional goals and objectives: foundation for assessment. Chapter 3 in: Measurement and Assessment in Teaching. Upper Saddle River, NJ: Prentice Hall, 2000:33.
11. Case SM, Swanson DB. Validity of NBME Part I and Part II scores for selection of residents in orthopaedic surgery, dermatology, and preventive medicine. Chapter 10 in: Gonnella JS, Hojat M, Erdmann JB, Veloski JJ (eds). Assessment Measures in Medical School, Residency, and Practice: The Connections. New York, NY: Springer, 1993.
12. Veloski JJ, Hojat M, Gonnella JS. A validity study of Part III of the National Board Examination. Eval Health Prof. 1990;13:227–40.

APPENDIX A

APPENDIX B

Institutions or Organizations Contributing Written Assessment Instruments for Residents' Care Management Knowledge

Cornell University

Maimonides Medical Center

Partnerships for Quality Education

St. Vincent's Medical Center, New York

Thomas Jefferson University

Tufts Managed Care Institute

University of California—Davis

University of Michigan

University of Nebraska

University of South Carolina

© 2002 by the Association of American Medical Colleges