Academic Medicine: August 2013 - Volume 88 - Issue 8
doi: 10.1097/ACM.0b013e31829a3967
Research Reports

Playing With Curricular Milestones in the Educational Sandbox: Q-sort Results From an Internal Medicine Educational Collaborative

Meade, Lauren B. MD; Caverzagie, Kelly J. MD; Swing, Susan R. PhD; Jones, Ron R. MD; O’Malley, Cheryl W. MD; Yamazaki, Kenji PhD; and Zaas, Aimee K. MD, MHS


Author Information

Dr. Meade is assistant professor, Department of Medicine, Tufts University School of Medicine, associate program director for internal medicine, Baystate Health, and chair, Educational Research Outcomes Collaborative, Internal Medicine, Baystate Health, Springfield, Massachusetts.

Dr. Caverzagie is associate vice chair for quality and physician competence, Department of Internal Medicine, University of Nebraska Medical Center, Omaha, Nebraska.

Dr. Swing is vice president, Outcome Assessment, Accreditation Council for Graduate Medical Education, Chicago, Illinois.

Dr. Jones is associate director, Internal Medicine Residency, Summa Health System, and associate professor of internal medicine, Northeast Ohio Medical University, Rootstown, Ohio.

Dr. O’Malley is director, Internal Medicine Residency Program, Banner Good Samaritan Medical Center, and assistant professor of medicine, University of Arizona College of Medicine, Phoenix, Arizona.

Dr. Yamazaki is outcome assessment research associate, Accreditation Council for Graduate Medical Education, Chicago, Illinois.

Dr. Zaas is associate professor, Division of Infectious Diseases and International Health, Department of Medicine, and program director, Internal Medicine Residency, Duke University School of Medicine, Durham, North Carolina.

Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A137.

Correspondence should be addressed to Dr. Meade, 759 Chestnut St., Springfield, MA 01199; telephone: (413) 794-8121; e-mail: Lauren.meade@bhs.org.


Abstract

Purpose: In competency-based medical education, the focus of assessment is on learner demonstration of predefined outcomes or competencies. One strategy being used in internal medicine (IM) is to apply curricular milestones to assessment and reporting milestones to the determination of competence. The authors report a practical method for identifying sets of curricular milestones for assessment of a landmark, that is, a point where a resident can be entrusted with increased responsibility.

Method: Thirteen IM residency programs joined in an educational collaborative to apply curricular milestones to training. The authors developed a game using Q-sort methodology to identify high-priority milestones for the landmark “Ready for indirect supervision in essential ambulatory care” (EsAMB). During May to December 2010, the programs’ ambulatory faculty participated in the Q-sort game to prioritize 22 milestones for EsAMB. The authors analyzed the data to identify the top eight milestones.

Results: In total, 149 faculty units (1–4 faculty each) participated. There was strong agreement on the top eight milestones; six had more than 92% agreement across programs, and five had 75% agreement across all faculty units. During the Q-sort game, faculty engaged in dynamic discussion about milestones and expressed interest in applying the game to other milestones and educational settings.

Conclusions: The Q-sort game enabled diverse programs to prioritize curricular milestones with interprogram and interparticipant consistency. A Q-sort exercise is an engaging and playful way to address milestones in medical education and may provide a practical first step toward using milestones in the real-world educational setting.

Competency-based medical education (CBME) is an outcomes-based educational approach that uses an organizing framework of competencies. In the United States, the Accreditation Council for Graduate Medical Education (ACGME) general competencies, first introduced in 1999, provide this framework. However, these six competencies’ broad, generic nature has made them difficult to translate into the measurable knowledge, skills, and attitudes necessary for outcomes-based curriculum development and assessment.

To move beyond this barrier, an important next step is the articulation of developmental milestones1 for each specialty. Milestones serve as valuable resources for assessment and feedback in CBME, and internal medicine (IM) and pediatrics have led the way in developing them. In 2009, the IM milestones were published2 and received positive feedback from the IM education community. IM program directors have dedicated significant efforts to implementing these milestones (now referred to as curricular milestones) into curricula, assessment, and evaluations3–5 in accordance with the principles of CBME. The results of milestones implementation will be crucial to help meet the expectations of the ACGME’s Next Accreditation System (NAS).6

In CBME, faculty determine whether trainees have demonstrated competence and are ready to progress to the next career phase or into unsupervised practice. This requires assessment of the expected competencies in a clinical context.7 Milestone achievement progresses until a resident reaches a “landmark,” that is, a critical event that marks a turning point in resident education such that she or he can be entrusted with increased responsibility.8

In this article, we describe a practical method for identifying sets of milestones pertinent to resident attainment of landmarks* for use in developing applications for training. Landmarks can be employed to evaluate resident competence in the context of clinical training by focusing assessment on common opportunities rooted in residency programs. Examples include the chief resident choosing a resident to be the leader of a Code Blue team or a program director deciding a resident is capable of acting as an inpatient ward team supervisor. In both examples, the resident is being entrusted with greater responsibility and autonomy, which implies a level of demonstrated competence.9 Here, we report the results of a Q-sort exercise in which faculty at 13 IM residency programs identified high-priority milestones for the landmark we called “Ready for indirect supervision in essential ambulatory care” (EsAMB). We defined EsAMB as that moment in training when an ambulatory preceptor is no longer compelled to enter the exam room to confirm findings presented by a resident for some simple cases.


Method

Identification of EsAMB milestones

In fall 2009, we established the Educational Research Outcomes Collaborative (E-ROC) as a 13-program subgroup within the Educational Innovations Project (EIP) for IM, with a goal of applying milestones to residency training. The 13 participating IM residency programs reflect the diversity of programs in the EIP in terms of size, region, and academic affiliation (Table 1). The E-ROC working group consisted of one principal investigator (PI) from each program and convened via monthly conference calls and at biannual meetings of the Association of Program Directors in Internal Medicine from October 2009 to April 2011.


The E-ROC design team, consisting of the authors, began the milestones project by asking the open-ended question “How can we apply the draft list of curricular milestones to training?” We chose EsAMB as the landmark for study on the basis of its clearly defined time-based corollary: the primary care exception rule,10 which permits attending physicians to supervise postgraduate year 1 residents without direct attending–patient contact after the first six months of training.

From January 2010 to April 2010, we held weekly conference calls and three half-day video conferences. Through an iterative process, we analyzed and selected 51 of the draft curricular milestones2 relating to EsAMB. We further reduced this set to the 30 that could be assessed by direct observation and revised our selections with input we received from E-ROC during the larger group’s monthly calls.

As part of a larger study, we addressed the need for prioritization of the milestones by designing a faculty exercise that used the Q-sort method for rigor,11 as described below. To make the Q-sort activity feasible in terms of time and to fit the milestones into a quasi-normal distribution (a requirement of Q-sort),12,13 we narrowed the 30 milestones down to 22. In the process of narrowing, we recognized similar content among closely related milestones and gave preference to those that were most likely to be directly observed in the ambulatory care setting. The 22 milestones we selected were then approved by the 13 participating E-ROC programs.

Q-sort applied in a game format

Q-sort methodology enables researchers to study subjectivity using a combination of qualitative and quantitative methods: It prioritizes the opinions of an observer11 and provides an organized means of identifying priorities and areas of divergent opinions among a group.12 In a Q-sort, the observer rank-orders a set of statements from most important to least important, using an inverted quasi-normal distribution.11,13 The sample of statements (the Q sample) represents either an existing framework or one derived from interview statements obtained during a qualitative research study.11,13 Thus, the statements are the unit of analysis; the number of observers is less important than is their theoretical relevance to the topic.12 Once sorted, each statement’s rank category is analyzed by mean and standard deviation. The Q-sort method uses a mathematical substructure to reveal priorities of subjective viewpoints of the observers. The results of the Q-sort can be used to describe the sample of viewpoints, in this case in terms of prioritization of milestones, rather than the sample of observers themselves.13

For this study, we developed a “Q-sort game” that engaged faculty at the participating programs in the milestone decision process through active learning. We printed the 22 milestone choices onto “playing cards” and the normal distribution onto a “game board” consisting of seven columns arranged in an inverted normal distribution. The columns, moving from left to right, are numbered to denote most to least important (see Figure 1). Column 7 (most important) and column 1 (least important) each include only one choice position. As the columns move toward the center (neutral importance), there are more choice positions such that column 6 and column 2 each have three positions of choice, and column 5 and column 3 each have four positions of choice.
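Because the 22 cards must fill the board exactly, the slot counts given above imply that the neutral column 4 holds the remaining six cards. The following is a minimal sketch in Python of the board's distribution (illustrative only; the study used printed materials, not code):

```python
# Slots per column on the EsAMB game board (column 7 = most important,
# column 1 = least important). Columns 7/1, 6/2, and 5/3 are specified in
# the text; the 6 slots in neutral column 4 are inferred so that the 22
# milestone cards fill the board exactly.
SLOTS_PER_COLUMN = {7: 1, 6: 3, 5: 4, 4: 6, 3: 4, 2: 3, 1: 1}

assert sum(SLOTS_PER_COLUMN.values()) == 22              # one slot per card
assert sum(SLOTS_PER_COLUMN[c] for c in (7, 6, 5)) == 8  # the "top eight"
```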


In our Q-sort game, participants start with 22 playing cards representing the milestones and place them in the appropriate column for priority. For example, the participant in Figure 1 prioritized the milestone coded as PC/F1 as the most important and thus placed it in column 7; she placed milestones PC/A1, PC/C1, and PC/B1 in column 6, denoting the second most important position (all items in a column are considered to be of equal importance). She placed the four milestones she prioritized as least important in columns 1 and 2. Participants move the milestones around on the board until they are satisfied with the rank order. When the game is complete, they tape the milestones to their boards and give the final prioritization schemes to the facilitator. The top eight milestones are those placed in columns 7 to 5, which denote milestones prioritized higher than neutral.
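To make the top-eight rule concrete, here is a hypothetical scoring helper (not from the study's materials) that validates a completed board against the required distribution and returns the milestones placed above neutral. For the participant in Figure 1, `board[7]` would be `["PC/F1"]` and `board[6]` would be `["PC/A1", "PC/C1", "PC/B1"]`.

```python
from typing import Dict, List

# Required slot counts per column (see the board sketch above).
EXPECTED_SLOTS = {7: 1, 6: 3, 5: 4, 4: 6, 3: 4, 2: 3, 1: 1}

def top_eight(board: Dict[int, List[str]]) -> List[str]:
    """Validate a completed game board and return the eight milestones
    placed above neutral (columns 7, 6, and 5)."""
    for column, slots in EXPECTED_SLOTS.items():
        if len(board.get(column, [])) != slots:
            raise ValueError(f"column {column} must hold exactly {slots} cards")
    return board[7] + board[6] + board[5]
```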

Q-sort game participants and procedure

IM faculty at the 13 E-ROC programs participated in the EsAMB Q-sort exercise over the period May 2010 to December 2010. Each program’s PI invited all ambulatory faculty to a 90-minute workshop at which she or he provided directions for the exercise using the faculty development slide set and instructions we created. We distributed the game electronically to the site PIs, who had the game materials printed locally at minimal cost to their institutions. At one institution, the game was played electronically, but faculty received the same instructions. Other variations included the setting and timing of the Q-sort activity. Some programs performed the exercise during a broader, time-protected program educational day, and others did it during short, directed sessions. Some programs gathered large groups of faculty, whereas others involved only core faculty.

Faculty, working in faculty units of one to four individuals, were asked to rank the 22 milestones on the basis of their importance in deciding whether a trainee had reached the EsAMB landmark. Site PIs collected the completed game boards from participants and reported the numerical values on each game board. All programs’ Q-sort results were returned to us by e-mail for collation and analysis.

We analyzed the Q-sort results by calculating the mean rank order of milestones, both by faculty units and by program. We also counted how many faculty units and programs prioritized each milestone in their top eight choices (i.e., those prioritized higher than neutral on the game board). For each milestone, we calculated a program rank order mean by averaging across the program’s faculty unit rank orders; we also calculated a mean for each milestone by averaging across all faculty units. We identified the eight milestones with the highest faculty means as the top priorities. We calculated a percent agreement for each of the high-priority milestones based on the number of programs that had individually ranked them in their top eight; we repeated this analysis using faculty units as the unit of analysis.
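As a sketch of this collation step (the data layout, function name, and exact tie-handling here are assumptions, not the study's actual analysis scripts), the following Python computes the milestone means and the two agreement figures described above:

```python
from collections import defaultdict
from statistics import mean

def summarize(results):
    """Collate Q-sort boards. `results[(program, unit)]` maps each
    milestone code to the column number it was placed in
    (7 = most important, 1 = least important)."""
    unit_ranks = defaultdict(list)               # milestone -> unit columns
    program_ranks = defaultdict(lambda: defaultdict(list))
    for (program, _unit), board in results.items():
        for milestone, column in board.items():
            unit_ranks[milestone].append(column)
            program_ranks[program][milestone].append(column)

    # Mean rank across all faculty units; the eight highest means are the
    # high-priority milestones.
    overall = {m: mean(cols) for m, cols in unit_ranks.items()}
    top_eight = sorted(overall, key=overall.get, reverse=True)[:8]

    # Agreement across faculty units: a unit "agrees" if it placed the
    # milestone above neutral (column 5, 6, or 7) on its own board.
    unit_agreement = {
        m: 100 * sum(col >= 5 for col in unit_ranks[m]) / len(unit_ranks[m])
        for m in top_eight
    }

    # Agreement across programs: a program "agrees" if the milestone falls
    # in the top eight of that program's mean rank order.
    program_agreement = {}
    for m in top_eight:
        hits = 0
        for per_milestone in program_ranks.values():
            means = {k: mean(v) for k, v in per_milestone.items()}
            hits += m in sorted(means, key=means.get, reverse=True)[:8]
        program_agreement[m] = 100 * hits / len(program_ranks)

    return overall, top_eight, unit_agreement, program_agreement
```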

Each participating institution obtained institutional review board approval or an educational exempt status for this faculty exercise (see Table 1).


Results

The Q-sort game was distributed to a total of 149 faculty units (approximately 250 faculty) at the 13 participating IM residency programs. Six faculty units from different programs reported data incorrectly, and their rankings could not be analyzed. Thus, our analysis was based on data from 143 faculty units at 13 programs (Table 1). Table 2 presents the 22 milestones in order of priority for EsAMB. The top 8 milestones are clustered in the Patient Care (PC) and Professionalism (P) competencies, and 2 (25%) of them focus on recognition of limits.

Among their top 8 milestones, 13 (100%) programs chose PC/A1, PC/F1, and P/F5, whereas 12 (92%) programs chose PC/B1, PC/C1, and PC/C2 (Table 3). Thus, 6 (75%) of the top 8 milestones had at least 92% agreement. For the seventh- and eighth-priority milestones, there was a drop in the level of agreement of both faculty units (80 [56%] and 59 [41%], respectively) and programs (9 [69%] and 4 [31%], respectively). This drop is explained by how few faculty and programs prioritized other milestones in their top 8: 11 of the 14 lower-priority milestones were prioritized in the top 8 by 2 to 31 (1%–22%) of the faculty units and 0 to 1 (0%–8%) of the programs. Interestingly, one program chose 3 unique milestones, clustered in the Interpersonal and Communication Skills competency, in its top 8 (IPC/A2, IPC/A3, IPC/A4). Across faculty units, there was greater than 75% agreement for 5 (63%) of the top 8 milestones: PC/F1, PC/A1, P/F5, PC/C1, and PC/C2.


Although not formally studied, the qualitative experience of the participating faculty was reported by the site PIs. They indicated that the Q-sort game was educationally productive and engaged faculty in lively, active discussions about the order of importance as they prioritized the milestones. Faculty completed the Q-sort exercise expressing more confidence in the behaviors they would like to directly observe in order to advance their residents in EsAMB. They also expressed a better understanding of steps along the way to competence as a result of having to differentiate between skills in EsAMB and in more complex ambulatory settings. The PIs felt that the process of sharing perspectives and ideas enhanced the educational experience and thus the exercise was more valuable as faculty development when it was conducted as a group rather than an individual activity. As noted above, one program conducted the Q-sort as an individual activity using an electronic format; this had the benefit of reaching more faculty members by eliminating the need to attend a meeting to perform the exercise. Overall, faculty expressed interest in performing the game again with a new landmark or another group of faculty. Finally, faculty described the exercise as “useful” and “fun,” indicating strong intellectual and creative engagement with these educational concepts.


Discussion

Milestones and entrustable professional activities (EPAs) are receiving considerable scrutiny in medical education, moving residency training from a “dwell-time,” apprenticeship model to an observed, entrustment-based compilation of targets for competence. To meet the ACGME’s requirement that IM residency programs use reporting milestones4 for the NAS beginning July 2014, programs are charged with developing a structure for implementation. Engaging faculty in assessment using milestones can sometimes be difficult, however. As this study demonstrates, using Q-sort—a standardized research method for the study of subjectivity—can actively engage faculty in prioritizing curricular milestones2 for training landmarks and achieve concordance between programs and participants. Following our Q-sort exercise, site PIs reported that faculty members gained a sense of optimism for direct observation, milestones, and CBME.

Limitations

One limitation of this study relates to the uncertainty of the initial published set of curricular milestones.2 Milestones identified as priorities in a Q-sort for a landmark (or an EPA) are dependent on the initial milestone set to be sorted. Using an exclusion process such as the Q-sort raises the question of whether other milestones, perhaps drawn from other sources, could have been included in our starting set of 22 milestones. In addition, the inclusion of other categories of participants (e.g., nurses or patients) might have influenced the prioritization. Although the results of a Q-sort can vary depending on the starting set, participants, and framing issue, we found the general method to be a useful way to prioritize subjectively held perspectives.

Another limitation is that all participating programs were EIP programs, chosen by the ACGME in 2006 as programs of excellence for innovation. EIP programs are, therefore, adept at and have institutional support for educational innovation, are willing to take more risks, and have an incentive to produce positive educational outcomes.14 Although these factors may be more apparent in EIP programs, we assert that most residency programs are similarly focused on improving their means of trainee assessment, which may be affected by the implementation of the NAS.

Finally, because this research is a complex intervention,15 it could not be conducted as a tightly controlled study. Complex interventions must be adjusted to the local educational environment to be adopted successfully, so we allowed for variability within the overarching guidelines we set for this exercise. The study design intentionally gave local control to the site PI to meet the goal of collaboration in a real-world educational setting. This decision does, however, increase the ecological validity of this study.

Next steps

The next phase of the EsAMB milestones study is to apply the high-priority curricular milestones to the educational setting. Using the top eight EsAMB milestones, we constructed a feedback tool for faculty to record observations of trainees in the ambulatory setting. Five E-ROC programs (29 faculty) piloted the feedback tool in the ambulatory setting from December 2010 to February 2011. (For the final EsAMB feedback tool, see Supplemental Digital Appendix 1, http://links.lww.com/ACADMED/A137.) We are currently studying faculty and resident acceptance of this evaluation method and its correlates with trainee performance.

Our Q-sort activity is one practical and engaging step toward the development of a new training model. This exercise can be adapted to engage faculty in prioritizing milestones for other EPAs. It is unclear whether there will be the same agreement for other EPAs and whether other EPAs will be as conducive to this type of prioritization method. Similar Q-sort exercises may also be useful along the continuum of physician training from undergraduate education to recertification. Although this study provides a set of eight milestones that faculty ranked as high priority for EsAMB, it is unclear whether these are the correct milestones for evaluation of competence in this landmark. Further study is needed to understand the relevance of Q-sort for applying milestones to assessment.

Many IM programs are actively applying curricular milestones to EPAs such as “discharging a patient safely,” “leading a code effectively,” and “managing a health care team efficiently.” Caution is advised as CBME has not yet been proven to measure distinct attributes that people can demonstrate in their actual work.16 Other important education principles, such as those of andragogy, may be of equal value.17 The paradigm shift to CBME using milestones is sound educational theory driven by public accountability and, thus, may fulfill an overall social mission of training excellence and patient care.16,18


Conclusions

The application of milestones in the real-world educational setting is a new challenge for residency programs. A Q-sort exercise is simple to distribute and cost-effective, and it engages faculty in examining relationships between curricular milestones and landmarks/EPAs. As our results show, our Q-sort game enabled diverse programs to prioritize milestones for EsAMB with interprogram and interparticipant consistency. Q-sort is a playful way to address milestones in medical education and may provide a practical first step toward using milestones in the real-world educational setting.

Acknowledgments: The authors wish to thank Dr. William Iobst for guiding the early iterative process and Dr. Terry Albanese and Sarah Hood for research and project support. They would also like to thank the following principal investigators from the corresponding programs: Drs. Biana Ieybishkis and Mark Gennis, Aurora Healthcare; Drs. Sam Ives and Anne Pereira, Hennepin County Medical Center; Drs. Jason Post and Denise Dupras, Mayo Clinic College of Medicine; Dr. David Shaw, Scripps Mercy Hospital; Drs. Siegfried Yu and Andrew Varney, Southern Illinois University; Dr. Rebecca Shunk, University of California, San Francisco, School of Medicine; Drs. Bradley Mathis and Eric Warm, University of Cincinnati; Drs. Mary Thompson and Kathleen O’Connell, University of Wisconsin; and Dr. Christopher Nabors, Westchester Medical/New York Medical College.

Funding/Support: None.

Other disclosures: None.

Ethical approval: Ethical approval was granted, or the study was determined to be exempt, by each participating institution’s institutional review board.

Previous presentations: Data were previously reported at a workshop at the Association of Program Directors in Internal Medicine Spring Meeting, Atlanta, Georgia, April 24–25, 2012.

* During the course of this study, the term entrustable professional activity (EPA) gained acceptance for use in the assessment of residents in training. EPAs are the critical activities in a profession that a member of the profession needs to perform competently.7 Our use of the term landmark is likely synonymous with EPA, but for the integrity of the study, we use landmark in this article. Cited Here...


References

1. Nasca T. The CEO’s first column—The next step in the outcomes-based accreditation project. ACGME Bull. May 2008:2–4.

2. Green ML, Aagaard EM, Caverzagie KJ, et al. Charting the road to competence: Developmental milestones for internal medicine. J Grad Med Educ. 2009;1:5–20.

3. Varney A, Todd C, Hingle S, Clark M. Description of a developmental criterion-referenced assessment for promoting competence in internal medicine residents. J Grad Med Educ. 2009;1:73–81.

4. Caverzagie KJ, Iobst WF, Aagaard EM, et al. The internal medicine reporting milestones and the next accreditation system. Ann Intern Med. 2013;158:557–559.

5. Meade LB, Borden SH, McArdle P, Rosenblum MJ, Picchioni MS, Hinchey KT. From theory to actual practice: Creation and application of milestones in an internal medicine residency program, 2004–2010. Med Teach. 2012;34:717–723.

6. Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system—Rationale and benefits. N Engl J Med. 2012;366:1051–1056.

7. ten Cate O, Scheele F. Competency-based postgraduate training: Can we bridge the gap between theory and clinical practice? Acad Med. 2007;82:542–547.

8. Caverzagie K, Hood S, Iobst W. Windows to competence: A framework for implementing milestones to visualize resident competence. Poster presented at: 31st Annual Alliance for Academic Internal Medicine Conference; October 2010; San Antonio, Tex.

9. Kennedy TJ, Regehr G, Baker GR, Lingard L. Point-of-care assessment of medical trainee competence for independent clinical work. Acad Med. 2008;83(10 suppl):S89–S92.

10. Department of Health and Human Services, Centers for Medicare and Medicaid Services. Guidelines for Teaching Physicians, Interns and Residents. http://www.cms.gov/Outreach-and-Education/Medicare-Learning-Network-MLN/MLNProducts/downloads/gdelinesteachgresfctsht.pdf. Accessed April 23, 2013.

11. Brown SR. Q methodology and qualitative research. Qual Health Res. 1996;6:561–567.

12. Valenta AL, Wigger U. Q-methodology: Definition and application in health care informatics. J Am Med Inform Assoc. 1997;4:501–510.

13. van Exel J, de Graaf G. Q methodology: A sneak preview. http://www.qmethodology.net/. Accessed March 29, 2011.

14. Mladenovic J, Bush R, Frohna J. Internal medicine’s educational innovations project: Improving health care and learning. Am J Med. 2009;122:398–404.

15. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M; Medical Research Council Guidance. Developing and evaluating complex interventions: The new Medical Research Council guidance. BMJ. 2008;337:a1655.

16. Lurie SJ, Mooney CJ, Lyness JM. Commentary: Pitfalls in assessment of competency-based educational objectives. Acad Med. 2011;86:412–414.

17. Hinchey KT, Rothberg MB. Can residents learn to be good doctors without harming patients? J Gen Intern Med. 2010;25:760–761.

18. Medicare Payment Advisory Commission. Graduate medical education financing: Focusing on educational priorities. In: Report to the Congress: Aligning Incentives in Medicare. Washington, DC: Medicare Payment Advisory Commission; 2010. http://medpac.gov/documents/Jun10_EntireReport.pdf. Accessed May 3, 2013.


© 2013 by the Association of American Medical Colleges
