Innovation Reports

The McMaster Modular Assessment Program (McMAP)

A Theoretically Grounded Work-Based Assessment System for an Emergency Medicine Residency Program

Chan, Teresa MD; Sherbino, Jonathan MD, MEd for the McMAP Collaborators

doi: 10.1097/ACM.0000000000000707

Abstract

Problem

The shift to competency-based medical education (CBME) is ushering in a need for frequent, criterion-based, authentic assessments of learners that incorporate qualitative measures.1 These qualitative measures will play significant roles in elucidating resident performance for assessment.2 To assess resident competence, generalist programs such as emergency medicine (EM)—which must cover a broad content and skills base—require a substantial number of work-based assessment (WBA) instruments that may include qualitative as well as quantitative measures.

In this report, we describe the development of the McMaster Modular Assessment Program (McMAP), a novel WBA system designed to integrate both quantitative and qualitative measures to generate robust reports on resident performance in the McMaster University Royal College EM residency program. We report our initial experience using McMAP to assess the performance of residents at the junior and intermediate levels (roughly postgraduate years 1 and 2).

Approach

The junior and intermediate levels of McMAP were developed in 2010–2011 by a team of 27 medical educators, education scientists, and residents from six institutions in Canada and the United States. This project was granted an ethics exemption by the Hamilton Integrated Research Ethics Board.

McMAP development

The McMaster University EM residency program’s previous resident assessment system consisted of end-of-rotation reports that contained 67 Likert-scaled items and 2 optional narrative comment fields. These reports were filled out by a single faculty member at the end of a monthlong rotation. There was no system in place to record day-to-day observations to inform these end-of-rotation reports.

In 2010, our residency program completed a targeted needs assessment to define perceived and identified needs for our resident assessment system. As part of this quality improvement effort, 36/53 faculty members (68%) and 30/31 residents (97%) participated in a series of focus groups and completed a survey. The themes that were identified and triangulated via cross-referencing with the assessment literature are included in Supplemental Digital Appendix 1, available at http://links.lww.com/ACADMED/A271.

With the results of this needs assessment in mind, a group of educators at the McMaster University EM residency program collaborated with educators in the EM residency programs at the University of Alberta and the University of Saskatchewan to develop an assessment program organized around the CanMEDS physician competency framework and based on educational theory from the assessment literature. Two-person teams were tasked with developing eight EM-specific WBA instruments. These instruments were structured as focused, partial mini-clinical evaluation exercises (CEXs)—essentially “micro”-CEXs—that could be used in a busy emergency department (ED) environment. Each instrument was mapped to a physician role from the CanMEDS competency framework at the junior level and the intermediate level. Each instrument was designed to provide a template for clinical faculty to assess resident competence in a key EM clinical task (e.g., performing a history, charting, obtaining consent for therapy) via direct observation. All instruments were peer reviewed and refined based on feedback.

In total, 52 WBA instruments were originally created or adapted from existing assessment instruments. All of these instruments were reviewed by an international panel consisting of four clinical content experts (two American and two Canadian attending EM physicians) and two EM residents (one American and one Canadian). The Americans were from the EM residency programs of Louisiana State University, Michigan State University, and Oregon Health & Science University. Each of these reviewers completed a Q-sort, matching the WBA instruments to either the CanMEDS or the Accreditation Council for Graduate Medical Education competency framework. This method was used to check the system for its relevance (i.e., adequate sampling of competencies), as well as its representativeness across postgraduate years and competency frameworks. Ten instruments were removed during this peer-review process.
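
As a rough illustration of how a Q-sort screen like this can be operationalized, the sketch below (in Python) keeps an instrument only when enough reviewers independently sort it onto its intended competency. The reviewer threshold and retention rule are assumptions for illustration; the article does not specify the exact criterion the panel used.

```python
from collections import Counter

# Hypothetical Q-sort screen: retain an instrument only if enough reviewers
# independently match it to its intended competency. The 4-of-6 threshold is
# an assumed retention rule, not the one used by the McMAP review panel.
MIN_AGREEMENT = 4  # of 6 reviewers (4 attending physicians + 2 residents)

def qsort_filter(intended, reviewer_sorts):
    """intended: {instrument: intended_role};
    reviewer_sorts: one {instrument: assigned_role} dict per reviewer."""
    kept, removed = [], []
    for instrument, role in intended.items():
        votes = Counter(sorts[instrument] for sorts in reviewer_sorts)
        (kept if votes[role] >= MIN_AGREEMENT else removed).append(instrument)
    return kept, removed
```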

The final 42 WBA instruments comprising McMAP were bundled along common themes (i.e., CanMEDS roles) for use during our residency program’s four-week rotations. Our residents follow a preset, annual schedule with different CanMEDS roles emphasized each month. The instruments are delivered in a deliberate manner over two years. The 42 WBA instruments are divided into groups of 7 to 8 (i.e., blocks) that emphasize two CanMEDS roles at a time to accommodate focused assessment of practice within a rotation. Each instrument is repeated at least twice per year to allow for convenience sampling based on case presentations that arise each day. This approach facilitates a spiral curriculum with a return to common tasks for greater depth of learning.
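
The sketch below illustrates this bundling logic under stated simplifications: instruments are grouped by primary CanMEDS role, sliced into blocks of roughly seven to eight, and cycled across the annual schedule so that each recurs at least twice per year. The slicing rule is an assumption; the actual McMAP blueprint fixes which roles share a rotation.

```python
# Minimal sketch of the McMAP-style bundling described above. Sorting by
# primary CanMEDS role keeps related tasks together; the block size and
# cycling rule are simplifying assumptions, not the actual blueprint.
def build_blocks(instruments, block_size=8):
    """instruments: list of (name, primary_canmeds_role) tuples."""
    ordered = sorted(instruments, key=lambda item: item[1])  # group by role
    return [ordered[i:i + block_size]
            for i in range(0, len(ordered), block_size)]

def annual_schedule(blocks, months=12):
    """Cycle blocks over the year; with six blocks, each recurs twice."""
    return [blocks[m % len(blocks)] for m in range(months)]
```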

McMAP components

Every shift, residents are observed by an attending physician, who rates their performance of a specific, defined task and their global performance. Observation and documentation take the faculty member 5 to 10 minutes per resident per shift. (For a sample daily task checklist and rating instrument, see Appendix 1. For a sample daily global rating instrument, see Appendix 2.)

Specific tasks.

Assessment tasks and criterion standards are mapped to level of training, providing milestones for performance. Most of the instruments (38/42; 90%) include a structured task checklist, and all instruments use behaviorally anchored scales that guide assessors and promote a shared mental model among faculty raters.3 The criterion-based, standardized anchors that define each level of achievement (i.e., “needs assistance” through “ready for the next level”) help faculty members rate residents’ performance of tasks consistently. Nearly all of the instruments (40/42; 95%) facilitate opportunistic direct observation; only 2 (5%) allow for a simulation option (e.g., a hypothetical response in lieu of a real patient case). Figure 1 shows the distribution of instruments by their primary CanMEDS roles for postgraduate years 1 and 2.

Figure 1: Distribution of the 42 McMaster Modular Assessment Program (McMAP) work-based assessment instruments by the primary CanMEDS roles to which they map, by level of training, McMaster University emergency medicine residency program. PGY indicates postgraduate year.
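
To make the structure of these instruments concrete, here is one way a single task instrument and a completed rating could be modeled in code. The anchor wording, fields, and completeness rule are assumptions drawn from the description above (including the mandatory narrative comment discussed under “The role of qualitative data”), not McMAP’s actual schema.

```python
from dataclasses import dataclass

# Illustrative anchors; the wording is paraphrased from the text, and the
# four-point structure is an assumption for this sketch.
ANCHORS = {
    1: "needs assistance",
    2: "progressing, below expected level",
    3: "at expected level for stage of training",
    4: "ready for the next level",
}

@dataclass
class TaskInstrument:
    task: str                        # e.g., "discharge instructions"
    canmeds_role: str                # primary CanMEDS role the task maps to
    checklist: list                  # deconstructed subelements of the task
    direct_observation: bool = True  # only 2/42 instruments allow simulation

@dataclass
class TaskRating:
    instrument: TaskInstrument
    checklist_done: list             # one bool per checklist item
    anchor_score: int                # keyed into the behavioral anchors above
    narrative: str                   # qualitative comment, mandatory

    def is_complete(self) -> bool:
        # A rating counts as complete only with a narrative comment attached.
        return self.anchor_score in ANCHORS and bool(self.narrative.strip())
```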

Global performance.

The daily global rating instrument captures the resident’s overall performance across all tasks during that shift. It uses milestone-based behavioral anchors that align with a progression of competence (ranging from “needs assistance” to “ready to be an intermediate resident” or “ready to be a senior resident”). This global rating allows the faculty member to assess the entirety of a learner’s behavior during a shift, beyond the specific task emphasized that day.

Exceptional events.

In 2012, an exceptional events reporting system was added, separate from the daily WBA instruments, to increase documentation of exceptional performance. This facet of the assessment system allows an attending physician to confidentially submit information about exceptionally good or poor performance to a third-party mediator. Thus far, this reporting system has shown promise in increasing the sensitivity of our assessment system for detecting outlier behaviors among residents.

The role of qualitative data

For a McMAP task or global rating instrument to be complete, the rater is required to provide narrative comments to augment the numerical scores. These qualitative data provide a “thick” description of resident performance and prompt formative feedback at the end of the shift.2

The role of “choice architecture”

To ensure that we continually evaluate and improve McMAP, we use a continuous quality improvement (CQI) process. Via surveys and focus groups of residents and faculty, we have found that the biggest draw of McMAP is the translation of physician competencies (e.g., CanMEDS roles) into clinically identifiable, EM-specific tasks. The use of behaviorally anchored scales that guide faculty assessors, the use of checklists that deconstruct tasks for residents (and junior faculty) into simpler subelements, and the inclusion of mandatory qualitative comments are examples of choice architecture in McMAP. These features steer faculty members and residents toward best assessment practices. In essence, the assessment tasks serve as a form of “just-in-time” faculty development, guiding clinical teachers to diagnose residents’ areas of concern and achievement.

In addition, the alignment of our WBAs with authentic EM tasks, rather than with generic physician competencies, helps faculty generate more specific and actionable feedback. Whereas our previous system was organized around general CanMEDS roles (e.g., Communicator), McMAP focuses on observable EM tasks (e.g., charting) that can be reliably mapped back to a specific CanMEDS role. Finally, multiple assessments of the components of each CanMEDS role allow for more reliable and specific assessment of competence across the entire physician competency framework.

Reports

All assessments are entered directly into McMAP’s online portal (http://mcmapevents.wix.com/portal), which aggregates the information into password-protected personalized databases that residents can review as desired. At the end of each monthlong rotation, all data from the instruments completed for each resident (approximately 15 task-specific ratings and 15 global performance ratings, with 30 narrative comments) are compiled from this electronic data collection system to generate a draft report of the resident’s performance. Incomplete assessments and borderline/failing ratings are highlighted. Taken together, these reports sample different skill sets throughout the year, with each block emphasizing different tasks; assembled, they provide an overall picture of each resident’s performance over multiple months.
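
The compilation step might look something like the sketch below, which aggregates one resident’s monthly ratings and flags incomplete assessments and borderline or failing scores. The field names and the flagging threshold are assumptions; the portal’s actual schema is not described in this report.

```python
# Hypothetical sketch of the end-of-rotation compilation step. Field names
# and the borderline threshold are assumptions, not the portal's real schema.
BORDERLINE = 2  # assume anchor scores at or below 2 warrant review

def compile_draft_report(resident, ratings):
    """ratings: list of dicts with 'task', 'score', and 'narrative' keys."""
    flags = []
    for r in ratings:
        if not r["narrative"].strip():
            flags.append((r["task"], "incomplete: missing narrative"))
        if r["score"] <= BORDERLINE:
            flags.append((r["task"], f"borderline/failing score {r['score']}"))
    scores = [r["score"] for r in ratings]
    return {
        "resident": resident,
        "assessments": len(ratings),
        "mean_score": sum(scores) / len(scores) if scores else None,
        "narratives": [r["narrative"] for r in ratings if r["narrative"]],
        "flags": flags,
    }
```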

Each resident’s aggregated data report from the end of a monthlong rotation is provided to the rotation’s preceptor, who performs a thematic analysis. This results in a qualitative end-of-rotation report that discusses (1) the resident’s performance on the specific tasks that map to a specific CanMEDS role emphasized during the rotation; (2) the resident’s global performance; and (3) tailored advice for the resident for continuous learner improvement. The preceptor flags marginal performances for review by the residency education committee’s assessment subcommittee, which recommends remediation plans.

Decision-making processes

To make promotion and remediation decisions, we use a mixed model that incorporates both cut points (i.e., preestablished minimal standards for resident performance) and jury-based review (in which major stakeholders, including residents and faculty, assess the aggregated evidence from the successive months to inform decisions about promotion or remediation). This mixed model is new and replaces a system wherein only the program director and assistant program director would periodically review resident reports and/or exam scores. In 2015, our residency program adopted a continuous review system (overseen by the assessment subcommittee) that flags anomalies in resident performance and advises the main decision-making body (the residency education committee).
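
One way to read this mixed model is as a screening rule: clear cases are resolved against the cut point, and borderline cases are routed to jury-based review. The sketch below uses invented thresholds for illustration; our program’s actual standards are set by the assessment subcommittee, not by a formula.

```python
# Illustrative screening rule for the mixed promotion/remediation model.
# The cut point and borderline margin are invented values, not program policy.
PROMOTION_CUT = 3.0   # assumed minimal mean anchor score for promotion
REVIEW_MARGIN = 0.25  # assumed borderline zone referred to the committee

def screen_for_promotion(mean_score):
    if mean_score >= PROMOTION_CUT + REVIEW_MARGIN:
        return "eligible for promotion, pending committee ratification"
    if mean_score <= PROMOTION_CUT - REVIEW_MARGIN:
        return "refer to assessment subcommittee: consider remediation"
    return "borderline: full jury-based review of aggregated evidence"
```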

Outcomes

McMAP was piloted for junior- and intermediate-level residents in academic year 2011–2012. In the first nine months, we gathered more than 4,000 data points for 15 residents in postgraduate years 1 and 2. These data points were 38% qualitative (written comments) and 62% quantitative (completed checklists, ratings of tasks or daily global performance). Our system generated 64 aggregated score reports and 64 end-of-rotation reports.

To determine the efficacy of McMAP, we audited the quality of the end-of-rotation reports using the Completed Clinical Evaluation Report Rating (CCERR) tool.4 The CCERR tool, a nine-item scoring system to evaluate the quality of end-of-rotation reports, has been previously validated across a wide range of specialties and has demonstrated high reliability.4

We compared CCERR scores of end-of-rotation reports from before and after the introduction of McMAP. We randomly selected 25 end-of-rotation reports for postgraduate year 1 and 2 residents from a pre-McMAP year (2010–2011, the year before McMAP was introduced) and 25 from an early McMAP year (2012–2013, the year after McMAP was piloted). Unique identifiers were redacted from all of these reports. All 50 reports were independently scored by two investigators (T.C., J.S.) using the CCERR tool. The level of agreement between the two raters on the CCERR scale was high (Cronbach alpha = 0.92; df = 49, P < .001). Median CCERR scores doubled from the pre-McMAP year to the early McMAP year (13.8/45 [interquartile range = 11.3–15.8] versus 27.5/45 [interquartile range = 20.5–23.5]; P < .001). All nine item subscores within the CCERR also increased significantly after McMAP was introduced. This improvement may reflect that McMAP end-of-rotation reports are based on robust documentation of performance by multiple raters throughout a rotation rather than on a single faculty member’s recall at rotation’s end, as in our previous system.
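
For readers who want to run a similar audit on their own data, the sketch below computes Cronbach’s alpha across two raters’ CCERR totals and compares pre- and post-intervention scores with a Mann-Whitney U test on synthetic data. The data are fabricated for illustration, and the choice of Mann-Whitney is an assumption; the report does not state which significance test was used.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(ratings):
    """ratings: (n_reports, n_raters) array of CCERR totals."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
rater_a = rng.normal(20, 5, 50)            # synthetic CCERR totals, rater 1
rater_b = rater_a + rng.normal(0, 2, 50)   # correlated second rater
alpha = cronbach_alpha(np.column_stack([rater_a, rater_b]))

pre = rng.normal(14, 3, 25)                # synthetic pre-McMAP totals
post = rng.normal(27, 4, 25)               # synthetic early-McMAP totals
u, p = stats.mannwhitneyu(post, pre, alternative="greater")
print(f"alpha = {alpha:.2f}, U = {u:.0f}, P = {p:.4g}")
```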

In addition, residents participating in CQI focus groups have noted that a greater incidence of formative feedback has accompanied the implementation of McMAP.

Next Steps

Clinical environments such as EDs require direct observation and assessment of resident performance despite frequent interruptions, nonstandard patient presentations, and competing interests (i.e., patient care versus resident learning). As we have described above, our EM residency program has implemented McMAP, a novel theory-based WBA system with both quantitative and qualitative measures that allows us to generate robust reports on resident performance while keeping pace with the busy ED environment. Key features of McMAP include the following:

  • Aligning assessments with the clinical environment. McMAP turns large, abstract ideas (e.g., the CanMEDS Communicator role) into clinically identifiable tasks (e.g., medical record documentation or communicating a management plan to a patient). During our CQI process, this feature has been identified as the most important aspect of the system.
  • Being programmatic. McMAP deliberately maps WBA instruments to the CanMEDS competency framework.5
  • Creating a shared mental model. McMAP uses criterion-based standards with behavioral anchors to create a common understanding of expected learner performance among faculty assessors. The WBA instruments are specific to EM practice and help decrease assessor variability through a shared mental model.3
  • Harnessing the wisdom of crowds. McMAP uses multiple assessments by multiple assessors via multiple instruments to ensure a reliable sampling across all of the domains of physician competence. This approach mimics the design of an objective structured clinical examination or the multiple mini-interview.

In the summer of 2014, we began piloting McMAP with senior residents. The McMAP team developed 31 new WBA instruments specifically designed to assess senior residents on milestones and entrustable professional activities. Each instrument was mapped to at least one CanMEDS role and designed to assess senior-level competencies. These instruments are divided into four blocks, emphasizing ED management, flow, patient safety, and other higher-order skills. Efforts are ongoing to analyze the relationships among residents’ junior-, intermediate-, and senior-level ratings to determine whether McMAP performance trajectories can identify exceptional or at-risk residents earlier.
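
A simple version of such a trajectory metric is sketched below: fit a linear trend to a resident’s monthly mean global ratings and flag flat or negative slopes for early review. The threshold and data are illustrative assumptions; as noted below, no validated trajectory metrics yet exist in medical education.

```python
import numpy as np

# Illustrative trajectory metric: least-squares slope of a resident's
# monthly mean global ratings. The flagging rule is an assumption.
def trajectory_slope(months, mean_scores):
    slope, _intercept = np.polyfit(months, mean_scores, deg=1)
    return slope

months = np.arange(1, 10)                                # nine rotation months
scores = [2.0, 2.1, 2.3, 2.2, 2.6, 2.7, 2.9, 3.0, 3.2]  # synthetic ratings
s = trajectory_slope(months, scores)
print(f"{s:.2f} anchor points/month ->",
      "flag for early review" if s <= 0 else "progressing")
```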

Moving forward, our next steps include deciding how to handle “big data” in assessment and delineating policies for promotion decisions.

Handling “big data” in assessment

More research is needed on processes to combine and represent larger quantities of learner assessment data. During our pilot year, we gathered a large amount of data (> 4,000 data points for 15 residents in nine months). The medical education literature, however, mainly describes processes for handling big data in relation to standardized examinations, which are not intended for either longitudinal assessment or mixed forms of data. Effective promotion or remediation decisions will require credible interpretation of data, which depends on (1) data representation and score compilation and (2) policies based on defined markers of competence. McMAP data lend themselves to analysis of an individual resident’s learning trajectory, but only a few published processes (none of them validated) exist in medical education to guide the use of such metrics.

Policies for promotion decisions

Without policies to guide promotion decisions, McMAP is merely a comprehensive data collection system. McMAP end-of-rotation reports provide the residency education committee’s assessment subcommittee with robust data to inform decisions about promotion or remediation. However, to convert McMAP into a true CBME system, policies that allow for the tailored progression of learners as a function of their assessments and learning trajectories are required. Recognizing that competence is often task specific, and not generalizable across all EM domains, we must determine at what point a resident should be permitted to advance to a more senior role. For example, must every milestone for a particular stage of training be achieved to justify promotion, or is a key, representative sampling of milestones sufficient? These questions have yet to be answered.

Conclusions

McMAP provides a functioning model for a WBA system that incorporates both task-specific and global assessments of resident performance and generates a significant amount of specific and informative qualitative and quantitative data on each trainee. The assessment instruments provide faculty assessors with rubrics to align their frames of reference and provide just-in-time faculty development. By aligning assessment instruments with authentic EM work-based tasks, this novel system has changed our residency program’s culture to normalize daily feedback.

Acknowledgments: The McMaster Modular Assessment Program (McMAP) Collaborators are a team of 25 educators and education scientists and 2 residents from three Canadian universities (McMaster University, the University of Alberta, and the University of Saskatchewan) and three U.S. universities (Louisiana State University, Michigan State University, and Oregon Health & Science University) who developed and reviewed the McMAP instruments. The authors would like to acknowledge the hard work of their fellow McMAP Collaborators (M. Ackerman, J. Cherian, N. Delbel, K. Dong, S. Dong, K. Hawley, M. Jalayer, B. Judge, R. Kerr, A. Kirkham, N. Lalani, A.R. Mallin, S. McClennan, P. Miller, A. Pardhan, G. Rutledge, K. Schiff, D. Sehdev, T. Swoboda, S. Upadhye, R. Valani, C. Wallner, M. Welsford, R. Woods, and A. Zaki). They also wish to thank the McMaster University Division of Emergency Medicine administrators (Teresa Vallera, Melissa Hymers, Neha Dharwan, and Amanda Li). In addition, the authors thank their friends and research colleagues, Dr. Kelly Dore, Dr. Geoff Norman, and Dr. Meghan McConnell, for their advice on this project. Finally, the authors would like to thank Dr. Ian Preyra (former program director of the Royal College Emergency Medicine Program), Dr. Alim Pardhan (program director of the Royal College Emergency Medicine Program), and Dr. Karen Schiff (associate program director of the Royal College Emergency Medicine Program) for providing the support, time, and mandate to implement McMAP.

References

1. Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR. The role of assessment in competency-based medical education. Med Teach. 2010;32:676–682.
2. Hodges B. Assessment in the post-psychometric era: Learning to love the subjective and collective. Med Teach. 2013;35:564–568.
3. Kogan JR, Conforti L, Bernabeo E, Iobst W, Holmboe E. Opening the black box of clinical skills assessment via observation: A conceptual model. Med Educ. 2011;45:1048–1060.
4. Dudek NL, Marks MB, Wood TJ, Lee AC. Assessing the quality of supervisors’ completed clinical evaluation reports. Med Educ. 2008;42:816–822.
5. Moonen-van Loon JM, Overeem K, Donkers HH, van der Vleuten CP, Driessen EW. Composite reliability of a workplace-based assessment toolbox for postgraduate medical education. Adv Health Sci Educ Theory Pract. 2013;18:1087–1102.

Appendix 1
Sample Intermediate-Level McMaster Modular Assessment Program (McMAP) Daily Task Checklist and Rating Instrument, McMaster University Emergency Medicine Residency Program

Name of Assessor: ___________________ Date: _________________

Minor Task: Discharge Instructions

Today’s focus is on discharge instructions.


Appendix 2
Sample Intermediate-Level McMaster Modular Assessment Program (McMAP) Daily Global Rating Instrument, McMaster University Emergency Medicine Residency Program

© 2015 by the Association of American Medical Colleges