
Recommendations for Reporting Mastery Education Research in Medicine (ReMERM)

Cohen, Elaine R. MEd; McGaghie, William C. PhD; Wayne, Diane B. MD; Lineberry, Matthew PhD; Yudkowsky, Rachel MD, MHPE; Barsuk, Jeffrey H. MD, MS

doi: 10.1097/ACM.0000000000000933

Abstract

Traditional medical education strategies rely on clinical training using a passive, time-limited apprenticeship model. This results in variable skill acquisition and retention among practicing clinicians.1–3 The historic “see one, do one, teach one” approach4–6 allows mistakes to be handed down from one generation of trainees to the next—potentially exposing patients to medical error and risk. Alternatively, use of the mastery model, a strict form of competency-based education, ensures that learners are rigorously assessed and have the necessary skills to provide safe and effective patient care.7

Prior scholarship has identified seven principles that characterize mastery learning programs8:

  1. Baseline, or diagnostic testing;
  2. Clear learning objectives, sequenced as units in increasing difficulty;
  3. Engagement in educational activities (e.g., deliberate practice, data interpretation, reading) focused on the objectives;
  4. A set minimum passing standard (MPS) (e.g., test score) for each educational unit;
  5. Formative testing with specific feedback to gauge unit completion at a preset MPS for mastery;
  6. Advancement to the next educational unit given measured achievement at or above the mastery standard; and
  7. Continued practice or study on an educational unit until the mastery standard is reached.

In mastery learning, training time varies, but outcomes (e.g., knowledge and skill acquisition) are uniform, allowing for little variability and high levels of achievement for all.8
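
To make this cycle concrete, the short Python sketch below (illustrative only; function and attribute names such as complete_unit and unit.mps are hypothetical and not part of the ReMERM guidelines) models a single learner moving through sequenced units: baseline testing, deliberate practice with feedback, formative testing against a preset MPS, and advancement only once the mastery standard is reached.

```python
# Illustrative sketch of the per-unit mastery learning cycle (hypothetical names).
# The curriculum supplies assess() and practice_with_feedback(); units carry a preset MPS.

def complete_unit(learner, unit, assess, practice_with_feedback):
    """Train one learner on one unit until the minimum passing standard (MPS) is met."""
    baseline = assess(learner, unit)              # principle 1: baseline/diagnostic testing
    score, extra_rounds = baseline, 0
    while score < unit.mps:                       # principles 4-5: formative testing vs. preset MPS
        practice_with_feedback(learner, unit, score)  # principles 3 and 7: practice until mastery
        score = assess(learner, unit)
        extra_rounds += 1
    return {"baseline": baseline, "final": score, "extra_practice_rounds": extra_rounds}

def complete_curriculum(learner, units, assess, practice_with_feedback):
    """Advance through sequenced units (principle 2) only after mastery of each (principle 6)."""
    return [complete_unit(learner, u, assess, practice_with_feedback) for u in units]
```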

In medical education, mastery learning has been used for the acquisition and maintenance of a variety of clinical skills.1,2,9–15 Simulation-based education using mastery learning has been shown to deliver better learning outcomes than simulation education without mastery.16,17 Medical education research programs that incorporate mastery learning have also shown improved patient care practices in advanced cardiac life support,17 paracentesis,18 lumbar puncture,2 and central venous catheter insertion skills.11,19,20 Two recent reviews document an increasing number of mastery learning manuscripts published in the medical education literature from 2003 to 2013.16,21 However, in general, the overall quality of reporting of experimental studies in medical education research has been uneven.22,23 Many studies lack rigor and well-defined research methods. Therefore, more rigorous and higher-quality research in the field of mastery learning is essential.

The Accreditation Council for Graduate Medical Education (ACGME) recently moved toward an outcomes-based accreditation system requiring assessments of trainees using competency-based Milestones.24 Additionally, the Association of American Medical Colleges (AAMC) has identified 13 new entrustable professional activities (EPAs) that medical school graduates are expected to be skilled in performing on the first day of residency.25 Accepted standards about how to rigorously define, develop, and assess mastery learning curricula are essential to accommodate these shifts across the medical education continuum.

Guidelines for reporting randomized trials (CONSORT),26 observational studies (STROBE),27 quality improvement projects (SQUIRE),28 systematic reviews and meta-analyses (PRISMA),29 and qualitative research (SRQR)30 have been proposed in the medical literature. These published reporting guidelines help authors conduct rigorous studies, prepare lucid manuscripts, and standardize reporting expectations for journals. However, to our knowledge there are no guidelines to report studies that evaluate mastery learning curriculum outcomes in medical education. Our goal in this article was thus to define standards for the quantitative evaluation of mastery learning curricula based on previously published guidelines in related fields and expert consensus.

Guideline Design

We reviewed recommendations published in the fields of clinical medicine26–30 and medical education16,31 to establish appropriate and rigorous guidelines for mastery learning research reports. The guidelines were compiled and agreed on by six authors—four with expertise in designing, evaluating, and implementing mastery learning curricula (E.C., J.B., D.W., W.M.), and two with expertise in performance assessment and medical education (M.L., R.Y.). We also reviewed previous guidelines from the fields of psychology32 and the social sciences33 to ensure the inclusivity of the current recommendations. We used a modified Delphi technique to reach consensus on the final guidelines. The guidelines were then circulated for comment and review to 12 other experts in medical education who were also invited to write articles for this mastery learning cluster. These individuals were a representative national sample with substantial experience in mastery learning research methodology. We incorporated all of their suggested revisions and additions by consensus; these were mostly matters of clarity and wording, so no substantive changes to the guidelines resulted.

The result was a final list of reporting guidelines, the Reporting Mastery Education Research in Medicine (ReMERM) guidelines.

ReMERM Guidelines

The final guidelines we propose have 38 essential items in 22 categories for planning and executing a successful mastery learning research study (Table 1). Each item was determined to be either (1) required of all medical education research studies; or (2) required, and unique to mastery learning curriculum evaluation studies. The guidelines are divided into six sections: title and abstract; introduction; methods; results; discussion; and other information. Details of each item, with a specific focus on those unique to mastery learning, are described below.

Table 1: Reporting Mastery Education Research in Medicine (ReMERM) 2015 Guidelines

Title and abstract

The title of the research paper must include “mastery learning” to identify the study and ensure that it is properly indexed. The abstract should be consistent with what is included in the full text of the article, sufficiently summarizing the key findings. This is an essential step for all published research because readers often base their assessment of a manuscript on this information, and it allows the work to be easily located by conventional literature search methods.

Introduction

The introduction should include a review of the medical or medical education literature relevant to the subject and the rationale for the study. The introduction should explain how the manuscript relates to the intended journal readership and how the idea under study is unique. At least a brief theoretical rationale, grounded in a conceptual framework, should be provided for the study.34 Study goals or objectives should be described, and a clear hypothesis stated.

Methods

The study design35 should be described (e.g., experimental versus quasi-experimental, with or without comparison groups).36 The methods should describe the study location, time frame, and participants. The learner or trainee participant population should be defined using a set of inclusion and exclusion criteria and sampling procedures.37 Authors should provide details about the size of the study cohort, institution, and location. A statement about ethics or institutional review board approval must be included.

Investigators must report the properties of the measurements or assessments they have adopted or developed. Various types of assessment tools can be used in medical education including written examinations,38 checklists,39 and global rating scales.40 Self-assessments are not recommended as a stand-alone outcome measure given their generally poor relationship to objective outcome measures.41,42 If assessment tools are newly created, authors should describe how instruments have been pilot tested to produce reliable data that inform valid decisions.43 Use of more than one assessor during this pilot stage is critical to obtain estimates of interrater reliability.44 All raters should be trained and calibrated to use the assessment tools to ensure consistency. Tools that are found to have low levels of interrater reliability can be rewritten or recalibrated. Use (and referencing) of previously published measures is acceptable, given presentations of reliability evidence and validity arguments in previous research reports.
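
As a hypothetical illustration of the interrater reliability estimates mentioned above, the sketch below computes percent agreement and Cohen's kappa for two raters scoring the same dichotomous checklist items; the ratings are invented for illustration and are not drawn from any study cited here.

```python
# Minimal sketch: percent agreement and Cohen's kappa for two raters scoring
# the same dichotomous checklist items (ratings invented for illustration).
from collections import Counter

rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # 1 = item performed correctly, 0 = not performed
rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n            # observed agreement
count_a, count_b = Counter(rater_a), Counter(rater_b)
expected = sum((count_a[c] / n) * (count_b[c] / n) for c in set(rater_a) | set(rater_b))
kappa = (observed - expected) / (1 - expected)                          # chance-corrected agreement
print(f"Observed agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
```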

One of the essential steps in mastery learning research is setting defensible standards. In the mastery learning model, a criterion-based MPS is set for each skill. Inferences based on meeting the MPS, and the consequences of not meeting passing standards, must be defined. The standard setting process should be documented, reported, or referenced in a mastery learning manuscript. Common criterion-based standard setting techniques in medical education include the Angoff, Hofstee, Ebel, borderline group, and contrasting groups methods.43 Any performance data shared with the judges as part of the standard setting exercise should be specified.45 Traditional standard setting methods43 may need to be modified when used in mastery learning settings; in particular, the MPS must be set to represent the performance of the learner who is well prepared to succeed at the next stage of training or practice, rather than reflecting minimal competence. For more detail on standard setting methods, see the article by Yudkowsky et al46 in this issue.
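
As a hedged, worked illustration of one common approach, the sketch below computes an item-based Angoff-style MPS from judges' per-item probability estimates (all numbers invented); consistent with the mastery orientation described above, the judges would be asked to envision the learner who is well prepared to succeed at the next stage rather than a minimally competent one.

```python
# Minimal sketch of an Angoff-style standard setting calculation (invented data).
# Each judge estimates, for every checklist item, the probability that a
# well-prepared learner would perform the item correctly.

judge_estimates = {                       # hypothetical ratings from three judges
    "judge_1": [0.90, 0.85, 0.95, 0.80, 0.90],
    "judge_2": [0.85, 0.90, 0.90, 0.85, 0.95],
    "judge_3": [0.95, 0.80, 0.90, 0.90, 0.85],
}

n_items = len(next(iter(judge_estimates.values())))
# Average the estimates per item across judges, then sum over items.
item_means = [sum(est[i] for est in judge_estimates.values()) / len(judge_estimates)
              for i in range(n_items)]
mps_items = sum(item_means)                      # expected number of items performed correctly
mps_percent = 100 * mps_items / n_items          # the MPS expressed as a checklist percentage
print(f"MPS = {mps_items:.1f} of {n_items} items ({mps_percent:.0f}%)")
```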

Details about the educational intervention are particularly important in a mastery learning curriculum evaluation study. Readers should be able to replicate the intervention based on the information provided. Authors should describe the teaching tools used for both instruction and evaluation. The amount of time required to complete the standard curriculum should also be reported. We recommend a baseline assessment (at the beginning of the curriculum) as part of the educational intervention because these assessments tend to provide extra feedback and can help focus both the learner and instructor on specific needs during practice sessions. In addition, research shows that performing a baseline assessment or pretest can improve subsequent learner performance.47 Next, a detailed description of the nature of the deliberate practice sessions is needed. Deliberate practice,48 a core feature of mastery learning programs, is an established strategy that uses individualized feedback to help medical trainees acquire expertise. Details of the practice session(s) should include the resources required, practice duration and intensity, and whether practice was supervised. Description of the feedback and debriefing, another critical component of mastery learning, should contain information about content, method, duration, frequency, source, and facilitator expertise.49 More specifics on debriefing in mastery learning can be found in the article by Eppich et al50 in this issue. Finally, details of the posttraining assessment are required. A flowchart showing the study design is helpful to describe the path of the investigation (Figure 1).

Figure 1: Example flowchart of a mastery learning curriculum evaluation study design.

All research manuscripts should explain the predefined primary and secondary outcome measures. The primary outcome measure answers the main research question of the study; all other outcomes are secondary. Details including how (which assessment tools) and when (at what time point) each outcome is measured should be reported. If assessments are administered at different time points (e.g., pre- and posttest), the sampled content should be described (e.g., whether an identical test is repeated). For any effect estimates (e.g., testing for a hypothesized gain in scores after introducing a mastery learning curriculum reform), an a priori statistical power estimate should be reported, including specification of what effect size would be judged practically significant.51 Authors should provide details about which statistical tests are used for group and subgroup analyses. A skilled data analyst should be able to replicate the analyses given the provided information. The statistical software program used to perform the analyses should always be reported.
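
As a hypothetical illustration of the a priori power estimate recommended above, the sketch below uses the statsmodels package to estimate the sample size needed for a two-group comparison of posttest scores; the effect size, alpha, and power are illustrative assumptions, not values drawn from any cited study.

```python
# Minimal sketch of an a priori power analysis for a two-group comparison of
# posttest scores (all inputs are illustrative assumptions).
from statsmodels.stats.power import TTestIndPower

effect_size = 0.8   # assumed standardized mean difference judged practically significant
alpha = 0.05        # two-sided type I error rate
power = 0.80        # desired statistical power

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size, alpha=alpha, power=power)
print(f"Approximately {n_per_group:.0f} learners are needed per group")
```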

Results

The results of a study should begin with information about the trainees. The number of learners who completed the mastery learning program, in addition to those who did not (or were excluded), must be reported. Relevant demographic information or trainee characteristics should be included in a table or in the manuscript text. If the study compares more than one group, information on each set of trainees should be reported. Data on the number of participants who achieved mastery within the standard curriculum should be reported, and the number and percentage of learners who needed extra time to reach the mastery standard should be stated.

Details should be provided to confirm that all measures are being used correctly. Reliability estimates must be reported for each assessment tool used in the study.37 Interrater reliability coefficients should be reported if more than one rater is used to grade trainee assessments.

Descriptive statistics for each outcome measure should be reported before the results of inferential statistical analyses. Violations of statistical test assumptions and missing data must be addressed. For each subgroup, on each outcome measure, information about sample size, central tendency (e.g., mean or median), and dispersion (e.g., variance or standard deviation) should be included. When group comparisons are made, a graphic presentation of each group’s central tendency (e.g., mean) is recommended, preferably with an indication of how precisely the central tendency was measured (e.g., 95% confidence intervals for each mean). P values should be reported to determine statistical significance. Results should be reported for all primary and secondary outcome measures, not just for analyses that were statistically significant. The additional time and resources required for trainees to achieve mastery should be reported as a secondary outcome.
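
For instance, a group's mean posttest score, sample standard deviation, and 95% confidence interval could be computed as in the sketch below (scores invented for illustration), using numpy and scipy:

```python
# Minimal sketch: descriptive statistics and a 95% confidence interval for one
# group's posttest scores (scores invented for illustration).
import numpy as np
from scipy import stats

posttest_scores = np.array([82, 91, 88, 95, 79, 90, 84, 93, 87, 89])  # hypothetical percentages
n = posttest_scores.size
mean = posttest_scores.mean()                    # central tendency
sd = posttest_scores.std(ddof=1)                 # dispersion (sample standard deviation)
sem = sd / np.sqrt(n)
ci_low, ci_high = stats.t.interval(0.95, df=n - 1, loc=mean, scale=sem)
print(f"n = {n}, mean = {mean:.1f}, SD = {sd:.1f}, 95% CI = ({ci_low:.1f}, {ci_high:.1f})")
```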

Discussion

The discussion section of any research manuscript begins with an interpretation of the main results. This must include any unexpected findings. Authors should relate this information to previous work by other authors. Limitations of research studies should always be recognized and study weaknesses reported. Common examples include potential for bias, sample size, measurement imprecision, and threats to validity. The transfer (or generalizability) of the results should be discussed, demonstrating the external validity of the study. Future areas for research based on the findings of the current study should be expressed.

Other information

References must always be included in any research manuscript, and appropriate citations should be noted. Authors should report the sources of funding for the study and any conflicts of interest. In addition, it is important to acknowledge any other support (staff, faculty, or departmental) received for the conduct of the study, data analysis, or preparation of the manuscript.

Commentary

A shift toward competency-based (mastery) education of health care providers is necessary to provide more consistency in training, address individual learner needs, use more rigorous assessments, and ensure uniformly high levels of performance that ultimately may improve patient care quality. This change in medical education is overdue because studies have shown extreme variability in the skills of residents,2 fellows,1 and senior attending physicians.3 One particularly dismaying finding comes from a study by Birkmeyer et al3 in which 20 practicing attending surgeons provided videos of their best laparoscopic gastric bypass surgery. The skills of the surgeons were widely variable, and the surgeons with the poorest skills had the highest number of complications. However, if these 20 attending surgeons had been required to meet an MPS as part of a mastery learning curriculum, there would likely have been little variability in their skills, as seen in other applications of the mastery learning framework.1,2,10–12,14,19 The ACGME and AAMC recently recognized this issue by mandating assessments of trainees using Milestones24 and EPAs,25 a step toward comprehensive competency-based education. However, a road map of how to perform these assessments is not as clearly spelled out. Because achievement of individual Milestones and EPAs reflects mastery of a specific area and readiness to move on to additional training, we believe there is a significant need for guidelines about how to design and report mastery learning medical education interventions.

The ReMERM guidelines highlight the importance of developing and reporting rigorous program evaluation and research studies. These guidelines can now join the list of many others26–30 to further improve the quality of reporting in medical education research. In addition, we believe the ReMERM guidelines can help researchers develop and report successful mastery learning studies that have the potential to benefit trainees, patients, and populations.

The development of these guidelines had several limitations. First, we developed the guidelines with input from 12 experts, yet there are likely several others who could have added to the guidelines we propose. As with many other research guidelines,26–30 we suspect that the ReMERM guidelines will evolve over time to reflect the constantly changing field of medical education. Second, mastery learning has not been widely adopted by medical educators because additional time and resources are needed to develop mastery learning curricula.16 Third, the majority of mastery learning research involves procedural skills. However, several studies show the mastery learning model successfully applied to cognitive and communication skills.12,14,52 Fourth, the original intent of this manuscript was to give educators the tools they need to evaluate mastery learning research. However, some authors may choose to apply these guidelines to describe research using alternative rigorous teaching methods where appropriate. Finally, as with any new set of guidelines published in the medical literature, it will likely take significant time before the ReMERM guidelines are widely adopted.53,54

We anticipate that this first edition of the 38-item ReMERM guidelines will be used by educators, authors, peer reviewers, journal editors, and readers to help better understand and evaluate mastery learning curriculum research. We believe that over time the guidelines will need updating as other guidelines have,24,27 given the evolving nature of medical education. Expansion of the mastery model is essential for medical education to ensure that all learners achieve a high level of skill. With a national shift toward competency-based medical education, the ReMERM guidelines should help medical educators document and improve their assessment of trainees’ important clinical skills.

References

1. Barsuk JH, Ahya SN, Cohen ER, McGaghie WC, Wayne DB. Mastery learning of temporary hemodialysis catheter insertion by nephrology fellows using simulation technology and deliberate practice. Am J Kidney Dis. 2009;54:70–76
2. Barsuk JH, Cohen ER, Caprio T, McGaghie WC, Simuni T, Wayne DB. Simulation-based education with mastery learning improves residents’ lumbar puncture skills. Neurology. 2012;79:132–137
3. Birkmeyer JD, Finks JF, O’Reilly A, et al.; Michigan Bariatric Surgery Collaborative. Surgical skill and complication rates after bariatric surgery. N Engl J Med. 2013;369:1434–1442
4. Duffy FD, Holmboe ES. What procedures should internists do? Ann Intern Med. 2007;146:392–393
5. Landro L. To reduce risks, hospitals enlist “proceduralists.” Wall St J. 2007. http://www.wsj.com/articles/SB118410727844462566. Accessed July 14, 2015
6. Tulsky JA, Chesney MA, Lo B. See one, do one, teach one? House staff experience discussing do-not-resuscitate orders. Arch Intern Med. 1996;156:1285–1289
7. McGaghie WC, Miller GE, Sajid AW, Telder TV. Competency-based curriculum development in medical education: An introduction. Public Health Pap. 1978;(68):11–91
8. McGaghie WC, Siddall VJ, Mazmanian PE, Myers J; American College of Chest Physicians Health and Science Policy Committee. Lessons for continuing medical education from simulation research in undergraduate and graduate medical education: Effectiveness of continuing medical education: American College of Chest Physicians Evidence-Based Educational Guidelines. Chest. 2009;135(3 suppl):62S–68S
9. Wayne DB, Barsuk JH, O’Leary KJ, Fudala MJ, McGaghie WC. Mastery learning of thoracentesis skills by internal medicine residents using simulation technology and deliberate practice. J Hosp Med. 2008;3:48–54
10. Barsuk JH, Cohen ER, Vozenilek JA, O’Connor LM, McGaghie WC, Wayne DB. Simulation-based education with mastery learning improves paracentesis skills. J Grad Med Educ. 2012;4:23–27
11. Barsuk JH, McGaghie WC, Cohen ER, O’Leary KJ, Wayne DB. Simulation-based mastery learning reduces complications during central venous catheter insertion in a medical intensive care unit. Crit Care Med. 2009;37:2697–2701
12. Wayne DB, Butter J, Siddall VJ, et al. Mastery learning of advanced cardiac life support skills by internal medicine residents using simulation technology and deliberate practice. J Gen Intern Med. 2006;21:251–256
13. Cohen J, Cohen SA, Vora KC, et al. Multicenter, randomized, controlled trial of virtual-reality simulator training in acquisition of competency in colonoscopy. Gastrointest Endosc. 2006;64:361–368
14. Cohen ER, Barsuk JH, Moazed F, et al. Making July safer: Simulation-based mastery learning during intern boot camp. Acad Med. 2013;88:233–239
15. Zendejas B, Cook DA, Hernández-Irizarry R, Huebner M, Farley DR. Mastery learning simulation-based curriculum for laparoscopic TEP inguinal hernia repair. J Surg Educ. 2012;69:208–214
16. Cook DA, Brydges R, Zendejas B, Hamstra SJ, Hatala R. Mastery learning for health professionals using technology-enhanced simulation: A systematic review and meta-analysis. Acad Med. 2013;88:1178–1186
17. Didwania A, McGaghie WC, Cohen ER, et al. Progress toward improving the quality of cardiac arrest medical team responses at an academic teaching hospital. J Grad Med Educ. 2011;3:211–216
18. Barsuk JH, Cohen ER, Feinglass J, McGaghie WC, Wayne DB. Clinical outcomes after bedside and interventional radiology paracentesis procedures. Am J Med. 2013;126:349–356
19. Barsuk JH, McGaghie WC, Cohen ER, Balachandran JS, Wayne DB. Use of simulation-based mastery learning to improve the quality of central venous catheter placement in a medical intensive care unit. J Hosp Med. 2009;4:397–403
20. Barsuk JH, Cohen ER, Feinglass J, McGaghie WC, Wayne DB. Use of simulation-based education to reduce catheter-related bloodstream infections. Arch Intern Med. 2009;169:1420–1423
21. McGaghie WC, Issenberg SB, Barsuk JH, Wayne DB. A critical review of simulation-based mastery learning with translational outcomes. Med Educ. 2014;48:375–385
22. Cook DA, Beckman TJ, Bordage G. Quality of reporting of experimental studies in medical education: A systematic review. Med Educ. 2007;41:737–745
23. Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: A BEME systematic review. Med Teach. 2005;27:10–28
24. Accreditation Council for Graduate Medical Education. Milestones. https://www.acgme.org/acgmeweb/tabid/430/ProgramandInstitutionalAccreditation/NextAccreditationSystem/Milestones.aspx. Accessed July 14, 2015
25. Association of American Medical Colleges. Core entrustable professional activities for entering residency: Curriculum developers’ guide. https://members.aamc.org/eweb/upload/Core%20EPA%20Curriculum%20Dev%20Guide.pdf. Accessed July 14, 2015
26. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 explanation and elaboration: Updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c869
27. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP; STROBE Initiative. Strengthening the reporting of observational studies in epidemiology (STROBE) statement: Guidelines for reporting observational studies. BMJ. 2007;335:806–808
28. Ogrinc G, Mooney SE, Estrada C, et al. The SQUIRE (standards for quality improvement reporting excellence) guidelines for quality improvement reporting: Explanation and elaboration. Qual Saf Health Care. 2008;17(suppl 1):i13–i32
29. Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann Intern Med. 2009;151:264–269, W64
30. O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: A synthesis of recommendations. Acad Med. 2014;89:1245–1251
31. Education Group for Guidelines on Evaluation. Guidelines for evaluating papers on educational interventions. BMJ. 1999;318:1265–1267
32. American Psychological Association Publications and Communications Board Working Group on Journal Article Reporting Standards. Reporting standards for research in psychology: Why do we need them? What might they be? Am Psychol. 2008;63:839–851
33. American Educational Research Association. Standards for reporting on empirical social science research in AERA publications. Educ Res. 2006;35:33–40
34. McGaghie WC, Bordage G, Shea JA. Problem statement, conceptual framework and research question. Acad Med. 2001;76:923–924
35. Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin; 2001.
36. Kratochwill TR, Levin JR. Single-Case Intervention Research: Methodological and Statistical Advances. Washington, DC: American Psychological Association; 2014
37. Cohen L, Morrison K, Manion L. Research Methods in Education. London, UK: Falmer Press; 2000
38. Downing SM, Haladyna TM. Handbook of Test Development. Mahwah, NJ: L. Erlbaum; 2006
39. Stufflebeam DL. The checklists development checklist. 2000. http://www.wmich.edu/sites/default/files/attachments/u350/2014/guidelines_cdc.pdf. Accessed September 10, 2014
40. Adler MD, Vozenilek JA, Trainor JL, et al. Comparison of checklist and anchored global rating instruments for performance rating of simulated pediatric emergencies. Simul Healthc. 2011;6:18–24
41. Dunning D, Heath C, Suls JM. Flawed self-assessment: Implications for health, education, and the workplace. Psychol Sci Public Interest. 2004;5:69–106
42. Davis DA, Mazmanian PE, Fordis M, Van Harrison R, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: A systematic review. JAMA. 2006;296:1094–1102
43. Downing SM, Tekian A, Yudkowsky R. Procedures for establishing defensible absolute passing scores on performance examinations in health professions education. Teach Learn Med. 2006;18:50–57
44. Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health. 1998;52:377–384
45. Wayne DB, Barsuk JH, Cohen E, McGaghie WC. Do baseline data influence standard setting for a clinical skills examination? Acad Med. 2007;82(10 suppl):S105–S108
46. Yudkowsky R, Park YS, Lineberry M, Knox A, Ritter EM. Setting mastery learning standards. Acad Med. 2015;90:1495–1500
47. Grigorenko EL, Sternberg RJ. Dynamic testing. Psychol Bull. 1998;124:75–111
48. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004;79(10 suppl):S70–S81
49. Cheng A, Eppich W, Grant V, Sherbino J, Zendejas B, Cook DA. Debriefing for technology-enhanced simulation: A systematic review and meta-analysis. Med Educ. 2014;48:657–666
50. Eppich WJ, Hunt EA, Duval-Arnould JM, Siddall VJ, Cheng A. Structuring feedback and debriefing to achieve mastery learning goals. Acad Med. 2015;90:1501–1508
51. Kraemer HC, Thiemann S. How Many Subjects? Statistical Power Analysis in Research. Newbury Park, CA: Sage Publications; 1987
52. Butter J, McGaghie WC, Cohen ER, Kaye M, Wayne DB. Simulation-based mastery learning improves cardiac auscultation skills in medical students. J Gen Intern Med. 2010;25:780–785
53. Berwick DM. Disseminating innovations in health care. JAMA. 2003;289:1969–1975
54. Riley WT, Glasgow RE, Etheredge L, Abernethy AP. Rapid, responsive, relevant (R3) research: A call for a rapid learning health research enterprise. Clin Transl Med. 2013;2:10
© 2015 by the Association of American Medical Colleges