A Shift on the Horizon: A Systematic Review of Assessment Tools for Plastic Surgery Trainees

Frendø, Martin M.D.

Plastic and Reconstructive Surgery: May 2019 - Volume 143 - Issue 5 - p 1129e
doi: 10.1097/PRS.0000000000005549
Letters

Copenhagen Academy for Medical Education and Simulation, Rigshospitalet University Hospital, University of Copenhagen, Blegdamsvej 9, DK-2100 Copenhagen Ø, Denmark, martin.frendoe-soerensen.01@regionh.dk


Sir:

The recently published article “A Shift on the Horizon: A Systematic Review of Assessment Tools for Plastic Surgery Trainees”1 presents a systematic literature review and description of assessment tools used in plastic surgery. The authors should be congratulated on their work, which highlights the need for assessment tools to allow for competence-based training. From a scientific point of view, however, there are several concerns about how the assessment tools were evaluated in the article.

The evaluation of the validity of assessment tools requires a systematic approach based on contemporary methods, as the theoretical basis for the assessment of surgical skills has undergone extensive development.2 A unified model of validity has replaced the use of different validity types (i.e., face, content, criterion, and concurrent).3 In this context, Messick’s framework remains the most widely accepted validity framework.4 Unfortunately, by relying on an outdated framework for validity, the present article does not acknowledge the past three decades of development in this area. For instance, the concept of face validity (“do the participants subjectively like the test?”) used in the study to evaluate the assessment tools is widely considered to be “…akin to estimating the speed of a car based on its outward appearance…”5 and is thus obsolete in modern educational research.

Regardless of the topic, we as a scientific community need to carefully evaluate the study methods used. Accordingly, if the Journal and our community wish to take medical education research in plastic surgery seriously, we need to apply the same rigorous standards as expected in clinical research.

In plastic surgery, the fundamental necessity of learning surgical skills and the increased focus on patient safety and work-hour restrictions call for more research on evidence-based learning—including in this Journal.


DISCLOSURE

The author has no financial interest to declare in relation to the content of this communication.

Martin Frendø, M.D.
Copenhagen Academy for Medical Education and Simulation
Rigshospitalet University Hospital
University of Copenhagen
Blegdamsvej 9
DK-2100 Copenhagen Ø, Denmark
martin.frendoe-soerensen.01@regionh.dk


REFERENCES

1. McKinnon VE, Kalun P, McRae MH, Sonnadara RR, Fahim C. A shift on the horizon: A systematic review of assessment tools for plastic surgery trainees. Plast Reconstr Surg. 2018;142:217e–231e.
2. Borgersen NJ, Naur TMH, Sørensen SMD, et al. Gathering validity evidence for surgical simulation: A systematic review. Ann Surg. 2018;267:1063–1068.
3. American Psychological Association; National Council on Measurement in Education; Joint Committee on Standards for Educational and Psychological Testing (U.S.). Standards for Educational and Psychological Testing. Washington, DC: American Psychological Association; 2014.
4. Messick S. Validity. ETS Res Rep Ser. 1987;1987:i–208.
5. Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: Theory and application. Am J Med. 2006;119:166.e7–16.
Copyright © 2019 by the American Society of Plastic Surgeons