
The Stanford Microsurgery and Resident Training (SMaRT) Scale: Validation of an On-Line Global Rating Scale for Technical Assessment

Satterwhite, Thomas MD; Son, Ji BS; Carey, Joseph MD; Echo, Anthony MD; Spurling, Terry BS; Paro, John MD; Gurtner, Geoffrey MD; Chang, James MD; Lee, Gordon K. MD, FACS

doi: 10.1097/SAP.0000000000000139
Research Articles

Introduction We previously reported results of our on-line microsurgery training program, showing that residents who had access to our website significantly improved their cognitive and technical skills. In this study, we report an objective means by which expert evaluators can reliably rate trainees’ technical skills under the microscope using our novel global rating scale.

Methods “Microsurgery Essentials” (http://smartmicrosurgery.com) is our on-line training curriculum. Residents were randomly divided into 2 groups: 1 group reviewed this on-line resource and the other did not. Pre- and post-tests consisted of videotaped microsurgical sessions in which the trainee performed “microsurgery” on 3 different models: a latex glove, a Penrose drain, and the dorsal vessel of a chicken foot. The SMaRT (Stanford Microsurgery and Resident Training) scale, consisting of 9 categories graded on a 5-point Likert scale, was used to assess the trainees. Results were analyzed with ANOVA and the Student t test, with P less than 0.05 indicating statistical significance.

Results Seventeen residents participated in the study. The SMaRT scale adequately differentiated the performance of more experienced senior residents (PGY-4 to PGY-6; total average score, 3.43) from that of less experienced junior residents (PGY-1 to PGY-3; total average score, 2.10; P < 0.0001). Residents who viewed themselves as confident received a higher score on the SMaRT scale (average score, 3.5) than residents who were not as confident (average score, 2.1; P < 0.001). There were no significant differences in scoring among the 3 evaluators (P > 0.05). Additionally, junior residents who had access to our website showed a significant increase in their graded technical performance of 0.7 points, compared with an improvement of only 0.2 points among residents who did not have access to the website (P = 0.01).

Conclusions Our SMaRT scale is valid and reliable in assessing the microsurgical skills of residents and other trainees. Current trainees are more likely to use self-directed on-line education because of its easy accessibility and interactive format. Our global rating scale can help ensure residents are achieving appropriate technical milestones.

From the Division of Plastic and Reconstructive Surgery, Department of Surgery, Stanford University Medical Center, Stanford, CA.

Received July 7, 2013, and accepted for publication, after revision, December 13, 2013.

Conflicts of Interest and Sources of Funding: This work was supported by the Plastic Surgery Foundation (PSF) Pilot Research Grant 2012.

Presented at the California Society of Plastic Surgeons’ Annual Conference, San Francisco, CA, May 2013.

Reprints: Gordon K. Lee, MD, FACS, Division of Plastic Surgery, Department of Surgery, Stanford University Medical Center, 770 Welch Road, Suite 400, Stanford, CA 94305. E-mail: glee@stanford.edu.

There is no consensus on a single validated method to assess the microsurgical skills of residents and trainees. Several models of microsurgery education are available: case logbooks, self-assessment, direct subjective observation, and observation with grading. Logbooks document residents’ presence in the operating room, capturing only exposure to microsurgical procedures, and do not measure competency. Direct observation by expert surgeons in the operating room can be quite subjective, leaving the resident to wait until the senior surgeon deems them “ready.” Given constraints on the resident workweek and the need to ensure patient safety, a validated assessment scale is needed to ensure appropriate progression of resident training.

To address the shortcomings of the traditional model, simulators have been developed in other surgical fields, such as laparoscopy, urology, obstetrics, and vascular surgery, and have been used to train residents and students in a safe, controlled, risk-free setting.1–3 Positive, valid outcomes have been achieved using global rating scales and checklists in objective structured assessments of technical skills in some specialties.4 In particular, global rating scales have been shown to be quick, efficient, valid, and reliable.5–7

We have previously reported the effectiveness of our on-line microsurgery curriculum.8 The goal of the current study was to determine the validity and reliability of our novel microsurgery global rating scale, which we refer to as the Stanford Microsurgery and Resident Training (SMaRT) scale, in evaluating resident performance.


METHODS

A webpage for microsurgery education, entitled “Microsurgery Essentials,” was developed at our institution as previously described,8 and it is publicly available at http://smartmicrosurgery.com. The on-line curriculum includes 4 sections entitled Preparation, Practice Models, Suturing, and Intra-Operative. The on-line modules contain descriptions of the instruments, suturing, and the basic essentials of microsurgical technique.

The experimental assessment of the residents was conducted as previously described.8 All residents completed a brief introductory didactic session, a baseline written pre-test, and a baseline recorded microscope session. Residents were then randomly divided: one group had access to the on-line resource over a 1-week period, and the other group did not. Post-tests, consisting of a written quiz and a microsurgical session, were then administered.

The video session consisted of the trainee performing “microsurgery” on 3 models of increasing fidelity: a latex glove model, a Penrose drain, and the dorsal vessel of a chicken foot (Fig. 1). Each resident placed 3 interrupted 10-0 nylon sutures without direct guidance. The latex glove model was created by applying a piece of a latex glove over a cutout portion of cardboard. The chicken foot dorsal vessel was dissected and utilized as previously described.9 Each session was recorded, and the videos were de-identified and uploaded to our on-line database.

FIGURE 1

Microsurgery videos were then critically evaluated by expert evaluators who were blinded to the subjects’ level of training. The global rating scale used in the assessment of residents is detailed in Table 1. The 9 categories include Instrument Handling, Respect for “Tissue,” Efficiency, Suture Handling, Suturing Technique, Quality of Knot, Final Product, Operation Flow, and Overall Performance. Each category was graded on a 5-point Likert scale, in which 1 = failure and 5 = superior performance; the maximum achievable score is 5.

TABLE 1
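The article does not state explicitly how the 9 category ratings are aggregated, but the overall scores reported in the Results (for example, a total average score of 3.43 against a maximum of 5) are consistent with taking the mean of the category ratings. A minimal formulation under that assumption, where r_i denotes the Likert rating for the i-th category, is:

\[
\text{SMaRT score} \;=\; \frac{1}{9}\sum_{i=1}^{9} r_i ,
\qquad r_i \in \{1, 2, 3, 4, 5\},
\qquad 1 \le \text{SMaRT score} \le 5 .
\]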

Statistical analysis was performed using Stata/IC 11 software (StataCorp, College Station, TX). The two-tailed Fisher exact test was used for categorical data and the Student t test (unequal variance) for continuous values, with P less than 0.05 indicating statistical significance.
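As an illustration only, the same two tests can be run in Python with SciPy in place of Stata; the sketch below uses made-up placeholder scores and counts rather than the study data.

# Illustrative sketch of the tests named above, using SciPy rather than Stata.
# All values below are hypothetical placeholders, not the study data.
from scipy import stats

# Hypothetical average SMaRT scores for two groups of residents
group_a_scores = [2.1, 1.9, 2.4, 2.0, 2.3]   # e.g., junior residents
group_b_scores = [3.2, 3.6, 3.4, 3.5]        # e.g., senior residents

# Student t test with unequal variance (Welch's t test) for continuous scores
t_stat, p_continuous = stats.ttest_ind(group_a_scores, group_b_scores, equal_var=False)

# Two-tailed Fisher exact test for a hypothetical 2 x 2 categorical comparison
contingency = [[7, 2],   # group A: outcome present, outcome absent
               [3, 5]]   # group B: outcome present, outcome absent
odds_ratio, p_categorical = stats.fisher_exact(contingency, alternative="two-sided")

print(f"Welch t test: t = {t_stat:.2f}, P = {p_continuous:.3f}")
print(f"Fisher exact test: OR = {odds_ratio:.2f}, P = {p_categorical:.3f}")

As in the study, P values less than 0.05 in this sketch would be read as statistically significant.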


RESULTS

A total of 17 residents were included in this study, representing all years of the program from post-graduate year (PGY) 1 through 6. Nine residents were randomized to have access to the “Microsurgery Essentials” website, while the remaining 8 residents did not.


Validity of the SMaRT Scale

Senior residents (PGY 4 through 6) were scored higher on the SMaRT scale than junior residents (PGY 1 through 3), receiving scores of 3.4 compared with 2.1 (P < 0.001), as seen in Figure 2. Additionally, residents with more experience were scored higher on the SMaRT scale: residents who had performed more than 10 previous micro-anastomoses received an average score of 3.5 compared with 2.3 among those who had performed fewer than 10 (P = 0.02), as seen in Figure 3. Residents were asked to rate their level of confidence in performing microsurgery. As illustrated in Figure 4, residents who perceived themselves as confident received a higher score on the SMaRT scale (average score, 3.5) than residents who were not as confident (average score, 2.1; P < 0.001). When correlated with written pre-test performance, as seen in Figure 5, residents who scored well on the written test (at least 80% correct) were also scored higher on the SMaRT scale, receiving an average score of 3.7 compared with 2.5 among residents whose written pre-test score was less than 80% (P = 0.03).

FIGURE 2

FIGURE 3

FIGURE 4

FIGURE 5


Reliability of the SMaRT Scale

Inter-rater reliability was assessed for the SMaRT scale. As illustrated in Figure 6, for a given PGY-6 resident (red bars), the average SMaRT scores were similar among the 3 evaluators: 3.4, 3.9, and 3.3 (P > 0.05). Similarly, for a given PGY-1 resident (blue bars), the average SMaRT scores were similar among the 3 evaluators: 1.2, 1.0, and 1.4 (P > 0.05). Overall, there was no statistically significant difference in scoring among the 3 expert evaluators.

FIGURE 6


Assessing Improvement With the SMaRT Scale

Residents who had access to the online curriculum showed improvement in their average SMaRT scores. Those with access to the website had a 0.5-point increase in their average score on re-evaluation of their technical performance, compared with only a 0.3-point increase among residents who did not have access to the website, as illustrated in Figure 7. Further sub-analysis, presented in Figure 8, showed that junior residents who used the online curriculum had a greater improvement in their SMaRT score (an average increase of 0.7 points) than senior residents, whose SMaRT score increased by only 0.2 points (P < 0.001).

FIGURE 7

FIGURE 8


DISCUSSION

We have previously reported on the effectiveness of our on-line microsurgery curriculum in improving the cognitive and technical performance of our residents.8 We showed that access to the website improved performance on a written knowledge-base assessment and reduced the time required to complete a defined microsurgical task. In the current study, we demonstrate the validity and reliability of our global rating scale, the SMaRT scale, in assessing residents’ microsurgical technical ability and in documenting improvement after completion of a web-based curriculum.

A plethora of evaluation tools currently exists: case logbooks, self-assessment, direct observation without criteria, direct observation with criteria, animal models with criteria, non-animal models with criteria, hand motion analysis, and virtual reality models.1–4,10–14 However, published data suggest that direct observation with criteria is most helpful in giving trainees the feedback needed to improve all aspects of microsurgical skill.5,7 Global rating scales provide a more thorough assessment of competence (ability to do something successfully), dexterity (skill in performing tasks), and validity (how closely the task resembles a real-life situation).15,16 Global rating scales, however, have the disadvantage of requiring a senior member of the faculty to perform the assessment, which, we have noted, can be time-intensive. Uploading the videos on-line and allowing the expert evaluators to review them at their convenience does ease this burden. Written and audio commentary from the evaluator can also be added to provide the trainee with an individualized assessment of their performance.

Previous studies have shown that microsurgical global rating scales are effective in assessing trainees.6,17 Temple et al demonstrated that their UWOMSA (University of Western Ontario Microsurgical Skills Acquisition/Assessment) scale was reliable and valid in measuring residents’ performance.6 Our scale has additional components and has been adapted and optimized to allow video assessment. It was created by microsurgical experts based on factors deemed important for technical performance. Admittedly, our scale assesses only performance under the microscope and does not thoroughly assess the resident’s performance outside of view, such as body positioning, microscope setup, and selection of instruments. Direct observation of the resident may be necessary to allow a more thorough assessment of the trainee.

The validity of our SMaRT scale was verified on multiple levels. A shorter time to complete the microsurgical task, a higher level of experience, better performance on the written test, and higher perceived confidence all correlated with a higher SMaRT score. Several other characteristics of the trainee may also contribute to improved technical performance. Previous studies have shown that visual-spatial aptitude can be a contributing factor to improved technical performance.18,19 However, adequate training can bring all residents to the same level as those who are innately gifted in visual-spatial aptitude.19 Additional efforts can also be made to identify non-technical skills, such as the ability to work in a team, collegiality, communication skills, professionalism, and knowledge, that play a role in surgical performance.20

The residents’ self-perception correlated with their SMaRT scores. Residents who were confident in their abilities were found to have more previous experience; thus, it seems that perceived confidence could be used as a reliable predictor of performing well. Our findings corroborate the results of Shanedling and colleagues, who showed that orthopedic surgery residents’ online perception of preparedness was a reliable marker of readiness to pass a cadaveric motor skills test of carpal tunnel release surgery.21

Ultimately, we are working to determine whether our SMaRT scale correlates with actual performance in the operating room. The residents in the present study were assessed in a controlled laboratory environment with models of limited fidelity. Further assessment of the patency of the anastomotic repairs in the chicken foot dorsal vessel can be made by injecting the vessel with saline, as we have previously described.9 Previous studies have shown that evaluations in a simulation-based setting translate well to an actual operating room environment.14,22 Assessment of our trainees while operating on real patients will be needed to provide the ultimate validation of our scale. In our program, we are working to establish a chronological, yearly assessment of the residents (from intern to chief year) to monitor improvement over the course of residency training. If any deficiencies are noted, steps can be taken to help the trainee reach the expected level of technical performance. The goal is to ensure that milestones are being met before residents operate on humans and before graduation from the program.


CONCLUSION

Our proposed curriculum involves an initial didactic session, followed by review of our on-line “Microsurgery Essentials” website, with subsequent video-recorded sessions under the microscope. The recordings are reviewed and evaluated by expert microsurgeons using the SMaRT (Stanford Microsurgery and Resident Training) scale, providing the trainee with timely feedback. Our web-based curriculum and its associated global rating scale allow residents to practice microsurgery before stepping into the operating room, reducing frustration for all involved, creating a more efficient educational environment, and ensuring maximal safety for the patient.


REFERENCES

1. Tedesco MM, Pak JJ, Harris EJ Jr, et al. Simulation-based endovascular skills assessment: the future of credentialing? J Vasc Surg. 2008; 47: 1008–1; discussion 14.
2. Aggarwal R, Grantcharov TP, Eriksen JR, et al. An evidence-based virtual reality training program for novice laparoscopic surgeons. Ann Surg. 2006; 244: 310–314.
3. Chaer RA, Derubertis BG, Lin SC, et al. Simulation improves resident performance in catheter-based intervention: results of a randomized, controlled study. Ann Surg. 2006; 244: 343–352.
4. Goff B, Mandel L, Lentz G, et al. Assessment of resident surgical skills: is testing feasible? Am J Obstet Gynecol. 2005; 192: 1331–1338; discussion 8–40.
5. Regehr G, MacRae H, Reznick RK, et al. Comparing the psychometric properties of checklists and global rating scales for assessing performance on an OSCE-format examination. Acad Med. 1998; 73: 993–997.
6. Temple CL, Ross DC. A new, validated instrument to evaluate competency in microsurgery: the University of Western Ontario Microsurgical Skills Acquisition/Assessment instrument [outcomes article]. Plast Reconstr Surg. 2011; 127: 215–222.
7. Reznick R, Regehr G, MacRae H, et al. Testing technical skill via an innovative “bench station” examination. Am J Surg. 1997; 173: 226–230.
8. Satterwhite T, Son J, Carey J, et al. Microsurgery education in residency training: validating an online curriculum. Ann Plast Surg. 2012; 68: 410–414.
9. Satterwhite T, Son J, Echo A, et al. The chicken foot dorsal vessel as a high-fidelity microsurgery practice model. Plast Reconstr Surg. 2013; 131: 311e–312e.
10. Achar RA, Lozano PA, Achar BN, et al. Experimental model for learning in vascular surgery and microsurgery: esophagus and trachea of chicken. Acta Cir Bras. 2011; 26: 101–106.
11. Carlson ML, Archibald DJ, Sorom AJ, et al. Under the microscope: assessing surgical aptitude of otolaryngology residency applicants. Laryngoscope. 2010; 120: 1109–1113.
12. Erel E, Aiyenibe B, Butler PE. Microsurgery simulators in virtual reality: review. Microsurgery. 2003; 23: 147–152.
13. Lannon DA, Atkins JA, Butler PE. Non-vital, prosthetic, and virtual reality models of microsurgical training. Microsurgery. 2001; 21: 389–393.
14. Starkes JL, Payk I, Hodges NJ. Developing a standardized test for the assessment of suturing skill in novice microsurgeons. Microsurgery. 1998; 18: 19–22.
15. Kalu PU, Atkins J, Baker D, et al. How do we assess microsurgical skill? Microsurgery. 2005; 25: 25–29.
16. Chan WY, Matteucci P, Southern SJ. Validation of microsurgical models in microsurgery training and competence: a review. Microsurgery. 2007; 27: 494–499.
17. Moulton CA, Dubrowski A, Macrae H, et al. Teaching surgical skills: what kind of practice makes perfect?: a randomized, controlled trial. Ann Surg. 2006; 244: 400–409.
18. Maan ZN, Maan IN, Darzi AW, et al. Systematic review of predictors of surgical performance. Br J Surg. 2012; 99: 1610–1621.
19. Wanzel KR, Hamstra SJ, Anastakis DJ, et al. Effect of visual-spatial ability on learning of spatially-complex surgical skills. Lancet. 2002; 359: 230–231.
20. Baldwin PJ, Paisley AM, Brown SP. Consultant surgeons’ opinion of the skills required of basic surgical trainees. Br J Surg. 1999; 86: 1078–1082.
21. Shanedling J, Van Heest A, Rodriguez M, et al. Validation of an online assessment of orthopedic surgery residents’ cognitive skills and preparedness for carpal tunnel release surgery. J Grad Med Educ. 2010; 2: 435–441.
22. Anastakis DJ, Regehr G, Reznick RK, et al. Assessment of technical skills transfer from the bench training model to the human model. Am J Surg. 1999; 177: 167–170.
Keywords: microsurgery; on-line education; residency; surgery simulation; global rating scale

© 2014 by Lippincott Williams & Wilkins