Neurosurgery: October 2013 - Volume 73
doi: 10.1227/NEU.0000000000000115
History and Developing Simulation in Medicine

Surgical Expertise in Neurosurgery: Integrating Theory Into Practice

Gélinas-Phaneuf, Nicholas MD; Del Maestro, Rolando F. MD, PhD

Section Editor(s): Harrop, James S. MD; Bendok, Bernard R. MD


Author Information

Neurosurgical Simulation Research Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada

Correspondence: Rolando F. Del Maestro, MD, PhD, Neurosurgical Simulation Research Centre, Montreal Neurological Institute and Hospital, 3801 University, Room 438M, Montreal, Quebec, Canada, H3A 2B4. E-mail: rolando.delmaestro@mcgill.ca

Received June 17, 2013

Accepted July 15, 2013


Abstract

The development of technical skills is a major goal of any neurosurgical training program. Residency programs in North America are focused on achieving an adequate level of training to produce technically competent surgeons. The training requirements and educational environments needed to produce expert surgeons are incompletely understood. This review explores the theoretical implications of training technical skills to expertise rather than competency in a complex field such as neurosurgery. First, the terms technical expertise and technical competency are defined; definitions of these qualities are lacking in all surgical specialties. Second, the assessment of the technical skills of neurosurgeons is investigated using an expert performance approach. This approach entails the design of tasks that can capture the level of expertise in a reproducible manner. One method to accomplish this involves the use of novel simulators with validated performance metrics. Third, the training of technical skills using simulation is examined from the perspective of developing training curricula that target expertise rather than simple competency. Such curricula should include objective assessments of technical skills, appropriate feedback, and a distributed schedule of deliberate practice. Implementing a focus on the development of expertise rather than simple competency in surgical performance will lead to innovative developments in the field of neurosurgical education. Novel technologies, such as simulation, will play important roles in the training of future expert surgeons, and focused technical skills curricula with a sound theoretical basis should guide the development of all such programs.

ABBREVIATIONS: OSATS, Objective Structured Assessment of Technical Skill; VR, virtual reality

Ever since humans began performing surgical treatments on their peers, a high degree of competency has been a reasonable expectation of both the patient and society. Recently, the assessment of technical skills competency has brought the critical topic of expertise to the forefront. This review addresses technical skills expertise in the domain of neurosurgery to advance the design of research programs dedicated to improving the training, assessment, and certification of “expert” surgeons. First, the definitions of technical skills expertise in surgery are explored. There is a growing concern that expertise, as defined by surgeons, is only a representation of competency; the lack of clear definitions is presented and the consequences of this issue are discussed. The assessment of surgeons’ skills, primarily technical skills, is reviewed in the context of an expert performance approach to expertise. As defined by Ericsson and Charness,1 the expert performance approach is a laboratory technique that allows assessment of the extent of expertise in a context as close as possible to the real task. Multiple tools to assess the technical skills of surgeons have been designed and validated in the past 2 decades, and their results are reviewed. The evidence relating expert performance to surgical simulation is discussed, and the training of neurosurgeons to achieve an “expert” level of technical performance is reviewed. The literature outlining the path to expertise in surgery is appraised in the context of a theoretical training curriculum combining expert performance assessment, deliberate practice, and simulation in neurosurgery.


A DEFINITION OF EXPERTISE?

Beginning in the late 19th century, surgeons were trained and assessed in a prolonged and organized fashion under the supervision of senior doctors.2 This novice/expert apprenticeship program remains the dominant method of training surgeons.3 New technologies, societal pressure, and a host of external constraints such as lawsuits and reduced residents' working hours have prompted a review of this model of teaching.2,4 It is now becoming increasingly difficult to practice one's surgical skills on patients without having reached a certain level of “expertise.” Most authors agree that new methods of training are required, but a fundamental (unanswered) question remains: Are current training programs supposed to train surgeons to an “expert” level or only to a competent level of performance?

In 1990, a report discussing the perception of expertise in medicine used indicators of expertise such as years of experience, specialty board certification, and/or academic rank or responsibility.5 It is evident, however, that these criteria correlate poorly with superior clinical performance. One example of the failure of such indicators to capture expertise involved peer-nominated diagnostic experts who did not objectively perform better than novices on standard cases.6 A systematic review that analyzed the number of years in practice alongside clinical performance found a negative correlation, suggesting that more experience can paradoxically be associated with lower clinical performance.7 These findings outline the lack of consensus on what constitutes an “expert” physician or surgeon and the difficulty of objectively evaluating performance.

A precise definition of expertise in neurosurgery is needed. The current literature predominantly focuses on surgical “competency” rather than surgical “expertise.”8 This preference reflects the growing importance of competency-based assessment, which has become the favored method of evaluation in training programs. Definitions of surgical competence usually involve 2 aspects: technical skills and the “other skills.” The latter have been outlined by 2 major groups in North America: the Royal College of Physicians and Surgeons of Canada and the Accreditation Council for Graduate Medical Education in the United States. These 2 organizations have highlighted aspects such as professionalism, communication skills, medical expertise, and collaboration skills as important “other skills.”9,10 A recent and very comprehensive definition states that surgical competence encompasses the knowledge, technical skills, and social skills needed to solve familiar and novel situations and to provide adequate patient care.8 This definition focuses, interestingly, on “adequate” rather than “excellent” patient care as the goal.

To evaluate competency, 2 general approaches can be used: the behaviorist approach, in which specific behaviors are rated, and the holistic approach, in which many aspects are combined and evaluated. Bhatti and Cummings8 concluded that the holistic approach is probably the best way to assess competency but also the most difficult because of the lack of defined methodologies to measure the “other skills.” This review concentrates on the behaviorist approach because technical skills are measurable behaviors.

Confusion is apparent when patients and doctors are asked to define surgical competency. For a researcher and a neurosurgical training program, competency is more than just passing an examination.8 For the patient and family, competency is the complex set of “skills” that contribute to an excellent outcome. The definition of expert and expertise can vary from specialty to specialty and even within a single surgical specialty. Is a consensus definition of expertise therefore possible, and to what extent would such a definition affect the training of neurosurgeons and, ultimately, patient care?

With regard to patient care, the report “To Err Is Human”11 concluded that as many as 98,000 deaths are caused by medical errors each year and that the majority of these errors are preventable. After the introduction of laparoscopic cholecystectomy, studies showed that surgeons who had performed fewer than 30 laparoscopic cholecystectomies had a fivefold increase in bile duct injury.12 This prompted the general surgical community to reevaluate training methods and to develop box-trainer simulations and virtual reality (VR) simulators.2 In most general surgery studies, technical competency was usually defined as the number of procedures performed. Many neurosurgical procedures are not as stereotypical and as easily classified as those of general surgery. In neurosurgery, there may be significant variability in the surgical approach used to deal with a specific operative lesion. This inherent variability may result from the lack of phase III data that could be used to evaluate the short- and long-term patient outcomes of different operative approaches. This creates a major problem when one tries to define technical competency for a specific operative intervention. To remedy this problem, a group of authors states that clear benchmarks should be developed and made available for every operation to minimize complications and improve results.2 This process is inherently difficult and would need to be implemented for all neurosurgical procedures. No clear operational definition of technical competency/expertise in neurosurgery can, at present, be outlined.

A major assumption in most studies regarding expertise in medicine is that novices will inevitably become experts with enough practice. Mylopoulos and Regehr13 argue that routine expertise can be achieved this way but that a more accurate representation of expert performance probably relates to adaptive expertise. Routine experts are akin to skilled technicians in their specific domain, but they lack the training to face novel problems. Adaptive experts stretch the boundaries of their own limits by using flexible and creative means to solve complex and unexpected situations. A qualitative study on this topic involving undergraduate medical students found that students believed it was beyond the scope of their training to acquire innovative thought processes.14 The reasons for these beliefs are unclear, but this issue needs to be addressed if adaptive expertise is to be a reasonable goal of neurosurgical training. Surgical expertise is best described as an adaptive expertise because, like pilots, neurosurgeons are trained to deal with complex situations involving unplanned and life-threatening events.15

The assumption behind all these issues is that understanding what is required to be an expert and how experts achieve their level of expertise will allow the development of training programs that enable more neurosurgical residents and practicing neurosurgeons to become “experts” and to maintain their expertise.16 How the CanMEDS competency model or the Accreditation Council for Graduate Medical Education core competencies model helps achieve the goal of increasing neurosurgical technical expertise needs further study.


EXPERT PERFORMANCE IN SURGERY: CURRENT EVIDENCE

Expertise can be represented by the ability to reproduce consistently superior performance on a given task, on demand.17 This task, when it represents the essence of what an expert does in a given field, can be used in the expert performance paradigm. It is the researcher's task to ensure that the chosen activity truly represents points of expertise in the domain and not just associated epiphenomena.18 The issues of experience, instruction, and expertise have been extensively studied by the group led by Ericsson over the past 2 decades.1,17,18 To achieve a high level of expertise, experts need to master multiple technical factors, including the techniques involved in their specialty, and must be motivated to pursue the initial and continued development of expertise in these techniques.17-19 Failure to continue to improve and develop these techniques over time results in a significant loss of performance (Figure 1). Ericsson17,20 has proposed that an expert performance scheme could (and should) be adaptable to surgery. Research methodologies assessing expert performance are best carried out in constrained environments, be it a laboratory or an operating room. One must find a task that represents, with high fidelity, an arena in which the expert will give consistently superior performance and ensure that this superior performance is actually derived from the alleged expertise.20

Figure 1

The current literature on the assessment of superior performance in surgery involves technical skills pertaining to specific surgical procedures. Simulator studies have investigated surgical technical performance outside the operating room setting but have rarely addressed “other skills.”21 An objective assessment of technical skills has always been a goal of any neurosurgical training program. These assessments are carried out to ensure that trainees have adequately mastered the different techniques required for their specific specialty. An extensive study has shown that checklists fail to measure differences between various levels of expertise.22 Checklists discriminate expertise only up to a certain point, after which those with intermediate expertise score as well as experienced surgeons. This is in contrast to global rating scales, which maintained their ability to differentiate levels of expertise over a larger range.22,23 Most studies on this topic use an expert-novice design to obtain various forms of validity for the tool being tested.24 To integrate any technique or new technology into a formal training curriculum, it must be proved that training with it is useful and appropriate. A well-established series of validation steps needs to be undertaken: face, content, construct, and concurrent validity. Face and content validity determine that a technology is realistic and targets skills that need to be trained. Construct validity establishes that the scores obtained correlate with actual operative technical skill by discriminating experts from novices. This enables novices to practice and train on the technology until they reach the performance of an expert. The final step, concurrent validity, is particularly important should the technology be used for assessment, as it demonstrates that the skills acquired during training reflect performance in the operating room. Reznick's group developed a tool to measure the performance of trainees and experts in the field of general surgery. This tool, the Objective Structured Assessment of Technical Skill (OSATS), has been shown to be both valid and reliable in multiple studies.25-27 However, the OSATS has never been validated in neurosurgery and could yield inaccurate assessments if used in neurosurgical studies without proper validation. A group from McGill University designed another tool to evaluate the performance of experts and novices during live laparoscopic surgery. This tool, the Global Operative Assessment of Laparoscopic Skills, has also been shown to be valid and reliable.28 A variety of other scales have been developed for other surgical specialties, including ophthalmology, gastroenterology, and ear-nose-throat surgery.29-31 There is currently no validated tool of this kind in neurosurgery. A current project, the Global Assessment of Intraoperative Neurosurgical Skills, is trying to address this issue in neurosurgery.32 The Global Assessment of Intraoperative Neurosurgical Skills uses the same principle of expert-novice comparison to validate the tool. A major limitation in designing such a tool is the definition of what constitutes an expert in neurosurgery. Finding appropriate tasks that allow an accurate assessment of the full extent of a subject's expertise in the operating room is a complex problem. A number of skills besides technical expertise influence the performance of any neurosurgical procedure. The evaluation of the many technical, cognitive, and social contributions that affect overall global performance is difficult. For example, brain tumors can be removed using multiple techniques, the majority of which have no randomized, controlled data to help determine whether one technical approach is equal or superior to another.
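
To illustrate the construct-validity step described above, the core analysis is simply a comparison of expert and novice scores on a candidate metric. The following is a minimal sketch only, using hypothetical simulator scores and an assumed nonparametric test choice; it is not drawn from any of the cited validation studies.

# Minimal sketch of an expert-novice construct-validity check for a single
# simulator metric. The metric name, scores, and group sizes are hypothetical.
from scipy.stats import mannwhitneyu

# Hypothetical "extent of tumor resection" scores from a simulated task,
# where higher scores are assumed to indicate better technical performance.
novice_scores = [42.0, 55.5, 48.3, 61.2, 50.7, 44.9, 58.1, 52.4]
expert_scores = [78.6, 83.1, 75.4, 88.0, 81.7, 79.9]

# One-sided nonparametric test: are novice scores stochastically lower than
# expert scores? A significant result supports construct validity of the metric.
statistic, p_value = mannwhitneyu(novice_scores, expert_scores, alternative="less")

print(f"Mann-Whitney U = {statistic:.1f}, P = {p_value:.4f}")
if p_value < 0.05:
    print("The metric discriminates experts from novices (supports construct validity).")
else:
    print("No significant expert-novice difference (construct validity not supported).")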

Evaluations of performance need to be carried out by experts to achieve the highest levels of validity and reliability. Both the OSATS and the Global Operative Assessment of Laparoscopic Skills used expert reviewers to rate performance.26,28 This produced a level of reliability high enough to be used in high-stakes evaluations.28 Rating with such scales can be carried out during the procedure itself or a posteriori using recorded video. It is critically important to train the raters before using the tool to achieve a higher level of interrater reliability.33 Finally, using video recordings can shorten the evaluation time required per trainee, with a mean reduction to 15 minutes per evaluation.34 Assuming that every novice will become an expert gives rise to the possibility that testing simple tasks will lead to “the risk of narrowing the definition of expertise to performance of the mundane.”13 The designer of such assessment tools must therefore ensure that the task involved is appropriately complex and representative. This will not only increase the face validity of the task but also lead to closer agreement between the laboratory task and the actual operating room experience.

Using Simulation to Reach Expertise

This section discusses the assessment and training of surgeons on simulators using the expert performance approach, cognizant of the differences between real and simulated operations. First, a patient's surgical lesion presents a variety of individual characteristics that are currently difficult to integrate into a simulator or a laboratory setting, although patient-specific rehearsal is being developed.19 However, the simulated task needs to be as “lifelike” as possible to remain within the boundaries of an expert performance approach.17 A simulated task is by nature constrained, but this has advantages when one evaluates different individuals because it ensures comparability. The transfer of competency/expertise from the simulator to the operating room needs to be carefully evaluated. Simulators can be physical models,35 augmented reality models, such as ProMIS,36 or complete VR models.37,38 A variety of simulators are being developed in the field of neurosurgery.38-41 Simulators allow trainees to practice in a safe environment without risk to the patient and remove the stress of the operating room from the task.2 The optimal site and setup for a neurosurgical simulation center are not known, but it seems reasonable that such simulators be housed in locations that are easily accessible to residents. To maximize resident, staff, and researcher interaction, the Neurosurgical Simulation Research Centre at the Montreal Neurological Institute and Hospital, which includes a variety of simulators and resident workstations, was located near a patient care area, a short distance from the hospital operating rooms (Figure 2).

Figure 2

There are theoretical advantages to high-fidelity simulators. According to the expert performance approach of Ericsson and Charness,1 the closer one gets to the essence of expertise in a given field, the better the insights concerning the differences between novices and experts. This is supported by a randomized study showing that a task-specific simulator yielded better results on specific suturing tasks in laparoscopic anastomoses than training on basic suturing tasks.42 High-fidelity simulators have not been analyzed in cost-effectiveness studies, and focused research is necessary to justify the cost of their development and use. The designers of simulation training curricula need to develop cost-efficient training programs that provide more than just simple technical skills on simulators because “learning how to carry out a surgical procedure, after the core skills have been mastered, is overwhelmingly a cognitive and not a technical task.”43

A major limitation of assessing expert performance on a simulator arises when only “part-tasks” are used. It has been demonstrated that superior performance on “part-tasks” can be attained without superior performance on the whole task.18 Moving a simulated ring from peg to peg in a VR laparoscopic environment does not actually evaluate expertise but merely a component of the psychomotor skills required to achieve superior performance during live surgery. Conversely, a simple 2-dimensional crown-preparation VR simulator in dentistry was able to differentiate experts from novices.44 The aviation industry uses “almost real” simulation to train pilots: tasks are effectively complete, and part-tasks are not practiced, because the actual cockpit is used rather than a simplification of the instruments needed to pilot the plane.45 Developing patient-specific reconstruction for training and rehearsal is a future goal of VR surgical simulators,19 but its full integration into a virtual neurosurgical operating room will take time. Some systems are currently trying to integrate patient-specific data, notably during aneurysm clipping simulation on the Dextroscope46 and tumor resection on the NeuroTouch.19

Simulated tasks and the performance metrics derived from them need to be validated before being integrated into a training curriculum. Kneebone47 has outlined a conceptual framework for such a program integrating simulation. He postulated that simulation in this context must attain 4 goals: (1) deliberate practice in a safe environment, (2) availability of expert tutors who can effectively scaffold and enhance the teaching material, (3) authentic simulation that fits within a community of practice, and (4) an environment that takes into account the emotional component of learning. These 4 theoretical goals have not been adequately addressed and evaluated in simulation-based surgical training. Currently, most performance on simulators is evaluated using metrics generated by computer systems. VR simulators use different metrics that are not standardized across companies, making meaningful comparisons between simulators difficult.19,36,48-50 Even with reliable metrics, some studies have failed to show consistent construct validity, further limiting their use in a training curriculum.50 Future VR simulators should use novel, specific, and quantifiable metrics that can be thoroughly assessed for both construct validity and reliability before widespread adoption.

The expert performance model requires that laboratory/simulated tasks be constrained to ensure repeatability and objective assessment.18 Once expert performance is achieved on the simulator, one should determine whether this expertise improves the performance of the surgeon during “real” surgeries. Recent randomized, controlled trials and meta-analyses have addressed these issues.51-55 Data from these studies show that expert performance on a VR simulated task is associated with improved levels of performance in both pig models56 and actual “live” human surgeries.55

Simulators in neurosurgery can be used to explore 2 questions that relate directly to neurosurgical expertise. First, how do expert neurosurgeons actually perform neurosurgical operations? Can simulation technology be used to assess which visual, tactile, and/or other cues expert surgeons rely on during the technical components of their operations? Second, with the proper simulation tools and curriculum, can the goal of neurosurgical training programs be shifted from teaching to competence toward teaching to an expert level? A number of new simulation systems are being developed in the area of neurosurgery.19,38,40,57,58 One such project, NeuroTouch, is currently under development at the National Research Council of Canada.19,38,59 The goal of this project is to develop a 3-dimensional VR neurosurgical simulator with haptic feedback (Figure 2). The usefulness of this or other simulators in the assessment of expert performance and resident training remains to be defined.

A Training Curriculum Targeting Expertise: Is It a Possible Goal?

An expert performance approach to studying neurosurgical technical expertise should be transferable to a training curriculum. Because the literature on the topic primarily involves general surgery, a number of assumptions need to be made to incorporate these research results into the development of a neurosurgical curriculum. First, the theory of expert performance is a general theory, with possible applications in a variety of fields, including neurosurgery.17 Second, deliberate practice is a sine qua non requirement of such a program.20 Third, for the sake of this discussion, experts are assumed to be highly competent neurosurgeons because no consensus definition exists of what an expert neurosurgeon is. Fourth, feedback is an essential building block of such a program.20 Finally, such a training program should use current adult learning theories to achieve maximal efficiency.60

The pressure to change the training curriculum, with a focus on surgical skills, is an important issue faced by all neurosurgical residency training committees.61 There is no reason simulation cannot play a major role in the neurosurgical education of trainees, focused on improving their acquisition of technical skills.62 The use of simulation in a curriculum “should be driven by educational imperatives and not by technological innovation.”43 Objective assessment of technical skills is possible with a high degree of validity and reliability in surgery, with some data being developed for neurosurgery (Gélinas-Phaneuf et al. 2013, unpublished work).63 Initial studies involving medical students and neurosurgical residents were conducted in a pilot validation study using a NeuroTouch tumor resection scenario and demonstrated face, content, and construct validity in a competitive setting.63 At present, studies are being conducted using “expert” neurosurgeons in noncompetitive settings to increase our understanding of expert neurosurgical performance. Hence, the first requirement of an expert performance approach to assessing and training neurosurgeons can be met because performance on tasks (eg, the removal of an intracranial meningioma or metastasis) can be objectively measured.

Deliberate practice in an expertise-oriented curriculum is essential. Multiple publications have consistently demonstrated that expert performers in a variety of fields require more than 10 years of purposeful practice to achieve an international level of expertise.20 In the field of surgery, a role for innate talent has not been clearly demonstrated,61 and studies of the correlation between visuospatial skills and “expert” surgical performance have yielded contradictory results.64-66 Deliberate practice paradigms will need to be incorporated into the design of any curriculum whose goal is to train to specific levels of expertise. Ericsson20 has described deliberate practice as a purposeful, goal-oriented method that one uses to improve one's performance. It is not inherently pleasant, and it requires on the order of 10,000 hours to reach an international level of performance. In neurosurgery, this level of expertise may not be acquired until neurosurgeons reach their 40s or early 50s (Figure 3). A recent commentary67 calculated that 10,000 hours of deliberate practice are reached after approximately 6.9 years of training in neurosurgery. In Canada, this is equivalent to the length of a regular training program plus 1 additional year of fellowship. A critical question, however, has not been addressed: how much of the time in the hospital and operating room can actually be termed deliberate practice as opposed to the daily tasks of being a resident? If a neurosurgical training program's goal is to use deliberate practice to improve performance, then simulation may be a reasonable technology with which to achieve this goal. Simulation allows a resident time to practice both routine and critical technical skills using a goal-directed expert performance approach.
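
As a rough illustration of the arithmetic behind this estimate, the implied rate of deliberate practice can be back-calculated from the figures cited above; the per-year and per-week rates below are derived for illustration only and are not reported in the commentary:

\[
\frac{10\,000\ \text{h}}{6.9\ \text{y}} \approx 1450\ \text{h/y}, \qquad \frac{1450\ \text{h/y}}{50\ \text{wk/y}} \approx 29\ \text{h of deliberate practice per week}.
\]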

Figure 3

How to Incorporate Simulation Into Such a Curriculum?

Most surgical skills are natural extensions of directed psychomotor tasks. The model of skill acquisition described by Fitts and Posner68 is the most widely cited in surgery.4 This model describes 3 stages of skill acquisition. The first stage involves cognitive acquisition, in which an individual learns the multiple steps required to achieve the performance. Mistakes are frequent during this stage, and it represents the base of the learning curve. The second stage, the associative stage, results in a gradual improvement of performance. Mistakes become rarer, and this is represented by the rapid increase in skill on a learning curve. The last stage, the automatic stage (or plateau), is theorized to represent the point at which enough practice has been accomplished to perform the task while freeing substantial cognitive resources. This model raises a number of issues. First, authors disagree on the need to achieve automaticity. Proponents of the Fitts and Posner model have shown that the dual-task capabilities of expert surgeons increase when the task is automated.69 Also, the practice time required to achieve automaticity varies from one individual to the next and exceeded 100 repetitions on a simulator in 1 study.70 Ericsson17,20,71 argued that automaticity is counterproductive to the achievement of expertise. He suggested that once a task has reached automaticity, additional practice will not improve the performance but will only maintain the previous level of proficiency. This phenomenon has been named arrested development, and deliberate practice has been hypothesized to be 1 method of countering this roadblock to acquiring new skills72 (Figure 4). The debate on automaticity in learning a psychomotor task is still open, but the current evidence seems to point to a fusion of these 2 opposing views: basic psychomotor skills can and should be automatized, but the whole performance should always remain open to improvement through deliberate practice.

Figure 4

Simulated tasks have been shown to be helpful when designing a curriculum that implements the theoretical learning concepts previously outlined. A general surgery group integrated simulated tasks addressing the 3 stages of the Fitts and Posner model into the design of a competency-based curriculum.73 They demonstrated that the design of such a curriculum is feasible when assessing tasks of different levels of complexity and that proficiency on difficult tasks improved after training on easier ones. This may have resulted from partial automatization of the task. This group also demonstrated that VR whole-task simulation of laparoscopic cholecystectomy had construct validity.74 Dunphy and Williamson60 reviewed various models of expertise in medicine, surgery, and nursing. They combined the Fitts and Posner model with Vygotsky's zone of proximal development model,75 resulting in a conceptual system that seems intuitively correct. This new template has 4 stages; the first 3 are similar to those of Fitts and Posner but add the ideas of outside regulation during the first stage and self-regulation during the second stage, ideas taken from Vygotsky's zone of proximal development model, in which the teacher serves as a scaffold. This model allows for “regression” to earlier stages, permitting more deliberate practice.60 Because feedback is essential in a deliberate practice model, an understanding of the individual components of performance that would benefit from feedback is important.19,72 Without knowledge of the critical steps of a surgical procedure, it is difficult to give objective feedback tailored to a specific individual task.19,20 Cognitive task analyses have a definite role to play in determining the required steps of a specific surgical procedure.76 The type of feedback provided is also important: expert verbal feedback leads to better retention of a skill than simple computer printouts of performance.77 This may be an issue during self-practice on simulators, but regular feedback from the teacher during formal training sessions could compensate for this deficiency.78 The schedule of such practice is also important. There is evidence that practice distributed over time is more useful for long-term retention than massed practice.79 When a distributed curriculum was compared with nondistributed practice controls, distributed practice showed benefits on performance.80 A curriculum aimed at achieving long-term expertise should thus allow for regular, preferably at least weekly, training sessions with expert feedback.81 The role that script-based mental rehearsal plays in the acquisition and maintenance of neurosurgical technical skills needs to be further defined, and it could play an important role in resident teaching.86

The training of technical skills is only 1 component of a surgical curriculum. The development of multiple other skills is required to train a competent/expert neurosurgeon. This can prove difficult to implement in an expert performance fashion because objective measurement of some of those skills, such as professionalism and communication, is either very complex or not yet available. Pilot studies of crisis situation management in the operating room show promise for the training of these important “other skills.”82 NOTECHS (NOn-TECHnical Skills), a program derived from the aviation industry that can assess these skills in an operating room environment, has achieved variable levels of reliability across the domains assessed.83

Is Expertise a Final Endpoint?

The personal advantages of being an expert neurosurgeon are not always obvious, and the monetary incentives are minimal in current Canadian/American society.84 What known and novel methods can be used to measure and improve the competency of residents and practicing neurosurgeons toward a more expert level of performance? Continued learning after graduation is paramount in any expertise model. Limiting oneself to only automatized operating skills may result in arrested skills development and a decreased ability to deal with unexpected and rare life-threatening intraoperative events.72 Training solely on simulators is also a concern because “simulated” expertise is not the desired endpoint, only a means to an end. Deliberate practice using simulators may arm trainees with appropriate surgical skills before they enter the operating room, hopefully resulting in (1) decreased surgical errors and improved patient outcomes; (2) shortened learning times to master complex neurosurgical procedures; and (3) a lifelong interest in maintaining and augmenting their surgical skills.


CONCLUSION

The concept of surgical expertise has been explored in this review with the intent of developing a neurosurgical curriculum based on solid theoretical assumptions. The current literature on the subject is limited but can guide decisions when designing neurosurgical training programs. First, a clear consensus definition of expertise (or of a high degree of competency) should be established to guide future research. Expert neurosurgical performance needs to be measured in realistic situations and used as the goal in neurosurgical teaching. Deliberate practice and feedback should be the cornerstones of any such program. High-fidelity neurosurgical simulators and validated performance measurement tools should help propel neurosurgical research and training toward the goal of training to expert surgical performance (Gélinas-Phaneuf et al. 2013, unpublished work).63 Time will tell whether neurosurgeons of the future will be more “expert,” whichever definition is used.

Disclosures

This work was supported by the Franco Di Giovanni, B-Strong, and Tony Colannino Foundations, the Montreal English School Board, and the Montreal Neurological Institute and Hospital. Dr Gélinas-Phaneuf was funded by a generous contribution from the Harold and Audrey Fisher Brain Tumour Research Award. The authors have no personal financial or institutional interest in any of the drugs, materials, or devices described in this article.


REFERENCES

1. Ericsson KA, Charness N. Expert performance: its structure and acquisition. Am Psychol. 1994;49(8):725–747.

2. Tsuda S, Scott D, Doyle J, Jones DB. Surgical skills training and simulation. Curr Probl Surg. 2009;46(4):271–370.

3. Jaffer A, Bednarz B, Challacombe B, Sriprasad S. The assessment of surgical competency in the UK. Int J Surg. 2009;7(1):12–15.

4. Reznick RK, MacRae H. Teaching surgical skills: changes in the wind. N Engl J Med. 2006;355(25):2664–2669.

5. Elstein AS, Shulman LS, Sprafka SA. Medical problem-solving: a 10-year retrospective. Eval Health Professions. 1990;13(1):5–36.

6. Elstein A, Shulman L, Sprafka S. Medical Problem Solving: An Analysis of Clinical Reasoning. Cambridge, MA: Harvard University Press; 1978.

7. Choudhry NK, Fletcher RH, Soumerai SB. Systematic review: the relationship between clinical experience and quality of health care. Ann Intern Med. 2005;142(4):260–273.

8. Bhatti NI, Cummings CW. Competency in surgical residency training: defining and raising the bar. Acad Med. 2007;82(6):569–573.

9. Chou S, Cole G, McLaughlin K, Lockyer J. CanMEDS evaluation in Canadian postgraduate training programmes: tools used and programme director satisfaction. Med Educ. 2008;42(9):879–886.

10. Yaszay B, Kubiak E, Agel J, Hanel D. ACGME core competencies: where are we? Orthopedics. 2009;32(3):171.

11. Kohn LT, Corrigan J, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.

12. Moore MJ, Bennett CL. The learning curve for laparoscopic cholecystectomy. The Southern Surgeons Club. Am J Surg. 1995;170(1):55–59.

13. Mylopoulos M, Regehr G. Cognitive metaphors of expertise and knowledge: prospects and limitations for medical education. Med Educ. 2007;41(12):1159–1165.

14. Mylopoulos M, Regehr G. How student models of expertise and innovation impact the development of adaptive expertise in medicine. Med Educ. 2009;43(2):127–132.

15. Tsang PS, Vidulich MA, eds. Principles and Practice of Aviation Psychology. Boca Raton, FL: CRC Press; 2002.

16. Abernethy B, Poolton JM, Masters RS, Patil NG. Implications of an expertise model for surgical skills training. ANZ J Surg. 2008;78(12):1092–1095.

17. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004;79(10 suppl):S70–S81.

18. Ericsson KA, Lehmann AC. Expert and exceptional performance: evidence of maximal adaptation to task constraints. Annu Rev Psychol. 1996;47:273–305.

19. Choudhury N, Gélinas-Phaneuf N, Delorme S, Del Maestro R. Fundamentals of neurosurgery: virtual reality tasks for training and evaluation of technical skills. World Neurosurg. 2012 Nov 23. [Epub ahead of print].

20. Ericsson KA. An expert-performance perspective of research on medical expertise: the study of clinical performance. Med Educ. 2007;41(12):1124–1130.

21. Kahol K, Vankipuram M, Smith ML. Cognitive simulators for medical education and training. J Biomed Inform. 2009;42(4):593–604.

22. Hodges B, Regehr G, McNaughton N, Tiberius R, Hanson M. OSCE checklists do not capture increasing levels of expertise. Acad Med. 1999;74(10):1129–1134.

23. Regehr G, MacRae H, Reznick RK, Szalay D. Comparing the psychometric properties of checklists and global rating scales for assessing performance on an OSCE-format examination. Acad Med. 1998;73(9):993–997.

24. Gallagher AG, Ritter EM, Satava RM. Fundamental principles of validation, and reliability: rigorous science for the assessment of surgical education and training. Surg Endosc. 2003;17(10):1525–1529.

25. Reznick R, Regehr G, MacRae H, Martin J, McCulloch W. Testing technical skill via an innovative “bench station” examination. Am J Surg. 1997;173(3):226–230.

26. Martin JA, Regehr G, Reznick R, et al.. Objective structured assessment of technical skill (OSATS) for surgical residents. Br J Surg. 1997;84(2):273–278.

27. Faulkner H, Regehr G, Martin J, Reznick R. Validation of an objective structured assessment of technical skill for surgical residents. Acad Med. 1996;71(12):1363–1365.

28. Vassiliou MC, Feldman LS, Andrew CG, et al.. A global assessment tool for evaluation of intraoperative laparoscopic skills. Am J Surg. 2005;190(1):107–113.

29. Cremers SL, Ciolino JB, Ferrufino-Ponce ZK, Henderson BA. Objective assessment of skills in intraocular surgery (OASIS). Ophthalmology. 2005;112(7):1236–1241.

30. Lin SY, Laeeq K, Ishii M, et al.. Development and pilot-testing of a feasible, reliable, and valid operative competency assessment tool for endoscopic sinus surgery. Am J Rhinol Allergy. 2009;23(3):354–359.

31. Vassiliou MC, Kaneva PA, Poulose BK, et al.. Global Assessment of Gastrointestinal Endoscopic Skills (GAGES): a valid measurement tool for technical skills in flexible endoscopy. Surg Endosc. 2010;24(8):1834–1841.

32. Gélinas-Phaneuf N, Okrainec A, Fried GM, Choudhury N, Del Maestro RF. The assessment of technical skills in neurosurgery: the development of the global assessment of intraoperative neurosurgical skills (GAINS) scale. Paper presented at: Canadian Conference on Medical Education (Poster); May 2010; St. John's, Newfoundland.

33. Vassiliou MC, Feldman LS, Fraser SA, et al.. Evaluating intraoperative laparoscopic skill: direct observation versus blinded videotaped performances. Surg Innov. 2007;14(3):211–216.

34. Dath D, Regehr G, Birch D, et al.. Toward reliable operative assessment: the reliability and feasibility of videotaped assessment of laparoscopic technical skills. Surg Endosc. 2004;18(12):1800–1804.

35. Fried GM. FLS assessment of competency using simulated laparoscopic tasks. J Gastrointest Surg. 2008;12(2):210–212.

36. Pellen MG, Horgan LF, Barton JR, Attwood SE. Construct validity of the ProMIS laparoscopic simulator. Surg Endosc. 2009;23(1):130–139.

37. Maithel S, Sierra R, Korndorffer J, et al.. Construct and face validity of MIST-VR, endotower, and CELTS: are we ready for skills assessment using simulators? Surg Endosc. 2006;20(1):104–112.

38. Delorme S, Laroche D, DiRaddo R, Del Maestro RF. NeuroTouch: a physics-based virtual simulator for cranial microneurosurgery training. Neurosurgery. 2012;71(ons suppl 1):ons32–ons42.

39. Ferroli P, Tringali G, Acerbi F, et al.. Advanced 3-dimensional planning in neurosurgery. Neurosurgery. 2013;72(suppl 1):54–62.

40. Chan S, Conti F, Salisbury K, Blevins NH. Virtual reality simulation in neurosurgery: technologies and evolution. Neurosurgery. 2013;72(suppl 1):154–164.

41. Malone HR, Syed ON, Downes MS, D'Ambrosio AL, Quest DO, Kaiser MG. Simulation in neurosurgery: a review of computer-based simulation environments and their surgical applications. Neurosurgery. 2010;67(4):1105–1116.

42. Sabbagh R, Chatterjee S, Chawla A, Kapoor A, Matsumoto ED. Task-specific bench model training versus basic laparoscopic skills training for laparoscopic radical prostatectomy: a randomized controlled study. Can Urol Assoc J. 2009;3(1):22–30.

43. Windsor JA. Role of simulation in surgical education and training. ANZ J Surg. 2009;79(3):127–132.

44. Suebnukarn S, Phatthanasathiankul N, Sombatweroje S, Rhienmora P, Haddawy P. Process and outcome measures of expert/novice performance on a haptic virtual reality system. J Dent. 2009;37(9):658–665.

45. Hays RT, Jacobs JW, Prince C, Salas E. Flight simulator training effectiveness: a meta-analysis. Mil Psychol. 1992;4(2):63–74.

46. Wong GK, Zhu CX, Ahuja AT, Poon WS. Craniotomy and clipping of intracranial aneurysm in a stereoscopic virtual reality environment. Neurosurgery. 2007;61(3):564–568.

47. Kneebone R. Evaluating clinical simulations for learning procedural skills: a theory-based approach. Acad Med. 2005;80(6):549–553.

48. Bajka M, Tuchschmid S, Fink D, Székely G, Harders M. Establishing construct validity of a virtual-reality training simulator for hysteroscopy via a multimetric scoring system. Surg Endosc. 2010;24(1):79–88.

49. Fayez R, Feldman LS, Kaneva P, Fried GM. Testing the construct validity of the Simbionix GI Mentor II virtual reality colonoscopy simulator metrics: module matters. Surg Endosc. 2010;24(5):1060–1065.

50. Andreatta PB, Woodrum DT, Gauger PG, Minter RM. LapMentor metrics possess limited construct validity. Simul Healthc. 2008;3(1):16–25.

51. Gurusamy KS, Aggarwal R, Palanivelu L, Davidson BR. Virtual reality training for surgical trainees in laparoscopic surgery. Cochrane Database Syst Rev. 2009;(1):CD006575.

52. Gurusamy KS, Aggarwal R, Palanivelu L, Davidson BR. Systematic review of randomized controlled trials on the effectiveness of virtual reality training for laparoscopic surgery. Br J Surg. 2008;95(9):1088–1097.

53. Sturm LP, Windsor JA, Cosman PH, Cregan P, Hewett PJ, Maddern GJ. A systematic review of skills transfer after surgical simulation training. Ann Surg. 2008;248(2):166–179.

54. Grantcharov TP, Kristiansen VB, Bendix J, Bardram L, Rosenberg J, Funch-Jensen P. Randomized clinical trial of virtual reality simulation for laparoscopic skills training. Br J Surg. 2004;91(2):146–150.

55. Larsen CR, Soerensen JL, Grantcharov TP, et al.. Effect of virtual reality training on laparoscopic surgery: randomised controlled trial. BMJ. 2009;338:b1802.

56. Stelzer MK, Abdel MP, Sloan MP, Gould JC. Dry lab practice leads to improved laparoscopic performance in the operating room. J Surg Res. 2009;154(1):163–166.

57. Luciano CJ, Banerjee PP, Bellotte B, et al.. Learning retention of thoracic pedicle screw placement using a high-resolution augmented reality simulator with haptic feedback. Neurosurgery. 2011;69(1 suppl operative):ons14–ons19.

58. Alaraj A, Lemole MG, Finkle JH, et al.. Virtual reality training in neurosurgery: review of current status and future applications. Surg Neurol Int. 2011;2:52.

59. Neubauer A, Wolfsberger S. Virtual endoscopy in neurosurgery: a review. Neurosurgery. 2013;72(suppl 1):97–106.

60. Dunphy BC, Williamson SL. In pursuit of expertise. Toward an educational model for expertise development. Adv Health Sci Educ Theory Pract. 2004;9(2):107–127.

61. Bell RH Jr. Why Johnny cannot operate. Surgery. 2009;146(4):533–542.

62. Aggarwal R, Darzi A. Technical-skills training in the 21st century. N Engl J Med. 2006;355(25):2695–2696.

63. Gélinas-Phaneuf N, Choudhury N, Al-Habib AR, et al.. Assessing performance in brain tumor resection using a novel virtual reality simulator. Int J Comput Assist Radiol Surg. 2013 June 20. [Epub ahead of print].

64. Wanzel KR, Hamstra SJ, Caminiti MF, Anastakis DJ, Grober ED, Reznick RK. Visual-spatial ability correlates with efficiency of hand motion and successful surgical performance. Surgery. 2003;134(5):750–757.

65. Wanzel KR, Hamstra SJ, Anastakis DJ, Matsumoto ED, Cusimano MD. Effect of visual-spatial ability on learning of spatially-complex surgical skills. Lancet. 2002;359(9302):230–231.

66. Risucci DA. Visual spatial perception and surgical competence. Am J Surg. 2002;184(3):291–295.

67. Omahen DA. The 10,000-hour rule and residency training. CMAJ. 2009;180(12):1272.

68. Fitts PM, Posner MI. Human Performance. Belmont, CA: Brooks/Cole; 1967.

69. Hsu KE, Man FY, Gizicki RA, Feldman LS, Fried GM. Experienced surgeons can do more than one thing at a time: effect of distraction on performance of a simple laparoscopic and cognitive task by experienced and novice surgeons. Surg Endosc. 2008;22(1):196–201.

70. Stefanidis D, Scerbo MW, Sechrist C, Mostafavi A, Heniford BT. Do novices display automaticity during simulator training? Am J Surg. 2008;195(2):210–213.

71. Ericsson KA. Deliberate practice and acquisition of expert performance: a general overview. Acad Emerg Med. 2008;15(11):988–994.

72. Ericsson KA. The influence of experience and deliberate practice on the development of superior expert performance. In: Ericsson KA, Charness N, Feltovich PJ, Hoffman RR, eds. The Cambridge Handbook of Expertise and Expert Performance. New York, NY: Cambridge University Press; 2006:683–703.

73. Aggarwal R, Grantcharov T, Moorthy K, Hance J, Darzi A. A competency-based virtual reality training curriculum for the acquisition of laparoscopic psychomotor skill. Am J Surg. 2006;191(1):128–133.

74. Aggarwal R, Crochet P, Dias A, Misra A, Ziprin P, Darzi A. Development of a virtual reality training curriculum for laparoscopic cholecystectomy. Br J Surg. 2009;96(9):1086–1093.

75. Vygotsky LS. Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press; 1978.

76. Peyre SE, Peyre CG, Hagen JA, et al.. Laparoscopic Nissen fundoplication assessment: task analysis as a model for the development of a procedural checklist. Surg Endosc. 2009;23(6):1227–1232.

77. Porte MC, Xeroulis G, Reznick RK, Dubrowski A. Verbal feedback from an expert is more effective than self-accessed feedback about motion efficiency in learning new surgical skills. Am J Surg. 2007;193(1):105–110.

78. Kruglikova I, Grantcharov TP, Drewes AM, Funch-Jensen P. The impact of constructive feedback on training in gastrointestinal endoscopy using high-fidelity virtual-reality simulation: a randomized controlled trial. Gut. 2010;59(2):181–185.

79. Moulton CA, Dubrowski A, Macrae H, Graham B, Grober E, Reznick R. Teaching surgical skills: what kind of practice makes perfect?: a randomized, controlled trial. Ann Surg. 2006;244(3):400–409.

80. Webb TP, Weigelt JA, Redlich PN, Anderson RC, Brasel KJ, Simpson D. Protected block curriculum enhances learning during general surgery residency training. Arch Surg. 2009;144(2):160–166.

81. Stefanidis D, Heniford BT. The formula for a successful laparoscopic skills curriculum. Arch Surg. 2009;144(1):77–82.

82. Undre S, Koutantji M, Sevdalis N, et al.. Multidisciplinary crisis simulations: the way forward for training surgical teams. World J Surg. 2007;31(9):1843–1853.

83. Sevdalis N, Davis R, Koutantji M, Undre S, Darzi A, Vincent CA. Reliability of a revised NOTECHS scale for use in surgical teams. Am J Surg. 2008;196(2):184–190.

84. Hunt E. Expertise, talent and social encouragement. In: Ericsson K, Charness N, Feltovich PJ, Hoffman RR, eds. The Cambridge Handbook of Expertise and Expert Performance. New York, NY: Cambridge University Press; 2006:31–38.

85. Aggarwal R, Grantcharov TP, Darzi A. Framework for systematic training and assessment of technical skills. J Am Coll Surg. 2007;204(4):697–705.

86. Marcus H, Vakharia V, Kirkman MA, Murphy M, Nandi D. Practice makes perfect? The role of simulation-based deliberate practice and script-based mental rehearsal in the acquisition and maintenance of operative neurosurgical skills. Neurosurgery. 2013;72(suppl 1):124–130.

Keywords:

Neurosurgical expertise; Neurosurgical practice; NeuroTouch; Simulation; Surgical theory

Copyright © by the Congress of Neurological Surgeons
