Cost analysis of objective resident cataract surgery assessments

Nandigam, Kiran MBA; Soh, Jonathan BS; Gensheimer, William G. MD; Ghazi, Ahmed MD, MSc; Khalifa, Yousuf M. MD*

Journal of Cataract & Refractive Surgery: May 2015 - Volume 41 - Issue 5 - p 997-1003
doi: 10.1016/j.jcrs.2014.08.041

Abstract

In 1998, the Accreditation Council for Graduate Medical Education (ACGME) began a national, cross-specialty mandate to improve the education and training of residents. Collectively termed the “Outcome Project,” 6 competencies (patient care, medical knowledge, practice-based learning and improvement, systems-based practice, interpersonal and communication skills, and professionalism) were established to guide residency programs’ training of residents. In 2002, the American Board of Ophthalmology added the assessment of surgical acumen to the ACGME’s 6 competencies.1

Ophthalmology programs quickly responded by developing a variety of surgical assessment tools. In an effort to move away from the problems associated with qualitative, subjective, and delayed preceptor evaluations, these tools were designed to evaluate resident surgical skill in a timely and objective manner.1–3 The assessments came in the form of intraoperative and postoperative outcome measures, animal laboratories, video assessments, self-reported procedure logs, wet laboratories, virtual reality simulators, motion-analysis systems, procedure-specific checklists, and global rating scales.4

Meeting the ACGME’s mandates required significant changes to curriculum, administrative staffing, faculty development, and the training, supervision, and evaluation of residents. This continuous improvement process has led to increased allocation of resources for the training of residents. A recent estimate placed the operative cost of resident participation in 79 phacoemulsification cataract surgeries at $8290 per resident per year.5 This calculation does not include the additional time the attending physician spends reviewing, evaluating, discussing, and documenting the resident’s surgical performance.

In addition, the speed with which a resident can complete a cataract surgery has an enormous impact on the cost of surgical training. A recent study compared case times between ophthalmology residents and attending surgeons and found that resident cases of similar difficulty took, on average, an additional 20 minutes to complete.6 Costs at that institution were valued at $11.24 per minute of cataract surgery, making each resident case approximately $225 more expensive than one performed by an attending physician. Adequate training and evaluation of resident cataract surgeries are necessary to reduce these time-associated costs in residents’ future cataract surgeries.

In addition to the increased faculty time spent with residents, there are upfront costs associated with meeting the ACGME mandates. Since 2007, the ACGME Residency Review Committee for Ophthalmology has required that “a surgical skills development resource (eg, a wet lab or simulators) must be available” to residents.7 This has added substantial costs for setting up and maintaining a wet lab or surgical simulator. It is necessary to assess the cost of all available methods of judging resident competency; residency programs are tasked with determining when a resident is competent, so it is essential to determine which approved evaluation method is the most cost efficient for residency training.

The increase in spending, paired with proposed cuts in Graduate Medical Education funding, forces residency programs to consider the cost effectiveness of particular training and assessment tools.8,9 However, little research estimates the current costs associated with published tools for training resident ophthalmologists. The aim of this study was to determine the costs associated with validated tools for the assessment of cataract surgery. These costs were determined quantitatively by measuring the fixed and time costs associated with each assessment tool. Additional qualitative data highlight the “ease of use” of each assessment tool.

Materials and methods

Overview

This study was exempt from institutional review board review. Based on a systematic search of the literature, Gensheimer et al.10 found that there are 8 tools for the assessment of technical surgical skills in ophthalmology resident cataract surgery training with demonstrated reliability and validity. In this study, we focused on presenting the costs associated with the same 8 tools for assessing surgical competency. The tools evaluated were the Eye Surgical Skills Assessment Test (ESSAT),11,12 the Iowa Ophthalmology Wet Laboratory Curriculum (OWL),13 the Human Reliability Analysis of Cataract Surgery (HRACS),14 the Objective Assessment of Skills in Intraocular Surgery (OASIS),15 the Global Rating Assessment of Skills in Intraocular Surgery (GRASIS),16 the Objective Structured Assessment of Cataract Surgical Skill (OSACSS),17 the Imperial College Surgical Assessment Device (ICSAD),18 and the Eyesi Cataract Surgery Simulator.19–21

The ESSAT is a 3-station (skin suturing, muscle recession, and phacoemulsification/wound construction and suturing) wet laboratory skills course designed to assess surgical technique of residents. Masked expert surgeons watch and assess each resident-recorded performance with the use of a task-specific checklist and global rating scale.

The OWL is a curriculum based on wet lab practice, learning objectives (phacoemulsification technical skills or knowledge), assessments (pretest and posttest of cognitive skills), feedback (formative and summative), and reflective and deliberate practice. Faculty document and evaluate resident performance with the Ophthalmology Objective Wet Lab Structured Assessment of Skill and Technique scoring rubric.

The HRACS is used to identify errors and correct tool use. A masked expert watches a video-recorded surgery and then identifies the number of errors and type of error (procedural or executional) performed in each predetermined phase of a cataract extraction.

The OASIS is a purely objective evaluation used with direct observation during surgery. The OASIS records preoperative (medical history, ocular history, visual acuity, manifest refraction), intraoperative (phacoemulsification technique and amount of time, capsulotomy technique, adverse events, overall surgical time), and postoperative (visual acuity, manifest refraction, corneal edema, posterior capsule opacification) measures, which are collected by a database that can provide outcome and complication rates.

Created as a subjective complement to OASIS, GRASIS is a global evaluation form that was adapted from the Objective Structured Assessment of Technical Skill tool. The GRASIS contains 10 elements, each graded on a 5-point Likert scale, which provides feedback on the resident’s entire surgical process (eg, professionalism, leadership, bedside manner) in addition to surgical technique.

The OSACSS tool is based on task-specific checklists and global rating scales. Three masked experts review video-recorded operations and grade 14 predetermined tasks and 6 global indices.

The ICSAD uses motion tracking and provides measurements on 3 parameters of movement: total path length, time, and number of individual hand movements. The electromagnetic tracking system (Isotrak II, Polhemus, Inc.) consists of a generator and 2 sensors that record the movement of the operator’s hands. At the time of publication, the Isotrak II had been phased out of production and replaced by the Fastrak (Polhemus, Inc.). The Fastrak has 4 sensors and does not require attachment to the operator. It records the position (X, Y, Z) and orientation (azimuth, elevation, and roll) of the operator’s motions.

The Eyesi simulator provides a comprehensive virtual reality model of cataract surgery. It provides immediate feedback on measures such as microscope and instrument handling, surgical efficiency, and tissue treatment. Initial training is provided to residents and attending physicians via virtual webinars from the manufacturer.

Based on the 11 articles included in Gensheimer et al.’s10 review of the literature, a cited reference search was completed in Web of Science.A This search provided the number of times each assessment tool was referenced in the literature. We used citation counts as a proxy for how prevalent each assessment tool is within ophthalmology resident cataract surgery training.

Qualitative Analysis

To complement the quantitative cost analysis, qualitative measures were determined based on the attending requirements for each tool. “Low involvement” was defined as an evaluation tool based on predetermined criteria and a given scale (Likert or predefined choices) with a completion time of 5 minutes. “Medium involvement” was defined as an evaluation tool with predetermined criteria and partial free response (ie, fill-in-the-blank) and a completion time of 10 minutes. “High involvement” was defined as an assessment tool without predetermined criteria, and thus purely free response (ie, a free-standing evaluation), with a completion time of 15 minutes. Validation of the time estimates was obtained by e-mail communication with creators and/or users of each tool at other institutions. The Eyesi and ICSAD provide computer-generated feedback on predetermined parameters; this information is logged and available for the attending and resident to review, so their level of involvement was considered low.

Cost Estimates

Because many of the values used were estimates, final calculated values were rounded to the nearest $10. Costs were obtained from communication with the Flaum Eye Institute’s (FEI) finance department, with values taken from FEI’s wet lab budget.21 Additional costs for tools were obtained by direct price quotations from the supply companies used by FEI. Zendejas et al.22 found that many cost-effectiveness analyses omit expenses beyond tool and supply costs. An educational cost-effectiveness framework based on work by Levin23 includes “(1) personnel costs, (2) facility costs, (3) equipment and materials costs, (4) other program inputs, and (5) required client inputs.” These were incorporated into the cost estimates for each assessment tool, with costs divided into initial and annual categories. Initial costs were defined as those relating to infrastructure (wet lab equipment, surgical simulator, cameras, and microscopes), software, and supplies required to start up the assessment tool; these are 1-time purchases. Initial costs varied depending on whether the evaluation tool was based on live surgery direct observation (HRACS, OSACSS, OASIS, GRASIS, ICSAD), wet lab performance (ESSAT and OWL), or simulator data (Eyesi) (Table 1).

Annual costs included the cost of replenishing supplies, hardware/software repair, laboratory space rental, and the time-associated costs of faculty and staff. The cost of space for a wet lab or a simulator was considered relatively equivalent among medium to large ophthalmology residency programs (3 or more residents a year) and was valued at $19 125 (425 square feet rented at $45/square foot). Square footage and rent per square foot were determined by consultation with FEI’s finance department. An assessment tool’s first-year cost consisted of its initial cost plus its annual cost.
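The relationships above can be summarized in a brief sketch (illustrative only; the variable and function names are ours, not part of the published methods, and the figures are the FEI-based estimates stated above):

```python
# Illustrative sketch of the cost structure described above (our own framing,
# not part of the published methods). All figures are in US dollars.

SPACE_SQFT = 425           # wet lab or simulator space
RENT_PER_SQFT = 45         # annual rent per square foot (FEI estimate)
ANNUAL_SPACE_COST = SPACE_SQFT * RENT_PER_SQFT   # = 19,125 per year


def first_year_cost(initial: float, annual: float) -> float:
    """First-year cost = 1-time initial cost + recurring annual cost."""
    return initial + annual
```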

Time-associated costs were those relating to the value of faculty’s and staff’s time. Faculty-associated costs were based on the average academic ophthalmologist’s salary and work hours; these expenses represent the cost of additional surgical instruction for resident evaluation because surgical training expenses already existed before the ACGME “Outcome Project.” With an average annual salary of $276 50024 and an average 40-hour clinical workweek for 50 weeks, not including academic responsibilities, an attending physician’s time was valued at $138.25 per hour. This hourly rate was multiplied by the amount of time required to complete evaluations for a resident annually. Because most academic physicians are not paid for their teaching time, the time committed to resident evaluations is an opportunity cost of lost clinical hours, and thus the cost of time spent on evaluations was based on the hourly rate for an attending physician’s time.24 The hourly cost of janitorial staff was set at $11.94,25 with 1.5 hours of work every other week.
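As a check on these figures, the hourly and annual staff rates can be reproduced in a few lines (a sketch under the stated assumptions of a 40-hour clinical week for 50 weeks and 26 biweekly cleaning sessions per year):

```python
ATTENDING_SALARY = 276_500            # average annual academic ophthalmologist salary
CLINICAL_HOURS_PER_YEAR = 40 * 50     # 40-hour clinical week for 50 weeks

attending_hourly_rate = ATTENDING_SALARY / CLINICAL_HOURS_PER_YEAR
# = 138.25 dollars per hour

JANITORIAL_HOURLY_RATE = 11.94        # Bureau of Labor Statistics hourly wage
janitorial_annual_cost = JANITORIAL_HOURLY_RATE * 1.5 * 26
# 1.5 hours every other week (26 sessions per year) ≈ 466 dollars per year
```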

As an example, the time-associated cost of an attending spending 13 hours per resident per year on evaluations was calculated as follows:

13 hours/resident × $138.25/hour × 3.93 residents/program = $7063.19/program/year

or

approximately $7060 per program per year, rounded to the nearest $10.

The average amount of time needed to complete each assessment tool was determined by a literature search, quantitative assessment, and queries to users of the respective tools. The OASIS and GRASIS designers published that each evaluation takes 5 minutes to complete.15,16 Because of the lack of published data for the OWL, HRACS, and OSACSS, the amount of time to complete each evaluation had to be estimated. Based on consultation with FEI ophthalmologists, as well as a low-involvement qualitative classification, these tools were approximated at 5 minutes per evaluation. According to ACGME standards, a minimum of 86 cataract surgeries is required of each resident by graduation.26 Assuming 86 evaluations are completed per resident per year at 5 minutes per evaluation, an attending annually spends 7 hours and 10 minutes per resident completing each of the above assessment tools. The San Francisco Match27 reported that 460 residents matched into positions among 117 ACGME-accredited programs, an average of 3.93 residents per program.
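Putting these figures together, the annual faculty time cost reported below for the 5-minute tools can be reproduced with a short sketch (assuming the 86-evaluation minimum and the $138.25 hourly rate derived above):

```python
EVALS_PER_RESIDENT = 86         # ACGME minimum number of cataract surgeries
MINUTES_PER_EVAL = 5            # completion time for the low-involvement tools
RESIDENTS_PER_PROGRAM = 3.93    # 460 matched residents / 117 programs
HOURLY_RATE = 138.25            # attending opportunity cost per hour

hours_per_resident = EVALS_PER_RESIDENT * MINUTES_PER_EVAL / 60
# ≈ 7.17 hours (7 hours 10 minutes) per resident per year

annual_faculty_cost = round(hours_per_resident * HOURLY_RATE * RESIDENTS_PER_PROGRAM / 10) * 10
# ≈ 3890 dollars per program per year, the faculty time cost cited in the Results
```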

The ESSAT’s designers proposed an annual time of 36 hours to complete the assessment for a class of 4 residents.11 Therefore, 1 attending spends 9 hours per resident per year on the ESSAT evaluation.

Because the amount of time spent on review is at the attending physician’s discretion, estimates were made for the ICSAD and the Eyesi. Academic ophthalmologists approximated that faculty spend 2 hours per week for 26 weeks reviewing the video and motion-tracking data (ICSAD) and the simulator data (Eyesi) for a class of 4 residents. This yields an estimated annual review time of 13 hours per resident.

Both ESSAT and OWL evaluate a resident based on wet lab practice procedures. The wet lab expenses were broken into initial costs (equipment and workstations) and supply costs (disposable materials replaced throughout training). Equipment is usually procured at a discount (used models) or as a donation (from the university or industry). However, many programs establish and maintain wet labs without outside financial support.28 For the purpose of this study, all calculations were made on the basis that wet lab items were purchased. In addition, it was assumed that 3 of each item were required to create 3 training stations per wet lab. Initial costs, obtained from communication with FEI’s finance department, included used microscopes with footpedals ($35 000 each) as well as phacoemulsification machines ($55 000 each). Other essential components of a wet lab included workstations ($833 each), Sony DXC-C33P PAL compact color microscope cameras ($4900 each), liquid-crystal display monitors with S-Video ($400 each), and instrument kits ($2000/kit; 9 kits per wet lab) consisting of Vannas scissors, Westcott spring scissors, needle holders, suturing forceps, curved tissue forceps, and straight tissue forceps. These values were derived from current costs to FEI.2 The total initial cost of a wet lab was approximately $306 400.
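A quick tally of the itemized equipment reproduces the quoted total (a sketch; the quantities follow the 3-station assumption stated above):

```python
# Itemized wet lab start-up costs for 3 training stations, per the FEI quotes above.
# Each entry is (unit price in US dollars, quantity).
wet_lab_items = {
    "used microscope with footpedal": (35_000, 3),
    "phacoemulsification machine": (55_000, 3),
    "workstation": (833, 3),
    "compact color microscope camera": (4_900, 3),
    "LCD monitor with S-Video": (400, 3),
    "instrument kit": (2_000, 9),
}

initial_wet_lab_cost = sum(price * qty for price, qty in wet_lab_items.values())
# = 306,399 dollars, ie, approximately $306 400
```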

Wet lab supply costs included synthetic eyes ($10 800/year) and disposable components for the phacoemulsification machine, including cassettes and tubing, ophthalmic viscosurgical devices, and blades ($12 000/year), in accordance with current costs to FEI. The annual supply cost of maintaining a wet lab was approximately $39 000.

Results

The ESSAT has been cited 16 times in the literature. It was qualitatively scored as low involvement. With an initial cost of $306 400 and an annual cost of $63 470, the ESSAT’s first-year cost is $369 870 (Tables 1 and 2).

Table 1. Initial costs.
Table 2. Breakdown of annual costs.

The OWL has been cited 11 times in the literature and requires low involvement from attending physicians. The OWL’s initial cost was $306 400, its annual cost was $62 480, and its first-year cost was $368 880 (Tables 1 and 2).

The HRACS has been cited 8 times in the ophthalmology literature. Because of its predetermined choices to identify error, it requires low involvement from a surgical preceptor. The only initial cost for HRACS is the microscope video camera ($4900). The annual and first-year costs were $3890 and $8790, respectively (Tables 1 and 2).

The OASIS and GRASIS have been widely cited, 34 times and 28 times, respectively. The OASIS requires medium involvement, while GRASIS requires low involvement. Neither assessment tool has initial costs; both have an annual cost and first-year cost of $3890 (Tables 1 and 2).

The OSACSS has been cited 26 times and requires low involvement. The only initial cost is that of the microscope video camera ($4900). The annual cost of OSACSS consists of the time cost of faculty ($3890), producing a first-year cost of $8790 (Tables 1 and 2).

The ICSAD has been cited 15 times and requires low involvement from the attending. The $12 000 initial cost for the ICSAD consists of $7100 (1 Fastrak unit) and $4900 (microscope camera). The time-associated cost of faculty makes up the annual cost of $7060. The ICSAD’s first-year cost is $19 060 (Tables 1 and 2).

The Eyesi has been referenced 9 times in the literature and requires low involvement from surgical preceptors. The initial cost of 1 Eyesi cataract surgery simulator is $169 000. The annual costs ($26 590) consist of square footage rent ($19 125), service and maintenance of common components ($1200 every 3 years, or $400 per year), and faculty time-associated costs ($7060). The first-year cost for the Eyesi cataract surgical simulator is $195 590 (Tables 1 and 2).

The 2 most affordable methods of resident evaluation are the OASIS ($3890) and the GRASIS ($3890) because neither requires initial or wet lab costs. The HRACS and OSACSS also have $3890 in annual expenses; however, each has an additional initial expense for a microscope video camera. The ICSAD is the next most affordable option, with a first-year cost of $19 060 and an annual cost of $7060; this assumes the purchase of only 1 Fastrak motion device. The Eyesi has the next lowest first-year expense at $195 590 and an annual expense of $26 590. The OWL’s first-year and annual costs of $368 880 and $62 480, respectively, make it the second most expensive option. The ESSAT is the most expensive option, with a first-year cost of $369 870 and an annual cost of $63 470.
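For readers who want to reproduce this ranking, the reported figures can be collected and sorted in a few lines (a sketch using only the first-year and annual costs stated above):

```python
# First-year and annual costs per tool, as reported above (US dollars).
costs = {
    "OASIS": {"first_year": 3_890, "annual": 3_890},
    "GRASIS": {"first_year": 3_890, "annual": 3_890},
    "HRACS": {"first_year": 8_790, "annual": 3_890},
    "OSACSS": {"first_year": 8_790, "annual": 3_890},
    "ICSAD": {"first_year": 19_060, "annual": 7_060},
    "Eyesi": {"first_year": 195_590, "annual": 26_590},
    "OWL": {"first_year": 368_880, "annual": 62_480},
    "ESSAT": {"first_year": 369_870, "annual": 63_470},
}

# Rank the tools from least to most expensive by first-year cost.
for tool, c in sorted(costs.items(), key=lambda kv: kv[1]["first_year"]):
    print(f"{tool:7s} first-year ${c['first_year']:>9,}  annual ${c['annual']:>7,}")
```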

Discussion

The importance of objective assessments of resident cataract surgical skills is well established. In addition to providing timely, quantitative, and qualitative feedback, assessments must have reasonable time and financial costs. This study presented the quantitative and qualitative costs commonly associated with 8 tools. We hope that this multiple comparison provides the foundation for future cost analyses of various cataract surgery curricula. Although the study compares intraoperative and postoperative tools, it is up to each residency program to determine which tools are best suited to its purposes.

The 5 least expensive options (OASIS, GRASIS, HRACS, OSACSS, ICSAD) do not include the substantial cost of a wet lab or a simulator. Running any of these 5 options alone is noncompliant with the ACGME requirement for a microsurgical skills laboratory or simulator.

The use of simulation-based medical education (SBME) tools, such as the Eyesi, has been proven effective at training healthcare professionals. Cost analysis of SBME tools has only recently begun, with many variables playing a role in how effective a certain tool may be.29 The Eyesi is the least expensive option for cataract surgery simulation and training; however, it cannot replace a wet laboratory facility for noncataract surgical training. Therefore, the Eyesi most likely requires a secondary wet lab tool for evaluation, and this must be taken into consideration when selecting simulation models. It is up to each residency program to determine how it wants to establish resident competency, including whether more than 1 evaluation tool is desired or necessary. For example, preoperating room surgical training in glaucoma, cornea, retina, strabismus, and oculoplastics requires a wet laboratory facility and cannot be evaluated with the Eyesi.

Many variables had to be estimated and will vary from program to program. Wet lab setup and maintenance costs depend entirely on each program’s infrastructure, resources, and relationships with companies. Similarly, the amount of wet lab or simulator space varies based on the space available and the cost of square footage at each program’s hospital. The size of the residency program (number of residents and funding) also influences the number of wet lab workstations, machines, and supplies that can be purchased. This study used 3 workstations to best represent a medium to large training program’s wet lab. It was impossible to consider the many combinations of funding and/or equipment received from industry; therefore, this study calculated the costs as though there were no outside support and as a best representation of the needs of a medium to large program. However, many institutions might receive donated equipment from their departments; this cannot always be accounted for because the donation might also have been used for clinical or research purposes. In addition, the labor costs assumed that attending physicians could make time for evaluations and that the residency institution had maintenance workers available for laboratory upkeep. These options might not exist in certain programs and must be factored in when deciding which training methodology is optimal for an individual institution. Although many academic ophthalmology attending physicians are expected to volunteer time to evaluate resident training, an estimated cost was used that treated this time as an opportunity cost that could have been spent in clinic. The costs might also have been underestimated because residents often perform more than the minimum 86 cataract surgeries required by ACGME standards. Although multiple factors were valued and included in this quantitative analysis of resident training, we could not factor in all variables that contribute to making a residency program work.

Many of the time-associated costs were based on estimates of how long each tool takes to complete. In many instances, there was a lack of published information describing the amount of time needed to complete a particular assessment tool. When this occurred, we deferred to an estimate based on the experience of practicing ophthalmologists who were familiar with the tools being evaluated. Rating the level of involvement of each tool and determining the number of times each tool was cited were done to strengthen the review of the literature completed by Gensheimer et al.10 Because there were no formally published data measuring the amount of time needed to complete a tool or how widespread a tool’s use was, we used the level-of-involvement scale and the number of citations as proxies.

Because there is growing focus on the expenses associated with training, programs have to consider assessment tools that are valid, reliable, objective, and cost effective. Across surgical specialties, training paradigms have been compared head to head on measures such as cost of implementation, surgical technique, and surgical outcomes.30 With these direct comparisons, costs associated with a particular tool can be linked to downstream quality measures, and a cost-based model can be designed.30 In ophthalmology, however, there is a lack of published data regarding cost analysis of particular training paradigms or assessment tools. Paying $100 000 for an assessment tool that provides $200 000 of benefit (improved patient outcomes, reduced operating room time, and recovered attending opportunity cost) is better than paying $20 000 for a tool that provides no net benefit. As future research addresses these questions, programs can make informed decisions about how particular education tools affect the learning and cost curves.

Graduate medical education is faced with more stringent requirements in determining resident competency and, concurrently, residency funding faces cutbacks. More research into cost analysis of training methods is needed in ophthalmology for a sustainable future of residency training.

What Was Known

  • Eight tools with demonstrated reliability and validity are available for the assessment of competency in ophthalmology residency training. These tools range from wet lab curricula to global rating scales and virtual reality simulation systems.

What This Paper Adds

  • The cost of the 8 tools is an important factor in each residency training program’s selection of tools to implement in assessing surgical competence. This cost is substantial, and means of paying for this important component of competency assessment must be established.

References

1. Lee AG, Carter KD. Managing the new mandate in resident education; a blueprint for translating a national mandate into local compliance. Ophthalmology. 2004;111:1807-1812.
2. Lee AG, Volpe N. The impact of the new competencies on resident education in ophthalmology [editorial]. Ophthalmology. 2004;111:1269-1270.
3. Mills RP, Mannis MJ; American Board of Ophthalmology Program Directors’ Task Force on Competencies. Report of the American Board of Ophthalmology Task Force on the Competencies [guest editorial]. Ophthalmology. 2004;111:1267-1268.
4. Oetting TA, Lee AG, Beaver HA, Johnson AT, Boldt HC, Olson R, Carter K. Teaching and assessing surgical competency in ophthalmology training programs. Ophthalmic Surg Lasers Imaging. 2006;37:384-393.
5. Hosler MR, Scott IU, Kunselman AR, Wolford KR, Oltra EZ, Murray WB. Impact of resident participation in cataract surgery on operative time and cost. Ophthalmology. 2012;119:95-98.
6. Taravella MJ, Davidson R, Erlanger M, Guiton G, Gregory D. Time and cost of teaching cataract surgery. J Cataract Refract Surg. 2014;40:212-216.
7. Accreditation Council for Graduate Medical Education. ACGME program requirements for graduate medical education in ophthalmology. Available at: http://www.acgme.org/acgmeweb/Portals/0/PFAssets/ProgramRequirements/240_ophthalmology_2016.pdf. Accessed January 6, 2015
8. Congress of the United States, Congressional Budget Office. Reducing The Deficit: Spending and Revenue Options, March 2011. Available at: http://www.cbo.gov/sites/default/files/cbofiles/ftpdocs/120xx/doc12085/03-10-reducingthedeficit.pdf. Accessed January 6, 2015
9. Steinmann AF. Threats to graduate medical education funding and the need for a rational approach: a statement from the Alliance for Academic Internal Medicine. Ann Intern Med. 2011;155:461-464.
10. Gensheimer WG, Soh JM, Khalifa YM. Objective resident cataract surgery assessments [reports]. Ophthalmology. 2013;120:432-433.e1.
11. Fisher JB, Binenbaum G, Tapino P, Volpe NJ. Development and face and content validity of an eye surgical skills assessment test for ophthalmology residents. Ophthalmology. 2006;113:2364-2370.
12. Taylor JB, Binenbaum G, Tapino P, Volpe NJ. Microsurgical lab testing is a reliable method for assessing ophthalmology residents’ surgical skills. Br J Ophthalmol. 2007;91:1691-1694. Available at: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2095537/pdf/1691.pdf. Accessed January 6, 2015.
13. Lee AG, Greenlee E, Oetting TA, Beaver HA, Johnson AT, Boldt HC, Abramoff M, Olson R, Carter K. The Iowa ophthalmology wet laboratory curriculum for teaching and assessing cataract surgical competency. Ophthalmology. 2007;114(7):e21-e26.
14. Gauba V, Tsangaris P, Tossounis C, Mitra A, McLean C, Saleh GM. Human reliability analysis of cataract surgery. Arch Ophthalmol. 2008;126:173-177. Available at: http://archopht.jamanetwork.com/data/Journals/OPHTH/6864/ecs70060_173_177.pdf. Accessed January 6, 2015.
15. Cremers SL, Ciolino JB, Ferrufino-Ponce ZK, Henderson BA. Objective Assessment of Skills in Intraocular Surgery (OASIS). Ophthalmology. 2005;112:1236-1241.
16. Cremers SL, Nereida Lora A, Ferrufino-Ponce ZK. Global Rating Assessment of Skills In Intraocular Surgery (GRASIS). Ophthalmology. 2005;112:1655-1660.
17. Saleh GM, Gauba V, Mitra A, Litwin AS, Chung AKK, Benjamin L. Objective Structured Assessment of Cataract Surgical Skill. Arch Ophthalmol. 2007;125:363-366. Available at: http://archopht.jamanetwork.com/data/Journals/OPHTH/9983/ecs60066_363_366.pdf. Accessed January 6, 2015.
18. Ezra DG, Aggarwal R, Michaelides M, Okhravi N, Verma S, Benjamin L, Bloom P, Darzi A, Sullivan P. Skills acquisition and assessment after a microsurgical skills course for ophthalmology residents. Ophthalmology. 2009;116:257-262.
19. Solverson DJ, Mazzoli RA, Raymond WR, Nelson ML, Hansen EA, Torres MF, Bhandari A, Hartranft CD. Virtual reality simulation in acquiring and differentiating basic ophthalmic microsurgical skills. Simul Healthc. 2009;4:98-103.
20. Privett B, Greenlee E, Rogers G, Oetting TA. Construct validity of a surgical simulator as a valid model for capsulorhexis training. J Cataract Refract Surg. 2010;36:1835-1838.
21. Mahr MA, Hodge DO. Construct validity of anterior segment anti-tremor and forceps surgical simulator training modules; attending versus resident surgeon performance. J Cataract Refract Surg. 2008;34:980-985.
22. Zendejas B, Wang AT, Brydges R, Hamstra SJ, Cook DA. Cost: the missing outcome in simulation-based medical education research: a systematic review. Surgery. 2013;153:160-176.
23. Levin HM. Waiting for Godot: cost-effectiveness analysis in education. In: Light RJ, ed. Evaluation Findings That Surprise. New Directions for Evaluation, vol 90. San Francisco, CA: Jossey-Bass; 2001:55-68. Available at: http://cbcse.org/wordpress/wp-content/uploads/2012/10/Waiting-for-Godot.pdf. Accessed January 6, 2015.
24. Association of American Medical Colleges. Report on Medical School Faculty Salaries 2010–2011. Washington, DC: Association of American Medical Colleges; 2012.
25. United States Department of Labor. Bureau of Labor Statistics. May 2013 National Occupational Employment and Wage Estimates United States. Available at: http://www.bls.gov/oes/current/oes_nat.htm/oes131031.htm. Accessed January 6, 2015
26. Accreditation Council for Graduate Medical Education. Frequently asked questions: Ophthalmology Review Committee for Ophthalmology ACGME. 2012, Available at: http://www.acgme.org/acgmeweb/Portals/0/PDFs/FAQ/240_Ophthalmology_FAQs_2013.pdf. Accessed January 6, 2015.
27. SFMatch. Residency and Fellowship Matching Services. Ophthalmology residency. Available at: https://www.sfmatch.org/SpecialtyInsideAll.aspx?id=6&typ=2&name=Ophthalmology. Accessed January 6, 2015
28. Henderson BA, Grimes KJ, Fintelmann RE, Oetting TA. Stepwise approach to establishing an ophthalmology wet laboratory. J Cataract Refract Surg. 2009;35:1121-1128.
29. Feudner EM, Engel C, Neuhann IM, Petermeier K, Bartz-Schmidt KU, Szurman P. Virtual reality training improves wet-lab performance of capsulorhexis: results of a randomized, controlled study. Graefes Arch Clin Exp Ophthalmol. 2009;247:955-963.
30. Orzech N, Palter VN, Reznick RK, Aggarwal R, Grantcharov TP. A comparison of 2 ex vivo training curricula for advanced laparoscopic skills; a randomized controlled trial. Ann Surg. 2012;255:833-839.

Other Cited Material

A. Web of Science. Available at: http://thomsonreuters.com/en/products-services/scholarly-scientific-research/scholarly-search-and-discovery/web-of-science.html
© 2015 by Lippincott Williams & Wilkins, Inc.