Academic Medicine: September 2010 - Volume 85 - Issue 9
doi: 10.1097/ACM.0b013e3181e4baca
Faculty

Measures of Educational Effort: What Is Essential to Clinical Faculty?

Ipsen, Merete MD; Eika, Berit MD, PhD; Mørcke, Anne Mette MD, PhD; Thorlacius-Ussing, Ole MD, DMSci; Charles, Peder MD, DMSci


Author Information

Dr. Ipsen is PhD fellow, Center for Medical Education, Aarhus University, Aarhus, Denmark, and Aalborg Hospital, Aalborg, Denmark.

Dr. Eika is professor, Center for Medical Education, Aarhus University, Aarhus, Denmark.

Dr. Mørcke is associate professor, Center for Medical Education, Aarhus University, Aarhus, Denmark.

Dr. Thorlacius-Ussing is professor, Department of Gastrointestinal Surgery, Aalborg Hospital, Aarhus University Hospital, Aalborg, Denmark.

Dr. Charles is professor, Center for Medical Education, Aarhus University, Aarhus, Denmark.

Correspondence should be addressed to Dr. Ipsen, Center for Medical Education, Brendstrupgaardsvej 102, Bygn. B, DK-8200 Aarhus N, Denmark; telephone: (+45) 6127-1672; fax: (+45) 8620-1228; e-mail: m.ipsen@rn.dk.

First published online June 7, 2010


Abstract

Purpose: To enhance the recognition of educational effort and thereby support faculty vitality, the authors aimed to identify essential categories of educational effort from the perspective of clinical faculty and determine whether the emerging categories were in concordance with an organizational perspective.

Method: The authors performed nominal group processes in four groups in 2008, with the participation of 24 clinical faculty members (6 in each group) representing 18 specialties (medical, surgical, paraclinical, and psychiatric) at 14 hospitals in Denmark. Subsequently, the authors performed a comparative analysis of the emerging essential categories and the organizational work by the national panel on medical education appointed by the Association of American Medical Colleges (AAMC).

Results: The four groups of clinical faculty members agreed on categories of educational effort. This quantitative consistency in prioritization was supported by qualitative consistency, as the authors observed similar uses of words and phrases across all four groups. The top-priority essential category of educational effort was "Visibility of planned educational activities on the work schedule," which received 39% of all votes. The comparative analysis showed that the essential categories of educational effort suggested by clinical faculty were in concordance with the steps developed by the AAMC.

Conclusions: The high degree of consistency among clinical faculty from different locations and specialties and the high concordance with the organizational work of the AAMC suggest that it is possible to develop standardized measurements of educational effort. Clinical faculty emphasized that a good starting point for educational measurements is the work schedule.

In a busy clinical setting, where the health needs of patients compete with the educational needs of residents, clinical faculty members often face the question, “Should I prioritize the patient or the resident?” The choice usually favors the patient, but that, in a sense, “undermines” residents' education. In the short term, the choice must favor the patient, but in the long term, it is in everyone's interest, including patients', that tomorrow's doctors are well trained.

Not only is the education of tomorrow's doctors at stake, but also faculty vitality and the very health of an institution's culture. When educational contributions are not valued as highly as patient service, faculty members who choose education over patient service receive poor recognition. Faculty vitality rests, among other motivators, on institutional commitment to core values1 and on the feeling of being valued in one's work.2 Therefore, when faculty feel unrecognized for their educational efforts, the lack of recognition is likely to "stifle enthusiasm for teaching,"3,4 which ultimately saps the faculty's vitality. As Kirch1 clearly expressed, faculty vitality goes hand in hand with a positive institutional culture; without vitality, institutional culture deteriorates. Thus, all faculty efforts need to be recognized by peers and leaders. To enhance the recognition of educational efforts and boost faculty vitality, we must ask clinical faculty what they consider the essential educational efforts. Pololi et al,5 for instance, have already described faculty's desire that institutions "assign appropriate value to non-income-producing activities."

Clinical faculty's educational efforts can be measured in relative value units (RVUs), which are derived in two key ways6: the contact hour method (estimating the time allocated to educational efforts) and the relative value method (weighting the educational efforts, analogous to the resource-based relative value scale for clinical activities). Educational RVU systems have been developed by individual departments and investigators7–12 and by organizations and medical councils,13,14 the latter exemplified by the visionary and important report by Nutter and colleagues14 for the Association of American Medical Colleges (AAMC). The AAMC report presents a framework that deans and faculty can use to develop systems for measuring faculty's efforts and contributions to their schools' educational missions.6,15
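To make the contrast between the two methods concrete, the following minimal sketch places them side by side. The activity names, hours, and weights are hypothetical illustrations of the idea, not values drawn from this study or from any published RVU system.

```python
# A minimal sketch contrasting the two ways of deriving educational RVUs.
# Activity names, hours, and weights below are hypothetical illustrations.

# Contact hour method: the RVU is simply the time allocated to the activity.
def contact_hour_rvu(hours: float) -> float:
    return hours

# Relative value method: a weight reflecting the effort of the activity
# scales the time, analogous to the resource-based relative value scale.
ACTIVITY_WEIGHTS = {
    "bedside_supervision": 1.0,        # hypothetical baseline weight
    "lecture_incl_preparation": 1.5,   # weighted up for preparation effort
}

def relative_value_rvu(activity: str, hours: float) -> float:
    return ACTIVITY_WEIGHTS[activity] * hours

print(contact_hour_rvu(2.0))                                # -> 2.0
print(relative_value_rvu("lecture_incl_preparation", 2.0))  # -> 3.0
```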

However, as Nutter and colleagues14 pointed out, the success of a system benefits from input from teaching faculty, administrators, and students. In other words, a standardized system of educational RVUs will be more robust if it is in concordance with both local and organizational perspectives. From a local perspective, we do not yet know which educational efforts clinical faculty consider essential to measure. In a wider organizational and even international perspective, we do not know whether a standardized educational RVU system can measure the educational efforts of clinical faculty across institutions and specialties.

Our long-term goal is to develop a national standardized system of graduate educational RVUs, covering all specialties in Denmark, that will lead to visibility of educational effort and provide a means for communication about educational efforts, thus supporting the vitality of clinical faculty and the education of tomorrow's doctors.

Our aim in this study was to identify the essential categories of educational effort from the local perspective of clinical faculty and to determine whether those essential categories are concordant with standardized measures from an organizational perspective. We used a consensus method to identify the essential categories, inviting clinical faculty from different geographical locations and specialties in Denmark. We then compared the essential categories identified by clinical faculty against the organizationally based report by the AAMC.1,14


Method

The nominal group process consensus method

Several consensus methods exist. We chose the nominal group process,16–20 a single meeting in which participants share their perspectives and have a face-to-face debate before voting individually.

On the basis of recommendations by postgraduate clinical associate professors, we purposively sampled departments with well-implemented graduate training programs. We invited one medical specialist/attending from each of 28 departments, which represented 18 medical, surgical, paraclinical (radiology, pathology, and laboratory medicine), and psychiatric specialties in 14 hospitals. The invitation emphasized the department's thorough educational effort, the recommendation by the clinical associate professor, and the importance of the invitee's specific knowledge concerning educational efforts among clinical faculty in hospitals.

Although this study was exempt from ethical approval according to Danish law, we took considerable care to protect the interests of the participants: Participation was voluntary, the data were analyzed anonymously, and participants were encouraged to contact the researcher with any questions or remarks.

The nominal group processes were performed in Denmark from June to October 2008 with two facilitators (M.I., A.M.M.). The participants were divided into four groups of six people, composed to ensure variety in gender, specialty, and geography. Table 1 shows group characteristics. Each meeting lasted three hours, and the procedure followed a classical nominal group process approach,16 which we describe here.

Welcome speech.

We opened each meeting by purposely addressing the participants' dedication and expertise. The classical nominal group process approach follows the premise that participants who are recognized for their dedication, and whose expert points of view are explicitly sought, are more likely to proceed as dedicated experts.

Stimulus task.

We then distributed a sheet of paper that asked the participants to describe potential ways to measure educational efforts conducted by clinical faculty. Having each participant reflect and write individually generated a diversity of ideas.

Round robin.

The participants then took turns contributing one idea at a time until the facilitator had exhaustively documented all of the ideas on a flip chart.

Clarification.

In a brief dialogue, the group then clarified all ideas to ensure a shared understanding and combined convergent ideas on the flip chart.

First voting on ideas.

We asked each participant to choose the five ideas he or she believed most essential to include in an educational RVU measurement. They wrote the ideas on five personal voting cards, ranking them from most important (five points) to least important (one point).

Intermediate results.

The facilitator tabulated the votes and wrote the intermediate results on the flip chart.
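As an illustration of the voting arithmetic, the sketch below tabulates ranked ballots the way a facilitator would: each ballot lists five ideas from most to least important, worth five points down to one. The ballots shown are hypothetical, not data from the study.

```python
from collections import Counter

# Hypothetical ballots: each participant's five ideas, ordered from most
# important (5 points) to least important (1 point).
ballots = [
    ["schedule visibility", "feedback", "informal teaching", "complexity", "formal teaching"],
    ["schedule visibility", "informal teaching", "feedback", "management time", "complexity"],
]

def tally(all_ballots: list[list[str]]) -> Counter:
    scores: Counter = Counter()
    for ballot in all_ballots:
        for rank, idea in enumerate(ballot, start=1):
            scores[idea] += 6 - rank  # rank 1 -> 5 points, ..., rank 5 -> 1 point
    return scores

for idea, points in tally(ballots).most_common():
    print(f"{idea}: {points} points")
```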

Debate.

The participants briefly debated the intermediate results and the implications of using the prioritized ideas—and of discarding the nonprioritized ideas—in measuring educational RVUs.

Second voting on ideas.

The facilitator asked the participants to individually reselect and rank the five most essential ideas, taking into account the considerations raised in the debate.

Final result.

The facilitator tabulated the votes and showed the final result on the flip chart.

Data analysis

Immediately after the nominal group process, the facilitators clarified the prioritized ideas for each other to ensure comparability and to code the specific educational statement embedded in each idea. As an example of this coding process, Group 3 explained idea no. 05 (“[Measure] time and charge of the doctors for all educational tasks”) as follows:

When measuring educational effort in an educational session, [knowing] the rank of the resident and the supervisor is essential, because the educational effort depends on the level of competence of both participants. So this measurement can elucidate some of the complexity and the resource spending in an educational session—but time spending is also part of the educational effort.

The issues of "the rank," "the level of competence," and "this measurement" were the focal points of the group's explanation and concern. Hence, because we were looking for relevant measures of educational effort, we coded the specific educational statement of idea no. 3/05 as "the rank of the participants in educational activities." Table 2 shows examples of this coding process, presenting the emerging categories and one corresponding idea from each group.


Following the clarification of the specific educational statements, we conducted a dynamic categorization process that included a content analysis of all prioritized ideas across all four groups, as recommended by Delbecq and colleagues.16 Figure 1 shows the process from the generation of ideas to the final categories. The first author (M.I.) distributed the coded ideas into eight categories based on convergent ideas and emerging themes in the specific educational statements. Two authors (M.I., O.T.-U.) then evaluated the eight categories and their corresponding ideas and elaborated once more on the specific educational statements; this led to recoding of the ideas, which condensed the eight categories into six essential categories. Finally, all authors discussed the categories and reached consensus on the final six essential categories. After establishing the categories, we assigned the points from the final voting to the relevant categories.
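To illustrate this final aggregation step, the sketch below sums the final voting points of the coded ideas within each category and reports each category's share of the total. The idea labels, category names, and point values are hypothetical.

```python
# Hypothetical coded ideas: group/idea label, the category it was coded
# into, and the points it received in the final vote.
coded_ideas = [
    ("1/03", "schedule visibility", 14),
    ("2/11", "schedule visibility", 9),
    ("3/05", "complexity", 7),
    ("4/08", "feedback", 6),
]

totals: dict[str, int] = {}
for _idea, category, points in coded_ideas:
    totals[category] = totals.get(category, 0) + points

grand_total = sum(totals.values())
for category, points in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{category}: {points} points ({100 * points / grand_total:.0f}% of votes)")
```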

Comparative analysis of local and organizational perspectives

We compared the six essential categories with the "metrics system for measuring medical school faculty effort and contributions to a school's education mission" developed by the national panel on medical education appointed by the AAMC.14 The work by the AAMC, which included graduate programs and not only student education, arrived at four steps for measuring faculty's efforts and contributions. We compared those steps and the categories on the basis of the clinical faculty activities in American and Danish teaching hospitals: Nutter and colleagues14 described the activities of clinical faculty in the United States in their report, and the ideas generated by the Danish clinical faculty members reflect the corresponding activities in Denmark. We then matched the steps and the categories directly. This comparison was possible because both systems construct their measurements as RVUs (the relative value method, estimating the educational effort) using quantitative and qualitative measures of faculty activity.


Results

Nominal group process

In total, 24 of the 28 invited clinical faculty members participated; four were unable to attend because of sudden obligations. Altogether, the four groups produced 136 ideas. The first votes prioritized 70 of these ideas; by coincidence, the number of prioritized ideas was identical in three of the four groups. The final votes narrowed the ideas down to 56 (11–17 from the initial 31–40 ideas in each group). Following the nominal group process method, we discarded the remaining 80 nonprioritized ideas. Figure 2, which illustrates the group and cumulative votes for each essential category of educational effort, shows that the four nominal groups agreed on which categories were most and least important. This quantitative consistency was supported by a qualitative consistency, as the groups used similar words and phrases when stating their ideas.


The top-prioritized essential educational RVU category for clinical faculty, receiving 139 of 360 votes (39%), was "Visibility of planned educational activities on the work schedule." An example could be displaying progress and appraisal meetings on the work schedule so that all staff members are aware of them. Faculty regarded the work schedule, which concretizes the managerial orchestration of all daily functions, as a reification of their departments' priorities concerning the time allocated to education. According to clinical faculty, visibility of education on the work schedule serves two purposes: to signal the department's educational mission and to recognize educational efforts.

The second essential category (receiving 19% of the votes) was "Feedback on the educational environment." An example could be an evaluation of the department, for instance through a questionnaire at the end of the residents' rotations. Clinical faculty considered the educational environment a core attribute of successful and protected learning. Hence, they advocated for such measures both from an educational point of view, to improve learning for the residents, and from a personal point of view, to receive individual feedback and recognition of their educational efforts.

The third essential category (14%) was "Informal educational activities." Clinical faculty confer with, guide, and supervise residents because doing so accelerates residents' learning, but these informal educational sessions interrupt clinical work and prolong working hours. Clinical faculty wanted this educational effort to be recognized and wanted their managers to consider these efforts when planning work functions.

The fourth essential category (13%) was "Complexity of the educational task, including the rank of the participating resident and clinical faculty member." Faculty perceived that the more complex the educational task, the more time and mental resources it costs. This works both ways: very basic educational tasks (e.g., teaching a resident to insert an IV line) waste valuable specialist time, whereas very complex educational tasks (e.g., supervising a pediatric resident who is treating an acutely ill child with concurrent disabilities and agitated parents) can be quite challenging. Clinical faculty called for recognition of their educational efforts in both situations, as well as for guidelines for appropriately matching residents and faculty members.

The fifth essential category (9%) was "Time for management and development." Clinical faculty wanted their peers and managers to recognize their educational administrative work, which they considered fundamentally important for supporting the missions of their educational institutions. These efforts include faculty development (e.g., teach-the-teacher courses), which they viewed as strengthening faculty vitality and the institutional culture in general.

The sixth and final essential category (6%) was "Formal educational activities," such as afternoon lectures. Clinical faculty stressed the importance of well-prepared formal educational sessions for the quality, and the joy, of teaching. They often felt recognized for the actual teaching time but not for the preparation time. Measurements of formal educational activities, including preparatory time, were important for two reasons: to confirm the educational standards and to recognize the pedagogical competency of clinical faculty.

Comparison of local and organizational perspectives

The six essential categories in Table 2 are compared with the four steps of the AAMC14 in Table 3, which demonstrates concordance in all steps except "solo/group adjustment," as teaching alone versus in groups was not a priority for the clinical faculty in this study. The four steps of the AAMC derive from detailed lists of medical school faculty activities in education, and the six essential categories found in this study reflect the AAMC steps.


The first AAMC step, "Activity," measures faculty effort and contributions by units of activity performed and by activity weight. The foremost consideration mentioned by Nutter and colleagues14 is units of activity performed, which counts both the number of units and the time required to conduct each unit; this corresponds with the top-prioritized category of the faculty members in this study, (1) "visibility of planned educational activities on the work schedule." Activity weight, which reflects the level of faculty experience and skill required for an activity, corresponds with (3) "informal educational activities," (6) "formal educational activities," and (4) "complexity of the educational task," in that these categories represent the faculty effort provided for the educational activity. The rank of the participants exemplifies this parallel: the faculty members in this study wanted recognition for the complexity of an educational task. Thus, the AAMC's step 1, "Activity," measures the mix of time and content that characterizes RVUs.

The second AAMC step, "Performance," modifies the quantity and quality of the educational effort by solo/group adjustment and by quality adjustment. Solo/group adjustment is not represented by an essential category in this study; the difference may reflect the fact that the teamwork structure in Danish hospitals is not yet fully implemented, so this may become a priority later. Quality adjustment corresponds with (2) "feedback on the educational environment."

The third and fourth AAMC steps, "Category weight" and "Program weight," relate to the educational mission. Category weight corresponds with (5) "time for management and development," an area that has developed in Denmark during the last decade because of the new specialist training program. Program weight was not an issue here because this study focuses only on resident education.
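The correspondence described above can also be restated schematically. The mapping below is a plain summary of the comparison in code form, using the category numbering from the text; it is a restatement, not an implementation of either system.

```python
# The correspondence between the four AAMC steps14 and the six essential
# categories of this study (numbered as in the text). None marks elements
# of one system with no counterpart in the other.
AAMC_TO_CATEGORIES = {
    "Activity: units of activity performed": [1],  # visibility on the work schedule
    "Activity: activity weight": [3, 6, 4],        # informal, formal, complexity
    "Performance: solo/group adjustment": None,    # not a priority for faculty here
    "Performance: quality adjustment": [2],        # feedback on the environment
    "Category weight": [5],                        # time for management and development
    "Program weight": None,                        # study covers resident education only
}
```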


Discussion

With the overall goal of developing a Danish national system of educational RVUs for all specialties, we aimed in this study to identify essential categories of educational effort from the perspective of clinical faculty and to determine whether those categories are in concordance with standardized measurements from an organizational perspective. We found six essential categories of educational RVUs prioritized by clinical faculty representing 18 different specialties (medical, surgical, paraclinical, and psychiatric), and these essential categories were in concordance with the work of the AAMC.

The findings of this study will be used in further research toward the development of a standardized educational RVU system, which can serve as a means of communicating educational effort and thereby support recognition of clinical faculty. To develop a standardized system of educational RVUs, it is relevant to explore both local and organizational perspectives, as pointed out by Nutter and colleagues.14 This study shows concordance between the two perspectives: internal consistency of the local perspective among specialties and external consistency in the comparison with the organizational work by the AAMC.14 The high degree of internal consistency among the four groups was remarkable, given the differences in epistemic culture between medical specialties21 and geographical regions. An explanation may be that the "culture of medicine"22 is stronger than the culture within a specialty or a region. The external consistency between the local and organizational perspectives was also notable in light of the different national origins (Danish and American) and educational traditions. Again, the consistency may arise from a shared Western "culture of medicine," in which hospitals and medical education are similarly organized and which supersedes the differences. From this point of view, we believe it is possible to establish a standardized educational RVU system.

Through their six essential categories, the faculty expressed both their wish to involve management in the discussion of educational issues and their dedication to educating residents. This is in line with the article by Vanderveen et al23 and the accompanying discussion in the Archives of Surgery, which depict a profound feeling of educational responsibility among faculty members and their wish for managerial attention. Within the six essential categories, two recurrent themes were debated: the educational mission and the recognition of educational efforts. These two themes offer an explanation of why clinical faculty wish to involve management: managerial decision making and support are more thorough when relevant information on educational effort is available, and management is then likely to refocus educational goals and redistribute resources related to the educational mission. This also supports the recognition of clinical faculty, thus increasing faculty vitality.

The implementation and actual use of the essential categories would be the ultimate test of standardized measurements of educational effort. A metric system needs to function in a clinical setting in order to collect relevant information, and the collection of information also depends on the acceptability and feasibility of the chosen tool. The acceptability derives from the perspectives of clinical faculty, who, among other ideas, suggested the essential category "Visibility of planned educational activities on the work schedule"; the feasibility derives from using the work schedule as the tool for collecting information. The work schedule contains a significant amount of information on work patterns, which implicitly leads to recognition of the tasks it reveals. Extending the work schedule to include specific information on, for example, planned educational sessions and planned mentor–mentee meetings may analogously lead to recognition of educational efforts.
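As a sketch of what such an extension might look like, the record below adds an education flag to a work schedule entry so that planned educational activities become visible and countable. The field names and entries are invented for illustration and imply no actual scheduling system.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: a work schedule entry extended with a flag that
# marks planned educational activities, making them visible and countable.
@dataclass
class ScheduleEntry:
    day: date
    staff_member: str
    task: str
    educational: bool = False

roster = [
    ScheduleEntry(date(2010, 9, 1), "Dr. A", "outpatient clinic"),
    ScheduleEntry(date(2010, 9, 1), "Dr. A", "mentor-mentee meeting", educational=True),
    ScheduleEntry(date(2010, 9, 2), "Dr. B", "supervised ward round", educational=True),
]

# Educational effort reporting can then start directly from the schedule.
educational_entries = [e for e in roster if e.educational]
print(f"{len(educational_entries)} planned educational activities on the schedule")
```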

Methodologically, a nominal group process has a limited number of participants and thereby elicits a limited number of perspectives. For this reason, we can provide neither a complete list of ways to measure educational effort nor a complete list of faculty educational activities performed in hospitals. We tried to compensate for this limitation by purposive sampling, thus creating great variety among the informants. The strengths of the nominal group process are that all participants have protected time to talk, all ideas are exhaustively presented, and the final voting relies on individual considerations after a group debate. Other limitations of the nominal group process include the need for all participants to be present at the same time and the structured meeting form, which can induce conforming behavior in inexperienced participants. No participants withdrew for personal reasons, and all participants contributed several ideas, so we did not observe conforming behavior.

This study reveals that clinical faculty from different locations and specialties agree on essential categories of educational RVUs. However, in the words of Jones and Hunter,24 "the output from consensus approaches is rarely an end in itself," and, as stated earlier, the successful implementation of a metric system concerning educational effort depends on all parties involved. This level of involvement is also acknowledged by faculty in the essential category "Feedback on the educational environment." This article presents essential categories of educational effort, but only from the perspective of clinical faculty. Research has been done on educational efforts from the perspective of residents.25–27 A further step is to explore educational efforts from the perspective of managers, along with their current methods of evaluating educational efforts.

We hope that the development of standardized measurements for comparing educational efforts will improve communication concerning and recognition of those efforts and, subsequently, strengthen the vitality of clinical faculty.


Acknowledgments:

The authors would like to recognize the participants of the study for their sincere cooperation in the nominal group processes. The authors would also like to thank Dr. Tove Nilsson, MD, DMSci, medical director, Aalborg Hospital, for her ongoing support of the PhD project "Identification and registration of educational effort provided by medical specialists in hospitals."


Funding/Support:

None.


Other disclosures:

None.


Ethical approval:

The Central Denmark Region Committee on Biomedical Research Ethics has confirmed that this study is exempt from ethical approval.


References

1. Kirch DG. A word from the president: "The state of the faculty." AAMC Reporter. February 2008;17:2.

2. Bunton S. Medical Faculty Job Satisfaction: Thematic Overviews From Ten Focus Groups. Washington, DC: Association of American Medical Colleges; 2006.

3. Brawer J, Steinert Y, St-Cyr J, Watters K, Wood-Dauphinee S. The significance and impact of a faculty teaching award: Disparate perceptions of department chairs and award recipients. Med Teach. 2006;28:614–617.

4. Papp KK, Aucott JN, Aron DC. The problem of retaining clinical teachers in academic medicine. Perspect Biol Med. 2001;44:402–413.

5. Pololi LH, Dennis K, Winn GM, Mitchell J. A needs assessment of medical school faculty: Caring for the caretakers. J Contin Educ Health Prof. 2003;23:21–29.

6. Mallon WT, Jones RF. How do medical schools use measurement systems to track faculty activity and productivity in teaching? Acad Med. 2002;77:115–123.

7. Bardes CL, Hayes JG. Are the teachers teaching? Measuring the educational activities of clinical faculty. Acad Med. 1995;70:111–114.

8. Daugird AJ, Arndt JE, Olson PR. A computerized faculty time-management system in an academic family medicine department. Acad Med. 2003;78:129–136.

9. MacGregor DL, Tallett S, MacMillan S, Gerber R, O'Brodovich H. Clinical and education workload measurements using personal digital assistant-based software. Pediatrics. 2006;118:e985–e991.

10. Pitman AG, Jones DN. Radiologist workloads in teaching hospital departments: Measuring the workload. Australas Radiol. 2006;50:12–20.

11. Tallett S, Lingard L, Leslie K, et al. Measuring educational workload: A pilot study of paper-based and PDA tools. Med Teach. 2008;30:296–301.

12. Tottrup A. Surveillance of surgical training by detailed electronic registration of logical components. Postgrad Med J. 2002;78:607–611.

13. Hilton C, Fisher WJ, Lopez A, Sanders C. A relative-value-based system for calculating faculty productivity in teaching, research, administration, and patient care. Acad Med. 1997;72:787–793.

14. Nutter DO, Bond JS, Coller BS, et al. Measuring faculty effort and contributions in medical education. Acad Med. 2000;75:199–207.

15. Mallon WT. Introduction: The history and legacy of mission-based management. In: Management Series: Mission-Based Management. Washington, DC: Academic Medicine and the Association of American Medical Colleges; 2006.

16. Delbecq AL, Van de Ven AH, Gustafson DH. Group Techniques for Program Planning: A Guide to Nominal Group and Delphi Processes. Glenview, Ill: Scott, Foresman and Company; 1975.

17. Gallagher M, Hares T, Spencer J, Bradshaw C, Webb I. The nominal group technique: A research tool for general practice? Fam Pract. 1993;10:76–81.

18. Cross H. Consensus methods: A bridge between clinical reasoning and clinical research? Int J Lepr Other Mycobact Dis. 2005;73:28–32.

19. Hutchings A, Raine R, Sanderson C, Black N. An experimental study of determinants of the extent of disagreement within clinical guideline development groups. Qual Saf Health Care. 2005;14:240–245.

20. Raine R, Sanderson C, Hutchings A, Carter S, Larkin K, Black N. An experimental study of determinants of group judgments in clinical guideline development. Lancet. 2004;364:429–437.

21. Foray D, Hargreaves D. The production of knowledge in different sectors: A model and some hypotheses. Lond Rev Educ. 2003;1:7–19.

22. Boutin-Foster C, Foster JC, Konopasek L. Viewpoint: Physician, know thyself: The professional culture of medicine as a framework for teaching cultural competence. Acad Med. 2008;83:106–111.

23. Vanderveen K, Chen M, Scherer L. Effects of resident duty-hours restrictions on surgical and nonsurgical teaching faculty. Arch Surg. 2007;142:759–764.

24. Jones J, Hunter D. Consensus methods for medical and health services research. BMJ. 1995;311:376–380.

25. Boex JR, Leahy PJ. Understanding residents' work: Moving beyond counting hours to assessing educational value. Acad Med. 2003;78:939–944.

26. Ipsen M, Nohr SB. The three-hour meeting: A socio-cultural approach to engage junior doctors in education. Med Teach. 2009;31:933–937.

27. Roff S, McAleer S, Skinner A. Development and validation of an instrument to measure the postgraduate clinical learning and teaching educational environment for hospital-based junior doctors in the UK. Med Teach. 2005;27:326–331.


© 2010 Association of American Medical Colleges
