Patient handoff has been defined as “the transfer of professional responsibility and accountability for some or all aspects of care for a patient, or group of patients, to another person or professional group on a temporary or permanent basis.”1 Improperly conducted handoffs can lead to wrong treatment, delays in medical diagnosis, life-threatening adverse events, patient complaints, increased health care expenditure, increased hospital length of stay, and a range of other adverse effects on the health system or the patient.2 Over the past 20 years, the working hours of resident and intern doctors have been reduced in the United States by the Accreditation Council for Graduate Medical Education duty hours restriction and in Europe by the European Working Time Directive.3–5 As a consequence, the number of trainee and physician shift changes has increased, with a corresponding rise in the frequency of handoffs of care. Newly qualified doctors feel unprepared for handoff, not knowing what is expected of them, and are challenged in applying their knowledge, skills, and attitudes within the handoff process.6 This should not be unexpected: there appears to be little formal teaching of handoff performance in the United States or Great Britain.7–9 There have been initial efforts to overcome this situation and provide training for handoffs; however, a recent systematic review of educational interventions to improve handoff showed a paucity of research into handoff education and limited evidence of the effectiveness of current educational strategies.10–12 In view of these shortcomings, and as a starting point for addressing these educational deficiencies, we undertook this study to develop, by consultation with experts in medical education, agreed learning outcomes for the teaching of handoff to medical students.
Several theoretical frameworks have been proposed to conceptualize patient handoff and are helpful when considering the design of learning outcomes. Two in particular support a generic approach to handoff training and, therefore, may be particularly helpful when considering learning outcome design. The first framework identifies three key elements in handoff.13 These are information, responsibility and/or accountability, and system. Information may be verbal, written, or a data set and may incorporate the use of information and communication technologies. The information transfer may occur between hospital care and community care or vice versa, occur between hospital departments, or occur across hospital shift changes. Responsibility, which can be considered a personal attribute, reflects the individual’s obligations in handoff. Accountability, which may be considered an organizational attribute, reflects the organization’s responsibilities in handoff. System reflects issues within safety-critical systems such as institutional or location-specific leadership, teamwork, trust, and safety culture. The second framework, focusing on a competency-based approach to handoff, makes recommendations for handoff training based on the core competencies of communication and professionalism.14
An important focus of learning outcomes is the student’s ability to put knowledge to use in solving problems and operating effectively in a chosen field.15 Various methods are used to facilitate the definition of learning outcomes by experts. These include survey-based questionnaires, the Delphi method, and expert working groups.16–20 Constraints associated with these standard means of reaching group consensus for defining learning outcomes relate to the processes of generating a comprehensive set of learning outcomes and reaching agreement on them. Agreement on learning outcomes, and on how much emphasis should be placed on each one, may be even more difficult to achieve when participants represent different professional domains. One solution to these issues is to use group concept mapping (GCM), which we describe in detail below with respect to this study.21,22 While GCM shares advantages of other methods for consensus building, such as Delphi, the affinity diagram approach, and focus groups, it mitigates some of their drawbacks. In contrast to the Delphi method, GCM requires only one round of structuring the data, which is generated by the participants themselves, not by the facilitator. Unlike the analysis of focus group data, GCM does not rely on researcher-driven classification schemes and does not need an intercoder discussion. In GCM, researchers use respondents’ original, intact statements as units of analysis and then quantitatively aggregate their contributions through multidimensional scaling (MDS) and hierarchical cluster analysis. Consensus is not forced; it emerges objectively through the multivariate statistical analysis.
Setting, process, and participants
We undertook this study as part of the Patient Project, a multicountry European Union–funded project. The primary focus of this project is to develop a curriculum for handoff training for medical students at a European level; thus, our study was limited to addressing training for medical students.23 We conducted this study at the School of Medicine, University College Cork (UCC), Ireland; the Open Universiteit of the Netherlands (OUNL); Rheinisch-Westfälische Technische Hochschule Aachen University (UKA), Germany; and Fundacion Avedis Donabedian (FAD), Barcelona, Spain, during May to June 2013. Because this research was conducted in established educational settings and involved normal educational practices, it was exempt from institutional review board approval at all four participating institutions.
We invited a group of 127 experts to participate in a GCM process to identify a common understanding about learning outcomes for handoff training for medical students. We invited experts to participate in the GCM via e-mail. As the participants remained anonymous throughout the GCM process, the only record linking the subject and the research would be the consent document, and the principal risk would be potential harm resulting from a breach of confidentiality; therefore, written consent was not sought. Also, the research involved no procedures for which written consent is normally required outside the research context. Participation in the GCM was deemed to indicate that consent had been given. The participants were not offered or given any incentives to participate in the GCM.
We chose GCM because it is a structured, mixed-methods approach applying both quantitative and qualitative measures to identify an expert group’s common understanding about a particular issue.24–26 The method involves the expert participants in idea generation, sorting of ideas into groups, and rating the ideas on chosen values—for this study, importance and difficulty to achieve. The participants work individually. The GCM method does not need interexpert discussion to reach an agreement. When sorting the statements into groups, the participants, in fact, “code” the text themselves. Then it is the advanced statistical techniques of MDS and hierarchical cluster analysis, performed by the research team, that quantitatively aggregate individual inputs from the participants to reveal objective patterns in the data.27,28 One of the distinguishing characteristics of GCM is visualization, which is a substantial part of the analysis. Visualization allows for grasping at once the emerging data structures and their interrelationships, and for interpreting them to support decision making.
We designed a selection framework for identifying experts in medical education to contribute to the GCM process. They were mainly, but not exclusively, from the medical schools and related hospitals associated with UCC, UKA, and FAD. They were academics (non-discipline-specific) or clinicians (doctors or nurses) involved in medical education at the undergraduate or postgraduate level. Using this framework, we constructed a list of experts to participate in the GCM. We avoided duplication of experts by undertaking a cross-check process. We then invited the experts to participate in the GCM via e-mail, with one subsequent reminder e-mail, and asked five demographic questions: country; discipline; professional experience in clinical health care; years teaching in medical education; and years in curriculum development in health care.
Group concept mapping
The GCM procedure consisted of five phases: idea generation (brainstorm) and idea pruning, sorting of ideas into groups, rating on two values (importance and difficulty to achieve), analysis of the data, and interpretation of the results. The first three steps were performed by the experts; the last two steps were performed by the research team. We invited the experts through the project’s online management system and explained the rationale for the study. We assured the experts of anonymity with regard to their inputs and provided them with a link to the brainstorming page of a Web-based tool for data collection and analysis (Concept System Global, 2013). They could visit the Web site as many times as they needed using their own username and password. On the brainstorming page we asked them to generate ideas by completing the trigger statement “One specific learning outcome of the Handoff module is …” using short phrases or statements, each expressing one thought. We gave the experts two weeks to complete the idea generation task.
When the idea generation phase was completed, we used convenience sampling to select four of the experts—one each from OUNL, FAD, UCC, and UKA—and invited them to participate in the pruning phase before sending the final list of statements to all the experts for sorting and rating. We asked the four experts to check, edit, and, if needed, reduce the ideas to a manageable list (about 100) for the next stages of sorting and rating. Guidelines for idea pruning were as follows: look for statements that contain more than one idea and, if needed, split them; remove identical ideas; check whether the ideas address the focus point, and delete those that don’t; make sure that each unique idea is included in the final list; and make sure that the idea is clear, concise, and understandable.
This final list was randomized and then made available again to the full group of experts, firstly for the sorting of ideas into groups based on similarity in meaning and giving names to the groups, and secondly for the rating of the ideas on two values of importance and difficulty. We gave the experts three weeks to complete both sorting and rating. We sent a reminder after two weeks. As in the brainstorming phase, the participants could save their work and return later to continue.
The primary outcome measures in our study were the themes that emerged from the GCM with which to select learning outcomes and define them to form a basis for a curriculum on handoff training for medical students. The secondary outcome measures were the rating of these themes on importance to achieve and difficulty to achieve.
The first outcome of the MDS is a point map. This two-dimensional graphical configuration represents the learning outcomes (as points on the map) and shows how they are related. The closer the points are to each other, the closer in meaning they are. This is a result of more people grouping them together during the sorting.
An important question at this point is how well this configuration represents the experts’ original judgment. To determine the extent to which the raw qualitative judgment of the experts matches the quantitative conceptual model in the map, we look at the stress value, a statistic generated by MDS to indicate the goodness-of-fit between the two. For GCM studies, it should fall between 0.205 and 0.365.21 The stress index of our study is 0.338, which is within this range and indicates that the map is a good representation of the experts’ original sorting. In addition, MDS assigns each statement a bridging value between 0 and 1. A low bridging value means that a statement was mostly grouped with the statements around it on the map; a higher bridging value means that it was also grouped with statements located farther away. Some groups of learning outcomes can already be detected by simple visual inspection, but to make the process more systematic, we applied hierarchical cluster analysis. The analysis starts with the assumption that each idea is its own cluster (in our case, 107 clusters) and successively merges ideas until only one cluster remains. To determine the number of clusters that best reflects the data, we examined the solutions provided by the hierarchical cluster analysis, ranging from 16 clusters down to 5.28
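The stress statistic itself is straightforward to compute. Below is a minimal sketch of Kruskal’s stress-1 with invented dissimilarities and coordinates; note that this simplified version compares raw dissimilarities directly with map distances, whereas the nonmetric MDS used in GCM software compares distances with monotonically transformed disparities.

```python
import numpy as np

def stress1(dissim, coords):
    """Kruskal's stress-1: normalized mismatch between the input
    dissimilarities and the distances between points on the map."""
    i, j = np.triu_indices(len(coords), k=1)
    target = dissim[i, j]                                   # input dissimilarities
    fitted = np.linalg.norm(coords[i] - coords[j], axis=1)  # map distances
    return float(np.sqrt(np.sum((fitted - target) ** 2) / np.sum(fitted ** 2)))

# Three collinear "statements": a 2-D configuration can reproduce
# these dissimilarities exactly, so the stress is 0.
dissim = np.array([[0.0, 1.0, 2.0],
                   [1.0, 0.0, 1.0],
                   [2.0, 1.0, 0.0]])
perfect = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
print(stress1(dissim, perfect))                  # 0.0

# A distorted configuration fits the dissimilarities less well,
# so the stress value rises.
distorted = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
print(round(stress1(dissim, distorted), 3))      # 0.207
```

Lower stress means the point map preserves more of the experts’ original sorting structure, which is why the reported value of 0.338 falling inside the accepted GCM range supports the validity of the map.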
We prepared a checklist with the suggestions made by the hierarchical cluster analysis for merging clusters. We used convenience sampling to select and invite another four experts from OUNL, FAD, UCC, and UKA, who had participated in the sorting and rating phases, to help decide on the best-fitting solution. At each step of the merging, the experts had to indicate whether they “agreed,” were “undecided,” or “disagreed” with the suggestion. After the experts completed this assignment, the candidate final solutions were either 9 or 10 clusters. We checked the 9- and 10-cluster solutions again and selected the 10-cluster solution as the best fitting. The next step in making sense of the data was to attach meaningful labels to the clusters. Three labeling methods are available: use the labels suggested by the system; examine the bridging values of the statements composing the cluster; or read through all the statements in a cluster and capture their common theme in a label. To define the cluster labels, we combined all three methods.
We used MDS and agglomerative hierarchical cluster analysis with Ward’s algorithm to analyze the data.27,28 Nonmetric MDS takes the group proximity matrix and represents it as a point map, on which statements are displayed as points in a two-dimensional space with distances between them reflecting how frequently participants grouped them together. Cluster analysis uses the x, y coordinates from the MDS to group statements into clusters that represent underlying themes. Hierarchical cluster analysis starts with the assumption that each idea is its own cluster and successively merges ideas until only one cluster remains. Human experts then examine the proposed solutions and decide on the number of clusters that best represents the data and reflects the context of the study. See Figure 1 for an overview of the GCM process.
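As an illustration of this pipeline, the sketch below runs hypothetical sorting data through a point-map embedding and Ward clustering. The sort data are invented, and for simplicity the sketch uses classical (metric) MDS via double-centering rather than the nonmetric MDS used in the study; the idea is the same, namely that statements sorted together often should end up close together on the map.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, ward

# Hypothetical sorting data: each row is one expert's partition of six
# statements into piles (pile labels are arbitrary per expert).
sorts = np.array([
    [0, 0, 1, 1, 2, 2],
    [0, 0, 1, 1, 2, 2],
    [0, 0, 0, 1, 2, 2],
])
n = sorts.shape[1]

# Group proximity matrix: for each pair of statements, the fraction of
# experts who did NOT sort the two into the same pile.
together = np.zeros((n, n))
for row in sorts:
    together += (row[:, None] == row[None, :])
dissim = 1.0 - together / len(sorts)

# Classical MDS: double-center the squared dissimilarities and take the
# top two eigenvectors as 2-D map coordinates.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (dissim ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
top2 = np.argsort(eigvals)[::-1][:2]
coords = eigvecs[:, top2] * np.sqrt(np.clip(eigvals[top2], 0, None))

# Ward's agglomerative clustering on the map coordinates; cutting the
# dendrogram at three clusters recovers the underlying groups.
labels = fcluster(ward(coords), t=3, criterion="maxclust")
print(labels)
```

In a real GCM analysis the researchers, rather than a fixed cut-off, inspect several candidate cluster solutions (here, from 16 down to 5) and choose the one that best reflects the data and the study context.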
We invited 127 experts to participate. Of these, 74 (58%) registered initially for online data collection. Of these 74 experts, 45 (61%) contributed effectively to the brainstorming session. Twenty-two of the 45 (49%) experts who contributed to the brainstorming phase completed the sorting and rating phases (17% of the initial 127 invitees). The 45 experts produced 229 statements during the brainstorming phase. The 4 experts we selected reduced these to 107 statements after the pruning phase. The Ward agglomerative hierarchical cluster analysis placed the statements in 10 clusters for labeling. For the demographic characteristics of experts who participated in brainstorming, see Tables 1 and 2.
Primary outcome measures
We identified themes for the 10 clusters, which serve as labels for learning outcomes, shown in List 1. The themes cover knowledge (e.g., “Being aware of errors and risks in handoffs”), skills (e.g., “Demonstrate proficiency in handoff in simulation”), and attitudes (e.g., “Engage with colleagues, patients and carers”).
List 1 Themes Identified by Participants for Learning Outcomes From the Group Concept Mapping, Rating of the Themes by the Participants on Importance to Achieve, and Rating by Difficulty to Achieve, From a Multicountry European Study of Group Concept Mapping and Learning Outcomes for Medical Student Handoffs, 2013a
Themes identified for learning outcomes
- Application of structured handoff methods
- Demonstrate proficiency in handoff in workplace
- Being able to perform handoff accurately
- Demonstrate proficiency in handoff in simulation
- Learn how to communicate effectively
- Prepare clinical documentation
- Engage with colleagues, patients, and carers
- Being aware of errors and risks in handoff
- Understand the benefits and challenges of handoff
- Clinical performance
Rating on importance: Range 1–5, with 1 indicating least important and 5 indicating most important (themes listed from lowest to highest rated)
- “Application of structured handoff methods”; “Learn how to communicate effectively”; “Prepare clinical documentation”
- “Clinical performance”
- “Engage with colleagues, patients, and carers”; “Understand the benefits and challenges of handoff”
- “Being aware of errors and risks in handoff”; “Demonstrate proficiency in handoff in simulation”
- “Being able to perform handoff accurately”; “Demonstrate proficiency in handoff in workplace”
Rating on difficulty to achieve: Range 1–5, with 1 indicating easiest to achieve and 5 indicating most difficult to achieve (themes listed from easiest to most difficult)
- “Understand the benefits and challenges of handoff”
- “Application of structured handoff methods”
- “Being able to perform handoff accurately”; “Being aware of errors and risks in handoff”; “Prepare clinical documentation”; “Learn how to communicate effectively”
- “Demonstrate proficiency in handoff in workplace”
- “Demonstrate proficiency in handoff in simulation”; “Engage with colleagues, patients, and carers”
aGroup concept mapping is a structured, mixed-methods approach applying both quantitative and qualitative measures to identify an expert group’s common understanding about the learning outcomes for training medical students in handoff.
Secondary outcome measures
The group of experts rated the statements on a 1–5 rating scale for importance (1 = not at all important; 5 = very important) and for difficulty to achieve (1 = easiest to achieve; 5 = most difficult to achieve). List 1 shows the clusters by importance and by difficulty to achieve. The clusters entitled “Being able to perform handoff accurately” and “Demonstrate proficiency in handoff in workplace” were rated as most important. “Demonstrate proficiency in handoff in simulation” and “Engage with colleagues, patients, and carers” were rated most difficult to achieve.
There are several implications of our GCM study. We identified 10 themes with which to select learning outcomes and operationally define them to form a basis for a curriculum on handoff training for medical students. In contrast to the traditional position, which sees learning outcomes only as expected results of teaching and learning, our findings emphasized the need to also consider the means by which to achieve the desired learning outcomes, reflected in the two clusters on performing in simulated and real settings. The results of our study are in line with other studies.24 We identified similar issues, such as the need for skills in applying structured handoff methods and tools, standardization of handoff procedures, effective communication and collaboration between different stakeholders, and the role of workplace learning. At the same time, our study extended the scope of handoff topics and teaching methods to performing handoff accurately, minimizing errors and risks, understanding the effect of good practices in handoff, and recognizing the consequences of improper handoff. Our study also emphasized the value of a simulated environment for teaching and learning handoff. The learning outcomes were also ranked in terms of how important they are and how easy or difficult they may be to achieve. For example, some learning outcomes, such as “Being able to perform handoff accurately,” “Demonstrate proficiency in handoff in workplace,” “Demonstrate proficiency in handoff in simulation,” and “Engage with colleagues, patients, and carers,” were rated very important but were considered difficult to achieve.
This discrepancy between importance and difficulty to achieve may reflect the costs and staffing resources needed for simulation, as well as the challenges, such as supervision and indemnity, encountered in involving undergraduates in work-based clinical practice within the participants’ organizations.
Our study has several strengths. We used a structured, mixed-methods approach applying both quantitative and qualitative measures to provide an expert-informed basis for defining learning outcomes. Our study included experts from four European countries, who generated the groups of statements that provided the themes for the learning outcomes. According to a meta-analytic review of 69 GCM studies conducted in the last 10 years, a sample of 20 to 30 participants is optimal for generating valid and reliable results from sorting data.26 The variability of the stress value increased when 15 or fewer sorters were involved; no improvement in the stress value was detected when more than 35 sorters were included. Twenty-two participants in our study sorted the statements, which is within the recommended range. Our study’s stress index of 0.338 is also within the suggested limits and indicates good internal representational validity.
The limitations of our study include its small sample and the generalizability of its findings. A higher number of experts in the rating phase would have been desirable; however, sorting is the primary activity in GCM studies, and rating is secondary. Also, although our study suggests what we could expect from learners in terms of knowledge, skills, and attitudes, the level of these categories still needs to be determined, for example by using taxonomies in the cognitive and affective domains. Finally, as most of the participants in our study came from the three medical schools and related hospitals associated with UCC, UKA, and FAD, the findings and recommendations should be applied only to these institutions. Interested parties could either use the findings to define the learning outcomes of handoff teaching relevant to their own medical schools or replicate the study to generate original findings.
The significance of our study is that a future handoff training curriculum for medical students might be designed on the basis of these learning outcomes, possibly at a European or international level, similar to the World Health Organization’s Patient Safety Curriculum Guide.29 Assessing the competencies associated with the learning outcomes is paramount. Valid assessment of these competencies may be achieved within the traditional objective structured clinical examination setting and in high-fidelity simulated environments, using valid and reliable metrics. Further research should focus on the effects of handoff training on medical practice. The next step is the design of the curriculum and its implementation, followed by assessment of whether this educational strategy succeeds in preparing new medical graduates to be proficient in the handoff process. Such research might ask, for example: Has patient safety benefited? Has there been a decrease in reported medical error? Has the quality of discharge communications improved?
Our GCM study identified expert consensus on 10 themes for designing learning outcomes for a handoff training curriculum for medical students. Those learning outcomes considered most important were also among those considered most difficult to deliver. We believe that there is an urgent need to address the issue of preparing newly qualified doctors to be proficient at handoff at the point of graduation; otherwise, this is a latent error within health care systems.
1. British Medical Association, Junior Doctors Committee, National Patient Safety Agency, National Health Service Modernisation Agency. Safe Handover: Safe Patients. Guidance on Clinical Handover for Clinicians and Managers. London, UK: British Medical Association; 2005
2. Wong MC, Yee KC, Turner P; eHealth Services Research Group, University of Tasmania. Clinical Handover Literature Review. Tasmania, Australia: Australian Commission on Safety and Quality in Health Care; 2008
3. Accreditation Council for Graduate Medical Education. Report of the ACGME Work Group on Resident Duty Hours. Chicago, Ill: Accreditation Council for Graduate Medical Education; 2002
4. Nasca TJ, Day SH, Amis ES Jr; ACGME Duty Hour Task Force. The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010;363:e3
5. Department of Health. European Working Time Directive. London, UK: Department of Health; 2010
6. Cleland JA, Ross S, Miller SC, Patey R. “There is a chain of Chinese whispers”: Empirical data support the call to formally teach handover to prequalification doctors. Qual Saf Health Care. 2009;18:267–271
7. Solet DJ, Norvell JM, Rutan GH, Frankel RM. Lost in translation: Challenges and opportunities in physician-to-physician communication during patient handoffs. Acad Med. 2005;80:1094–1099
8. Horwitz LI, Krumholz HM, Green ML, Huot SJ. Transfers of patient care between house staff on internal medicine wards: A national survey. Arch Intern Med. 2006;166:1173–1177
9. Beasley R, Bernau S, Aldington S, et al. From medical student to junior doctor: The medical handover—a good habit to cultivate. Student BMJ. 2006;14:188–189
10. Maher B, Drachsler H, Kalz M, et al. Use of mobile applications for hospital discharge letters—improving handover at point of practice. Int J Mob Blended Learn. 2013;5:19–42
11. Drachsler H, Kicken W, van der Klink M, Stoyanov S, Boshuizen HP, Barach P. The Handover Toolbox: A knowledge exchange and training platform for improving patient care. BMJ Qual Saf. 2012;21(suppl 1):i114–i120
12. Gordon M, Findley R. Educational interventions to improve handover in health care: A systematic review. Med Educ. 2011;45:1081–1089
13. Jeffcott SA, Evans SM, Cameron PA, Chin GS, Ibrahim JE. Improving measurement in clinical handover. Qual Saf Health Care. 2009;18:272–277
14. Arora VM, Johnson JK, Meltzer DO, Humphrey HJ. A theoretical framework and competency-based approach to improving handoffs. Qual Saf Health Care. 2008;17:11–14
16. Kordi R, Dennick RG, Scammell BE. Developing learning outcomes for an ideal MSc course in sports and exercise medicine. Br J Sports Med. 2005;39:20–23
17. Tonni I, Oliver R. A Delphi approach to define learning outcomes and assessment. Eur J Dent Educ. 2013;17:173–180
18. Schiekirka S, Reinhardt D, Beißbarth T, Anders S, Pukrop T, Raupach T. Estimating learning outcomes from pre- and posttest student self-assessments: A longitudinal study. Acad Med. 2013;88:369–375
19. Johnson O, Bailey SL, Willott C, et al.Global Health Learning Outcomes Working Group. Global health learning outcomes for medical students in the UK. Lancet. 2012;379:2033–2035
20. Burke S, Martyn M, Thomas H, Farndon P. The development of core learning outcomes relevant to clinical practice: Identifying priority areas for genetics education for non-genetics specialist registrars. Clin Med. 2009;9:49–52
21. Trochim WMK. An introduction to concept mapping for planning and evaluation. Eval Program Plann. 1989;12:1–16
22. Kane M, Trochim WMK. Concept Mapping for Planning and Evaluation. Applied Social Research Methods Series. Thousand Oaks, Calif: Sage; 2007
24. Stoyanov S, Boshuizen H, Groene O, et al. Mapping and assessing clinical handover training interventions. BMJ Qual Saf. 2012;21:i50–i57
25. Davison M. Multidimensional Scaling. New York, NY: John Wiley; 1983
26. Anderberg MR. Cluster Analysis for Applications. New York, NY: Academic Press; 1973
27. Ward JH Jr. Hierarchical grouping to optimize an objective function. J Am Stat Assoc. 1963;58:236–244
28. Rosas SR, Kane M. Quality and rigor of the concept mapping methodology: A pooled study analysis. Eval Program Plann. 2012;35:236–245
© 2015 by the Association of American Medical Colleges