Need for Better Clinical Management and Leadership
The Health and Social Care Act 2012 represented the most significant and complex reforms to the National Health Service (NHS) in England since its formation in 1948 (Edwards, 2013). The Act details a number of legislative changes, including the abolition of primary care trusts and strategic health authorities, the establishment of Health and Wellbeing Boards, and the creation of 211 new Clinical Commissioning Groups. There are a number of highly debated elements in the Act. Commissioning has attracted a great deal of controversy due to the emphasis on competition; proponents argue that this will drive up service quality, whereas critics claim that increasing commercial sector involvement will lead to NHS privatization, fragmentation of care, and undermining of core services (Walshe & Ham, 2011). Furthermore, an investigation of the Clinical Commissioning Group boards found that one third of General Practitioner (GP) members had a conflict of interest due to financial relationships with private health care providers (Iacobucci, 2013). The complexity and rules of the new system have also been identified as a cause for concern, causing confusion amongst commissioners and providers, who need to develop an understanding of the relationships between the new and existing bodies (Edwards, 2013). The Act has also been implemented at a time of unprecedented austerity in the NHS budget, leading to concerns from clinicians and politicians (Hansard: House of Commons, 2012).
In order to successfully implement the reforms and fiscal efficiencies, strong and effective clinical leadership is necessary (Edwards, 2013; The King's Fund, 2012). Clinical leadership as a concept has no accepted definition but is widely acknowledged as being essential to the day-to-day running of, and implementing change in, high-performing health care organizations (Swanwick & McKimm, 2011). The rationale for clinical leadership is simple: Clinicians are at the core of providing high-quality health care and are therefore ideally placed to identify opportunities for service development and lead organizational change. Kaiser Permanente in the United States is a well-known example of a clinical leadership-centric organization, which runs economically efficient models of integrated care, with high clinical quality and high patient satisfaction (Ham & Dickinson, 2008). Clinical leadership is not a new concept. Thirty years ago, the Griffiths report identified a lack of general management as the main weakness of the NHS and recommended that “hospital doctors must accept the managerial responsibility that comes with clinical freedom” (Griffiths, 1984). However, the greatest drive toward clinical leadership came in the mid to late 2000s, with the publications from Healthcare for London and the Next Stage Review, both led by Lord Darzi, which encouraged clinical leadership to promote high-quality care (Darzi, 2007, 2008). A further report, Doctors in Society, published by The Royal College of Physicians, stated that “leadership in medicine today is seriously failing” and recommended that each doctor develop leadership and management skills, regardless of their current role (Royal College of Physicians of London, 2005).
Need for Clinical Management Learning and Education
Clinicians perceive significant barriers to taking on leadership roles within the NHS, including scepticism as to the value of spending time on leadership, a lack of financial or professional incentive for taking on leadership roles, and a lack of training and development opportunities (Mountford & Webb, 2009). A recent survey of 1,479 junior doctors identified lack of time working with managers and a perception of being poorly valued by their organization as barriers to promoting their own leadership (Gilbert, Hockey, Vaithianathan, Curzen, & Lees, 2012). Furthermore, a survey by the King's Fund found that newly appointed clinical directors typically felt underprepared for their leadership role (Giordano, 2010).
If clinical leadership is to reach a higher standard and become more widely practiced, all clinicians need to develop a greater understanding of the “physiology” of the NHS, including structures, organizations, funding, and governance, together with the influential internal and external forces (Warren & Carnall, 2011). Clinicians who wish to take on a more significant leadership role will need to acquire a broader range of leadership skills and styles in order to exert more influence on the system. Such training is difficult to define and deliver in an appropriate setting, especially for trainee and junior clinicians, for whom there is a lack of accessible training opportunities at both undergraduate and postgraduate levels (Coltart et al., 2012; Warren & Carnall, 2011).
Warren and Carnall have summarized the multiple leadership training modalities utilized in the NHS (Warren & Carnall, 2011). The most common are short schemes or courses, which are most beneficial if the material is immersive or relevant to current practice. Mentoring can provide benefits for both mentor and mentee, although such relationships are difficult to form and sustain. One-to-one coaching is utilized usually by more senior clinicians to enhance specific skills. Networking with peers or senior leaders can provide a platform to develop a wide variety of skills by involvement with projects or areas of practice in different health care sectors. Finally, experiential learning gives clinicians the opportunity to work in an area of management or policy for a defined period of time, either alongside existing clinical commitments or as an out of training experience. Such schemes include the NHS medical director’s clinical fellow scheme Prepare to Lead and Darzi Fellowships (Brown, Ahmed-Little, & Stanton, 2012; Coltart et al., 2012). None of the above schemes form a compulsory part of medical training, and in practice, only those with an interest in clinical leadership or management are likely to apply to partake.
Learning in Behavioral Simulations
In a behavioral simulation, participants are absorbed into a dynamic, role-playing setting. The experiential element is purported to have significant educational benefits over more traditional training methods and has been identified as lacking in current leadership training modalities (Agboola Sogunro, 2004; Frich, Brewster, Cherlin, & Bradley, 2015). Although simulations have been effectively used in medical, aviation, and military training, simulations for leadership development and policy change are acknowledged to be an underresearched area of health care practice (Salas, Paige, & Rosen, 2013). The first prominent behavioral simulations in the United Kingdom were developed and run by the Office for Public Management in association with the East Anglian regional health authority. The simulations, known as the “Rubber Windmill,” aimed to see how a change in health policy—the introduction of internal markets—would impact on health services in the future (The Rubber Windmill, 1990). The concept was popularized further by the Windmill simulations run by the King’s Fund; others have been run in the United Kingdom, Europe, and United States, reporting educational benefits for participants and changes in organizational planning and strategy as a direct result of simulation processes and outcomes (Harvey, Liddell, & McMahon, 2009; Liddell & Parston, 1990).
In a behavioral simulation, participants have similar roles to those in real life, and the content reflects real-life or predicted circumstances. There are no rules in the simulations—participants are encouraged to act as they would in reality, responding to a particular challenge using the information available. The processes and consequences of the interactions between the participants determine the simulation outcomes (Cohen, Darzi, & Vlaev, 2013).
This study aims to test for the first time whether participation in a behavioral simulation is an acceptable and effective modality for improving clinical management and leadership capability and the understanding of health care reforms.
This was an experimental study, assessing the efficacy and acceptability of a single-day behavioral role-playing simulation (“The Crucible”) to help develop clinical leadership skills and understanding of the new NHS structure. A single participant group self-assessed competencies pre- and postintervention.
The simulation was designed by SH and LMcM for NHS London in collaboration with the Centre for Health Policy, Imperial College London; both organizations were concerned that there was a significant gap in the clinical leadership skills in the group of senior registrars and newly appointed consultants who would, over the next few years, be required to help lead their organizations through the period of unprecedented change (in the United States, a senior registrar equivalent is a senior or chief resident; a consultant is an attending physician). NHS London oversaw the ethical design, conduct, and reporting of the simulation.
Simulation health economy. The Crucible simulation was based on two adjoining fictitious geographical areas, Barnden and Hambridge, which are located in the northeast of Capital City in 2015. With a combined population of 800,000, both areas have a wide variety of socioeconomic deprivation and health needs, including a doubling of diabetes rates in the past 5 years; together they have an increasing population and an above average birth rate. Barnden has a large commercial area with office blocks serving large corporations and governmental bodies. Hambridge has a more mixed economy. Both areas contain a mixture of social housing and top-end residential properties (see Supplemental Digital Content 1, http://links.lww.com/HCMR/A25).
Simulation issues and challenges. Participants are told in their briefing pack that the poor performance of health services in the boroughs has been highlighted in a recent television documentary in which stakeholders were interviewed to provide a synopsis of current tensions and challenges facing health care in the region. Statements from stakeholders describing their positions and concerns are provided to participants. The local council and health groups are keen that this escalating situation is resolved and plans to improve health care delivery are developed; this is the setting for The Crucible simulation.
Procedure. The Medical Director of NHS London (A. M.) wrote to the Clinical Director of each of the London NHS trusts to recommend senior registrars and newly appointed consultants who would benefit from attending the simulation. Participation was entirely voluntary. The simulation was funded by NHS London; participants were able to attend free of charge. The simulations took place at a dedicated venue in central London, in one large room with presentation facilities. One week prior to the simulation, an e-mail was sent to each participant containing a briefing pack detailing the simulation structure and objectives, together with details of the simulation health economy and demography.
Eighty-six doctors from 17 London area health care organizations accepted the invitation to attend one of the two simulation dates. Seventeen doctors withdrew either on the day or in the week leading up to the simulation. Sixty-nine participants attended in total: 36 in the first run and 33 in the second. Of the 69 participants, 32 were currently working at Consultant grade and 35 at Registrar grade. Two participants did not identify their current grade. Participants came from a wide range of specialties, including medicine, surgery, public health, dentistry, psychiatry, and anaesthetics.
Participants were given two lecture-style presentations, introducing the background of NHS reform and The Crucible concept. Participants were then assigned to roles as part of one of the key stakeholder groups in the simulation (e.g., the Quality and Economic regulator, the Clinical Commissioning Group, WeCare). Field coaches were introduced—these were experts who worked in similar roles in real life and could pass on advice and expertise to the participants. The Council leader (played by S. H.) started the simulation by detailing her expectation—that The Crucible would enable stakeholders to come to agreements as to how the health care challenges in Barnden and Hambridge could be met. There was no set structure to the discussions; groups and individuals were free to talk with whom they wished, at any time. The simulation itself ran for approximately 3 hours, before each group presented outcomes and conclusions. A short group debriefing session was conducted immediately afterward.
Feedback and reactions were captured by a paper questionnaire completed immediately following the end of the simulation, based on a previously published 5-point Likert scale (1 = strongly disagree, 5 = strongly agree; Haller et al., 2008).
Perceived knowledge and learning were evaluated using two self-assessment questionnaires. The initial seven-item questionnaire examined knowledge of the health care system in the United Kingdom using a 5-point scale. The second questionnaire was developed using the domains established in The Medical Leadership Competency Framework. Developed jointly by NHS Institute for Innovation and Improvement and the Academy of Medical Royal Colleges, this framework details the leadership competencies that clinicians need to fulfil in order to effectively partake in the planning, delivery, and transformation of health services (NHS Institute for Innovation and Improvement, 2008). Five competency areas are described, each with four subsections; participants self-assessed their competency using a 7-point Likert scale on each of the 20 areas (1 = poor to 7 = excellent).
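As a minimal illustration of how such a questionnaire could be aggregated (this is a hypothetical sketch, not the study's analysis code; the helper name and item ordering are assumptions, though the five domain labels follow the MLCF), the 20 self-assessed items can be averaged into domain scores:

```python
# Illustrative scoring sketch: 20 self-assessed items on a 1-7 Likert scale,
# four per MLCF competency domain. Hypothetical helper, not the study's code.
MLCF_DOMAINS = [
    "Demonstrating Personal Qualities",
    "Working with Others",
    "Managing Services",
    "Improving Services",
    "Setting Direction",
]

def domain_means(item_scores):
    """Average the four items within each of the five MLCF domains.

    item_scores: list of 20 Likert ratings (1-7), ordered by domain.
    """
    if len(item_scores) != 20:
        raise ValueError("expected 20 item scores (5 domains x 4 items)")
    return {
        domain: sum(item_scores[i * 4:(i + 1) * 4]) / 4
        for i, domain in enumerate(MLCF_DOMAINS)
    }
```

Pre- and postsimulation responses scored this way yield one mean per domain per participant, which is the shape of data summarized in Table 2.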
Intention to change behavior was investigated using established principles published by Ajzen in the Theory of Planned Behavior (Ajzen, 1991). Twenty-four questions were modified by a behavioral economist and experimental psychologist and grouped into categories: Intentions (2), Attitudes (6), Subjective Norms (4), Capability (6), and Opportunity (6). Questions were rated using a 7-point Likert scale (1 = strongly disagree to 7 = strongly agree). Internal consistency of the scale was evaluated using Cronbach's alpha.
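Cronbach's alpha follows directly from the item-level variances. A minimal stdlib sketch (illustrative function names and data, not the study's analysis code) shows the computation, together with the alpha-if-item-deleted check that underlies pruning weakly performing questions:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

    items: one list of scores per questionnaire item (all the same length,
    one entry per respondent).
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var_sum = sum(variance(col) for col in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

def alpha_if_item_deleted(items):
    """Alpha recomputed with each item dropped in turn (used when pruning items)."""
    return [cronbach_alpha(items[:i] + items[i + 1:]) for i in range(len(items))]
```

With perfectly correlated items, alpha equals 1.0; values of .70 or above are conventionally taken as acceptable reliability, the criterion applied in the Results.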
Self-efficacy in clinical leadership was also measured as a separate construct using the previously validated General Self-Efficacy Scale (Schwarzer & Jerusalem, 1995). The questions were assessed on a Likert scale (1 = disagree to 5 = agree). Participants were asked to fill in the scale relating to challenges that they faced in clinical leadership.
Qualitative feedback was also captured; participants were asked four questions, which could be completed either online or on paper:
- What do you feel are the benefits of using this type of simulation for education?
- Could another method have been more effective in conveying knowledge, skills, and behaviors?
- Did attending the simulation event make you more aware of your developmental and leadership needs? If so, how?
- Have you taken any learning from the simulation into your practice so far?
In order to establish baseline levels, the questionnaires evaluating knowledge and behavior were distributed by e-mail link 1 week prior to the simulation using an online questionnaire program (docs.google.com). Participants who did not complete the questionnaires prior to the simulation were asked to fill in a paper version on the day of the simulation, before the event started. Changes in perceived knowledge and behavior were assessed using identical questionnaires which were sent out by e-mail to all participants 2 weeks postsimulation, with a reminder e-mail at 4 weeks, using the same methodology.
Participants were assessed according to the established multimethod approach set out by Kirkpatrick (Kirkpatrick & Kirkpatrick, 2006). The four levels of Kirkpatrick's model include Level 1: Reaction—what students thought and felt about the training; Level 2: Learning—the resulting increase in knowledge or capability; Level 3: Behavior—the extent of behavior and capability improvement and implementation/application; and Level 4: Results—the effects on the business or environment resulting from the trainee's performance. All these measures are recommended for evaluation of learning in organizations, although their application increases in complexity, and usually cost, from Level 1 to Level 4. All quantitative analyses were carried out using SPSS version 19.0 (SPSS, Inc., Chicago, IL). The nonparametric Wilcoxon signed-rank test was used to determine the significance of differences between pretest and posttest mean scores. For all analyses, statistical significance was set at p < .05. Participants were free to respond to as many or as few questions as they wished, hence the variable response rates in the questions below.
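For pre/post comparisons of this kind, the Wilcoxon z statistic and the Cohen's r effect size reported in the Results (r = z / √N) can be sketched as follows. This is a simplified normal-approximation illustration without tie or continuity corrections, not the SPSS procedure the study used; sign conventions for z also vary between packages.

```python
from math import sqrt

def wilcoxon_z(pre, post):
    """Normal-approximation z for the paired Wilcoxon signed-rank test.

    Simplified: drops zero differences, averages ranks for ties, and
    applies no continuity or tie-variance corrections.
    """
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    n = len(diffs)
    # Rank absolute differences, averaging ranks across ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean = n * (n + 1) / 4
    sd = sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_plus - mean) / sd

def cohens_r(z, n_pairs):
    """Effect size r = z / sqrt(N), as reported alongside the test statistics."""
    return z / sqrt(n_pairs)
```

For example, the self-efficacy result reported below (z = −2.794, N = 46) corresponds to r ≈ −.41, a medium effect by conventional thresholds (.10 small, .30 medium, .50 large).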
Qualitative data were analyzed using the thematic, inductive approach outlined by Braun and Clarke (2006), with emergent themes identified.
Sixty-five participants completed the feedback questionnaire, summarized in Table 1. Mean responses to all questions were between 4 (agree) and 5 (strongly agree). The feedback was categorized into three sections. The highest scores in the Participation and Content section were for active involvement and event organization (4.66 and 4.71). The level of immersion felt by participants was also apparent in the qualitative feedback; “we were all very engaged…more so than in other methods of training” (P19) and “I was put into a real-life situation and used skills I didn’t even know that I had” (P30). The relevance of the simulation was also commented upon, “It was a brilliant introduction to the reality” (P50), as was the contextualization of the issues, “it really encourages active learning and understanding” (P54).
The educational feedback was also very positive, with participants scoring all categories highly. The understanding of NHS structure and organization, together with current issues, scored the highest (4.55 and 4.60). Participants’ comments reflected this; “(it gave a) flavor of the real world. (one can) appreciate the plethora of players with vested interests” (P42) and “this was an excellent way to understand a complex system” (P9).
Knowledge and Learning
Fifty-eight participants completed the seven knowledge status questions both pre- and postsimulation. In all seven areas (regulation of health care providers, role of patient organizations, organizational accountability, role of local authorities, financial climate, roles and responsibilities of commissioners and care providers), there was a significant (p < .05) improvement in perceived knowledge scores. Mean total knowledge scores per participant are presented in Table 2.
The five attributes from the Medical Leadership Competency Framework are presented in Table 2. Fifty-two participants completed both the pre- and postsimulation questionnaires. All competencies were self-assessed as being significantly higher postsimulation, and all except “Working with Others” had a Cohen’s r value of .50 or higher, representing a large change. The lowest scoring area both pre- and postsimulation was “Setting Direction,” whereas “Improving Services” showed the biggest change.
Internal consistency of questionnaire. A Cronbach’s alpha value of .70 or above is generally accepted as a good measure of scale reliability (Field, 2013). The questions for intentions, attitudes, norms, and opportunity all scored over .70, indicating good internal consistency (Table 2). For the subjective norms, this was improved to .777 after deletion of one question. However, the questions for capability scored lower at .642. Sequential removal of the two of the six questions with the lowest reliability improved this to .677, slightly below the conventional threshold but indicating reasonable reliability.
Changes in behavior postsimulation. Forty-seven participants completed the behavior questionnaires in full pre- and postsimulation (Table 2). Capability was the only construct that showed a large and significant change postsimulation (p < .001, Cohen’s r = −.619). There was a moderate but significant change in Behavioral Intentions, Attitudes and Subjective Norms postsimulation, although there was no significant change detectable in Opportunity.
Self-efficacy in clinical leadership. Forty-six participants completed the General Self-Efficacy Scale both pre- and postsimulation. There was an increase in perceived self-efficacy postsimulation (mean score presimulation, 3.87 to postsimulation, 4.08). This difference was significant at p < .005, but only at medium levels of impact (Cohen’s r = −.43, z value = −2.794).
Fifty-six participants provided qualitative feedback in the free text boxes on the electronic and paper questionnaires.
What do you feel are the benefits of using this type of simulation? The interactive and immersive nature of the simulation was frequently mentioned as being beneficial; “it allows for a rare ‘hands-on’ approach (P4),” “direct engagement (P45),” and “Immersing participants into a realistic scenario but in a ‘safe’ (nonreal) setting naturally brings out behaviors and attitudes that would not come out through reading, lectures, group discussion or perhaps through coaching (P11).” Participants also reflected on the simulation enabling them to experience the complexity of the NHS and the current reforms; “(the simulation enabled) exposure to real-life situations allowing me to understand how NHS changes could work (P23)” and “experiential learning using simulation is very helpful to understand depth and nuance of relationships within complex systems (P25).” In addition, participants commented on the simulation increasing their awareness of the roles and challenges faced by different stakeholders “…Opening up a different perspective of looking at a problem. Appreciating the roles and responsibilities of other stakeholders (P35).” The value of having multiple specialties and field coaches was also commented upon; “I met people from different backgrounds and could learn from their experience (P53).”
Could another method have been as effective? Participants could not identify another learning method that they felt would have been more effective. However, some commented that they required and wanted real-life experience to build on their simulation learning; “in-house involvement with our trusts will be a useful learning opportunity (P8).” Overall, the feedback was very positive; “Having the live experience is invaluable as we live out as real the sort of conversations and difficulties that are faced leading these organizations. Absolutely brilliant! Would love to attend a further event like this in the future (P5).”
Did attending the simulation event make you more aware of your leadership development needs? Of the 56 participants, 52 stated that participation in the simulation had increased awareness of their development needs. Participants provided multiple examples of their learning needs; “Realized leadership skills are essential and need more training (P39)” and “it made me evaluate personal strengths and weaknesses in negotiating (P38).” Participants also focused on the NHS reforms covered in the simulation, “I now intend to crystallise my knowledge of NHS reforms and management in some way and perhaps attend a formal management course (P20),” and on their lack of understanding of the future changes, “I think the simulation exposed my lack of knowledge of the forthcoming NHS bill and highlighted the need to stay fully in tune with the changes that will be taking place and the forthcoming tender process (P26).”
Have you taken any learning from the simulation into your practice so far? In the short time between the simulation and feedback, 22 clinicians stated that their experiences in The Crucible simulation had directly influenced their leadership practice. For example, one participant explained how understanding integrated care had directly impacted their practice; “As a community paediatrician, I am now keen to be part of the discharge planning process for children with neurodisability and enable a seamless transfer of care closer to home (P30).” Others expressed that they were able to contribute more to departmental strategy; “I feel more confident that I understand NHS changes, and that many of my colleagues are unlikely to have more knowledge than me. This has given me confidence in expressing my views in management meetings (P24).”
Participants also reported increased willingness to engage colleagues and allied health care professionals in service change; “I engage more junior staff not just medics but MDT (multidisciplinary) staff more (and) work on pull factors for service delivery change (P6).” Further evidence of leadership development was provided in conflict resolution and future relationships; “Yes, in terms of team development and conflict within my current team. I am also actively trying to develop my service so the knowledge of the likely future scenarios with GPs/Commissioners will come in very handy (P11).”
Improving clinical leadership and furthering the understanding of NHS reforms are key to providing high-quality health care, especially in the present financial climate. To improve leadership and understanding, it is important that further provision is made for training, which has traditionally been challenging to deliver in both undergraduate and postgraduate settings (Brown et al., 2012; Lemer & Moss, 2013; Warren & Carnall, 2011). The behavioral simulation described here is a unique concept in this regard, addressing both leadership and system level factors for a mixed group of clinicians; such immersive simulation is lacking in much of current leadership training (Frich et al., 2015). Participants on the whole found the simulation to be immersive and relevant, enabling learning in a safe, nonthreatening environment with support and guidance from peers and experts and allowing participants to play out the consequences of their actions.
This study systematically evaluates the acceptability and use of behavioral simulation for educational and behavioral change purposes. The findings demonstrate significant perceived improvements in knowledge and behaviors following a single-day program. Importantly, participants rated their capability and self-efficacy to be significantly improved, both of which act as enablers to improve clinical leadership. Furthermore, within 2 weeks of the simulation, a number of clinicians reported that they had already implemented knowledge and skills gained from The Crucible simulation to demonstrable effect in their NHS trusts. These findings build on the reports of previous simulations, which have reported educational benefit to individuals, groups, and institutions, as well as influencing future behavior, but without any formal analysis (Cohen et al., 2013).
One of the key points of the simulation was the ability to demonstrate marked improvements in all areas of the Medical Leadership Competency Framework (MLCF). The domains outlined in the MLCF are expected to be attained by every doctor, regardless of whether they wish to pursue a senior leadership role. Indeed, the MLCF has been adopted in the General Medical Council’s undergraduate competency document (“Tomorrow’s Doctors”) and also into the postgraduate syllabus of the Royal Colleges (General Medical Council, 2009). This reflects the growing recognition and understanding that, for successful system development and promotion of high-quality care, clinicians at all levels require significant leadership skills, not just those who aspire to senior leader roles (Warren & Carnall, 2011).
This simulation was run for the benefit of senior registrars and newly appointed consultants but could in future be run for more junior doctors, allied health care professionals, and hospital managers; indeed a mixture of all the above may enhance the multidisciplinary learning and understanding that comes from the simulation (Holmes et al., 2013). Junior doctors in particular have an increasingly recognized role in leading quality improvement in the NHS, although they are not always empowered or engaged to do so; simulations such as this may help, but senior leaders and managers must become more receptive to their junior colleagues (Brown et al., 2012; Gilbert et al., 2012; Lemer & Moss, 2013).
Behavioral simulations could enable clinicians and managers at all levels to build relationships and test new policy ideas. Such simulations could also provide a safe environment to practice different leadership styles, or for trainers to trial behavioral interventions on different cohorts to try to determine effects on processes or outcome. For example, errors or suboptimal processes could be built in to the simulation to challenge participants to identify areas in which practice could be improved (King, Holder, & Ahmed, 2013). In addition, the application of a clinical and patient safety focus to managerial-focused simulations can enable greater understanding and insight (Cooper et al., 2011). The simulations also offer a unique opportunity for experts to aid the leadership development of participants. For example, the presence of coaches in The Crucible enabled participants to learn contextually and experientially from experts to whom they may not otherwise have access. Future simulations could be part of recognized leadership strategies, including coaching and action learning, and may form the start of more formal mentoring relationships (Warren & Carnall, 2011).
There are a number of limitations to this study that merit further discussion. The Crucible simulation was run over one day and did not have any direct opportunities for ongoing associated learning, such as participant–facilitator discussion forums or a link into a relevant leadership opportunity/mentor at the participants’ own trusts, such as a paired learning scheme (Klaber, Lee, Abraham, Lemer, & Smith, 2011). Such schemes may have contributed to ongoing workplace learning and development. In addition, no qualitative data were collected from the group debriefing session; the facilitators (S. H. and L. M.) felt that recording with microphones or observer note-taking may have altered participant behavior. We accept that the data thus captured may have yielded further insights (Issenberg, McGaghie, Petrusa, Lee Gordon, & Scalese, 2005). However, a survey instrument was used to capture the learning generated in the small group discussions that followed immediately after the simulation, from which a report about participants’ future development needs was prepared for NHS London.
It should be noted that the participants in The Crucible came from a wide variety of hospital trusts and clinical backgrounds. However, no primary care GPs took part due to budgetary constraints, an unfortunate omission given the increasingly important role of GPs as both commissioners and providers. No participant commented upon the lack of GPs, perhaps reflecting the difficult relationship between primary and secondary care in real life.
Expert assessment of performance would have been prohibitively challenging and costly, so all the data collected in the simulation were provided by self-assessment. The accuracy of this is unknown, but in the leadership field, self-assessment is increasingly practiced (Frich et al., 2015; Schoemaker, Krupp, & Howland, 2013). Indeed, the NHS Leadership Academy actively encourages self-assessment of leadership skills for clinicians and has links to self-assessment tools based on the MLCF available online (NHS Leadership Academy, 2013). Future studies may benefit from verbal or paper-based objective assessment (Frich et al., 2015).
Prior to taking part in The Crucible, participants were informed that they would be expected to provide feedback and assessments for analysis, which would be held in confidence by Imperial College London and anonymized before being passed on to NHS London. Despite this, response rates were suboptimal, with only two thirds of participants responding to all questions.
There are a number of implications to consider. First, policymakers should consider using behavioral simulations when undertaking significant policy change or when seeking to improve understanding of existing policy among individuals or organizations. The process of simulation development and outcomes analysis may help define policy further and potentially identify adverse consequences for an organization or system. Second, simulations can be used as a basis for enhancing skill sets, including leadership, in this case using field coaches and experiential learning. Third, simulations can be run across multiple disciplines, extending to allied health care professionals and managers, to improve cross-specialty collaboration; professional networks forged during simulations may enhance future collaboration. The practicalities of designing, delivering, and analyzing behavioral simulations must also be considered (Cohen et al., 2013). Design can be complex and costly because of the time and knowledge base required to create an immersive and realistic simulation. Delivery may require a number of experienced facilitators, together with field coaches and role-players, to enhance the learning and realism. Finally, data capture can be challenging, both in terms of simulation outcomes and individual or organizational learning. Despite these challenges, such simulations may improve leadership skills and understanding of organizational change in a more profound manner than traditional learning methods.
This study has systematically shown the effective use of a single-day behavioral simulation for educational purposes in the fields of clinical leadership and health policy change. Participants reported successfully applying skills in practice soon after the simulation. Ongoing follow-up of participants, together with formal linkage to leadership programs, will enable the worth of such simulations to be determined in the longer term.
Agboola Sogunro, O. (2004). Efficacy of role-playing pedagogy in training leaders: Some reflections. Journal of Management Development, 23(4), 355–371.
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
Brown, B., Ahmed-Little, Y., & Stanton, E. (2012). Why we cannot afford not to engage junior doctors in NHS leadership. Journal of the Royal Society of Medicine, 105(3), 105–110. doi:10.1258/jrsm.2012.110202
Cohen, D., Darzi, A., & Vlaev, I. (2013). Behavioural simulations in health care policy: Current uses and future developments. Journal of Health Services Research & Policy, 18(2), 98–106. doi:10.1177/1355819612473591
Coltart, C. E., Cheung, R., Ardolino, A., Bray, B., Rocos, B., Bailey, A., … Donaldson, L. (2012). Leadership development for early career doctors. Lancet, 379(9828), 1847–1849. doi:10.1016/S0140-6736(12)60271-2
Cooper, J. B., Singer, S. J., Hayes, J., Sales, M., Vogt, J. W., Raemer, D., & Meyer, G. S. (2011). Design and evaluation of simulation scenarios for a program introducing patient safety, teamwork, safety leadership, and simulation to healthcare leaders and managers. Simulation in Healthcare: Journal of the Society for Simulation in Healthcare, 6(4), 231–238. doi:10.1097/SIH.0b013e31821da9ec
Darzi, A. (2007). A framework for action. London, UK: NHS for London.
Darzi, A. (2008). High quality care for all: NHS next stage review final report. Norwich: TSO.
Edwards, N. (2013). Implementation of the Health and Social Care Act. BMJ, 346, f2090. doi:10.1136/bmj.f2090
Field, A. P. (2013). Chapter 17: Exploratory factor analysis. In Discovering statistics using IBM SPSS statistics: And sex and drugs and rock 'n' roll (4th ed., pp. 627–685). London: SAGE.
Frich, J. C., Brewster, A. L., Cherlin, E. J., & Bradley, E. H. (2015). Leadership development programs for physicians: A systematic review. Journal of General Internal Medicine, 30(5), 656–674.
General Medical Council. (2009). Tomorrow's doctors: Recommendations on undergraduate medical education. Manchester, UK: Author.
Gilbert, A., Hockey, P., Vaithianathan, R., Curzen, N., & Lees, P. (2012). Perceptions of junior doctors in the NHS about their training: Results of a regional questionnaire. BMJ Quality & Safety, 21(3), 234–238. doi:10.1136/bmjqs-2011-000611
Giordano, R. (2010). Leadership needs of medical directors and clinical directors. London: The King's Fund.
Griffiths, R. (1984). Griffiths NHS Management Inquiry Report: Together with the proceedings of the committee, the minutes of evidence and appendices: First report from the Social Services Committee: Session 1983–84. London: Her Majesty's Stationery Office.
Haller, G., Garnerin, P., Morales, M. A., Pfister, R., Berner, M., Irion, O., … Kern, C. (2008). Effect of crew resource management training in a multidisciplinary obstetrical setting. International Journal for Quality in Health Care, 20(4), 254–263. doi:10.1093/intqhc/mzn018
Ham, C., & Dickinson, H. (2008). Engaging doctors in leadership: What can we learn from international experience and research evidence? Coventry: NHS Institute for Innovation and Improvement.
Hansard: House of Commons. (2012). Health Committee thirteenth report: Public expenditure (HC 2010–2012 (1499)). London: The Stationery Office.
Harvey, S., Liddell, A., & McMahon, L. (2009). Windmill 2009: NHS response to the financial storm. London, UK: The King's Fund.
Holmes, S., Ahmed-Little, Y., Brown, B., Moonan, M., Collins, S., Liggett, H., & Simpson, K. (2013). All together now: North West leadership schools. British Journal of Healthcare Management, 19(1), 24–31.
Iacobucci, G. (2013). More than a third of GPs on commissioning groups have conflicts of interest, BMJ investigation shows. BMJ, 346, f1569. doi:10.1136/bmj.f1569
Issenberg, S. B., McGaghie, W. C., Petrusa, E. R., Lee Gordon, D., & Scalese, R. J. (2005). Features and uses of high-fidelity medical simulations that lead to effective learning: A BEME systematic review. Medical Teacher, 27(1), 10–28. doi:10.1080/01421590500046924
King, A., Holder, M. G., Jr., & Ahmed, R. A. (2013). Errors as allies: Error management training in health professions education. BMJ Quality & Safety, 22(6), 516–519. doi:10.1136/bmjqs-2012-000945
Kirkpatrick, D. L., & Kirkpatrick, J. D. (2006). Evaluating training programs: The four levels (3rd ed.). San Francisco, CA: Berrett-Koehler.
Lemer, C., & Moss, F. (2013). Patient safety and junior doctors: Are we missing the obvious? BMJ Quality & Safety, 22(1), 8–10. doi:10.1136/bmjqs-2012-001705
Liddell, A., & Parston, G. (1990, May 17). How the market crashed. Health Service Journal.
NHS Institute for Innovation and Improvement. (2008). Medical leadership competency framework: Enhancing engagement in medical leadership. Coventry: Author.
Royal College of Physicians of London. (2005). Doctors in society: Medical professionalism in a changing world: Technical supplement to a report of a working party of the Royal College of Physicians of London: December 2005. London: Author.
Salas, E., Paige, J. T., & Rosen, M. A. (2013). Creating new realities in healthcare: The status of simulation-based training as a patient safety improvement strategy. BMJ Quality & Safety, 22(6), 449–452. doi:10.1136/bmjqs-2013-002112
Schoemaker, P. J., Krupp, S., & Howland, S. (2013). Strategic leadership: The essential skills. Harvard Business Review, 91(1–2), 131–134, 147.
Schwarzer, R., & Jerusalem, M. (1995). Generalized self-efficacy scale. In J. Weinman, S. Wright, & M. Johnston (Eds.), Measures in health psychology: A user's portfolio. Causal and control beliefs (pp. 35–37). Windsor: NFER-NELSON.
Swanwick, T., & McKimm, J. (2011). What is clinical leadership … and why is it important? The Clinical Teacher, 8(1), 22–26. doi:10.1111/j.1743-498X.2010.00423.x
The King's Fund. (2012). Leadership and engagement for improvement in the NHS: Together we can: Report from the King's Fund leadership review 2012. London: Author.
The Rubber Windmill. (1990). East Anglian Regional Health Authority.
Walshe, K., & Ham, C. (2011). Can the government's proposals for NHS reform be made to work? BMJ, 342, d2038. doi:10.1136/bmj.d2038
Warren, O. J., & Carnall, R. (2011). Medical leadership: Why it's important, what is required, and how we develop it. Postgraduate Medical Journal, 87(1023), 27–32. doi:10.1136/pgmj.2009.093807