Innovation Reports

Beyond Continuing Medical Education: Clinical Coaching as a Tool for Ongoing Professional Development

Iyasere, Christiana A. MD, MBA; Baggett, Meridale MD; Romano, Jordan DO; Jena, Anupam MD, PhD; Mills, Gabrielle; Hunt, Daniel P. MD

doi: 10.1097/ACM.0000000000001131

Problem

The practice of medicine is predicated on the apprenticeship model, one in which a trainee, paired with a senior teacher, learns through observation, modeling, and doing. Refinements in diagnostic accuracy, therapeutic knowledge, and procedural skill emanate from direct interactions between the apprentice and expert clinician. For most physicians, the period of official apprenticeship begins during clinical clerkships in medical school and ends with the completion of residency or fellowship, yet the acquisition of expertise requires ongoing opportunities to practice a given skill and obtain structured feedback on one’s performance—components of what is referred to as “deliberate practice.”1

However, models that use clinical feedback to improve clinical expertise once official apprenticeship is complete are underused. In much the same way that professional athletes, pianists, and chess players rely on a coach’s feedback to hone their skills throughout their careers, direct and repeated observation of a physician by a senior colleague, followed by structured feedback, holds similar promise for the physician and his/her practice of medicine. This kind of coaching to nurture skill acquisition could be an important complement to system-level interventions designed to improve health care quality and to other ongoing educational opportunities.

The medical profession has long held that knowledge acquisition does not end with residency; however, existing models of continuing medical education (CME) vary in their approach and outcomes. Adult learning theory emphasizes the role of the self-directed adult learner, one who is able to set goals, brings life experience to his/her learning, and performs best in a collaborative environment with his/her teacher.2 Nevertheless, a large portion of CME is delivered through passive didactic lectures and online tutorials, which have been shown to have a variable impact on physician performance.3 In comparison, newer efforts geared toward interactive, learner-centered CME are better aligned with adult learning theory and have been found to effect change in professional practice and perhaps in health care outcomes.3

Models in which physicians receive structured feedback on clinical decision making in real time are emerging under the description of clinical coaching. In this paradigm, the explicit goal is to further individual performance, not remediation. For example, video-based coaching of surgeons on operative techniques and clinical decision making has proven valuable to surgeons at all experience levels in identifying alternative surgical approaches and episodes of failure to progress.4 Similarly, embedding a physician coach into structured internal medicine multidisciplinary rounds has been associated with reductions in length of stay and a trend toward overall cost reduction.5

Today, the potential of coaching to improve clinical skills is arguably greatest at the start of a physician’s career, following residency. Immediately after training, physician skill continues to improve; however, in the absence of ongoing feedback, performance declines with time, demonstrating that repeated experience on its own is insufficient for continued improvement.1 Furthermore, reductions in residency work hours and overall “work compression” may negatively impact resident physicians’ preparedness for unsupervised practice.6,7 Recent surveys of residents and residency directors suggest that both groups feel unease regarding work hours and residents’ preparation for senior clinical roles.8

Approach

A better understanding of the role clinical coaching can play in the professional development and CME of physicians is needed. As part of an effort to explore this model as a solution to needs at our institution (Massachusetts General Hospital [MGH], Boston, Massachusetts), in July 2013 we developed a formal clinical coaching pilot program in which a new role, the senior clinical advisor (SCA), was used to provide early-career (or junior) hospitalists with feedback on medical decision making, data interpretation, and clinical exam findings.

Program development

The SCA role grew out of concerns voiced by incoming hospitalists and the MGH Hospital Medicine Unit (HMU) as a whole during individual and group conferences. These concerns included (1) managing the transition from the supervised environment of residency to independent practice, (2) the increasing complexity of patient care, (3) reduced time available for reflection and reasoning by frontline hospitalists, and (4) a winnowing of the experience provided by residency training due to work hour limitations.

To develop a solution to address these concerns, we held an HMU-wide retreat in March 2013. Working in small groups charged with designing the SCA role (see Table 1), we debated its merits, developed specific guidelines for SCAs, and obtained faculty buy-in. Working group ideas were shared with the group at large, and written summaries of the discussions were distributed to all HMU members to encourage further discussion and program refinement.

Table 1: Working Groups and Assigned Questions for the SCA Role Development Retreat, SCA Pilot Program, Massachusetts General Hospital, March 2013

Following the retreat, we crafted the pilot SCA program with two goals in mind: (1) to minimize feelings of isolation among new hospitalists and (2) to provide opportunity for feedback on clinical decision making in real time to both enhance individual performance and create space for reflection and clinical reasoning. We decided to address additional goals (on evaluation and coaching tools, research/scholarship related to the program, and justifying the program to hospital executive administration) highlighted at the retreat in future iterations of the program.

At the program’s initiation in July 2013, the MGH Department of Medicine had 65 active hospitalists; 42 (65%) had completed residency within the previous five years. Each of these physicians cared for 10 to 13 patients per day and had more than 1,800 patient encounters over the course of a year. We recruited 12 SCAs from among MGH hospitalists who had five or more years of postresidency patient care experience (range: 5–32 years) and who were actively involved in the training of medical residents (both on the wards and through formal teaching roles). Although we did not provide any formal faculty development for the SCA role, all SCAs had prior experience in our peer observation program9 and had successfully served as mentors to residents and colleagues. SCAs were salaried faculty at our institution; as such, reimbursement for participating in this pilot program came under their existing clinical full-time equivalent, and no incremental funds were used to support the program.

Over the term of the pilot (i.e., 23 two-week blocks), the 12 SCAs served as clinical coaches for a total of 28 early-career hospitalists. Each day, the SCA met in person with the early-career hospitalists to discuss difficult clinical decisions, review patient-specific data, provide general advice relevant to patient care, and examine any patients for whom a physical exam could help clarify a diagnosis or change management. Most SCAs met with hospitalists after patient and multidisciplinary rounds; however, the timing varied considerably with patient workload and clinical demands. On the basis of adult learning theory and self-directed learning, we believed it was important that the early-career hospitalist determine the subject matter for discussion; as a result, the patients discussed with the SCA were selected by their frontline hospitalist.

Program evaluation

To understand the effects of the SCA pilot, from August 2013 to June 2014 we collected both clinical narratives and programmatic surveys. Each day, the SCA catalogued the encounters and discussions (interactions) that occurred as clinical narratives; these were collected in a shared Microsoft Excel spreadsheet (Microsoft, Redmond, Washington). Two members of the research staff (C.A.I., M.B.) independently reviewed all clinical narratives to identify common themes in the reported interactions. Through content analysis, the top five themes were selected; disagreements were resolved through consensus and input from additional authors.

We designed the programmatic surveys via consensus and piloted them within the HMU for clarity of wording and intent. Both the SCAs and early-career hospitalists were surveyed at baseline and after each clinical block via the REDCap tool (Research Electronic Data Capture, Vanderbilt University, Nashville, Tennessee) hosted at Partners HealthCare. The surveys contained 20 questions on topics related to structural components of the rotation, clinical decision making, and professional development.

Outcomes

Themes and outcomes from clinical narratives

On the basis of a review of the clinical narratives, we found that the interactions between the SCAs and early-career hospitalists fell into five major themes: (1) diagnostic accuracy, (2) confirmation of physical exam findings, (3) watchful waiting (delaying therapeutic change or new testing based on symptom evolution), (4) improved testing efficiency, and (5) avoidance of subspecialty consultation. In one example of collaborative physical examination, the management of presumed cellulitis was changed on the basis of the interaction between the SCA and the early-career hospitalist. After the patient failed to respond to intravenous vancomycin, the early-career hospitalist planned to consult dermatology and change antibacterials. A repeat physical exam alongside the SCA raised the possibility of a persistent cutaneous fungal infection, with the subsequent suggestion of antifungal therapy. The cutaneous lesions improved significantly with the antifungal therapy, without the need for a subspecialty consultation.

Table 2 provides additional examples from the clinical narratives of the impact of the program on patient outcomes. Along with specific treatment suggestions, many recommendations involved waiting before requesting a consult or changing treatment course—a clinical scenario in which the SCA’s accumulated experience allowed for a nuanced comparison of expected versus observed outcomes and the judgment to feel comfortable in using time as a diagnostic intervention.

Table 2: Examples of SCA and Early-Career Hospitalist Interactions From Clinical Narratives, SCA Pilot Program, Massachusetts General Hospital, August 2013–June 2014

Results of programmatic surveys

Ten of 12 SCAs and 25 of 28 early-career hospitalists responded to all surveys (response rates of 83% and 89%, respectively). Overall, early-career hospitalists viewed the program favorably. Twenty-three (92%) rated the SCA role as useful to very useful. Twenty (80%) reported that interactions with the SCA over a two-week period led to at least one change in their diagnostic approach, and 14 (56%) reported at least one change in a patient’s diagnosis over the two-week period. Thirteen (52%) reported calling fewer subspecialty consults as a result of guidance from the SCA. In response to questions related to professional development, 18 (72%) early-career hospitalists felt more comfortable as independent physicians following their interactions with the SCA, and 19 (76%) thought the interactions improved the quality of care they delivered to patients. The SCAs also benefited from the program: 9 (90%) were satisfied to very satisfied with their role, and 7 (70%) reported feeling more comfortable as clinical mentors after taking part in the program.

Next Steps

To our knowledge, our SCA pilot program is one of the first of its kind to explicitly emphasize the role of ongoing, formal clinical coaching of physicians in independent practice. The current clinical environment relies on patient outcomes or board-enforced maintenance of certification as feedback mechanisms. In contrast, our program capitalized on earlier, and perhaps more salient, opportunities for clinicians to receive immediate feedback on the accuracy of their diagnoses, motivating further growth in reasoning and error correction. Similar programs that offer feedback on clinical decision making at the request of practicing physicians and within their working environment could provide an alternative framework for CME and for maintenance of certification for medical licensure.

Despite the early success of this program, our study was limited in size and in the scope of data collection, and it was performed at a single academic institution. We acknowledge concerns about the possible costs of this program and its sustainability; a better understanding of this aspect of the SCA program will require an analysis of its full costs (e.g., time spent by early-career hospitalists and SCAs, any cost offset due to decreased testing, consultation, and/or length of stay) and of its possible impact on patient outcomes. Anecdotally, we found that the SCA program reduced the need for subspecialty consultation and, in many cases, the need for additional testing.

Given the relative newness of hospitalist medicine and the lack of routine fellowship training for hospitalists, it is possible that the benefits of the SCA role are limited to hospitalist medicine. However, preliminary results in surgery, which show that physicians at all levels of training derived benefits from clinical coaching,4 suggest that our work may have applications outside of hospitalist medicine. To better understand the impact and generalizability of clinical coaching, a larger, longitudinal study is required to look at patient and provider outcomes in detail.

As this was a pilot program, there was no formal faculty development centered on successful clinical coaching for either the SCAs or the early-career hospitalists. The SCA role requires further refinement to meet faculty needs, which could include faculty development dedicated to strengthening coaching skills.

Our SCA pilot program recapitulates the dynamics of coaching, in which “the coach provides the outside eyes and ears, and makes you aware of where you’re falling short … [and uses] a variety of approaches … the most common [being] just conversation.”10 At its core, the SCA role provides the opportunity for clinical discussion and reflection with an engaged, knowledgeable colleague, an opportunity that is needed for the continued development of physician expertise in clinical decision making and for the facilitation of deliberate practice. Committing the time and resources to further hone these skills in our newest independent practitioners will be instrumental in furthering the goal of safer, more efficient, and more effective patient care.

References

1. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004;79(10 suppl):S70–S81.
2. Knowles MS. The Modern Practice of Adult Education: Andragogy Versus Pedagogy. New York, NY: Association Press; 1970.
3. Cervero RM, Gaines JK. Effectiveness of Continuing Medical Education: Updated Synthesis of Systematic Reviews. Chicago, IL: Accreditation Council for Continuing Medical Education; July 2014. http://www.accme.org/sites/default/files/652_20141104_Effectiveness_of_Continuing_Medical_Education_Cervero_and_Gaines.pdf. Accessed December 16, 2015.
4. Hu YY, Peyre SE, Arriaga AF, et al. Postgame analysis: Using video-based coaching for continuous professional development. J Am Coll Surg. 2012;214:115–124.
5. Artenstein AW, Higgins TL, Seiler A, et al. Promoting high value inpatient care via a coaching model of structured, interdisciplinary team rounds. Br J Hosp Med (Lond). 2015;76:41–45.
6. Axelrod L, Shah DJ, Jena AB. The European Working Time Directive: An uncontrolled experiment in medical care and education. JAMA. 2013;309:447–448.
7. Mattar SG, Alseidi AA, Jones DB, et al. General surgery residency inadequately prepares trainees for fellowship: Results of a survey of fellowship program directors. Ann Surg. 2013;258:440–449.
8. Drolet BC, Christopher DA, Fischer SA. Residents’ response to duty-hour regulations—A follow-up national survey. N Engl J Med. 2012;366:e35.
9. Finn K, Chiappa V, Puig A, Hunt DP. How to become a better clinical teacher: A collaborative peer observation process. Med Teach. 2011;33:151–155.
10. Gawande A. Personal best: Top athletes and singers have coaches. Should you? New Yorker. October 3, 2011. http://www.newyorker.com/magazine/2011/10/03/personal-best. Accessed December 16, 2015.
Copyright © 2016 by the Association of American Medical Colleges