A Novel Pediatric Residency Coaching Program: Outcomes After One Year

Rassbach, Caroline E. MD; Blankenburg, Rebecca MD, MPH

doi: 10.1097/ACM.0000000000001825
Innovation Reports

Problem The Accreditation Council for Graduate Medical Education (ACGME) requires all residency programs to assess residents on specialty-specific milestones. Competence is best assessed through direct observation of performance in clinical settings, which is challenging to implement.

Approach The authors developed the Stanford Pediatric Residency Coaching Program to improve residents’ clinical skill development, reflective practice, feedback, and goal setting, and to improve learner assessment. All residents are assigned a dedicated faculty coach who coaches them throughout their training in various settings in an iterative process. Each coaching session consists of four parts: (1) direct observation, (2) facilitated reflection, (3) feedback from the coach, and (4) goal setting. Coaches document each session and participate in the Clinical Competency Committee. Initial program evaluation (2013–2014) focused on the program’s effect on feedback, reflection, and goal setting. Pre- and postintervention surveys of residents and faculty assessed the quantity and quality of feedback provided to residents and faculty members’ confidence in giving feedback.

Outcomes Review of documented coaching sessions showed that all 82 residents had 3 or more direct observations (range: 3–12). Residents and faculty assessed coaches as providing higher-quality feedback and incorporating more reflection and goal setting than noncoaches. Coaches, compared with noncoaches, demonstrated increased confidence in giving feedback on clinical reasoning, communication skills, and goal setting. Noncoach faculty reported giving equal or more feedback after the coaching program than before.

Next Steps Further evaluation is under way to explore how coaching residents can affect patient-level outcomes, and to better understand the benefits and challenges of coaching residents.

C.E. Rassbach is clinical associate professor, associate residency program director, and coaching director, Department of Pediatrics, Stanford School of Medicine, Palo Alto, California; ORCID: https://orcid.org/0000-0002-9221-1643.

R. Blankenburg is clinical associate professor, associate chair for education, and residency program director, Department of Pediatrics, Stanford School of Medicine, Palo Alto, California; ORCID: https://orcid.org/0000-0002-1938-6113.

Funding/Support: Rathmann Family Foundation Educators-4-CARE Fellowship in Medical Education at Stanford provided salary support for Dr. Rassbach.

Other disclosures: None reported.

Ethical approval: This study was determined to be exempt by Stanford University’s institutional review board.

Previous presentations: The data here were previously presented at the Pediatric Academic Societies National Meeting, Vancouver, British Columbia, Canada, May 2014, and at the Association of Pediatric Program Directors Annual Meeting, Orlando, Florida, March 2015.

Correspondence should be addressed to Caroline E. Rassbach, Department of Pediatrics, Stanford University, 300 Pasteur Dr., MC 5776, Stanford, CA 94305; telephone: (650) 725-8292; e-mail: crassbac@stanford.edu; Twitter: @StanfordChild.

Problem

Graduate medical education has traditionally relied on time-based training of a predetermined duration and on trainees’ acceptable scores on knowledge examinations as proxies for clinical competence. Recently, pedagogy has shifted dramatically toward competency-based medical education,1,2 which focuses on educational outcomes and better assessments of competence. The Accreditation Council for Graduate Medical Education (ACGME) now requires programs’ Clinical Competency Committees (CCCs) to evaluate residents on specialty-specific milestones.

For program directors and clinical faculty members to optimally assess residents’ readiness for independent practice, they need firsthand data on resident performance within the context of patient encounters. Direct observation (DO) is the ideal method for such assessment,1,2 and DO can foster residents’ growth by facilitating formative feedback, reinforcing positive behaviors, and highlighting areas needing improvement. Leaders in medical education and milestone development thus recommend the use of DO to foster medical expertise.

Despite these benefits, DO is challenging to implement because of a lack of faculty time and skill, potential stress for the learner, its perceived lack of validity, and duty hours restrictions.3 Several factors facilitate the success of DO, including faculty development and compensation. In pediatrics, additional challenges to assessing residents include faculty members’ shorter lengths of time on service and residents repeating core rotations less frequently because of the Pediatric Residency Review Committee’s implementation of a six-month individualized curriculum.

In a March 2013 needs assessment of the Stanford Pediatric Residency Program, residents reported that they received insufficient feedback and that written evaluations did not always align with the feedback they did receive. We recognized the opportunity to fundamentally change our approach to feedback by introducing coaching as a means to increase DO, facilitate milestone assessment, and increase reflective practice4 and continuous learning.

Coaching is widely used in other disciplines, including business, sports, and the performing arts. In medicine, Dr. Atul Gawande5 introduced the idea of coaching physicians to improve clinical skills. Coaching has also been used to improve physician communication skills, mentorship,6 reflection, goal setting, and self-directed learning. Accordingly, we developed a coaching program for pediatric residents with the purpose of (1) improving residents’ clinical skill development, (2) increasing residents’ reflective practice, (3) improving feedback, (4) facilitating goal setting, and (5) improving milestone assessment.

Approach

Program development

The Stanford Pediatric Residency Coaching Program, which is based on the conceptual frameworks of reflective practice,4 self-determination theory,7 and lifelong learning and goal setting,8 began in 2013. We developed a new conceptual model for coaching in medicine; that is, the coaching itself constitutes a continuous improvement cycle in which both the coach and resident are active participants (Figure 1).

Figure 1

All residents are assigned a dedicated faculty coach who coaches or mentors them in different clinical settings throughout their years of training. Each coaching session consists of four parts: (1) DO focusing on the resident’s self-identified goals, (2) facilitated reflection, (3) feedback from the coach, and (4) further goal setting. The goal is for residents, after each coaching session, to internalize the discussion and apply what they have learned to subsequent clinical encounters. Coaches perform subsequent DOs of the resident focusing on previously identified goals, resulting in an iterative process of coaching and skill development.

When the program began in 2013, eight faculty coaches were recruited based on their teaching evaluations, applications to the program, and interviews with residency leaders. The program has subsequently expanded to 10 coaches, a coaching director (C.E.R.), and an associate director. Each coach is assigned 3 to 4 pediatric residents from each training level, for a total of 10 to 11 residents. Each coach receives 10% salary support for their coaching efforts (the Department of Pediatrics funds the coaching program). Following a four-month pilot with the intern class, the Stanford Pediatric Residency Coaching Program was implemented across the full program of 82 pediatric residents in July 2013.

Coaches use a combination of tools developed by the coaching director (C.E.R.) and residency director (R.B.). Many of these tools are based on previously published tools9 and were refined through an iterative process that included review by educational experts and revision for content validity. These tools currently include structured clinical observations for the following actions: (1) gathering a patient history and performing a physical examination, (2) presenting on rounds, (3) participating on rounds, (4) precepting rounds (for upper-level residents), and (5) documenting a patient interaction. A sixth, informal coaching session form is available for observations not otherwise categorized. In addition, coaches use three tools published by the I-PASS Study Group: the handoff giver, handoff receiver, and printed handoff tools.10

Coaches initially meet with residents individually to review the Stanford Pediatric Residency Coaching Program goals and to identify each resident’s learning goals. Coaches then perform sequential DOs using the nine tools described above. As mentioned, at each coaching session, the coaches guide the residents through reflection, provide feedback, and help set learning goals. Coaches complete 10 to 12 DOs per year for interns, 7 to 8 for postgraduate year (PGY) 2 residents, and 5 to 6 for PGY3 residents. Coaches provide feedback in person during the coaching session, immediately (or as soon as possible) after the DO. Coaching sessions, which the coach and the resident arrange according to their schedules, occur in the clinic, emergency department, and acute and intensive care settings. Coaches observe interns at least three times presenting a patient on rounds; two times performing a history, physical examination, and presentation in clinic; two times performing I-PASS handoffs; and two times reviewing documentation (at least once in the hospital setting and once in clinic). Additionally, coaches meet with interns to review their individualized learning plan goals. Coaches work with PGY2 and PGY3 residents on many of the same clinical tasks as interns, as well as on precepting, leading a team, and teaching. Coaching sessions usually last 30 to 90 minutes. Coaches document each coaching session in the residency program’s electronic evaluation system.

The coaching director (C.E.R.) and associate director, along with other experts in medical education, hold two-hour coaching faculty development sessions monthly. The sessions, which are designed to build coaches’ skills, cover topics such as the use of DO tools, feedback, reflection, and goal setting. Additionally, coaches use the sessions to brainstorm approaches to challenging situations. A Web site for coaches and residents houses program policies, DO tools, and faculty development materials.

In addition to participating in faculty development, coaches are members of the CCC. They act as liaisons, both linking data gathered through DO to milestone assessment and bringing CCC recommendations back to the residents to aid ongoing skill development. For example, coaches work with residents who are struggling in a particular domain, such as clinical reasoning or communication, observing them and offering feedback during coaching sessions to help them hone those skills.

Program evaluation

We sought to determine whether this longitudinal coaching program could improve the quantity and quality of the feedback that residents receive from coaches as well as from noncoach faculty members. We also hoped to learn whether this program affects faculty members’ confidence in giving feedback. As such, we assessed the effectiveness of the coaching program through the following measures: (1) the number of DOs completed per resident between 2013 and 2014, (2) coaches’ and noncoach faculty members’ self-assessment of the amount and quality of feedback they provided, and (3) residents’ perspectives on the amount and quality of feedback they received. Stanford University’s institutional review board deemed the evaluation to be exempt research.

We surveyed faculty members using anonymous preintervention (2013) and retrospective pre–post intervention (2014) surveys to assess their perceptions of the quantity and quality of feedback they provided to residents and their confidence in giving feedback. We distributed the questionnaires through SurveyMonkey (San Mateo, California) to the original 8 coaches and to 77 noncoach core faculty members, including associate program directors, advisors, rotation directors, scholarly concentration leaders, and division chiefs. We distributed a parallel retrospective anonymous pre–post survey in 2014 to the 82 pediatric residents in the program to assess their perceptions of the quantity and quality of feedback they received from both coaches and noncoach faculty members.

To assess the quality of feedback, we asked faculty how well trained they felt in providing feedback and how often they asked residents to reflect on their performance and set learning goals. Similarly, we asked residents to rate how well trained their coach, compared with other faculty, was in giving feedback, and how often the coach asked them to reflect on their performance and set learning goals.

Outcomes

All 8 coaches (100%) and 41 of the 77 noncoach faculty members (53%) responded to the preintervention assessment in March 2013. All 8 coaches (100%), 42 of the 77 noncoach core faculty members (55%), and 62 of the 82 residents (76%) responded to the retrospective pre–post intervention assessment in March 2014.

Quantity of feedback

We quantified the number of DOs documented in the electronic evaluation system during the 12-month intervention and found that all 82 residents (100%) participated in 3 or more coaching sessions (range: 3–12). The coaches facilitated a total of 659 coaching sessions, an average of 82 per coach (range: 57–108).

In addition to examining the feedback provided by coaches, we gathered data from noncoach faculty members, who reported giving as much or more feedback after the coaching program was implemented as before. This outcome, which aligns with a larger goal of increasing feedback across the residency program, has reassured us that other faculty still provided feedback even in the presence of the Stanford Pediatric Residency Coaching Program.

Quality of feedback

Of the 53 residents who answered the relevant questions, 51 (96%) agreed or strongly agreed that their coach was well trained in providing feedback, compared with 37 (70%) who agreed/strongly agreed that other (noncoach) faculty were well trained in providing feedback (P < .01, Fisher exact test). All 8 coaches (100%) agreed/strongly agreed that they were well trained in providing feedback, compared with 22 of the 34 noncoach faculty members (65%) who answered the relevant question (P = .08 [Figure 2]).

Figure 2

Of the 55 residents who answered the relevant questions, 54 (98%) reported that their coach usually/always asked them to reflect on their performance, while only 22 (40%) reported that their noncoach faculty usually/always asked them to reflect (P < .01). Likewise, 100% of coaches (n = 8/8) versus 51% of noncoach faculty (n = 18/35) reported that they usually/always asked residents to reflect (P = .01 [Figure 2]).

Similarly, 44 residents (of 55; 80%) reported that their coach usually/always asked them to set learning goals, while only 16 (29%) reported that their noncoach faculty usually/always asked them to set learning goals (P < .01). Meanwhile, 100% of coaches versus 43% of noncoach faculty (n = 15/35) reported that they usually/always asked residents to set goals (P < .01 [Figure 2]).
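Comparisons of proportions like those above can be run with a standard Fisher exact test. As a minimal illustrative sketch (assuming Python with scipy installed; this is not our original analysis code), the example below tests the first resident comparison, 51 of 53 residents agreeing for coaches versus 37 of 53 for noncoaches:

```python
# Minimal illustrative sketch (not the original analysis scripts):
# Fisher exact test on the 2 x 2 table implied by the resident counts
# reported above. Requires Python with scipy installed.
from scipy.stats import fisher_exact

# Rows: coaches vs. noncoach faculty.
# Columns: residents who agreed/strongly agreed the faculty member was
# well trained in providing feedback vs. those who did not.
table = [[51, 53 - 51],   # coaches: 51 of 53 residents agreed
         [37, 53 - 37]]   # noncoaches: 37 of 53 residents agreed

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, P = {p_value:.4f}")  # P < .01
```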

Thus, residents and faculty assessed coaches, compared with noncoaches, as more skilled at providing feedback and as more likely to incorporate reflection and goal setting. The correlation between resident ratings of faculty and faculty self-assessment was statistically significant in regard to faculty skill in giving feedback and faculty use of reflection and goal setting (Spearman’s rho = 0.94, P < .01).
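For illustration, this correlation can be reconstructed from the six paired percentages reported in this section. The sketch below (again assuming Python with scipy; not our original analysis code) pairs the resident ratings with the corresponding faculty self-assessments:

```python
# Minimal illustrative sketch (not the original analysis scripts):
# Spearman correlation between resident ratings of faculty and faculty
# self-assessments across the six items reported above.
from scipy.stats import spearmanr

# Items: well trained in feedback, asks for reflection, asks for goal
# setting -- each reported for coaches and for noncoach faculty.
resident_pct = [96, 70, 98, 40, 80, 29]    # residents rating faculty
faculty_pct = [100, 65, 100, 51, 100, 43]  # faculty self-assessment

rho, p_value = spearmanr(resident_pct, faculty_pct)
print(f"Spearman's rho = {rho:.2f}, P = {p_value:.3f}")  # rho = 0.94
```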

Faculty confidence in giving feedback

We compared faculty members’ confidence in giving feedback before and after the intervention using unpaired t tests. Coaches demonstrated increased confidence in giving feedback on clinical reasoning, communication skills, and goal setting (P < .01); these were all topics covered in faculty development (see Figure 3). Noncoach faculty did not have improved confidence in these areas. These findings suggest that faculty development in targeted areas can improve not only faculty members’ confidence but also the quality of the feedback they provide.

Figure 3
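As a method illustration only, the sketch below shows how such an unpaired t test could be computed in Python with scipy; the ratings are hypothetical 5-point Likert values, not our survey data:

```python
# Minimal illustrative sketch (hypothetical data, not the original
# survey responses): unpaired t test comparing pre- and post-
# intervention confidence ratings on a 5-point scale.
from scipy.stats import ttest_ind

pre_ratings = [3, 3, 4, 2, 3, 4, 3, 3]    # hypothetical preintervention
post_ratings = [4, 5, 5, 4, 4, 5, 4, 5]   # hypothetical postintervention

t_stat, p_value = ttest_ind(post_ratings, pre_ratings)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
```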

Next Steps

The Stanford Pediatric Residency Coaching Program provides a structured, cyclical framework of DO and coaching in multiple clinical settings. The program facilitates a longitudinal relationship between a resident and a dedicated faculty coach who fosters resident self-reflection, goal setting, and continuous performance improvement. Further, the program positions the participating coaches to contribute to the CCC while also helping residents learn from the CCC through targeted skill development. According to our program evaluation results, the coaching program has successfully achieved the goal number of DOs and coaching sessions per resident per year, and, according to residents and coaches alike, has improved the quality of feedback in our residency program.

Thus far, the coaching program has been implemented only at the Stanford Pediatric Residency Program. This program can serve as a model for other residency programs striving to improve feedback, clinical skill development, reflective practice, goal setting, and milestone assessment. Examining whether our results are generalizable will be possible as other residency programs and institutions adopt similar coaching programs. At our institution, the Department of Pediatrics funds the coaching program. Other ways to support such a program include employing volunteer faculty, professors emeriti, and faculty who coach only in their area of practice (rather than longitudinally).

Ongoing studies of the Stanford Pediatric Residency Coaching Program are examining higher-level outcomes, including patient outcomes. Currently, we are studying the role that coaches play in helping residents process patient feedback and improve their communication skills. A qualitative study is also under way to understand residents’ and coaches’ perspectives on how coaching affects skill development and patient outcomes. This qualitative study may also help us understand variability in coaching styles, which may affect residents’ and coaches’ perspectives. The data included in this Innovation Report are self-reported and may reflect social desirability and self-selection bias, so additional areas of study include gathering more objective measures of the performance of coaches, compared with all noncoaches (not just those who respond to a survey), in giving feedback.

Acknowledgments: The authors wish to thank the Rathmann Family Foundation Fellowship in Medical Education for its support of principal investigator Caroline E. Rassbach, MD. They would also like to thank Hugh O’Brodovich, MD, and Mary Leonard, MD, MS, for their generous support of the Stanford Pediatric Residency Coaching Program, as well as the faculty coaches at Stanford who make this program so successful: David Axelrod, MD, Caroline Buckway, MD, Jennifer Carlson, MD, Hayley Gans, MD, Lucy Lee, MD, Carrie Loutit, MD, Catherine Miller, MD, Loren Sacks, MD, Debbie Sakai, MD, Hayden Schwenk, MD, Nivedita Srinivas, MD, and Ann-Ming Yeh, MD. The authors would also like to acknowledge the Stanford Pediatric Residency Program residents. Finally, they would like to thank Alyssa Bogetz, MSW, for her thoughtful review, and Rita Popat, PhD, for her statistical expertise.

References

1. Holmboe ES. Realizing the promise of competency-based medical education. Acad Med. 2015;90:411–413.
2. Iobst WF, Sherbino J, Cate OT, et al. Competency-based medical education in postgraduate medical education. Med Teach. 2010;32:651–656.
3. Fromme HB, Karani R, Downing SM. Direct observation in medical education: A review of the literature and evidence for validity. Mt Sinai J Med. 2009;76:365–371.
4. Schön DA. The Reflective Practitioner: How Professionals Think in Action. New York, NY: Basic Books; 1983.
5. Gawande AA. Personal best. New Yorker. October 3, 2011:44–53. http://www.newyorker.com/magazine/2011/10/03/personal-best. Accessed April 27, 2017.
6. Iyasere CA, Baggett M, Romano J, Jena A, Mills G, Hunt DP. Beyond continuing medical education: Clinical coaching as a tool for ongoing professional development. Acad Med. 2016;91:1647–1650.
7. Ten Cate TJ, Kusurkar RA, Williams GC. How self-determination theory can assist our understanding of the teaching and learning processes in medical education. AMEE guide no. 59. Med Teach. 2011;33:961–973.
8. Li ST, Paterniti DA, Co JP, West DC. Successful self-directed lifelong learning in medicine: A conceptual model derived from qualitative analysis of a national survey of pediatric residents. Acad Med. 2010;85:1229–1236.
9. Lane JL, Gottlieb RP. Structured clinical observations: A method to teach clinical skills with limited time and financial resources. Pediatrics. 2000;105(4 pt 2):973–977.
10. Starmer AJ, Landrigan C, Srivastava R, et al. I-PASS handoff curriculum: Faculty observation tools. MedEdPORTAL. October 3, 2013. https://www.mededportal.org/publication/9570. Accessed April 27, 2017.
Copyright © 2017 by the Association of American Medical Colleges