Research Reports

Key Factors in Clinical Competency Committee Members’ Decisions Regarding Residents’ Readiness to Serve as Supervisors: A National Study

Schumacher, Daniel J. MD, MEd; Martini, Abigail; Bartlett, Kathleen W. MD; King, Beth MPP; Calaman, Sharon MD; Garfunkel, Lynn C. MD; Elliott, Sean P. MD; Frohna, John G. MD, MPH; Schwartz, Alan PhD; Michelson, Catherine D. MD, MMSc; and the Association of Pediatric Program Directors Longitudinal Educational Assessment Research Network Clinical Competency Committee Study Group

doi: 10.1097/ACM.0000000000002469

Abstract

Both clinical competency committees (CCCs) and the entrustment framework for performance assessment are recent innovations in medical education. As such, little is known about how CCCs make entrustment decisions.

Although the entrustment framework increasingly is being used in competency-based medical education,1–5 most research in this area has focused on how frontline assessors determine when a learner can be entrusted fully, partially, or not at all.6–10 However, contemporary assessment practices are placing greater responsibility for making summative assessment decisions on CCCs and “entrustment committees,” which are removed from the clinical learning environment.11–13 These committees are tasked with reviewing assessment data and making comprehensive decisions, which often include determining which activities learners are allowed to perform without supervision. Much of the research on CCCs has focused on their structure and process,3,14–16 as well as on potential best practices for them to consider,17–26 with little exploration of their actual decision making.27 With many CCCs now in place or in development, ongoing entrustment research must focus on how such groups arrive at their assessment decisions.

To further understand this area, we sought to identify the key factors that pediatric residency program CCC members consider when recommending residents to one of five progressive supervisory roles.

Method

Study setting

Fourteen pediatric residency programs (see Table 1), representing a range of sizes and geographical locations, all of which are members of the Association of Pediatric Program Directors Longitudinal Educational Assessment Research Network, participated in this study during the 2015–2016 academic year.

Table 1: Pediatric Residency Programs That Participated in a Study of Clinical Competency Committee (CCC) Members’ Recommendations Regarding Residents’ Readiness to Serve as Supervisors, 2015–2016

Data collection

All CCC members at the participating sites were eligible to take part in this study, and site leads were asked to recruit participants from this convenience sample via e-mail. Assessment data for all categorical pediatric residents at the participating sites were considered eligible for inclusion. CCC members and residents were assigned study identification numbers prior to data collection, and all data were submitted using only those study identification numbers.

As part of their biannual (midpoint and end of academic year) review and milestone assignment process during the July 2015 to June 2016 academic year, participating CCC members were asked to make an overall recommendation for the residents they reviewed, placing them in one of five supervisory roles (Levels 1–5, see Appendix 1). Levels 2 and 4 were “borderline” supervisory roles, as their descriptions indicate. Participants submitted their recommendations via an online survey (see Supplemental Digital Appendix 1, available at https://links.lww.com/ACADMED/A602), which was separate from any process or form they used for their local CCC review. They were then asked to respond to a free-text prompt to describe the key factors that led them to recommend that supervisory role (see Appendix 1). Participants completed one form for each resident recommendation they made. Prior to the survey administration, all questions were reviewed and edited by a group of 12 residency and medical education research leaders.

During each of the two review periods, reminder e-mails were sent from the Association of Pediatric Program Directors Longitudinal Educational Assessment Research Network to participants encouraging them to complete the survey.

Data analysis

We inductively analyzed the free-text responses using thematic analysis.28 Dedoose (version 7.5.15, SocioCultural Research Consultants, Manhattan Beach, California) was used to facilitate the organization and coding of the data.

First, three authors (D.J.S., A.M., C.D.M.) independently read and reread all responses to become familiar with the data. Next, they independently coded the data by supervisory role, developing a unique codebook for each role. The five supervisory roles were coded separately to allow analysis of trends in key factors across roles, and to facilitate this goal the coding team named codes consistently across levels whenever possible. Following the independent coding, the primary coders iteratively reconciled disagreements and agreed on a final codebook for each supervisory role.

Then two secondary coders (K.W.B., B.K.) independently coded the data from the first 100 survey responses in the “may serve as a supervisory resident in ALL settings” role (Level 5) and half of the responses in the other four roles, using the codebooks developed by the primary coders. Their goal was to determine areas of agreement or disagreement with the primary coding team. Areas of agreement were treated as evidence of credibility in the data analysis. Areas of disagreement were discussed among the entire coding team (primary and secondary coders) to resolve differences and agree on a final consensus coding of the data. The secondary coders found a few instances where a code present for one supervisory role could replace a separate code from another role, thus reducing the total number of codes. They also suggested combining similar codes within levels in a few instances, simplifying the codebooks further.

Before proceeding with a higher-level analysis, similar codes from more than one supervisory role were grouped together, with a notation about all the applicable roles. The five coders then worked iteratively to group codes into categories and coalesce categories into themes. Categories were developed by grouping codes that were similar. Themes were developed by grouping categories that represented patterns in the key factors participants used to select a supervisory role.28
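To make this grouping step concrete, the following minimal sketch (in Python, purely illustrative; the code, category, and theme names are invented and are not drawn from our actual codebooks or the Dedoose workflow) shows how codes annotated with their applicable supervisory roles could be rolled up into categories and then themes:

# Hypothetical codes, each noting the supervisory role levels (1-5) in which it appeared.
codes = {
    "asks for help when uncertain": {1, 4, 5},
    "reliable task completion": {1, 5},
    "runs the team effectively": {5},
    "handles high volume and acuity": {1, 5},
}
# Codes grouped into categories, and categories grouped into themes,
# mirroring the iterative grouping described above (names are illustrative only).
categories = {
    "help-seeking and limits": ["asks for help when uncertain"],
    "dependability": ["reliable task completion"],
    "team leadership": ["runs the team effectively"],
    "workload management": ["handles high volume and acuity"],
}
themes = {
    "Trustworthiness": ["help-seeking and limits", "dependability"],
    "Ability to lead a team": ["team leadership"],
    "Contextual considerations": ["workload management"],
}
# For each theme, list the supervisory role levels its underlying codes touched.
for theme, cats in themes.items():
    levels = set()
    for cat in cats:
        for code in categories[cat]:
            levels |= codes[code]
    print(theme, "->", "Levels", sorted(levels))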

To ensure credibility, our data analysis allowed for triangulation across the independent coding of multiple authors. We sought to promote transferability, which is "the extent to which … findings can be transferred or applied in different settings,"29 by obtaining data from multiple CCC members at several residency programs, with the goal of achieving a representative case sampling. We also sought to promote confirmability through the structure of the coding team. Two members of this team (A.M., B.K.) were not site investigators or physicians and had no previous experience serving on a CCC. These coders helped to ensure that the other members of the coding team, all physicians with residency leadership experience, did not impose undue personal bias during coding; their colleagues’ knowledge of CCC processes was a potential strength for the trustworthiness of our findings but also a potential concern for confirmability if reflexivity was not addressed.

The institutional review board (IRB) at Cincinnati Children’s Hospital Medical Center (lead site) granted exempt status to this study. The IRB at each participating program also reviewed and approved this study. Site leads were provided study information/sample recruitment forms to distribute to residents and CCC members. Individual IRBs at each site determined if a waiver of documentation of informed consent was appropriate, if documented consent was preferred, or if documented consent was not necessary.

Results

Across the 14 participating programs, 84 of the 155 CCC members completed 769 resident forms (postgraduate year [PGY] 1: 293; PGY2: 315; PGY3: 147; not specified: 14) over two CCC review cycles. The majority of forms categorized residents as being able to supervise in all settings (Level 5) (n = 511; PGY1: 129; PGY2: 245; PGY3: 135; not specified: 2), with the remaining forms having the following distribution among the supervisory roles: all settings but borderline (Level 4) (n = 56; PGY1: 14; PGY2: 34; PGY3: 8), some settings (Level 3) (n = 47; PGY1: 39; PGY2: 7; PGY3: 1), some settings but borderline (Level 2) (n = 80; PGY1: 52; PGY2: 18; PGY3: 3; not specified: 7), not able to serve as a supervisor (Level 1) (n = 67; PGY1: 55; PGY2: 7; PGY3: 0; not specified: 5), and unable to assign a role (n = 8; PGY1: 4; PGY2: 4; PGY3: 0).
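As a simple arithmetic check on this distribution, the sketch below (an illustrative Python recomputation only; all counts are copied from the paragraph above) tallies the forms by supervisory role and training year and confirms that they sum to the 769 completed forms:

# Form counts by supervisory role and training year, as reported above
# ("NS" = training year not specified).
forms = {
    "Level 5 (all settings)": {"PGY1": 129, "PGY2": 245, "PGY3": 135, "NS": 2},
    "Level 4 (all settings, borderline)": {"PGY1": 14, "PGY2": 34, "PGY3": 8, "NS": 0},
    "Level 3 (some settings)": {"PGY1": 39, "PGY2": 7, "PGY3": 1, "NS": 0},
    "Level 2 (some settings, borderline)": {"PGY1": 52, "PGY2": 18, "PGY3": 3, "NS": 7},
    "Level 1 (may not supervise)": {"PGY1": 55, "PGY2": 7, "PGY3": 0, "NS": 5},
    "Unable to assign a role": {"PGY1": 4, "PGY2": 4, "PGY3": 0, "NS": 0},
}
role_totals = {role: sum(by_year.values()) for role, by_year in forms.items()}
year_totals = {year: sum(by_year[year] for by_year in forms.values()) for year in ["PGY1", "PGY2", "PGY3", "NS"]}
print(role_totals)                # Level 5: 511, Level 4: 56, Level 3: 47, Level 2: 80, Level 1: 67, unable: 8
print(year_totals)                # PGY1: 293, PGY2: 315, PGY3: 147, NS: 14
print(sum(role_totals.values()))  # 769 completed forms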

Four themes emerged from our analysis regarding how participants categorized residents into supervisory roles: (1) Determining supervisory ability follows from demonstrated trustworthiness; (2) demonstrated performance matters, but so does experience; (3) ability to lead a team is considered; and (4) contextual considerations external to the resident are at play. These themes and representative quotations (identified by supervisory role recommended and participant ID number) are described in detail below.

Overall, we found no appreciable differences in the key factors used to recommend residents to a borderline (Levels 2 and 4) versus nonborderline role. However, participants offered several key factors that were unique to the “may serve as a supervisory resident in ALL settings” (Level 5) and “may not serve as a supervisory resident” (Level 1) roles (i.e., the entrustment extremes), which we highlight and expand on below.

Theme 1: Determining supervisory ability follows from demonstrated trustworthiness

Participants identified trustworthiness as a key factor influencing their supervisory role recommendations. Sometimes they explicitly referenced “trust” (Level 2 [59]) or made a more global reference to how much a resident was trusted, such as “faculty felt comfortable with her in charge of patients when they were at home” (Level 5 [13]) and “could practice independently … even without six months more training” (Level 5 [84]).

In other instances, participants described characteristics that were germane to determining trustworthiness. These characteristics included the ability, or lack thereof, to provide safe care and to do what was right. They made explicit references to safe care only in regard to the two extreme supervisory roles: “error frequency” (Level 1 [34]) and “prioritiz[es] responsibilities to provide patient care that is safe, effective and efficient” (Level 5 [56]).

Participants also mentioned clinical characteristics that were relevant to trustworthiness, such as dependability (“unable to complete basic tasks reliably,” Level 1 [84]), clinical judgment (“good knowledge base, which she is able to apply in commonsense fashion,” Level 5 [51]), and confidence in decision making (“not paralyzed with indecision,” Level 5 [64]; and “decisive personality,” Level 5 [51]).

Finally, participants described components of trustworthiness that focused on residents’ willingness to stop in the face of uncertainty, seek help when needed, and demonstrate commitment to improvement. For example, one participant noted that a resident was “good at asking questions of things she doesn’t know, rather than just winging it” (Level 4 [32]). The same participant even identified knowing one’s limitations and exhibiting appropriate help-seeking behaviors as justification for overlooking more global weaknesses: “the most important aspect is that while she is one of our weaker residents, she very much knows her limits and when/how to ask for help” (Level 4 [32]). Although these characteristics were described for all supervisory roles, specific concerns about residents being reticent to ask questions or proceeding without understanding were called out in Level 1 recommendations. For example, one participant identified a resident who “appears to carry out the plan but not understand why” (Level 1 [37]).

Theme 2: Demonstrated performance matters, but so does experience

Participants felt that both clinical performance and clinical experience across settings, such as critical care and general pediatrics, were key factors for determining the level of supervisory responsibility granted to residents.

When considering residents’ demonstrated performance, participants referenced specific areas of clinical performance, including medical knowledge, ability to discern nuances and subtleties in clinical care, communication skills, organization and prioritization, and systems-based practice. Except for systems-based practice, participants discussed the other areas as key factors for recommending residents to all five supervisory roles. For example, they said the following: “has not yet mastered skills of an intern (organization, communication, prioritization of care)” (Level 1 [27]); “able to understand nuances of patient diagnoses” (Level 5 [5]); and “excellent medical knowledge with above-average ability to synthesize, analyze data and develop management plans incorporating recent literature” (Level 5 [73]).

Participants identified systems-based practice as a key factor for all supervisory roles except “may serve as a supervisory resident in SOME settings but is just above the borderline” (Level 2). For example, they said the following: “needs to become more familiar with the system before he would be capable to supervise others” (Level 1 [57]) and “intimate knowledge of hospital system” (Level 5 [24]).

When recommending residents to the highest-level supervisory role, participants also referenced the extremes of performance demands, such as being “very calm under pressure” (Level 5 [32]) and managing complexity well. Participants also pointed to residents’ care coordination and to their “accept[ing] … ambiguity at an appropriate level” (Level 5 [64]).

Participants noted that professionalism in general (“very strong professionalism allowing for full follow-through on all aspects of residency,” Level 5 [17]; and “professionalism with strong understanding of role of physician,” Level 5 [44]) as well as a few specific areas of professionalism served as key factors to recommending residents to all but the lowest-level supervisory role. The specific areas they identified included patient commitment and advocacy (“heavily invested in care, learners, and teams,” Level 3 [28]) and professional identity (“sense of duty and commitment to her patients,” Level 5 [39]).

A final area of performance that participants discussed was the degree to which residents demonstrated being self-directed learners:

Demonstrated excellent self-directed, internally motivated learning, utilizing self-assessment skills and external feedback to continue to improve her performance and strengthen areas of practice. (Level 5 [73])

Participants also noted that several factors that cut across specific areas of performance affected their supervisory role recommendations. These factors included developmental progression, comparison against peers and expectations, undergoing remediation, resident reputation, and personal experience working with the resident. For example, they said the following:

Intern is currently undergoing remediation, and we would not allow someone to supervise while that is happening. (Level 1 [84])

Lower than class average milestone scores in most areas. (Level 1 [27])

A number of people have expressed concern about her performance, and we have discussed her in the CCC many times thus far this year. Even some residents have expressed concern about her, which was a major concern to me. (Level 1 [21])

Improvement seen over time. (Level 2 [22])

Adequate growth in milestones. (Level 4 [82])

I have worked with the resident quite a bit over the course of her training, which helped me to feel comfortable with my decision. (Level 5 [21])

Although demonstrated performance was clearly important in participants’ decision making, experience was as well. Experience was a key factor across recommendations to all supervisory roles. For example, one participant noted the importance of having “exposure to various rotations” (Level 5 [11]), while another noted that “… even with all of that [positive aspects of performance], I would only propose that she supervise students in settings to which she has already been exposed” (Level 2 [28]).

Theme 3: Ability to lead a team is considered

Participants noted that various characteristics related to leading a team were key factors driving their supervisory role categorizations. These characteristics were most relevant to the highest-level role but were discussed to some degree for all roles. For example, participants discussed simply possessing or lacking team leadership skills: “intern is still developing their leadership skills … a solid performer who is where she is supposed to be for a midyear intern, but not ready to lead a team” (Level 1 [22]), and “multiple faculty members commented on [the resident’s] ability to run a service” (Level 5 [13]).

Participants also identified characteristics related to being a strong leader, such as “maturity” (Level 2 [79]) and possessing a “calm … demeanor” (Level 5 [78]) as well as being an effective teacher and “provid[ing] good instruction to med students/interns” (Level 5 [78]).

Finally, participants took note of residents’ demonstrated abilities when they were in the role of supervisor previously:

This resident has had multiple opportunities to supervise both on call and in settings in which other learners require supervision … this has been consistently done. In addition, peer evaluations support that peers value this individual as a supervising resident. (Level 5 [75])

Diligent in patient care/supervision. (Level 5 [78])

Theme 4: Contextual considerations external to the resident are at play

Factors related to the clinical learning environment and structure of the residency program served as notable considerations for some participants.

Patient volume and acuity were used to both advance and hold back residents from certain supervisory roles. These factors were most relevant to the extreme levels, as both were discussed only when placing residents in the lowest two supervisory role levels or in the highest level. Referring to Level 1, one participant noted that a resident was “not yet at the level to supervise in high-acuity/high-volume settings” (Level 1 [64]). Referring to Level 5, one participant noted that a resident was “able to manage high-volume, high-acuity situations” (Level 5 [67]), while another noted that a resident “thrived in a high-volume and high-acuity environment like the [neonatal intensive care unit] … [and] on a challenging inpatient subspecialty service with difficult patients and multiple attendings” (Level 5 [18]).

The support that was or was not available for residents was also a factor:

I don’t know that the resident was so much ready as that there is additional supervision by seniors and attendings in place while allowing this intern to work on supervisory skills with med students. (Level 2 [58])

The availability of backup supervisors for a resident serving in a supervisory role was explicitly noted as a key factor only when placing residents in one of the borderline levels (Levels 2 and 4). Some participants were even willing to advance residents to a higher supervisory level with the caveat of recommending adjustments to their schedules that would postpone the supervisory rotations until later in the year. For example, one participant noted that “schedule changes [for a resident] made [his/her] supervisory role later in [the] second year of training” (Level 4 [78]).

Finally, training level rules set by the program served to hold back some residents who might otherwise have been deemed fit for a higher-level supervisory role. For example, one participant noted that a resident “[was] an intern but will be able to supervise once they get to junior year in six months” (Level 1 [1]).

Discussion

We found four themes that describe the key factors CCC members use when they recommend residents to one of five supervisory roles. As Figure 1 illustrates, CCC members consider factors related to residents (themes 1 and 3), the environment (theme 4), and both residents and the environment (theme 2) when making their recommendations.

Figure 1: Relationship of the themes and key factors in a study of clinical competency committee members’ recommendations regarding pediatric residents’ readiness to serve as supervisors, 2015–2016.

We found that CCC members recommended residents across all training levels to the highest level of supervisory roles. This finding likely represents the commonplace practice of PGY2 residents in pediatrics serving in supervisory roles; thus, typically developing residents may be deemed ready to supervise by the end of their PGY1 year. As Schwartz30 noted: “In most specialties, residents in their second year are allowed (even required) to supervise medical students or more junior residents. Program directors trust residents to supervise before they trust them to practice unsupervised.” However, this finding could also represent a low bar for entrusting residents to serve in supervisory roles or an incomplete consideration of the full implications of such a decision, especially when considering that many PGY1 residents were deemed able to serve as a supervisor in all settings rather than just some settings.

Trustworthiness

The recommendations we studied represent summative entrustment decisions. Recently, ten Cate31 argued that four factors drive entrustment decisions: perceived trustworthiness, risks, benefits, and the trust propensity of the “trustor.” The first three factors are represented in the themes we identified in this study. However, our study focused on the trustworthiness construct that has been described in recent years.8,31,32

Various components of trustworthiness, or the characteristics that individuals demonstrate which lead others to place trust in them, have also been described.8,31,32 In a model from medicine, Kennedy and colleagues8 described four components of trustworthiness: knowledge and skill, discernment (i.e., knowing one’s limits and seeking help appropriately), conscientiousness (i.e., reliability and follow-through), and truthfulness. CCC members in our study described the first three components as key factors in their supervisory role decisions. Our findings thus further support the constructs defined by Kennedy and colleagues. Perhaps equally important, they also reinforce using the trustworthiness framework for assessment purposes.

Although trustworthiness was a consideration in all entrustment decisions, explicit references to residents’ abilities to provide safe care were made only for decisions regarding the highest- and lowest-level supervisory roles. Although safe care is almost certainly important for any summative assessment or advancement decision in medicine, it may be an explicit factor only when deciding to fully trust someone or to not trust him or her at all.

Experience and performance

CCC members noted that they considered residents’ experience when recommending them to a supervisory role. However, critical care experience was only a key factor when placing residents in the highest-level supervisory role. When choosing one of the “some settings” levels (Levels 2 and 3), CCC members identified either outpatient or inpatient general pediatric contexts as the settings in which they imagined the resident serving as a supervisor. Thus, perhaps expectedly, CCC members seemed to use critical care experience as an indicator of the resident being able to serve as a supervisor in all settings, particularly those settings in which patients are more likely to be critically ill.

Residents’ performance at the extremes of demands, care coordination, and ability to manage ambiguity were all key factors in CCC members recommending the highest-level supervisory role. This finding likely reflects that committee members use these higher-order abilities to discern residents’ placement at the highest level. It also underscores the importance of collecting assessment data in these areas for residents who are further along in their development. Such data are likely to be beneficial in making continued advancement and summative assessment decisions, and they provide learners with the most meaningful feedback to take the next steps in their development.33,34

CCC members noted that knowledge of and facility with working in systems, as well as being a self-directed learner, were important factors in their decision making, highlighting the importance of competency domains such as systems-based practice and practice-based learning and improvement.35 Although both are newer domains compared with traditional domains such as patient care and medical knowledge, their importance to CCC members’ summative decision making was clear. This finding underscores the importance of ensuring that data that speak to these domains are included when designing an assessment program.

Of note, professionalism was not a key factor for recommending residents to the lowest-level supervisory role. Perhaps CCC members did not even consider professionalism when residents had not met other fundamental achievements.

Ability to lead a team

We also found that residents’ ability to lead teams was a key factor in CCC members’ decisions. However, a number of related comments mentioned characteristics of individuals, such as maturity and composure, that are likely not frequently or explicitly included in residency curricula or assessment structures. This finding raises the question of whether programs should be carefully considering these characteristics at selection, whether their early presence will predict entrustment trajectories, and how we can foster these characteristics over time.

Contextual considerations

The availability of other supervisors to act as backup was noted as a key factor when recommending residents to one of the borderline levels (Levels 2 and 4). Advancing these residents to a borderline level with support available, rather than keeping them at the next lowest level, is critical not only for ensuring safe care but also for placing residents at the leading edge of their development with just enough scaffolding to support them and allow them to further develop their competence.36,37

Some CCC members noted a willingness to advance residents to a higher level if adjustments could be made to their schedule, such as supervisory rotations coming later in the year, to allow time for more growth in the interim. Although it is a practical solution, this rationale raises questions about the integrity of decisions that rely on an expected trajectory and the need for more dynamic assessment structures that allow for the reconsideration of these decisions over time.

CCC members discussed patient volume and acuity as key factors only when recommending residents to a supervisory level at one of the extremes. On the lower extreme, residents’ inability to manage higher volume and acuity was a key factor in holding them back from a higher level. On the upper extreme, CCC members used residents’ ability to manage higher volume and acuity as a key factor in advancing them. Both volume and acuity can affect workload, and residents who are early in their training have described the cognitive burden that each new patient adds, reporting that they feel they learn best with no new admissions on an inpatient service.38 Learners and CCC members alike recognize the implications of even small increases in patient volume or acuity and how these changes can affect performance. Attending to these factors in assessment is likely important not only for making advancement decisions but also for designing systems in which resident staffing is safe and beneficial for both residents and patients.

Implications for practice

Figure 1 illustrates that both resident factors and environmental factors are considered when assessing the supervisory abilities of residents. We believe that the themes we identified related to these resident factors provide useful insights into the types of data residency programs may wish to collect in their assessment programs. Perhaps more important, however, our findings illuminate how CCC members use environmental factors in their decision making. Programs may be less likely to collect robust data in these areas, but our findings suggest that formally presenting this information to CCC members is necessary.

Limitations

This study has limitations. First, it was conducted with CCC members from a single specialty, and the findings may not fully transfer to other specialties. Second, our study design did not allow for follow-up questions, so we could not clarify participants’ responses, and they could not elaborate on them further. That said, we had 84 CCC member participants who provided information on the key factors they considered when making more than 750 decisions about residents’ supervisory roles. In completing this task, many participants offered multiple key factors that drove their decision making. Third, three members of the coding team had experience as residency program leaders and with leading CCC efforts, which could have introduced bias into their coding. Including two other coders with no personal experience in program leadership or CCC membership provided balance to this potential bias. Finally, all data were self-reported by CCC members without objective data to support or refute their comments.

Conclusions

Our study found that CCC members consider residents’ trustworthiness, demonstrated performance, and leadership skills when determining the supervisory role in which they can serve. These factors all focus on residents’ abilities. However, experience and contextual considerations external to the resident also serve as key factors in these decisions. The interplay between these factors in CCC decision making, then, is important to consider as CCC processes are optimized and studied further.

Acknowledgments:

Members of the Association of Pediatric Program Directors Longitudinal Educational Assessment Research Network Clinical Competency Committee Study Group, who meet the criteria for authorship but are not named above, include Michelle Barnes, MD, University of Illinois College of Medicine; Natalie Burman, DO, MEd, Naval Medical Center San Diego; Caren Gellin, MD, University of Rochester School of Medicine and Dentistry; Kathleen Gibbs, MD, Children’s Hospital of Philadelphia and the Perelman School of Medicine at the University of Pennsylvania; Javier Gonzalez del Rey, MD, MEd, Cincinnati Children’s Hospital Medical Center/University of Cincinnati College of Medicine; Su-Ting T. Li, MD, University of California, Davis, School of Medicine; Jon F. McGreevy, MD, MSPH, Phoenix Children’s Hospital/University of Arizona College of Medicine; Sue Poynter, MD, MEd, Cincinnati Children’s Hospital Medical Center/University of Cincinnati College of Medicine; Shannon E. Scott-Vernaglia, MD, Massachusetts General Hospital and Harvard Medical School; Tanvi Sharma, MD, MPH, Boston Children’s Hospital and Harvard Medical School; Daniel Sklansky, MD, University of Wisconsin School of Medicine and Public Health; and Lynn Thoreson, DO, University of Texas at Austin Dell Medical School.

References

1. ten Cate O, Scheele F. Competency-based postgraduate training: Can we bridge the gap between theory and clinical practice? Acad Med. 2007;82:542–547.
2. ten Cate O, Snell L, Carraccio C. Medical competence: The interplay between individual ability and the health care environment. Med Teach. 2010;32:669–675.
3. Hauer KE, Kohlwes J, Cornett P, et al. Identifying entrustable professional activities in internal medicine training. J Grad Med Educ. 2013;5:54–59.
4. American Board of Pediatrics. Entrustable professional activities for general pediatrics. https://www.abp.org/entrustable-professional-activities-epas. Accessed September 12, 2018.
5. Englander R, Flynn T, Call S, et al. Toward defining the foundation of the MD degree: Core entrustable professional activities for entering residency. Acad Med. 2016;91:1352–1358.
6. Sterkenburg A, Barach P, Kalkman C, Gielen M, ten Cate O. When do supervising physicians decide to entrust residents with unsupervised tasks? Acad Med. 2010;85:1408–1417.
7. Choo KJ, Arora VM, Barach P, Johnson JK, Farnan JM. How do supervising physicians decide to entrust residents with unsupervised tasks? A qualitative analysis. J Hosp Med. 2014;9:169–175.
8. Kennedy TJ, Regehr G, Baker GR, Lingard L. Point-of-care assessment of medical trainee competence for independent clinical work. Acad Med. 2008;83(10 suppl):S89–S92.
9. Hauer KE, Ten Cate O, Boscardin C, Irby DM, Iobst W, O’Sullivan PS. Understanding trust as an essential element of trainee supervision and learning in the workplace. Adv Health Sci Educ Theory Pract. 2014;19:435–456.
10. Sheu L, Kogan JR, Hauer KE. How supervisor experience influences trust, supervision, and trainee learning: A qualitative study. Acad Med. 2017;92:1320–1327.
11. Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system—Rationale and benefits. N Engl J Med. 2012;366:1051–1056.
12. Lomis K, Amiel JM, Ryan MS, et al.; AAMC Core EPAs for Entering Residency Pilot Team. Implementing an entrustable professional activities framework in undergraduate medical education: Early lessons from the AAMC core entrustable professional activities for entering residency pilot. Acad Med. 2017;92:765–770.
13. Conforti LN, Yaghmour NA, Hamstra SJ, et al. The effect and use of milestones in the assessment of neurological surgery residents and residency programs. J Surg Educ. 2018;75:147–155.
14. Promes SB, Wagner MJ. Starting a clinical competency committee. J Grad Med Educ. 2014;6:163–164.
15. French JC, Dannefer EF, Colbert CY. A systematic approach toward building a fully operational clinical competency committee. J Surg Educ. 2014;71:e22–e27.
16. Chahine S, Cristancho S, Padgett J, Lingard L. How do small groups make decisions? A theoretical framework to inform the implementation and study of clinical competency committees. Perspect Med Educ. 2017;6:192–198.
17. Ross FJ, Metro DG, Beaman ST, et al. A first look at the Accreditation Council for Graduate Medical Education anesthesiology milestones: Implementation of self-evaluation in a large residency program. J Clin Anesth. 2016;32:17–24.
18. Sklansky DJ, Frohna JG, Schumacher DJ. Learner-driven synthesis of assessment data: Engaging and motivating residents in their milestone-based assessments. Med Sci Educ. 2017;27:417–421.
19. Ketteler ER, Auyang ED, Beard KE, et al. Competency champions in the clinical competency committee: A successful strategy to implement milestone evaluations and competency coaching. J Surg Educ. 2014;71:36–38.
20. Shumway NM, Dacus JJ, Lathrop KI, Hernandez EP, Miller M, Karnad AB. Use of milestones and development of entrustable professional activities in 2 hematology/oncology training programs. J Grad Med Educ. 2015;7:101–104.
21. Hong R. Observations: We need to stop drowning—A proposal for change in the evaluation process and the role of the clinical competency committee. J Grad Med Educ. 2015;7:496–497.
22. Mount CA, Short PA, Mount GR, Schofield CM. An end-of-year oral examination for internal medicine residents: An assessment tool for the clinical competency committee. J Grad Med Educ. 2014;6:551–554.
23. Donato AA, Alweis R, Wenderoth S. Design of a clinical competency committee to maximize formative feedback. J Community Hosp Intern Med Perspect. 2016;6:33533.
24. Schumacher DJ, Sectish TC, Vinci RJ. Optimizing clinical competency committee work through taking advantage of overlap across milestones. Acad Pediatr. 2014;14:436–438.
25. Johna S, Woodward B. Navigating the next accreditation system: A dashboard for the milestones. Perm J. 2015;19:61–63.
26. Friedman KA, Raimo J, Spielmann K, Chaudhry S. Resident dashboards: Helping your clinical competency committee visualize trainees’ key performance indicators. Med Educ Online. 2016;21:29838.
27. Ekpenyong A, Baker E, Harris I, et al. How do clinical competency committees use different sources of data to assess residents’ performance on the internal medicine milestones? A mixed methods pilot study. Med Teach. 2017;39:1074–1083.
28. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77–101.
29. Frambach JM, van der Vleuten CP, Durning SJ. AM last page. Quality criteria in qualitative and quantitative research. Acad Med. 2013;88:552.
30. Schwartz A. What should we mean by “allowed to supervise others” in entrustment scales? Med Teach. 2018;40:642.
31. Ten Cate O. Managing risks and benefits: Key issues in entrustment decisions. Med Educ. 2017;51:879–881.
32. Colquitt JA, Scott BA, LePine JA. Trust, trustworthiness, and trust propensity: A meta-analytic test of their unique relationships with risk taking and job performance. J Appl Psychol. 2007;92:909–927.
33. Dannefer EF. Beyond assessment of learning toward assessment for learning: Educating tomorrow’s physicians. Med Teach. 2013;35:560–563.
34. Driessen E, Scheele F. What is wrong with assessment in postgraduate training? Lessons from clinical practice and educational research. Med Teach. 2013;35:569–574.
35. Swing SR. The ACGME outcome project: Retrospective and prospective. Med Teach. 2007;29:648–654.
36. Vygotsky L. Interaction between learning and development. In: Mind in Society. Cambridge, MA: Harvard University Press; 1978.
37. Fraser SW, Greenhalgh T. Coping with complexity: Educating for capability. BMJ. 2001;323:799–803.
38. Haney EM, Nicolaidis C, Hunter A, Chan BK, Cooney TG, Bowen JL. Relationship between resident workload and self-perceived learning on inpatient medicine wards: A longitudinal study. BMC Med Educ. 2006;6:35.

Appendix 1

Prompts for Clinical Competency Committee Participants in a Study of Recommendations Regarding Pediatric Residents’ Readiness to Serve as Supervisors, 2015–2016a

Supervisory role recommendation

Based on your review of performance data for this resident and the milestone levels you have assigned, which advancement decision would you recommend making for this resident at this time?

As you consider this task, please use the following definition of supervision: “Serving in a role where responsibilities include some type of oversight of either (1) more junior trainees (e.g., a more senior pediatrics resident supervising a pediatrics intern or medical student), or (2) trainees at the same level but with less pediatric experience (e.g., a third-year pediatrics resident supervising a third-year emergency medicine resident in the pediatric intensive care unit).”

Choose the response that describes what you would recommend allowing them to do and not what they are currently scheduled to do, if applicable.

Level 5: May serve as a supervisory resident in ALL settings

Level 4: May serve as a supervisory resident in ALL settings but is just above the borderline/marginal mark for serving in this role

Level 3: May serve as a supervisory resident in SOME settings

Level 2: May serve as a supervisory resident in SOME settings but is just above the borderline/marginal mark for serving in this role

Level 1: May not serve as a supervisory resident

Unable to determine

Free-text prompts

If “may serve as a supervisory resident in ALL or SOME settings” was chosen, what were the key factors that made you feel this resident was ready to serve in a supervisory role?

If a borderline level (Level 2 or 4) was chosen, what were the key factors that made you feel this resident was borderline?

If “may not serve as a supervisory resident” was chosen, what were the key factors that made you feel this resident may not serve in a supervisory role?

aFor the complete survey instrument, see Supplemental Digital Appendix 1 at https://links.lww.com/ACADMED/A602.
