Reviewing Residents’ Competence: A Qualitative Study of the Role of Clinical Competency Committees in Performance Assessment

Hauer, Karen E. MD; Chesluk, Benjamin PhD; Iobst, William MD; Holmboe, Eric MD; Baron, Robert B. MD; Boscardin, Christy K. PhD; Cate, Olle ten PhD; O’Sullivan, Patricia S. EdD

doi: 10.1097/ACM.0000000000000736
Research Reports

Purpose Clinical competency committees (CCCs) are now required in graduate medical education. This study examined how residency programs understand and operationalize this mandate for resident performance review.

Method In 2013, the investigators conducted semistructured interviews with 34 residency program directors at five public institutions in California, asking about each institution’s CCCs and resident performance review processes. They used conventional content analysis to identify major themes from the verbatim interview transcripts.

Results The purpose of resident performance review at all institutions was oriented toward one of two paradigms: a problem identification model, which predominated; or a developmental model. The problem identification model, which focused on identifying and addressing performance concerns, used performance data such as red-flag alerts and informal information shared with program directors to identify struggling residents.

In the developmental model, the timely acquisition and synthesis of data to inform each resident’s developmental trajectory was challenging. Participants highly valued CCC members’ expertise as educators to corroborate the identification of struggling residents and to enhance credibility of the committee’s outcomes. Training in applying the milestones to the CCC’s work was minimal.

Participants were highly committed to performance review and perceived the current process as adequate for struggling residents but potentially not for others.

Conclusions Institutions orient resident performance review toward problem identification; a developmental approach is uncommon. Clarifying the purpose of resident performance review and employing efficient information systems that synthesize performance data and engage residents and faculty in purposeful feedback discussions could enable the meaningful implementation of milestones-based assessment.

Supplemental Digital Content is available in the text.

K.E. Hauer is professor, Department of Medicine, University of California, San Francisco, School of Medicine, San Francisco, California.

B. Chesluk is clinical research associate, Evaluation, Research, and Development, American Board of Internal Medicine, Philadelphia, Pennsylvania.

W. Iobst is vice president for academic and clinical affairs and vice dean, Commonwealth Medical College, Scranton, Pennsylvania.

E. Holmboe is senior vice president, Accreditation Council for Graduate Medical Education, Chicago, Illinois, and adjunct professor of medicine, Yale School of Medicine, New Haven, Connecticut.

R.B. Baron is professor of medicine and associate dean for graduate and continuing medical education, Division of General Internal Medicine, Department of Medicine, University of California, San Francisco, School of Medicine, San Francisco, California.

C.K. Boscardin is associate professor, Department of Medicine, University of California, San Francisco, School of Medicine, San Francisco, California.

O. ten Cate is professor of medical education and director, Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands.

P.S. O’Sullivan is professor of medicine and director of research and development in medical education, Office of Medical Education, University of California, San Francisco, School of Medicine, San Francisco, California.

Funding/Support: Karen E. Hauer receives salary support from the American Board of Internal Medicine (ABIM). Benjamin Chesluk is employed by the ABIM, and Eric Holmboe and William Iobst were employed by the ABIM at the time of this study.

Other disclosures: None reported.

Ethical approval: The institutional review board at the University of California, San Francisco, School of Medicine approved this study.

Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A277.

Correspondence should be addressed to Karen E. Hauer, University of California, San Francisco, School of Medicine, Department of Medicine, 505 Parnassus Ave., M1078, Box 0120, San Francisco, CA 94143-0120; telephone: (415) 476-1964; e-mail: karen.hauer@ucsf.edu.

Medical educators assess trainees’ performance to determine whether they have achieved competence to provide high-quality, safe medical care. Increasingly, the public has come to expect that training programs have processes in place to ensure that future physicians are prepared for independent practice. Educators around the world have defined competencies, and more recently milestones, to articulate the desired characteristics of physicians’ performance and to serve as the basis for assessment.1–3

Whereas the completion of residency training within a specific discipline in an accredited program after a prescribed number of years has historically defined readiness for practice for nearly all trainees, mechanisms to confirm competence are now receiving closer scrutiny. Competency- and milestones-based education seeks to ensure that all trainees are prepared for practice and competent in key activities.4,5 Milestones are intended to serve as a framework to support residents’ learning as a coherent and logical sequence of experiences tailored to individual learning needs. Although the abstract nature of competencies can complicate their use,6 milestones aim to clarify progress to be assessed in specific competency domains.

Though graduate medical education (GME) program directors have always been responsible for monitoring residents’ performance, in the United States, the Next Accreditation System (NAS)7 now requires that, within GME programs, clinical competency committees (CCCs) measure residents’ progressive attainment of competence. As of 2013, CCCs must review all resident evaluations semiannually and report on milestones to the Accreditation Council for Graduate Medical Education (ACGME).1 Nonetheless, this mandate comes with unanswered questions about how these committees should approach their work to render judgments of competence. Decisions about what will be evaluated and how information will be synthesized into a judgment about a trainee’s performance reflect underlying assumptions about the purposes of the review process.8,9 The information sources available to CCCs, the ways that they share and use this information, and their perceptions of their decision-making accountability all may reflect their understanding of the scope and nature of their responsibilities toward trainees and patients. Synthesizing information about a trainee’s performance into a recommendation for advancement ultimately constitutes a judgment to trust the trainee to perform future clinical work independently and unsupervised.10

From the perspective of interpreting performance information for the purpose of guiding and ensuring residents’ development of competence, this study sought to describe the current state of CCCs in GME. Although CCCs are now required in residency programs, little information exists in the literature to guide their work. In addition, how these committees approach their charge or perceive their purpose, or how their operations align with their intentions, remains unknown. This study aims to characterize residency CCCs, program directors’ understandings of their purpose, and the ways in which they use performance information to make judgments about residents’ competence. Study results will identify current practices to help educators address the relationship between assessment and curricular design, learning, and outcomes. At a pivotal time for assessment in GME, study findings can provide baseline insights about how program directors, as both leaders and end users of the CCC process, perceive their charge and their accountability for ensuring residents’ competence.

Method

Study design

This qualitative study used conventional content analysis, which seeks to describe a phenomenon through the examination, coding, and interpretation of data to identify themes.11 The investigators conducted semistructured interviews with residency program directors at five institutions in California in 2013. Anticipating variability across programs, the investigators used interviews to gain an in-depth understanding of program directors’ perceptions of their CCC procedures and the results achieved. The institutional review board at the University of California, San Francisco, School of Medicine approved this study.

The research team included the principal investigator (K.E.H.), who had experience studying and conducting performance assessment and had served on a student competency committee. The research assistant, who conducted the interviews, had extensive experience in qualitative interviewing and research. The remaining team members brought expertise in research methods (B.C., C.K.B., O.T.C., P.S.O.) and competency-based education across institutions (W.I., E.H., R.B.B.).

Sample

To maximize diversity of responses, the investigators chose stratified purposive sampling12 of residency program directors from the University of California Schools of Medicine at Davis, Irvine, Los Angeles, San Diego, and San Francisco. Programs were classified as larger or smaller and as procedural or nonprocedural (see Supplemental Digital Appendix 1 at http://links.lww.com/ACADMED/A277). Using a random number generator to increase generalizability and ensure representation, the investigators selected eight residency programs from each participating institution (three large procedural, three large nonprocedural, one small procedural, one small nonprocedural) and invited the directors of those programs to participate. After the initial interviews, the investigators invited additional program directors, randomly selecting one more program from each category at each institution (large procedural, large nonprocedural, small procedural, small nonprocedural); they anticipated achieving saturation with these additional participants.
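
As an illustration only, the stratified random selection described above can be sketched in a few lines of code; the quota mirrors the Method, but the institution names, program names, and seed below are hypothetical placeholders rather than study data.

```python
import random

# Quota per institution described in the Method: 3 large procedural,
# 3 large nonprocedural, 1 small procedural, 1 small nonprocedural.
QUOTA = {
    ("large", "procedural"): 3,
    ("large", "nonprocedural"): 3,
    ("small", "procedural"): 1,
    ("small", "nonprocedural"): 1,
}

def select_programs(programs_by_institution, seed=None):
    """programs_by_institution maps an institution name to a list of
    (program_name, size, type) tuples; returns (institution, program) pairs."""
    rng = random.Random(seed)
    selected = []
    for institution, programs in programs_by_institution.items():
        for (size, ptype), n in QUOTA.items():
            # Build the stratum, then randomly draw up to n programs from it.
            stratum = [p for p in programs if p[1] == size and p[2] == ptype]
            chosen = rng.sample(stratum, min(n, len(stratum)))
            selected.extend((institution, name) for name, _, _ in chosen)
    return selected

# Usage with made-up data for a single hypothetical institution.
example = {
    "Institution A": [
        ("Surgery", "large", "procedural"),
        ("Orthopaedic Surgery", "large", "procedural"),
        ("Anesthesiology", "large", "procedural"),
        ("Urology", "large", "procedural"),
        ("Internal Medicine", "large", "nonprocedural"),
        ("Pediatrics", "large", "nonprocedural"),
        ("Psychiatry", "large", "nonprocedural"),
        ("Ophthalmology", "small", "procedural"),
        ("Dermatology", "small", "nonprocedural"),
    ],
}
print(select_programs(example, seed=42))
```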

Data collection

The principal investigator (K.E.H.) conducted three pilot interviews with fellowship directors from the University of California, San Francisco, School of Medicine and refined the interview guide for clarity. The research team developed the interview questions by reviewing the program requirements for each residency program and the literature on group decision making; they also drew on their expertise.

Potential participants received an e-mail invitation to participate. Nonrespondents received up to three follow-up e-mail invitations. Participants provided verbal consent and completed a seven-item electronic questionnaire and one interview. The questionnaire queried the participant’s specialty; gender; age; years as a CCC chair, program director, and/or associate program director; and number of residents in the program. The trained research assistant conducted phone interviews lasting approximately 30 minutes with each participant between January and May 2013, after the announcement of, but just prior to, the July 2013 deadline for the seven Phase 1 specialties to meet the NAS requirement to have a competency committee.13 Interviews were recorded; a professional transcription service transcribed them verbatim. Participants did not receive compensation.

Interview questions solicited descriptions of CCCs, including membership composition, member training, committee leadership, frequency of meetings, and resident performance data available (see Appendix 1). For programs without a functioning CCC, the program director was asked to describe the process used to review residents’ performance. All participants described, without using any identifying information, the review process for a recent example of a struggling resident and an example of a typical (nonstruggling) resident. Questions addressed program directors’ perceptions of the main purpose their committee or review process served, pros and cons of their current procedures, and any anticipated changes to their procedures.

Five investigators (K.E.H., B.C., W.I., E.H., the research assistant) read two to four early transcripts for clarity; subsequently, one unclear question was dropped, and two questions were added.

Analysis

Two investigators (K.E.H., the research assistant) extracted descriptive information about each program, including the presence of a CCC, the number of committee members, the frequency of committee meetings, and the presence of any other committee that also reviewed residents’ performance.

For the qualitative analysis of themes, the investigators conducted transcript coding iteratively with data collection using the constant comparative method14 and discrepant case analysis.15 One investigator (K.E.H.) read the first 10 interviews and generated initial themes. Four additional investigators (B.C., W.I., E.H., P.S.O.) each reviewed 5 to 6 randomly selected transcripts from that group of 10, reviewed the themes in a draft codebook, and met with the principal investigator to contribute codebook additions and revisions. The codebook then was finalized. Two coders (K.E.H., the research assistant) each independently coded all remaining transcripts. They resolved discrepancies through full transcript review and discussion.

Regular research team meetings amongst this diverse group of investigators with multiple perspectives served the purpose of triangulation.16 On the basis of the initial (open) coding, the investigators reviewed and discussed the data to identify and refine larger emerging themes.

Dedoose Version 4.5 (SocioCultural Research Consultants, LLC, Los Angeles, California) Web application software was used for coding, organizing, and retrieving data.

Results

Thirty-four of 60 (56.7%) invited residency program directors completed an interview. Nine additional program directors agreed to participate but did not complete an interview, either because of schedule constraints or because the study had achieved thematic saturation. Consistent with our sampling procedure, participants represented 22 large and 12 small programs, spanning 15 procedural and 19 nonprocedural specialties. The participation rate by school varied from 33.3% to 75.0%. Participants included 23 men and 11 women.

Overall, 31 (91.2%) participants completed the demographic survey. Their mean age was 48, ranging from under 35 to 66 years. They had served as program director for an average of 7 years (range 1–21, n = 30), comparable to program directors nationally.17 Fourteen had previously served as an associate program director. The number of residents in each program at the time of the study averaged 39 (range 0–99; one small program did not have residents at the time).

Description of CCC structure

Twenty-one of the 34 programs had CCCs. Twenty-two participants had chaired a CCC or equivalent group for an average of 5.6 years (range 1–18 years). Committee membership ranged from 3 to about 25 members, although many participants reported that attendance varied and often fell short of the full membership. Meeting frequency varied from weekly to yearly. Ten programs with CCCs described additional venues for discussing residents’ performance, such as a broader education committee or a general faculty meeting; these venues allowed for early identification or more in-depth discussion of struggling residents.

Characteristics of resident evaluation

From our analysis, two major paradigms emerged that characterized how programs with and without CCCs perceived their purpose in evaluating residents’ competence. These paradigms aligned with the tenets of a problem identification model and a developmental model. The problem identification model predominated. This model viewed the primary purpose of resident performance review as identifying the few struggling residents. The implicit assumption with this model was that participating in the residency program would lead most residents to competence and success by the end of training. In contrast, the developmental model viewed education as a planned series of steps toward mastery. The underlying orientation that all residents were learners informed a focus on guiding residents’ progressive development, without necessarily singling out “problem” residents. Some programs had elements of both models.

The results below describe three major themes and how they apply within each model. Participants’ study identification numbers are listed in parentheses with illustrative quotations. The major themes and associated subthemes are listed here and summarized in Table 1. They are (1) Use of residents’ performance data: variety of tools, clinical systems data, and informal data; (2) Committee member engagement: committee members’ qualifications, contributions to the credibility of the committee process, and decision making; and (3) Implications for residents: committee review consequences, feedback received, and dealing with risks. The results then describe participants’ perceptions of the effectiveness of their performance review processes with each model.

Table 1

Use of residents’ performance data.

Residents’ performance data came from a variety of tools implemented in the residency program, along with clinical systems data and informally gathered data. Although programs used a variety of assessment tools, the evaluation data used for performance review consisted primarily of supervisors’ global evaluations and knowledge examinations.

With the problem identification model, valued aspects of these performance data were the timely recognition of outliers, usually as low score alerts, and the corroboration of performance problems from more than one information source, including a verbal report from a clinical supervisor. Despite ongoing data collection, CCC members were viewed as important additional sources of information at committee meetings to supplement what was written in evaluations, particularly about struggling residents. Consequently, committee members were selected in large part on the basis of their contact with residents across sites. Committee members’ experience with residents informed an overall understanding of the residents’ competence and any performance problems, particularly in small programs and procedural specialties, whose characteristics facilitated direct observation.

Clinical systems data, such as incident reports and complaints from patients or interprofessional staff, constituted important “red-flag” problem identification mechanisms. These triggered program directors and CCCs to review other performance data for those residents, such as their prior supervisors’ evaluations and verbal comments from other supervisors, then to generate plans to intervene. Multiple participants described the value of the data they gathered informally through hallway conversations with faculty and chief residents and through e-mails from faculty. This information was “usually about a problem, not something that’s positive” (1011).

Participants described challenges with efficiently gathering and synthesizing evaluation data for committee review, which seemed to impede their ability to implement the developmental model. One described the challenge of synthesizing information efficiently to characterize a resident’s progress:

Our efficiency with gathering the data right now, it takes way too long.… I can’t, for example, think about or record in it where they are developmentally, or where they are on achieving clinical competence and clinical independence. (0901)

Despite the widespread use of multiple assessment tools, such as for multisource feedback, peer evaluations, and directly observed skills, most participants did not use these data to characterize each resident’s developmental trajectory. Neither clinical systems red-flag tools nor informally gathered data were mentioned in the context of informing a developmental model of performance review. Some participants did describe performance expectations or milestones based on the resident’s year of training that could serve as the foundation for the developmental model.

Committee member engagement.

Program directors perceived that committee members’ qualifications added credibility to the performance review process and enabled them to contribute to the decision making about residents’ advancement. In 14 of the 21 programs, CCC members received training for their committee roles, typically via the distribution of program goals, objectives, or milestones. A few held annual or biannual faculty development sessions on assessing residents. Participants opined that group performance review was credible because more opinions about struggling residents were shared, the program director was supported in making difficult decisions, and conflicting information was often reconciled.

Across programs, the problem identification model relied heavily on faculty members’ qualifications, namely their perceived status as expert, dedicated educators and clinical supervisors, to prepare them for their performance review responsibilities.

It’s both kind of learn as they go and then understanding of what our assessment strategies are, how we use them and they pick it up as they attend more and more committees, but there isn’t specific training. (5817)

The performance standard against which residents were compared was these faculty members’ general knowledge of resident performance—their normative frame of reference. Decision making about residents was commonly dichotomous (performing adequately or not) and inferred rather than determined by systematic deliberation or voting. The absence of concerns regarding a particular resident was taken to imply readiness for advancement, and decision making usually focused on struggling residents. Consequently, CCCs and program directors often did not discuss or review detailed data regarding the majority of residents. Decision making focused on problem identification was described as very efficient: “It usually takes a minute or two” per resident (0771) and “very easy for the other faculty” (4399). Participants found it difficult to help residents with variable performance ratings, and they managed these situations through additional data gathering, either by having committee members discuss their own direct experience or by contacting other clinical faculty.

Infrequently, participants described using a developmental model for analyzing residents’ progress. They did not describe specific faculty training. Some with CCCs were beginning to apply milestones or stepwise expectations for progress that would support the use of a developmental model. Four participants specifically described engaging CCC members by sharing performance data for all residents. Some expressed trepidation about the value and workload involved with comparing residents’ performance against milestones for decision making and whether it would really enhance the credibility of the committee decisions: “I just don’t want to dampen the spirit of my faculty that do this really well already with more lists and checklists and demands” (0370).

Implications for residents.

The implications of the performance review process included whether committees discussed all residents, how feedback was delivered, and where potential risks to residents existed. Sixteen committees reviewed all residents at least briefly at each meeting; others discussed only struggling residents. One participant from a program with a more in-depth review of all residents explained:

Their scores, their evaluation scores and the comments from the last six months since the last time we met are projected for each resident in the lecture room and then we discuss each resident individually. (4399)

Nearly all participants described providing feedback to residents after meetings, usually biannually.

The problem identification model allocated most performance review time to struggling residents: “There aren’t examples of people we’ve talked about who were doing just perfectly well” (1582). CCCs sometimes discussed high-performing residents, identified by evaluation tools or committee members’ personal knowledge, to suggest nominating them for awards, fellowships, or faculty positions. Programs oriented toward problem identification described sending feedback reports to residents; some scheduled feedback meetings for residents with the program director, whereas others left residents to figure out how to use the feedback on their own. These feedback meetings devoted minimal time to areas for growth; one participant described giving a resident feedback as follows:

Regarding themselves, I just don’t have much to say, that they’re doing a good job. I just encourage them to still do a good job. (2800)

Multiple participants perceived risks with performance review because it could be a biased process. They expressed apprehension that sharing performance information (forward-feeding) within large or representative committees, rather than serving as the content of helpful feedback to residents, could harm residents if committee members learned damaging information about their trainees. Perceiving clinical supervisors’ reluctance to document performance concerns in writing, program directors invited verbal or e-mail reports of concerns or used anonymous resident performance reviews. Some program directors sensed that residents were nervous about meeting with them for feedback.

With a developmental model, milestones guided performance review and the identification of residents’ relative strengths and weaknesses across multiple domains of competence. Feedback discussions prioritized the identification of areas for improvement for each resident. One participant explained:

These benchmarks are great because it lets us have a very transparent communication with our residents as to what the goals and benchmarks of residency are and then as well as with the faculty. (1570)

Some programs enlisted resident advisors who attended the CCC meetings to inform more in-depth feedback and learning plans. Another approach to enhancing the usefulness of feedback was providing aggregate data about the other residents in the program to contextualize each resident’s performance. The developmental model seemed to mitigate concerns about the risks of performance review because residents’ progressive maturation was expected and all residents would have areas for growth.

Evidence of the effectiveness of performance review.

Almost all participants expressed high confidence in their performance review processes. The grounds they cited varied, from gestalt impressions of effectiveness based on a sense that their “end product” (the trainees) was excellent, to the less common description of a rigorous, data-driven process in which every resident was carefully assessed. This confidence was derived from the experiences and commitment of individual faculty members and from the group as a whole. Some participants qualified their positive convictions with ambivalence, such as saying that the process was “adequate” or “80% good”; one said, “I feel reasonably well, I guess, as well as I could, unless someone comes up with some better ideas” (7415).

Some shared misgivings that their processes were effective for struggling residents but perhaps not for other residents. One expressed uncertainty about residents’ trajectory toward competence:

If someone were to ask me, “Is this second-year resident in a position where they can [do this particular clinical activity]?”… That’s a question I’d like to be able to answer for every second-year resident. (3651)

Nonetheless, this participant concurred with most others that, by graduation, all residents could perform the necessary activities and were praised by fellowship programs and employers.

Future directions.

When asked about anticipated changes to their resident performance review, participants’ opinions varied about the degree to which adding or changing an existing committee would simply satisfy requirements versus add value. Some predicted that the current committee would demonstrate adherence to ACGME expectations just by changing its name or providing documentation “in name only,” while the more effective work would continue to occur outside of formal resident performance review. Participants predicted that performance review with milestones would necessitate more time and better electronic systems for data capture, synthesis, and presentation. Many were hopeful that milestones would provide more granularity and specificity than reviews based on global evaluations or overarching competencies. Common uncertainties included how faculty would understand milestones, how discrepant performance across milestones would be managed, and whether the new system would be better than current procedures.

Discussion

Our findings illustrate the ways that residency programs engage in resident performance review through broad data collection and varying approaches to information synthesis. Our analyses identified two paradigms guiding performance review processes—a problem identification model and a developmental model. Decision making about residents’ advancement under the problem identification model is implicit, with the assumption that most residents will become successful by the end of training, consistent with the dwell time or tea-steeping model of medical education.18,19 Most programs take a problem identification approach rather than a developmental approach, and they question how milestones will be advantageously operationalized at this pivotal time of new requirements for milestones-based assessment and reporting.19 Our findings also reveal the questions, concerns, and aspirations that residency program directors harbor about how the developmental model—the goal of the NAS—will support individualized paths to competence.

These two models exemplify the tenets of quality assurance and quality improvement. The problem identification model serves a quality assurance purpose by identifying struggling residents. Program directors’ descriptions of the risks of this model for residents are consistent with interpretations of quality assurance as a necessary process to identify outliers, yet also potentially punitive and prone to generating defensiveness.20 In this model, residents may infer that the best course of action is to stay out of trouble and that minor performance deficits are tolerated unless they rise to the level of being labeled performance concerns. Even with a genuine desire for performance feedback to guide learning, trainees can fear appearing incompetent or jeopardizing relationships with supervisors.21 This scenario, in which formative feedback is perceived as high-stakes summative information, jeopardizes the intended value of the milestones as a tool to guide all residents’ efforts to become better. The developmental model of performance review, by incorporating milestones-based assessment, aligns with quality improvement, which proactively incorporates strategies for continuous improvement. Just as the emphasis on quality improvement in patient care has required medical professionals to learn and change their behaviors, the developmental model of resident performance review similarly requires changing culture and procedures. Assessment processes under this model aim to be learner-centered and to empower residents with the motivations and skills, supported with feedback and coaching, to self-improve toward competence for independent practice.22,23 These two models may coexist in residency programs, although it is possible that some program directors, particularly those with high-performing residents, may prioritize resources to provide intensive, milestones-based support or remediation for the small number of residents who do not meet the benchmarks.

Our findings show that programs can incorporate elements of both the problem identification and developmental models for resident performance review and determinations of competence. For example, a developmental approach can inform solutions to identified problems. However, each model highlights a particular emphasis—the problem identification model emphasizes immediate patient safety (by attempting to weed out potentially dangerous “problem” residents), whereas the developmental model emphasizes individual residents’ development and the quality of patient care provided throughout their careers. Milestones could enhance the problem identification model by grounding conversations in clear performance expectations and elucidating underlying etiologies of performance outliers. Whereas problem identification systems, such as alerts to program directors, consume faculty time as they investigate the situation, a more learner-centered orientation as is envisioned with competency- and milestones-based education may engage residents in doing some of the work currently done by their faculty, such as proactively identifying and addressing their own areas for growth.4 Going forward, the NAS has mandated that CCCs embrace the developmental strategy to evaluate each resident’s progress using milestones, and our findings suggest that program directors and CCCs will benefit from guidance on how to implement this new mandated developmental approach.

Participants expressed concern about the time required to assess residents’ performance and anticipated that it could be more onerous under the developmental model. However, the problem identification model also requires resources, and programs may underappreciate the work currently done within and outside of CCCs to collect informal data to supplement routine evaluations. Our participants’ programs typically allocated limited or no time for committees to review average and high-performing residents. Although the ideal amount of time for reviewing these residents is unknown, it is likely more than currently occurs. Nonetheless, there will continue to be a need to balance ideal practices with efficiency. Milestones may enable evaluators and committee members to reach judgments more efficiently if they understand and apply the milestones effectively with the aid of robust information technology.24–26 The modest amount of faculty development that the participating programs conducted to prepare faculty for CCC participation suggests that augmented faculty development will also be needed for committees to accomplish their goals of effective group decision making.27,28

Our review of participants’ experiences and their perceptions of the effectiveness of their processes, as well as our analysis of the ways that performance review supports residents’ development of competence, revealed practices that would support the aims of the NAS. CCC members must have criteria for performance review that include milestones and define what constitutes competence. Clinical supervisors and residents themselves need to understand the performance milestones and how they are applied. CCCs should view performance data for each resident before their discussions, and they should review the performance of all residents in the program. The use of multiple data sources coupled with timely data synthesis facilitates efficiency in the committee setting, as does pre-review and the synthesis of performance information by a small group prior to a full committee meeting. To facilitate each resident’s trajectory toward competence, committees should review progress over time by revisiting areas of focus or concern from prior meetings. Enlisting a resident’s advisor to discuss evaluations with her rather than just sending feedback passively can help the resident to identify next steps in her learning.

This study has limitations. Participants were program directors at five public institutions in one geographic region, potentially limiting the generalizability of our findings. However, a large number of program directors participated across specialties. In addition, our questions about competence review may have steered participants to showcase their best or idealized practices, and we did not observe the CCCs to confirm their procedures. Finally, our study occurred during a time of change in performance review requirements, and participants’ practices may continue to evolve, although our findings suggest that greater adoption of the developmental model may be difficult for programs.

The emergence of competency-based medical education and milestones-based assessment challenges medical educators to find meaningful strategies to assess trainees’ performance. The residency programs in our study used functional strategies for identifying performance outliers, yet many struggle to understand the trajectory of all residents’ development. The uneasy coexistence of these two paradigms (the problem identification model and the developmental model) suggests that, for CCCs to fulfill the vision of supporting individual paths toward competence, information systems to manage and synthesize performance data, clear understanding of the purpose of CCC performance review, and a culture that welcomes constructive feedback to residents are needed. These ingredients could empower programs to ensure their residents’ readiness for independent practice and fulfill their obligation for public and educational accountability of the GME system.

Acknowledgments: The authors thank the participating institutions’ designated institutional officials for help with study recruitment: Stephen Hayden, MD, Khanh-Van Le-Bucklin, MD, James Nuovo, MD, and Neil H. Parker, MD. They also thank Joanne Batt for support with data gathering and data management.

References

1. Accreditation Council for Graduate Medical Education. Frequently asked questions about the Next Accreditation System. December 2012. http://jcesom.marshall.edu/media/19073/NAS_FAQ-.pdf. Accessed March 16, 2015.
2. Royal College of Physicians and Surgeons of Canada. The CanMEDS framework. http://www.royalcollege.ca/portal/page/portal/rc/canmeds/framework. Accessed March 9, 2015.
3. General Medical Council. Tomorrow's Doctors. Foreword. http://www.gmc-uk.org/education/undergraduate/tomorrows_doctors_2009_foreword.asp. Accessed March 9, 2015.
4. Frank JR, Snell LS, ten Cate O, et al. Competency-based medical education: Theory to practice. Med Teach. 2010;32:638–645.
5. Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR. The role of assessment in competency-based medical education. Med Teach. 2010;32:676–682.
6. Jippes E, Van Luijk SJ, Pols J, Achterkamp MC, Brand PL, Van Engelen JM. Facilitators and barriers to a nationwide implementation of competency-based postgraduate medical curricula: A qualitative study. Med Teach. 2012;34:e589–e602.
7. Accreditation Council for Graduate Medical Education. Next Accreditation System. https://www.acgme.org/acgmeweb/tabid/435/ProgramandInstitutionalAccreditation/NextAccreditationSystem.aspx. Accessed March 9, 2015.
8. Pangaro L, ten Cate O. Frameworks for learner assessment in medicine: AMEE guide no. 78. Med Teach. 2013;35:e1197–e1210.
9. Hanson JL, Rosenberg AA, Lane JL. Narrative descriptions should replace grades and numerical ratings for clinical performance in medical education in the United States. Front Psychol. 2013;4:668.
10. Crossley J, Jolly B. Making sense of work-based assessment: Ask the right questions, in the right way, about the right things, of the right people. Med Educ. 2012;46:28–37.
11. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15:1277–1288.
12. Patton MQ. Qualitative Evaluation and Research Methods. 2nd ed. Newbury Park, Calif: Sage Publications; 1990.
13. Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system—rationale and benefits. N Engl J Med. 2012;366:1051–1056.
14. Dye JF, Schatz IM, Rosenberg BA, Coleman ST. Constant comparison method: A kaleidoscope of data. Qual Rep. 2000;4. http://www.nova.edu/ssss/QR/QR4-1/dye.html. Accessed March 9, 2015.
15. Morrow SL. Quality and trustworthiness in qualitative research in counseling psychology. J Couns Psychol. 2005;52:250–260.
16. Bearman M, Dawson P. Qualitative synthesis and systematic review in health professions education. Med Educ. 2013;47:252–260.
17. Accreditation Council for Graduate Medical Education. Data Resource Book. Academic Year 2011–2012. https://www.acgme.org/acgmeweb/Portals/0/PFAssets/PublicationsBooks/2011-2012_ACGME_DATABOOK_DOCUMENT_Final.pdf. Accessed March 9, 2015.
18. Iobst WF, Sherbino J, ten Cate O, et al. Competency-based medical education in postgraduate medical education. Med Teach. 2010;32:651–656.
19. Hodges BD. A tea-steeping or i-Doc model for medical education? Acad Med. 2010;85(9 suppl):S34–S44.
20. U.S. Department of Health and Human Services. Health Resources and Services Administration. http://www.hrsa.gov/healthit/toolbox/HealthITAdoptiontoolbox/QualityImprovement/whatarediffbtwqinqa.html [no longer available]. Accessed November 13, 2013.
21. Mann K, van der Vleuten C, Eva K, et al. Tensions in informed self-assessment: How the desire for feedback and reticence to collect and use it can conflict. Acad Med. 2011;86:1120–1127.
22. Schumacher DJ, Englander R, Carraccio C. Developing the master learner: Applying learning theory to the learner, the teacher, and the learning environment. Acad Med. 2013;88:1635–1645.
23. Ericsson KA, Krampe RT, Tesch-Römer C. The role of deliberate practice in the acquisition of expert performance. Psychol Rev. 1993;100:363–406.
24. Borman KR, Augustine R, Leibrandt T, Pezzi CM, Kukora JS. Initial performance of a modified milestones global evaluation tool for semiannual evaluation of residents by faculty. J Surg Educ. 2013;70:739–749.
25. Lowry BN, Vansaghi LM, Rigler SK, Stites SW. Applying the milestones in an internal medicine residency program curriculum: A foundation for outcomes-based learner assessment under the next accreditation system. Acad Med. 2013;88:1665–1669.
26. Smith CS, Morris M, Francovich C, et al; Pacific Northwest Consortium for Outcomes in Residency Education. A multisite, multistakeholder validation of the Accreditation Council for Graduate Medical Education competencies. Acad Med. 2013;88:997–1001.
27. van der Vleuten CP, Schuwirth LW, Driessen EW, et al. A model for programmatic assessment fit for purpose. Med Teach. 2012;34:205–214.
28. Hemmer PA, Pangaro L. Using formal evaluation sessions for case-based faculty development during clinical clerkships. Acad Med. 2000;75:1216–1221.

Appendix 1

Interview Guide for a Study of the Role of Clinical Competency Committees in Performance Assessment, University of California Schools of Medicine, 2013

Introduction

Thank you for participating in this interview. We appreciate your giving us your time and sharing your expertise. Our goal is to understand how residencies in different specialties and institutions review resident performance and competence. This interview will ask you to describe how your program reviews resident performance information either by you or a committee, and how you make judgments about residents who are competent or may need more work to achieve competence. Please do not use any resident names or identifying information in answering these questions.

Interview questions and probes

There are many different ways that residencies review their residents’ performance. We’d like to understand more about the different ways that programs are doing this. You may use different assessment tools, and we are interested in who or how that information is reviewed. Could you tell me about how your program reviews resident performance?

Probes to use if answers not provided:

  • Do you have a competence/performance committee? If so, what is it called?
  • Who chairs the committee?
  • Including the chair, how many members are on the committee?
  • Who is on the committee? What are their educational roles?
  • When or how does your committee decide to meet?
  • Does your committee meet regularly or ad hoc?
  • About how often does the committee meet?
  • If ad hoc, can you explain how a meeting is triggered?
  • Does your competency, promotions (or equivalent name) committee discuss all residents or only those with performance concerns?
  • How are residents identified for discussion?
  • Who identifies them?
  • Are there any other committees or groups that discuss resident performance?

There can be particular residents who are struggling. Without using any names or identifying information, I’m going to ask you to think of a specific struggling resident whom your committee recently discussed. Could you describe that discussion?

Probes to use if answers not provided:

  • How was that resident identified as struggling?
  • What kind of information did the committee discuss?
  • Who shared the information?
  • How did the committee make a determination/judgment? What kinds of information did they use?
    • Can you tell me more about how that conversation unfolded? What comments did people in the room make? Did everyone speak?
    • Can you describe how that decision is reached? Does every member vote? How do you deal with different opinions in the room?
  • What are the plans for follow-up?
  • Was this a typical example? Why or why not?
  • What are the other ways that struggling residents may be identified for discussion at your committee?

Now I’d like to shift gears to talk about a resident who is not struggling. Without using any names or identifying information, I’d like to ask you now about a recent example of a discussion about a particular resident that your committee had. Can you walk me through how the process unfolded?

Probes to use if answers not provided:

  • What information did the committee discuss?
  • What kinds of information does the committee use to make a determination?
  • Who shared the information?
  • What strategies or tools does your program use to assess residents?
  • What types of information about resident performance does your committee review? Tell me how the committee members see this information. Did they see it in advance of the meeting or at the meeting? Do they always see it or just in certain situations?
  • Did the group have a sense of where this resident is developmentally?
    • Do you use milestones? Is this a benchmark you use?
  • How did the discussion flow? Did the group reach consensus? If not, what kind of final decision was reached, and how?
  • Is this example similar or different to your group’s typical discussions of resident performance? How? What else can happen? What other kinds of information might people use?

Do committee members receive any training or guidelines related to assessing resident performance?

  • Can you tell me about that?
  • When and how often do they receive this information?
  • What is the content?

Or if not committee, focus item on the person who reviews resident performance.

What do you see as the main purpose your committee is serving?

Even though you are a chair/participant, what’s your personal take on the pros and cons of how your committee reviews resident performance?

  • Confidence in process
  • Confidence in judgments made
  • Strengths of your committee’s process
  • Challenges that you experience
  • Probe confidence about each of these steps: judgment/decision/action

Now I would like to ask you to think into the future. Are there ways that you envision your competency committee/procedures changing in the future?

Is there anything else you’d like to add?

Thank you.

© 2015 by the Association of American Medical Colleges