
Original Research

An Examination of Self-Reported Assessment Activities Documented by Specialist Physicians for Maintenance of Certification

Lockyer, Jocelyn PhD; DiMillo, Shanna MBNF; Campbell, Craig MD, FRCPC

Journal of Continuing Education in the Health Professions: Winter 2020 - Volume 40 - Issue 1 - p 19-26
doi: 10.1097/CEH.0000000000000283


Physicians and surgeons receive performance data from many sources. These include audit and feedback,1,2 multisource feedback,3,4 annual performance reviews or appraisals,5 simulation activities,6 and self-assessment programs.7 They also receive data from their learners,8–10 through collegial discussions,11 and from annual performance reviews.12,13 Numerous studies have examined the effectiveness of performance data in changing physicians' knowledge, skills, or behaviors.2,4,7,14–17

Regardless of the assessment activity, there is variability in practice outcomes. A number of factors are associated with both learning and planned changes. These include discussion with a supervisor or colleague,1,4 the format of the data,2,4 multiple deliveries of content,1 availability of narrative comments,4,15,17 trusted sources,4 and creation of targets and action plans.1 The audit and feedback literature shows that small but potentially important improvements in professional practice are possible,1 a finding similar in magnitude to the impact of continuing professional development (CPD) courses on physician performance.18 Several CPD studies show that interactivity, multiple methods/exposures, and a focus on content physicians deem important are associated with behavior change.18

Contemporary thinking about data and its relationship to feedback suggests that supplying data merely provides information and that appropriate action will not necessarily follow. A recent article makes a case for feedback as a dynamic and co-constructive interaction in the context of a safe and mutually respectful relationship, for the purpose of challenging a learner's (and educator's) ways of thinking, acting, or being to support growth.19 Johnson and May20 used Normalization Process Theory to explore the characteristics of relatively successful behavior change interventions. They noted that Normalization Process Theory characterizes implementation processes as the product of social mechanisms that users adopt to make sense of new practices (coherence), engage with new practices (cognitive participation), enact a new practice (collective action), and appraise the effectiveness of a new practice (reflexive monitoring). They found that interventions focused on action and monitoring, such as audit and feedback, reminders, and educational outreach, tended to report more positive outcomes.20

Ensuring that physicians have access to facilitated discussion is consistent with the approach taken by the Medical Council of Canada's MCC 360 multisource feedback initiative. The MCC 360 approach couples the delivery of a report capturing quantitative and qualitative data from peers, colleagues, coworkers, and patients with a feedback and coaching session with a trained facilitator, allowing the physician to view the results in an objective and constructive manner and develop an action plan for change.21 Similarly, the General Medical Council in the United Kingdom requires that practicing doctors take part in an annual appraisal, facilitated by a trained appraiser (responsible officer), to inform medical revalidation recommendations.5

Royal College of Physicians and Surgeons of Canada (Royal College) fellows and other Maintenance of Certification (MOC) program participants are required to participate in assessment activities (three credits per hour) for all new 5-year cycles beginning on or after January 2014. To meet the MOC program requirements, they must complete a minimum of 25 credits per 5-year cycle and provide evidence of their participation in, and reflection on, the impact of these activities for their practice in their e-portfolio. The Royal College has categorized assessment activities under seven broad categories of practice assessment: accredited self-assessment programs, accredited simulation activities, chart audit and feedback, multisource feedback, direct observation, feedback on teaching, and annual performance reviews.22 Given the inherent variation in assessment processes and the theoretical considerations related to feedback and to interventions focused on action and monitoring, it is not known whether assessment activities with a discussion component (facilitated by a peer, colleague, or coach) affect the types of changes physicians plan.23

In a seminal study by Fox et al,24 340 physicians in the United States and Canada were interviewed to understand how learning related to change. Their interviewees identified 775 changes, of which 126 (16.2%) were categorized as accommodations (small simple changes), 481 (62.1%) as adjustments (small to moderate incremental changes), 141 (18.2%) as redirections (larger structural changes), and 27 (3.5%) as transformations (large complex changes involving many interrelationships).24 By contrast, in examining the assessment data that psychiatrists documented in the Royal College's e-portfolio, we found that the activities had a variable impact on learning. Some psychiatrists made no changes, while others made accommodations (eg, adopting a hospital's new informed consent procedure), adjustments (eg, implementing a new procedure), or redirections (eg, large complex changes to practice such as preparing for retirement). The changes spanned the spectrum of assessment activities, with the most common being self-assessment programs, feedback on teaching, regular performance reviews, and chart reviews. Less frequent activities included direct observation, peer supervision, and reviews by provincial medical regulatory authorities. The psychiatry study, while examining patterns of activity and change, did not quantitatively examine the documentation provided for learning and change, nor explore whether assessment activities with a discussion component resulted in a higher frequency of planned change.23

The purpose of this study was to examine assessment activities recorded by physicians in five Royal College specialties to identify the variation in assessment activities undertaken by different specialties, determine whether learning occurred, and assess the frequency and type of planned changes that accrued from each type of assessment activity. Our research questions were the following: (1) What were the most common assessment activities completed by each discipline? (2) Were there differences across specialties in the frequency of learning by assessment type? (3) What types of changes were planned? (4) Was there an association between learning and change? and (5) Was there a difference in planned changes for assessment activities with and without a focused discussion component?

METHODS

Data Collection

We selected data for analysis for five disciplines from the 68 primary specialties and subspecialties certified by the Royal College. These disciplines were selected to reflect a range of practices from medicine (cardiology and gastroenterology), surgery (orthopedic surgery and ophthalmology), and laboratory medicine (anatomical pathology). In selecting the five disciplines, attention was paid to the size of the discipline to preserve anonymity and confidentiality. All these disciplines have a national specialty society (NSS) that is a Royal College–accredited CPD provider organization. NSSs can accredit a full range of group learning as well as self-assessment programs and simulation activities. These disciplines are also likely to have access to, and to design, approve, or promote the use of, different assessment programs, strategies, and tools.23 We recognized that there were likely to be differences in the assessment activities available to each specialty. For example, orthopedic surgeons and ophthalmologists may receive performance reports from patient registries related to wait lists and infection rates.25–27 Given the importance of quality assurance and the need to reduce variance in pathology, anatomical pathologists have a long history of engagement in self-assessment programs.28,29 With impetus from the Canadian Association of Gastroenterology and the provincial and federal governments, wait times and quality of procedures have come under review, providing specialists with individual, group, or comparator data for some procedures.30,31

Premier Inc, the organization that developed and maintains the e-portfolio system for the Royal College, was asked to provide anonymized (individually deidentified) assessment data, in an Excel spreadsheet, for each of the disciplines that reported completion of assessment activities in 2017 (12 months). The data included a unique identifier attributable to an anonymized physician, the e-portfolio assessment category (practice assessment, accredited self-assessment programs, accredited simulation activities, chart audit and feedback, multisource feedback, direct observation, feedback on teaching, and annual performance review), and the physician's response to at least one of three questions posed: What did you learn or confirm? What additional learning are you planning to complete? What changes are you planning to implement in your practice?
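The export can be pictured as one row per reported activity. The following is a minimal sketch, in Python/pandas, of that assumed structure; the file name and column names are hypothetical and are intended only to mirror the fields described above, not the actual export.

```python
import pandas as pd

# Hypothetical file and column names; the actual export fields are described in the text above.
records = pd.read_excel("assessment_activities_2017.xlsx")

expected_columns = [
    "physician_id",             # unique, de-identified physician identifier
    "assessment_category",      # one of the e-portfolio assessment categories
    "learned_or_confirmed",     # "What did you learn or confirm?"
    "planned_learning",         # "What additional learning are you planning to complete?"
    "planned_practice_change",  # "What changes are you planning to implement in your practice?"
]
print(records.reindex(columns=expected_columns).head())
```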

Data Analysis

In setting up the data for analysis, we drew on our previous study,23 which had identified that psychiatrists frequently reported activities under the assessment category that did not meet criteria for assessment (ie, the data were not physician specific), were more appropriately classified as section 1 (group learning) or section 2 (self-directed learning), or were too vague to be classified.23 Examples of excluded section 1 activities were participation in unaccredited conferences or rounds, where summaries of performance in practice may have been presented. Excluded section 2 activities included committee work related to examination development, serving as a peer reviewer of the performance of others, or developing quality improvement activities. As the data entry classification system in the e-portfolio contained only seven broad categories, we created a coding system for assessment activities based on the previous work in psychiatry23 and our discussions about which activities to include in and exclude from the study. Although physicians were required to answer at least one of the three questions described above, some answered two or all three questions, or combined the answers under one of the questions. Using the data provided, the researchers undertook a line-by-line inductive examination of each reported activity to classify the assessment activity, determine whether learning was described (yes/no), and classify planned changes as no change, an accommodation (small change), or an adjustment/redirection (incremental/structural change). Given that there were relatively few redirections, we elected to combine this category with adjustments. See Supplemental Digital Content 1 (see Appendix 1, https://links.lww.com/JCEHP/A76) for an overview of the coding structure.

To ensure consistency in coding, two researchers (J.L. and C.C.) separately coded the data for gastroenterology in the Excel spreadsheets. The researchers then engaged in several meetings to resolve differences in coding and to discuss the data to ensure coding consistency. For the other specialties, one researcher undertook the coding and the second researcher then examined the data and proposed changes, where needed. The changes for each of the other four specialties were discussed over one or two meetings and final codes for each activity were determined.

Once the coding was complete, the data were analyzed descriptively. Analyses were conducted either by specialty group or with the disciplines collapsed and assessed as a whole. Where possible, self-reported planned change was assessed using the gradient described above. When analyzing the type of change by discipline for each of the different activity types, the gradient was collapsed to change/no change to allow for statistical analysis, given the small cell sizes. Initial analysis showed that most assessment activities did not result in planned changes and that most planned changes were accommodations. Assessment activities were analyzed individually, with the exception of a subgroup analysis that separated activities into those with and without a discussion component. Annual reviews, direct observation, and simulation were treated as assessment activities that included a focused discussion component.
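To make the collapsing and grouping steps concrete, the following is a minimal sketch in Python/pandas of how the three-level change gradient and the discussion-component flag could be recoded. The data frame, its column names, and the example rows are hypothetical illustrations, not the study data.

```python
import pandas as pd

# Hypothetical per-activity records with the three-level change gradient already coded.
df = pd.DataFrame({
    "discipline": ["Cardiology", "Gastroenterology", "Anatomical pathology"],
    "activity_type": ["annual review", "self-assessment program", "self-assessment program"],
    "change_gradient": ["no change", "accommodation", "adjustment/redirection"],
})

# Collapse the gradient to a binary change/no change variable for the chi-squared analyses.
df["planned_change"] = df["change_gradient"].where(
    df["change_gradient"] == "no change", "change"
)

# Flag activity types treated as having a focused discussion component
# (annual reviews, direct observation, and simulation, per the grouping described above).
discussion_types = {"annual review", "direct observation", "simulation"}
df["has_discussion"] = df["activity_type"].isin(discussion_types)

print(pd.crosstab(df["discipline"], df["planned_change"]))
```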

Chi-squared tests were used to assess relationships, followed by Cramér's V to measure the strength of the association. Pairwise comparisons by discipline also used the chi-squared test with a Bonferroni correction, followed by Phi to measure the strength of the association. The level of association or effect size for Cramér's V and Phi was classified as follows: df = 1 (small = 0.10, medium = 0.30, and large = 0.50) and df = 2 (small = 0.07, medium = 0.21, and large = 0.35).32 The threshold for statistical significance was set at 0.05, with the exception of the pairwise comparisons, where the threshold was set at 0.005. IBM SPSS Statistics 24 (IBM Corp, Armonk, NY) was used to conduct the analysis.
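The authors report conducting these analyses in SPSS; purely as an illustration, the sketch below reproduces the same calculations (Pearson chi-squared test, Cramér's V, and Bonferroni-corrected pairwise comparisons with Phi) in Python with SciPy. The contingency-table counts are hypothetical placeholders, not the study results.

```python
from itertools import combinations
from math import sqrt

import pandas as pd
from scipy.stats import chi2_contingency


def chi2_and_cramers_v(table: pd.DataFrame) -> tuple[float, float]:
    """Pearson chi-squared p-value and Cramér's V for an r x c contingency table."""
    chi2, p, _, _ = chi2_contingency(table, correction=False)
    n = table.to_numpy().sum()
    v = sqrt(chi2 / (n * (min(table.shape) - 1)))
    return p, v


# Hypothetical counts of activities with and without a recorded planned change, by discipline.
table = pd.DataFrame(
    {"change": [420, 300, 330, 310, 250], "no change": [760, 260, 540, 580, 920]},
    index=["Cardiology", "Gastroenterology", "Ophthalmology",
           "Orthopedic surgery", "Anatomical pathology"],
)

p, v = chi2_and_cramers_v(table)
print(f"Overall: p = {p:.4f}, Cramér's V = {v:.3f}")

# Pairwise comparisons: 5 disciplines give 10 pairs, so the Bonferroni-corrected threshold
# is 0.05 / 10 = 0.005 (the threshold reported above); Phi is the 2 x 2 effect size.
pairs = list(combinations(table.index, 2))
alpha = 0.05 / len(pairs)
for a, b in pairs:
    sub = table.loc[[a, b]]
    chi2, p_pair, _, _ = chi2_contingency(sub, correction=False)
    phi = sqrt(chi2 / sub.to_numpy().sum())
    verdict = "significant" if p_pair < alpha else "not significant"
    print(f"{a} vs {b}: p = {p_pair:.4f}, Phi = {phi:.3f} ({verdict})")
```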

The University of Calgary Conjoint Health Research Ethics Board approved the proposal.

RESULTS

In 2017, 3404 physicians from 5 specialties recorded 8275 assessment activities as part of the MOC program. However, 2212 activities (26.7%) were excluded as the entries pertained to section 1 accredited group learning (n = 29), section 1 unaccredited group learning (n = 613), section 2 systems learning (n = 663), section 2 planned learning (n = 336), section 2 scanning activities (n = 25), other teaching activities (n = 239), or activities that were too vaguely described to be classified (n = 337). See Supplemental Digital Content 1 (see Appendix 1, https://links.lww.com/JCEHP/A76) for a full description of excluded activities. As a result, this study analyzed 6063 assessment activities from 2854 physicians including 598 anatomical pathologists, 653 cardiologists, 381 gastroenterologists, 535 ophthalmologists, and 687 orthopedic surgeons.

The most common activities documented across all specialties were self-assessment programs (n = 2122), followed by feedback on teaching (n = 1078), personal practice assessments that physicians completed themselves (n = 751), annual performance reviews (n = 682), and reviews by third parties such as data provided by health systems, regulatory authorities, or registries (n = 661). There were statistically significant differences between specialty groups in the types of assessment activities documented. Annual reviews were more likely to be included by cardiologists, direct observation by orthopedic surgeons, personal practice assessments by ophthalmologists, and self-assessment programs by pathologists. Anatomical pathologists did not report simulation activities and reported the lowest proportion of feedback on teaching (Table 1).

TABLE 1. Assessment Activities by Assessment Activity Type and by Discipline

Learning was reported for 93.5% (n = 5668) of the assessment activities. It occurred in 87.5% (n = 608), 91.9% (n = 1734), 92.2% (n = 871), 95.6% (n = 1277), and 96.3% (n = 1178) of activities reported by gastroenterologists, anatomical pathologists, ophthalmologists, orthopedic surgeons, and cardiologists, respectively (χ2 (4) = 51.574, P < .0005, V = 0.092). Learning from annual reviews varied more by discipline, occurring in 72.8% (n = 59) of activities for gastroenterologists, 86.7% (n = 85) for ophthalmologists, 90.9% (n = 130) for orthopedic surgeons, 95.3% (n = 225) for cardiologists, and 97.6% (n = 121) for anatomical pathologists. There was a statistically significant difference between specialty groups for learning within annual reviews (χ2 (4) = 46.350, P < .0005, V = 0.261). There were no statistical differences in learning by specialty group for feedback on teaching or personal practice assessments.

Overall, 2126 (35.1%) of the activities resulted in planned changes, as shown in Table 2. This pattern was broadly consistent across disciplines, although anatomical pathology documented the greatest proportion of activities with no planned changes described (82.1% compared with 64.9% for all disciplines). Gastroenterology had a slightly higher proportion of assessment activities in which a planned change was described than not (53.4% versus 46.6%, respectively). Most of the changes were accommodation-type (simple) changes, with relatively few categorized as adjustment/redirection changes. A statistically significant association was observed between specialty group and the type of change recorded on the assessment activity (χ2 (8) = 412.190, P < .0005, V = 0.184). Assessment activities with statistically significant differences in the proportion reporting change/no change by specialty group included third party reviews, direct observation, feedback on scholarship, feedback on teaching, personal practice assessments, and self-assessment programs (Table 3).

TABLE 2. Assessment Activities by Category of Type of Change and by Discipline
TABLE 3. Type of Activity by Discipline and Type of Planned Change

Learning was self-reported at a similar rate for assessment activities with a focused discussion component (93.6% of activities recorded learning) and for assessment activities without a discussion component (93.5%). Across all disciplines, learning was associated with all types of planned changes, although it occurred less often when no change was described (90.3%) than for accommodations (99.4%) or adjustments and redirections (100%). There was a statistically significant difference in learning among the three change categories (χ2 (2) = 190.447, P < .0005, V = 0.177). See Supplemental Digital Content 1 (see Appendix 2, https://links.lww.com/JCEHP/A76) for additional information on learning and change by discipline for each of the different assessment activities.

Fewer than one-fifth of assessment activities (18.8%) included a discussion component. Activities with a focused discussion component were more likely to result in physicians recording a planned change than other types of assessments (47.5% versus 32.2%, respectively). Furthermore, 9.9% of assessment activities with a focused discussion component reported higher-order changes (adjustment/redirection), compared with 1.4% of activities without a discussion component. There was a significant difference in the distribution of types of changes between assessment activities with and without a focused discussion component (χ2 (2) = 271.340, P < .0005, V = 0.212) (Table 4).

TABLE 4. Role of Learning and Change in Activities That Include a Discussion Component Across All Disciplines

DISCUSSION

This study adds to our previous knowledge about physician documentation of assessment activities. We were able to use a large data set from the Royal College's e-portfolio describing assessment activities completed by physicians in 5 specialties over a 12-month period to conduct this study. Relatively few such data sets have been collected by other physician organizations. Using the recorded assessment activity data set, we identified a range of assessment activities that specialists accessed to satisfy the MOC program requirements. We noted that learning was almost ubiquitous across the assessment activities physicians undertook. However, only 35.1% of the activities resulted in planned changes, with change occurring more frequently for assessment activities that included a discussion component with a supervisor or peer.

This study confirmed the results of the previous study with psychiatrists,23 which identified that a variety of assessment activities were used to meet MOC requirements. As with psychiatrists, the most common assessment activities were those that were more readily accessible (eg, self-assessment programs, feedback on teaching, personal practice assessments, and annual performance reviews).23 These choices are not surprising. NSSs can approve their own programs as well as international programs, making these activities more available. Similarly, medical schools depend on specialists to teach undergraduate students and residents and must provide feedback on teaching as part of accreditation standards. For specialists working in institutions, annual reviews with their division or department head are an expectation. Similarly, for those working in hospitals, there are established processes of data collection that are part of regular monitoring of practice or are embedded within hospital QI processes.

There were differences by specialty in the choice of assessment activities. For example, anatomical pathologists had a significantly higher proportion of activities documented in self-assessment programs relative to the four other specialty groups. This likely reflects the availability of relatively more accredited self-assessment programs approved by their specialty in collaboration with the College of American Pathologists relative to other specialties. Third party reviews were an important source of data for both orthopedic surgery and gastroenterology. Both specialties have regular data reporting systems related to wait lists, infection control, and other services provided.25,26,28,30,31 Although some fellows in all specialties reported undertaking personal practice assessments, these seemed to play a larger role in ophthalmology practice, which may relate in part to the recent transition of ophthalmological practice to independent facilities, in which group audit processes may still be in evolution, necessitating personal assessments to monitor practice. Specialty-specific opportunities, teaching opportunities and their associated feedback, the presence or absence of third party reviews, and the type of work undertaken seemed to influence the choice of assessment activities documented.

Learning was common and associated with most of the activities, albeit with some differences between specialty groups, most notably for annual reviews. This may be the result of differences in the availability of performance data that can facilitate discussion, prompting learning and the development of plans for improvement. This frequency of learning is consistent with, but greater than, that found by Fox et al,24 whose study began by asking physicians to identify changes and then considered the role that learning played. By contrast, in this study, the assessment activity provided the impetus for physicians to document learning and plans for change, which may explain why only one-third of the activities led to plans for change, the majority of which were simple accommodation-type changes.

Significant variation across specialty groups was found with respect to intentions to change. For example, anatomical pathologists had the highest proportion of assessment activities with no planned changes recorded. By contrast, gastroenterologists provided a slightly higher proportion of assessment activities that indicated a planned change. This may have been related to the most commonly documented activity for both disciplines: self-assessment programs. For anatomical pathologists, self-assessment programs seemed to result in an affirmation that knowledge was appropriate, without the identification of a planned change to improve practice. By contrast, for gastroenterologists, half of all self-assessment activities recorded a planned change.

The association between planned change and assessment activities that included a focused discussion with peers or colleagues revealed a small-to-medium effect size that was statistically significant. This confirms the findings from both the CPD course and audit and feedback literatures, which have shown that discussion is integral to change.1,4,18 In a recent study performed in Scotland, data from 18 e-portfolios, along with interviews with the physicians and appraisers involved, were examined. The authors found that the appraisal discussion provided a forum for verbal reflection, with physicians noting that continuing professional development, audits, significant events, and colleague multisource feedback were sometimes considered useful. They also found evidence that physicians planned and made changes based on the data in the e-portfolio.5 Given research suggesting the value of structured discussions in understanding data and developing action plans, consideration of approaches such as a guided debrief33 or the R2C2 model, which builds relationships, explores reactions and content, and coaches for change, may be useful.34–36 The utility of the R2C2 model as an approach to debriefing has been established with practicing physicians engaged in physician assessment and feedback processes.34,37

There are limitations to the study. We focused on five disciplines that represented a range of specialties and practice contexts with sufficient numbers to quantify and extend the earlier work with psychiatrists.23 However, given some of the differences we found in the types of assessment activities between specialties, generalizability to other specialties, including primary care or generalist specialties, would need to be explored. Our analysis was limited to self-recorded documentation of assessment activities. Interviews may have yielded greater insight into why some activities were more likely than others to lead to change or planned change. In addition, interviews may have provided insight into whether learning that did not lead to practice change was a confirmation of current practice or whether there were other barriers to attempting practice change. Finally, our data set did not include data on other potentially influential factors in the choice of assessment activity, such as the geographic location of practice or type of practice (community versus academic). We were also limited by the nature of the questions posed to capture the reports of learning or change and by the brevity of the answers provided; the latter finding was similar to that noted in a UK study.5 Further, given that many specialists enter their e-portfolio data infrequently or toward the end of the calendar year, the numbers, types of activity, and plans for change are likely to be underrepresented. Nonetheless, our data set was large; the analysis was iterative, was performed over many months, and involved regular ongoing discussions by conference call and email to properly categorize the assessment activities, the role learning played, and the changes planned.

Future research should explore assessment choices in other specialties, the influence of data availability and practice characteristics on the use and impact of assessment activities on learning and change, and the role and impact of discussion in the formulation of plans for improvement.

CONCLUSIONS

Royal College fellows and MOC program participants engaged in many and varied assessment activities to meet their MOC program cycle requirements. Learning occurred with almost all assessment activities, while planned changes were discerned in about one-third of them. Most changes were simple, accommodation-type changes. Assessment activities that involved discussions with peers, supervisors, or colleagues were more likely to be associated with changes implemented or planned. Further research is required to identify the types of assessment activities and feedback strategies that foster learning and change.

Lessons for Practice

  • Physicians and surgeons participate in a variety of assessment activities as part of the Royal College Maintenance of Certification Program.
  • The choice of activities varies by discipline and may be influenced by the availability of data and other means of practice assessment.
  • Although most assessment activities resulted in learning, fewer resulted in plans for change.
  • Selection of assessment activities should include the opportunity for discussion of data to stimulate practice change.

REFERENCES

1. Ivers NM, Grimshaw JM, Jamtvedt G, et al. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. J Gen Intern Med. 2014;29:1534–1541.
2. Colquhoun HL, Carroll K, Eva KW, et al. Advancing the literature on designing audit and feedback interventions: identifying theory-informed hypotheses. Implement Sci. 2017;12:117.
3. Lockyer J. Multisource feedback: can it meet criteria for good assessment? J Contin Educ Health Prof. 2013;33:89–98.
4. Ferguson J, Wakeling J, Bowie P. Factors influencing the effectiveness of multisource feedback in improving the professional practice of medical doctors: a systematic review. BMC Med Educ. 2014;14:76.
5. Wakeling J, Holmes S, Boyd A, et al. Reflective practice for patient benefit: an analysis of doctors' appraisal portfolios in Scotland. J Contin Educ Health Prof. 2019;39:13–20.
6. Griswold-Theodorson S, Ponnuru S, Dong C, et al. Beyond the simulation laboratory: a realist synthesis review of clinical outcomes of simulation-based mastery learning. Acad Med. 2015;90:1553–1560.
7. Pluye P, Grad R, Granikov V, et al. Feasibility of a knowledge translation CME program: courriels Cochrane. J Contin Educ Health Prof. 2012;32:134–141.
8. van der Meulen MW, Smirnova A, Heeneman S, et al. Exploring validity evidence associated with questionnaire-based tools for assessing the professional performance of physicians: a systematic review. Acad Med. 2019;94:1384–1397.
9. Van Der Leeuw RM, Boerebach BC, Lombarts KM, et al. Clinical teaching performance improvement of faculty in residency training: a prospective cohort study. Med Teach. 2016;38:464–470.
10. van der Leeuw RM, Overeem K, Arah OA, et al. Frequency and determinants of residents' narrative feedback on the teaching performance of faculty: narratives in numbers. Acad Med. 2013;88:1324–1331.
11. Gagliardi AR, Wright FC, Anderson MA, et al. The role of collegial interaction in continuing professional development. J Contin Educ Health Prof. 2007;27:214–219.
12. Connelly MT, Inui TS, Oken E, et al. Annual performance reviews of, for and by faculty: a qualitative analysis of one department's experiences. J Fac Dev. 2018;32:5–12.
13. Bland CJ, Wersal L, VanLoy W, et al. Evaluating faculty performance: a systematically designed and assessed approach. Acad Med. 2002;77:15–30.
14. Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;6:CD000259.
15. Baker K. Clinical teaching improves with resident evaluation and feedback. Anesthesiology. 2010;113:693–703.
16. van der Leeuw RM, Slootweg IA, Heineman MJ, et al. Explaining how faculty members act upon residents' feedback to improve their teaching performance. Med Educ. 2013;47:1089–1098.
17. van der Leeuw RM, Schipper MP, Heineman MJ, et al. Residents' narrative feedback on teaching performance of clinical teachers: analysis of the content and phrasing of suggestions for improvement. Postgrad Med J. 2016;92:145–151.
18. Cervero RM, Gaines JK. The impact of CME on physician performance and patient health outcomes: an updated synthesis of systematic reviews. J Contin Educ Health Prof. 2015;35:131–138.
19. Ajjawi R, Regehr G. When I say…feedback. Med Educ. 2019;53:652–654.
20. Johnson MJ, May CR. Promoting professional behaviour change in healthcare: what interventions work, and why? A theory-led overview of systematic reviews. BMJ Open. 2015;5:e008592.
21. Medical Council of Canada. A National Multi-Source Feedback Program to Evaluate Physician Workplace Performance in their Roles as Communicator, Collaborator and Professional: MCC 360. Available at: https://mcc.ca/assessments/mcc360/. Accessed July 28, 2019.
22. Royal College of Physicians and Surgeons of Canada. MOC Program Regulations and Policies for Fellows. Available at: http://www.royalcollege.ca/rcsite/cpd/moc-program/fellows/moc-regulations-policies-for-fellows-e. Accessed July 28, 2019.
23. Lockyer JM, Sockalingam S, Campbell C. Assessment and change: an exploration of documented assessment activities and outcomes by Canadian psychiatrists. J Contin Educ Health Prof. 2018;38:235–243.
24. Fox RD, Mazmanian PE, Putnam RW. Changing and Learning in the Lives of Physicians. New York, NY: Praeger; 1989.
25. Amar C, Pomey MP, SanMartin C, et al. Sustainability: orthopaedic surgery wait time management strategies. Int J Health Care Qual Assur. 2015;28:320–331.
26. Dyck M, Embil JM, Trepman E, et al. Surgical site infection surveillance for elective primary total hip and knee arthroplasty in Winnipeg, Manitoba, Canada. Am J Infect Control. 2019;47:157–163.
27. Tan JCK, Ferdi AC, Gillies MC, et al. Clinical registries in Ophthalmology. Ophthalmology. 2019;126:655–662.
28. McFadyen C, Lankshear S, Divaris D, et al. Physician level reporting of surgical and pathology performance indicators: a regional study to assess feasibility and impact on quality. Can J Surg. 2015;58:31–40.
29. Tabatabai ZL, Auger M, Kurtycz DF, et al. Performance characteristics of adenoid cystic carcinoma of the salivary glands in fine-needle aspirates: results from the College of American Pathologists Nongynecologic Cytology Program. Arch Pathol Lab Med. 2015;139:1525–1530.
30. Carpentier S, Sharara N, Barkun AN, et al. Pilot Validation Study: Canadian Global Rating Scale for colonoscopy services. Can J Gastroenterol Hepatol. 2016;2016:6982739.
31. Janssen RM, Takach O, Nap-Hill E, et al. Time to endoscopy in patients with colorectal cancer: analysis of wait-times. Can J Gastroenterol Hepatol. 2016;2016:8714587.
32. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988:222.
33. Chesluk BJ, Reddy S, Hess B, et al. Assessing interprofessional teamwork: pilot test of a new assessment module for practicing physicians. J Contin Educ Health Prof. 2015;35:3–10.
34. Sargeant J, Lockyer J, Mann K, et al. Facilitated reflective performance feedback: developing an evidence- and theory-based model that builds relationship, explores reactions and content, and coaches for performance change (R2C2). Acad Med. 2015;90:1698–1706.
35. Dalhousie University. R2C2 Resources. Available at: https://medicine.dal.ca/departments/core-units/cpd/faculty-development/R2C2.html. Accessed September 26, 2019.
36. Sargeant J, Armson H, Chesluk B, et al. The processes and dimensions of informed self-assessment: a conceptual model. Acad Med. 2010;85:1212–1220.
37. Pooley M, Pizzuti C, Daly M. Optimizing multisource feedback implementation for Australasian physicians. J Contin Educ Health Prof. 2019;39:228–235.
Keywords:

continuing professional development; assessment; feedback; Maintenance of Certification; self-assessment programs


Copyright © 2020 The Alliance for Continuing Education in the Health Professions, the Association for Hospital Medical Education, and the Society for Academic Continuing Medical Education