Original Articles: Gastroenterology

Can Pediatric Endoscopists Accurately Assess Their Clinical Competency? A Comparison Across Skill Levels

Scaffidi, Michael A.; Khan, Rishad; Carnahan, Heather; Ling, Simon C.‡,§; Lightdale, Jenifer R.||; Mamula, Petar; Yu, Jeffrey J.#; Grover, Samir C.∗,∗∗; Walsh, Catharine M.‡,§,#,††

Journal of Pediatric Gastroenterology and Nutrition: March 2019 - Volume 68 - Issue 3 - p 311-317
doi: 10.1097/MPG.0000000000002191

Abstract

See “Training in Endoscopy: Coaching, Deliberate Practice, and Reflection” by Sauer et al on page 298.

What Is Known

  • Assessment and self-assessment are encouraged practices for the development and maintenance of competence in pediatric endoscopy.
  • Physicians have been shown to have inaccurate self-assessment with regard to nonprocedural skills.

What Is New

  • This multicenter study found that endoscopic experience is positively associated with increased self-assessment accuracy of endoscopic competence among pediatric endoscopists.
  • Novice pediatric endoscopists tend to overestimate their performances, suggesting that they may benefit from incorporation of “informed self-assessment” into their training to help develop their conscious awareness.

Assessment is critical in helping to support pediatric endoscopy training. Historically, endoscopic competence has been determined using procedural volume, notably through the use of log books (1,2). With the paradigm shift toward competency-based training, procedural volume is no longer considered a viable surrogate for competency (3,4). Instead, trainees are expected to demonstrate competence in 3 core domains: technical (psychomotor), cognitive (knowledge and recognition), and integrative (expertise and behavior) skills (1,5). In-training assessment is required to support learning and document competence. Trainee engagement in this process is important, with self-reflection and self-directed learning encouraged by pediatric-focused gastroenterology organizations, such as the North American Society for Pediatric Gastroenterology, Hepatology and Nutrition (6) and the European Society for Pediatric Gastroenterology, Hepatology and Nutrition (7).

Self-assessment is important in education, as it allows individuals to identify their own learning needs and can help in developing personalized learning plans. In addition, there is evidence to suggest that individuals are more accepting of and responsive to external feedback when it aligns with their self-appraisal (8,9). To date, little is known about the utility of self-assessment of gastrointestinal endoscopic competence in the pediatric setting. Concerns have also been raised about self-assessment accuracy, defined as the extent to which one's own assessment agrees with an external measure of the same performance (eg, an assessor's score or an objective metric) (10). Several systematic reviews have suggested that healthcare professionals generally demonstrate inaccurate self-assessment (11–13). Generalizing these findings to procedural settings, however, may not be appropriate, as most studies do not differentiate between competency domains (ie, technical, cognitive, and integrative). Indeed, a narrative review of the surgical literature revealed mixed evidence regarding self-assessment accuracy of technical skills (14).

In pediatric endoscopy, there are no known studies on self-assessment. Although there is a growing literature on the topic among adult endoscopists (15–18), the evidence regarding accuracy is conflicting. Furthermore, there are clear distinctions between the competencies entailed in pediatric and adult endoscopy, such as endoscopic indications, sedation practices, and the nontechnical skills required to interact with pediatric patients and their families (1). These differences highlight the need for research within the pediatric context. To address this gap, we conducted a cross-sectional study of novice, intermediate, and experienced pediatric endoscopists with the specific aim of determining self-assessment accuracy across these groups.

METHODS

This cross-sectional study was an a priori planned subanalysis of data from a previously published prospective North American multicenter study designed to assess validity evidence of the Gastrointestinal Endoscopy Competency Assessment Tool for Pediatric Colonoscopy (GiECATKIDS) (19). Reporting of the findings follows the STROBE statement for reporting observational studies (20). Ethical approval was obtained from The Hospital for Sick Children's research ethics board, University of Toronto research ethics board, and Boston Children's Hospital's institutional review board. The Children's Hospital of Philadelphia's institutional review board granted ethics exempt status as a quality improvement project.

Participants

Study participants included pediatric gastroenterology fellows and staff physicians from 3 North American academic teaching hospitals. Purposive sampling was used to recruit participants for novice, intermediate, and experienced groups according to prespecified procedure volume criteria, developed from credentialing guidelines and a literature review of endoscopic competence (1,2). Novice endoscopists were defined as having performed <50 colonoscopies, intermediates as 50 to 250 colonoscopies, and experienced endoscopists as >500 colonoscopies (1,19). Informed consent was obtained from all of the participants where required.

Assessment Tool

The GiECATKIDS was used to assess pediatric endoscopists. This task-specific direct observational assessment tool, which consists of a global rating scale (GRS) and a checklist (CL), was developed by 41 pediatric endoscopy experts from 28 centers across North America using Delphi methodology (5,19). The GRS assesses 7 holistic aspects of the procedure (technical skill, strategies for scope advancement, visualization of mucosa, independent procedure completion, knowledge of procedure, interpretation and management of findings, and patient safety) using 5-point criterion-referenced ordinal scales with descriptive anchors reflecting the degree of autonomy demonstrated by the endoscopist. The sum of the item scores yields a total score from 7 to 35, with higher scores reflecting better performance. The CL assesses 18 key steps required to perform a colonoscopy using a dichotomous scale (1 = performed correctly; 0 = not performed/performed incorrectly), with scores ranging from 0 to 18. Both the GRS and CL contain items related to technical (GRS items 1–4; CL items 5–11), cognitive (GRS item 5; CL items 1, 3, 5, 12, 13, 15), and integrative (GRS items 6, 7; CL items 1, 2, 4, 14–18) competencies. The GiECATKIDS has strong evidence of reliability and validity for the assessment of pediatric clinical colonoscopy (19).
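To make the scoring structure concrete, the following minimal sketch (written in Python with assumed field names; it is not an official GiECATKIDS implementation) shows how the GRS and CL item responses described above roll up into their respective totals.

```python
# Minimal sketch of GiECAT-KIDS score aggregation (assumed structure, for
# illustration only): 7 GRS items rated 1-5 and 18 dichotomous CL items.
from dataclasses import dataclass
from typing import List

@dataclass
class GiecatKidsResponse:
    grs_items: List[int]  # 7 items, each scored 1-5 on the criterion-referenced scale
    cl_items: List[int]   # 18 items, each 1 (performed correctly) or 0 (not/incorrectly performed)

    def grs_total(self) -> int:
        assert len(self.grs_items) == 7 and all(1 <= x <= 5 for x in self.grs_items)
        return sum(self.grs_items)  # possible range: 7-35

    def cl_total(self) -> int:
        assert len(self.cl_items) == 18 and all(x in (0, 1) for x in self.cl_items)
        return sum(self.cl_items)   # possible range: 0-18

# Hypothetical trainee performance
resp = GiecatKidsResponse(grs_items=[3, 2, 4, 3, 3, 4, 4],
                          cl_items=[1] * 12 + [0] * 6)
print(resp.grs_total(), resp.cl_total())  # -> 23 12
```

Higher GRS and CL totals correspond to better performance, mirroring the interpretation given above.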

Data Collection

Each participant completed a clinical colonoscopy. Attending pediatric gastroenterologists supervised novice and intermediate endoscopists, providing verbal and/or hands-on assistance and taking over performance of the procedure as appropriate. During each procedure, participants’ performance was assessed by an experienced endoscopist (>500 procedures) using the GiECATKIDS. The assessors were attending endoscopists from the same hospital as the participants; however, they were not precepting the procedure. Assessment of a subset of 22 procedures by a second experienced observer showed high interrater reliability (19). Participants assessed their own performance immediately after the procedure using the GiECATKIDS. Both external assessors and participants were encouraged to read the descriptions for each domain on the GiECATKIDS GRS and each item on the CL and to use the full range of responses, but no formal rater training was provided. In addition, for each procedure, case difficulty was rated by the assessor on a 5-point Likert-type scale ranging from 1 (extremely easy) to 5 (extremely difficult), and bowel preparation was evaluated using the Ottawa bowel preparation scale (21).

Outcome Measures

The primary outcome was the difference between novice, intermediate, and experienced pediatric endoscopists with respect to self-assessment accuracy of colonoscopy competence in the clinical setting. Self-assessment accuracy was computed by comparing self- and externally assessed GiECATKIDS scores. Secondary outcome measures were self-assessment accuracy measures for the technical, cognitive, and integrative domains of the GiECATKIDS.
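Expressed formally (the notation below is ours, introduced for clarity rather than taken from the study), the accuracy measure used throughout the analysis is the absolute difference between the two scores for each endoscopist:

```latex
% Self-assessment accuracy for endoscopist i, computed from the self- and
% externally assessed GiECATKIDS scores; smaller values indicate closer
% agreement (ie, more accurate self-assessment).
\[
  \Delta_i = \left| S_i^{\mathrm{self}} - S_i^{\mathrm{external}} \right|
\]
```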

Statistical Analysis

All data were analyzed using SPSS 20 (IBM, Armonk, NY), with an alpha of 0.05 set to establish statistical significance for all tests. Participants’ demographics were characterized through descriptive statistics. GiECATKIDS percentage scores, including GRS and CL scores, were calculated (19). In addition, Mann-Whitney U tests were used to compare case difficulty and bowel preparation across groups.

Self-assessment accuracy was evaluated using 3 approaches: the intraclass correlation coefficient, model 1 (ICC1,1); absolute difference scores between self- and external assessment; and Bland-Altman analysis of agreement. First, the ICC1,1 between self- and externally assessed scores was calculated using a 1-way random-effects model for single and average measures with GiECATKIDS total, GRS, and CL scores. Second, absolute difference scores were calculated for GiECATKIDS total, GRS, and CL scores and for endoscopists’ total combined scores for the technical, cognitive, and integrative components of the GiECATKIDS. Kruskal-Wallis 1-way analysis of variance was used to compare absolute difference scores, with effect size measured using eta-squared (η2). Post hoc analysis of pairwise comparisons was completed using Mann-Whitney U tests, with adjustment for multiple comparisons (22). Third, self- and externally assessed scores for the total GiECATKIDS score, along with the total combined scores for the technical, cognitive, and integrative competency domains, were used to generate Bland-Altman plots. Bland-Altman analyses were conducted in line with recommendations from the method comparison literature (23).
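For readers who wish to reproduce these analyses outside of SPSS, the sketch below illustrates the 3 approaches using open-source Python libraries (pingouin for the ICC and SciPy for the nonparametric tests). The data frame, column names, and values are illustrative assumptions only, not study data.

```python
# Illustrative reanalysis sketch (toy data, assumed column names); the study
# itself used SPSS 20.
import pandas as pd
import pingouin as pg
from scipy import stats

# One row per endoscopist: experience group plus externally and self-assessed
# GiECAT-KIDS total percentage scores.
df = pd.DataFrame({
    "endoscopist": range(1, 8),
    "group": ["novice", "novice", "novice", "intermediate",
              "intermediate", "experienced", "experienced"],
    "external": [52.0, 61.0, 58.0, 74.0, 70.0, 92.0, 95.0],
    "self":     [70.0, 66.0, 75.0, 80.0, 68.0, 90.0, 96.0],
})

# 1) ICC(1,1): 1-way random-effects model; pingouin reports the single-measures
#    estimate as "ICC1" and the average-measures estimate as "ICC1k".
long = df.melt(id_vars=["endoscopist", "group"], value_vars=["external", "self"],
               var_name="rater", value_name="score")
icc = pg.intraclass_corr(data=long, targets="endoscopist",
                         raters="rater", ratings="score")
print(icc[icc["Type"].isin(["ICC1", "ICC1k"])])

# 2) Absolute difference scores compared across groups with Kruskal-Wallis,
#    followed by pairwise Mann-Whitney U tests as post hoc comparisons.
df["abs_diff"] = (df["self"] - df["external"]).abs()
print(stats.kruskal(*[g["abs_diff"].values for _, g in df.groupby("group")]))
print(stats.mannwhitneyu(df.loc[df.group == "novice", "abs_diff"],
                         df.loc[df.group == "experienced", "abs_diff"]))

# 3) Bland-Altman: mean difference (external minus self, so overestimation
#    yields negative values, as in Fig. 1) and 95% limits of agreement.
diff = df["external"] - df["self"]
mean_diff, sd_diff = diff.mean(), diff.std(ddof=1)
limits = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
print(mean_diff, limits)
```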

RESULTS

Data from 21 novice, 16 intermediate, and 10 experienced endoscopists were analyzed. All novice and intermediate endoscopists were gastroenterology fellows (n = 37), and all experienced endoscopists were attending physicians (n = 10). Forty-three (91.5%) endoscopists were right-hand dominant and 27 (57.4%) were female (15 novices, 8 intermediates, and 4 experienced). Novice, intermediate, and experienced endoscopists had performed, on average (±standard deviation [SD]), 13.3 ± 14.2, 106.7 ± 35.2, and 607.8 ± 191.7 previous colonoscopies, respectively. There were no significant differences between the colonoscopies performed by the 3 groups with respect to case difficulty (P = 0.684) or bowel preparation (P = 0.204).

Self-assessment Accuracy

Overall, there was moderate agreement between the external- and self-assessments. The ICC1,1 single measures values for the GiECATKIDS total, GRS, and CL scores were 0.72 (95% confidence interval [CI], 0.55–0.83), 0.81 (95% CI, 0.68–0.89), and 0.42 (95% CI, 0.16–0.63), respectively. The ICC1,1 average measures values for the GiECATKIDS total, GRS, and CL scores, were 0.84 (95% CI, 0.71–0.91), 0.89 (95% CI, 0.81–0.94), and 0.60 (95% CI, 0.28–0.77), respectively.

GiECATKIDS Total, Global Rating Scale, and Checklist Scores

There were significant differences between the 3 groups (novice, intermediate, and experienced endoscopists) with respect to self-assessment accuracy (Table 1). There was a significant group effect (novice, intermediate, experienced) for the absolute difference of externally and self-assessed GiECATKIDS total scores (Kruskal-Wallis chi-squared = 10.732, P = 0.005, η2 = 0.20), CL scores (Kruskal-Wallis chi-squared = 16.852, P < 0.001, η2 = 0.34), and GRS scores (Kruskal-Wallis chi-squared = 6.102, P = 0.047, η2 = 0.09). Post hoc analysis revealed that experienced endoscopists had a significantly smaller absolute difference score for the GiECATKIDS total score compared with novice endoscopists (P = 0.003). For the GRS, experienced endoscopists also had a significantly smaller absolute difference score than intermediate endoscopists (P = 0.045). For the CL, experienced endoscopists had significantly smaller absolute difference scores than both intermediate (P = 0.009) and novice endoscopists (P < 0.001). There were no other significant differences among groups. A comparison of novice, intermediate, and experienced endoscopists’ GiECATKIDS scores is provided in Supplementary Table S1 (Supplemental Digital Content, http://links.lww.com/MPG/B510).

TABLE 1:
Absolute difference scores between externally and self-assessed GiECATKIDS total, global rating scale, and checklist scores and total combined technical, cognitive, and integrative item scores for novice, intermediate, and experienced endoscopists

Bland-Altman analysis showed that the mean difference between externally and self-assessed GiECATKIDS total scores was −4.46 (SD = 16.63) (Fig. 1). The upper and lower limits of agreement were 28.13 (95% CI, 21.31 to 37.81) and −37.05 (95% CI, −30.23 to −46.52), respectively. All but 3 data points fell within the limits of agreement: 1 intermediate endoscopist was above the upper limit, and 1 novice and 1 intermediate endoscopist were below the lower limit. Novices tended to have negative difference scores (ie, below the 0 difference line). There was no clear trend for intermediate and experienced endoscopists.
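As a check on how these limits relate to the reported mean difference, and assuming the conventional mean difference ± 1.96 SD construction of the limits of agreement, the arithmetic works out as follows.

```latex
% Limits of agreement reconstructed from the reported mean difference (-4.46)
% and SD (16.63), assuming the standard 1.96 x SD construction:
\[
  \bar{d} \pm 1.96\,\mathrm{SD}
  = -4.46 \pm 1.96 \times 16.63
  \approx -4.46 \pm 32.59
  = (-37.05,\ 28.13)
\]
```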

FIGURE 1:
Bland-Altman analysis comparing externally and self-assessed GiECATKIDS total scores. The mean difference (central dashed line) and 95% upper and lower limits of agreement (upper and lower dashed lines, respectively) are shown. GiECATKIDS = Gastrointestinal Endoscopy Competency Assessment Tool for Pediatric Colonoscopy.

Technical, Cognitive, and Integrative Competencies

Comparison of GiECATKIDS items reflective of the technical, cognitive, and integrative competency domains revealed significant differences among the 3 groups with respect to self-assessment accuracy (Table 1). There was a significant group effect (novice, intermediate, experienced) for the absolute difference between externally and self-assessed scores for the technical (Kruskal-Wallis chi-squared = 8.722, P = 0.013, η2 = 0.15) and cognitive (Kruskal-Wallis chi-squared = 6.502, P = 0.039, η2 = 0.10) competency domains. There was no significant group effect for the integrative competency domain (Kruskal-Wallis chi-squared = 4.842, P = 0.089). Post hoc analysis revealed that experienced endoscopists had a significantly smaller absolute difference score compared with novice endoscopists for the technical (P = 0.010) and cognitive (P = 0.035) competency domains. There were no other significant pairwise differences.

Bland-Altman analyses of scores for the technical, cognitive, and integrative competency domains are summarized in Table 2. For the technical competency domain (Fig. 2A), 3 data points were outside the limits of agreement, with 1 intermediate endoscopist above the upper limit and 1 intermediate and 1 novice endoscopist below the lower limit. Novices tended to have negative difference scores; there was no clear trend for intermediate and experienced endoscopists. For the cognitive competency domain (Fig. 2B), 4 data points were outside the limits of agreement, with 1 novice above the upper limit and 2 novices and 1 intermediate endoscopist below the lower limit. Novices tended to have negative difference scores; there was no clear trend for intermediate or experienced endoscopists. For the integrative competency domain (Fig. 2C), 5 data points were outside the limits of agreement, with 1 novice and 1 intermediate endoscopist above the upper limit and 2 novice and 1 intermediate endoscopists below the lower limit. Novices tended to have negative difference scores; there was no clear trend for intermediate and experienced endoscopists.

TABLE 2:
Bland-Altman analysis of mean difference scores between externally and self-assessed GiECATKIDS total scores and technical, cognitive, and integrative competency domain scores for all endoscopists
FIGURE 2:
A–C, Bland-Altman analysis comparing externally and self-assessed GiECATKIDS scores for the technical, cognitive, and integrative competency domains. The mean difference (central dashed line) and 95% upper and lower limits of agreement (upper and lower dashed lines, respectively) are shown. GiECATKIDS = Gastrointestinal Endoscopy Competency Assessment Tool for Pediatric Colonoscopy.

DISCUSSION

This is the first evaluation of pediatric endoscopists’ self-assessment accuracy across levels of experience and competency domains. Overall, we found moderate agreement between the self- and external-assessments of endoscopic competence among the pooled groups. When analyzed separately, however, our findings suggest that endoscopic experience is positively associated with self-assessment accuracy. Specifically, experienced endoscopists were significantly more accurate in their self-assessments than novices. Furthermore, novice endoscopists primarily overestimated their competence. In addition, there were differences in self-assessment accuracy with regard to the technical and cognitive competency domains, with experienced endoscopists demonstrating more accurate self-assessments of their technical and cognitive competencies as compared to novices. Moreover, novice endoscopists tended to overestimate their performance across the technical, cognitive, and integrative competency domains.

Inaccurate self-assessment among novices can be explained by the Dunning-Kruger effect (16,24). The Dunning-Kruger effect is a cognitive bias wherein unskilled individuals are unaware of their own deficiencies, or “unconsciously incompetent” (25). Furthermore, they compensate for their lack of knowledge by overestimating their performance. Conversely, an increase in competence through training is associated with less inflated self-assessments, likely as a result of improved metacognitive skills (24).

In comparison with the adult literature, our findings agree with a study that examined endoscopists’ technical skills self-assessment accuracy using the Global Assessment of Gastrointestinal Endoscopic Skills tool, which found moderate agreement between self- and externally assessed scores (26). The authors did not, however, describe the self-assessment accuracy of endoscopists at different levels of experience. Another study of adult endoscopists, which used a methodology similar to that of the present study, also found that endoscopic experience was positively associated with self-assessment accuracy (16). Finally, although there are additional studies of endoscopic self-assessment, comparison with our findings is hindered by methodological differences, such as the use of simulation-based outcomes (15,18) and the absence of assessment tools with strong evidence of validity and reliability (17,27).

There are several study limitations. First, these data were drawn from a single clinical colonoscopy per participant. In addition, although attending physicians who had completed <250 procedures were eligible, none were recruited to the novice and intermediate groups. These 2 factors may limit the generalizability of our findings. Second, there were unequal group sizes for the novice, intermediate, and experienced endoscopists, which may have skewed the findings; we sought to address this potential drawback through comprehensive statistical analyses. Third, we stratified endoscopists by the number of procedures completed rather than by external observer scores, and procedural volume can be a poor surrogate for competence. This strategy, however, allowed us to stratify participants a priori. Finally, the assessors were not blinded to endoscopist identity and no observer training was provided, factors that may have introduced observer bias.

Our findings have implications for pediatric endoscopy training and practice. First, we strongly caution against the use of self-assessment among novice pediatric endoscopists for summative assessment of endoscopic competence. Novice pediatric endoscopists, as compared to experienced endoscopists, appear to lack insight into technical and cognitive competencies in particular. This is not to say that self-assessment of novices is not important. In fact, integration of self-assessment into training is critically important to the ultimate improvement of overall performance through education. As part of this process, it is essential for trainers to engage in a dialogue with trainees to help them deliberately and critically reflect on their own performance. By addressing learners’ (mis)perceptions of their performance, trainers help inform trainee self-assessment.

Use of an assessment tool, such as the GiECATKIDS, can also help to structure feedback and guide trainees’ self-assessment. Feedback for novice trainees should focus on simple, constructive, well-defined points which help to build a trainee's conscious understanding of the procedure or “conscious competence”—an explicit awareness of why and what they are doing when performing endoscopic procedures (25,28). In addition, benchmark (exemplar) videos and/or video-based feedback can be used. These interventions have been shown to improve self-assessment accuracy among novice adult trainees performing simulated flexible sigmoidoscopy (18). Such videos may help trainees better understand the criteria for competency so they are able to assess themselves as fairly and accurately as possible.

As trainees progress, their feedback should be augmented accordingly. In particular, endoscopic trainers can use questioning or decision training to encourage active problem solving and decision-making to foster self-reflection and conscious competence (28). Trainees can then use this feedback to inform their future self-assessments (ie, “informed self-assessment”). Of course, it is vital that all feedback to learners is provided by trusted, credible sources in safe environments (29).

Accurate self-assessment is important because it allows trainees to set learning goals that are moderately challenging yet realistic, a factor shown to enhance both learning and self-confidence (30–32). Trainers can work collaboratively with trainees to identify realistic action plans that are practically and directly linked to these goals and to help monitor progress toward them. These skills can then be applied in practice to identify weaknesses and develop self-directed learning plans.

In summary, we found that experience is positively associated with self-assessment accuracy of endoscopic competence among pediatric endoscopists. Specifically, experienced endoscopists were more accurate across the technical and cognitive competency domains than novices, suggesting that they can more accurately identify their deficiencies in these areas. Considering these findings in the context of current educational strategies, novice pediatric endoscopists may benefit from incorporation of “informed self-assessment” into their training to help develop their conscious awareness. Future research should evaluate which strategies, such as specific forms of feedback or the use of benchmark performance videos, are most effective in enhancing self-assessment accuracy in pediatric gastrointestinal procedures. In addition, a longitudinal study examining potential changes in the self-assessment ability of new pediatric gastroenterology graduates over time would help further elucidate the need for proctoring of early faculty. In particular, it would be useful to determine whether there is a dose-response relationship between endoscopic experience and self-assessment accuracy, especially among endoscopists of intermediate experience.

REFERENCES

1. Walsh CM. Assessment of competence in pediatric gastrointestinal endoscopy. Curr Gastroenterol Rep 2014; 16:401.
2. Walsh CM. In-training gastrointestinal endoscopy competency assessment tools: types of tools, validation and impact. Best Pract Res Clin Gastroenterol 2016; 30:357–374.
3. Ekkelenkamp VE, Koch AD, De Man RA, et al. Training and competence assessment in GI endoscopy: a systematic review. Gut 2015; 0:1–9.
4. Preisler L, Svendsen MBS, Svendsen LB, et al. Methods for certification in colonoscopy—a systematic review. Scand J Gastroenterol 2018; 53:350–358.
5. Walsh CM, Ling SC, Walters TD, et al. Development of the Gastrointestinal Endoscopy Competency Assessment Tool for pediatric colonoscopy (GiECATKIDS). J Pediatr Gastroenterol Nutr 2014; 59:480–486.
6. Leichtner AM, Gillis LA, Gupta S, et al. NASPGHAN guidelines for training in pediatric gastroenterology. J Pediatr Gastroenterol Nutr 2013; 56 (suppl 1):S1–S8.
7. D’Antiga L, Nicastro E, Papadopoulou A, et al. European Society for Pediatric Gastroenterology, Hepatology, and Nutrition syllabus for subspecialty training. J Pediatr Gastroenterol Nutr 2014; 59:417–422.
8. Eva KW, Armson H, Holmboe E, et al. Factors influencing responsiveness to feedback: on the interplay between fear, confidence, and reasoning processes. Adv Health Sci Educ Theory Pract 2012; 17:15–26.
9. Moore DE, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof 2003; 161:1–51.
10. Ward M, Gruppen L, Regehr G. Measuring self-assessment: current state of the art. Adv Health Sci Educ Theory Pract 2002; 7:63–80.
11. Gordon MJ. A review of the validity and accuracy of self-assessments in health professions training. Acad Med 1991; 66:762–769.
12. Davis DA, Mazmanian PE, Fordis M, et al. Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. JAMA 2006; 296:1094–1102.
13. Blanch-Hartigan D. Medical students’ self-assessment of performance: results from three meta-analyses. Patient Educ Couns 2011; 84:3–9.
14. Zevin B. Self versus external assessment for technical tasks in surgery: a narrative review. J Grad Med Educ 2012; 4:417–424.
15. Ansell J, Hurley JJ, Horwood J, et al. Can endoscopists accurately self-assess performance during simulated colonoscopic polypectomy? A prospective, cross-sectional study. Am J Surg 2014; 207:32–38.
16. Scaffidi MA, Grover SC, Carnahan H, et al. Impact of experience on self-assessment accuracy of clinical colonoscopy competence. Gastrointest Endosc 2018; 87:827–836.e2.
17. Moritz V, Holme O, Leblanc M, et al. An explorative study from the Norwegian Quality Register Gastronet comparing self-estimated versus registered quality in colonoscopy performance. Endosc Int Open 2016; 4:E326–E332.
18. Vyasa P, Willis RE, Dunkin BJ, et al. Are general surgery residents accurate assessors of their own flexible endoscopy skills? J Surg Educ 2016; 74:23–29.
19. Walsh CM, Ling SC, Mamula P, et al. The gastrointestinal endoscopy competency assessment tool for pediatric colonoscopy. J Pediatr Gastroenterol Nutr 2015; 60:474–480.
20. von Elm E, Altman DG, Egger M, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. BMJ 2007; 335:19–22.
21. Rostom A, Jolicoeur E. Validation of a new scale for the assessment of bowel preparation quality. Gastrointest Endosc 2004; 59:482–486.
22. Tomczak M, Tomczak E. The need to report effect size estimates revisited. An overview of some recommended measures of effect size. Trends Sport Sci 2014; 1:19–25.
23. Carkeet A. Exact parametric confidence intervals for Bland-Altman limits of agreement. Optom Vis Sci 2015; 92:e71–e80.
24. Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self-assessments. J Pers Soc Psychol 1999; 77:1121–1134.
25. Peyton J. The learning cycle. Teaching and Learning in Medical Practice 1998; Rickmansworth, UK: Manticore Europe Limited, 13–19.
26. Vassiliou MC, Kaneva PA, Poulose BK, et al. Global Assessment of Gastrointestinal Endoscopic Skills (GAGES): a valid measurement tool for technical skills in flexible endoscopy. Surg Endosc 2010; 24:1834–1841.
27. Koch AD, Haringsma J, Schoon EJ, et al. Competence measurement during colonoscopy training: the use of self-assessment of performance measures. Am J Gastroenterol 2012; 107:971–975.
28. Walsh CM, Anderson JT, Fishman DS. Evidence-based approach to training pediatric gastrointestinal endoscopy trainers. J Pediatr Gastroenterol Nutr 2017; 64:501–504.
29. Sargeant J, Armson H, Chesluk B, et al. The processes and dimensions of informed self-assessment: a conceptual model. Acad Med 2010; 85:1212–1220.
30. Bounds R, Bush C, Aghera A, et al. Emergency medicine residents’ self-assessments play a critical role when receiving feedback. Acad Emerg Med 2013; 20:1055–1061.
31. Chang A, Chou CL, Teherani A, et al. Clinical skills-related learning goals of senior medical students after performance feedback. Med Educ 2011; 45:878–885.
32. Eva KW, Munoz J, Hanson MD, et al. Which factors, personal or external, most influence students’ generation of learning goals? Acad Med 2010; 85 (10 suppl):S102–S105.
Keywords:

clinical competence/standards; educational measurement; endoscopy, gastrointestinal/education; endoscopy, gastrointestinal/standards; self-assessment


Copyright © 2018 by European Society for Pediatric Gastroenterology, Hepatology, and Nutrition and North American Society for Pediatric Gastroenterology, Hepatology, and Nutrition