The American public views physician board certification as a marker for clinical excellence.1 The American Board of Medical Specialties® states that “patients view Board Certification as an important indicator of a physician’s ability to provide high quality, expert care in a medical specialty.”2 The Accreditation Council for Graduate Medical Education (ACGME) uses graduate board certification rates as a metric when reviewing residency programs,3 and many physician groups will only hire board-eligible or board-certified physicians. Thus, board certification has important consequences for both individuals and residency programs.
Board certification through any of the 24 American Board of Medical Specialties Member Boards is a lengthy and rigorous process. All member boards require completion of an ACGME-accredited residency program and a written examination of specialty-specific medical knowledge. Many certifying boards, including the American Board of Anesthesiology, Inc.® (ABA), also require an oral examination. The ABA oral examination assesses the candidate’s ability to: exercise sound judgment in clinical decision making and management of surgical and anesthetic complications, appropriately apply scientific principles to clinical problems, adapt to unexpected changes in clinical situations, and logically organize and effectively present information.4 The ABA considers these attributes essential for board certification and maintains that these attributes cannot be adequately assessed using a written examination.5 Analyses have shown that candidates’ oral examination scores are only moderately correlated with their written examination scores,6,7 indicating that the oral examination is measuring a construct that is different from what is measured by the written examination.
Prior studies have shown that in-training examinations are predictive of subsequent written examination scores used for board certification in the United States7 and in Canada.8 Clinical skills (determined by a program director’s willingness to have a graduating resident provide anesthesia to them for 3 increasingly complex cases) were associated with higher odds of becoming board certified in anesthesiology.9 Program director ratings of graduating residents’ clinical competence were also related to board certification rates in internal medicine.10 These prior studies only examined whether a candidate became board certified or not (a dichotomous outcome).7–10 We are unaware of any published reports demonstrating that clinical performance scores relate to oral examination scores (a continuous outcome).
Our current study explores the extent to which clinical performance scores from the final year of residency are independently associated with oral examination scores when the oral examination is administered approximately 1 year after residency is completed. The ACGME requires residency programs to evaluate residents using 6 core competencies (patient care, professionalism, medical knowledge, practice-based learning and improvement, systems-based practice, and interpersonal and communication skills).11 We also hypothesized that certain of the 6 ACGME core competencies (i.e., interpersonal and communication skills) would be more related to oral examination scores than others (i.e., systems-based practice).
The Massachusetts General Hospital (MGH) IRB waived the need for informed consent and classified this study as exempt (Protocol: 2013P000978). The participants included 124 MGH anesthesia residency graduates from 2009 to 2013. One hundred eleven of them had their first-time ABA written and oral examination scores available and were included in the analysis.
Clinical Performance Determination
Resident clinical performance scores were determined as previously described.12 In brief, faculty members assign clinical performance scores to the residents they supervise. Each resident performance score was standardized to the unique scoring attributes of the faculty member providing the evaluation, because our prior work showed that some faculty members are habitually more or less lenient than others (i.e., they assign characteristically higher or lower scores) and that each faculty member uses a different portion of the available score range. We therefore standardized each assigned clinical performance score by subtracting the faculty member's mean assigned score from the assigned score and then dividing this difference by the standard deviation of the scores assigned by that same faculty member. This ensures that each faculty member's standardized scores have a mean of 0.0 and a standard deviation of 1.0. We term these standardized scores "Z-scores." Z-scores measure the distance from the grader's mean score in standard deviation units. For example, a Z-score of −0.5 means that the faculty member scored the resident half a standard deviation below where they normally score residents of the same CA-year in our residency program. Z-scores are essentially effect sizes because they are score differences standardized to the standard deviation of that faculty member's scores. Faculty members are assigned to evaluate each resident they worked with during the preceding week; thus, individual Z-scores are based on a faculty-resident interaction covering a 1-week period.12
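The per-rater standardization described above can be sketched as follows. This is an illustrative Python sketch, not the study's actual code (the study used SAS and R); the raw scores and identifiers are hypothetical, and only the grouping-by-rater logic matters.

```python
# Sketch of per-faculty-member Z-score standardization.
# Scores and IDs below are hypothetical, for illustration only.
import statistics

# (faculty_id, resident_id, raw_score)
evaluations = [
    ("fac_A", "res_1", 7), ("fac_A", "res_2", 9), ("fac_A", "res_3", 8),
    ("fac_B", "res_1", 4), ("fac_B", "res_2", 6), ("fac_B", "res_3", 5),
]

# Group raw scores by the faculty member who assigned them
by_faculty = {}
for fac, res, score in evaluations:
    by_faculty.setdefault(fac, []).append(score)

# Z-score = (assigned score - that faculty member's mean score)
#           / that faculty member's standard deviation
z_scores = []
for fac, res, score in evaluations:
    mu = statistics.mean(by_faculty[fac])
    sd = statistics.stdev(by_faculty[fac])
    z_scores.append((fac, res, (score - mu) / sd))
```

Note how faculty member B's habitually lower raw scores no longer disadvantage the residents they rate: each rater's Z-scores are centered on 0 with unit spread, so scores from lenient and strict raters become comparable.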
This study used the average clinical performance scores (Z-scores) of anesthesia residents during their final year of anesthesia residency.12 Average Z-scores are stable when measured over a 1-year period, can increase when an educational intervention is made, are related to medical knowledge as determined by ABA In-Training Examination scores, increase as faculty confidence in allowing residents to undertake independent and unsupervised care of increasingly complex patients grows, and identify poor performance due to a wide variety of causes; below-average Z-scores are also associated with strongly increased odds of independent referral to the Clinical Competency Committee for performance concerns.12
Resident Clinical Performance Scores
Our clinical performance scores (Zrel) were determined using our “relative to peers” scale, which contains assessment components for each of the 6 ACGME core competencies. Clinical performance was determined during each resident’s final year of residency (Postgraduate Year 4, Clinical Anesthesia Year 3 [CA-3]). We determined the mean Zrel score for each resident during their CA-3 year using all individual Zrel scores for that resident. We initiated the resident evaluation system in July 2008 and were able to determine Zrel scores for all 124 graduates from 2009 to 2013. We chose Zrel as the overall clinical performance metric because it is composed of all core competency subscores. A mean Zrel score of 0 indicates that a resident is average among their graduating cohort of MGH anesthesia residents. Residents received an average of 87 evaluations with sufficient data to determine Zrel scores during their final year of residency (median = 87, minimum = 36, maximum = 142).
Zrel Score Factor Analysis
Our Zrel scores are the mean of 7 core competency subscores which follow the 6 core competencies of the ACGME. We split the core competency of patient care into 2 components, cognitive and technical, giving a total of 7 components in our Zrel scores. We determined the number of factors represented by these 7 subscores that make up the Zrel metric. We performed factor analyses on all evaluations in our database from 2008 to 2012 containing all 7 subscores (n = 29,295). We used the FACTOR procedure from SAS version 9.3 (SAS, Cary, NC). Because residents are often evaluated >1 time by the same faculty member, we analyzed the data using 3 different data sets to minimize and assess for the effects of repeated measures. Our first data set contained only evaluations from the first resident-faculty member pairing (n = 12,536). Our second data set contained only evaluations from the second resident-faculty member pairing (n = 7330). Our third data set contained all evaluations from the third or more resident-faculty member pairings (n = 9429).
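The eigenvalue logic behind this factor analysis can be sketched in miniature. This is an illustrative Python sketch (the study itself used SAS PROC FACTOR), and the data are simulated under the assumption of a single latent trait driving all 7 subscores, which is what a dominant first eigenvalue signals.

```python
# Minimal sketch: eigenvalues of the subscore correlation matrix
# indicate how many factors underlie the 7 components.
# Data are simulated, not the study's: one latent trait plus noise.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
latent = rng.normal(size=n)                      # single underlying trait
subscores = np.column_stack(
    [latent + 0.4 * rng.normal(size=n) for _ in range(7)]
)

corr = np.corrcoef(subscores, rowvar=False)      # 7 x 7 correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Proportion of variance explained by the first (largest) factor;
# eigenvalues of a correlation matrix sum to the number of variables (7)
explained = eigvals[0] / eigvals.sum()
```

When the subscores share one underlying construct, as here, the first eigenvalue dwarfs the rest, mirroring the 5.8-versus-0.36 pattern reported below.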
ABA Board Certification Examination Scores: ZPart 1 and ZPart 2 Scores
To minimize the effect of repeated attempts on ABA certification examination, only first-time written (Part 1) and oral (Part 2) examination scores of the graduates were used in the analysis. Every 5 years, the ABA conducts standard setting studies to establish the cutoff score for its certification examinations. The cutoff score translates to the minimally acceptable level of proficiency in professional knowledge and skill that a board-certified anesthesiologist should possess. In the years between the standard setting studies, different tests are equated to the base test on which the standard setting study was conducted. This equating process13 allows candidates’ abilities to be estimated on the same scale even though they take different tests. The ABA uses American medical school graduates taking the examination for the first time under standard testing conditions as the calibration group to make the equating process as equitable as possible. This psychometric procedure makes the examination scores comparable across the years. For the oral (Part 2) examinations, each candidate is given two 35-minute sessions and is rated independently by 4 oral board examiners across specific testing domains (preoperative evaluation, intraoperative management, postoperative evaluation, and additional topics).a Examiners’ severity is accounted for in the equating process in addition to the task difficulty levels. Although examinees are given only a final “Pass” or “Fail” result on the oral (Part 2) examination, internally each candidate receives a numeric score that is based on all the ratings assigned by the 4 oral examiners; the numeric scores were standardized for the purpose of the current project. Written examination scores were standardized (ZPart 1) by subtracting the mean score of the calibration group from the resident’s score and then dividing that difference by the standard deviation of the calibration group’s scores. 
The same procedure was followed to standardize the oral examination scores (ZPart 2). ZPart 1 and ZPart 2 indicate how a graduate compares with the national ABA calibration group for the written examination and the oral examination, respectively. A ZPart 1 or a ZPart 2 score of 0 means that an individual’s score was average compared with the national cohort. A ZPart 1 or a ZPart 2 score of 1 means that an individual’s score was 1 standard deviation above the mean compared with the national cohort.
Mean clinical performance scores (Zrel), mean ABA written examination scores (ZPart 1), and mean ABA oral examination scores (ZPart 2) were each compared with the mean (0.0) using a 1-sample t test. To assess the relationship between clinical performance and ABA examination scores, first-order bivariate Pearson correlations (r) between Zrel, ZPart 1, and ZPart 2 were calculated. We used the bootstrap sampling method to determine the 95% confidence intervals (CIs) for each correlation (n = 10,000 replications). We also calculated disattenuated correlation coefficients among these 3 variables. We used the average reliability of 0.90 for Part 1 and 0.85 for Part 2 examinations. We used the test-retest correlation of 0.48 from published work on Zrel scores12 as our reliability estimate for Zrel. In brief, we used the correlation coefficient that resulted when we regressed the first Zrel score assigned by a faculty member to a resident against the second Zrel score assigned at a later date by this same faculty member to this same resident. The correlation was based on 3509 unique faculty member-resident pairings.12 CIs for this correlation coefficient were determined using the Fisher z transform. The point estimate of 0.48 has a 95% CI of 0.45 to 0.51. We calculated disattenuated correlation coefficients as r12, disattenuated = r12/√(r11 × r22), where r11 and r22 are the reliability estimates of the 2 measures being correlated.
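Two of the procedures above, the percentile bootstrap CI and the disattenuation correction, can be sketched as follows. This is an illustrative Python sketch, not the authors' code; the data are simulated with a built-in correlation near the study's r = 0.33, and the reliabilities (0.48 for Zrel, 0.85 for Part 2) are the values quoted in the text.

```python
# Sketch: percentile-bootstrap CI for a Pearson correlation, and
# disattenuation for measurement unreliability. Simulated data.
import numpy as np

rng = np.random.default_rng(1)
n = 111
x = rng.normal(size=n)
y = 0.33 * x + rng.normal(size=n)               # built-in correlation

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

# Percentile bootstrap 95% CI (10,000 resamples with replacement)
boots = []
for _ in range(10_000):
    idx = rng.integers(0, n, size=n)
    boots.append(pearson(x[idx], y[idx]))
ci_low, ci_high = np.percentile(boots, [2.5, 97.5])

# Disattenuation: r12 / sqrt(r11 * r22), using the reliabilities
# quoted in the text (0.48 for Zrel, 0.85 for the Part 2 examination)
r12 = 0.33
r_disattenuated = r12 / np.sqrt(0.48 * 0.85)    # about 0.52
```

Plugging the study's observed r = 0.33 into the correction reproduces the disattenuated coefficient of 0.52 reported in the results.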
When clinical performance (Zrel) serves as the predictor variable and the written or oral ABA examination score (ZPart 1 or ZPart 2) serves as the outcome variable, the variance explained in the outcome variable by the predictor variable is the square of the Pearson correlation (r2) between the 2 quantitative variables. Multiple regression was conducted to determine the best combination of predictor variables when >1 predictor variable was included in the model. A t test was used to test whether the regression coefficient for a predictor variable was statistically different from zero. Variance explained by the predictor variables and root mean squared error (RMSE) were reported as goodness-of-fit measures. We did not account for reliability in our multivariable analysis because there is no agreement on how to correct for reliability in such modeling; thus, our multivariable analysis results are likely conservative.b We determined the 95% CIs for variance explained by the predictor variable and RMSE using 10,000 bootstrap samples.
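The incremental-variance logic of this multiple regression (fit the outcome on one predictor, then on both, and compare R2) can be sketched as follows. This is an illustrative Python sketch with simulated data, not the study's model; the coefficients below are arbitrary and chosen only to give two mildly related predictors.

```python
# Sketch: incremental variance explained when a second predictor is
# added to a regression. Simulated data; least squares via numpy.
import numpy as np

rng = np.random.default_rng(2)
n = 111
z_part1 = rng.normal(size=n)                    # "written exam" stand-in
z_rel = 0.3 * z_part1 + rng.normal(size=n)      # correlated "clinical" score
z_part2 = 0.4 * z_part1 + 0.2 * z_rel + rng.normal(size=n)  # "oral exam"

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_part1 = r_squared(z_part1, z_part2)                        # one predictor
r2_both = r_squared(np.column_stack([z_part1, z_rel]), z_part2)
incremental = r2_both - r2_part1    # variance added by the second predictor
```

Because the two models are nested, R2 for the two-predictor model can never be lower than for the one-predictor model; the question the study asks is whether the increment (4.5% in the results below) is statistically and practically meaningful.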
Factor analysis was done to better understand the data structure of the 7 Zrel subscores. All analyses were done using R 3.2.3 (R foundation for Statistical Computing, Vienna, Austria), Excel 2003 (Microsoft, Redmond, WA), or StatsDirect version 2.7.9 (StatsDirect Ltd., Altrincham, UK) unless noted otherwise. A P value <0.05 was considered statistically significant.
The MGH anesthesia residency program graduated 124 residents over the 5 years from 2009 to 2013. All 124 residents had clinical performance scores (Zrel). As of May 1, 2014, 123 graduates had taken the ABA written examination (Part 1) and 111 had also taken the ABA oral examination (Part 2). Resident ABA written (Part 1) and oral (Part 2) examination scores were converted to Z-scores (ZPart 1 and ZPart 2) to standardize performance to the national average for each examination. Our analysis is based on the 111 graduates who had Zrel, ZPart 1, and ZPart 2 scores. This cohort of graduates had a mean clinical performance score (Zrel) of 0.02 (SD = 0.38; P = 0.5851), a mean ABA written examination score (ZPart 1) of 0.48 (SD = 1.09; P < 0.0001), and a mean ABA oral examination score (ZPart 2) of 0.43 (SD = 1.06; P < 0.0001; Fig. 1A). This indicates that graduates in this study had average clinical performance scores for our residency, and they scored above the national average on both the ABA written and oral examinations. A histogram showing all 7277 non-MGH first-time oral examination scores from examinations conducted during the last 6 years is shown in Figure 1B. A histogram of the oral examination scores from the 111 graduates in this study is shown for comparison (Fig. 1B). The distribution of oral examination scores for all non-MGH graduates had a standard deviation of 0.98, which was similar to the standard deviation of 1.06 found for the scores of the MGH graduates in this study (F = 1.18; P = 0.0964). However, the mean score of the MGH graduates (mean = 0.43) was higher than the mean for the national cohort (mean = 0.00; d = 0.43; P < 0.0001).
Clinical Performance Scores (Zrel) Are Correlated with ABA Oral Examination Scores (ZPart 2)
Mean resident clinical performance scores (Zrel) from the final year of residency were correlated with first-time ABA oral examination scores (ZPart 2) (r = 0.33; 95% CI, r = 0.16–0.48; r2 = 0.11; n = 111; P = 0.0005; Fig. 2A). The disattenuated correlation coefficient, which corrects for unreliability of measurements, was 0.52. This indicates that clinical performance scores (Zrel) are independently associated with ABA oral examination scores (ZPart 2) when used as the sole predictor (Table 1). The Zrel scores used in this study were normally distributed (Shapiro-Wilk test; W = 0.9775; n = 111; P = 0.0575).
Clinical Performance Scores (Zrel) Are Correlated with ABA Written Examination Scores (ZPart 1)
Mean resident clinical performance scores (Zrel) from the final year of residency were correlated with first-time ABA written examination scores (ZPart 1) (r = 0.27; 95% CI, r = 0.10–0.42; r2 = 0.07; n = 111; P = 0.0047; Fig. 2B). The disattenuated correlation coefficient, which corrects for unreliability of measurements, was 0.41. This indicates that clinical performance scores (Zrel) are independently associated with ABA written examination scores (ZPart 1) when used as the sole predictor (Table 1). The ZPart 1 scores used in this study were normally distributed (Shapiro-Wilk test; W = 0.9918; n = 111; P = 0.7514).
ABA Written Examination Scores (ZPart 1) Are Correlated with ABA Oral Examination Scores (ZPart 2)
Our residents’ ABA written examination scores (ZPart 1) were correlated with first-time ABA oral examination scores (ZPart 2) (r = 0.46; 95% CI, r = 0.28–0.61; r2 = 0.21; n = 111; P < 0.0001; Fig. 2C). The disattenuated correlation coefficient, which corrects for unreliability of measurements, was 0.53. This indicates that ABA written examination scores (ZPart 1) are independently associated with ABA oral examination scores (ZPart 2) when used as the sole predictor (Table 1). The national cohort of 7277 examinees also had ABA written examination scores (ZPart 1) that were correlated with their first-time ABA oral examination scores (ZPart 2) (r = 0.38; 95% CI, r = 0.36–0.40; r2 = 0.14; n = 7277; P < 0.0001). The correlation of the MGH graduates’ written examination scores with their oral examination scores (r = 0.46) was not statistically different from that for the national cohort (r = 0.38) (Fisher Z = 1.00; P = 0.3173).
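The Fisher z test used above to compare the MGH correlation (r = 0.46, n = 111) with the national cohort's (r = 0.38, n = 7277) can be sketched as follows. This is an illustrative Python sketch, not the authors' code, but plugging in the study's numbers reproduces the reported test statistic.

```python
# Sketch: comparing two independent Pearson correlations with the
# Fisher z transform (atanh). Inputs are the values reported above.
import math

def fisher_z_compare(r1, n1, r2, n2):
    z1, z2 = math.atanh(r1), math.atanh(r2)      # Fisher z transforms
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # SE of the difference
    return (z1 - z2) / se                        # approx. standard normal

z_stat = fisher_z_compare(0.46, 111, 0.38, 7277)  # about 1.00
```

A z statistic of about 1.00 corresponds to a two-sided P of about 0.32, matching the conclusion that the two correlations are not statistically different.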
Clinical Performance Scores (Zrel) and Written Examination Scores (ZPart 1) Are Independently Associated with Oral Examination Scores (ZPart 2)
Using multiple linear regression, we found that clinical performance scores (Zrel) and ABA written examination scores (ZPart 1) each independently contribute to ABA oral examination scores (ZPart 2). The results, based on 111 residents, show that ABA written examination scores (ZPart 1) explained 20.8% (95% CI, 8.0%–37.2%) of the variance in ABA oral examination scores (ZPart 2) (P < 0.0001), and clinical performance scores (Zrel) explained an additional 4.5% (95% CI, 0.5%–12.4%) of the variance of ABA oral examination scores (ZPart 2) (P = 0.012). The ZPart 2 scores were not normally distributed (Shapiro-Wilk test; W = 0.9763; n = 111; P = 0.0450). Thus, we ran a number of checks on our data to ensure that the assumptions underlying multiple regression were met. In particular, when ZPart 2 was regressed on Zrel and ZPart 1, histograms of the standardized residuals were normally distributed. In addition, the normal probability plot of the observed versus predicted values was linear; the points were symmetrically distributed along the diagonal line, consistent with a linear relationship between the dependent (ZPart 2) and independent variables. Last, we found no relationship between the variance of the residuals and the predicted values of ZPart 2, which effectively rules out the problem of heteroscedasticity. Thus, ABA written examination scores (ZPart 1) and clinical performance scores (Zrel) together explain 25.3% of the variance in ABA oral examination scores (ZPart 2). This corresponds to a combined correlation coefficient of 0.50. The RMSE was 0.92 with 95% CI of 0.79 to 1.03. The regression equation takes the form ZPart 2 = β0 + β1(ZPart 1) + β2(Zrel), with coefficients estimated from these data.
Figure 3A uses standardized regression coefficients to demonstrate how ABA oral examination scores (ZPart 2) are influenced by ABA written examination scores (ZPart 1) when clinical performance is average (Zrel = 0). Figure 3B uses standardized regression coefficients to show how ABA oral examination scores (ZPart 2) are influenced by clinical performance scores (Zrel) when ABA written examination scores are average (ZPart 1 = 0). Figure 3C uses standardized regression coefficients to show how ABA oral examination scores (ZPart 2) are influenced when both clinical performance scores (Zrel) and ABA written examination scores (ZPart 1) are allowed to vary independently.
Clinical Performance Scores (Zrel) Contain a Single Factor
Our clinical performance scores (Zrel) are composed of 7 core competency subscores. We initially hypothesized that certain of these core competencies might be more related to the ABA written and oral examination scores. We performed factor analysis to determine how many factors were present among the 7 components. When we used all evaluations having all 7 subscores (n = 29,295), factor analysis revealed a single factor with an eigenvalue of 5.8, which explained 82.9% of the variance in Zrel scores. The next largest eigenvalue was 0.36. Because the same faculty member may evaluate the same resident many times, we performed factor analysis using 3 different data sets to assess for and minimize the effects of repeated measures. The first data set contained all first-time faculty-resident pairings (n = 12,526), the second data set contained all second-time faculty-resident pairings (n = 7330), and the third data set contained all third-time and any additional faculty-resident pairings (n = 9429). Factor analysis on each of these data sets revealed a single factor with eigenvalues of 5.8, 5.8, and 5.8, respectively. These single factors explained 82.3%, 82.9%, and 82.2% of the variance in Zrel scores in each data set, respectively. The next largest eigenvalues for each data set were 0.38, 0.35, and 0.35, respectively. Thus, Zrel acts as a single statement of clinical performance, and our hypothesis that certain core competencies would be better related to ABA written or oral examination scores was not supported. This conclusion results from each of the 7 core competency components being highly related to one another.
Clinical Performance Scores Are Independently Associated with ABA Oral Examination Scores
This study found that residents with higher clinical performance scores were more likely to achieve higher ABA oral examination scores. The ABA’s scheduling procedure ensures that oral examiners have no prior knowledge of the candidates they examine, including which institutions they come from. Thus, the ABA oral examination process is entirely independent from the MGH clinical performance scoring process. This finding is important because it provides evidence that the ABA oral examination is related to clinical performance even after taking into account written examination scores. Clinical performance scores had an independent, albeit small, relationship with oral examination scores. This implies that the oral examination is measuring at least some aspect of clinical performance, which is consistent with the purpose of the ABA oral examination.
The scores from all first-time oral examination test takers form a continuous and broad distribution as do the scores from the graduates in this study (Fig. 1B). When used as the sole predictor of ABA oral examination performance, our clinical performance scores explained 11% (95% CI, 3%–23%) of the variance in the ABA oral examination scores (Fig. 2A). This 11% of explained variance is based on the correlation between Zrel and ZPart 2. Our graduates’ overall ABA board certification rate was 96% within 5 years of graduation from residency. Thus, we did not study clinical performance and ABA board certification in a dichotomous fashion due to the very small cohort of uncertified graduates.
Medical Knowledge and Clinical Performance Scores Independently Contribute to ABA Oral Examination Scores
All candidates who present for the oral examination have already passed the ABA written examination. The oral examination is designed to test judgment in clinical decision making, management of complications, application of scientific principles to clinical problems, adaptability to changing clinical conditions, and logical organization and clear presentation of information.4 It is not designed to be an examination of medical knowledge. Despite this design, we found that ABA written examination scores were related to ABA oral examination scores. When used as the sole predictor, ABA written examination scores explained 21% of the variance in the ABA oral examination scores. The finding that ABA written examination scores are correlated with ABA oral examination scores has been published previously.6,14 A 1971 study14 reported a correlation of 0.40 between raw ABA written examination scores and oral examination scores. This means that written examination scores explained about 16% of the variance in the oral examination scores, which is close to the 21% that we found for MGH graduates and very close to the 14% that we found for the current national cohort.
Our multiple regression analysis found that ABA written examination scores and clinical performance scores are each independently associated with ABA oral examination scores. Our analysis showed that written examination scores explained 20.8% (95% CI, 8.0%–37.2%) of the variance, and clinical performance scores explained an additional 4.5% (95% CI, 0.5%–12.4%) of the variance in oral examination scores. Together, they explained about 25.3% of the variance in the ABA oral examination scores. Thus, clinical performance scores measure something more than medical knowledge. The increase from 20.8% (variance associated with written examination scores alone) to 25.3% (total variance associated with a combination of written examination scores and clinical performance scores) represents a 22% (95% CI, 2%–60%) increase in the variance associated with written examination scores alone. Although our 25.3% leaves much unexplained variance in oral examination scores, this can be contextualized by noting that general mental ability explains about 33.6% of the variance in job performance of professionals15 and general intelligence (g) explains about 15.2% of the variance in academic performance of college students.16 Thus, even strong predictors of performance (general mental ability and g) are only able to predict a fraction of the variance in the outcomes of interest.
Clinical Performance Scores (Zrel) Have a Single Factor
Our clinical performance scores (Zrel) are composed of 7 components based on ACGME core competencies.12 Our factor analysis found a single dominant factor, which explained 82.9% of the variance in the composite Zrel score. This indicates that our faculty members capture a single dominant factor when they determine clinical performance and do not routinely distinguish among the 6 ACGME core competencies when they evaluate resident clinical performance. Thus, we reject our hypothesis that certain core competencies are better predictors of oral examination scores. This may be because our faculty members assess the core competencies as a single construct. A prior review examined the broad question of whether current measurement tools are able to assess the different competencies in an independent manner.17 The authors found no evidence that the competencies were independently assessed using the tools available. Our results add additional support to this finding.
This study has a number of limitations. Our findings come from a single institution, and our cohort of residents, although from 5 different graduating classes, was small (111 residents). Thus, our results may not be representative of all graduating residents in the United States. Comparison of all 7277 non-MGH first-time oral examination scores with the 111 MGH residents analyzed in this study (Fig. 1B) demonstrates a broad and continuous distribution of oral examination scores for each of the 2 groups, albeit with the MGH residents scoring higher on average than the national cohort. In addition, we did not have oral examination scores for 13 residents because they either chose to delay taking the oral examination or failed the written examination. This likely made the 111 graduates in the analysis more homogeneous. Thus, the correlation between Zrel and ZPart 2 may be underestimated.
Clinical performance scores and ABA written examination scores each account for a component of the variance in ABA oral examination scores. Our ability to explain 25.3% of the variance in oral examination scores is quite similar in scale to other major predictors of job and academic performance. The large unexplained variance that remains indicates a need for future studies to help understand the origin of this remaining variance. The ABA is adding an Objective Structured Clinical Examination to its certification process in the future. Future studies can test whether this new component adds incremental value in explaining the variance in oral examination scores.
Name: Keith Baker, MD, PhD.
Contribution: This author helped to design the study, collect the data, perform the analysis, and prepare the manuscript.
Attestation: Keith Baker approved the final manuscript, attests to the integrity of the original data and the analysis reported in this manuscript, and is the archival author.
Conflicts of Interest: Keith Baker is an ABA oral examiner.
Name: Huaping Sun, PhD.
Contribution: This author helped to design the study, collect the data, perform the analysis, and prepare the manuscript.
Attestation: Huaping Sun approved the final manuscript and attests to the integrity of the original data and the analysis reported in this manuscript.
Conflicts of Interest: Huaping Sun is the Manager of Psychometrics and Research at the ABA.
Name: Ann Harman, PhD.
Contribution: This author helped to design the study, collect the data, perform the analysis, and prepare the manuscript.
Attestation: Ann Harman approved the final manuscript and attests to the integrity of the original data and the analysis reported in this manuscript.
Conflicts of Interest: Ann Harman is the Chief Assessment Officer at the ABA.
Name: K. Trudy Poon, MS.
Contribution: This author helped to design the study, perform the analysis, and prepare the manuscript.
Attestation: K. Trudy Poon approved the final manuscript.
Conflicts of Interest: None.
Name: James P. Rathmell, MD.
Contribution: This author helped to design the study and prepare the manuscript.
Attestation: James P. Rathmell approved the final manuscript.
Conflicts of Interest: James P. Rathmell is an ABA oral examiner and ABA Director.
This manuscript was handled by: Franklin Dexter, MD, PhD.
The authors thank the MGH faculty members who took time to evaluate MGH residents and the ABA oral examiners who later evaluated these same residents.
1. Brennan TA, Horwitz RI, Duffy FD, Cassel CK, Goode LD, Lipner RS. The role of physician specialty board certification status in the quality movement. JAMA 2004;292:1038-43.
4. Booklet of Information: Certification and Maintenance of Certification. Raleigh, NC: The American Board of Anesthesiology, Inc.; 2013.
5. Harman A, Lien CA. The process of board certification. In: Frost EAM, ed. Comprehensive Guide to Education in Anesthesia. New York: Springer; 2014:99-116.
6. Carter HD. How reliable are good oral examinations? California J Educ Res 1962;13:147-53.
7. McClintock JC, Gravlee GP. Predicting success on the certification examinations of the American Board of Anesthesiology. Anesthesiology 2010;112:212-9.
8. Kearney RA, Sullivan P, Skakun E. Performance on ABA-ASA in-training examination predicts success for RCPSC certification. American Board of Anesthesiology-American Society of Anesthesiologists. Royal College of Physicians and Surgeons of Canada. Can J Anaesth 2000;47:914-8.
9. Slogoff S, Hughes FP, Hug CC Jr, Longnecker DE, Saidman LJ. A demonstration of validity for certification by the American Board of Anesthesiology. Acad Med 1994;69:740-6.
10. Norcini JJ, Grosso LJ, Shea JA, Webster GD. The relationship between features of residency training and ABIM certifying examination performance. J Gen Intern Med 1987;2:330-6.
11. Swing SR. The ACGME outcome project: retrospective and prospective. Med Teach 2007;29:648-54.
12. Baker K. Determining resident clinical performance: getting beyond the noise. Anesthesiology 2011;115:862-78.
13. Kolen MJ, Brennan RL. Test Equating, Scaling, and Linking: Methods and Practices. 3rd ed. New York: Springer; 2014.
14. Kelley PR Jr, Matthews JH, Schumacher CF. Analysis of the oral examination of the American Board of Anesthesiology. J Med Educ 1971;46:982-8.
15. Schmidt FL, Hunter J. General mental ability in the world of work: occupational attainment and job performance. J Pers Soc Psychol 2004;86:162-73.
16. von Stumm S, Hell B, Chamorro-Premuzic T. The hungry mind: intellectual curiosity is the third pillar of academic performance. Perspect Psychol Sci 2011;6:574-88.
17. Lurie SJ, Mooney CJ, Lyness JM. Measurement of the general competencies of the Accreditation Council for Graduate Medical Education: a systematic review. Acad Med 2009;84:301-9.
Copyright © 2016 International Anesthesia Research Society