Predictors of Physician Performance on Competence Assessment: Findings From CPEP, the Center for Personalized Education for Physicians

Grace, Elizabeth S. MD; Wenghofer, Elizabeth F. PhD; Korinek, Elizabeth J. MPH

doi: 10.1097/ACM.0000000000000248
Research Reports

Purpose To identify factors associated with physician performance in a comprehensive competence assessment.

Method The authors conducted a retrospective analysis of 683 physicians referred for assessment at the Center for Personalized Education for Physicians from 2000 to 2010, who were evaluated as either safe or unsafe to return to practice. Multivariate logistic regression was used to determine factors predictive of unsafe assessment outcome. Covariates included personal characteristics (e.g., age), practice context (e.g., solo practice), and referral information (e.g., previous board license action).

Results Older physicians were more likely to have unsafe assessment outcomes (odds ratio [OR] = 1.07; P < .001). Board-certified individuals were less likely to have poor assessment outcomes (OR = 0.40; P = .003) than uncertified individuals. Physicians in solo practice were more likely (OR = 2.15; P = .037) to be deemed unsafe than physicians in other settings. Physicians with a practice scope that matched their training were less likely (OR = 0.29; P = .023) to have unsafe assessment outcomes than those whose practice did not. Physicians with current or previous board action (suspension, revocation, limitation, or stipulation) were more likely to be deemed unsafe (OR = 2.47; P = .003) than those without.

Conclusions Findings suggest that important predictors of physician performance on competence assessment include personal characteristics, practice context, and reasons for assessment referral. These findings have implications for the development of policies and programs designed to assess the risk of poor physician performance and for quality-of-care improvement efforts through organizational/practice design or remedial education.

Dr. Grace is medical director, Center for Personalized Education for Physicians (CPEP), Denver, Colorado.

Dr. Wenghofer is associate professor, School of Rural and Northern Health, Laurentian University, Sudbury, Ontario, Canada, and associate professor, Northern Ontario School of Medicine, Sudbury, Ontario, Canada.

Ms. Korinek is chief executive officer, Center for Personalized Education for Physicians (CPEP), Denver, Colorado.

Funding/Support: None reported.

Other disclosures: None reported.

Ethical approval: Ethical approval was provided by the Laurentian University research ethics board (certificate no. 2012-04-16).

Correspondence should be addressed to Dr. Grace, Center for Personalized Education for Physicians, 7351 Lowry Blvd., Suite 100, Denver, CO 80230; telephone: (303) 577-3232; e-mail: esgrace@cpepdoc.org.

Ensuring that physicians are practicing safely and effectively is a critical role of state licensing agencies, hospitals, and other credentialing entities. The percentage of physicians who are dyscompetent (failing to maintain acceptable practice standards) or incompetent (lacking the abilities and qualities to perform effectively)1 is not entirely clear. Some estimates indicate that 6% to 12% of physicians in the United States are dyscompetent.2 Data available from randomly selected physicians in Ontario reveal that approximately 15% of family physicians (FPs)3,4 and 3% of specialists3 were found to be practicing with considerable deficiencies. Keeping physicians practicing safely from graduation to retirement benefits both patients and the profession. Competence assessment and remedial education programs play an important role in assisting in the identification and remediation of knowledge and clinical skill deficiencies. Furthermore, assessment programs also help organizations identify physicians who are not safe to be in independent practice and for whom remediation would be very difficult outside of a fully supervised residency setting.

The Center for Personalized Education for Physicians (CPEP) offers programs to assess physician competence and provide structured remedial education. CPEP is an independent, not-for-profit organization founded in 1990. More than 1,200 physicians have undergone a CPEP competence assessment. Previous publications have described the development of the CPEP program5 and various characteristics of its participant physicians.6–8 Although other published data about physicians who present for competence assessment in the United States and Canada are limited, this body of work is growing.3,6,9–12

In this study, we examine CPEP program data to identify predictors of performance on the CPEP competence assessment. More specifically, we seek to identify factors associated with an assessment outcome in which the physician is deemed unsafe to practice. Our study thus builds on previous work and, given the large size of the CPEP assessment database, provides a more robust analysis than was previously possible.

Method

Conceptual framework

Understanding the factors and circumstances that support the continued competence and performance of physicians throughout their careers is essential to maintain safe patient care and protect the public. In previous studies, researchers have recognized that physician performance in practice is affected by the personal characteristics of the physician (e.g., age, gender, education, specialty board certification, health) and their practice context (e.g., practice structure, community location, remuneration).4,13–19 On the basis of this conceptual understanding, we have examined variables representative of both personal and practice context characteristics for each of the study physicians. We define personal characteristics as those attributes of the physicians that would be consistent regardless of the practice context, whereas practice context characteristics refer to the features of the practice environment and community in which the physician practices.3

Study population

The study population was 683 physicians (MDs and DOs) who completed a competence assessment at CPEP between 2000 and 2010. We excluded physicians who presented for a needs assessment before returning to practice after a voluntary leave and who were under no licensing or privileging sanctions.

CPEP assessment

Physicians are referred to the CPEP programs primarily by state licensing boards and hospitals, or they are self-referred. CPEP receives an average of approximately 100 referrals annually. Detailed descriptions of the CPEP competence assessment have been published previously6–8 and can also be found on the CPEP Web site.20 We note here only that the assessment is an in-depth evaluation tailored to the physician’s practice area and specialty. A personalized intake process involving interviews and written documentation allows CPEP to gather important personal and practice information from the physician and from the referring agency. A variety of assessment modalities are employed to ensure that all aspects of competence can be evaluated. These include a review of the practice profile, education, postgraduate (PG) training, and professional development activities; chart reviews (when charts from the physician’s practice are available); structured clinical interviews; simulated patient encounters, including an evaluation of communication skills and a documentation exercise; a cognitive function screen; written tests (depending on the specialty); and a review of the participant’s health information.

Assessment outcome

At the completion of the assessment, all data are reviewed, and CPEP staff assign an overall rating of the participant’s performance on the assessment. Rating is on a 4-point scale, with a rating of 1 indicating minimal or no educational needs and a rating of 4 indicating significant educational needs that are likely to require remediation in a residency or residency-like setting. Physicians rated 2 and 3 demonstrate a range of moderate to extensive educational needs such that supervision and oversight during remediation is recommended. Factors considered in determining the performance ratings include the extent and characteristics of educational needs identified and the level of supervision required to ensure patient safety while the physician addresses the educational needs. Two CPEP physician reviewers and the chief executive officer (E.J.K.) each review the data for each participant and reach a consensus agreement regarding the factors and level of educational needs. Remedial education is tailored for the individual physician and might include focused study, course work, workshops, preceptorships, and, in some cases, direct observation and supervision.

For the purposes of our investigation, we dichotomized the assessment outcomes into two rating categories: safe to return to practice (with or without the need for educational remediation; performance ratings 1–3) and unsafe to practice (performance rating 4), which included physicians who demonstrated significant educational needs that are likely to require remediation in a residency or other setting of similar rigor.
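For illustration only, this dichotomization amounts to a simple recoding of the 4-point rating. The sketch below assumes a hypothetical data set with a column named performance_rating; it is not part of the CPEP data or analysis code.

```python
import pandas as pd

# Hypothetical records; the column name "performance_rating" is illustrative only.
assessments = pd.DataFrame({"performance_rating": [1, 2, 3, 4, 2, 4]})

# Ratings 1-3: safe to return to practice (with or without remediation);
# rating 4: unsafe, with needs likely requiring residency-level remediation.
assessments["unsafe"] = (assessments["performance_rating"] == 4).astype(int)

print(assessments["unsafe"].value_counts())
```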

Potential predictors

As mentioned above, we included both personal physician characteristics and practice context details as potential predictors in our analyses. In our study, personal characteristics were age, gender, years in practice, medical school location (Liaison Committee on Medical Education [LCME]-accredited institution versus international medical school), degree type (MD versus DO), board certification status, years of PG training, and area of specialty training. We did not differentiate between the different board certification paths (e.g., lifetime certification versus participation in a recertification or maintenance of certification process) because these processes have changed over time and were not available or relevant for all physicians included in this study.

The practice context details we examined were solo practice, locale (rural versus other), match between training and practice, and whether the physician was in active practice at the time of assessment.

In addition, we examined whether details associated with the reason for referral would predict physicians’ performance on assessment. These details were current or previous (in prior 10 years) board action (e.g., license suspension or stipulations), Drug Enforcement Administration (DEA) registration certificate status, and referral source. Referral sources included medical board referrals, hospital referrals, self-referred physicians, and other types of referrals. Details of all the variables examined in the study are listed in Table 1.

Table 1

Analysis

We performed descriptive analyses to provide an overview of the study population as well as examine the data for quality, completeness, and potential outliers. Basic cross-tabulations and chi-square analyses were performed to evaluate the data for simple associations between the assessment outcome and categorical predictors. We used t tests to evaluate differences in continuous predictors by assessment outcome.
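As a rough illustration of these univariate checks (with made-up counts and ages, not the study data), the following Python sketch runs a chi-square test on a 2 × 2 cross-tabulation and a two-sample t test using scipy:

```python
import numpy as np
from scipy import stats

# Hypothetical 2 x 2 cross-tabulation of a categorical predictor (rows:
# board certified yes/no) by assessment outcome (columns: safe, unsafe).
# Counts are made up for illustration and are not the study data.
table = np.array([[400, 45],
                  [180, 40]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, P = {p:.4f}")

# Hypothetical age samples by outcome for a two-sample t test.
rng = np.random.default_rng(1)
age_safe = rng.normal(52, 10, 600)
age_unsafe = rng.normal(57, 10, 85)
t, p = stats.ttest_ind(age_safe, age_unsafe)
print(f"t = {t:.3f}, P = {p:.4f}")
```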

We then used multivariate logistic regression models to determine whether differences in personal physician characteristics, practice context, and assessment referral details existed between physicians with “safe” assessment outcomes and those found to be “unsafe.” A backward stepwise (conditional) procedure was used, with P ≤ .05 required for variable entry and P ≥ .10 as the threshold for removal. All analyses were conducted using PASW Statistics GradPack 17.0 (IBM Corp., Armonk, NY).
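The original analyses were run in PASW/SPSS. As a rough sketch of the same modeling step, the Python code below (using pandas and statsmodels, with hypothetical file and column names) fits a binary logistic regression and applies a simplified backward elimination driven only by the removal threshold, rather than SPSS’s full backward conditional algorithm:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_eliminate_logit(df, outcome, predictors, p_remove=0.10):
    """Fit a binary logistic regression and iteratively drop the least
    significant predictor until all remaining p-values fall below p_remove.
    A simplified stand-in for the SPSS backward conditional procedure."""
    current = list(predictors)
    while current:
        X = sm.add_constant(df[current])
        model = sm.Logit(df[outcome], X).fit(disp=0)
        pvalues = model.pvalues.drop("const")
        worst = pvalues.idxmax()
        if pvalues[worst] < p_remove:
            return model          # every remaining predictor meets the criterion
        current.remove(worst)     # drop the weakest predictor and refit
    return None

# Hypothetical usage; the file name and column names echo the study's covariates
# but are not the actual CPEP data set.
# df = pd.read_csv("cpep_assessments.csv")
# result = backward_eliminate_logit(
#     df, outcome="unsafe",
#     predictors=["age", "board_certified", "solo_practice",
#                 "practice_matches_training", "prior_board_action"])
# print(result.summary())
# print(np.exp(result.params))  # coefficients expressed as odds ratios
```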

We obtained ethical approval for this study from the Laurentian University research ethics board.

Results

Study population description

For the 683 physicians for whom we analyzed data, descriptive statistics and simple associations are shown in Table 2. The majority were male (83.9%), board certified (65.3%), and graduates of LCME-accredited medical schools (88.9%). International medical graduates (IMGs) constituted 17.4% of the study population. The mean age was 53.1 years, with a minimum age of 32 years and a maximum of 84. The average time in practice was 18.0 years, and over 86% of the study population had 3 or more years of PG training. FPs and general practitioners (GPs) constituted 29.1% of the total study population. The majority of physicians (61.5%) reported that their practices were in urban areas. More than half (59.0%) of the study physicians were in a solo practice setting, and approximately 3.2% had a practice scope that did not match the areas in which they were trained.

Table 2

Physicians were referred to the CPEP program from various agencies. Over 70% were referred by state licensing boards, with the remaining 29% coming from hospital referrals, self-referrals, and other referral sources. The 70.9% of physicians referred by state licensure boards came from 47 unique boards (including two Canadian provinces), and 14.9% of the board-referred physicians were referred by the Colorado Medical Board. Although CPEP does not track the number of referrals from specific hospitals, it believes that the majority of the 109 hospital referrals came from unique hospitals; a few hospitals have referred two or three physicians for competence assessments, but this is relatively uncommon. Approximately 42% of study physicians had board actions affecting their license at the time of assessment or within the prior 10 years, and 9.5% had a history of DEA prescribing suspensions or revocations. A majority of the study physicians, 64%, reported that they were in active practice at the time of their assessment.

Most physicians’ (87.4%) performance on the CPEP competence assessment fell into the categories considered safe to practice; 12.6% were considered unsafe, with pervasive educational needs significant enough to recommend remediation in a residency setting.

Simple associations between assessment outcome and independent variables

Pearson chi-square analysis indicated significant relationships between assessment outcome and the following physician personal characteristics: medical school location, board certification, years of PG training, and specialty training. More specifically, more IMGs performed poorly on assessment than LCME graduates, and more physicians without board certification performed poorly than those with certification. A higher proportion of individuals with less than 3 years of PG training had unsafe outcomes compared with those with more years of PG training. In addition, a higher proportion of GPs had unsafe assessment outcomes than physicians with FP or other specialty training. Likewise, t tests indicated that physicians who had an unsafe assessment outcome were, on average, 5.1 years older than physicians with safe outcomes (t = −5.008, P < .001). No significant associations were found between gender, years in practice, or degree type (MD versus DO) and assessment outcome.

Significant relationships were also found between assessment outcome and practice context variables. Significantly fewer individuals whose practice matched their training (P = .001) had an unsafe outcome on assessment (11.8%) compared with those who were practicing in an area in which they had not completed formal residency training (36.4%). Proportionately, more individuals in solo practice settings had an unsafe outcome on assessment (P < .001) when compared with those who were in other practice types. In addition, those who were not in active practice at the time of competence assessment were more frequently considered unsafe on assessment (P < .001) when compared with those who were in active practice. No significant association was found between urban or rural practice setting and assessment outcome.

Details of the referral were found to have significant associations with assessment outcome. Individuals with a history of board action or DEA registration revocations (current or within the previous 10 years) were more frequently found to have unsafe outcomes. Of the physicians with current or previous board action affecting license status, 19.4% were found to be unsafe, whereas only 7.6% of individuals without board action were found unsafe (P < .001). Approximately 25% of the individuals with DEA revocations were unsafe (P = .002). Significant differences were also found in assessment outcome by referring agency (P = .001). Proportionally more individuals referred by medical boards had poor outcomes, with 15.7% identified as unsafe to practice. This figure compares to the 2.8%, 1.2%, and 7.0% of assessments that were unsafe when referred by hospitals, self-referral, or other sources, respectively.

Multivariate logistic regression: Predictors of “unsafe” assessment outcome

The multivariate logistic regression model was significant (P < .001) and explained 26.2% of the total variation in assessment outcome (Table 3). Years in practice was excluded from the final regression to avoid collinearity in the model because it was highly correlated with age (Pearson correlation coefficient = 0.7; P < .001). The results of the multivariate analysis differed somewhat from the univariate measures of association: although some personal characteristics, practice context variables, and referral information remained significant in the multivariate model, the set of significant predictors did not exactly mirror the relationships found in the chi-square analyses and t tests.
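As an illustration of this kind of collinearity screen (not the authors’ code or data), the sketch below generates synthetic age and years-in-practice values with a comparable correlation and checks it before model fitting:

```python
import numpy as np
import pandas as pd

# Synthetic data constructed so that years in practice tracks age closely,
# roughly reproducing the strong correlation reported above (r ≈ 0.7).
rng = np.random.default_rng(0)
age = rng.normal(53, 10, 500)
years_in_practice = (age - 30) + rng.normal(0, 10, 500)
df = pd.DataFrame({"age": age, "years_in_practice": years_in_practice})

r = df["age"].corr(df["years_in_practice"])
print(f"Pearson r = {r:.2f}")
# When two candidate predictors are this strongly correlated, keeping both
# inflates standard errors; here only age would be retained in the model.
```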

Table 3

Age, board certification, and training specialty all remained significant predictors in the regression analysis. The regression indicated that the odds of an unsafe outcome increased with each year of age (OR = 1.07; P < .001). Board-certified individuals were less likely to have a poor assessment outcome (OR = 0.40; P = .003) than uncertified individuals. FPs (OR = 0.36; P = .037) and other specialists (OR = 0.33; P = .012) were less likely than GPs to have unsafe assessment outcomes. The univariate analysis indicated significant associations between assessment outcome and both location of undergraduate medical school and years of PG training; however, these associations did not reach significance in the multivariate analysis.
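Because the reported odds ratio applies per year of age, its effect compounds across years; as a back-of-the-envelope reading of the coefficient (not a figure reported in the article), a ten-year age difference corresponds to roughly a doubling of the modeled odds of an unsafe outcome:

```python
# Compounding a per-year odds ratio over a ten-year age difference.
or_per_year = 1.07
print(round(or_per_year ** 10, 2))  # ≈ 1.97, i.e., roughly double the odds
```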

Predictive practice context variables included practice-to-training match and solo practice. Physicians in a solo practice setting were more likely (OR = 2.15; P = .037) to perform poorly on assessment than physicians in other settings. Physicians with a practice scope that matched their training were less likely (OR = 0.29; P = .023) to have unsafe assessment outcomes than those without a practice–training match. In contrast to the chi-square analysis, being in active practice at the time of assessment did not reach significance as a predictor in the multivariate model.

Only one variable concerning referral information was a significant predictor of assessment outcome. Physicians with board action (current or within the past 10 years) were significantly more likely (OR = 2.47; P = .003) than those without previous board action to have an unsafe outcome. In contrast to the univariate analysis, DEA registration revocations and source of referral were not significant predictors in the regression model.

Discussion

The majority of physicians in this study either performed well or demonstrated the potential to remediate. A relatively small percentage (12.6%) of the 683 physicians performed poorly (i.e., were considered unsafe to practice). Studies from another program offering similar comprehensive assessments have reported failure rates of 5.7%9 to 10.6%.12

Our findings suggest that there are several important predictors of physician performance on CPEP competence assessment that include personal characteristics, practice context characteristics, and details regarding the referral to the CPEP assessment. The likelihood of an unsafe assessment outcome increased with increasing age. Age has been well established throughout the literature as a predictor of poor performance across a wide spectrum of assessment and evaluation methodologies.3,4,21–29 Similarly, age has been found to be associated with risk for discipline by state licensing boards.30,31 Our findings further confirm the need to continue to evaluate the impact of physician aging on performance. As fewer physicians are choosing to retire at 65 years of age, the development of strategies to ensure safe and competent practice of older, more experienced physicians is of paramount importance for physicians and patients alike.

Our findings indicated that board certification, training as an FP or other specialist (rather than as a GP), and a match between training specialty and scope of practice were all predictors of a positive outcome on competence assessment. Other studies have demonstrated a link between certification and performance.3,4,22,32–34 It was not surprising to find that current board certification and training specialty were significant predictors of performance, given the current requirements for maintenance of certification, which emphasize activities related to practice improvement, and given that FPs and other specialists receive additional training compared with GPs. In future studies, it may be of value to use multilevel models to examine the effects of specific specialty areas in more depth.

We find it of great interest that physicians who were practicing within the scope of their training were much less likely (OR = 0.29; P = .023) to have an unsafe assessment result than physicians practicing outside their trained scope. So-called practice drift, in which a physician’s scope of practice moves away from his or her area of training, is an area of current interest to licensure agencies, malpractice insurers, and the public. Medical regulators across the United States and Canada have recognized that practicing outside the scope in which a physician is trained and experienced can pose a risk to patients.35–37 In Ontario, the medical regulatory authority has implemented a “Changing Scope of Practice” policy that requires physicians to report significant changes in practice scope; these physicians may be required to participate in a formal training process to ensure they are practice ready.35 Various state licensure boards in the United States have also issued policies and position statements addressing concerns about the competence of physicians practicing outside the scope of their training, particularly in the areas of pain management and cosmetic surgery. Our findings support these requirements imposed by licensure boards on physicians with a scope of practice outside of their training36,37 and provide evidence to support such policies in protecting public safety.

We found that the odds of physicians in solo practice having an unsafe assessment outcome were 2.15 times higher than for physicians working in other practice settings. This finding is supported in the literature, which has identified working in small or solo practice settings as a risk factor for poor performance and quality-of-care concerns.3,10,25,26,38 It is important for education and remediation programs to ensure that participants understand the impact of practice size and organizational structure on their risk of professional isolation and their ability to remain current in their practice skills. New technology, as well as the acquisition of private practices by large health care organizations, may provide opportunities to decrease some aspects of the professional isolation of those in solo practice, but it is unclear whether such opportunities will mitigate whatever specific factors cause this increased risk of poor performance. Given that solo practice was a significant predictor of poor outcome, it was somewhat surprising that rural practitioners did not perform more poorly than nonrural physicians, because rurality is also often associated with professional isolation.39,40 Anecdotally, CPEP has noted that struggling physicians are sometimes ostracized by peers and enter solo practice not by choice but, rather, after a series of unsuccessful professional relationships. Though merely conjecture, such a phenomenon could explain the association of poor outcome with solo practice but not with rural practice. Furthermore, the term “rural” is challenging to define: rural practices were self-identified by the physician participants and are therefore subjective in their interpretation. A more systematic evaluation of rural setting may be beneficial in future studies.

The last significant predictor of assessment outcome we found was having current or previous board action. Not surprisingly, physicians with a previous history of board action were more likely to have an unsafe assessment outcome than those without. This group represents physicians who may be experiencing recurrent practice difficulties and/or those whose reason for referral for a competence assessment was significant enough to warrant action on the physician’s license before the results of the assessment were known.

As with any study, our findings must be examined within the context of the study limitations. The regression model accounted for approximately 26% of the variation in assessment outcome; although this is an excellent value statistically, it still leaves a considerable amount of variation unexplained. Additional data regarding practice environment, patient case mix, continuing professional development, physician health status, or other factors that have been shown to be linked with physician competence may help to improve the predictive power of future analyses. Additionally, many of the data elements we used were based on participants’ self-reported information. Though CPEP staff made every effort to clarify unclear or inconsistent responses by questioning the participant or reviewing relevant documentation, inaccuracies in self-reported data do occur. For example, some physicians may be confused about their own board certification status because they are not always aware that active and unencumbered licensure is typically needed to maintain specialty board certification. Questions about the rural versus urban nature of practice are subjective and open to interpretation, thus potentially affecting the analysis of the impact of this important factor.

Although the quality of care of individual medical providers is being examined more closely in the United States than ever before, with tracking of quality measures and individual physician data, investigations into competence remain largely complaint- and/or incident-driven.41 Identification of risk factors for poor performance on competence assessment could ultimately lead to a system of proactive interventions to enhance the practices of physicians with significant risk factors for competence concerns, or to a system of screening such physicians before a critical incident or patient harm occurs, facilitating earlier intervention.

In conclusion, we believe that our analysis represents the largest and most comprehensive study of its kind. Our findings confirm previous associations of certain physician characteristics and physician performance but also highlight new, previously unrecognized factors, such as practice drift. The findings emphasize the importance of evaluating performance in a broader environmental context rather than as a simple attribute of personal characteristics of the physician,4 which may be of particular relevance for those attempting to develop performance screening mechanisms or risk assessments. Consideration should be given to creating models that conceptualize physician performance as a complex construct resulting from the combined effects of the physician’s health, knowledge, skills, and attributes in situ rather than a simple reflection of credentials and training.

References

1. Federation of State Medical Boards. Essentials of a Modern Medical Practice Act. 11th ed. 2006. http://www.fsmb.org/pdf/GPROL_essentials_eleventh_edition.pdf. Accessed February 13, 2014
2. Williams BW. The prevalence and special educational requirements of dyscompetent physicians. J Contin Educ Health Prof. 2006;26:173–191
3. McAuley RG, Paul WM, Morrison GH, Beckett RF, Goldsmith CH. Five-year results of the peer assessment program of the College of Physicians and Surgeons of Ontario. CMAJ. 1990;143:1193–1199
4. Wenghofer EF, Williams AP, Klass DJ. Factors affecting physician performance: Implications for performance improvement and governance. Healthc Policy. 2009;42:141–160
5. Bunnell KP, Kahn KA, Kasunic LB, Radcliff S. CPEPP: Development of a model for personalized continuing medical education. J Contin Educ Health Prof. 1991;11:19–27
6. Grace ES, Korinek EJ, Tran ZV. Characteristics of physicians referred for a competence assessment: A comparison of state medical board and hospital referred physicians. J Med Regul. 2011;96(3):8–15
7. Korinek LL, Thompson LL, McRae C, Korinek E. Do physicians referred for competency evaluations have underlying cognitive problems? Acad Med. 2009;84:1015–1021
8. Grace ES, Korinek EJ, Weitzel LB, Wentz DK. Physicians reentering clinical practice: Characteristics and clinical abilities. J Contin Educ Health Prof. 2011;31:49–55
9. Norcross WA, Henzel TR, Freeman K, Milner-Mares J, Hawkins RE. Toward meeting the challenge of physician competence assessment: The University of California, San Diego Physician Assessment and Clinical Education (PACE) program. Acad Med. 2009;84:1008–1014
10. St George I, Kaigas T, McAvoy P. Assessing the competence of practicing physicians in New Zealand, Canada, and the United Kingdom: Progress and problems. Fam Med. 2004;36:172–177
11. Lockyer JM, Violato C, Fidler HM. Assessment of radiology physicians by a regulatory authority. Radiology. 2008;247:771–778
12. Cosman BC, Alverson AD, Boal PA, Owens EL, Norcross WA. Assessment and remedial clinical education of surgeons in California. Arch Surg. 2011;146:1411–1415
13. LaDuca A. Validation of professional licensure examinations. Eval Health Prof. 1994;17:178–197
14. Grol R. Changing physicians’ competence and performance: Finding the balance between the individual and the organization. J Contin Educ Health Prof. 2002;22:244–251
15. Rethans JJ, Norcini J, Baron-Maldonado M, et al. The relationship between competence and performance: Implications for assessing practice performance. Med Educ. 2002;36:901–909
16. Hogg W, Rowan M, Russell G, Geneau R, Muldoon L. Framework for primary care organizations: The importance of a structural domain. Int J Qual Health Care. 2008;20:308–313
17. Geneau R, Lehoux P, Pineault R, Lamarche P. Understanding the work of general practitioners: A social science perspective on the context of medical decision making in primary care. BMC Fam Pract. 2008;9:12
18. Russell GM, Dahrouge S, Hogg W, Geneau R, Muldoon L, Tuna M. Managing chronic disease in Ontario primary care: The impact of organizational factors. Ann Fam Med. 2009;7:309–318
19. Dahrouge S, Hogg WE, Russell G, et al. Impact of remuneration and organizational factors on completing preventive manoeuvres in primary care practices. CMAJ. 2012;184:E135–E143
20. Center for Personalized Education for Physicians. http://www.cpepdoc.org. Accessed February 10, 2014
21. Vermeulen MI, Kuyvenhoven MM, Zuithoff NP, van der Graaf Y, Pieters HM. Attrition and poor performance in general practice training: Age, competence and knowledge play a role [in Dutch]. Ned Tijdschr Geneeskd. 2011;155:A2780
22. Norman GR, Wenghofer E, Klass D. Predicting doctor performance outcomes of curriculum interventions: Problem-based learning and continuing competence. Med Educ. 2008;42:794–799
23. Goulet F, Gagnon R, Gingras ME. Influence of remedial professional development programs for poorly performing physicians. J Contin Educ Health Prof. 2007;27:42–48
24. Choudhry NK, Fletcher RH, Soumerai SB. Systematic review: The relationship between clinical experience and quality of health care. Ann Intern Med. 2005;142:260–273
25. Ashworth M, Schofield P, Seed P, Durbaba S, Kordowicz M, Jones R. Identifying poorly performing general practices in England: A longitudinal study using data from the quality and outcomes framework. J Health Serv Res Policy. 2011;16:21–27
26. Lipner R, Song H, Biester T, Rhodes R. Factors that influence general internists’ and surgeons’ performance on maintenance of certification exams. Acad Med. 2011;86:53–58
27. Norman GR, Davis DA, Lamb S, Hanna E, Caulford P, Kaigas T. Competency assessment of primary care physicians as part of a peer review program. JAMA. 1993;270:1046–1051
28. Blasier RB. The problem of the aging surgeon: When surgeon age becomes a surgical risk factor. Clin Orthop Relat Res. 2009;467:402–411
29. Duclos A, Peix JL, Colin C, et al. Influence of experience on performance of individual surgeons in thyroid surgery: Prospective cross sectional multicentre study. BMJ. 2012;344:d8041
30. Kohatsu ND, Gould D, Ross LK, Fox PJ. Characteristics associated with physician discipline: A case–control study. Arch Intern Med. 2004;164:653–658
31. Morrison J, Wickersham P. Physicians disciplined by a state medical board. JAMA. 1998;279:1889–1893
32. Reid RO, Friedberg MW, Adams JL, McGlynn EA, Mehrotra A. Associations between physician characteristics and quality of care. Arch Intern Med. 2010;170:1442–1449
33. Norton PG, Dunn EV, Soberman L. What factors affect quality of care? Using the Peer Assessment Program in Ontario family practices. Can Fam Physician. 1997;43:1739–1744
34. Norton PG, Dunn EV, Soberman L. Family practice in Ontario. How physician demographics affect practice patterns. Can Fam Physician. 1994;40:249–256
35. College of Physicians and Surgeons of Ontario. Changing scope of practice (policy #1-08). http://www.cpso.on.ca/policies/policies/default.aspx?ID=1622. Accessed February 10, 2014
36. Arizona Medical Board. Scope of practice guidelines. http://azmd.gov/Files/Guidelines/ScopeOfPracticeGuidelines.pdf. Accessed February 10, 2014
37. North Carolina Medical Board. Physician scope of practice. http://www.ncmedboard.org/position_statements/detail/physician_scope_of_practice/. Accessed February 10, 2014
38. Shine KI. Health care quality and how to achieve it. Acad Med. 2002;77:91–99
39. Grzybowski S, Kornelsen J, Prinsloo L, Kilpatrick N, Wollard R. Professional isolation in small rural surgical programs: The need for a virtual department of operative care. Can J Rural Med. 2011;16:103–105
40. Richards HM, Farmer J, Selvaraj S. Sustaining the rural primary healthcare workforce: Survey of healthcare professionals in the Scottish Highlands. Rural Remote Health. 2005;5:365
41. Federation of State Medical Boards. The Special Committee on Evaluation of Quality of Care and Maintenance of Competence. Revised policy adopted by House of Delegates, April 1999.
© 2014 by the Association of American Medical Colleges