Quality of Care Provided by Board-Certified Versus Non-Board-Certified Psychiatrists and Neurologists

Wallace, Anna MPH; McFarland, Bentson H. MD, PhD; Selvam, Nandini PhD, MPH; Sahota, Gurvaneet MPH

doi: 10.1097/ACM.0000000000001233
Research Reports

Purpose To examine associations between board certification of psychiatrists and neurologists and quality-of-care measures, using multilevel models controlling for physician and patient characteristics, and to assess the feasibility of linking physician information with patient records to construct quality measures from electronic claims data.

Method The authors identified quality measures and matched claims data from 2006 to 2012 with 942 board-certified (BC) psychiatrists, 868 non-board-certified (nBC) psychiatrists, 963 BC neurologists, and 328 nBC neurologists. Using the matched data, they identified psychiatrists who treated at least one patient with a schizophrenia diagnosis, and neurologists attending patients discharged with a principal diagnosis of ischemic stroke, and analyzed claims from these patients. For patients with schizophrenia who were prescribed an atypical antipsychotic, quality measures were claims for glucose and lipid tests, duration of any antipsychotic treatment, and concurrent prescription of multiple antipsychotics. For patients with ischemic stroke, quality measures were dysphagia evaluation; speech/language evaluation; and prescription of clopidogrel, low-molecular-weight heparin, intravenous heparin, and warfarin (for patients with co-occurring atrial fibrillation).

Results Overall, multilevel models (patients nested within physicians) showed no statistically significant differences in quality measures between BC and nBC psychiatrists and neurologists.

Conclusions The authors demonstrated the feasibility of linking physician information with patient records to construct quality measures from electronic claims data, but there may be only minimal differences in the quality of care between BC and nBC psychiatrists and neurologists, or there may be a difference that could not be measured with the quality measures used.

A. Wallace is associate research director, Government and Academic Research, HealthCore, Inc., Wilmington, Delaware.

B.H. McFarland is professor emeritus of psychiatry, public health, and preventive medicine, Oregon Health & Science University, Portland, Oregon.

N. Selvam is senior director, Government and Academic Research, HealthCore, Inc., Alexandria, Virginia.

G. Sahota is researcher, Government and Academic Research, HealthCore, Inc., Alexandria, Virginia.

Funding/Support: This research was funded by the American Board of Psychiatry and Neurology, Inc.

Other disclosures: None reported.

Ethical approval: This project was approved by the Oregon Health & Science University institutional review board (number IRB00009872) on September 23, 2013.

Disclaimer: The views expressed in this article are those of the authors and do not reflect the official position of the American Board of Psychiatry and Neurology, Inc.

Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A361.

Correspondence should be addressed to Bentson H. McFarland, 160 Lee St., Apt. 307, Seattle, WA 98109-3199; telephone: (503) 245-6550; e-mail: mcfarlab@ohsu.edu.

Debate about maintenance of certification1–6 has raised questions about the relationship between quality of care and board certification itself.6–8 But this topic has rarely (if ever) been addressed for psychiatrists and neurologists.

Introduction

Studies have compared patient outcomes and/or care quality for board-certified (BC) versus non-board-certified (nBC) physicians in assorted specialties,9,10 including cardiology,11 family medicine,12–14 and internal medicine,12–14 but not in psychiatry or neurology. For example, BC cardiologists’ echocardiogram interpretations (adjusted for confounders) were more highly correlated with mortality than those of the nBC.11 Conversely, Chen et al12 found “treatment by a board-certified physician was associated with modestly higher quality of care for [acute myocardial infarction], but not differences in mortality.” However, another study found a 5% mortality reduction associated with board certification among internists’ and family practitioners’ inpatients with acute myocardial infarction or congestive heart failure.13 Additionally, Medicare patients of BC primary care physicians were more likely to obtain preventive services (mammograms or colon cancer screenings) than patients of the nBC.14 Similarly, internists’ recertification examination scores positively correlated with services such as diabetes care, mammography, or lipid testing for Medicare patients.15 But few (if any) such studies involved psychiatrists or neurologists.

Reviews pertaining to physicians overall10,16,17 have generally found superior outcome and quality measures for BC versus nBC physicians. Sharp et al17 examined 11 studies containing 29 outcome measures, of which 16 were positively and significantly associated with board certification. But only 1 of those studies (on malpractice insurance loss) included psychiatrists and neurologists (apparently 30 or fewer of each).10 Lipner et al16 reviewed 29 studies on board certification and quality measures (e.g., operative mortality, surgical complications, preventive services), with most of the studies showing positive albeit modest effect sizes for the BC physicians. Again, none of the reviewed studies focused on psychiatrists or neurologists.

Reid et al9 used 124 RAND claims-based quality indicators18–20 to examine the care delivered by 10,408 Massachusetts physicians in 23 specialties (including 489 psychiatrists and 375 neurologists, although separate analyses for these groups were not provided). Among all physicians, board certification was associated with 3% higher quality of care, which was “not significant in a practical sense.”9 Reid et al9 also noted research challenges that included the numbers of physicians studied, the availability of physician characteristics, and the scope and validity of the quality metrics used.

Until recently, there have been rather few quality-of-care measures feasible for large studies addressing board certification among psychiatrists and neurologists.21,22 Pincus et al22 advised bringing “care for mental health and substance use disorders into the mainstream of quality measurement,” while Cassel and Jain23 noted that “efforts to assess physician performance are here to stay.”

Indeed, there are now numerous measures to address the quality of care provided by neurologists24–33 and psychiatrists.18,19,25,27,29–32,34–41 However, Cassel and Jain23 noted that although quality measures “are critical to a high-functioning health care system,” they are also “incomplete proxies of … physician performance.”

Consequently, it might not be possible to differentiate BC from nBC psychiatrists and neurologists with respect to quality of care because either there is little difference in the quality of care they provide or the measures are inadequate to the task of demonstrating the difference. It is also unknown whether quality measures pertaining to psychiatrists and neurologists can be obtained for a large enough number of physicians to conduct analyses with adequate statistical power.

These concerns notwithstanding, the feasibility of care quality assessment for psychiatrists and neurologists is timely because “board certification … [identifies] physicians who [meet] peer-established standards,”8 “is emerging as a measure of physician quality,”42 and “is well reflected in both training and delivery systems.”43 In addition, “the member boards of the American Board of Medical Specialties … in collaboration with external researchers, must ensure that the program’s research base and its quality is continuously improved.”4

Therefore, our primary objective for this study was to examine the associations between board certification of psychiatrists and neurologists and quality-of-care measures, using multilevel models controlling for physician and patient characteristics. We hypothesized that BC psychiatrists and neurologists would provide higher quality of care than the nBC. We also wished to assess the feasibility44 of linking psychiatrist and neurologist information with patient records to construct quality measures from electronic claims data.

Method

We identified quality measures via a multistep procedure (see Supplemental Digital Appendix 1 at http://links.lww.com/ACADMED/A361), focusing on quality measures that could be computed from electronic claims data.

We examined medical, pharmacy, and laboratory claims obtained by HealthCore from some 43 million Blue Cross Blue Shield enrollees in California, Colorado, Connecticut, Georgia, Indiana, Kentucky, Maine, Missouri, Nevada, New Hampshire, New York, Ohio, Virginia, and Wisconsin from 2006 to 2012. We selected these states because their Blue Cross Blue Shield plans provided HealthCore with longitudinal data. HealthCore physician identifiers included the National Provider Identifier45 plus the physician’s first name, last name, and office address. Several other studies46–48 have also used these claims data.

One of us (B.H.M.) identified office-based or hospital staff physicians who had completed a residency in psychiatry or neurology and who were listed in the American Medical Association Physician Masterfile, which has been used in numerous studies.42,49,50 He then:

  1. defined physicians shown as certified to be BC and those not shown as certified to be nBC;
  2. assigned physicians trained and/or certified in both psychiatry and neurology their self-designated primary specialty;
  3. identified 4,067 BC psychiatrists, 1,241 nBC psychiatrists, 1,830 BC neurologists, and 366 nBC neurologists practicing in one of the HealthCore states mentioned above from 2006 to 2011;
  4. selected random samples of 1,000 BC psychiatrists, 1,000 nBC psychiatrists, and 1,000 BC neurologists from these groups (all 366 nBC neurologists were used); and
  5. provided their physician identifiers to HealthCore staff (A.W., N.S., G.S.) blinded with regard to certification.

HealthCore staff then matched (see Supplemental Digital Appendix 2 at http://links.lww.com/ACADMED/A361) 942 BC psychiatrists, 868 nBC psychiatrists, 963 BC neurologists, and 328 nBC neurologists to HealthCore claims data.

Using the matched data, HealthCore staff identified psychiatrists who treated at least one patient with a schizophrenia diagnosis from 2006 to 2012 and analyzed claims for their schizophrenia patients (with the first claim’s service date defining the patient’s index year). HealthCore staff focused on patients prescribed an atypical antipsychotic (aripiprazole, asenapine, iloperidone, lurasidone, olanzapine, paliperidone, risperidone, quetiapine, and ziprasidone, but excluding clozapine) by study psychiatrists during the study years. We analyzed claims following the first atypical antipsychotic prescription by the study psychiatrist. Quality measures included duration of any (typical or atypical) antipsychotic use, avoidance of multiple antipsychotics prescribed simultaneously, and glucose and lipid testing during the 365 days following the first fill of an atypical antipsychotic. Typical antipsychotics were those in Facts and Comparisons.51 HealthCore staff also computed medication possession ratios (MPRs) (see Supplemental Digital Appendix 3 at http://links.lww.com/ACADMED/A361).
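
To make the MPR construct concrete, the sketch below (in Python, which is not the software used in the study) computes a medication possession ratio as the proportion of days in a 365-day window covered by antipsychotic fills. The study's exact definition appears in Supplemental Digital Appendix 3 and may differ; the function name, data layout, and example fills here are hypothetical illustrations only.

  from datetime import date, timedelta

  def medication_possession_ratio(fills, window_start, window_days=365):
      """Approximate MPR: days covered by fills within the window, divided by
      the window length. `fills` is a list of (fill_date, days_supply) pairs
      from pharmacy claims. Illustrative sketch only; the study's exact MPR
      rules (Supplemental Digital Appendix 3) may differ, for example in how
      overlapping fills or plan disenrollment are handled."""
      window_end = window_start + timedelta(days=window_days)
      covered = set()
      for fill_date, days_supply in fills:
          for offset in range(days_supply):
              day = fill_date + timedelta(days=offset)
              if window_start <= day < window_end:
                  covered.add(day)  # a day covered by overlapping fills counts once
      return len(covered) / window_days

  # Hypothetical example: two 90-day fills and one 30-day fill after the
  # index (first atypical antipsychotic) prescription.
  fills = [(date(2010, 1, 1), 90), (date(2010, 4, 15), 90), (date(2010, 8, 1), 30)]
  print(round(medication_possession_ratio(fills, date(2010, 1, 1)), 2))  # 0.58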

Using the matched data, HealthCore staff also identified neurologists attending patients discharged with a principal diagnosis of ischemic stroke from 2006 to 2012 and analyzed claims from the most recent inpatient episode and 30 days post discharge (with the first claim’s service date defining the patient’s index year). Where possible, HealthCore staff noted orders for aspirin, clopidogrel, subcutaneous low-molecular-weight heparin, intravenous heparin, dysphagia evaluation, or speech/language evaluation (see Supplemental Digital Appendix 4 at http://links.lww.com/ACADMED/A361). HealthCore staff also identified patients with co-occurring atrial fibrillation and recorded their warfarin orders (or prescriptions). In addition, HealthCore staff identified patients who were known to have died during their inpatient stay or within 30 days post discharge.
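
Many of the schizophrenia and stroke measures reduce to the same question: did a qualifying claim occur within a defined window after an index event (e.g., a glucose test within 365 days of the first antipsychotic fill, or a warfarin fill within 30 days of discharge)? The Python helper below illustrates that logic as an assumption-laden sketch; the codes, field names, and dates are hypothetical placeholders, not the study's actual specifications.

  from datetime import date, timedelta

  def has_claim_in_window(claims, qualifying_codes, index_date, window_days):
      """Return True if any claim with a qualifying code falls within
      `window_days` of `index_date`. `claims` is an iterable of
      (service_date, code) pairs; the codes and field names here are
      hypothetical placeholders, not the study's actual code lists."""
      window_end = index_date + timedelta(days=window_days)
      return any(code in qualifying_codes and index_date <= service_date < window_end
                 for service_date, code in claims)

  # Hypothetical usage: warfarin filled within 30 days post discharge for a
  # stroke patient with co-occurring atrial fibrillation.
  claims = [(date(2011, 6, 20), "WARFARIN"), (date(2011, 9, 1), "CLOPIDOGREL")]
  print(has_claim_in_window(claims, {"WARFARIN"}, date(2011, 6, 10), 30))  # True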

We compared categorical variables with chi-square tests and continuous variables via analyses of variance. We constructed multilevel models (patients nested within physicians) using the GENMOD procedure in SAS 9.4 (SAS Institute, Inc., Cary, North Carolina). The primary independent variable was BC versus nBC status. Covariates were physician age, gender, and depth of experience (i.e., years in practice), as well as patient age and gender. We estimated fixed-effects models with random physician intercepts (random-effects models did not converge). All P values were two tailed, and statistical significance was set at P < .01. Odds ratios, confidence intervals, and log-likelihood increases for all predictors are given in Supplemental Digital Appendixes 5 and 6 (at http://links.lww.com/ACADMED/A361). We computed post hoc statistical power52 using G*Power 3.1 (Heinrich Heine University Düsseldorf, Düsseldorf, Germany)53 and Optimal Design (W.T. Grant Foundation/Western Michigan University, Kalamazoo, Michigan).54
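
For readers who want to see the modeling step in code, the following Python sketch fits a broadly analogous cluster-adjusted logistic model (a generalized estimating equation with an exchangeable working correlation, patients nested within physicians) on synthetic placeholder data. The study itself used the GENMOD procedure in SAS with the covariates listed above; the column names, simulated data, and estimator choice below are illustrative assumptions, not the authors' code or exact model.

  import numpy as np
  import pandas as pd
  import statsmodels.api as sm
  import statsmodels.formula.api as smf

  # Synthetic stand-in data: one row per patient, patients clustered within
  # physicians. All column names are hypothetical placeholders for the
  # variables described in the Method section.
  rng = np.random.default_rng(0)
  n_md, pts_per_md = 200, 10
  df = pd.DataFrame({
      "physician_id": np.repeat(np.arange(n_md), pts_per_md),
      "board_certified": np.repeat(rng.integers(0, 2, n_md), pts_per_md),
      "phys_age": np.repeat(rng.normal(57, 10, n_md), pts_per_md),
      "phys_male": np.repeat(rng.integers(0, 2, n_md), pts_per_md),
      "years_in_practice": np.repeat(rng.normal(24, 11, n_md), pts_per_md),
      "patient_age": rng.normal(43, 16, n_md * pts_per_md),
      "patient_male": rng.integers(0, 2, n_md * pts_per_md),
  })
  df["glucose_test"] = rng.integers(0, 2, len(df))  # placeholder binary outcome

  # Cluster-adjusted logistic model: patients nested within physicians.
  # The study fit its models with PROC GENMOD in SAS; a GEE with an
  # exchangeable working correlation is one broadly analogous approach.
  model = smf.gee(
      "glucose_test ~ board_certified + phys_age + phys_male"
      " + years_in_practice + patient_age + patient_male",
      groups="physician_id",
      data=df,
      family=sm.families.Binomial(),
      cov_struct=sm.cov_struct.Exchangeable(),
  )
  result = model.fit()
  print(np.exp(result.params["board_certified"]))          # odds ratio for BC
  print(np.exp(result.conf_int().loc["board_certified"]))  # 95% confidence interval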

Results

Overall, multilevel models (patients nested within physicians) showed no statistically significant differences in quality measures between BC and nBC psychiatrists and neurologists.

Table 1 describes the study psychiatrists who treated patients with schizophrenia. Of the matched psychiatrists, 312/942 (33.1%) BC and 288/868 (33.2%) nBC psychiatrists treated at least one such patient (P > .99); both groups were chiefly male (234/312 [75.0%] vs. 223/288 [77.4%]). The BC psychiatrists (mean age 56.6 [standard deviation (SD) = 10.8]) were statistically significantly (P < .0001) younger than the nBC psychiatrists (mean age 60.4 [SD = 10.8]). However, their mean years in practice were similar (23.4 [SD = 11.5] vs. 24.5 [SD = 12.2]).

Table 1

The patients with schizophrenia treated by the psychiatrists are also described in Table 1. There were statistically significantly (P = .04) more males among the patients of BC psychiatrists (523/958 [54.6%]) than the nBC (496/995 [49.8%]), but the mean ages were similar (42.8 [SD = 16.8] vs. 43.3 [SD = 16.4]). Index-year distributions did not differ significantly between groups.

The vast majority of patients had atypical antipsychotic prescriptions filled (795/958 [83.0%] for patients of BC vs. 792/995 [79.6%] for patients of nBC psychiatrists, P = .06). Three hundred forty-five of 1,587 (21.7%) patients with schizophrenia who had filled atypical antipsychotic prescriptions were enrolled in study state Blue Cross Blue Shield health plans for fewer than 365 days after their first atypical antipsychotic prescription fill (187/1,587 [11.8%] for 0–180 days and 158/1,587 [10.0%] for 181–364 days) with no statistically significant differences between patients of BC and patients of nBC psychiatrists (P = .48) (data not shown in Table 1).

Among those using atypical antipsychotics, more patients of BC (480/795 [60.4%]) versus nBC psychiatrists (440/792 [55.6%]) had a glucose test within 365 days after the first fill (P = .05). And more patients of BC psychiatrists had a lipid test within 365 days after the first fill than did patients of the nBC (321/795 [40.4%] vs. 277/792 [35.0%], P = .03) (Table 1).

Among patients with at least one antipsychotic prescription fill, mean MPRs were similar between groups (0.6 [SD = 0.3] for BC vs. 0.6 [SD = 0.3] for nBC psychiatrists, P = .09), and no statistically significant differences were observed for MPR ≥ 80% (P = .39) or MPR ≥ 90% (P = .56). The percentages of days with multiple antipsychotics (i.e., multidrug days) were similar for both groups of patients (Table 1).

In multilevel models, the board certification odds ratio for glucose testing was 1.19 (95% confidence interval: 0.96–1.49, P = .11) and for lipid testing was 1.24 (95% confidence interval: 0.99–1.55, P = .06) (see Supplemental Digital Appendix 5 at http://links.lww.com/ACADMED/A361). The increase in log-likelihood due to physician (vs. patient) factors was 0.88 of 9.66 (9.1%) for glucose testing and 1.40 of 19.19 (7.3%) for lipid testing (see Supplemental Digital Appendix 6 at http://links.lww.com/ACADMED/A361). There was 90% statistical power to detect a glucose testing effect size of 0.17 and a lipid testing effect size of 0.19, both of which Cohen52 considered small.

The study neurologists who treated ischemic stroke patients are described in Table 2. About two-thirds (622/963 [64.6%]) of the BC neurologists treated ischemic stroke inpatients versus around half (170/328 [51.8%]) of the nBC (P < .0001). Both BC and nBC neurologists were mostly male (490/622 [78.8%] vs. 132/170 [77.6%]), were of a similar mean age (54.9 [SD = 9.5] vs. 56.6 [SD = 11.0]), and had a similar depth of experience (23.0 [SD = 10.0] vs. 21.5 [SD = 11.3] mean years in practice).

Table 2

The ischemic stroke patients treated by the neurologists are also described in Table 2. Males were the minority in both groups (5,669/12,204 [46.5%] for BC vs. 1,181/2,580 [45.8%] for nBC neurologists), with no statistically significant gender differences (P = .53). Patients of BC neurologists were on average a year older than those of the nBC (mean age 70.1 [SD = 15.1] vs. 69.4 [SD = 15.4], P = .02). Index-year distributions did not differ significantly between groups. Death during inpatient stay (i.e., mortality in hospital) was about 1% for both groups and was not statistically significantly different for patients of BC versus nBC neurologists (P = .80). Including deaths in the 30 days post discharge, mortality rates were 525/12,204 (4.3%) for patients of BC neurologists versus 93/2,580 (3.6%) for patients of the nBC (P = .12) (data not shown in Table 2). Of the 14,784 patients with ischemic stroke, 2,281 (15.4%) had fewer than 30 days of enrollment in study state Blue Cross Blue Shield health plans post discharge (616/14,784 [4.2%] due to death and 1,665/14,784 [11.3%] due to other reasons), with no statistically significant differences between patients of BC and patients of nBC neurologists (P = .28) (data not shown in Table 2).

No statistically significant differences were observed between patients of BC and patients of nBC neurologists for use of clopidogrel (1,938/12,204 [15.9%] vs. 428/2,580 [16.6%], P = .37), low-molecular-weight heparin (262/12,204 [2.1%] vs. 57/2,580 [2.2%], P = .84), or intravenous heparin (247/12,204 [2.0%] vs. 57/2,580 [2.2%], P = .55). Among the ischemic stroke patients with co-occurring atrial fibrillation, 673/2,869 (23.5%) of the BC neurologists’ patients and 113/563 (20.1%) of the nBC neurologists’ patients filled a warfarin prescription within 30 days post discharge (P = .08). It was not possible to analyze aspirin use (see Supplemental Digital Appendix 4 at http://links.lww.com/ACADMED/A361).

Inpatient speech/language evaluations were found less often for BC neurologists’ patients (67/12,204 [0.5%]) than for those of the nBC (28/2,580 [1.1%]), a statistically significant difference (P = .002). However, this difference was not observed when the 30 days post discharge were included (536/12,204 [4.4%] for BC vs. 116/2,580 [4.5%] for nBC neurologists, P = .82). During the inpatient stay, 1,039/12,204 (8.5%) of BC neurologists’ patients versus 206/2,580 (8.0%) of nBC neurologists’ patients were evaluated for dysphagia. Including the 30 days after discharge, 1,242/12,204 (10.2%) of the BC neurologists’ patients versus 251/2,580 (9.7%) of the nBC neurologists’ patients had dysphagia evaluation claims.

In the multilevel model for inpatient dysphagia evaluation, the certification odds ratio was 1.06 (95% confidence interval: 0.89–1.25, P = .52), and the increase in log-likelihood due to physician (vs. patient) factors was 0.17 of 59.90 (0.3%) (see Supplemental Digital Appendixes 5 and 6 at http://links.lww.com/ACADMED/A361). There was 90% statistical power to detect a dysphagia evaluation effect size of 0.09, which is much smaller than what Cohen52 considered small.

Discussion

We found very similar quality measures for BC versus nBC psychiatrists and neurologists; given multiple comparisons, it is unlikely that any differences we observed are clinically meaningful. These findings echo results for other specialists that suggest that quality-of-care differences between BC and nBC physicians are “generally not significant in a practical sense”9 or are only modestly higher for BC physicians.12

It may seem surprising that the results of our study suggest that a BC physician who recently obtained certification could provide quality of care at least comparable to that of an nBC physician who has been practicing for a number of years. However, Reid et al9 also “did not find any associations between physicians’ years of experience and quality.” Moreover, Ericsson55 pointed out that “reviews and meta-analyses of thousands of experienced health professionals show weak or non-existent correlations between performance on representative tasks and years of professional experience.”

The strengths of our study included the use of recommended research methods17 designed to address concerns that board certification has “received minimal notice within the new quality movement.”56 Also, as shown in Supplemental Digital Appendix 7 (at http://links.lww.com/ACADMED/A361), universes of physicians57,58 were large (with the exception of nBC neurologists; nonetheless, we had considerable statistical power for the neurology analyses).

We also demonstrated the feasibility of linking physician information with patient records to construct quality measures from electronic claims data. We were able to compute all planned quality measures pertaining to patients with schizophrenia. However, besides warfarin prescription, we could not interpret (and, therefore, did not analyze) quality measures for ischemic stroke patients with co-occurring atrial fibrillation (see Supplemental Digital Appendix 8 at http://links.lww.com/ACADMED/A361). In addition, we could not construct certain quality measures for ischemic stroke patients (such as aspirin use and some medications ordered during the inpatient stay), but we could compute surrogate measures such as the medications prescribed post discharge.

Limitations of our study included the possibility of missing data. For example, roughly 40% of patients with schizophrenia who were prescribed atypical antipsychotics had a lipid test, whereas ideally all such patients would have been tested.59 However, physicians may have advised lipid tests that patients declined. As another example, only about 10% of ischemic stroke patients had inpatient dysphagia evaluations. But bedside examinations may have suggested that such tests were not needed, or hospitals may have bundled the dysphagia evaluation into an overall bill so that it would not appear in the claims.

Health system structure60 might have complicated quality comparisons across physicians. For example, a hospital may mandate stroke protocols that minimize quality-of-care differences among neurologists. Moreover, physicians are “embedded in teams.”61 Thus, the characteristics of neurologists on such teams may have contributed very little to the multilevel model goodness of fit.

Another limitation is that “almost all new graduates (of psychiatry residencies) sought certification,”62 which implies (as we found) that BC physicians tend to be younger than the nBC. Therefore, we included physician age as a confounder, but there may have been other differences between BC and nBC physicians, such as licensing examination scores42 or numbers of skills, that we did not measure.63

Of course, quality of care may not be reflected adequately64 by process60 measures based on claims. Indeed, McGlynn et al65 noted that stakeholders “view measurement as burdensome, expensive, [and] inaccurate,” whereas Cassel et al21 indicated that “the current measurement paradigm … does not live up to its potential.” Conway et al66 and Lee64 advocated assessments “that are meaningful to patients,”64 whereas Bishop67 noted that “measures of high-level quality remain difficult to define and measure.”

Indeed, one can ask whether or not items such as inpatient orders for drugs administered to ischemic stroke patients or the duration of antipsychotic drug use and the avoidance of simultaneously prescribing multiple antipsychotic medications to schizophrenia patients are sufficient and convincing enough to make judgments regarding quality of care (let alone a precise statement about the association between board certification and quality of care). Alternative or additional process-of-care measures might include a checklist of procedures that a psychiatrist or neurologist should use for certain conditions. For example, Bond et al68 suggested that “standardized patients (SPs) could appear in the clinical environment, with providers (the study participant) blinded to the SP’s true identity. SPs could use checklists.” Other alternative or additional measures might come from Medicare’s Physician Quality Reporting System (see Supplemental Digital Appendix 9 at http://links.lww.com/ACADMED/A361).

The problem in this study (and analogous studies9) is that essentially all the quality measures are process measures gleaned from claims data; none are outcome measures or patient/caregiver satisfaction measures. Indeed, Reid et al9 raised the possibility of “stronger associations between physician characteristics and performance on quality measures that were not investigated, (eg, measures of patient experience or mortality),” but pointed out that claims analyses do “[allow] us to assess quality of care for a large number of physicians.”

Thus, the quality measures we analyzed did not significantly differentiate the BC from the nBC, but overall, the conclusions could be considered “soft.” That is, we have no idea whether the reason for our findings is that there is little difference in the quality of care provided or that the measures were inadequate to the task of demonstrating the difference. Perhaps the measures we employed reflect what is readily available, not what is discerning, insightful, or discriminating.

Suppose there are only minimal differences in the quality of care provided by those who are BC and those who are nBC. That would suggest that the tests used for board certification are not aligned with quality measures, and it is worth speculating why such a misalignment might exist. One shortcoming of board certification testing is its one- or two-day time frame. Because health care is often delivered over months or years, tests would ideally reflect the quality of care over these longer terms.

If, on the other hand, there is a difference but it cannot be measured with the quality measures used in this study, we might ask: What do these quality measures reflect, why do they produce similar results between BC and nBC physicians, and what better ways are there to differentiate between the quality of care provided by BC and nBC physicians? In other words, we can ask what alternative research approaches might lead to greater sensitivity, power, and meaningfulness for evaluating quality differences among psychiatrists and neurologists, whether BC or nBC.

Outcome measures could, in theory, include the ability of the schizophrenia patient to thrive with his/her disorder after receiving treatment from the psychiatrist or the mortality rate of ischemic stroke patients who were treated by neurologists (recognizing that data from the patients would have to be standardized). Another important outcome measure could be physician and patient interaction, which addresses quality of care in a manner that goes beyond the technical aspects of medicine. Similarly, one might also look at the patient ratings of BC physicians versus those who are nBC. Additional strategies might include record reviews (perhaps in collaboration with utilization review companies), patient interviews, simulations, and case–control studies of rare events (such as suicides). However, these methods present numerous challenges such as participant recruitment, informed consent, and statistical power (in addition to being expensive). Indeed, Bilimoria69 recently noted that

process measures offer important measurement benefits over outcome metrics. Process-of-care measures do not generally require risk adjustment because the clinician or hospital should, in theory, always be adherent if the inclusion and exclusion criteria are well specified. Similarly, large numbers of cases are not needed….

However, Ericsson70 pointed out that “medicine requires tests, similar to those developed for chess, to measure performance that are highly correlated with the real-world outcomes of actual patients.” But tests that respect the nature of expertise in medicine and other fields are not easily constructed. Indeed, Anders Ericsson and Towne71 noted that “the greatest challenge is to develop methods to measure and capture the full range of performance.” For example, unlike a timed chess game, health care is often provided in multiple encounters over a lengthy period. Performance tests would need to combine measurements addressing services provided to “patients over their total cycle of care.”23

These considerations are of considerable interest because alternative approaches could provide directions for further work that might benefit the consumers of these services. Indeed, “the new interest in improving the effectiveness of medical education and in simulation training offers a timely opportunity to motivate the collection and analysis of objective and detailed data on medical performance by individuals and teams.”70

In summary, we found it feasible to compute several psychiatry and neurology quality-of-care measures from electronic claims data. However, the quality measures did not show a significant distinction between BC and nBC psychiatrists and neurologists. Alternative research approaches may be needed to identify quality-of-care differences between BC and nBC psychiatrists and neurologists.

References

1. Gray BM, Vandergrift JL, Johnston MM, et al. Association between imposition of a maintenance of certification requirement and ambulatory care-sensitive hospitalizations and health care costs. JAMA. 2014;312:2348–2357.
2. Hayes J, Jackson JL, McNutt GM, Hertz BJ, Ryan JJ, Pawlikowski SA. Association between physician time-unlimited vs time-limited internal medicine board certification and ambulatory patient care quality. JAMA. 2014;312:2358–2363.
3. Iglehart JK, Baron RB. Ensuring physicians’ competence—Is maintenance of certification the answer? N Engl J Med. 2012;367:2543–2549.
4. Irons MB, Nora LM. Maintenance of certification 2.0—Strong start, continued evolution. N Engl J Med. 2015;372:104–106.
5. Teirstein PS, Topol EJ. The role of maintenance of certification programs in governance and professionalism. JAMA. 2015;313:1809–1810.
6. Teirstein PS. Boarded to death—Why maintenance of certification is bad for doctors and patients. N Engl J Med. 2015;372:106–108.
7. Grosch EN. Does specialty board certification influence clinical outcomes? J Eval Clin Pract. 2006;12:473–481.
8. Lee TH. Certifying the good physician: A work in progress. JAMA. 2014;312:2340–2342.
9. Reid RO, Friedberg MW, Adams JL, McGlynn EA, Mehrotra A. Associations between physician characteristics and quality of care. Arch Intern Med. 2010;170:1442–1449.
10. Schwartz WB, Mendelson DN. Physicians who have lost their malpractice insurance. Their demographic characteristics and the surplus-lines companies that insure them. JAMA. 1989;262:1335–1341.
11. Heidenreich PA, Maddox TM, Nath J. Measuring the quality of echocardiography using the predictive value of the left ventricular ejection fraction. J Am Soc Echocardiogr. 2013;26:237–242.
12. Chen J, Rathore SS, Wang Y, Radford MJ, Krumholz HM. Physician board certification and the care and outcomes of elderly patients with acute myocardial infarction. J Gen Intern Med. 2006;21:238–244.
13. Norcini JJ, Boulet JR, Dauphinee WD, Opalek A, Krantz ID, Anderson ST. Evaluating the quality of care provided by graduates of international medical schools. Health Aff (Millwood). 2010;29:1461–1468.
14. Pham HH, Schrag D, Hargraves JL, Bach PB. Delivery of preventive services to older adults by primary care physicians. JAMA. 2005;294:473–481.
15. Holmboe ES, Wang Y, Meehan TP, et al. Association between maintenance of certification examination scores and quality of care for Medicare beneficiaries. Arch Intern Med. 2008;168:1396–1403.
16. Lipner RS, Hess BJ, Phillips RL Jr. Specialty board certification in the United States: Issues and evidence. J Contin Educ Health Prof. 2013;33(Suppl 1):S20–S35.
17. Sharp LK, Bashook PG, Lipsky MS, Horowitz SD, Miller SH. Specialty board certification and clinical outcomes: The missing link. Acad Med. 2002;77:534–542.
18. Kerr EA, Asch SM, Hamilton EG, McGlynn EA. Introduction. In: Quality of Care for General Medical Conditions: A Review of the Literature and Quality Indicators. 2000. Santa Monica, Calif: RAND.
19. Kerr EA, Asch SM, Hamilton EG, McGlynn EA. Appendix A: Panel rating summary by condition. In: Quality of Care for General Medical Conditions: A Review of the Literature and Quality Indicators. 2000. Santa Monica, Calif: RAND.
20. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348:2635–2645.
21. Cassel CK, Conway PH, Delbanco SF, Jha AK, Saunders RS, Lee TH. Getting more performance from performance measurement. N Engl J Med. 2014;371:2145–2147.
22. Pincus HA, Spaeth-Rublee B, Watkins KE. Analysis & commentary: The case for measuring quality in mental health and substance abuse care. Health Aff (Millwood). 2011;30:730–736.
23. Cassel CK, Jain SH. Assessing individual physician performance: Does measurement suppress motivation? JAMA. 2013;307:2595–2596.
24. American Academy of Neurology; American College of Radiology; Physician Consortium for Performance Improvement; National Committee for Quality Assurance. Stroke and Stroke Rehabilitation Physician Performance Measurement Set. September 2006, Updated February 2009, Coding Reviewed and Updated September 2010. 2010. Chicago, Ill: American Medical Association.
25. Centers for Medicare and Medicaid Services. 2012 Physician Quality Reporting System Measure Specifications Manual for Claims and Registry Reporting of Individual Measures. 2012. Baltimore, Md: U.S. Department of Health and Human Services Centers for Medicare and Medicaid Services.
26. Cheng EM, Tonn S, Swain-Eng R, Factor SA, Weiner WJ, Bever CT Jr; American Academy of Neurology Parkinson Disease Measure Development Panel. Quality improvement in neurology: AAN Parkinson disease quality measures: Report of the Quality Measurement and Reporting Subcommittee of the American Academy of Neurology. Neurology. 2010;75:2021–2027.
27. Medicaid program: Initial core set of health quality measures for Medicaid-eligible adults. Fed Regist. 2010;75:82397–82399.
28. Fountain NB, Van Ness PC, Swain-Eng R, Tonn S, Bever CT Jr; American Academy of Neurology Epilepsy Measure Development Panel and the American Medical Association–Convened Physician Consortium for Performance Improvement Independent Measure Development Process. Quality improvement in neurology: AAN epilepsy quality measures: Report of the Quality Measurement and Reporting Subcommittee of the American Academy of Neurology. Neurology. 2011;76:94–99.
29. National Committee for Quality Assurance. NQF-Endorsed National Voluntary Consensus Standards for Physician-Focused Ambulatory Care: Appendix A—NCQA Measure Technical Specifications—April, 2008 V.7. 2008. Washington, DC: National Committee for Quality Assurance.
30. National Quality Forum. National Voluntary Consensus Standards for Ambulatory Care Using Clinically Enriched Administrative Data: A Consensus Report. 2010. Washington, DC: National Quality Forum.
31. National Quality Forum. National Voluntary Consensus Standards for Medication Management: A Consensus Report. 2010. Washington, DC: National Quality Forum.
32. National Quality Forum. National Voluntary Consensus Standards for Stroke Prevention and Management Across the Continuum of Care: A Consensus Report. 2009. Washington, DC: National Quality Forum.
33. Silberstein SD, Holland S, Freitag F, Dodick DW, Argoff C, Ashman E; Quality Standards Subcommittee of the American Academy of Neurology and the American Headache Society. Evidence-based guideline update: Pharmacologic treatment for episodic migraine prevention in adults: Report of the Quality Standards Subcommittee of the American Academy of Neurology and the American Headache Society. Neurology. 2012;78:1337–1345.
34. Addington DE, Mckenzie E, Wang J, Smith HP, Adams B, Ismail Z. Development of a core set of performance measures for evaluating schizophrenia treatment services. Psychiatr Serv. 2012;63:584–591.
35. Center for Quality Assessment and Improvement in Mental Health. STABLE Performance Measures: Summary. 2007. Boston, Mass: Institute for Clinical Research & Health Policy Studies; http://www.cqaimh.org/pdf/measures_summary.pdf. Accessed March 14, 2016.
36. Hermann RC, Leff HS, Lagodmos G. Selecting Process Measures for Quality Improvement in Medical Care. 2002. Cambridge, Mass: The Evaluation Center @ HSRI.
37. Hermann RC, Palmer RH. Common ground: A framework for selecting core quality measures for mental health and substance abuse care. Psychiatr Serv. 2002;53:281–287.
38. Horovitz-Lennon M, Watkins KE, Pincus HA, et al. Veterans Health Administration Mental Health Program Evaluation Technical Manual. 2009. Santa Monica, Calif: RAND.
39. Joint Commission. Specifications Manual for Joint Commission National Quality Measures (version 2011A). 2010. Chicago, Ill: Joint Commission.
40. Substance Abuse and Mental Health Services Administration. National outcome measures: Update. SAMHSA News. 2007;15(2):9.
41. Watkins K, Horvitz-Lennon M, Caldarone LB, et al. Developing medical record-based performance indicators to measure the quality of mental healthcare. J Healthc Qual. 2011;33:49–66.
42. Jeffe DB, Andriole DA. Factors associated with American Board of Medical Specialties member board certification among US medical school graduates. JAMA. 2011;306:961–970.
43. Nora LM, Wynia MK, Granatir T. Of the profession, by the profession, and for patients, families, and communities: ABMS board certification and medicine’s professional self-regulation. JAMA. 2015;313:1805–1806.
44. McGlynn EA, Adams JL. What makes a good quality measure? JAMA. 2014;312:1517–1518.
45. HIPAA administrative simplification: National plan and provider enumeration system data dissemination. Fed Regist. 2007;72:30011–30014.
46. AbuDagga A, Stephenson JJ, Fu AC, Kwong WJ, Tan H, Weintraub WS. Characteristics affecting oral anticoagulant therapy choice among patients with non-valvular atrial fibrillation: A retrospective claims analysis. BMC Health Serv Res. 2014;14:310.
47. Bekelman JE, Sylwestrzak G, Barron J, et al. Uptake and costs of hypofractionated vs conventional whole breast irradiation after breast conserving surgery in the United States, 2008–2013. JAMA. 2014;312:2542–2550.
48. Patorno E, Bohn RL, Wahl PM, et al. Anticonvulsant medications and the risk of suicide, attempted suicide, or violent death. JAMA. 2010;303:1401–1409.
49. Oreskovich MR, Shanafelt T, Dyrbye LN, et al. The prevalence of substance use disorders in American physicians. Am J Addict. 2015;24:30–38.
50. Tilburt JC, Wynia MK, Sheeler RD, et al. Views of US physicians about controlling health care costs. JAMA. 2013;310:380–388.
51. Facts and Comparisons. Drug Facts and Comparisons 2012. 2011. St. Louis, Mo: Wolters Kluwer Health.
52. Cohen J. A power primer. Psychol Bull. 1992;112:155–159.
53. Faul F, Erdfelder E, Buchner A, Lang AG. Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behav Res Methods. 2009;41:1149–1160.
54. Spybrook J, Bloom H, Congdon R, Hill C, Martinez A, Raudenbush S. Optimal Design Plus Empirical Evidence: Documentation for the “Optimal Design” Software. 2011. Kalamazoo, Mich: W.T. Grant Foundation/Western Michigan University.
55. Ericsson KA. An expert-performance perspective of research on medical expertise: The study of clinical performance. Med Educ. 2007;41:1124–1130.
56. Brennan TA, Horwitz RI, Duffy FD, Cassel CK, Goode LD, Lipner RS. The role of physician specialty board certification status in the quality movement. JAMA. 2004;292:1038–1043.
57. Kaiser Family Foundation. Physicians by specialty area. Menlo Park, California: The Henry J. Kaiser Family Foundation. http://kff.org/other/state-indicator/physicians-by-specialty-area/. Accessed June 15, 2013.
58. Dall TM, Storm MV, Chakrabarti R, et al. Supply and demand analysis of the current and future US neurology workforce. Neurology. 2013;81:470–478.
59. American Diabetes Association; American Psychiatric Association; American Association of Clinical Endocrinologists; North American Association for the Study of Obesity. Consensus development conference on antipsychotic drugs and obesity and diabetes. Diabetes Care. 2004;27:596–601.
60. Donabedian A. Evaluating the quality of medical care. 1966. Milbank Q. 2005;83:691–729.
61. Baron RJ. Professional self-regulation in a changing world: Old problems need new approaches. JAMA. 2015;313:1807–1808.
62. Faulkner LR, Juul D, Andrade NN, et al. Recent trends in American Board of Psychiatry and Neurology psychiatric subspecialties. Acad Psychiatry. 2011;35:35–39.
63. Harber P, Wu S, Bontemps J, Rose S, Saechao K, Liu Y. Value of occupational medicine board certification. J Occup Environ Med. 2013;55:532–538.
64. Lee VS. Redesigning metrics to integrate professionalism into the governance of health care. JAMA. 2015;313:1815–1816.
65. McGlynn EA, Schneider EC, Kerr EA. Reimagining quality measurement. N Engl J Med. 2014;371:2150–2153.
66. Conway PH, Mostashari F, Clancy C. The future of quality measurement for improvement and accountability. JAMA. 2013;309:2215–2216.
67. Bishop TF. Pushing the outpatient quality envelope. JAMA. 2013;309:1353–1354.
68. Bond W, Kuhn G, Binstadt E, et al. The use of simulation in the development of individual cognitive expertise in emergency medicine. Acad Emerg Med. 2008;15:1037–1045.
69. Bilimoria KY. Facilitating quality improvement: Pushing the pendulum back toward process measures. JAMA. 2015;314:1333–1334.
70. Ericsson KA. Acquisition and maintenance of medical expertise: A perspective from the expert-performance approach with deliberate practice. Acad Med. 2015;90:1471–1486.
71. Anders Ericsson K, Towne TJ. Expertise. Wiley Interdiscip Rev Cogn Sci. 2010;1:404–416.

© 2017 by the Association of American Medical Colleges