Erratum

Comparison of methods for algorithmic classification of dementia status in the Health and Retirement Study

doi: 10.1097/EDE.0000000000001077

Dementia probabilities using the Crimmins algorithm were incorrectly calculated. Correction of these errors does not change the overall findings or conclusions of the paper, but results in changes to most of the reported Crimmins-specific performance metrics, and a narrower range in performance metrics across algorithms.1

In the abstract, the results should read: “In the unweighted training data, sensitivity ranged from 53% to 78%, specificity ranged from 83% to 97%, and overall accuracy ranged from 81% to 87%. Though sensitivity was lower in the unweighted validation data (range: 18% to 44%), overall accuracy was similar (range: 83% to 88%) due to higher specificities (range: 89% to 98%). In analyses weighted to represent the age-eligible US population, accuracy ranged from 91% to 94% in the training data and 90% to 94% in the validation data. Using a 0.5 probability cutoff, Crimmins and Wu maximized sensitivity, Herzog-Wallace maximized specificity, and Hurd maximized accuracy.”

In the statistical analysis, the last sentence of the second paragraph should read: “Finally, we used an alternate classification rule for the Crimmins algorithm whereby we only classify persons as having dementia if they had an estimated dementia probability greater than the estimated probability of normal cognition and greater than the estimated probability of cognitive impairment-no dementia.”
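The alternate classification rule described above can be expressed compactly. A minimal sketch, assuming illustrative variable names (`p_dem`, `p_normal`, `p_cind`) that are not from the paper:

```python
# Hedged sketch of the alternate Crimmins classification rule described above.
# Variable names are illustrative, not taken from the paper or its code.

def classify_dementia_alternate(p_dem, p_normal, p_cind):
    """Classify a person as having dementia only if the estimated dementia
    probability exceeds both the estimated probability of normal cognition
    and the estimated probability of cognitive impairment-no dementia (CIND)."""
    return p_dem > p_normal and p_dem > p_cind

# Dementia is the most probable of the three classes:
print(classify_dementia_alternate(0.50, 0.30, 0.20))  # True
# Normal cognition is more probable than dementia:
print(classify_dementia_alternate(0.40, 0.45, 0.15))  # False
```

Note that under this rule dementia need only be the most probable class; its probability does not have to exceed 0.5, which is why sensitivity and specificity differ from the 0.5-cutoff results.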

In the results, the beginning of the third paragraph should read: “In the unweighted training data, sensitivity ranged from 53% to 78%, specificity ranged from 83% to 97%, and overall accuracy ranged from 81% to 87% across the five algorithms (Table 3). Overall accuracy was similar in the unweighted validation data (range: 83% to 88%); however, this was largely driven by slightly higher specificities (range: 89% to 98%), as sensitivity was much lower (range: 18% to 44%). In both the training and validation datasets, the H-W algorithm had the highest specificity, the Crimmins and Wu algorithms had the highest sensitivity, and the Hurd algorithm had the highest accuracy based on point estimates.”

In the results, the beginning of the last paragraph should read: “Sensitivity, specificity, and accuracy across the three regression-based algorithms in each dataset when using an arbitrary 0.5 probability cutoff, as well as the AUCs from ROC analyses of each algorithm, are comparable (Table 5). The ROC curves for the three algorithms in eFigure 2; http://links.lww.com/EDE/B582 also demonstrate this, as the curves for the three algorithms, as well as the location of the 0.5 cutpoint, are close within each dataset. Sensitivity was higher while specificity was lower for the Crimmins algorithm when applying our alternate classification rule (eTable 5; http://links.lww.com/EDE/B582).”

The last sentence of the results section should read: “Cutoffs producing best accuracy in the unweighted validation data (range: 87%-89%) uniformly optimize specificity (98%-99%) at the expense of sensitivity (13%-28%).”
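The sensitivity, specificity, and accuracy figures quoted throughout are standard binary-classification metrics at a probability cutoff. A minimal sketch of their computation, using made-up data rather than anything from the study:

```python
# Illustrative computation of sensitivity, specificity, and overall accuracy
# at a probability cutoff, as used to compare the algorithms. The data below
# are invented for demonstration only.

def performance_at_cutoff(probs, truth, cutoff=0.5):
    """Return (sensitivity, specificity, accuracy) for classifying dementia
    status as positive when the estimated probability exceeds `cutoff`."""
    preds = [p > cutoff for p in probs]
    tp = sum(1 for p, t in zip(preds, truth) if p and t)
    tn = sum(1 for p, t in zip(preds, truth) if not p and not t)
    fp = sum(1 for p, t in zip(preds, truth) if p and not t)
    fn = sum(1 for p, t in zip(preds, truth) if not p and t)
    sensitivity = tp / (tp + fn)   # proportion of true cases detected
    specificity = tn / (tn + fp)   # proportion of non-cases correctly excluded
    accuracy = (tp + tn) / len(truth)
    return sensitivity, specificity, accuracy

probs = [0.9, 0.2, 0.7, 0.4, 0.1, 0.8]           # estimated dementia probabilities
truth = [True, False, True, True, False, False]  # adjudicated dementia status
sens, spec, acc = performance_at_cutoff(probs, truth)  # each 2/3 here
```

Raising the cutoff trades sensitivity for specificity, which is the pattern the erratum describes in the validation data.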

The second sentence of the discussion should read: “Generally, H-W optimized specificity, Crimmins and Wu optimized sensitivity, and Hurd optimized accuracy.”

In Table 3, unweighted Crimmins performance overall and by subgroups (except proxy respondent), should be corrected to:

Table

In Table 4, weighted Crimmins performance overall should be corrected to:

Table

In Table 5, the Crimmins algorithm ROC AUC should be corrected to:

Table

In Figure 2, the Crimmins columns should be corrected as follows:

Figure

In eTable 1; http://links.lww.com/EDE/B582, weighted Crimmins performance by subgroups (except proxy respondent), should be corrected to:

Table

In eTable 2; http://links.lww.com/EDE/B582, re-estimated Crimmins performance, overall and by subgroups (except proxy respondent), should be corrected to:

Table

In eTable 3; http://links.lww.com/EDE/B582, LOOCV Crimmins performance, overall and by subgroups (except proxy respondent), should be corrected to:

Table

In eTable 4; http://links.lww.com/EDE/B582, Crimmins performance, overall and by subgroups (except proxy respondent), in alternate validation data should be corrected to:

Table

In eTable 5; http://links.lww.com/EDE/B582: row names for the alternate classification criteria should be corrected to: “P(dem) > P(normal) and P(dem) > P(CIND),” and Crimmins performance, overall and by subgroups (except proxy respondent), using P(dem) > 0.5 classification rule should be corrected to:

Table

In eTable 6; http://links.lww.com/EDE/B582, Crimmins performance at alternate cutpoints should be corrected to:

Table

In addition, the cutpoint maximizing overall accuracy using Hurd in the validation data should be corrected to 0.66.
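Accuracy-maximizing cutpoints like the corrected 0.66 value can be found by scanning a grid of candidate cutoffs. A minimal sketch under that assumption, with invented data (the study's actual search procedure may differ):

```python
# Hedged sketch of selecting the probability cutpoint that maximizes overall
# classification accuracy, as in the alternate-cutpoint analyses. Data and
# grid resolution are made up for illustration.

def best_accuracy_cutpoint(probs, truth, step=0.01):
    """Scan cutpoints from 0 to 1 and return (cutpoint, accuracy) giving the
    highest overall accuracy; ties keep the lowest cutpoint."""
    best_cut, best_acc = 0.0, 0.0
    cut = 0.0
    while cut <= 1.0:
        preds = [p > cut for p in probs]
        acc = sum(1 for p, t in zip(preds, truth) if p == t) / len(truth)
        if acc > best_acc:
            best_cut, best_acc = cut, acc
        cut = round(cut + step, 10)  # avoid floating-point drift
    return best_cut, best_acc

probs = [0.9, 0.2, 0.7, 0.4, 0.1, 0.8]
truth = [True, False, True, True, False, False]
best_cut, best_acc = best_accuracy_cutpoint(probs, truth)  # (0.2, 5/6) here
```

Because the scan optimizes accuracy alone, the chosen cutpoint can push specificity very high while sacrificing sensitivity, which is exactly the pattern the corrected results describe.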

In eFigure 2; http://links.lww.com/EDE/B582, the Crimmins ROC curve and corresponding text should be corrected as follows:

Figure

The authors regret these errors.

1. Gianattasio KZ, Wu Q, Glymour MM, Power MC. Comparison of methods for algorithmic classification of dementia status in the Health and Retirement Study. Epidemiology. 2019;30:291–302.

Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.