Clinical Science

Assessing Cognitive Functioning in People Living With HIV (PLWH): Factor Analytic Results From CHARTER and NNTC Cohorts

May, Pamela E. PhDa; Heithoff, Abigail J. MSb; Wichman, Christopher S. PhDc; Phatak, Vaishali S. PhD, ABPP-CNa; Moore, David J. PhDd; Heaton, Robert K. PhD, ABPP-CNd; Fox, Howard S. MD, PhDb

JAIDS Journal of Acquired Immune Deficiency Syndromes: March 1, 2020 - Volume 83 - Issue 3 - p 251-259
doi: 10.1097/QAI.0000000000002252

INTRODUCTION

Combination antiretroviral therapy (cART) has helped reduce morbidity and mortality associated with HIV.1 In this context, cognitive impairment among people living with HIV (PLWH) has presented in milder forms compared with the pre-cART era.2,3 The profile of HIV-associated neurocognitive disorders (HAND) during the cART era has been reported to be diffuse4 and variable, at times not only across but also within cognitive domains.5,6

In the context of a comprehensive cognitive examination, several approaches have been used to assess the presence of cognitive dysfunction or to summarize cognitive findings in PLWH.7,8 In the domain deficit score approach, demographically corrected scores are converted to ratings ranging from 0 to 5. Scores falling within normal ranges (T-score ≥40) are rated as 0; as such, deficit ratings of 1 to 5 carry more weight in domain averages. All test score ratings are then averaged within each domain; domain scores >0.5 suggest domain impairment, and 2 or more impaired domains are suggestive of overall cognitive impairment. In addition, the global deficit score (GDS) is commonly implemented, which involves averaging deficit scores across all tests administered, with GDS ≥0.5 suggestive of global cognitive impairment. These different means of assessing dysfunction can be applied to the "Frascati" guidelines for diagnosing HAND developed by Antinori et al9 (when data regarding comorbidities and daily functioning are also available).
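
To make the conversion concrete, the following sketch (not part of the original article; Python, with deficit-rating band edges assumed from the standard GDS literature) converts demographically corrected T-scores into 0 to 5 deficit ratings and averages them into a GDS:

```python
def deficit_score(t_score):
    """Convert a demographically corrected T-score into a 0-5 deficit rating.
    T >= 40 is rated 0 (normal); band edges below 40 are assumed from the
    conventional deficit-score scheme."""
    if t_score >= 40:
        return 0
    if t_score >= 35:
        return 1
    if t_score >= 30:
        return 2
    if t_score >= 25:
        return 3
    if t_score >= 20:
        return 4
    return 5


def global_deficit_score(t_scores):
    """GDS = mean deficit rating across all administered tests."""
    ratings = [deficit_score(t) for t in t_scores]
    return sum(ratings) / len(ratings)


# Hypothetical battery: mostly normal scores with two mild deficits.
battery = [52, 47, 38, 41, 33, 55, 44]
gds = global_deficit_score(battery)
print(f"GDS = {gds:.2f}; globally impaired = {gds >= 0.5}")  # GDS = 0.43
```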

Although it is worthwhile to examine the strengths and weaknesses within cognitive profiles, cognitive performance is often summarized into a single, at times dichotomous, score, such as when a GDS cut-off is used. A GDS ≥0.5 has been established as suggestive of HAND.8 The GDS detects impairment regardless of the pattern of results (and different neuropsychological patterns have been observed in PLWH5). However, whether performances on distinct tests should be averaged without a priori analysis (e.g., factor analysis) is questionable from a statistical perspective, particularly when little is known about the relative weighting of, and associations among, test scores in a particular battery and how they may contribute to a single summary score.

In the current paper, we examine the statistical structure of neuropsychological test performances in 2 cohorts of PLWH to assess whether performances load onto a unitary/global score and to detect which cognitive domains seem most sensitive to below-normative performance in PLWH, in an effort to explore alternate means of identifying facets of HIV-associated cognitive impairment. Prior literature has suggested that deficits in learning and executive functioning are the most predominant areas of weakness in PLWH (e.g., Heaton et al2), with processing speed being less prominent. However, other authors have suggested that speeded information processing may be most predictive of global cognitive functioning in PLWH, consistent with findings on the effects of HIV on subcortical and white matter structures and with factor analytic models.10–13 We aimed to identify additional means of measuring cognitive impairment based on factor analysis results. This examination was completed using 2 large cohorts of PLWH, the National NeuroAIDS Tissue Consortium (NNTC) and the CNS HIV Anti-Retroviral Therapy Effects Research (CHARTER) study.

METHOD

Participants

Baseline assessment data from adult PLWH in 2 longitudinal studies, CHARTER and NNTC, were examined consecutively. Baseline data from these cohorts were collected during the cART era. Please refer to NNTC14 and CHARTER15 for details regarding participant recruitment. NNTC recruited participants who had more severe HIV disease/greater risk of death relative to CHARTER. Participants with global clinical ratings ≥5 on the cognitive battery (classified as impaired) were excluded if, per clinical judgment following Frascati guidelines, their cognitive impairment was attributed to another condition(s), the cause of impairment could not be determined, or the clinician was unable to confidently assign a neurocognitive diagnosis. These exclusions resulted in final sample sizes of 798 and 1222 for NNTC and CHARTER, respectively. Tables 1 and 2 provide demographic information regarding the samples used.

TABLE 1. Demographics and t Test Comparisons Between Common Variables in CHARTER and NNTC Subsamples

TABLE 2. Demographics and χ2 Comparisons Between Common Variables in CHARTER and NNTC Subsamples

Procedure and Measures

For CHARTER, cross-sectional data from the first neuropsychological visit were obtained from participants recruited from 2003 to 2007 (the complete CHARTER recruitment period). For NNTC, cross-sectional data from the first nonrecruitment visit (following the screening visit) were drawn from participants who enrolled from 2000 through 2015. This period was chosen to maximize the number of participants analyzed from NNTC to compare with those used for CHARTER analyses. Refer to prior papers on CHARTER15 and NNTC14 for details regarding the neuromedical examination and neuropsychological test batteries. Table 3 lists the neuropsychological tests common to both CHARTER and NNTC.

TABLE 3. Neuropsychological Tests Shared Across CHARTER and NNTC Cohorts, by Domains as Specified in Previous CHARTER/NNTC Literature

For the purposes of factor analysis, tests were presumed to be specific to 7 cognitive domains as outlined in Woods et al14 and in additional analyses by the CHARTER group (see domains in Table 3). Factor analyses of neuropsychological tests were completed on normalized scaled scores (mean = 10; SD = 3) that were not demographically corrected and, therefore, reflect absolute levels of performance. Additional analyses were completed on demographically corrected T-scores adjusting for effects of age, education, sex, and race/ethnicity (where available), as generated from Heaton, Norman, et al16 and from normative data provided by individual test developers.
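
For readers unfamiliar with these score types, the sketch below (illustrative only; not from the article) shows the standard conversions assumed here: scaled scores are z-scores rescaled to a mean of 10 and SD of 3, whereas demographically corrected T-scores rescale a demographically adjusted z-score to a mean of 50 and SD of 10.

```python
def scaled_score(raw, norm_mean, norm_sd):
    """Normalized scaled score (mean 10, SD 3); no demographic correction,
    so it reflects the absolute level of performance."""
    z = (raw - norm_mean) / norm_sd
    return 10 + 3 * z


def t_score(z_adjusted):
    """Demographically corrected T-score (mean 50, SD 10), computed from a
    z-score already residualized for age, education, sex, and race/ethnicity
    per the published norms."""
    return 50 + 10 * z_adjusted


print(scaled_score(raw=42, norm_mean=36.0, norm_sd=9.0))  # 12.0 (above average)
print(t_score(-1.2))                                       # 38.0 (mildly impaired)
```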

Statistical Analysis

Factor analyses were conducted with SAS. Receiver operating characteristic curve analyses were conducted in PRISM (version 7.0e). Univariate and regression analyses were conducted in SPSS (version 25).

Primary analyses included factor analysis. Listwise deletion was implemented for factor analyses. Analyses were completed in CHARTER first, followed by NNTC, which served as training and testing datasets, respectively. Confirmatory factor analysis (CFA) was first used to test the structure and fit of neuropsychological domains as previously specified in the literature. Exploratory factor analysis (EFA) was next used to identify the factor structure of neuropsychological test scores using a data-driven approach. Serial CFAs, informed by EFA structural models, were then tested to identify the models with the best fit in CHARTER and then NNTC. The following model statistics and threshold values were used to evaluate acceptable model fit per literature guidelines17–19: Bentler Comparative Fit Index (BCFI; values >0.95), Root Mean Square Error of Approximation (RMSEA; values <0.07), Bentler–Bonett Normed Fit Index (BBNFI; values >0.95), and Standardized Root Mean Square Residual (SRMR; values <0.08).
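
The factor analyses reported here were run in SAS; as a rough illustration only, an analogous oblique EFA could be specified in Python with the third-party factor_analyzer package, as sketched below (the input file and column layout are hypothetical):

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical input: one row per participant, one column per test scaled score.
scores = pd.read_csv("charter_scaled_scores.csv").dropna()  # listwise deletion

# Oblique (oblimin-rotated) EFA on the test-score correlation structure.
efa = FactorAnalyzer(n_factors=6, rotation="oblimin")
efa.fit(scores)

loadings = pd.DataFrame(efa.loadings_, index=scores.columns)
variance, prop_var, cum_var = efa.get_factor_variance()
print(loadings.round(2))
print("Proportion of total variance per factor:", prop_var.round(2))
```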

Secondary analyses were conducted to compare an identified index from these factor analyses to other classification schemes (Frascati/clinical ratings and GDS), and examine variables that account for variance in this index. Variables of interest included psychosocial history (i.e., age, sex, race/ethnicity, years of education, estimated premorbid ability, and Beck Depression Inventory-Second edition score), HIV-related factors (i.e., nadir CD4 [cells/μL; square-root transformed to normalize distribution], current CD4 [cells/μL; square-root transformed to normalize distribution], and estimated years/duration of HIV), medical history (i.e., lifetime history of hypertension, hyperlipidemia, diabetes, high cholesterol, viral hepatitis, seizure, and head injury), and substance use (i.e., lifetime history of alcohol, cocaine, methamphetamine, and opiate abuse/dependence).

RESULTS

Cohort sample characteristics were compared with t-tests and χ2 analyses (Tables 1 and 2). There were significant differences between cohorts with respect to age, race/ethnicity, depressive symptoms, estimated HIV duration, current CD4, nadir CD4, HIV treatment, hemoglobin, lifetime history of multiple medical problems (i.e., viral hepatitis, hypertension, and hyperlipidemia), and lifetime history of substance use (cocaine, methamphetamine, and alcohol), with NNTC broadly reflecting an older and more health-compromised sample relative to CHARTER (as expected). The CHARTER cohort seemed to have a greater proportion of participants with positive substance use histories, relative to NNTC. Zero-order correlations for neuropsychological scaled scores within each cohort are reported in Tables 1 and 2, Supplemental Digital Content, https://links.lww.com/QAI/B411.

Primary Analyses and Results

Factor Confirmation/Development

A series of factor analyses was conducted on neuropsychological scaled scores to assess whether test scores loaded onto 7 domains, as previously proposed (Table 3). Factor analyses were completed on CHARTER first, as the training dataset, and on NNTC second, as the testing dataset. With listwise deletion, only 3% of the CHARTER sample was excluded from the analyses, whereas 23% of the NNTC sample was excluded.

Using the 7-domain structure resulted in CFAs with less than adequate fit in CHARTER (RMSEA = 0.10; BCFI = 0.92; BBNFI = 0.91; SRMR = 0.04; see Figure 1, Supplemental Digital Content, https://links.lww.com/QAI/B411 for structure and standardized factor loadings). We attempted to improve model fit by including an exogenous, latent cognitive factor (i.e., overarching factor that is not caused by other factors; borrowing the concept of a GDS). This model did not converge.

In this context, the next step was to explore the structure of the data using a data-driven approach. As such, an oblique EFA on the correlation matrix of all neuropsychological test scaled scores was conducted in CHARTER (see Figure 2, Supplemental Digital Content, https://links.lww.com/QAI/B411 for model structure and standardized factor loadings). This EFA model identified 6 factors and multiple interfactor correlations. The first identified factor (including test scores representing speeded information processing, working memory, and executive functioning) explained most of the total variation (for CHARTER: 46% of total variance). Each of the remaining factors consisted of scores from only a single test (Grooved Pegboard, Hopkins Verbal Learning Test-Revised, Brief Visuospatial Memory Test-Revised, or Wisconsin Card Sorting Test) and/or reflected common method variance (2 verbal fluency measures).

Subsequently, the CFA model was modified to incorporate the 6-factor structure suggested by the EFA results, without permitting factors to correlate. This model was not interpretable per model warnings. Adding an exogenous, latent cognitive factor to this 6-factor structure improved overall model fit to acceptable standards for CHARTER (RMSEA = 0.07, BCFI = 0.96; BBNFI = 0.95; SRMR = 0.04). However, when this model was applied to NNTC, it was not interpretable per model warnings.

Model alterations continued to determine whether a similar structure would be acceptable in both cohorts. Model fit was good for CHARTER (RMSEA = 0.07, BCFI = 0.97; BBNFI = 0.97; SRMR = 0.03) and then NNTC (RMSEA = 0.07, BCFI = 0.96, BBNFI = 0.95, SRMR = 0.04) when there were 4 factors, without verbal fluency or Wisconsin Card Sorting Test scores (Figs. 1A, B for structure and standardized factor loadings). Fit indices for these models remained largely the same with the addition of an exogenous, latent general cognitive factor for both cohorts. The first identified factor (including speeded processing, working memory, and executive function measures) accounted for 75% of total variance in CHARTER and 65% of total variance in NNTC. Of note, the test scores that comprised this first factor were robustly stable across models in both cohorts.

FIGURE 1. Final CFA for CHARTER (A) and NNTC (B). Standardized factor loadings are presented. As detailed in the tables, these models have satisfactory fit per multiple indices of overall model fit. Abbreviations for neuropsychological tests: BVMT-R, Brief Visuospatial Memory Test-Revised; HVLT-R, Hopkins Verbal Learning Test-Revised; WMS-III, Wechsler Memory Scale-Third edition; PASAT, Paced Auditory Serial Addition Test; WAIS-III, Wechsler Adult Intelligence Scale-Third edition.

Secondary Analyses and Results

Secondary analyses were completed to assess the utility of an index score generated from the initial factor in the EFAs and best-fitting CFAs across cohorts, as a metric for assessing cognitive impairment in PLWH. Given the nature of the tests that comprised this factor, it was denoted the "Cognitive Efficiency Factor" (CEF). Demographically corrected T-scores were used in these secondary analyses, as these better reflect deviation of performance from normative expectations.

CEF was first calculated as the arithmetic average of T-scores of the 6 neuropsychological tests that comprised the initial EFA and final CFA models (i.e., WAIS-III Digit Symbol, WAIS-III Symbol Search, WAIS-III Letter-Number Sequencing, Trail Making Test—Part A, Trail Making Test—Part B, and Paced Auditory Serial Addition Task—first channel only). A minimum of 4 of 6 scores was required for CEF calculation. Although individual loading values onto this initial factor/CEF were not identical, their relative magnitudes/weightings were similar, and no significant discrepancy between the arithmetic mean and a weighted mean was found in either cohort. In both cohorts, CEF average T-scores showed a good fit to a Gaussian distribution (Fig. 2A).
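
As a minimal sketch of this rule (not the authors' code; test labels abbreviated for illustration), the CEF average T-score is simply the mean of whichever of the 6 component T-scores are available, provided at least 4 are present:

```python
import numpy as np

CEF_TESTS = [
    "Digit Symbol", "Symbol Search", "Letter-Number Sequencing",
    "Trail Making A", "Trail Making B", "PASAT (first channel)",
]


def cef_average_t(t_scores):
    """CEF average T-score: arithmetic mean of available component T-scores;
    at least 4 of the 6 tests are required."""
    available = [t_scores[name] for name in CEF_TESTS
                 if t_scores.get(name) is not None]
    if len(available) < 4:
        return None  # CEF not calculable for this participant
    return float(np.mean(available))


# Hypothetical participant missing one of the 6 component tests.
example = {
    "Digit Symbol": 44, "Symbol Search": 39, "Trail Making A": 47,
    "Trail Making B": 41, "PASAT (first channel)": 36,
}
print(cef_average_t(example))  # 41.4 (mean of the 5 available T-scores)
```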

FIGURE 2. CEF distribution and comparison with Frascati and GDS. A, Frequency distributions of CEF average T-scores in both NNTC and CHARTER reveal a good fit to a Gaussian distribution. B, Frequency distributions of CEF DDS, with the majority for both NNTC and CHARTER having values between 0.0 and 0.2. C, CEF average T-scores compared with Frascati diagnosis for NNTC and CHARTER. Gray lines indicate median and interquartile range. One-way analysis of variance for both revealed significance (P < 0.001). Tukey multiple comparison tests were P < 0.0001 between all conditions except asymptomatic neurocognitive impairment (ANI) vs. mild neurocognitive disorder (MND), which for NNTC P = 0.0001, and CHARTER P = 0.0005. D, CEF DDS compared with Frascati diagnosis for NNTC and CHARTER. Gray lines indicate median and interquartile range. The nonparametric Kruskal–Wallis test was used given the distribution of the data, and revealed significance (P < 0.0001) for both. Dunn multiple comparison tests were P < 0.0001 between all conditions except ANI vs. MND, which for NNTC P = 0.0002, and CHARTER P = 0.0147, and for CHARTER only ANI vs. HAD P = 0.0001, and MND vs. HAD P = 0.0361. E, ROC curves for NNTC and CHARTER, comparing GDS with the CEF average T-scores and DDS to diagnose impairment (using the Frascati criteria) for NNTC, CHARTER, and the combined cohorts. The area under the curve (AUC) is indicated. F, Frequency distribution of CEF average T-scores and DDS in unimpaired subjects from the combined NNTC and CHARTER cohorts. For the average T-scores (left) the cut-off at 1.5 SD below the mean is indicated, and for DDS (right) the cut-off at 0.5 is indicated. ROC, receiver operating characteristic.

Because domain deficit scores (DDS) have shown utility in the assessment of impairment, a DDS was also calculated for the CEF for comparison. The DDS was generated by converting T-scores for the individual neuropsychological tests that comprise the CEF to 0 to 5 ratings. These ratings were then averaged across the total number of tests (6) to calculate the DDS. As expected, the DDS was skewed toward lower values (Fig. 2B).
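
A corresponding sketch for the CEF DDS (illustrative only; deficit-rating bands assumed as in the conventional deficit-score scheme):

```python
def deficit_rating(t):
    """T-score to 0-5 deficit rating (T >= 40 rated 0; lower band edges
    assumed from the conventional deficit-score scheme)."""
    for cutoff, rating in [(40, 0), (35, 1), (30, 2), (25, 3), (20, 4)]:
        if t >= cutoff:
            return rating
    return 5


def cef_dds(component_t_scores):
    """CEF DDS: mean 0-5 deficit rating over the 6 CEF component tests."""
    ratings = [deficit_rating(t) for t in component_t_scores]
    return sum(ratings) / len(ratings)


print(cef_dds([44, 39, 47, 41, 36, 52]))  # (0+1+0+0+1+0)/6 ≈ 0.33
```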

Both average T-scores and DDS were statistically capable of distinguishing HAND levels (Figs. 2C, D). Although the CEF was not developed to identify all of the deficits that can comprise HAND, we next compared the classification accuracy of the CEF (either as average T-scores or DDS) with the GDS in distinguishing individuals considered neuropsychologically normal vs. individuals diagnosed with HAND by the clinical ratings/Frascati approach. Receiver operating characteristic curve analyses indicated that neither method was as good as the GDS in diagnosing impairment as a dichotomous outcome (Fig. 2E).
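
For readers who wish to reproduce this type of comparison, area under the ROC curve can be computed as sketched below (synthetic example values; scikit-learn assumed). Because higher GDS/DDS values indicate more impairment while lower T-scores do, the average T-score is negated so both indices are oriented the same way:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic, illustrative values only; 1 = HAND by clinical ratings/Frascati.
y_true = np.array([0, 0, 0, 1, 1, 0, 1, 0])
gds = np.array([0.1, 0.3, 0.2, 0.8, 0.6, 0.4, 1.1, 0.0])
cef_avg_t = np.array([52, 46, 49, 38, 42, 45, 35, 55])

print("AUC, GDS:      ", roc_auc_score(y_true, gds))
print("AUC, CEF avg T:", roc_auc_score(y_true, -cef_avg_t))
```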

A cut-off value for CEF average T-score was defined for practical purposes. As normative values were not available, we examined CEF average T-score values from those considered neuropsychologically normal in the NNTC and CHARTER cohorts. These values followed a normal distribution (Fig. 2). We defined 1.5 SDs below the mean as significant impairment. As such, CEF average T-scores <41.7 were considered impaired and average T-scores ≥41.7 were normal. Similar to other definitions, we used a cutoff of DDS >0.5 for impairment (Fig. 2).
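
The cut-off derivation amounts to the following calculation (a sketch, not the authors' code), applied to CEF average T-scores from the neuropsychologically normal participants:

```python
import numpy as np

def impairment_cutoff(normal_cef_t, n_sd=1.5):
    """Cut-off = mean - 1.5 SD of CEF average T-scores among
    neuropsychologically normal participants (approximately 41.7 here)."""
    normal_cef_t = np.asarray(normal_cef_t, dtype=float)
    return normal_cef_t.mean() - n_sd * normal_cef_t.std(ddof=1)

# Usage (with the actual normal-group scores):
#   cutoff = impairment_cutoff(normal_group_scores)
#   impaired = participant_cef_t < cutoff
```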

Predictors of CEF Values

Exploratory independent-samples t-tests and χ2 tests were conducted to identify predictors of impaired CEF average T-score and DDS (using the aforementioned cut-offs). To control for multiple comparisons in CHARTER, we used the Benjamini–Hochberg procedure by dependent variable. With this correction, only adjusted P-values < 0.05 were considered statistically significant.
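
The Benjamini–Hochberg false discovery rate correction can be applied per family of comparisons as sketched below (illustrative P-values; the statsmodels implementation is assumed rather than the software actually used):

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Synthetic raw P-values for one family of comparisons (one dependent variable).
raw_p = np.array([0.001, 0.012, 0.030, 0.044, 0.200, 0.510])

reject, p_adjusted, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")
for p, p_adj, sig in zip(raw_p, p_adjusted, reject):
    print(f"raw P = {p:.3f}  BH-adjusted P = {p_adj:.3f}  significant = {sig}")
```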

CHARTER Analyses

With respect to impairment frequencies, 19.9% of the CHARTER sample had a CEF average T-score <41.7, and 22.4% had a CEF DDS >0.5. Independent-samples t-tests revealed that impaired CEF average T-score was associated with greater age, lower nadir CD4, lower hemoglobin, greater estimated duration of HIV, and lower premorbid ability. χ2 analyses with categorical covariates revealed that CEF average T-score was independently associated with lifetime history of cocaine and methamphetamine abuse/dependence; the direction of findings unexpectedly suggested that individuals with such substance use histories were more likely to have normal CEF than not. Results also revealed that individuals with impaired CEF average T-score had a higher proportion of positive diabetes, hyperlipidemia, and high cholesterol lifetime histories than those with normal CEF T-score. Tables 3 and 4, Supplemental Digital Content, https://links.lww.com/QAI/B411 list results by average CEF T-score.

Additional independent-samples t-tests revealed detectable associations between CEF DDS and age, years of education, premorbid ability, HIV duration, and nadir CD4, with greater age, more years of education, lower premorbid ability, longer HIV duration, and lower nadir CD4 associated with impaired CEF DDS. χ2 analyses revealed that CEF DDS was independently associated with lifetime history of cocaine, methamphetamine, and alcohol abuse/dependence, with results suggesting that individuals with such substance use histories were more likely to have normal CEF DDS than not. Results also revealed that individuals with impaired CEF DDS had a higher proportion of positive diabetes, hyperlipidemia, and high cholesterol lifetime histories than those with normal CEF DDS. Tables 5 and 6, Supplemental Digital Content, https://links.lww.com/QAI/B411 list results by CEF DDS.

Based on these results, multiple regression was used to assess the relative value of age, premorbid ability, nadir CD4, estimated duration of HIV, hemoglobin, lifetime histories of cocaine and methamphetamine abuse/dependence, hyperlipidemia, high cholesterol, and diabetes in predicting CEF average T-score (as a continuous variable). The model revealed a collective, significant effect of these variables on CEF average T-score, F(10, 1104) = 30.61, P < 0.001. Review of standardized coefficients revealed that only age (β = −0.33), premorbid ability (β = 0.23), hemoglobin (β = 0.06), and lifetime history of diagnosed cocaine abuse/dependence (β = 0.17) remained as significant predictors of CEF average T-score (P-values < 0.05).
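
Standardized coefficients of this kind can be obtained by z-scoring the outcome and predictors before fitting an ordinary least squares model; a sketch follows (hypothetical file and column names, statsmodels assumed):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset: one row per CHARTER participant.
df = pd.read_csv("charter_cef_predictors.csv").dropna()

outcome = "cef_avg_t"
predictors = ["age", "premorbid_ability", "nadir_cd4_sqrt", "hiv_duration",
              "hemoglobin", "cocaine_hx", "meth_hx", "hyperlipidemia_hx",
              "high_cholesterol_hx", "diabetes_hx"]

# z-scoring the outcome and predictors makes the OLS coefficients standardized betas.
cols = [outcome] + predictors
z = (df[cols] - df[cols].mean()) / df[cols].std(ddof=1)

model = sm.OLS(z[outcome], sm.add_constant(z[predictors])).fit()
print(model.summary())  # standardized coefficients, t statistics, overall F test
```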

Furthermore, multiple regression was used to assess the relative value of age, years of education, premorbid ability, estimated duration of HIV, nadir CD4, lifetime histories of cocaine, methamphetamine, and alcohol abuse/dependence, hyperlipidemia, high cholesterol, and diabetes in predicting CEF DDS (as a continuous variable). The model revealed a collective, significant effect of these variables on CEF DDS, F(11, 1111) = 16.39, P < 0.001. Review of standardized coefficients revealed that only age (β = 0.17), years of education (β = 0.18), premorbid ability (β = −0.24), estimated duration of HIV (β = 0.06), and lifetime history of diagnosed cocaine abuse/dependence (β = −0.14) continued to have detectable associations with CEF DDS (P-values < 0.05).

NNTC Analyses

Analyses were completed on a sample of 766 NNTC participants. CEF value could not be calculated for 32 participants, because they had fewer than 4 values for tests comprising CEF. With respect to impairment frequencies, 29.2% had CEF average T-score <41.67, and 28.9% had CEF DDS >0.5. Where possible, analyses were completed on variables similar to those in CHARTER. The Benjamini–Hochberg procedure was used to control for multiple comparisons. With this correction, only adjusted P-values < 0.05 were considered statistically significant.

Independent samples t-tests did not reveal any significant associations with CEF average T-score (P's > 0.001). χ2 analyses with categorical covariates revealed that CEF average T-score was only dependent on race/ethnicity, likely secondary to relatively higher proportions of Hispanic and “other” race/ethnicity categories having CEF average T-score <41.7, relative to participants in Caucasian/White and African American/Black categories. Tables 7 and 8, Supplemental Digital Content, https://links.lww.com/QAI/B411 report results by CEF average T-score.

Similarly, independent samples t-tests did not reveal significant associations with CEF DDS (all P's > 0.001). χ2 analyses with categorical covariates revealed that CEF DDS was dependent on race/ethnicity, with the same pattern of results previously demonstrated with CEF average T-score. Tables 9 and 10, Supplemental Digital Content, https://links.lww.com/QAI/B411 report results by CEF DDS.

DISCUSSION

Findings from 2 samples of PLWH suggested that neuropsychological variables did not readily load onto a single unitary factor representative of global cognitive functioning without further data reduction. When neuropsychological data were reduced, the generated models demonstrated adequate fit whether or not a global cognitive functioning factor was included. Furthermore, results indicated that individual neuropsychological test scores did not load onto theoretically defined cognitive domains as expected or as previously reported in the literature. Attention, working memory, and speeded information processing loaded onto a common factor, and learning and recall scores loaded onto factors defined by modality (verbal vs. visual). Final models for CHARTER and NNTC indicated that tests assessing speeded information processing (i.e., WAIS-III Symbol Search, Digit Symbol, Trails A), working memory (i.e., WAIS-III Letter-Number Sequencing and PASAT), and cognitive set-shifting (i.e., Trails B) explained most of the total variation in cognitive functioning in these samples of PLWH (46%–48% per EFAs and 65%–76% per CFAs). We refer to this factor as the "cognitive efficiency factor" (CEF). We explored the characteristics of the CEF and its possible utility as an alternate cognitive index score, based on factor analysis, in PLWH.

When examining variables that may account for CEF, a few consistent predictors of CEF emerged. In CHARTER, participant age, premorbid ability, nadir CD4, estimated HIV duration, hemoglobin, and lifetime histories of diabetes, high cholesterol, hyperlipidemia, cocaine abuse/dependence, and methamphetamine abuse/dependence were associated with CEF average T-score. The same variables were associated with CEF DDS, with the exception of hemoglobin. Furthermore, there were unique associations between years of education and lifetime history of alcohol abuse/dependence with CEF DDS in CHARTER.

It was expected that age would inversely predict CEF, because frontal and subcortical areas are vulnerable to both HIV and aging effects, leading to decline in speeded functions.20 Current findings may suggest accelerated cognitive aging, because the demographically corrected T-scores used to calculate CEF had already corrected for "normal" aging. Accelerated or advanced aging is consistent with other findings in HIV-infected subjects.21–24 It was also expected that normal CEF would be positively associated with level of premorbid ability or cognitive reserve. The direction of CEF's associations with nadir CD4 and estimated HIV duration is consistent with the literature, as lower nadir CD4 and longer HIV disease duration have been associated with greater cerebral atrophy25 and neuropsychological impairment.26,27 With respect to hemoglobin, low levels/anemia have been found to predict neurocognitive impairment (as measured by GDS),28 consistent with our findings. Further, the development of cerebrovascular risk factors, such as diabetes and hyperlipidemia,29 has been speculated to reflect an indirect effect of HIV infection/treatment, and such factors have been shown to be risk factors for cognitive impairment in HIV.30,31 However, there does not seem to be a clear explanation for the positive associations of cocaine, methamphetamine, and alcohol abuse/dependence history with CEF. Of note, participants with these positive substance use histories tended to use other substances as well, making conclusions about any one particular substance difficult. Recent literature reviews on the topic of cognition and cocaine32 suggest that cocaine use does not tend to lead to gross cognitive impairment, and differences in performance between users and nonusers on select cognitive tests are relatively small, highlighting that there may not be clinically significant changes in cognition related to using these substances in general. Interestingly, when significant predictors of CEF average T-score were examined collectively, only a few variables remained significant, including age, premorbid ability, hemoglobin, and history of cocaine abuse/dependence, with age accounting for the most variance. However, results differed slightly when collectively examining significant predictors of CEF DDS; in this context, premorbid ability, years of education, age, estimated HIV duration, and history of cocaine abuse/dependence were the only variables that remained statistically significant, with premorbid ability accounting for the most variance. Both sets of results suggest that psychosocial variables, including age and premorbid ability, may hold more weight than variables specific to the HIV disease process (e.g., nadir CD4 and estimated HIV duration) and other variables reflective of contributing medical histories, when examining neurocognitive impairment in PLWH.

In NNTC, race/ethnicity was the only covariate significantly associated with CEF, with participants identified as Hispanic and “Other” exhibiting higher proportions of impaired CEF. There were relatively smaller sample sizes for Hispanic and “Other” groups. The effect of having less representative normative data for these groups, relative to those available for Caucasians and African Americans, cannot be ruled out. Furthermore, we also cannot rule out the impact of cultural bias and other psychosocial factors, including language, on these results. However, another CHARTER study revealed that Hispanics were more likely to decline neurocognitively over a 3-year (on average) follow-up.33 This association with Hispanic ethnicity was not anticipated by the respective authors, yet was posited to be in part secondary to reduced access to health care, relative to non-Hispanic whites.

The finding that "cognitive efficiency" is a prominent cognitive factor across cohorts is consistent with select literature on cognition2,13,34 and neuroimaging10,11 in PLWH. Overall, a mild frontal–subcortical profile tends to be characteristic of PLWH, although this profile has previously been considered "spotty," without great consistency15 or uniform underlying neuropathology.35 The current analyses touch on both points, as multiple cognitive factors (i.e., "cognitive efficiency," learning/memory, and fine motor dexterity) derived from our final CFA analyses were found to be predictive of overall cognitive functioning in 2 cohorts of PLWH, yet "cognitive efficiency" appeared to account for the greatest variance.

Limitations of the analyses include the lack of a gold standard for identifying cognitive impairment in PLWH (outside of neuropsychological test performance) and the lack of a control/normal group for comparison. Results from CFAs may reflect, in part, shared method variance. In addition, CFA model results for NNTC may be biased because 23% of the sample was removed secondary to listwise deletion. Furthermore, neuropsychological variables were limited to those available in the research test batteries. Of note, these test batteries included slightly more speeded measures than other cognitive tasks. Finally, although "cognitive efficiency" seems to be an important component of a comprehensive test battery, we do not recommend or imply that it should replace a comprehensive battery.

The current analyses identified an alternate cognitive index (measurable by average T-score or DDS) that warrants further investigation with respect to distinguishing PLWH who are cognitively intact vs. those with impairment. We intend to isolate a subpopulation who may follow a unique trajectory, experience distinct infection outcomes, and/or respond differentially to treatment regimens. The proposed index may be relatively easy to implement in research and clinical practice. Future directions include investigating predictors of "cognitive efficiency" and identifying how the HIV disease process is associated with changes in "cognitive efficiency" over time. The ultimate goals are to assess whether cognitive efficiency can serve as a practical marker of cognitive prognosis in HIV, to help improve diagnostic accuracy of cognitive impairment among PLWH, and to identify a measure that may be valuable for preventative or treatment trials.

ACKNOWLEDGMENTS

The authors gratefully acknowledge the investigators and staff associated with NNTC and CHARTER, and the participants who partook in these studies.

REFERENCES

1. Cihlar T, Fordyce M. Current status and prospects of HIV treatment. Curr Opin Virol. 2016;18:50–56.
2. Heaton RK, Franklin DR, Ellis RJ, et al. HIV-associated neurocognitive disorders before and during the era of combination antiretroviral therapy: differences in rates, nature, and predictors. J Neurovirol. 2011;17:3–16.
3. Cysique LA, Brew BJ. Neuropsychological functioning and antiretroviral treatment in HIV/AIDS: a review. Neuropsychol Rev. 2009;19:169–185.
4. Woods SP, Moore DJ, Weber E, et al. Cognitive neuropsychology of HIV-associated neurocognitive disorders. Neuropsychol Rev. 2009;19:152–168.
5. Dawes S, Suarez P, Casey CY, et al. Variable patterns of neuropsychological performance in HIV-1 infection. J Clin Exp Neuropsychol. 2008;30:613–626.
6. Heaton RK, Grant I, Butters N, et al. The HNRC 500—neuropsychology of HIV infection at different disease stages. HIV Neurobehavioral Research Center. J Int Neuropsychol Soc. 1995;1:231–251.
7. Devlin KN, Giovannetti T. Heterogeneity of neuropsychological impairment in HIV infection: contributions from mild cognitive impairment. Neuropsychol Rev. 2017;27:101–123.
8. Blackstone K, Moore DJ, Franklin DR, et al. Defining neurocognitive impairment in HIV: deficit scores versus clinical ratings. Clin Neuropsychol. 2012;26:894–908.
9. Antinori A, Arendt G, Becker JT, et al. Updated research nosology for HIV-associated neurocognitive disorders. Neurology. 2007;69:1789–1799.
10. Becker JT, Sanders J, Madsen SK, et al. Subcortical brain atrophy persists even in HAART-regulated HIV disease. Brain Imaging Behav. 2011;5:77–85.
11. Jernigan TL, Archibald SL, Fennema-Notestine C, et al. Clinical factors related to brain structure in HIV: the CHARTER study. J Neurovirol. 2011;17:248–257.
12. Ragin AB, Wu Y, Storey P, et al. Diffusion tensor imaging of subcortical brain injury in patients infected with human immunodeficiency virus. J Neurovirol. 2005;11:292–298.
13. Fellows RP, Byrd DA, Morgello S. Effects of information processing speed on learning, memory, and executive functioning in people living with HIV/AIDS. J Clin Exp Neuropsychol. 2014;36:806–817.
14. Woods SP, Rippeth JD, Frol AB, et al. Interrater reliability of clinical ratings and neurocognitive diagnoses in HIV. J Clin Exp Neuropsychol. 2004;26:759–778.
15. Heaton RK, Clifford DB, Franklin DR Jr, et al. HIV-associated neurocognitive disorders persist in the era of potent antiretroviral therapy: CHARTER Study. Neurology. 2010;75:2087–2096.
16. Norman MA, Moore DJ, Taylor M, et al. Demographically corrected norms for african Americans and Caucasians on the Hopkins verbal learning test–revised, Brief visuospatial memory test–revised, stroop color and Word test, and Wisconsin card sorting test 64-card version. J Clin Exp Neuropsychol. 2011;33:793–804.
17. Bollen KA, Long JS, eds. Testing Structural Equation Models. Newbury Park, CA: Sage; 1993.
18. Bollen KA. Structural Equations with Latent Variables. New York, NY: Wiley; 1989.
19. Hooper D, Coughlan J, Mullen M. Structural equation modelling: guidelines for determining model fit. EJBRM. 2008;6:53–60
20. Kerchner GA, Racine CA, Hale S, et al. Cognitive processing speed in older adults: relationship with white matter integrity. PLoS One. 2012;7:e50425.
21. Goodkin K, Miller EN, Cox C, et al. Effect of ageing on neurocognitive function by stage of HIV infection: evidence from the Multicenter AIDS Cohort Study. Lancet HIV. 2017;4:e411–e422.
22. Sheppard DP, Iudicello JE, Morgan EE, et al. Accelerated and accentuated neurocognitive aging in HIV infection. J Neurovirol. 2017;23:492–500.
23. Levine AJ, Quach A, Moore DJ, et al. Accelerated epigenetic aging in brain is associated with pre-mortem HIV-associated neurocognitive disorders. J Neurovirol. 2016;22:366–375.
24. Gross AM, Jaeger PA, Kreisberg JF, et al. Methylome-wide analysis of chronic HIV infection reveals five-year increase in biological age and epigenetic targeting of HLA. Mol Cell. 2016;62:157–168.
25. Cohen RA, Harezlak J, Schifitto G, et al. Effects of nadir CD4 count and duration of human immunodeficiency virus infection on brain volumes in the highly active antiretroviral therapy era. J Neurovirol. 2010;16:25–32.
26. Ellis RJ, Badiee J, Vaida F, et al. CD4 nadir is a predictor of HIV neurocognitive impairment in the era of combination antiretroviral therapy. AIDS. 2011;25:1747–1751.
27. Valcour V, Yee P, Williams AE, et al. Lowest ever CD4 lymphocyte count (CD4 nadir) as a predictor of current cognitive and neurological status in human immunodeficiency virus type 1 infection—the Hawaii Aging with HIV Cohort. J Neurovirol. 2006;12:387–391.
28. Kallianpur AR, Wang Q, Jia P, et al. Anemia and red blood cell indices predict HIV-associated neurocognitive impairment in the highly active antiretroviral therapy era. J Infect Dis. 2016;213:1065–1073.
29. Mulligan K, Grunfeld C, Tai VW, et al. Hyperlipidemia and insulin resistance are induced by protease inhibitors independent of changes in body composition in patients with HIV infection. J Acquir Immune Defic Syndr. 2000;23:35–43.
30. Valcour VG, Shikuma CM, Shiramizu BT, et al. Diabetes, insulin resistance, and dementia among HIV-1-infected patients. J Acquir Immune Defic Syndr. 2005;38:31–36.
31. Foley J, Ettenhofer M, Wright MJ, et al. Neurocognitive functioning in HIV-1 infection: effects of cerebrovascular risk factors and age. Clin Neuropsychol. 2010;24:265–285.
32. Frazer KM, Manly JJ, Downey G, et al. Assessing cognitive functioning in individuals with cocaine use disorder. J Clin Exp Neuropsychol. 2018;40:619–632.
33. Heaton RK, Franklin DR Jr, Deutsch R, et al. Neurocognitive change in the era of HIV combination antiretroviral therapy: the longitudinal CHARTER study. Clin Infect Dis. 2015;60:473–480.
34. Walker KA, Brown GG. HIV-associated executive dysfunction in the era of modern antiretroviral therapy: a systematic review and meta-analysis. J Clin Exp Neuropsychol. 2018;40:357–376.
35. Moore DJ, Masliah E, Rippeth JD, et al. Cortical and subcortical neurodegeneration is associated with HIV neurocognitive impairment. AIDS. 2006;20:879–887.
36. Wilkinson GS. Wide Range Achievement Test 3. Wilmington, DE: Wide Range, Inc.; 1993.
37. Wechsler D. Wechsler Adult Intelligence Scale. 3rd ed. San Antonio, TX: Psychological Corporation; 1997.
38. Heaton RK, Miller W, Taylor MJ, et al. Revised Comprehensive Norms for an Expanded Halstead-Reitan Battery: Demographically Adjusted Neuropsychological Norms for African American and Caucasian Adults, Professional Manual. Lutz, FL: Psychological Assessment Resources, Inc.; 2004.
39. Benedict R, Schretlen D, Groninger L, et al. Hopkins Verbal Learning Test-Revised: normative data and analysis of inter-form and test-retest reliability. Clin Neuropsychol. 1998;12:43–55.
40. Benedict R. Brief Visuospatial Memory Test—Revised. Odessa, FL: Psychological Assessment Resources, Inc.; 1997.
41. Diehr MC, Cherner M, Wolfson TJ, et al. The 50 and 100-item short forms of the Paced Auditory Serial Addition Task (PASAT): demographically corrected norms and comparisons with the full PASAT in normal and clinical samples. J Clin Exp Neuropsychol. 2003;25:571–585.
Keywords: HIV; neuropsychology; HIV-associated neurocognitive disorder; factor analysis


Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.