
Reliability, validity and relevance of needs assessment instruments for informal dementia caregivers: a psychometric systematic review

Kipfer, Stephanie1,2; Pihet, Sandrine1

doi: 10.11124/JBISRIR-2017-003976


Introduction


Dementia is characterized by a progressive decline of cognitive and social functions. This limits the autonomy of those affected by dementia and makes it difficult for them to cope with daily life. They become increasingly dependent on the care of others, particularly informal caregivers. Informal caregivers are individuals who regularly provide unpaid care, assistance and/or supervision to a close person with reduced autonomy.1 Studies show that informal dementia caregivers provide care of a higher intensity (more hours per day) and longer duration (over more years) compared to caregivers of people without dementia. The caregiving time increases even more when the person with dementia shows behavioral symptoms.2 In a survey by Alzheimer Europe, almost half of informal dementia caregivers spent more than 10 hours per day providing care.3 This is comparable to data from the United States (US), where informal dementia caregivers spent on average nine hours per day caregiving,4 and where 31.1% of informal dementia caregivers provided care for two to three years, 18.5% for four to five years and 38.4% for six or more years.5 For 2015, the estimated economic value of unpaid care provided by informal dementia caregivers in the US was US$221.3 billion.5 In Switzerland, an example of a Western European country, informal caregivers contributed 80 million unpaid hours in 2016, a substantial increase compared to the 64 million hours invested in 2012 and 52 million in 2010.6,7 The contribution of informal caregivers is expected to increase further due to the rising care needs of an aging population and the growing prevalence of multiple chronic conditions, in particular dementia.6 At the same time, the number of available formal carers is not expected to increase accordingly.8 Informal caregivers thus play a key role, not only for the people with dementia but also for society, by contributing to the sustainability of the health care system.9-11 Recognizing the valuable contribution of informal caregivers and providing them with adequate support is therefore a core public health issue.

Caring for a person with dementia is a challenging experience, and the burden of informal dementia caregivers is higher than that of informal caregivers of persons with other chronic conditions.12,13 The strain of continuous care, the unpredictable course of the disease, and the neuropsychiatric symptoms of the person with dementia can cause high levels of stress, which often leads to physical, psychological, emotional, social and financial problems.2,13-15 In addition, family caregivers often have no experience in performing care, feel unprepared, and lack the required knowledge and support from health care providers to deliver appropriate care.5,16 Informal caregivers report feelings of tiredness, stress, helplessness and loneliness, and show a high prevalence of depression and anxiety.17 Due to the nature of dementia, informal caregivers also struggle with feelings of guilt, ambivalence, grief and loss. Identified physical problems include an increased risk of vascular disease, impaired wound healing, decreased immunity, and a reduced likelihood of engaging in preventive health behavior.17,18 Poor physical and psychological health not only impairs the quality of life of informal dementia caregivers but also affects their ability to provide care to the person with dementia and to sustain their own social support network, which leads to social isolation.2,18-20 Burden and health deterioration of informal dementia caregivers are core predictors of early institutionalization and mistreatment of their care recipient.18,21

Due to the challenges of caregiving and the associated burden, informal dementia caregivers report important unmet needs at all stages of the disease. Their needs cover very diverse areas, such as information about the illness and support resources; support for their own emotional concerns; support on how to communicate with the care recipient, the family or the service providers; practical support in daily care and respite; and financial support.2,22-26

Informal dementia caregivers often report that health care providers insufficiently attend and adapt to their multiple needs, and complain about care fragmentation and poor coordination, which ultimately increases their stress and leads them to underuse support services despite their needs.27-29 Underutilization of health care and other support resources contributes to the exhaustion of informal dementia caregivers and precipitates institutionalization of their care recipient, thereby increasing health care costs.21,30,31 Informal caregivers do not always express spontaneously and directly how their needs can be met.32 Evaluating their needs in a systematic manner is therefore crucial to supporting them in a person-centered way, in order to promote the quality of life of the caregiver and the affected person and to maintain the caring situation at home.

Most studies on the needs of informal dementia caregivers have used qualitative study designs. Existing quantitative questionnaires have limitations: very few items for caregivers,17,33 poor validation,22,23,34,35 or lack of empirical evidence about needs dimensions (factor structure). This limits their use in both research and clinical practice. In addition, many of the assessment instruments, particularly semi-structured interviews, are time-intensive (e.g. assessment alone takes on average two hours26 or 90 minutes23). Furthermore, most of the collected information in interviews is qualitative. As such it is usually extensive, and more time is needed to prepare the information to make it available for the caregiver or other service providers (e.g. transcriptions). In view of the growing economic pressure on the social system, and of the rising support needs associated with population aging, such resources are impossible to secure on a large scale.

A search in MEDLINE, CINAHL, the JBI Database of Systematic Reviews and Implementation Reports and the Cochrane Database of Systematic Reviews was performed in January 2017, and again in January and July 2018, to identify completed and in-progress systematic reviews on needs assessment instruments for informal dementia caregivers. Three systematic reviews examining dementia needs assessment instruments were identified.36-38 Two of these reviews documented the diverse instruments available, focusing on their content and methodological approach to measuring needs, with no specific interest in their psychometric properties. Novais et al.36 included all types of studies that conducted a needs assessment, and Bangerter et al.37 concentrated on quantitative findings. The third review (Mansfield et al.38) critically examined the psychometric properties of a very limited number of instruments (N = 4) due to highly restrictive inclusion criteria; only peer-reviewed studies in English published until August 2015 were included. In addition, they targeted instruments in which all items focused on the caregiver, and in which caregivers were directly asked about their needs.

Information about the methodological quality of measurement instruments is crucial for clinicians and researchers to make informed decisions about the best tool for their specific purpose. Therefore, the current review aimed to expand on the previous three reviews by focusing specifically on (at least partially) validated instruments and documenting their psychometric properties in detail. For this purpose, we followed the COSMIN (COnsensus-based Standards for the selection of health Measurement INstruments) guidelines for systematic reviews of measurement properties, which are recommended for psychometric reviews.39 In addition, the current review expanded on that of Mansfield et al.38 by including: i) instruments with diverse application methods (i.e. also professionally assessed) or with items for both caregivers and persons with dementia, ii) studies published until February 2019 and in English, French and German, and iii) the CINAHL database as a relevant source for caregiver literature. The current review also includes recommendations for conducting further psychometric validations in the field of needs assessment among informal dementia caregivers. This review provides comprehensive and systematic information to guide further development of well-validated needs assessment instruments for use in research and clinical settings.

This review was conducted according to an a priori published protocol.40

Review questions/objectives

The objective of this review was to critically appraise, compare and summarize the measurement properties of needs assessment instruments for informal dementia caregivers. More specifically, the review questions were as follows:

  • i) What are the measurement properties of needs assessment instruments for informal dementia caregivers (primary outcome)?
  • ii) What is the relevance of these instruments for clinical practice, research and informal caregivers according to their purpose, application method, administration burden, number of items and domain structure (secondary outcome)?

The objectives section has been revised, compared to the a priori protocol, to provide more clarity without changing the overall objectives of the review.

Inclusion criteria

The inclusion criteria were developed following the COSMIN guidelines and JBI guidance. The COSMIN guidelines for systematic reviews of measurement properties recommend the following inclusion criteria: i) the instrument should aim to measure the construct of interest (types of intervention(s)/phenomena of interest), ii) the study sample should comprise the target population of interest (types of participants), iii) the study should concern the type of measurement instrument of interest (self-reported or professionally interviewed), and iv) the aim of the study should be the development of a measurement instrument or the evaluation of one or more of its measurement properties (types of studies).39

Types of participants
This review considered studies that included informal caregivers of persons with dementia living at home as the study sample or as a part of it. Where possible, only the corresponding data were included in the review. Informal caregivers were defined as individuals who regularly provide unpaid care, assistance and/or supervision to a close person with reduced autonomy, in this context with dementia.1 There was no restriction regarding the type of dementia.

Types of intervention(s)/phenomena of interest
This review considered studies that report on needs assessment instruments for informal dementia caregivers. A need was defined as “a condition that is important to the subject and that is not being satisfied in the subject's present environment.”41(p.772) The application method of the instruments was either self-reported or professionally interviewed. Further inclusion criteria were added while identifying and screening the literature to complement those of the a priori protocol: instruments needed to be multidimensional (e.g. cover more dimensions than only information needs); instruments with items for both informal caregivers and people with dementia were included if they contained at least two dimensions for the caregivers; and measuring the needs of informal dementia caregivers had to be an explicit objective of the instrument or of specific dimensions.

Types of outcomes
This review considered studies that included the following outcomes for psychometric properties:

  • Reliability (test-retest reliability, inter-rater reliability, internal consistency)
  • Validity (content validity, construct validity, structural validity, sensitivity to change)

In order to characterize the instruments, we additionally documented the following instrument characteristics:

  • Purpose (original intended use)
  • Application method (self-reported, professionally interviewed)
  • Administration burden (training for clinicians, time for completion)
  • Number of items and domain structure.

As not every article provided data for all of the psychometric outcomes, articles that reported at least one outcome regarding reliability or validity were considered. Criterion-related validity was not considered in the results, as there was no reasonable gold standard available for the included instruments (in accordance with COSMIN guidelines39). No data were found for the outcome sensitivity to change.

Types of studies

This review considered psychometric studies, namely, instrument development or instrument evaluation studies. Other types of studies (in which needs assessment instruments were used) were included to identify eligible instruments and their responsible authors. If no published or unpublished psychometric study was available, other types of studies (e.g. a survey) were only included if they provided sufficient information to evaluate the methodological validity of at least one psychometric property according to the COSMIN checklist.42

Methods
This review was conducted according to an a priori published protocol40 and has been registered with PROSPERO (CRD42018090611).

Search strategy

The search strategy aimed to find both published and unpublished studies. A three-step search strategy was utilized in this review. An initial limited search of MEDLINE and CINAHL was undertaken, followed by analysis of the text words contained in the titles and abstracts, and of the index terms used to describe the articles. A second search was undertaken in February 2019 across all included databases. The search strategy considered all identified keywords and index terms, as well as search blocks for dementia and patient-reported outcome measures provided by the study group for Biomedical Information of the Dutch Library Association (KNVI)43 and a search block combined with a filter for measurement properties, as suggested by, and available from, the COSMIN website.44 Thirdly, the reference lists of all selected full texts were searched for additional studies. A complementary search was performed in the included databases and gray literature using the names of the needs assessment instruments identified in the three foregoing steps, and authors were contacted to obtain possible additional gray literature relating to their instrument. Studies published in English, German and French were considered for inclusion in this review. There was no limitation regarding the publication date.

The databases searched included Embase, MEDLINE, CINAHL and PsycINFO. In contrast to the a priori protocol, Embase was searched as recommended by COSMIN, and CINAHL was searched instead of OVID Nursing.39

The search for unpublished studies included Google Scholar, ProQuest Dissertations and Theses, ResearchGate (contact with relevant researchers), homepages with information about needs assessment/outcome tools, and homepages of dementia or caregiver associations or organizations.

Seven relevant researchers identified during the literature search were contacted by email to obtain unpublished psychometric studies or testing of the instrument, or to request further or missing data. Two of them provided the instrument itself to complete the data extraction. In two cases, additional data were delivered to evaluate the methodological quality. One unpublished publication was found by contacting the responsible researcher. Three authors were unable to provide additional information. The full search strategy is provided in Appendix I.

Study selection

Following the search, all identified citations were collated and uploaded into EndNote X7.8 (Clarivate Analytics, PA, USA), and duplicates were removed. Titles and abstracts were then screened by two independent reviewers for assessment against the inclusion and exclusion criteria for the review. Potentially relevant publications were retrieved in full and their citation details imported into the standardized data extraction tool developed for this review. The full texts of selected citations were assessed in detail against the inclusion and exclusion criteria by two independent reviewers. Reasons for exclusion of full-text studies that did not meet the inclusion criteria are reported in Appendix II. Any disagreements that arose between the reviewers at each stage of the study selection process were resolved through discussion or with a third reviewer.

Assessment of methodological quality

Publications selected for retrieval were assessed by two independent reviewers for methodological validity prior to inclusion in the review using the COSMIN checklist.42,45,46 The COSMIN checklist is a standardized tool recommended for use in systematic reviews of measurement properties.42 This tool fulfills the specific requirements of a psychometric review and has already been successfully used in another JBI review protocol.47 The checklist is modular and originally consists of 12 boxes with five to 18 items per box. We therefore used only the seven boxes evaluating psychometric properties relevant to our review, namely, internal consistency, reliability, measurement error, content validity, structural validity, hypotheses testing and responsiveness. The box for criterion-related validity was not included as there is no reasonable gold standard available. The three boxes for additional methodological standards (item response theory models, interpretability and cross-cultural validity) were not used as they focus on more advanced properties. The last box, on generalizability, was used for data extraction of study characteristics. Each item of the checklist is assessed on a four-point response scale: excellent, good, fair and poor; however, some items have only two or three response options (e.g. only excellent, fair or poor). The methodological quality score per box was obtained by taking the lowest rating across all items in the box (“worst score count”). The lowest score across all boxes represented the overall score of the reviewed study. Studies with poor scores in all boxes would have been excluded from the review, as this would indicate inadequate methodological quality; in this review, none of the included studies were excluded on this basis. The checklist used for assessing the methodological quality can be found in the a priori published protocol.40
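The “worst score count” rule described above can be stated precisely. The following is a minimal illustrative sketch (the rating labels come from the checklist; the function and variable names are ours, not from the COSMIN manual):

```python
# "Worst score count": a box's score is the lowest rating across its items,
# and a study's overall score is the lowest score across its evaluated boxes.
RANKS = {"poor": 0, "fair": 1, "good": 2, "excellent": 3}

def box_score(item_ratings):
    """Score one COSMIN box: the lowest rating across its items."""
    return min(item_ratings, key=lambda r: RANKS[r])

def overall_score(boxes):
    """Overall study score: the lowest box score across all evaluated boxes."""
    return min((box_score(items) for items in boxes.values()),
               key=lambda r: RANKS[r])

# Hypothetical study: excellent/good items in one box, a fair item in another.
study = {
    "internal_consistency": ["excellent", "good", "excellent"],
    "content_validity": ["good", "fair"],
}
print(box_score(study["content_validity"]))  # fair
print(overall_score(study))                  # fair
```

As the example shows, a single low-rated item is sufficient to pull down the score of its whole box, and a single low-scoring box determines the study's overall rating.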

All disagreements that arose between the reviewers were resolved through discussion, or with a third reviewer.

Data extraction

Data were extracted from papers included in the review using the standardized data extraction tool developed for this review.40 This tool was inspired by the standardized data extraction tools in the JBI System for the Unified Management, Assessment and Review of Information (JBI SUMARI; JBI, Adelaide, Australia), selecting the relevant parts and adapting them to the specificities of a psychometric review. JBI SUMARI is a web application designed to support researchers and practitioners in the entire process of conducting a systematic review. The data extracted included specific details about: i) study characteristics, ii) instrument characteristics and iii) outcomes of significance for the review question and specific objectives. Although the comprehensiveness and accuracy of the provided data varied across studies, the following data were collected:

  • i) Study characteristics: citation details, aim of the study, study design and method, setting, population characteristics, definition of informal caregivers and needs.
  • ii) Instrument characteristics: name of the instrument, purpose, target population, application method, administrative burden, number of items and domain structure, range of scores, response options/format.
  • iii) Psychometric outcomes: reliability (test-retest reliability, inter-rater reliability, internal consistency), validity (content validity, construct validity, structural validity).

Data were extracted and double checked independently by two reviewers (first and second author). To minimize errors, the data extraction form was first pilot tested and a standardized form was used. Disagreements between the reviewers were resolved by discussion, or with a third reviewer.

Data synthesis

The main aim of the data synthesis was to compare outcomes to provide recommendations on the most suitable instrument for research, clinical use and informal caregivers. The findings on instrument characteristics, reliability and validity were compared and described in narrative form, including tables to aid data presentation. The domain structure of the instruments and a summary of their content are presented in a table and in narrative form (see Appendix III and Table 1).

Table 1: Summary of topics

The quality criteria from Terwee et al.48 were used to judge the psychometric outcomes of the different instruments, namely, their content validity, internal consistency, construct validity, test-retest reliability (agreement), and inter-rater reliability (reliability). The quality of the instruments was assessed as positive, indeterminate, or negative, with a fourth category for “no information available and doubtful design or method.”40 The results of this appraisal are presented in a narrative form and in Appendix IV.

Results
Study inclusion

A total of 4909 records were identified through the systematic search in the four databases. Searches in gray literature and requests to relevant researchers for unpublished literature or publications about specific instruments identified a priori revealed seven additional publications. After removing duplicates, 3468 records remained. Another 3404 records were excluded for not meeting the inclusion criteria after screening of their titles and abstracts, resulting in 64 full texts assessed for eligibility. After reading the full texts, 19 articles met the inclusion criteria and were assessed for quality. One publication was excluded due to insufficient data to evaluate the methodological quality. Eighteen articles49-66 covering 14 different needs assessment instruments were included in the review; in four cases, two different publications describing the same instrument were included, as they provided complementary information.50-55,62,63 A flow chart of the study selection is presented in Figure 1.67 The excluded full texts and the reasons for exclusion according to the inclusion criteria or the critical appraisal are listed in Appendix II. Table 2 presents an overview of the included instruments, their acronyms and the authors of the included publications.
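The selection counts reported above are internally consistent, as a quick arithmetic check shows (all figures taken directly from the text):

```python
# Sanity check of the study-selection flow reported in the text.
database_records = 4909        # systematic search in four databases
additional_publications = 7    # gray literature and author contacts
after_deduplication = 3468     # records remaining after duplicate removal
excluded_at_screening = 3404   # excluded on titles/abstracts

full_texts = after_deduplication - excluded_at_screening
assert full_texts == 64        # full texts assessed for eligibility

met_criteria = 19              # articles meeting the inclusion criteria
excluded_at_appraisal = 1      # insufficient data for quality evaluation
included = met_criteria - excluded_at_appraisal
assert included == 18          # included articles, covering 14 instruments
```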

Figure 1: PRISMA flow diagram of search and study selection process67

Table 2: Overview of instruments and authors

Characteristics of included studies

Eleven of the 18 included publications focused on the development or evaluation of an instrument.49,51-55,58,59,61,64,65 In addition, a development report and a manual were included to assess the content validity of an instrument or to provide additional results for the psychometric testing.50,63 Five other studies, not primarily aimed at validation but containing sufficient information about the development or evaluation of the instruments used, were also integrated into the review.56,57,60,62,66 The publication dates ranged from 1996 to 2019.

Nine of the development or evaluation samples included only caregivers of persons with dementia living in the community.49,54-60,64 The remaining nine samples were mixed, including caregivers of persons with dementia residing in the community or living in institutions.50-53,61-63,65,66 Study samples differed in relationship status, but caregivers were mostly spouses or adult children of the persons with dementia. In all studies, the majority of caregivers were female. The mean age of the caregivers ranged from 51 to 68 years (with standard deviations of 10 to 14.5 years), and the mean age of the people with dementia ranged from 76 to 85 years (with standard deviations of 5.9 to 9.8 years). An overview of the characteristics of the different samples is presented in Appendix V.

Description of instruments

Five of the instruments were developed or tested in the US,56-59,66 four in the United Kingdom (UK),49,50,51,64,65 and one each in Austria,52,53 Singapore,54,55 Greece,60 Canada61 and the Netherlands.62,63 Eleven instruments contained only items for caregivers (CADI, RAM, QCNE, UNM, EAC, QNP, PBH-LCI:D, CNCD, NAS, CNA-D, SIDECAR), while three also included items for persons with dementia (CARENAP, JHDCNA, Tayside). Three instruments were recommended for use in both clinical and research settings (CADI, PBH-LCI:D, RAM), one was recommended only for research purposes (CNA-D), and four others only for clinical assessments (CARENAP, CNCD, EAC, Tayside). For the remaining six instruments, the intended context of use was not specified (JHDCNA, NAS, QCNE, QNP, SIDECAR, UNM). Regarding the application method, seven instruments were self-administered (CNCD, NAS, PBH-LCI:D, QCNE, EAC, SIDECAR, UNM), three were used in professional interviews (CARENAP, CNA-D, JHDCNA), and two could be either self-administered or professionally interviewed (QNP, Tayside). For two instruments, the application method was not clearly stated, but the descriptions suggested a self-reported application method for both (CADI, RAM). For seven instruments, the administration time was described, ranging from five to 50 minutes (CADI, CARENAP, CNA-D, NAS, RAM, QCNE, QNP). Information about the required training for clinicians was mentioned for two instruments: for one, no specific abilities or prior knowledge were necessary (QNP), whereas the other could only be administered by professionals with experience in assessments and interviewing (CARENAP). Response options for all instruments were either nominally or ordinally scaled. For nine instruments, a total score (CADI, CNA-D, CNCD, JHDCNA, PBH-LCI:D, RAM, EAC, UNM) or a mean score (NAS) could be obtained, with higher scores indicating higher unmet needs in most of the instruments. Four instruments did not use a scoring system (CARENAP, QCNE, QNP, Tayside), and no concrete information was provided for SIDECAR on this aspect. The instruments differed regarding domain structure and number of items; the number of items for caregivers ranged from 12 to 70. Appendix III provides an overview of the domain structure, number of items and response options of the different instruments. Further detailed information about the characteristics of the instruments is presented in Appendix VI.

Methodological quality

For each study, we evaluated the methodological quality of its assessment of six psychometric properties, namely, content validity, structural validity (i.e. the factor structure of the instrument), internal consistency, reliability (including both test-retest and inter-rater reliability), measurement error and construct validity. Criterion validity was not considered as there is currently no gold standard, and responsiveness (i.e. sensitivity to change) was excluded as no study assessed it. Table 3 provides an overview of the quality (excellent, good, fair or poor) of the assessment for each study and each specific standardized question (Q1 and following) regarding the psychometric properties evaluated. For CNA-D, two studies testing construct validity with different variables were available and were therefore evaluated separately, yielding a total of 15 studies for the assessment of methodological quality. In the other three cases where two publications described the same instrument with complementary information, we treated them as one study when assessing methodological quality.

Table 3: Methodological quality of included studies

Content validity was documented in 13 of the 15 studies. Seven of these 13 studies had excellent ratings for each of the five specific criteria considered (QCNE, CARENAP, QNP, PBH-LCI:D, CNCD, CNA-D, SIDECAR), and another study had good or excellent ratings (NAS). The five other studies all had at least one poor rating. Three of them failed to include informal caregivers in the item development process (i.e. Q2: RAM, UNM, EAC). Three did not assess if all items together comprehensively reflected the construct to be measured (i.e. Q4: CADI, Tayside, EAC). One did not assess if all items referred to relevant aspects of the construct to be measured (i.e. Q1: CADI). In summary, various examples of excellent methodological quality were available regarding content validity.

Structural validity is relevant for instruments aiming to measure different domains of needs, an aim that was clear for all studies. However, only four studies evaluated the factor structure of their instrument: one with a factor analysis meeting all the criteria (CADI), two with adequate factor analyses but samples of too limited a size (PBH-LCI:D, CNCD), and one with a factor analysis inadequately performed separately for each dimension (QCNE). In summary, sample size issues and inappropriate statistical analyses limited the strength of the scarce evidence regarding structural validity.

Internal consistency was reported in 10 of the 15 studies. Information on the percentage of missing data (Q2) and how they were handled (Q3) was provided in only two of these 10 studies (RAM, EAC). The sample size used to assess internal consistency was optimal (Q4, N = 100 or more) for seven studies (CADI, RAM, QCNE, UNM, QNP, PBH-LCI:D, CNCD), good for the EAC (50<N<99), moderate for the CNA-D (30<N<49), and poor for CARENAP (N<30). As presented above, four studies tested the unidimensionality of their subscales before computing the Cronbach alphas (Q5), in two cases with an appropriate sample size (Q6: CADI, QCNE) and in two with far too small samples (PBH-LCI:D, CNCD), while six studies did not assess unidimensionality. Seven studies computed the internal consistency statistic for each subscale separately (Q7: CADI, QCNE, UNM, EAC, PBH-LCI:D, CNCD, CNA-D), while three computed a single alpha for all items irrespective of possible dimensions (RAM, CARENAP, QNP). Eight studies calculated an appropriate statistic (Q9 and/or Q10: CADI, RAM, QCNE, UNM, EAC, PBH-LCI:D, CNCD, CNA-D), while two provided only item-total correlations (CARENAP, QNP). In summary, most studies met a majority of criteria for methodological quality, with the exception of unidimensionality testing, which was often neglected.
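For readers less familiar with the statistic at issue here, Cronbach's alpha for a subscale is computed from the item variances and the variance of the total score. The snippet below is a generic illustration (not code from any included study); it also makes concrete why alpha should be computed per unidimensional subscale rather than once over all items of a multidimensional instrument:

```python
# Generic Cronbach's alpha: scores is a list of respondents' item-score lists.
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) table of item scores."""
    k = len(scores[0])
    items = list(zip(*scores))                       # per-item score columns
    item_var = sum(variance(col) for col in items)   # sum of item variances
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_var / total_var)

# Alpha is computed separately for each (assumed unidimensional) subscale,
# e.g. for hypothetical "information needs" and "emotional needs" subscales:
subscales = {
    "information_needs": [[1, 2], [2, 2], [3, 4], [4, 4]],
    "emotional_needs": [[0, 1, 1], [2, 2, 1], [3, 3, 4], [4, 3, 4]],
}
for name, scores in subscales.items():
    print(name, round(cronbach_alpha(scores), 2))
```

A single alpha over all items of a multidimensional instrument mixes items measuring different constructs, which is why the COSMIN criteria require unidimensionality to be tested (e.g. via factor analysis) before alpha is interpreted.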

Reliability includes both test-retest and inter-rater agreement, and was documented in only five of the 15 studies. Information on the percentage of missing data (Q1) and how they were handled (Q2) was provided in only one of these five studies (PBH-LCI:D). The sample size used to assess reliability was good for PBH-LCI:D and Tayside (Q3, 50<N<99), moderate for EAC and CNA-D (30<N<49), and poor for CARENAP (N<30). The measurement occasions were clearly or presumably independent (Q5) and made in similar conditions (Q9) for three studies (EAC, PBH-LCI:D, Tayside), while this was not the case for CNA-D and CARENAP. For CNA-D, test-retest and inter-rater agreement were evaluated in a combined way, by having two interviews conducted by two different persons two weeks apart. For CARENAP, only the inter-rater agreement was tested, based on a simultaneous evaluation made by an interviewer and an observer. The time interval between the two measurements (two or three weeks) was clearly stated (Q6) in four of the five studies. The exception was Tayside, where the time interval was highly variable, with an average of 21 days. The time interval was appropriate (Q8), and it could be assumed or was evidenced that informal dementia caregivers were stable in the interim period in terms of needs (Q7), for PBH-LCI:D, CARENAP and CNA-D. As mentioned, Tayside used a questionable time interval. EAC had a rather long interval of three weeks, without any precaution to ensure that informal dementia caregivers were stable within this period. Finally, three studies calculated adequate statistics for the assessment of reliability (Tayside, CARENAP, CNA-D), while EAC and PBH-LCI:D used less appropriate indices, namely, a Pearson correlation without evidence that no systematic change had occurred. In summary, procedures mixing test-retest and inter-rater assessment, as well as irregular or overly long time intervals and modest sample sizes, undermined the limited efforts to evaluate reliability.
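The problem with a Pearson correlation as a reliability index can be shown with a toy example (generic code, not from the reviewed studies): when every respondent scores systematically higher at retest, the Pearson correlation stays perfect, whereas an absolute-agreement intraclass correlation (here ICC(A,1) for two occasions, computed from mean squares) is penalized by the shift:

```python
# Pearson r vs. absolute-agreement ICC for two measurement occasions.
from statistics import mean

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def icc_agreement(x, y):
    """Two-way ICC for absolute agreement, single measurement (ICC(A,1)),
    for two occasions, computed from the classical mean squares."""
    n, k = len(x), 2
    grand = mean(x + y)
    rows = [mean(pair) for pair in zip(x, y)]   # per-subject means
    cols = [mean(x), mean(y)]                   # per-occasion means
    msr = k * sum((r - grand) ** 2 for r in rows) / (n - 1)
    msc = n * sum((c - grand) ** 2 for c in cols) / (k - 1)
    sse = sum((v - rows[i] - cols[j] + grand) ** 2
              for i, pair in enumerate(zip(x, y))
              for j, v in enumerate(pair))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k / n * (msc - mse))

t1 = [1, 2, 3, 4, 5]
t2 = [3, 4, 5, 6, 7]            # every caregiver scores 2 points higher at retest
print(round(pearson_r(t1, t2), 2))      # 1.0  (blind to the systematic shift)
print(round(icc_agreement(t1, t2), 2))  # 0.56 (penalizes the shift)
```

This is why a Pearson correlation is only acceptable when there is evidence that no systematic change occurred between the two measurements.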

Measurement error was reported for only one study (PBH-LCI:D), with no information on the percentage of missing data (Q1) or how they were handled (Q2). However, the study had a good sample size of N = 79 (Q3), and used a clear (Q6) and appropriate (Q8) time interval between the two measures (i.e. two weeks), over which the informal dementia caregivers could be assumed to be stable (Q7). It was also reasonable to assume that the measurements were independent (Q5) and that test conditions were similar (Q9), and appropriate statistics were used (Q11). The very limited evidence about measurement error was thus of good quality.

Construct validity was documented in eight of the 15 studies. Only JHDCNA and PBH-LCI:D provided information on the percentage of missing data (Q1) and how they were handled (Q2). Six studies had optimal or good sample sizes (Q3, N>50: JHDCNA, RAM, UNM, EAC, PBH-LCI:D, NAS), while CNA-D had a moderate sample in both studies with N = 45. In six studies, multiple hypotheses (PBH-LCI:D, CNCD, CNA-D) or a minimal number of hypotheses (JHDCNA, EAC, CNA-D) were formulated a priori (Q4), while RAM and UNM formulated only vague hypotheses. EAC was the sole study to specify the expected magnitude of the associations (Q6). Seven studies adequately described all or most of the constructs measured by the comparator instruments (Q7), with only UNM delivering a poor description. In contrast, only JHDCNA and CNA-D provided comprehensive evidence of the measurement properties of the comparator instruments (Q8), with all the others giving only partial information. Finally, all studies used clearly or presumably appropriate statistics, with the exception of PBH-LCI:D, which used parametric statistics although the standard deviations of some variables suggested non-normal distributions. In summary, there was some good-quality empirical evidence on construct validity, although the psychometric properties of the comparator instruments were often insufficiently described.

Review findings

Quality assessment of psychometric properties

We will first summarize the evidence available for each type of psychometric property, excluding criterion validity, which is irrelevant in the absence of a gold standard, and responsiveness or sensitivity to change, which was never evaluated for the reviewed instruments. We will then briefly discuss the psychometric properties of each instrument. The relevant information is presented in Appendix IV.

Content validity was documented for 13 of the 14 reviewed instruments. As mentioned above, the procedure used to optimize content validity was rated as satisfactory for seven of these 13 instruments (CARENAP, CNA-D, CNCD, NAS, PBH-LCI:D, SIDECAR, QNP). Most studies generated items based on a literature review and/or an expert consultation, and then reviewed these items in collaboration with experts and at least five informal dementia caregivers. The initial item pool of the SIDECAR instrument, however, was inductively developed on the basis of 42 interviews with caregivers; caregivers, researchers and carer consultants were involved in the further steps of questionnaire development and the testing of content validity. For the QNP, the final set of items was additionally submitted to further informal dementia caregivers to optimize understandability. For three other instruments, a doubtful design was used (CADI, QCNE, Tayside; see Appendix IV for more details), and the last three instruments failed to include the target population in the process of item development (RAM, EAC, UNM).

Internal consistency was assessed for 10 of the 14 reviewed instruments. For three of these 10 instruments, the Cronbach alphas were computed for dimensions based on the results of a factor analysis. The latter supported a structure with five to eight dimensions (five for CNCD, with three to seven items per dimension; seven for PBH-LCI:D, with three to eight items; eight for CADI, with two to seven items). However, for CADI, most alphas were below .70 despite an adequate factor analysis, while for CNCD and PBH-LCI:D all alphas were above .70 but the factor analyses relied on largely insufficient sample sizes. These results are therefore all partly problematic. For four other instruments, Cronbach alphas were provided for each dimension and ranged from .70 to .95, but for three of them no factor analysis was conducted (CNA-D, EAC and UNM), and for QCNE the factor analysis was inadequate; this evidence should therefore be considered inconclusive. The three remaining instruments reported only the alpha for the full scale without considering the dimensions, with low values for CARENAP and RAM, and a high value for QNP.
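For readers less familiar with the statistic discussed here, Cronbach's alpha can be computed directly from a respondents-by-items score matrix. The following minimal Python sketch uses fictitious data and is purely illustrative; it is not drawn from any of the reviewed studies:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Fictitious subscale: three items answered identically by five respondents,
# i.e. a perfectly homogeneous (unidimensional) subscale.
x = np.array([1., 2., 3., 4., 5.])
print(round(cronbach_alpha(np.column_stack([x, x, x])), 2))  # → 1.0
```

Because alpha assumes the items form a single dimension, computing it without first testing unidimensionality (as several reviewed studies did) can yield a misleadingly high value for a multidimensional item set.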

Reproducibility was evaluated for four of the 14 reviewed instruments in terms of test-retest agreement (CNA-D, PBH-LCI:D, EAC, Tayside), for three in terms of inter-rater reliability (CARENAP, CNA-D, Tayside), and for PBH-LCI:D in terms of measurement error. CNA-D and PBH-LCI:D showed satisfactory test-retest agreement, as evaluated with a proper procedure, with correlations in the .70 range. For the two other instruments, where the procedure was questionable, the average Kappa was excellent for EAC but varied between poor and excellent across the subscales of Tayside. As presented above, all the procedures used to evaluate inter-rater reliability were problematic. For CARENAP, which compared the simultaneous evaluations of an interviewer and an observer, thereby increasing the likelihood of agreement, the Kappas were high. For Tayside, which compared the ratings of a professional with the self-report of the informal caregiver, thereby reducing the chances of agreement, the Kappas were very low. For CNA-D, which evaluated test-retest and inter-rater reliability in combination, based on two interviews conducted by different persons two weeks apart, the mean Kappa was high. The results for measurement error for PBH-LCI:D were good. Sound evidence regarding reproducibility is therefore still scarce, and limited to test-retest agreement and measurement error.
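Cohen's kappa, the agreement statistic reported for several of these instruments, corrects the observed proportion of agreement for the agreement expected by chance. A minimal sketch with fictitious ratings (not data from the reviewed studies) follows:

```python
import numpy as np

def cohen_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_obs = np.mean(r1 == r2)                       # observed agreement
    p_chance = sum(np.mean(r1 == c) * np.mean(r2 == c)
                   for c in np.union1d(r1, r2))     # agreement expected by chance
    return (p_obs - p_chance) / (1 - p_chance)

# Fictitious example: two raters coding a need as absent (0) or present (1)
print(cohen_kappa([0, 0, 1, 1], [0, 0, 1, 0]))  # → 0.5
```

This chance correction is why procedures matter: raters who score simultaneously (as for CARENAP) share the same information and will agree more often for reasons unrelated to the instrument itself.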

Validity was assessed in seven of the 14 reviewed instruments, in all cases in terms of construct validity. For CNA-D, PBH-LCI:D, RAM and EAC, precise a priori hypotheses were formulated and at least 75% of the results were in accordance with them. In contrast, CNCD had less than 75% of its well-formulated hypotheses confirmed. Most hypotheses concerned associations between unmet needs and the caregiver's objective burden (e.g. number of hours of care, problem behaviors or functional dependency of the care recipient) or subjective burden; depression, anxiety symptoms or psychological distress; the amount of formal or social support received; self-care; or quality of life. These postulated associations were based either on plausible links with other common relevant outcomes for informal dementia caregivers (e.g. depression and subjective burden for RAM; subjective burden for CNA-D; burden and psychological morbidity for CNCD), or on different theoretical models, namely the patient activation model for PBH-LCI:D and Caplan's model of mental health consultation for EAC. For JHDCNA and UNM, the formulated hypotheses were too vague and numerous, seemingly anticipating associations between all needs domains and all outcomes, which resulted in low rates of confirmation. They were based on plausible links with objective and subjective burden for JHDCNA, and on Pearlin's stress process model for UNM. There is thus some evidence for construct validity.
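The hypothesis-driven logic described above, formulating directional associations a priori and then counting how many are confirmed, can be sketched as follows. All data, variable names and thresholds below are fictitious illustrations, not values taken from the reviewed studies:

```python
import numpy as np

# Fictitious scores for six caregivers; a priori, unmet needs are expected
# to correlate positively with burden and negatively with received support
# (minimum expected magnitude |r| >= .30).
needs   = np.array([1., 2., 3., 4., 5., 6.])
burden  = np.array([2., 3., 5., 4., 6., 7.])
support = np.array([7., 6., 4., 5., 3., 2.])

hypotheses = [("burden", burden, "+"), ("support", support, "-")]
confirmed = 0
for name, scores, sign in hypotheses:
    r = np.corrcoef(needs, scores)[0, 1]
    confirmed += (r >= 0.30) if sign == "+" else (r <= -0.30)

print(f"{confirmed}/{len(hypotheses)} a priori hypotheses confirmed")
# → 2/2 a priori hypotheses confirmed
```

Specifying the expected sign and magnitude in advance, as only EAC did among the reviewed studies, is what makes the confirmation rate an informative test rather than a post hoc description.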

Best-validated instruments

Regarding individual instruments, the best-validated one is currently PBH-LCI:D by Sadak et al.58 PBH-LCI:D was developed using an appropriate procedure for content validity, comprises six domains that were confirmed in a factor analysis and showed good internal consistency, demonstrated adequate test-retest stability after two weeks, and showed the expected correlations with other variables, indicating construct validity. EAC (in French) by Laprise et al.61 had appropriate evidence of test-retest reliability and construct validity, although informal caregivers were not involved in the item development process and Cronbach alphas were computed without a dimensionality analysis. Four other instruments had adequate support for content validity, but insufficient evidence on all other psychometric properties (CARENAP, CNCD, QNP, NAS). The excellent content validity of SIDECAR provides a valid basis for further psychometric testing of this instrument, which seems to be currently underway. The CNA-D and RAM had good evidence for construct validity, but inconclusive evidence for all other properties. The last five instruments currently provide no convincing evidence on any psychometric property (CADI, JHDCNA, QCNE, Tayside, UNM).

Content and structure of dimensions

The topics assessed in the instruments can be divided into three thematic groups of needs: i) need for information and education, ii) needs related to emotional support, and iii) need for other accessible and appropriate services. All instruments contained at least one item in the first and second groups, and ten instruments comprised items in the third group. The first group of needs was typically assessed with items about the need for education regarding care tasks, especially dementia-specific caring skills (included in all instruments), and items about information on local services or community resources for persons with dementia or caregivers (included in 13 instruments). Needs for information about dementia and its treatment were present in 10 instruments, and needs in relation to the characteristics, accessibility and availability of services were included in nine instruments. In the second thematic group, the most frequently considered topics were counseling for negative emotions (nine instruments); support from the informal network, such as family, friends or other caregivers (nine instruments), or from society (eight instruments); and respite (eight instruments). Financial and legal support (seven instruments) was the most common topic in the third group of needs. Table 1 provides a summary of the topics assessed in the different instruments.


This psychometric literature review identified 14 needs assessment instruments for informal dementia caregivers with empirically evaluated measurement properties. Their systematic evaluation, based on the COSMIN criteria,42,45,46 revealed that half of them had excellent content validity. In contrast, structural validity was rarely examined, and the factor analyses were in most cases of low quality because of insufficient sample sizes or questionable procedures. None of the instruments had an optimally tested and good internal consistency: the sole instrument with an adequate dimensionality analysis had low alphas, and all others had their dimensionality evaluated with small samples or not properly evaluated at all, although some had high alphas. Regarding reliability, test-retest agreement was rarely tested, and only two instruments used a satisfactory procedure and obtained good correlations. Inter-rater agreement was relevant only for the interviewer-administered instruments and was evaluated using inadequate procedures. Regarding validity, no criterion validity could be assessed in the absence of a gold standard, but construct validity was evaluated in more than half of the instruments, with satisfactory procedures and results for four of them. Comparing the different instruments reviewed, PBH-LCI:D was rated as having the best psychometric evidence, and EAC was also partly supported, while most other instruments had limited or no proof of their psychometric soundness. SIDECAR showed very promising results regarding content validity; however, further psychometric testing is needed and seems to be currently underway.

This overview highlighted the importance of guidelines such as COSMIN42,45,46,48 in guiding the development of instruments with optimal psychometric properties in a specific field. As we noted for needs assessment in informal dementia caregivers, despite international investments in instrument development, optimal standards were often not achieved, owing to problematic procedures and analyses as well as largely insufficient sample sizes. Our review therefore provides essential information to guide future efforts in the development of such measures towards more robust psychometric results.

Regarding needs assessment in informal dementia caregivers, our review showed that there are presently several instruments with adequate content validity, developed in diverse countries (Singapore, Austria, the Netherlands, the UK and the US). This information provides an excellent starting point for further development. The priority should now be to identify the structure, in terms of the number of different domains required to cover the diverse needs of informal dementia caregivers. The preliminary factor-analytic evidence that we reviewed identified five to eight domains of needs, but overall the instruments reviewed comprised two to 18 different need domains. Currently, there is no established theoretical model to organize the diverse and complex needs of our population of interest. Pini et al.68 made a first attempt in this direction by developing a needs-led framework based on qualitative interviews to conceptualize the impact of caring on the lives of family caregivers. A robust theoretical framework might enhance the conceptual clarity around the assessment of these needs, and could be further informed by explorations of the factorial structure with sufficient sample sizes. Nevertheless, cultural differences in the experience of dementia caregiving could affect the content of needs assessment instruments.69 Such differences were difficult to evaluate at the stage of content validity. They should be examined at later stages of instrument validation by testing the measurement invariance.

Our review also identified the different approaches that were applied to assess the construct validity of some of the reviewed instruments. These approaches involved testing the associations between needs and other related constructs, predominantly the informal caregiver's objective or subjective burden, psychological symptoms, the amount of support received, self-care or quality of life. The postulated associations were mostly based on plausible links with common outcomes for informal dementia caregivers, and sometimes on a theoretical model, with different models being used. The diversity of these procedures indicates that further conceptualization of the place of needs within relevant theoretical models could strengthen the nomological net and thereby support a more solid examination of construct validity. Another challenge in this area is to ensure that the items of the different instruments do not overlap in content, to prevent spurious correlations. This would typically be the case when a needs assessment includes items about burden or depression, for example, and its construct validity is tested against instruments measuring these very constructs.

Regarding reliability, test-retest stability was rarely assessed, and inter-rater agreement was always tested with questionable procedures, despite its importance given that multiple professionals are normally involved with informal dementia caregivers. Substantial efforts need to be invested in these aspects, but we also have to keep in mind that their assessment is challenging in a fragile population such as informal dementia caregivers. Indeed, a short time interval is required, as the situation of the person with dementia and the larger context can change quickly, thereby modifying the needs of the caregiver. Yet it is certainly difficult to obtain two assessments within one or two weeks from chronically stressed and often exhausted caregivers.

Finally, needs assessments were mainly used for evaluations at one specific point in time. Increasingly, the assessment of needs might be considered as an outcome to analyze the impact of interventions, or used to document the evolution of needs over time. The use of needs assessment as an outcome or longitudinal measure requires a satisfactory sensitivity to change, which was not assessed in the reviewed instruments, and warrants scientific attention in the future.

This review has both strengths and limitations. First, our use of a highly structured procedure based on the COSMIN criteria is a strength. However, COSMIN applies a very stringent evaluation to the reviewed instruments, as the final appraisal for each psychometric property is based on the lowest grade across all specific criteria for that property. Although the resulting synthesis could give the impression of a globally poor quality of evidence, we believe it is very helpful in the process of achieving high psychometric standards. Second, we were able to include instruments and articles in more diverse languages than previously published reviews,36-38 although the limits of our language skills did not allow us to examine publications in Asian or Arabic languages. Third, despite our efforts, we could not always access gray literature, as some authors did not respond, the persons in charge of a project were absent, or the authors were engaged in a commercial process preventing them from providing access to the manual. Our conclusions are therefore limited to the available information.


Recommendations for practice

This review identified several instruments for measuring the needs of informal dementia caregivers. However, the evidence for their use in clinical or research settings is often limited. The two best-validated instruments are PBH-LCI:D by Sadak et al.58 and EAC by Laprise et al.61 While PBH-LCI:D is in English and intended for use in clinical and research settings, EAC is in French and recommended only for clinical use. Both include a scoring system that allows results to be compared across time points and between informal caregivers. PBH-LCI:D and EAC contain items covering the most common topics found across all instruments, namely, the need for information and education, needs related to emotional support and the need for other accessible and appropriate services. As both are self-administered, they should require little administration time from professionals. Nevertheless, there is no information regarding the administration burden of either instrument, which would be important in terms of usability, especially in the clinical setting. Given the well-documented cultural differences in the experience of dementia caregiving, and the absence of empirical evidence on the measurement invariance of the available instruments across diverse cultural groups, caution should be exercised when using them in cultural contexts different from those in which they were developed.69

Recommendations for research

Although we identified moderate to high evidence of strong psychometric properties for PBH-LCI:D and EAC, this review highlights the need for further developments in the field of needs assessment in informal dementia caregivers, particularly in structural validity and construct validity, as well as test-retest reliability and sensitivity to change. The evaluation of both forms of validity would certainly benefit from a more robust theoretical framework about the core dimensions of needs in informal dementia caregivers, and the relationship between needs and other relevant outcomes for this population. We also need to identify appropriate procedures to assess test-retest reliability with minimal additional burden for informal dementia caregivers, and to evaluate sensitivity to change appropriately despite this involving a demanding procedure.


The authors thank Prof. Dr. Manuela Eicher (Institute of Higher Education and Research in Healthcare, University of Lausanne, Switzerland) and Prof. Dr. Dawn Carnes (School of Health Sciences Fribourg, University of Applied Sciences and Arts Western Switzerland) for comments that greatly improved the manuscript.

The authors also thank all researchers who contributed to this review by sending us complementary information about their needs assessment instruments.


This review was funded by the grant number 03-O17 of the Health Department, University of Applied Sciences and Arts Western Switzerland.

Appendix I: Search strategy


Searched on 21 Feb 2019

MEDLINE via Pubmed

Searched on 21 Feb 2019


Searched on 21 Feb 2019


Searched on 3 Mar 2019

Appendix II: Excluded studies

Articles ineligible following full-text review

Article excluded on critical appraisal

Appendix III: Domain structure, number of items, and response options

Appendix IV: Quality criteria for psychometric outcomes

Appendix V: Characteristics of included studies

Appendix VI: Instrument characteristics


1. Swiss Confederation. Support for family caregivers: an analysis of the situation and the required measures in Switzerland [Soutien aux proches aidants: Analyse de la situation et mesures requises pour la Suisse]. Bern: Federal Council; 2014. French.
2. Thompson GN, Roger K. Understanding the needs of family caregivers of older adults dying with dementia. Palliat Support Care 2014; 12 (3):223–231.
3. Alzheimer Europe. Who cares? The state of dementia care in Europe. Luxembourg: Alzheimer Europe; 2006.
4. Fisher GG, Franks MM, Plassman BL, Brown SL, Potter GG, Llewellyn D, et al. Caring for individuals with dementia and cognitive impairment, not dementia: findings from the aging, demographics, and memory study. J Am Geriatr Soc 2011; 59 (3):488–494.
5. Alzheimer's Association. 2016 Alzheimer's Disease Facts and Figures. Alzheimers Dement 2016; 12 (4):32–40.
6. De Pietro C, Camenzind P, Sturny I, Crivelli L, Edwards-Garavoglia S, Spranger A, et al. Switzerland - Health system review. Brussels:European Observatory on Health Systems and Policies; 2015.
7. Swiss Confederation. Explanatory report on the preliminary project of the federal law on improving the reconciliation between professional activities and the care of relatives [Rapport explicatif concernant l’avant-projet de la loi fédérale sur l’amélioration de la conciliation entre activité professionnelle et prise en charge de proches]. Bern: Federal Council; 2018. French.
8. Zwaanswijk M, Peeters JM, van Beek AP, Meerveld JH, Francke AL. Informal caregivers of people with dementia: problems, needs and support in the initial stage and in subsequent stages of dementia: a questionnaire survey. Open Nurs J 2013; 7:6–13.
9. Kesselring A. Caring for family members at home [Angehörige zu Hause pflegen: Anatomie einer Arbeit]. Swiss Medical Journal [Schweizerische Ärztezeitung] 2004; 85 (10):504–506. German.
10. Rosa E, Lussignoli G, Sabbatini F, Chiappa A, Di Cesare S, Lamanna L, et al. Needs of caregivers of the patients with dementia. Arch Gerontol Geriatr 2010; 51 (1):54–58.
11. Kraft E, Marti M, Werner S, Sommer H. Cost of dementia in Switzerland. Swiss Med Wkly 2010; 140:w13093.
12. Butcher HK, Holkup PA, Buckwalter KC. The experience of caring for a family member with Alzheimer's disease. West J Nursing Res 2001; 23 (1):33–55.
13. de la Cuesta C. The craft of care: family care of relatives with advanced dementia. Qual Health Res 2005; 15 (7):881–896.
14. Vellone E, Sansoni J, Cohen MZ. The experience of Italians caring for family members with Alzheimer's disease. J Nurs Scholarsh 2002; 34 (4):323–329.
15. Knapp M, Prince M, Albanese E, Banerjee S, Dhanasiri S, Fernandez J-L, et al. Dementia UK. London:Alzheimer's Society; 2007.
16. Reinhard SC, Given B, Petlick NH, Bemis A. Supporting family caregivers in providing care. In: Hughes RG, editor. Patient safety and quality: an evidence-based handbook for nurses. Rockville (MD): Agency for Healthcare Research and Quality (US); 2008. p. 341–404.
17. EUROFAMCARE. EUROFAMCARE – Services for Supporting Family Carers of Older Dependent People in Europe: Characteristics, Coverage and Usage. Brussels: EUROFAMCARE Consortium; 2006.
18. Schulz R, Martire LM. Family caregiving of persons with dementia: prevalence, health effects, and support strategies. Am J Geriatr Psychiatry 2004; 12 (3):240–249.
19. Perrig-Chiello P, Höpflinger F, Schnegg B. SwissAgeCare: Nursing relatives of older people in Switzerland. Bern: University of Bern; 2010. German.
20. Brodaty H, Donkin M. Family caregivers of people with dementia. Dialogues Clin Neurosci 2009; 11 (2):217–228.
21. Gaugler JE, Kane RL, Kane RA, Clay T, Newcomer R. Caregiving and institutionalization of cognitively impaired older people: utilizing dynamic predictors of change. Gerontologist 2003; 43 (2):219–229.
22. Black BS, Johnston D, Rabins PV, Morrison A, Lyketsos C, Samus QM. Unmet needs of community-residing persons with dementia and their informal caregivers: findings from the maximizing independence at home study. J Am Geriatr Soc 2013; 61 (12):2087–2095.
23. Ducharme F, Kergoat MJ, Coulombe R, Levesque L, Antoine P, Pasquier F. Unmet support needs of early-onset dementia family caregivers: a mixed-design study. BMC Nurs 2014; 13 (1):49.
24. Bass DM, Judge KS, Snow AL, Wilson NL, Morgan R, Looman WJ, et al. Caregiver outcomes of partners in dementia care: effect of a care coordination program for veterans with dementia and their family members and friends. J Am Geriatr Soc 2013; 61 (8):1377–1386.
25. Afram B, Verbeek H, Bleijlevens MH, Hamers JP. Needs of informal caregivers during transition from home towards institutional care in dementia: a systematic review of qualitative studies. Int Psychogeriatr 2015; 27 (6):891–902.
26. Freudiger Pittet S, Jordan A. Evaluation of caregiver burden and needs [Evaluation de la charge et des besoins des proches aidants]. Lausanne: Association vaudoise d’aide et de soins à domicile (AVASD); 2012. French.
27. Brodaty H, Thomson C, Thompson C, Fine M. Why caregivers of people with dementia and memory loss don’t use services. Int J Geriatr Psychiatry 2005; 20 (6):537–546.
28. Bass DM, Judge KS, Snow AL, Wilson NL, Looman WJ, McCarthy C, et al. Negative caregiving effects among caregivers of veterans with dementia. Am J Geriatr Psychiatry 2012; 20 (3):239–247.
29. Karlsson S, Bleijlevens M, Roe B, Saks K, Martin MS, Stephan A, et al. Dementia care in European countries, from the perspective of people with dementia and their caregivers. J Adv Nurs 2015; 71 (6):1405–1416.
30. Gaugler JE, Kane RL, Kane RA, Newcomer R. Unmet care needs and key outcomes in dementia. J Am Geriatr Soc 2005; 53 (12):2098–2105.
31. Gaugler JE, Kane RL, Kane RA, Newcomer R. Early community-based service utilization and its effects on institutionalization in dementia caregiving. Gerontologist 2005; 45 (2):177–185.
32. van der Roest HG, Meiland FJ, Maroccini R, Comijs HC, Jonker C, Droes RM. Subjective needs of people with dementia: a review of the literature. Int Psychogeriatr 2007; 19 (3):559–592.
33. Reynolds T, Thornicroft G, Abas M, Woods B, Hoe J, Leese M, et al. Camberwell Assessment of Need for the Elderly (CANE): Development, validity and reliability. BJPsych 2000; 176 (5):444–452.
34. Wancata J, Krautgartner M, Berner J, Alexandrowicz R, Unger A, Kaiser G, et al. The Carers’ Needs Assessment for Dementia (CNA-D): development, validity and reliability. Int psychogeriatr 2005; 17 (3):393–406.
35. McWalter G, Toner H, McWalter A, Eastwood J, Marshall M, Turvey T. A community needs assessment: the care needs assessment pack for dementia (CarenapD)--its development, reliability and validity. Int J Geriatr Psychiatry 1998; 13 (1):16–22.
36. Novais T, Dauphinot V, Krolak-Salmon P, Mouchoux C. How to explore the needs of informal caregivers of individuals with cognitive impairment in Alzheimer's disease or related diseases? A systematic review of quantitative and qualitative studies. BMC Geriatr 2017; 17 (1):86.
37. Bangerter LR, Griffin JM, Zarit SH, Havyer R. Measuring the needs of family caregivers of people with dementia: an assessment of current methodological strategies and key recommendations. J Appl Gerontol 2017; 1:1–15.
38. Mansfield E, Boyes AW, Bryant J, Sanson-Fisher R. Quantifying the unmet needs of caregivers of people with dementia: a critical review of the quality of measures. Int J Geriatr Psychiatry 2017; 32 (3):274–287.
39. Terwee CB. Protocol for systematic reviews of measurement properties. Amsterdam:VU University Medical Center Knowledgecenter Measurement Instruments; 2011.
40. Kipfer S, Eicher M, Oulevey Bachmann A, Pihet S. Reliability, validity and relevance of needs assessment instruments for informal dementia caregivers: a psychometric systematic review protocol. JBI Database System Rev Implement Rep 2018; 16 (2):269–286.
41. Hileman JW, Lackey NR, Hassanein RS. Identifying the needs of home caregivers of patients with cancer. Oncol nurs forum 1992; 19 (5):771–777.
42. Terwee CB, Mokkink LB, Knol DL, Ostelo RWJG, Bouter LM, de Vet HCW. Rating the methodological quality in systematic reviews of studies on measurement properties: a scoring system for the COSMIN checklist. Qual Life Res 2012; 21 (4):651–657.
43. Biomedical Information of the Dutch Library Association. Search blocks. Nijkerk: Biomedical Information of the Dutch Library Association [Internet]. 2019 Feb [cited 2019 Apr 18]. Available from:
44. COSMIN. Search block and filter. COSMIN. [Internet]. 2019 Feb [cited 2019 Apr 18]. Available from:
45. Terwee CB. COSMIN checklist with 4-point scale. Amsterdam: University Medical Center, Department of Epidemiology and Biostatics, EMGO Institute for Health and Care Research; 2011.
46. Mokkink LB, Terwee CB, Patrick DL, Alonso J, Stratford PW, Knol DL, et al. COSMIN checklist manual. Amsterdam:University Medical Center, Department of Epidemiology and Biostatics, EMGO Institute for Health and Care Research; 2012.
47. Simpelaere I, White A, Bekkering GE, Geurden B, Van Nuffelen G, De Bodt M. Patient-reported and proxy-reported outcome measures for the assessment of health-related quality of life among patients receiving enteral feeding: a systematic review protocol. JBI Database System Rev Implement Rep 2016; 14 (7):45–75.
48. Terwee CB, Bot SD, de Boer MR, van der Windt DA, Knol DL, Dekker J, et al. Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol 2007; 60 (1):34–42.
49. Charlesworth GM, Tzimoula XM, Newman SP. Carers Assessment of Difficulties Index (CADI): psychometric properties for use with carers of people with dementia. Aging Ment Health 2007; 11 (2):218–225.
50. McWalter GJ, Toner HL, Eastwood J, Corser AS, Marshall MT, Turvey AA, Howie C. User Manual for the Care Needs Assessment Pack for Dementia (CarenapD). Stirling:University of Stirling; 1996.
51. McWalter G, Toner H, McWalter A, Eastwood J, Marshall M, Turvey T. A community needs assessment: the care needs assessment pack for dementia (CarenapD)–its development, reliability and validity. Int J Geriatric Psychiatry 1998; 13 (1):16–22.
52. Wancata J, Krautgartner M, Berner J, Alexandrowicz R, Unger A, Kaiser G, et al. The Carers’ Needs Assessment for Dementia (CNA-D): development, validity and reliability. Int Psychogeriatr 2005; 17 (3):393–406.
53. Kaiser G, Krautgartner M, Alexandrowicz R, Unger A, Marquart B, Weiss M, et al. Die Übereinstimmungsvalidität des “Carers’ Needs Assessment for Dementia” (CNA-D). Neuropsychiatrie 2005; 19 (4):134–140.
54. Vaingankar JA, Subramaniam M, Picco L, Eng GK, Shafie S, Sambasivam R, et al. Perceived unmet needs of informal caregivers of people with dementia in Singapore. Int Psychogeriatr 2013; 25 (10):1605–1619.
55. Vaingankar JA, Abdin E, Chong SA, Sambasivam R, Shafie S, Jeyagurunathan A, et al. Validity and reliability of the Caregivers’ Needs Checklist for Dementia. Arch Psychol 2018; 2 (1):1–16.
56. Hughes TB, Black BS, Albert M, Gitlin LN, Johnson DM, Lyketsos CG, et al. Correlates of objective and subjective measures of caregiver burden among dementia caregivers: influence of unmet patient and caregiver dementia-related care needs. Int Psychogeriatr 2014; 26 (11):1875–1883.
57. Wackerbarth SB, Johnson MMS. Essential information and support needs of family caregivers. Patient Educ Couns 2002; 47 (2):95–100.
58. Sadak T, Korpak A, Borson S. Measuring caregiver activation for health care: validation of PBH-LCI:D. Geriatr Nurs 2015; 36 (4):284–292.
59. Czaja SJ, Gitlin LN, Schulz R, Zhang S, Burgio LD, Stevens AB, et al. Development of the risk appraisal measure: a brief screen to identify risk areas and guide interventions for dementia caregivers. J Am Geriatr Soc 2009; 57 (6):1064–1072.
60. Dimakopoulou E, Sakka P, Efthymiou A, Karpathiou N, Karydaki M. Evaluating the needs of dementia patients’ caregivers in Greece: a questionnaire survey. Int J Caring Sci 2015; 8 (2):274–280.
61. Laprise R, Dufort F, Lavoie F. Construction and validation of a scale about consultation expectations of caregivers of older people. [Construction et validation d’une échelle d’attentes en matière de consultation auprès d’aidant(e)s de personnes âgées]. Can J Aging 2001; 20 (2):211–232. French.
62. Peeters JM, Van Beek A, Meerveld J, Spreeuwenberg P, Francke A. Informal caregivers of persons with dementia, their use of and needs for specific professional support: a survey of the National Dementia Programme. BMC Nurs 2010; 9 (9):1–8.
63. Van der Poel K, van Beek A. Development of the questionnaire ‘Wishes and problems of caregivers of people with dementia’. [Ontwikkeling vragenlijst ‘Wensen en problemen van mantelzorgers van mensen met dementie’]. Utrecht: Nivel; 2006. Dutch.
64. Oyebode JR, Pini S, Ingleson E, Megson M, Horton M, Clare L, et al. Development of an item pool for a needs-based measure of quality of life of carers of a family member with dementia. Patient 2019; 12:125–136.
65. Gordon DS, Spicker P, Ballinger BR, Gillies B, McWilliam N, Mutch WJ. A population needs assessment profile for dementia. Int J Geriatr Psychiatry 1997; 12 (6):642–647.
66. Gaugler JE, Anderson KA, Leach MS, Smith CD, Schmitt FA, Mendiondo M. The emotional ramifications of unmet need in dementia caregiving. Am J Alzheimers Dis Other Demen 2004; 19 (6):369–380.
67. Moher D, Liberati A, Tetzlaff J, Altman DG; The PRISMA Group. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA Statement. PLoS Med 2009; 6 (7):e1000097.
68. Pini S, Ingleson E, Megson M, Clare L, Wright P, Oyebode JR. A needs-led framework for understanding the impact of caring for a family member with dementia. Gerontologist 2018; 58 (2):e68–e77.
69. Janevic MR, Connell CM. Racial, ethnic, and cultural differences in the dementia caregiving experience: recent findings. Gerontologist 2001; 41 (3):334–347.

Keywords: Dementia; informal caregivers; instrument; needs assessment; psychometrics