Wasser, Thomas PhD, MEd; Pasquale, Mae Ann MSN, RN, CCRN; Matchett, Stephen C. MD; Bryan, Yvonne PhD, RN; Pasquale, Michael MD
The assessment of patient satisfaction as a healthcare outcome measure is not a new concept. One of the first meta-analyses reported in the literature was published over a decade ago (1). However, assessing patient satisfaction becomes complicated when critical care patients are involved, because of the multitude and extent of patients’ treatments, the severity of their condition, their level of consciousness, and the fact that, in the critical care environment, patients often are not making decisions related to their own care.
The perceived needs of families of critically ill patients have been addressed in the literature. Many of the findings stem from several prominent studies, including Molter (2), Rogers (3), Daley (4), Leske (5), and Price et al. (6). These various studies identify two primary needs of critical care patients’ family members: First, to have honest, intelligible, and timely information, and second, to feel assured that their loved one is being tended to by competent and caring people. Furthermore, these studies recognize ten separate dimensions that represent the level of care and quality of facilities that families want and expect from staff who care for the patient. These dimensions/needs include the following: a) to be given a sense of hope, b) to have their questions answered honestly, c) to know the best possible care is being provided for the patient, d) to be assured that they will receive a call at home if any changes occur in the patient’s condition, e) to be given explanations in simple terms and receive timely responses to their questions and concerns, f) to have physical access to the patient as much as possible, g) to be informed at least once a day of the patient’s progress, h) to know specific facts about the patient’s prognosis, i) to feel comfortable in the hospital setting, and j) to have the staff be warm and friendly, yet professional.
Previous research indicates these ten statements can be categorized into six distinct domains: “Assurance,” the need to feel hope for a desired outcome; “Information,” the need for consistent, realistic, and timely information; “Proximity,” the need for personal contact, and to be physically and emotionally near the patient; “Support,” the need for resources, support systems, and ventilation; “Comfort,” the need for family members’ personal comfort; and “Help to Family Members,” the need to feel that treatments are beneficial to the patient (7).
The goal of this research was to develop the 20-item Critical Care Family Satisfaction Survey (CCFSS) and demonstrate its validity and reliability, using methods similar to those of Bubela et al. (8) in the development of their Patient Learning Needs Scale. This study also includes a confirmatory factor analysis (CFA) that tests the underlying factor structure and assesses model fit. This method was selected because of the latent (unobservable) nature of “Satisfaction” and the ability of the CFA technique to relate latent variables to a path model containing observed factors (9).
MATERIALS AND METHODS
Development of the Critical Care Family Satisfaction Survey Scale.
Establishing the content validity of a questionnaire links its questions to the content areas, or subscales, it is meant to cover. In this study, content validity was established by developing and randomizing 37 questions—abstracted from the literature and thought to assess patient care satisfaction—into a list, and then asking critical care physicians, residents, nurses, and researchers (n = 8) to sort these items into the six subscale constructs related to satisfaction mentioned previously. Items were considered to represent the satisfaction construct if 75% or more of the raters linked them with a specific content area; items that received <75% identification were removed. Individual content areas were removed from the scale altogether if fewer than two items applied to them. Concurrent validity, an index of how well the questionnaire being developed correlates with other instruments measuring the same content areas, was not assessed because no existing satisfaction-by-proxy survey instrument was available.
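The item-retention logic described above can be sketched in Python as follows. This is a hypothetical illustration, not code from the original study; the function names and all rater data are invented.

```python
# Sketch of the content-validity rule: an item is kept only if >=75% of
# raters sort it into the same construct, and a construct is dropped if
# fewer than two items survive for it. All data below are invented.
from collections import Counter

def retained_items(sortings, threshold=0.75):
    """sortings: {item: [construct chosen by each rater]}.
    Returns {item: construct} for items meeting the agreement threshold."""
    kept = {}
    for item, votes in sortings.items():
        construct, count = Counter(votes).most_common(1)[0]
        if count / len(votes) >= threshold:
            kept[item] = construct
    return kept

def surviving_constructs(kept, minimum_items=2):
    """Constructs represented by at least `minimum_items` retained items."""
    counts = Counter(kept.values())
    return {c for c, n in counts.items() if n >= minimum_items}

# Example with 8 raters (invented data):
sortings = {
    "Q1": ["Assurance"] * 7 + ["Comfort"],       # 87.5% agreement -> kept
    "Q2": ["Assurance"] * 6 + ["Support"] * 2,   # 75.0% agreement -> kept
    "Q3": ["Comfort"] * 4 + ["Support"] * 4,     # 50.0% agreement -> dropped
    "Q4": ["Comfort"] * 8,                       # 100% agreement  -> kept
}
kept = retained_items(sortings)
# "Comfort" retains only one item here, so the construct itself is dropped:
print(surviving_constructs(kept))  # -> {'Assurance'}
```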
Scaling and Scoring Methods.
For ease of administration, all items were phrased positively, with the survey asking the family member to rate their degree of satisfaction with each item. Items were scaled as 5 = “very satisfied,” 4 = “satisfied,” 3 = “not certain,” 2 = “not satisfied,” and 1 = “very dissatisfied.” Because each subscale was represented by a different number of items, averages were calculated by dividing the sum of the item responses by the number of items within the subscale. This scoring method resulted in an average response between 1 and 5 and facilitated comparison across all subscales.
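The scoring rule amounts to a per-subscale mean, which keeps subscales with different item counts on the same 1–5 scale. A minimal sketch (the item groupings and responses below are invented for illustration):

```python
# Each subscale score is the mean of its 1-5 item responses, so subscales
# with different numbers of items remain directly comparable.
responses = {"Assurance": [5, 4, 5], "Comfort": [3, 2]}  # hypothetical family member

subscale_scores = {name: sum(items) / len(items)
                   for name, items in responses.items()}
print(subscale_scores)  # every score falls between 1 and 5
```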
Statistical Analysis Methods and Rationale.
Internal consistency (IC) is important to establish in instruments that primarily assess latent variables (10). The CCFSS’s IC was assessed by a Pearson correlation coefficient of the total score (calculated as the sum of all of the subscores) with each of the subscale scores of the five measured constructs (10). In addition, Cronbach’s alpha—another measure of IC—was calculated on the total score using the five subscale sums.
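Cronbach’s alpha over the five subscale sums can be sketched as below. This is a generic illustration of the standard formula, not the study’s own code, and the data matrix (rows = respondents, columns = subscale sums) is invented.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals),
# here computed with the five subscale sums playing the role of "items".
from statistics import pvariance

def cronbach_alpha(rows):
    k = len(rows[0])                      # number of subscales
    cols = list(zip(*rows))
    item_vars = sum(pvariance(c) for c in cols)
    total_var = pvariance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_vars / total_var)

data = [                                  # invented respondent data
    [20, 18, 9, 12, 8],
    [15, 14, 7, 10, 6],
    [25, 22, 11, 14, 9],
    [18, 16, 8, 11, 7],
]
print(round(cronbach_alpha(data), 3))
```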
Discriminant validity (DV)—sometimes referred to as divergent validity (11)—is an important criterion for tests that purport to measure a single latent variable, and is an index (or, in this application, a matrix) of the degree to which the questionnaire’s subscales do not correlate with each other. Campbell (12) defined the concept of DV, emphasizing the importance of subscales measuring their constructs independently of one another. In this study, DV was demonstrated by a Pearson correlation matrix of the subscales. Ideally, each subscale-to-subscale correlation would be <0.80. Failing this, the average correlation of a subscale with the other subscales should be <0.80 (12).
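The DV check reduces to computing pairwise Pearson correlations and flagging any pair at or above 0.80. A minimal sketch, with invented subscale score columns:

```python
# Pairwise Pearson correlations between subscale scores; pairs with
# |r| >= 0.80 would suggest the subscales overlap rather than measure
# independent constructs. All data below are invented.
from statistics import mean, pstdev

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

subscales = {                              # hypothetical score columns
    "Assurance": [4.2, 3.8, 4.6, 3.9],
    "Comfort":   [3.0, 3.5, 2.8, 3.9],
}
names = list(subscales)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        r = pearson(subscales[a], subscales[b])
        flag = "overlap" if abs(r) >= 0.80 else "ok"
        print(f"{a} vs {b}: r = {r:.3f} ({flag})")
```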
Factor analysis was completed on the hypothesized five-construct model. No rotation techniques were used for this because of the presumed underlying single latent variable (family satisfaction) model. Eigenvalues were calculated based on the five subscales in order to test this single factor hypothesis. (Eigenvalues provide an index regarding the variability of factors in a model and, in this case, the variance explained by a single influencing factor. By convention, eigenvalues obtained from a model >1 indicate a separate measurable factor.) Scree analysis was then performed to confirm (or disconfirm) the model using a standard eigenvalue cutoff of 1.0 for classification.
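The eigenvalue decision rule above can be made concrete with a short sketch. The first two eigenvalues are the five-factor values reported later in this article; the remaining three are invented to complete the example.

```python
# Kaiser criterion: a component counts as a separate factor only when its
# eigenvalue exceeds 1.0. The proportion of variance each component
# explains is its eigenvalue divided by the number of subscales.
eigenvalues = [3.712, 0.622, 0.341, 0.205, 0.120]  # last three invented
n = len(eigenvalues)

retained = [e for e in eigenvalues if e > 1.0]
explained = [e / n for e in eigenvalues]
print(len(retained))           # -> 1, i.e., a single latent factor
print(round(explained[0], 3))  # -> 0.742, i.e., 74.2% of the variance
```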
Validity was established using confirmatory factor analysis. Figure 1 illustrates the CFA model and its path model components. The oval labeled “Satisfaction” is the latent factor thought to be indicated by the five observable subscales (in rectangles) of the CCFSS. The five circles represent the statistical evaluation of measurement error present within the model. This path model allows for the calculation of factor weights of each observable subscale with the latent factor, lower bound reliability coefficients, assessment of model fit, and error of measurement. Each of these values was calculated and reported for this model (13).
The assessment of model fit for the four- and five-factor models was performed using two methods—the chi-square goodness-of-fit test and the goodness-of-fit index (GFI), both described by Joreskog and Sorbom (9). These different fit approaches were used for two reasons: First, the chi-square goodness-of-fit statistic indicates the within-model fit independently for both the four- and five-factor models. However, this test is not useful for comparing models, as each model has different degrees of freedom. The GFI adjusts for the difference in degrees of freedom and presents a standardized score between 0.0 (no fit) and 1.0 (perfect fit), making it simpler to compare the four- and five-factor models against each other.
From the original pool of 37 items, 14 failed to meet the criteria for matching to a construct. In addition, one construct (“Help to Family Members”) did not have at least two items meet the 75% criterion. These 14 items and one construct were removed from the questionnaire and subsequent factor analysis before the questionnaire was administered to patients’ families. Between December 1997 and September 1998, 237 respondents completed the CCFSS. Because the project was viewed as quality-of-care research, the hospital’s institutional review board declared it exempt, and consent to participate in the study was not required.
Table 1 presents data regarding the respondents’ relationships to each patient for the entire sample. Respondents’ ages were 18–24 (0.8%), 25–34 (5.8%), 35–59 (40.9%), 60 and over (50.6%), and not answered (1.7%). Table 2 presents inpatient information regarding length of stay and the critical care unit to which the patient was admitted. After data were collected and entered into the computer, the sample was reduced to 145 (61.2%) because of incomplete responses on the questionnaires.
After the first phase of factor analysis, three additional items were found not to significantly match any of the five remaining constructs. These items were eliminated, leaving a total of 20 items that mapped to five constructs. A list of the final 20 items and matching constructs can be found in Table 3.
Table 4 presents internal consistency data as Pearson’s “r” and Cronbach’s alpha, as well as significance levels, between the satisfaction scale total score and the domain subscores. Table 5 presents DV data as Pearson’s “r” and significance levels between the satisfaction scale subscores. Because the results in Table 4 indicated that the fifth subscale, “Comfort,” was performing poorly, both the four- and five-indicator models were analyzed in the subsequent factor and CFA analyses.
Both the four- and five-factor results indicated the existence of an underlying structure containing a single latent factor (presumably, family satisfaction). Results of scree analysis for the five-factor model showed the first component had an eigenvalue equal to 3.712, which explained 74.2% of the variance. The second component had an eigenvalue equal to 0.622, explaining only an additional 12.4% of the variance. For the four-factor model, the first component had an eigenvalue slightly lower—3.331. However, the single factor explained 83.267% of the variance. The second component had an eigenvalue equal to 0.357, which explained an additional 8.915% of the variance.
Confirmatory Factor Analysis.
Table 6 reports the standardized regression weights, standard error estimates, and reliability estimates for the four- and five-factor models. Goodness-of-fit indices indicate the data fit both models very well. The model outlined in Figure 1 allows the variance to be partitioned into explained variance and error. Given this partition, the explained variance serves as an expected lower bound on the reliability coefficient.
Consequently, the reliability estimates in Table 6 may be interpreted as minimum estimates (13). The “Comfort” subscale performs poorly with regard to reliability (0.31), whereas the remaining subscales all have reliability estimates >0.60 regardless of the model used. The “Information” and “Support” subscales both have reliability estimates >0.80.
Internal Consistency and Discriminant Validity.
Data from this study indicate excellent results for four of the five factors—“Assurance,” “Information,” “Proximity,” and “Support”—with correlations all >0.835. The “Comfort” construct was marginally acceptable at 0.750. The CCFSS also demonstrated high DV; the results indicate that four of the five constructs assess traits independently of the others, which is the optimal situation (11). There is some overlap involving the “Support” subscale, which has correlations >0.80 with the “Assurance” (r = 0.830) and “Information” (r = 0.883) subscales. This is probably due to a conceptual interrelationship among the three subscales: the assurance a family feels and the information they receive both originate from critical care staff. As a result, these two higher correlations might be affected by transference, or the relationship between the source and topic. Although these two correlations are higher than desired to indicate discriminant validity, the overall averages for the four- and five-factor models are still within an acceptable range (below the 0.80 level). The exception is “Support” in the four-factor model, which has an average of 0.807—just slightly higher than the acceptable range.
Factor Analysis and CFA.
Both models clearly identified a single latent factor, as demonstrated by a single eigenvalue greater than one and by the scree analysis showing a marked drop between the first and second components. These results are promising because the data indicate only one latent variable—presumably family satisfaction with patient care—is being measured. It should be noted that the four-factor model has a 9% increase in explained variation (from 74% to 83%). This increase is substantial and, on its own, would argue for use of the four- over the five-factor model. The CFA analysis also supports this result.
The results of reliability testing are somewhat less than optimal across the subscales. Clearly, the “Comfort” subscale does not demonstrate acceptable reliability, whereas the “Information” and “Support” subscales—having values >0.80—demonstrate good reliability. The estimates for “Assurance” and “Proximity” indicate moderate reliability. However, given that these are minimum estimates (13), the reliability coefficients may be higher in practice. Test/retest reliability of the CCFSS remains an area that needs further research.
This study provides evidence that the Critical Care Family Satisfaction Survey—which yields five subscales, “Assurance,” “Information,” “Proximity,” “Support,” and “Comfort”—is reliable and valid. The recommendation to use this five-subscale instrument is made for several reasons: First, the internal consistency loss of 0.0226 is not enough to warrant removal of the “Comfort” subscale, especially because the internal consistencies for both the four- and five-factor models are >0.90. The results of the CFA indicate that this decline is directly related to the addition of the fifth subscale and, as a result, no additional unexplained variance is added to the four factors; using the five-factor model simply does not weaken the questionnaire. Second, the four-factor questionnaire can be administered and totaled independently of the “Comfort” subscale using the information contained in this report. In practice, researchers could duplicate the questionnaire from Table 3, leave out item 8, “Cleanliness/appearance of the waiting room,” and item 17, “Peacefulness of the waiting room,” and implement the four-factor model. Of course, the resulting 18-item questionnaire would require additional independent validation, although the expectation is that its results would be similar to those of the four-factor model reported here. Third, the five-factor model has slightly better discriminant validity, as all five subscales have average correlations <0.800, whereas the four-factor model has one subscale average (“Support”) >0.800 (0.807). Lastly, there is significant evidence—both intuitive (via construct and content analysis) and statistical (via factor analysis and confirmatory factor analysis)—for the existence of a fifth factor of patient satisfaction. Including the fifth, “Comfort,” subscale will allow researchers to assess and investigate this issue more carefully.
Sincere thanks go to Jay Cowen, MD (Northeastern Hospital, Arlington Heights, IL), and Kathy Baker, RN (Lehigh Valley Hospital, Allentown, PA), for providing technical advice and assistance in conducting this study.
1. Hall J, Dornan M: Meta-analysis of satisfaction with medical care: Description of research and analysis of overall satisfaction levels. Soc Sci Med 1988; 27: 637–644
2. Molter N: Needs of relatives of critically ill patients: A descriptive study. Heart Lung 1979; 8: 332–339
3. Rogers C: Needs of relatives of cardiac surgery patients during the critical care phase. Focus Crit Care 1983; 10: 50
4. Daley L: The perceived immediate needs of families with relatives in the intensive care setting. Heart Lung 1984; 13: 231–237
5. Leske J: Needs of relatives of critically ill patients: A follow-up. Heart Lung 1986; 15: 189–193
6. Price D, Forrester D, Murphy P, et al: Critical care family needs in an urban teaching medical center. Heart Lung 1991; 20: 183–187
7. Bryan Y, Fox M, Fuss M, et al: Transforming the critical care environment: Impact of restructuring on satisfaction and quality of care. In: Nurses and Continuous Improvement. Burlington, VT, University of Vermont, May 1998
8. Bubela N, Galloway S, McCay E, et al: The patient learning needs scale: Reliability and validity. J Adv Nurs 1990; 15: 1181–1187
9. Joreskog K, Sorbom K: LISREL 7: User’s Reference Guide. Mooresville, IN, Scientific Software, Inc., 1989, pp 25–27
10. Anastasi A: Psychological Testing, 5th ed. New York, Macmillan Publishing Co., 1982, pp 146–147
11. Haller K: Research instruments: Assessing validity. Am J Mat Child Nurs 1990; 15: 214
12. Campbell D: Recommendations for APA test standards regarding construct, trait and discriminant validity. Am Psychol 1960; 15: 546–553
13. Arbuckle J, Wothke W: AMOS 4.0: User’s Guide. Chicago, IL, SmallWaters Corp., 1999, pp 185–193
© 2001 by the Society of Critical Care Medicine and Lippincott Williams & Wilkins