

FEATURE ARTICLE

Acceptability of an Electronic Self-report Assessment Program for Patients With Cancer

WOLPIN, SETH PhD, MPH, RN; BERRY, DONNA PhD, RN; AUSTIN-SEYMOUR, MARY MD; BUSH, NIGEL PhD; FANN, JESSE R. MD; HALPENNY, BARBARA MA; LOBER, WILLIAM B. MD, MS; MCCORKLE, RUTH PhD, RN

CIN: Computers, Informatics, Nursing: November 2008 - Volume 26 - Issue 6 - p 332-338
doi: 10.1097/01.NCN.0000336464.79692.6a

Abstract

Eliciting symptom and quality-of-life information from patients is an important component of medical and nursing care processes. Patients are commonly given screening forms related to symptom and quality-of-life information to complete when they arrive in the reception area. Presumably, when the patient is called for his/her visit with a clinician, those written responses are appended to the medical chart and used as a basis for further assessment. Several problems exist with this paper-based approach, ranging from difficulty in integrating responses with the electronic medical record to requirements for manual scoring of scaled questionnaires to determine whether critical thresholds have been exceeded.

The routine use of computerized screening for symptom and quality-of-life information presents many advantages over paper-based approaches. Foundational work by Slack and colleagues1 in 1964 established the feasibility of computerized data collection, complete with question branching and automatically generated exception reports for physicians. The advantages of computerized screening are numerous: patient responses can be integrated into the electronic medical record in real time, questionnaires can be drawn from banks of validated instruments, scores can be automatically calculated and presented as graphical summaries with attention drawn to critical values, questionnaires can be customized to particular patient needs (which provides tailored question stems and skip patterns), and multimedia content can be embedded such that the process is transformed into an adaptive educational tool.

Within the last decade, researchers have studied computerized screening in a number of settings. Newell and colleagues2 found that using touch screens to assess physical side effects, anxiety, depression, and perceived needs was highly acceptable to 229 patients with cancer who were receiving chemotherapy, even among computer-naive users. Velikova and colleagues3 conducted a randomized trial with oncology patients and found that when summaries were immediately provided to clinicians, there was a statistically significant increase in the frequency with which quality-of-life issues were discussed. Carlson and colleagues4 examined the acceptability of administering a computerized quality-of-life instrument to 46 patients in a cancer center. At pretest, patients rated computerized data collection as less preferable than face-to-face methods and as acceptable as paper and pencil. At posttest, however, responses indicated a high level of acceptability and a significant shift in attitudes, with patients rating computerized collection as acceptable as face-to-face collection and preferable to paper-and-pencil collection.

Our interdisciplinary research team has developed an open-source Distributed Health Assessment and Intervention Research (DHAIR) platform. Designed to be administered to patients on any device capable of supporting a standards-compliant Web browser, the DHAIR platform includes a diverse set of tools for administering Web-based surveys, including question branching, tailored content, avatars, text to speech, interfaces for portable devices, and real-time data access for researchers. Technical components of the DHAIR platform have been described elsewhere.5 The platform has been used in a variety of health-related studies, including an examination of treatment decision making in prostate cancer,6 and is currently being used in studies of self-management for patients with dyspnea, virtual surrogate readers for health literacy, and sexual risk and antiretroviral medication adherence in an HIV-positive Peruvian population.

Current Study

Most recently, we implemented a randomized clinical trial exploring how delivering a real-time graphical summary of symptom and quality-of-life information affects clinical interactions and the treatment and referral patterns of physicians, physician assistants, and advanced practice nurses. This 3-year randomized controlled clinical trial, Electronic Self-Report Assessment-Cancer (ESRA-C), is the largest study to use the DHAIR framework and is approaching completion (PI: Berry D. NIH R01 NR 008726). Patients in a multispecialty oncology clinic complete validated symptom and quality-of-life measures on wireless touch-screen laptop computers before starting treatment (T1) and again approximately 6 to 7 weeks after the commencement of cancer treatment (T2). Half of these patients are assigned to the intervention group at the end of the second survey session, with a graphical summary of their survey responses provided to their care team. No graphical summary is provided if patients are in the control group; however, any responses indicating severe symptom distress, suicidal ideation, depression, or pain are communicated to the care team regardless of study group. By audiotaping clinical encounters and conducting medical chart reviews, we can determine whether the intervention increases provider-patient dialogue around symptom and quality-of-life management and what impact it has on treatment and referral patterns.

Purpose

The purpose of this article is to report patients' ratings of the acceptability of the ESRA-C symptom and quality-of-life program in a diverse clinical setting and to examine whether differences in acceptability can be attributed to demographics or to symptom and quality-of-life levels, such as depression and cognitive and emotional functioning.

METHODS

Sample

Research participants for the ESRA-C study were recruited from the Seattle Cancer Care Alliance (SCCA), a consortium among the University of Washington Medical Center, Fred Hutchinson Cancer Research Center (FHCRC), and Children's Hospital and Regional Medical Center in Seattle, WA. The SCCA provided care for 3609 new patients during fiscal year 2006, with the majority (85%) originating from Washington State (D. Meadearis, personal communication, 2006). Eligibility criteria for the current study were as follows: being evaluated for new radiation therapy, medical oncology therapy, or hematopoietic stem cell transplantation; at least 18 years of age; able to communicate in English; and competent to understand the study information and give informed consent. All procedures and protocols were initially approved by the University of Washington Human Subjects Division (APP00000089) and subsequently approved in years 2 and 3 by the Cancer Consortium Institutional Review Board (IRB) at the FHCRC (6210).

Between April 2005 and November 2006, a total of 698 eligible patients were invited to participate in the study, with 509 (72.9%) patients providing written consent. To date, 342 of these patients have completed a follow-up survey (T2). The 342 patients who have provided both a baseline and a follow-up survey represent the research sample within this analysis.

Survey Instruments

During the first survey session (T1; see the Procedures section below), participants were presented with an introductory screen explaining the purpose of the study, followed by nine demographic questions. Four validated questionnaires were presented during both the T1 and T2 (follow-up) survey sessions: the 13-item Symptom Distress Scale (SDS), the 30-item European Organization for Research and Treatment of Cancer Quality of Life Questionnaire (QLQ-C30, version 3),7 a single-item Pain Intensity Numerical Scale, and the nine-item Patient Health Questionnaire depression module (PHQ-9).8 The full PHQ-9 was triggered only if certain items on the PHQ-9, SDS, or QLQ-C30 exceeded a predetermined threshold. During the second survey session (T2), six acceptability items were presented after the PHQ-9 instrument. The acceptability questions were adapted with permission from the work of Carlson et al.4
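Conceptually, this conditional presentation is a simple threshold check over a handful of screening items. The Python sketch below is illustrative only: the item identifiers and cutoff values are assumptions, since the actual ESRA-C trigger rules are not published in this article.

```python
# Sketch of threshold-triggered branching into the full PHQ-9.
# Item names and cutoffs below are hypothetical, not the study's rules.
TRIGGER_THRESHOLDS = {
    "phq9_item1": 1,   # "Little interest or pleasure..." (0-3 scale)
    "phq9_item2": 1,   # "Feeling down, depressed..." (0-3 scale)
    "sds_outlook": 3,  # SDS outlook item (1-5 scale)
    "qlqc30_q24": 3,   # QLQ-C30 "Did you feel depressed?" (1-4 scale)
}

def should_present_full_phq9(responses: dict) -> bool:
    """Return True if any screening response meets its trigger cutoff."""
    return any(
        responses.get(item, 0) >= cutoff
        for item, cutoff in TRIGGER_THRESHOLDS.items()
    )
```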

Materials and Equipment

DEVELOPMENT AND TESTING

Development time for ESRA-C was approximately 6 months and involved rapid prototyping and extensive testing following the usability engineering lifecycle proposed by Mayhew.9 Usability testing was also conducted with a sample of proxy patients at a community center for adults with literacy needs, and minor revisions were made based on these results.10

SYSTEM ARCHITECTURE

The DHAIR platform was built on an open-source architecture comprising the Linux operating system, Apache Web server, MySQL database, and the PHP/Perl/Python programming languages (the LAMP stack). An administrative interface provided a survey-editing environment for researchers in which questions and response options could be entered and immediately deployed. Options for layout, question branching, forced response, and user control were also available within this interface.
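As an illustration of the kind of data model such a survey-editing environment implies, the sketch below shows one possible in-memory representation of questions, response options, forced response, and branching. It is a hypothetical model for exposition, not the DHAIR schema.

```python
# Hypothetical survey model: questions with options, a forced-response
# flag, and per-answer branch rules, resolved at presentation time.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Question:
    qid: str
    text: str
    options: list[str]                 # response options (radio/check box)
    forced_response: bool = False      # require an answer before "next"
    branch_on: dict[str, str] = field(default_factory=dict)  # answer -> next qid

@dataclass
class Survey:
    name: str
    questions: list[Question] = field(default_factory=list)

    def next_question(self, current: Question, answer: str) -> str | None:
        """Resolve the next question id, honoring any branch rule."""
        if answer in current.branch_on:
            return current.branch_on[answer]
        idx = self.questions.index(current)
        later = self.questions[idx + 1:]
        return later[0].qid if later else None
```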

SETTING AND SECURITY

The ESRA-C was presented to participants via laptop computers equipped with wireless network cards in the clinic reception areas or examination rooms before a provider visit. When completing the survey in reception areas, participants were seated in areas reserved for research studies and equipped with privacy partitions. If this was not possible, care was taken to locate seating a suitable distance from other patients. All connections to the wireless base stations were secured via media access control (MAC) address filtering, NT network logins, and 128-bit encryption over the 802.11g wireless standard. Several brands of laptops were used, including touch-screen laptops running Windows XP Professional (Microsoft, Redmond, WA) and "tablet" laptops that required use of a proprietary stylus and ran Windows XP Tablet.

USER/SURVEY INTERFACE AND OPERATION

To minimize the need for scrolling and to increase focus on individual questions, participants typically saw one survey question per screen, with response options arranged vertically or horizontally. Most response options were radio buttons or check boxes, which were redesigned into a larger format than native HTML form elements to accommodate the size of participants' fingers. Participants were able to change a response after making a selection, with all intermediate responses recorded by the server. The lower part of each screen was anchored by a graphical progress bar, flanked by previous and next buttons. Large font sizes and mid-tone colors were used on each screen, with minimal supplementary text, to increase usability.11,12 All survey responses and associated time stamps were sent to a secure Web server via an encrypted connection. At the conclusion of the survey, patients were shown a list of items that they may have purposefully or accidentally skipped and were invited to revisit the missed questions. Patients were also presented with a list of the entire set of previously completed questions and given the option of revising their answers.
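A minimal sketch of this append-only response logging, with server time stamps and end-of-survey detection of skipped items, might look like the following. The function and field names are hypothetical, consistent with the description above rather than taken from the DHAIR code.

```python
# Every submission is appended with a timestamp, so changed answers are
# preserved as intermediate responses; the latest entry per item wins.
import time

response_log: list[dict] = []

def record_response(participant_id: str, qid: str, value: str) -> None:
    response_log.append({
        "participant": participant_id,
        "question": qid,
        "value": value,
        "timestamp": time.time(),
    })

def skipped_questions(participant_id: str, all_qids: list[str]) -> list[str]:
    """Items the participant never answered, offered for review at the end."""
    answered = {r["question"] for r in response_log
                if r["participant"] == participant_id}
    return [q for q in all_qids if q not in answered]
```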

Procedures

Clinic scheduling staff or registered nurses asked patients if they were willing to meet with a member of the ESRA-C research team during a normally scheduled visit to discuss participating in a study about improving methods for evaluating patient symptoms and quality of life. Patients who met with research staff received a more in-depth explanation per IRB-approved informed consent procedures. This usually occurred in a private examination room or in the reception area set aside for research studies. If consent was obtained, the research staff created a participant record in the DHAIR platform before handing the laptop to the patient.

Patients were surveyed a second time (T2) approximately 6 to 7 weeks after beginning treatment. At the conclusion of the second survey, patients were greeted with a "Thank you" screen and a message asking them to return the laptop to the research staff. The staff member then entered a password to access an administrative screen. This action triggered an automated randomization script, which assigned the patient to the intervention or control group. If the patient was assigned to the intervention group, a two-page graphical summary of his or her responses was printed and placed on top of the chart. If the patient was assigned to the control group, no graphical summary was printed. For patients in both groups, an audio recorder was placed in the examination room to record the clinical interaction.
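The randomization step might be sketched as follows. Simple 1:1 random assignment is an assumption here; the article does not specify the allocation scheme.

```python
# Post-survey randomization sketch; 1:1 simple randomization assumed.
import random

def assign_group(participant_id: str) -> str:
    group = random.choice(["intervention", "control"])
    # In the trial, intervention assignment triggered printing of a
    # two-page graphical summary for the care team; control did not.
    if group == "intervention":
        print(f"Print graphical summary for participant {participant_id}")
    return group
```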

Safety Review

After each patient completed the survey, the survey administrator examined a "Safety Net Review Screen" indicating whether any survey responses for severe distress, suicidal ideation, depression, or pain exceeded threshold values. If any of these thresholds were exceeded, the survey administrator communicated this information to the clinical care team and documented respective actions in the system.
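Conceptually, the safety net is a set of threshold checks over computed scores. In the sketch below, the SDS cutoff of 33 and PHQ-9 cutoff of 10 match values reported in the Results; the remaining item names and cutoffs are assumptions for illustration.

```python
# "Safety Net" sketch: flag any measure at or above its threshold.
SAFETY_RULES = {
    "sds_total": 33,        # severe symptom distress (refs 13, 14)
    "phq9_total": 10,       # at least moderate depression (ref 8)
    "phq9_item9": 1,        # any suicidal ideation response (assumed)
    "pain_intensity": 7,    # severe pain on the 0-10 scale (assumed)
}

def safety_flags(scores: dict) -> list[str]:
    """Return the measures whose scores meet or exceed their thresholds."""
    return [m for m, cutoff in SAFETY_RULES.items()
            if scores.get(m) is not None and scores[m] >= cutoff]
```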

Data Analysis

This acceptability evaluation used descriptive and univariate statistics to examine data collected within the ongoing ESRA-C clinical trial. Data were downloaded as a comma-separated values (CSV) file from the administrative interface of the DHAIR platform. The CSV file was then imported into SPSS version 13 (SPSS, Chicago, IL), with value and variable labels assigned through an automatically generated SPSS syntax file. Data were then examined for irregularities and data quality issues before the analysis was conducted. Two-tailed independent-group t tests with a significance level of α = .05 were computed to examine differences between demographic variables and acceptability items. Demographic items and symptom and quality-of-life information were dichotomized and used as independent variables. Time to survey completion was also considered as a demographic variable; it was dichotomized using a median split into fast and slow groups. One value fell directly on the median and was grouped into the fast group.
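An analogous workflow in Python, with pandas/SciPy standing in for SPSS, is sketched below; the file and column names are hypothetical, and the median-split rule places the on-median value in the fast group, as described above.

```python
# Sketch of the median-split and t-test analysis (not the SPSS syntax).
import pandas as pd
from scipy import stats

df = pd.read_csv("esra_c_export.csv")          # hypothetical file name

median_time = df["completion_seconds"].median()
# "fast" includes the one value that fell directly on the median
df["speed"] = df["completion_seconds"].apply(
    lambda t: "fast" if t <= median_time else "slow")

fast = df.loc[df["speed"] == "fast", "helpfulness"].dropna()
slow = df.loc[df["speed"] == "slow", "helpfulness"].dropna()
t_stat, p_value = stats.ttest_ind(slow, fast)  # two-tailed by default
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")  # compare against alpha = .05
```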

RESULTS

Demographics

Demographics of the study participants are shown in Table 1. The sample was predominantly white and highly educated, with a majority reporting using computers in the home and in the workplace often or very often.

Table 1: Demographics by Service Line

Survey Administration

Participants completed the computerized survey in an average (SD) of 15 minutes and 20 seconds (6.26 minutes). Because patients could be interrupted by family members and clinical staff, if the total survey time exceeded 20 minutes, the individual time intervals between survey responses were examined, and any intervals longer than 10 minutes were removed. Research staff logged any technical issues encountered during a session in a Web-based study tracking system. A review of these notes found five flagged as "technical-major," indicating that technical issues forced the survey session to be abandoned. A content analysis indicated that all were related to disruptions in the wireless network; each of these survey sessions was rescheduled. An additional 30 notes were flagged as "technical-minor," indicating that issues were encountered but the survey session was completed. Most of these issues involved momentary interruptions of the wireless network and a poorly calibrated stylus on one of the tablet laptops. Momentary interruptions, when corroborated by study notes, were removed from the calculation of the average survey time. Nearly 20% of the patients answered questions out of sequence, meaning that they navigated backward through the survey to change answers or revisited questions presented on the list of "skipped questions" near the end of the survey.
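The timing adjustment described in this section amounts to summing the gaps between successive responses and, when a session ran over 20 minutes, discarding gaps longer than 10 minutes before recomputing the total. A minimal sketch, with only the two thresholds taken from the text:

```python
# Adjusted completion time: drop long inter-response gaps (assumed to
# be interruptions) from sessions that exceeded 20 minutes.
def adjusted_completion_seconds(timestamps: list[float]) -> float:
    """timestamps: server times of successive responses, in seconds."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    total = sum(gaps)
    if total > 20 * 60:                           # session over 20 minutes
        gaps = [g for g in gaps if g <= 10 * 60]  # remove gaps > 10 minutes
    return sum(gaps)
```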

Symptom and Quality-of-Life Data

Patients' scores on the emotional and cognitive functioning subscales of the QLQ-C30 at T2 ranged from 76 to 78 (Table 2). The possible range for these subscales is 0 to 100, with higher scores reflecting a higher (healthier) level of functioning. For the PHQ-9, if participants triggered the scale and skipped one or two questions, the missing values were replaced with their item average score; however, if more than two questions were skipped, the PHQ-9 score was treated as missing. When valid percentages were computed, nine missing cases resulted in a total sample size of 333. More than a fifth of the sample (n = 76, 22.80%) reported PHQ-9 scores of 10 or greater, representing at least moderate depression. If either of the two SDS intensity items (nausea and pain) was missing, it was coded as 1; however, if more than these two items were missing, the sum score was coded as missing. All responses were indexed from 1, resulting in a possible range of 13 to 65. When valid percentages were computed, 19 missing cases yielded a total sample size of 323. Of these valid cases, 52 subjects (16.10%) reported summary SDS scores of at least 33, a suggested cutoff point for severe distress.13,14 In general, participants reported low levels of pain, with a mean (SD) score of 2.2 (2.02) on the 0 to 10 Pain Intensity Numerical Scale.
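The scoring and missing-data rules described in this paragraph can be summarized in a short sketch; the item names are placeholders for the real instrument items.

```python
# Scoring rules as described in the text: prorate up to two missing
# PHQ-9 items; default missing SDS intensity items to 1.
from statistics import mean

def phq9_score(items: list):
    """items: 9 values on the 0-3 scale, None where skipped."""
    answered = [v for v in items if v is not None]
    if len(items) - len(answered) > 2:
        return None                    # more than two skipped: missing
    item_avg = mean(answered)          # replace skips with item average
    return sum(v if v is not None else item_avg for v in items)

def sds_score(items: dict):
    """items: 13 SDS responses keyed by name, 1-5 each (range 13-65)."""
    filled = dict(items)
    for intensity in ("nausea_intensity", "pain_intensity"):
        if filled.get(intensity) is None:
            filled[intensity] = 1      # missing intensity item coded as 1
    if any(v is None for v in filled.values()):
        return None                    # any other missing item: missing
    return sum(filled.values())
```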

Table 2: Symptom and Quality-of-Life Subscales

Acceptability Scores

With response options ranging from 1 (not at all) to 5 (very much), the six acceptability questions presented at the end of the T2 survey garnered high acceptability, with five items having a mean greater than 4.0 (Table 3).

Table 3: Acceptability

PREDICTORS OF ACCEPTABILITY

A number of significant differences were found when exploring the relationships between demographic variables and acceptability items. There was a significant effect for sex, t(332) = −2.47, P = .014, with women reporting higher scores (mean [SD] = 4.14 [0.942]) than men (mean [SD] = 3.88 [0.962]) in response to "How much did you enjoy using this computer program?" Similarly, women reported higher overall satisfaction (mean [SD] = 4.40 [0.793]) with the computer program, t(330) = −2.243, P = .026, than men (mean [SD] = 4.20 [0.859]). There was a significant effect for age, t(214) = 2.16, P = .032, with participants younger than 60 years reporting higher scores (mean [SD] = 3.97 [0.927]) in response to "How helpful to you was this computer program in describing your symptoms and quality of life?" than those in the older age category (mean [SD] = 3.72 [1.049]). However, those older than 60 years reported that the time it took to complete the program was more acceptable (mean [SD] = 4.44 [0.967]) than did those younger than 60 years (mean [SD] = 4.16 [1.314]), t(187) = 1.99, P = .048.

A number of significant effects were found with respect to speed of completion. Participants who took longer to finish the survey reported higher scores (mean [SD] = 4.01 [0.954]) in response to "How helpful to you was this computer program in describing your symptoms and quality of life?" than did those who finished more quickly (mean [SD] = 3.75 [0.987]), t(331) = 2.38, P = .018. Ease of use also differed significantly, t(243) = 2.99, P = .003, with slow survey takers reporting that the program was easier to use (mean [SD] = 4.93 [0.311]) than did fast survey takers (mean [SD] = 4.78 [0.608]). Similarly, questions were reported as more understandable, t(301) = 2.54, P = .012, by slow survey takers (mean [SD] = 4.85 [0.432]) than by fast survey takers (mean [SD] = 4.71 [0.585]). The time the survey took was also more acceptable, t(309) = 2.50, P = .013, to slow survey takers (mean [SD] = 4.49 [0.948]) than to fast survey takers (mean [SD] = 4.19 [1.234]).

Lastly, there was a significant effect for distress, t(314) = 2.02, P = .044, with nonseverely distressed participants reporting higher levels of overall satisfaction (mean [SD] = 4.34 [0.799]) compared with severely distressed patients (mean [SD] = 4.08 [0.954]).

DISCUSSION

Principal Results

This acceptability analysis yielded several interesting findings within a diverse oncology patient sample. The primary finding was that participants were able to use ESRA-C quickly and without difficulty in a real-world clinical setting and that they were quite satisfied with the ESRA-C platform. The fact that nearly 20% answered questions out of sequence points to the need for flexible navigation systems, notably mechanisms for returning to prior questions to re-evaluate responses. The mean survey administration time of 15 minutes 20 seconds is feasible within a busy clinical setting. The symptom data collected in this study are consistent with previously reported data from prior work.15 Although we found several significant differences in acceptability when examined by demographic characteristics and quality-of-life measures, it is difficult to determine whether these differences are clinically meaningful, given that acceptability levels were relatively high across all demographic categories. Those younger than 60 years found the program more helpful; however, across all of the acceptability items, helpfulness was the lowest-scored item. The overall low score for this item may reflect participants' awareness that there was only a 50% chance their clinician would receive a summary of their responses and that they themselves would not receive a copy. Those older than 60 years, in contrast, reported that the time the survey took was more acceptable, which could be due to having more time available given their proximity to retirement age.

When evaluating differences between those who finished quickly and those who took longer, several interesting patterns emerged. Notably, those who took longer to finish found the program more helpful and easier to use, found the questions more understandable, and found the time it took to complete the survey more acceptable. The take-home message may be that time should be specifically set aside in the clinic for electronic symptom and quality-of-life data collection and that it is important not to rush patients through the process.

Comparison With Prior Work

Our findings are consistent with earlier work in smaller samples by Carlson et al4 and Newell et al,2 in which high levels of acceptability were reported after using a computerized program for symptom reporting in a cancer clinic. Our findings were also consistent with acceptability of touch-screen symptom reporting by patients in the research of Velikova and colleagues.3 Our research adds to the field by evaluating the impact of demographic variables and distress levels on acceptability.

Limitations

One limitation of this study is that the available sample comprised mostly well-educated participants with generally high levels of computer experience, drawn largely from the Pacific Northwest. Furthermore, the acceptability questions were asked only at the second survey session, which could reflect a biased sample: perhaps participants who disliked the DHAIR platform during the first session elected not to participate in the second survey. However, of the 509 patients who consented, 57 (11%) did not return for a second survey; only two of these were voluntary withdrawals, and 17 patients were deceased. Although our results indicate relationships between these variables, we cannot specify causation; other factors, such as the waiting room environment and symptom status, may have influenced acceptability.

Conclusions

This analysis has confirmed that we have created an application for collecting symptom and quality-of-life information that is easy for patients to use and acceptable across a range of user characteristics, including age, sex, and service line. We intend to build on our work by using the DHAIR platform in other modalities toward systems that not only are more efficient but also ensure that the patient's symptoms are documented and needs are identified at times and locations convenient to the patient.

REFERENCES

1. Slack W, Hicks P, Reed C, Van Cura LJ. A computer-based medical history system. N Engl J Med. 1966;274(4):194-198.
2. Newell S, Girgis A, Sanson-Fisher RW, Stewart J. Are touchscreen computer surveys acceptable to medical oncology patients? J Psychosoc Oncol. 1997;15(2):37-44.
3. Velikova G, Brown JM, Smith AB, Selby PJ. Computer-based quality of life questionnaires may contribute to doctor-patient interactions in oncology. Br J Cancer. 2002;86(1):51-59.
4. Carlson LE, Speca M, Hagen N, Taenzer P. Computerized quality-of-life screening in a cancer pain clinic. J Palliat Care. 2001;17(1):46-52.
5. Dockrey MR, Lober WB, Wolpin SE, Rae LJ, Berry DL. Distributed health assessment and intervention research software framework. AMIA Annu Symp Proc. 2005:940.
6. Rae LJ, Lober WB, Wolpin SE, Dockrey MR, Ellis WJ, Berry DL. Acceptability of an Internet treatment decision support program for men with prostate cancer. AMIA Annu Symp Proc. 2005:1091.
7. Aaronson NK, Ahmedzai S, Bergman B, et al. The European Organization for Research and Treatment of Cancer QLQ-C30: a quality-of-life instrument for use in international clinical trials in oncology. J Natl Cancer Inst. 1993;85(5):365-376.
8. Kroenke K, Spitzer R, Williams J. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med. 2001;16(9):606-613.
9. Mayhew DJ. The Usability Engineering Lifecycle: A Practitioner's Handbook for User Interface Design. San Francisco, CA: Academic Press; 1999.
10. Youngworth SJ. Electronic Self Report Assessment-Cancer (ESRA-C): Readability and Usability in a Sample with Lower Literacy. Seattle, WA: University of Washington School of Nursing; 2005.
11. Holt BJ, Komlos-Weimer M. Older Adults and the World Wide Web: A Guide for Web Site Creators. Chevy Chase, MD: SPRY Foundation; 1999.
12. Kinzie MB, Cohn WF, Julian MF, Knaus WA. A user-centered model for Web site design: needs assessment, user interface design, and rapid prototyping. J Am Med Inform Assoc. 2002;9(4):320-330.
13. McCorkle R, Young K. Development of a Symptom Distress Scale. Cancer Nurs. 1978;1(5):373-378.
14. McCorkle R, Cooley ME, Shea JA. A User's Manual for the Symptom Distress Scale. Philadelphia, PA: University of Pennsylvania; 1998.
15. Mullen KH, Berry DL, Zierler BK. Computerized symptom and quality-of-life assessment for patients with cancer, part II: acceptability and usability. Oncol Nurs Forum. 2004;31(5):E84-E89.
Keywords:

Internet; Quality of life; Survey methods

© 2008 Lippincott Williams & Wilkins, Inc.