The concept of patient-centered care, in which patients are the “final arbiters in deciding what treatment and care they receive,” is core to efforts to transform healthcare in the United States (Institute of Medicine, 2001). Within patient-centered care, the needs and values of the patient provide the basis for individualized care and decisions; yet all too often, patients are unable to articulate or learn how treatment may affect what is most important to them (Institute of Medicine, 2001). Communication and decision-making become more meaningful when the impact of illness and its treatment on the individual is assessed from the patient perspective (Bowling, 1995; Lindblad, Ring, Glimelius, & Hansson, 2002; Patel, Veenstra, & Patrick, 2003).
Patients and clinicians are more likely to discuss health-related quality of life (HRQL) issues when data are available to them. There is a need for patient-relevant outcome measures that promote the patient perspective in healthcare discussions and decisions occurring during clinical interactions (Aburub, Gagnon, Rodríguez, & Mayo, 2016; Atkinson & Rubinelli, 2012; Hall, Kunz, Davis, Dawson, & Powers, 2015; McCleary et al., 2013).
The three-step patient-generated index (PGI; Ruta, Garratt, Leng, Russell, & MacDonald, 1994) provides a novel approach to the measurement of HRQL that accounts for individual values and preferences. First, patients identify the areas most important to them that are affected by cancer and its treatment. Second, they score each item for severity. Third, they prioritize the importance of the items. The PGI, historically available only in paper format, has been studied in cancer (Camilleri-Brennan, Ruta, & Steele, 2002; Martin, Camfield, Rodham, Kliempt, & Ruta, 2007; Tang, Oh, Scheer, & Parsa, 2014; Tavernier, Totten, & Beck, 2011; Tavernier, Beck, Clayton, & Pett, 2011). A computerized version, however, could address the documented navigation and computation errors that threaten the content validity of the paper PGI (Tavernier, Totten, & Beck, 2011). The purpose of this study was to refine and evaluate the usability and acceptability of an electronic version of the PGI (ePGI) prototype in the outpatient radiation oncology setting.
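The second and third steps combine into a single summary index. As a hedged illustration only (the exact scales and weighting used by the ePGI are not specified here), the following sketch follows the commonly cited paper-PGI convention: each area rated 0–100, with 100 meaning “exactly as I would like to be,” and a budget of 60 importance points to allocate across areas. The area names and numbers are invented for the example.

```python
# Hypothetical sketch of a PGI-style index; scales and weighting follow one
# common paper-PGI convention and are assumptions, not the ePGI's actual scheme.

def pgi_index(areas):
    """areas: list of (name, rating_0_to_100, importance_points) tuples.

    Returns a 0-100 index: each area's rating weighted by the importance
    points the patient allocated from a 60-point budget.
    """
    total_points = sum(pts for _, _, pts in areas)
    if total_points > 60:
        raise ValueError("importance points exceed the 60-point budget")
    return sum(rating * pts for _, rating, pts in areas) / 60

# Example: a patient lists three affected areas (invented values).
responses = [
    ("fatigue", 30, 30),           # far from desired state, most important
    ("sleep", 60, 20),
    ("walking the dog", 50, 10),
]
print(round(pgi_index(responses), 1))  # -> 43.3
```

A higher index indicates HRQL closer to the patient's desired state in the areas that matter most to that patient, which is what makes the measure individualized rather than item-standardized.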
Investigators developed a prototype based on the three-step PGI (Ruta et al., 1994). The initial electronic prototype consisted of five screens with navigation and completion processes incorporated to address documented navigation and computation errors of the paper and pencil version (Tavernier, Totten, & Beck, 2011).
Investigators obtained institutional review board approval prior to beginning the study. Clinic liaisons assisted with identifying and inviting eligible participants; the investigators obtained written consent. The study occurred at a large outpatient radiation department with data collected during routine appointments.
All full-time oncology physicians and registered nurses working in the radiation outpatient clinic were eligible and invited to participate. Investigators used convenience sampling methods to accrue adult patients who had been receiving radiation treatment at the study site for at least 2 weeks at the time of consent, were able to speak and read English, and were being seen by a consenting clinician.
The study had two phases (Figure 1). In Phase 1, the investigators implemented an end-user adaptive agile design approach (Gustafson, 2011; Nielsen, 2000; Wolpin & Stewart, 2011) using cognitive interviews (Willis, 2005) and direct observation to examine and improve the usability (ease of use, understandability, navigational elements) and acceptability (interface layout and visual design) of the ePGI prototype (Office of Disease Prevention and Health Promotion, 2016). The agile approach focused development of the ePGI on the end user, with each iteration tested until no further changes were suggested. In the second phase, investigators used a survey and structured interview to evaluate patient, nurse, and physician perceptions of the usability and acceptability of the ePGI at the point of care.
Phase 1 cognitive interviews (Table 1) focused on the comprehension, navigation, layout, and difficulties experienced or observed during completion of the ePGI. Phase 2 interviews addressed using the ePGI at the point of care (Table 2). For Phase 2, the investigators developed a 19-item survey of previous computer use and acceptability and usability of the ePGI, loosely adapted from other related survey questions (Basch et al., 2005; Carlson, Speca, Hagen, & Taenzer, 2001; Clark, Bardwell, Arsenault, DeTeresa, & Loscalzo, 2009). The ePGI and surveys were completed using tablet computers, and data were automatically stored on a secure server using Research Electronic Data Capture (REDCap).
Patients in Phase 1 completed the ePGI prototype using a touch screen tablet provided by the principal investigator. Participants received only the instructions embedded within the prototype. The investigator observed each patient and provided directions or answered questions only if the patient was unable to proceed, taking notes of hesitations, completion errors, and verbalized difficulties for reference during the subsequent interview. Cognitive interviews were recorded and assessed the ease of use, understandability, navigational elements, interface layout, visual design, and potential usefulness of the ePGI. Patient recommendations were incorporated into the next iteration, which was tested on the same and additional patients. The process was repeated until there were no observed difficulties with completion of the ePGI and no further patient suggestions for improvement.
In Phase 2, patients completed the ePGI on a computer tablet once a week for two consecutive weeks before seeing the clinician; the patient and treating clinician had the opportunity to share the ePGI results on the tablet during the visit. In the first week, patients answered survey questions about previous computer use and completed the ePGI. In the second week, the patient completed the ePGI, usability and acceptability survey questions, and interview. Clinicians were interviewed individually at an agreed-upon time after all patients had completed the study.
Notes and recordings of Phase 1 interviews were reviewed by the first author for content related to suggestions for improving the ePGI. All suggestions were discussed with a bioinformatics professional to resolve any conflicting feedback and then incorporated into the ePGI for testing in the next iteration. Recorded interview data from Phase 2 were coded by the first author and analyzed directly from the recordings using content analysis. Quantitative data were downloaded from REDCap and analyzed using Excel for Windows. Because this was a pilot study with a small sample size, only descriptive analyses are reported.
During Phase 1, three iterations of testing and refining the ePGI, involving seven patients, were required to obtain a usable ePGI: one that patients could complete easily, describe screen by screen in terms congruent with the investigators' intent, and finish without additional questions about completion or suggestions for improvement. Cognitive interview results led to prototype changes in screen content, spacing, and language. A sixth screen providing a summary of patient responses and the overall HRQL index score was added based on feedback.
In Phase 2, 15 patients completed the ePGI and interview; 12 (80%) were women, and 12 (80%) were older than 60 years of age. Fourteen patients completed the survey questions (Table 3); six (42%) had little or no computer touch screen experience, three of whom had never used any type of computer prior to the study. Patients required no coaching, or less than 30 seconds of it, when completing the ePGI. Nearly all participants rated the ePGI as “easy” or “very easy” to use, navigate, understand, and follow instructions. Less than half stated they frequently shared the type of information they entered on the ePGI with their physician or nurse; however, most (n = 8, 67%) felt the information on the ePGI would be useful in making decisions regarding their disease or treatment.
Patient interview responses provided additional detail to the survey findings. Those who shared the ePGI results verbally or from the tablet with their doctor or nurse (n = 6 and 9, respectively) during the visit initiated the discussion, not the clinician. All patients stated they consider the areas they listed on the ePGI when making treatment decisions but are often unsure how the treatment will affect the areas identified. All patient participants also stated they would like to share the ePGI survey with clinicians even if uncertain if or how it might help.
Eight radiation oncology clinicians (four nurses and four physicians) participated in the study. They all described the typical visit as clinician driven, focused on assessing for common side effects of radiation and those related to the anatomical features within the radiation field. Patient quality of life was defined predominantly (n = 6) as minimal symptomatology and the ability to provide self-care. Two clinicians defined it as enjoying one's life, patient well-being, and lack of bother. Five clinicians stated they were familiar with patient-reported outcome measures but had not seen them widely used in practice. All eight clinicians articulated the lack of time and possible disruption of patient flow as significant barriers to using patient-reported outcomes in practice.
The predominant theme voiced by clinicians after a demonstration of the ePGI was its potential value in starting a dialogue about quality of life issues, revealing infrequent or unusual effects of treatment, and assisting with symptom management. All clinicians felt the ePGI could be used in practice, provided their concerns were addressed: disruption of patient flow, linking the ePGI to the patient's electronic health record, and having adequate resources to respond to patient needs. All but one clinician stated patients did not typically share ePGI-type information with them during visits. Four clinicians were perplexed by some shared responses on the ePGI (Table 4), not understanding how the areas listed were related to the patient's treatment. For example, a patient listed a pet as most important and affected by cancer and its treatment. The clinician expressed “total befuddlement on how to explore the answer or if I would be able to do anything about it” and did not pursue it further. In the interview, this patient described being unable to walk or train the dog because of feeling too tired and short of breath when walking, illustrating the impact of fatigue on daily life.
This study is the first known use of an electronic platform for the PGI, allowing ePGI completion from any Internet-capable device. Responses support the acceptability and usability of the final prototype at the point of care by patients and clinicians. Although clinician concerns about integrating patient-reported outcomes into the electronic health record are understandable, examples of strategies to do so are described in the literature (Bennett, Jensen, & Basch, 2012; Berry et al., 2011; Chung & Basch, 2015; Lobach et al., 2016; McCleary et al., 2013; Wagner et al., 2015). The small pilot sample precludes generalization of findings. Nonetheless, the perplexity felt by clinicians over patient responses demonstrates a lack of correspondence between how patients and clinicians view quality of life. It is important for clinicians to explore any perplexing reports during the patient visit, as doing so may reveal previously unassessed effects of treatment.
The ePGI seems a simple and practical approach to restructure patient–provider interactions and enhance meaningful dialogue at the point of care. Moreover, using a tool such as the ePGI allows for the expression of patient voice and personalized care. Further study evaluating the effects of the ePGI on communication and decision-making is needed.
Aburub A. S., Gagnon B., Rodríguez A. M., & Mayo N. E. (2016). Using a personalized measure (patient generated index (PGI)) to identify what matters to people with cancer. Supportive Care in Cancer, 24, 437–445.
Atkinson S., & Rubinelli S. (2012). Narrative in cancer research and policy: Voice, knowledge and context. Critical Reviews in Oncology/Hematology, 84(Suppl. 2), S11–S16.
Basch E., Artz D., Dulko D., Scher K., Sabbatini P., Hensley M., … Schrag D. (2005). Patient online self-reporting of toxicity symptoms during chemotherapy. Journal of Clinical Oncology, 23, 3552–3561.
Bennett A. V., Jensen R. E., & Basch E. (2012). Electronic patient-reported outcome systems in oncology clinical practice. CA: A Cancer Journal for Clinicians, 62, 336–347.
Berry D. L., Blumenstein B. A., Halpenny B., Wolpin S., Fann J. R., Austin-Seymour M., … McCorkle R. (2011). Enhancing patient-provider communication with the electronic self-report assessment for cancer: A randomized trial. Journal of Clinical Oncology, 29, 1029–1035.
Bowling A. (1995). What things are important in people's lives? A survey of the public's judgements to inform scales of health related quality of life. Social Science & Medicine, 41, 1447–1462.
Camilleri-Brennan J., Ruta D., & Steele R. C. (2002). Patient generated index: New instrument for measuring quality of life in patients with rectal cancer. World Journal of Surgery, 26, 1354–1359.
Carlson L. E., Speca M., Hagen N., & Taenzer P. (2001). Computerized quality-of-life screening in a cancer pain clinic. Journal of Palliative Care, 17, 46–52.
Chung A. E., & Basch E. M. (2015). Incorporating the patient's voice into electronic health records through patient-reported outcomes as the “review of systems.” Journal of the American Medical Informatics Association, 22, 914–916.
Clark K., Bardwell W. A., Arsenault T., DeTeresa R., & Loscalzo M. (2009). Implementing touch-screen technology to enhance recognition of distress. Psychooncology, 18, 822–830.
Gustafson A. (2011). Adaptive web design: Creating rich experiences with progressive enhancement. Chattanooga, TN: EasyReaders, LLC.
Hall L. K., Kunz B. F., Davis E. V., Dawson R. I., & Powers R. S. (2015). The cancer experience map: An approach to including the patient voice in supportive care solutions. Journal of Medical Internet Research, 17, e132.
Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academies Press.
Lindblad A. K., Ring L., Glimelius B., & Hansson M. G. (2002). Focus on the individual—Quality of life assessments in oncology. Acta Oncologica, 41, 507–516.
Lobach D. F., Johns E. B., Halpenny B., Saunders T. A., Brzozowski J., Del Fiol G., … Cooley M. E. (2016). Increasing complexity in rule-based clinical decision support: The symptom assessment and management intervention. Journal of Medical Internet Research, 4, e36.
Martin F., Camfield L., Rodham K., Kliempt P., & Ruta D. (2007). Twelve years' experience with the patient generated index (PGI) of quality of life: A graded structured review. Quality of Life Research, 16, 705–715.
McCleary N. J., Wigler D., Berry D., Sato K., Abrams T., Chan J., … Meyerhardt J. A. (2013). Feasibility of computer-based self-administered cancer-specific geriatric assessment in older patients with gastrointestinal malignancy. The Oncologist, 18, 64–72.
Nielsen J. (2000). Why you only need to test with 5 users. Retrieved from https://www.nngroup.com/articles
Office of Disease Prevention and Health Promotion. (2016). Health literacy online: A guide for simplifying the user experience. Retrieved from https://health.gov/healthliteracyonline/
Patel K. K., Veenstra D. L., & Patrick D. L. (2003). A review of selected patient-generated outcome measures and their application in clinical trials. Value in Health, 6, 595–603.
Ruta D. A., Garratt A. M., Leng M., Russell I. T., & MacDonald L. M. (1994). A new approach to the measurement of quality of life: The patient-generated index. Medical Care, 32, 1109–1126.
Tang J. A., Oh T., Scheer J. K., & Parsa A. T. (2014). The current trend of administering a patient-generated index in the oncological setting: A systematic review. Oncology Reviews, 8, 245.
Tavernier S. S., Beck S. L., Clayton M. C., & Pett M. A. (2011). Validity of the patient generated index as a quality of life measure in radiation oncology. Oncology Nursing Forum, 38, 319–328.
Tavernier S. S., Totten A., & Beck S. L. (2011). Assessing content validity of the patient generated index using cognitive interviews. Qualitative Health Research, 21, 1729–1738.
Wagner L. I., Schink J., Bass M., Patel S., Diaz M. V., Rothrock N., … Cella D. (2015). Bringing PROMIS to practice: Brief and precise symptom screening in ambulatory cancer care. Cancer, 121, 927–934.
Willis G. B. (2005). Cognitive interviewing: A tool for improving questionnaire design. Thousand Oaks, CA: Sage.
Wolpin S., & Stewart M. (2011). A deliberate and rigorous approach to development of patient-centered technologies. Seminars in Oncology Nursing, 27, 183–191.