Patient experience surveys are widely used to collect information about patient experiences and satisfaction with care. In the United States, patient experience surveys are increasingly being used for formal reporting,1 including certification of patient-centered medical homes,2 online reporting and comparison,3,4 and performance-based reimbursement programs.5,6 The Consumer Assessment of Healthcare Providers & Systems Clinician & Group Survey (CG-CAHPS), the gold standard for collecting patient experience data from outpatient settings in the United States, is used in multiple internal and external care improvement programs across health care settings,7 including pay-for-performance programs like the Medicare Shared Savings Program8 and the Medicare Access and CHIP Reauthorization Act’s (MACRA) Merit-based Incentive Payment System (MIPS).9
Although the CAHPS is traditionally administered by mail, survey vendors offer alternate modes of administration, including phone-based and web-based administration.10 Studies using CAHPS data have found response rates ranging from 34% to 61%,11–13 with lower response rates among female, nonwhite, younger, and limited English-proficient patients.13–15 In particular, underserved patient populations facing greater challenges with literacy, numeracy, and/or English proficiency may face greater difficulty with existing CAHPS questionnaires,13,15,16 underscoring the need to explore new ways to engage diverse patients in reporting on their care.
Thus far, there has been little discussion about whether CAHPS captures the true domains of patient health care experience among patients receiving care in safety net settings, or whether data collection via mobile technology might result in higher response rates from more representative populations. Therefore, we sought to evaluate the acceptability and usability of a low-literacy CG-CAHPS adaptation administered via a tablet device among a diverse group of patients from a safety net health care setting.
Study Setting and Participants
The study took place at the San Francisco Health Network, which provides primary care to over 63,000 patients a year, of whom 35% are Latino, 25% are Asian, 17% are black, and 17% are white. The majority of patients have Medicaid, Medicare, or are uninsured.
Low-literacy and Spanish Adaptation
We first adapted the CAHPS (Clinician and Group version 3.0)17 to improve readability and shorten the survey length. We consulted strategies used by previous researchers to improve readability18 and aimed for a 5th-grade reading level or less using the Flesch-Kincaid Grade Level test,19,20 following recommendations for low-literacy audiences.21 Strategies to improve readability included replacing multisyllabic words, breaking up multiclause questions into multiple sentences, and reducing the number of words per question. To further improve readability, we removed the phrase “In the last 6 months” from each question and prefaced the entire survey with a screen that stated, “These questions ask about your own health care, in the last 6 months.” We shortened the survey from 31 questions to 16 questions, based on recommendations from a validated study22 and feedback from patient advisory groups about the importance of specific questions to their clinical experience. For example, educational attainment was removed from the adapted survey due to feedback from patient advisors about the stigma of the question and perceived impertinence to their medical care. On the basis of feedback from patient advisors, we adjusted the response options for reporting sex and race/ethnicity, and added additional response options for reporting nonapplicability of questions regarding test results and prescription medications (specific changes in Table 2). On the basis of guidance from a study validating a shorter CAHPS, we added a question to assess timeliness of care.22 In the remaining 16 questions, we improved readability from an average grade level of 7.0 to 4.6. To create a low-literacy Spanish adaptation, 2 members of the research team performed a translation and back-translation.
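The readability target described above can be checked mechanically. As an illustrative sketch only (not the tool the authors used; production readability software relies on more careful syllable counting than the vowel-group heuristic assumed here), the Flesch-Kincaid Grade Level of a survey item can be estimated as:

```python
import re

def count_syllables(word: str) -> int:
    # Heuristic: count groups of consecutive vowels, subtracting a
    # likely-silent trailing 'e'. Approximate, but adequate for ranking.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def fk_grade(text: str) -> float:
    # Flesch-Kincaid Grade Level:
    #   0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Shorter sentences and fewer multisyllabic words both lower the score, which is why the adaptation strategies above (splitting multiclause questions, replacing long words) move items toward the 5th-grade target.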
A total of 14 questions were deemed by the research team to retain the original structure of the corresponding CAHPS survey item, while 2 (questions 8 and 9) were deemed to have been more substantially altered through the patient advisory board feedback process (Table 2). For example, the CAHPS subscale on care coordination involved questions about the frequency of follow-up about medications or test results, which patients found confusing if there was no expected standard process for those activities within the clinic (eg, providers and staff did not follow up with test results if they were negative or deemed “in range”).
Technical Adaptation for Tablet Administration
To develop a tablet-based application for administering the CAHPS survey, we partnered with Tickit Health to create an interface that was icon-based and simple to use. The final tablet-based platform displayed one survey question per screen and utilized icons to visualize the main topic of each question (eg, an icon of a patient-provider interaction) and Likert scale response options (Fig. 1). In addition, questions were broken up and arrayed on multiple lines each composed of logical phrases to promote readability.18 For questions with more than a few response options, a variety of displays for selecting response options (eg, slider bar, scroll bar) were used to test usability. To test the feasibility of administering a tablet-based patient experience survey at the point of care, we piloted tablet-based survey administration in the examination rooms within 1 clinic in the network. During the pilot, changes were made to the interface of the survey (eg, adding an always-on, self-explanatory homepage inviting patients to take the survey) and administrative practices (eg, tethering tablets and cables to the examination room wall to prevent theft and ensure charging). The year-long pilot resulted in a steady 30–50 responses per month and higher representation from younger and Latino individuals (further details and data shown in Appendix A, Supplemental Digital Content 1, http://links.lww.com/MLR/B659), supporting the feasibility of tablet-based administration at the point of care.
Patient Advisory Board Feedback
From October to December 2015, we elicited feedback from 3 active patient advisory councils at 3 respective clinics within the San Francisco Health Network to refine the tablet-based survey adaptation. We elicited feedback on survey content (focusing on improving readability and suitability with clinic processes), interface of the application, and workflow for administration in the clinics. We used this feedback to refine the survey before cognitive interviewing.
Patient Cognitive Interviews for Feedback and Validation
Recruitment of Sample
Once the adapted tablet-based CAHPS tool was created, we conducted cognitive interviews from April to July 2016. From an electronic query of patients who had visited the clinic in the past year, clinic providers and staff identified patients who were actively engaged in a discussion about their care. Participants were eligible for the study if they: (1) did not have severe cognitive impairment or visual impairment; and (2) were comfortable speaking and reading English or Spanish. Study staff screened and recruited participants by phone.
We completed 25 cognitive interviews (19 in English, 6 in Spanish) to: (1) collect feedback about the interface and content of the tablet-based survey; (2) compare the tablet-based survey to the standard paper-based CG-CAHPS; and (3) elicit perspectives about reporting experience to providers and clinics. Participants reported demographic information (sex, age, race/ethnicity, and education), current health conditions, and current medication use. To assess health literacy status, we used a validated screening question, “How confident do you feel filling out medical forms by yourself?,”23,24 categorizing participants noting quite a bit of confidence or less as having limited health literacy.24,25 Participants reported how often they used a tablet or smartphone. We asked participants to report how interested they were in using the internet to manage their health care (5-point Likert scale).
To gain perspectives about usability and acceptability during the cognitive interviews, we gave participants 2 versions of the survey (paper versus tablet-based) in a random order and followed a standard think-aloud approach.26,27 To provide a standardized, usual care comparison of the CG-CAHPS survey, we provided a paper version of the CG-CAHPS survey with the original wording of 16 questions from survey version 3.0. The tablet-based survey contained the same questions adapted for a low-literacy audience. If participants asked for help at any step while using the tablet, the research team provided assistance to the next step of the process. Following the completion of both surveys, we used a retrospective approach to ask participants to reflect on their overall satisfaction, usability of each survey, and comparability of the 2 modalities.
We administered the tablet-based survey on an iPad Air 2. Interviews were video-recorded using Game Capture software. Interviews were transcribed and deidentified before analysis. Participants received $25 for participating in the study. The University of California, San Francisco Institutional Review Board approved the study.
We first summarize the major design changes made to the tablet application, including patient advisory board feedback and the pre- and postadaptation readability level of each survey item.
Next, within the cognitive interviews, we summarize participant characteristics and then present our qualitative findings related to acceptability, usability, and relevance of the shorter, lower literacy CAHPS items and the process of completing the survey on the tablet. To identify themes in perspectives about patient experience and preferences for survey administration, authors read the interview transcripts in their entirety before independently analyzing them. We used deductive analysis, informed by the interview guide, to identify themes specific to the question domains and predetermined categories of patient experience and usability. We also used inductive coding to identify emerging themes, meeting regularly to discuss and establish consensus on themes.28 Coding was carried out using Dedoose (Manhattan Beach, CA). During cognitive interviews, we also quantitatively assessed respondents’ answers on the written and tablet administration of the survey, to calculate the concordance of answers between the 2 modes of survey administration.
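The concordance calculation described above is simple percent agreement between paired answers on the 2 modes. A minimal sketch, under the assumption (not stated in the text) that pairs with a missing answer on either mode are excluded:

```python
def concordance(paper, tablet):
    """Percent agreement between paired responses to one item on the
    paper and tablet modes; pairs with a missing answer (None) on
    either mode are excluded from the denominator."""
    pairs = [(p, t) for p, t in zip(paper, tablet)
             if p is not None and t is not None]
    if not pairs:
        return 0.0
    agree = sum(p == t for p, t in pairs)
    return 100.0 * agree / len(pairs)
```

For example, if 2 of 3 answerable pairs match for an item, its concordance is 66.7%; averaging these per-item values across the 16 questions yields the overall figure reported in the Results.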
Half of the interviewee sample (13, 52.0%) was male, the majority (21, 84.0%) were nonwhite, and the mean age was 53 years. Over half (14, 56.0%) had at least 1 chronic condition, and 17 (68.0%) reported limited health literacy. Although the majority (18, 72.0%) used a smartphone daily, tablet use was less common (Table 1); 10 participants (40.0%) had never used a tablet. Interest in using the internet to manage health care was mixed; roughly one third (9, 36.0%) reported high interest.
Question Adaptations and Concordance
Response concordance between the tablet-based and paper-based survey questions ranged from 41.7% to 100.0%, with an average of 82.5% (Table 2). Concordance was lower for the questions in which meaning had been altered through adaptation or in which response options had been adapted to allow participants to note nonapplicability. For example, concordance was lowest (41.7%) for the adapted question “How often were you able to get your test results if you wanted them?” (Table 2, question 8). Some participants found it difficult to answer the original question as the need for follow-up is often variable based on the specific test, echoing perspectives from patient advisory board members who had informed survey adaptation. In addition, 4 participants noted that this question was not applicable by choosing the response option “I didn’t have any tests done” in the adapted question. Concordance was highest for questions for which the original CAHPS wording remained largely intact (Table 2, questions 4, 10–11, 14–15).
Perspectives of Survey Administration and Usability
Overall, there was a preference for the tablet versus paper administration, and several major themes emerged about usability (Table 3). Of the 25 participants, 18 (72.0%) noted that they preferred the tablet-based survey and 4 (16.0%) preferred the paper-based version. For example, one participant remarked, “I liked it [the tablet version] because everyone will fill it out when it’s that easy. If it’s difficult, then people like me who don’t have a lot of education will have trouble” (age 55–64, female, Latina, Spanish speaker). Three participants (12.0%) stated no preference for either mode, one of whom noted that their preference would be situational: “Sometimes, it’s quicker when you use a tablet because you can just zip through, and sometimes if you want to take a little more time to write out your answers, then it’s good to use paper, so it depends on the situation” (age 35–44, male, mixed race/ethnicity). Despite varying experience using tablets, all participants could complete the survey with light assistance. The most common usability barriers were lack of knowledge of how to use an onscreen scroll bar (6, 24.0%); general difficulties using a tablet, such as pressing too hard (5, 20.0%); and issues using an onscreen keyboard (3, 12.0%).
Technical Advantages of Tablet-based Survey
Many participants embraced the idea of using a tablet-based survey as fitting in with the technology-driven modern world: “People are used to playing with their phones and it’s just kind of—it’s more familiar” (age 55–64, female, Latina). Some participants found the tablet survey more fun or exciting than completing the paper-based version: “It’s a new thing. I’m going to go home and tell my kids I touched on a tablet” (age 45–54, female, black or African American). Although both surveys contained the same number of questions, some participants reported that it was faster to complete the survey using the tablet, citing the use of figures, survey interface, and adaptations to questions to improve readability: “If each question has its own page … it feels like it will be faster. I don’t think there was a huge difference between this [tablet-based survey] and this [paper-based survey] except that this was a little clearer than the paper. This felt faster … probably a little faster because of the way the questions were asked” (age 45–54, male, white).
Administration at the Point of Care
Participants also noted the value of completing a survey at the point of care, which would: (1) allow them to reflect on the quality of their care while in the same environment; (2) occupy their time while waiting for an appointment; and (3) eliminate the barriers to completing and returning the survey from home: “… When I’m at home—it’s like, ‘I’ll do it later.’ I might get around to it and I might not. So, if I’m already at the doctor’s office, it’s going to get done” (age 55–64, female, Latina). Although administration at the point of care was a benefit for most participants, a couple of respondents noted the potential for breaches in privacy in a clinical setting: “I think that you have that at home and you tend to be a little bit more private, so you have more time to write it out because I think that tablets are a little more public …. Somebody could be reading over your shoulder” (age 35–44, male, mixed race/ethnicity).
Familiarity With and Tradition of Paper-based Survey
For the small minority of participants who preferred the paper-based survey (n=4), lack of experience with mobile devices played a large role in determining their preference. One participant noted: “There are a lot of people, myself included, who … It’s not that we are against the tablet. It’s just that we are a little afraid to use it. We think if we don’t understand it, we’re better off not touching it” (age 55–64, female, Latina, Spanish speaker). Although all participants completed the paper survey unaided, some expressed that they would require assistance in using a less familiar tablet-based survey, or that the introduction of technology increased the potential of making mistakes or submitting inaccurate answers.
Perspectives on Reporting Care Experiences
Table 3 summarizes participant perspectives about whether and how the survey items captured their experiences with health care. The domains participants prioritized most highly were provider communication, access to care, and staff respect. Although participants valued aspects of quality from their providers, they often emphasized that sharing information about their health was most important.
Most participants emphasized the importance of anonymously reporting care experiences, which empowered them to drive changes in their care. However, participants noted that the quality of their care was not always concordant with the metrics that are captured by patient experience surveys. In a few instances, the discordance between care experience and survey measures drove participants to intentionally choose responses that protected their clinic. In response to the timeliness of care question, one participant explained the discordance between her response of “usually” and her actual experience: “I lied on that one … I’ve never seen my doctor within 15 minutes when I go in the office … I’ve sat there for almost 45 minutes in that room. I’m not complaining because it’s the doctor. I mean, some people have more special needs” (age 45–54, female, black or African American). In addition, some participants felt that the current survey did not contain enough opportunity to report gratitude or good quality of care. For a few participants, the survey was insufficient to enact meaningful change in the quality of their care or to resolve negative experiences, which they felt were more appropriate for in-person communication. To improve the capture of their true care experiences, some participants suggested the addition of free response questions to allow for more nuanced and personalized reports of care.
Despite varying levels of education, health literacy, and experience with tablets, we found that both English-speaking and Spanish-speaking patients in a safety net health care setting strongly preferred a tablet-based delivery of CAHPS patient experience questions over traditional paper-based survey methods. Overall, acceptance of and preference for tablets seemed to be linked to the streamlined design of the tablet application, which presented only a small number of usability barriers. In addition, the shorter and lower literacy survey delivered via tablet appeared functional for most items, and there were clear priorities among patients in using the experience survey to report on relationships/communication with their care team and access to care.
Although there is evidence in hospital settings that differential response rates do not bias performance comparisons following adjustment for case mix and nonresponse bias,29,30 improving response rates, particularly for populations with traditionally low representation, may improve the accuracy of reports of care in diverse settings. In a study comparing web-based to mailed CAHPS collection, web-based administration yielded lower response rates but comparable results for the majority of CAHPS domains.10 However, other studies have found that evaluations of care may be influenced by mode of administration.31,32 Moreover, studies examining the effects of readability have found that improved readability of CAHPS questions15 and survey instructions33 may improve response rates. Although formal quality reporting programs using CG-CAHPS questions require use of a mixed-mode protocol of mail and telephone-based administration,8,9 this study provides exploratory results about the acceptability of survey administration at the point of care. As health systems increasingly explore customized administration of patient experience surveys, further research is needed to examine the effects of enhanced readability and survey mode on ratings of patient care.
Although the study was limited to a small sample, the use of cognitive interviews allowed us to gain in-depth perspectives about the design and content of a low-literacy tablet-based CAHPS adaptation. Although the study was not a psychometric evaluation of the low-literacy survey items or tablet modality, we present exploratory results on the acceptability and reliability of the presented items among safety net patients. Because we adapted the CAHPS in content, presentation, and mode before cognitive interviewing, it is difficult to know what types of adaptations were most influential in directing preference for the tablet-based survey. While we know of other efforts underway to create lower literacy34 and tablet-delivered CAHPS survey items,18 this is one of the first studies to our knowledge that has officially reported on this process. Moreover, we feel that this study is unique in harnessing an in-depth collaboration with a digital health company (Tickit Health) to create a final product that maximized readability and usability, as opposed to putting verbatim text onto a mobile platform without as much attention to the design process. Of note, this company had previous experience in participatory mobile health design in multiple languages, which directly informed this work.35–37
Moving forward, more work is needed in several domains. First, there are multiple questions about the best strategies for reporting processes. Operational and practical work is required to determine the feasibility of each mode of delivery (emailed link, texting, in-clinic, mailed). Although almost all participants in this study were open to tablet-based survey administration, providing options for completing the survey will be critical for matching patient preferences. Moreover, integrating survey administration into clinic operations will require work to optimize clinical workflows, protect patient privacy, and ensure that point-of-care data collection does not compromise data quality and validity. Finally, while employing customized surveys and collecting open-ended feedback would allow patients to report experiences that may be more personal,38 identifying and addressing issues may require significant time, may be less comparable across patients, and may be less applicable for formal reporting purposes.11 As we move forward with federal policy supporting patient experience data collection and reporting, this study provides next steps to ensure that underrepresented and vulnerable patients’ perspectives are engaged and represented in this process. If designed with patient input, tablet-based surveys may be a feasible and effective method for collecting patient experience data at the point of care.
The authors thank Blanca Chavez and Mekhala Hoskote for their contributions to the study.
1. Anhang Price R, Elliott MN, Zaslavsky AM, et al. Examining the role of patient experience surveys in measuring health care quality. Med Care Res Rev. 2014;71:522–554.
2. CAHPS Patient-Centered Medical Home (PCMH) Item Set. Rockville, MD: Agency for Healthcare Research and Quality; 2017. Available at: http://www.ahrq.gov/cahps/surveys-guidance/item-sets/PCMH/index.html
3. Physician Compare Initiative. Baltimore, MD: Centers for Medicare & Medicaid Services; 2014. Available at: https://www.cms.gov/medicare/quality-initiatives-patient-assessment-instruments/physician-compare-initiative/
4. Comparative Data. Rockville, MD: Agency for Healthcare Research and Quality; 2017. Available at: https://cahpsdatabase.ahrq.gov/cahpsidb/
5. Consumer Assessment of Healthcare Providers & Systems (CAHPS). Baltimore, MD: Centers for Medicare & Medicaid Services; 2018. Available at: https://www.cms.gov/Research-Statistics-Data-and-Systems/Research/CAHPS/
7. Improve Patients’ Experiences With Primary and Specialty Care. Rockville, MD: Agency for Healthcare Research and Quality; 2017. Available at: https://www.ahrq.gov/cahps/surveys-guidance/cg/improve/index.html
8. CAHPS Survey for Accountable Care Organizations Participating in Medicare Initiatives. Baltimore, MD: Centers for Medicare and Medicaid Services; 2018. Available at: http://acocahps.cms.gov/en/about-the-survey/
9. 2017 Consumer Assessment of Healthcare Providers and Systems (CAHPS) for the Merit-based Incentive Payment System (MIPS) Survey via CMS-Approved Survey Vendor Reporting. Baltimore, MD: Centers for Medicare & Medicaid Services; 2017. Available at: https://www.cms.gov/Research-Statistics-Data-and-Systems/Research/CAHPS/Downloads/CAHPS-for-MIPS-Fact-Sheet.pdf
10. Bergeson SC, Gray J, Ehrmantraut LA, et al. Comparing web-based with Mail Survey Administration of the Consumer Assessment of Healthcare Providers and Systems (CAHPS) Clinician and Group Survey. Prim Health Care. 2013;3:1000132.
11. Anhang Price R, Elliott MN, Cleary PD, et al. Should health care providers be accountable for patients’ care experiences? J Gen Intern Med. 2015;30:253–256.
12. Elliott MN, Lehrman WG, Goldstein EH, et al. Hospital survey shows improvements in patient experience. Health Aff (Millwood). 2010;29:2061–2067.
13. Elliott MN, Edwards C, Angeles J, et al. Patterns of unit and item nonresponse in the CAHPS Hospital Survey. Health Serv Res. 2005;40 (pt 2):2096–2119.
14. Zaslavsky AM, Zaborski LB, Cleary PD. Factors affecting response rates to the Consumer Assessment of Health Plans Study survey. Med Care. 2002;40:485–499.
15. Fongwa MN, Setodju CM, Paz SH, et al. Readability and missing data rates in CAHPS 2.0 Medicare Survey in African American and White Medicare respondents. Health Outcomes Res Med. 2010;1:e39–e49.
16. Klein DJ, Elliott MN, Haviland AM, et al. Understanding nonresponse to the 2007 Medicare CAHPS survey. Gerontologist. 2011;51:843–855.
17. An overview of version 3.0 of the CAHPS Clinician & Group Survey. Rockville, MD: Agency for Healthcare Research and Quality; 2015.
18. Introducing the New CAHPS Clinician & Group Survey 3.0 (Webcast). CAHPS Research Directions. Rockville, MD: Agency for Healthcare Research and Quality; 2015. Available at: http://www.ahrq.gov/cahps/news-and-events/events/20150917/webinar-091715.html
19. Kincaid JP, Fishburne RP Jr, Rogers RL, et al. Derivation of New Readability Formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy Enlisted Personnel. Millington, TN: US Department of Commerce; 1975.
22. Stucky BD, Hays RD, Edelen MO, et al. Possibilities for shortening the CAHPS Clinician and Group Survey. Med Care. 2016;54:32–37.
23. Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Fam Med. 2004;36:588–594.
24. Sarkar U, Piette JD, Gonzales R, et al. Preferences for self-management support: findings from a survey of diabetes patients in safety-net health systems. Patient Educ Couns. 2008;70:102–110.
25. Sarkar U, Karter AJ, Liu JY, et al. The literacy divide: health literacy and the use of an internet-based patient portal in an integrated health system-results from the Diabetes Study of Northern California (DISTANCE). J Health Commun. 2010;15(suppl 2):183–196.
26. Willis GB. Cognitive Interviewing: A “How To” Guide. Research Triangle Park, NC: Research Triangle Institute; 1999.
27. Harris-Kojetin LD, Fowler FJ Jr, Brown JA, et al. The use of cognitive testing to develop and evaluate CAHPS 1.0 core survey items. Consumer Assessment of Health Plans Study. Med Care. 1999;37(suppl):Ms10–Ms21.
28. Pope C, Mays N. Reaching the parts other methods cannot reach: an introduction to qualitative methods in health and health services research. BMJ. 1995;311:42–45.
29. Saunders CL, Elliott MN, Lyratzopoulos G, et al. Do differential response rates to patient surveys between organizations lead to unfair performance comparisons?: evidence from the English Cancer Patient Experience Survey. Med Care. 2016;54:45–54.
30. Elliott MN, Zaslavsky AM, Goldstein E, et al. Effects of survey mode, patient mix, and nonresponse on CAHPS hospital survey scores. Health Serv Res. 2009;44(pt 1):501–518.
31. de Vries H, Elliott MN, Hepner KA, et al. Equivalence of mail and telephone responses to the CAHPS Hospital Survey. Health Serv Res. 2005;40(pt 2):2120–2139.
32. Tesler R, Sorra J. CAHPS Survey Administration: What We Know and Potential Research Questions (Prepared by Westat, Rockville, MD, Under Contract No HHSA 290201300003C). Rockville, MD: Agency for Healthcare Research and Quality; 2017.
33. Fredrickson DD, Jones TL, Molgaard CA, et al. Optimal design features for surveying low-income populations. J Health Care Poor Underserved. 2005;16:677–690.
34. Shea JA, Aguirre AC, Sabatini J, et al. Developing an illustrated version of the Consumer Assessment of Health Plans (CAHPS). Jt Comm J Qual Patient Saf. 2005;31:32–42.
35. Blander E, Saewyc EM. Adolescent reactions to icon-driven response modes in a tablet-based health screening tool. Comput Inform Nurs. 2015;33:181–188.
36. Whitehouse SR, Lam PY, Balka E, et al. Co-creation with TickiT: designing and evaluating a clinical eHealth platform for youth. JMIR Res Protoc. 2013;2:e42.
37. Zieve GG, Richardson LP, Katzman K, et al. Adolescents’ perspectives on personalized e-feedback in the context of health risk behavior screening for primary care: qualitative study. J Med Internet Res. 2017;19:e261.
38. Alemi F, Jasper H. An alternative to satisfaction surveys: let the patients talk. Qual Manag Health Care. 2014;23:10–19.