Consumer Response to Patient Experience Measures in Complex Information Environments

Schlesinger, Mark, PhD*; Kanouse, David E., PhD†; Rybowski, Lise, MBA‡; Martino, Steven C., PhD§; Shaller, Dale, MPA∥

doi: 10.1097/MLR.0b013e31826c84e1
Reporting and Improving CAHPS®

Background: As indicators of clinician quality proliferate, public reports increasingly include multiple metrics. This approach provides more complete performance information than did earlier reports but may challenge consumers’ ability to understand and use complicated reports.

Objectives: To assess the effects of report complexity on consumers’ understanding and use of patient experience measures derived from the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) survey.

Research Design: In an Internet-based experiment, participants were asked to compare information on physician quality and choose a primary care doctor. Participants were randomly assigned to choice sets of varied complexity (CAHPS alone vs. CAHPS with other measures) and number of doctors. Participants completed surveys before and after this choice task.

Subjects: A total of 555 US residents, aged 25–64, who had Internet access through a computer were recruited from an existing online panel.

Measures: Recall seeing CAHPS measures; use of CAHPS measures for making choices; ratings of ease of use, usefulness, and trustworthiness of CAHPS ratings; concerns about usefulness and trustworthiness.

Results: Participants presented with CAHPS information and other performance indicators relied less on CAHPS than did those presented with CAHPS information only, although they considered CAHPS information as valuable as did other respondents. Participants presented with smaller choice sets also judged CAHPS information as less easy to use when accompanied by other metrics than when it was presented alone.

*Yale School of Public Health, New Haven, CT

†RAND Health, Santa Monica, CA

‡The Severyn Group, Ashburn, VA

§RAND Health, Pittsburgh, PA

∥Shaller Consulting Group, Stillwater, MN

Supplemental Digital Content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Website, www.lww-medicalcare.com.

Supported by cooperative agreement U18HS016980 from the Agency for Healthcare Research and Quality.

The authors declare no conflict of interest.

Reprints: Mark Schlesinger, PhD, Yale University School of Public Health, Room 304, LEPH, 60 College St., New Haven, CT 06520. E-mail: mark.schlesinger@yale.edu.

In the last decade, the number of standardized quality indicators for medical groups and individual clinicians has expanded to include standardized measures of patient experience (such as those derived from the CAHPS Clinician & Group Survey), indicators of clinical quality and patient safety and, more recently, narrative commentary from patients. As various kinds of comparative quality information have become available, numerous organizations have combined them into reports intended to inform and support decisions by consumers and health care organizations. For example, over 80% of quality reports with patient experience information in the Agency for Healthcare Research and Quality’s online “Health Care Report Card Compendium” also include data on clinical quality, patient safety, and/or cost.1 Of the 18 reports in the Compendium that convey patient experience with medical groups, 14 also include other information on quality or cost.

Most public reports currently present these multiple performance measures in separate sections or subpages of Websites, making it difficult for consumers to form a comprehensive, integrated picture of provider performance. As the number and variety of performance metrics have proliferated, report sponsors have begun to experiment with consolidating multiple measures to facilitate side-by-side comparisons. This approach is intended to foster consumers’ use of a richer, more complete conception of quality but may create its own challenges as consumers struggle to strike trade-offs across quality metrics that are not easily integrated. To determine how best to report measures side-by-side, report developers need to understand the ways that consumers deal with this richer information environment and anticipate the ways in which reports can best be designed to ease the use and interpretation of multiple metrics.

To explore these reporting challenges, we conducted an experiment to assess the impact of a complex reporting format on consumers’ understanding and use of the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) measures of patient experience. Unlike measures based on specific conditions, generalized patient experience surveys are applicable to all patients and all clinicians, making them an essential foundation of medical consumerism. However, although CAHPS surveys have been extensively evaluated as a freestanding source of information, relatively little is known about how consumers interpret CAHPS results in conjunction with other measures of performance. In this study, we examined the responses to performance information of a random sample of US adults of working age who had access to the Internet on a computer. We then assessed the extent to which current exposure to multiple performance metrics and past exposure to different types of health care report cards are related to consumers’ use and assessment of CAHPS results in their choice of physicians.


CONCEPTUAL FOUNDATIONS

Our analysis is based on a view of decision makers as “boundedly rational,” a conceptual paradigm increasingly applied to medical consumerism.2–5 Bounded rationality suggests that in making decisions, individuals are often constrained both by the limits of their cognitive capacity to process information and by the time and attention they have for any given decision. This view contrasts with the neoclassic economic model of consumer behavior, which assumes that people seek out and consider all available data to inform their choices and easily trade off the advantages and risks of each choice in a rational way.

Under the neoclassic model, introducing additional measures to health care quality reports would be predicted to unambiguously enhance their information value, giving consumers additional reasons to engage with and closely attend to the report. Although some consumers are primarily concerned with technical aspects of quality and others with interpersonal aspects, many consumers value both.6 Multiple measures could also encourage consumers to make greater use of a given performance metric by clarifying the various dimensions of provider performance, improving consumers’ understanding of which aspects of performance are based on patients’ reports versus clinical metrics, and making it easier for report users to focus on the particular information they value.

In contrast, under the paradigm of bounded rationality, the opportunities afforded by multiple measures of provider performance displayed side-by-side could be offset by the increased complexity of the information presented. If consumers are limited in cognitive capacity and constrained in time, increased information complexity may deter them from examining some of the available metrics. If consumers devote only limited time to health care provider choices, greater attention to clinical performance measures, for example, could reduce the time available for exploring CAHPS data. In the extreme, consumers whose cognitive limitations are greatest or whose choice sets are most complicated may feel so overwhelmed by the challenge of choosing among health care providers using a multiplicity of metrics that they give up entirely on making sense of their options (termed “cognitive overload” in the bounded rationality literature), choosing arbitrarily among providers or allowing the selection of a clinician to be made for them by whatever default protocols exist.3,7

Thus, the neoclassic model and the paradigm of bounded rationality make different predictions about consumer behavior under conditions of increased information complexity. To be sure, neoclassic consumers presented with multiple measures might make less use of survey-based measures than if presented with survey results alone, because the availability of additional measures leads them to value the survey findings less. Under bounded rationality, however, consumers might make less use of survey-based measures in a multi-measure report, even if they judged them to be just as valuable for choice as when the survey results are presented alone.

Although all consumers will be challenged by complex choices, the impact of bounded rationality is likely to be more pronounced under several circumstances. The first is when choice sets are large or complicated (involving multiple incommensurate metrics), so that cognitive overload becomes more likely. The second is when consumers have few heuristics or experience-based rules that they can use to simplify their choices. We hypothesize that the former will be true for choice sets that include a larger number of providers to select among and that the latter will hold for consumers who have more limited experience assessing medical care and making choices.

Consequently, one would expect variation in the impact of bounded rationality. Some consumers—such as those in large urban areas—face far more complex choice sets than do residents of smaller communities. Also, some consumers have not been exposed to health care report cards, reducing their opportunities to develop the conceptual tools to integrate multiple aspects of health care quality: in any given year, for example, only a third of all Americans report having seen comparative quality reports for health care providers.8


METHODS

To explore how consumers understand and use reports that include multiple types of performance data, we constructed an experiment in which a random sample of working-age adults was directed to a fictitious Website with comparative information on physician quality and asked to choose a primary care doctor. Respondents were told that “Although you will not really be selecting a doctor, we’d like you to consider this choice as carefully as if you were making it for yourself.” Before logging onto the Website, respondents answered questions about their recent health care experiences. After using the site to select a primary care doctor, participants answered questions about their understanding and impressions of the performance measures and how they made their choice.

For the purposes of this paper, we focus on (a) the extent to which participants made use of CAHPS survey results (assessed through both their postchoice responses and tracking data from the Website) and (b) participants’ perceptions of the trustworthiness and usefulness of these metrics. Their responses were analyzed in light of their own reported experience with health care and quality reports as well as the context within which their choices were made. Participants were randomized across a set of experimental conditions or “arms,” which varied in the complexity of the information and number of clinicians presented.


The Study Population: Knowledge Networks Sample

Participants were recruited randomly from an existing survey panel of >60,000 households developed and maintained by Knowledge Networks (Menlo Park, CA). Knowledge Networks constructs this panel using a combination of random digit dialing and address-based sampling; respondents who do not have access to the Internet are provided with free access when they join the panel. The panel has been shown to represent the noninstitutionalized population of the United States, including households with listed and unlisted phone numbers, cell-phone-only households, and nontelephone households.9,10

To avoid dealing with issues that are unique to Medicare, we restricted eligibility for this study to working-age adults who access the Internet through a computer, approximating the types of people within this age range likely to seek online information about doctors. Excluding participants using web-TV appliances reduced the eligible panel members by approximately 11% but left a substantial number of older and less-educated adults eligible. Knowledge Networks randomly sampled 1757 panel members aged 25–64 and invited them to participate in the study; nearly half (48.3%) accepted the study invitation.


The SelectMD Website

To present a choice setting that would seem as realistic as possible, we engaged a marketing and communications firm, Wowza Inc., to build a prototype Website called SelectMD to display comparative information on the performance of the fictional physicians. The Website was designed to achieve a “modal” level of content, format, and functionality by replicating many of the basic presentation and navigation features commonly found in contemporary web-based reports, including references to the independence of the site’s fictional sponsor.

A page labeled “Performance Overview” presented summary scores for “Service Quality” and “Technical Quality” for a set of primary care doctors. To indicate how each doctor compared with others in the same geographic area, performance was represented by 1–5 stars, where 3 stars was average. As on many similar Websites, participants were able to sort doctors by level of performance on the summary measures available in that experimental condition and to “drill down” within a given performance category to view component measures contributing to the summary score. Selected experimental arms also contained a set of patient comments designed to imitate the range of anecdotes increasingly available on real-world Websites. A more detailed discussion of the features of this site is presented in Technical Appendix A.

Although not visible to participants, the SelectMD Website had a tracking system to monitor participants’ online use of the Website. The system was designed to facilitate analyses of use patterns by recording every click made by participants and the time spent on each page.
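The paper does not detail the tracking implementation. As a rough sketch of the kind of click-stream analysis such a system enables, the fragment below shows one way (ours, not the authors’) to derive time-on-page from logged click timestamps; the schema and all names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime
from itertools import pairwise  # Python 3.10+

@dataclass
class ClickEvent:
    """Hypothetical record for one logged click; the actual SelectMD
    tracking schema is not described in the paper."""
    participant_id: str
    page: str  # e.g., "performance_overview" or "cahps_drilldown"
    timestamp: datetime

def seconds_per_page(events: list[ClickEvent]) -> dict[str, float]:
    """Infer time spent on each page as the gap between one click
    and the next within a single participant's session."""
    ordered = sorted(events, key=lambda e: e.timestamp)
    totals: dict[str, float] = {}
    for current, nxt in pairwise(ordered):
        dwell = (nxt.timestamp - current.timestamp).total_seconds()
        totals[current.page] = totals.get(current.page, 0.0) + dwell
    return totals
```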


Analytic Methods

Experimental Design

We assessed the implications of complexity by presenting randomly selected subgroups of participants with different choice sets. The full experiment included 6 arms; this paper focuses on 4 arms that can be represented as a 2×2 experimental design (Table 1), with the complexity of performance metrics [CAHPS alone vs. CAHPS combined with Healthcare Effectiveness Data and Information Set (HEDIS) measures] crossed with the complexity of choice context (12 physicians with limited demographic information vs. 24 physicians with the same demographic information plus patients’ comments).

TABLE 1

  • Baseline/control: Participants saw CAHPS scores labeled as “Service Quality” for 12 doctors. Participants could review a summary measure of service quality on the Performance Overview page and drill down to 4 measures from the CAHPS Clinician & Group Survey: courtesy and respect shown by office staff, ease of getting appointments, doctor-patient communication, and an overall rating of the clinician.
  • CAHPS plus clinical quality: Participants saw service quality for 12 doctors along with clinical process measures labeled as “Treatment Quality”: a summary score on the Performance Overview page and a drill-down page with 4 clinical process indicators (prevention and screening, care for asthma, care for diabetes, and care for heart disease) drawn from HEDIS.
  • CAHPS plus patient comments and a larger choice set: Participants in this arm were presented with “Service Quality” for 24 doctors as well as “Patient Reviews”: 4–6 comments from patients describing their experiences with each doctor.
  • Maximum cognitive load (large choice set and 3 measures of performance): In this arm, participants were presented with all 3 types of information (“Service Quality,” “Treatment Quality,” and “Patient Reviews”) for 24 doctors.
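To make the 2×2 factorial structure concrete, here is a minimal sketch (our illustration, not the authors’ assignment code) of randomizing participants across these 4 arms; equal allocation probabilities are an assumption, as the paper does not report allocation ratios.

```python
import random

# Illustrative encoding of the 2x2 design: measure complexity
# (CAHPS alone vs. CAHPS + HEDIS) crossed with choice-set complexity
# (12 doctors vs. 24 doctors plus patient comments).
ARMS = [
    {"measures": ["CAHPS"],          "doctors": 12, "comments": False},  # baseline/control
    {"measures": ["CAHPS", "HEDIS"], "doctors": 12, "comments": False},  # plus clinical quality
    {"measures": ["CAHPS"],          "doctors": 24, "comments": True},   # plus comments, larger set
    {"measures": ["CAHPS", "HEDIS"], "doctors": 24, "comments": True},   # maximum cognitive load
]

def assign_arm(rng: random.Random) -> dict:
    # Equal-probability assignment is assumed for illustration.
    return rng.choice(ARMS)

rng = random.Random(2012)  # seeded for reproducibility
print(assign_arm(rng))
```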

Measuring Consumer Use of CAHPS

We deployed 4 measures to assess consumers’ use of CAHPS results. Two were based on data derived from electronically tracking consumers as they moved through the SelectMD Website; the other 2 were based on questions asked immediately after participants selected a physician.

The tracking measures captured 2 aspects of using CAHPS information: (1) whether participants opened up the subpage that displayed results for the 4 CAHPS measures and (2) whether participants used CAHPS results to sort among physicians. Although neither of these actions is necessary for consumers to use CAHPS results in choosing a physician, each suggests proactive engagement with the information. Thirty-nine percent of all participants probed down to the components of the CAHPS summary score; only 7% sorted physicians by their CAHPS results.

The measures drawn from the postchoice questionnaire assessed how easily participants could use the CAHPS information. The first involved recall: Did participants remember having seen performance ratings based on surveys of patient experience? (The CAHPS results were present in every experimental arm.) Eighty-two percent recalled having seen this information.

Participants were also asked about the usefulness of the CAHPS results for choosing among physicians (or how useful they thought they would be, if they did not recall having seen them). SelectMD provided individual star ratings for the 4 component measures. Participants indicated how easy or difficult it was to identify the best doctors using each of these CAHPS measures. We combined the 4 responses, which were weighted equally and scaled so that a high score indicated greater ease of use. Most respondents found the CAHPS measures to be easy to use.
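As a worked illustration of the index (not the authors’ code), the equally weighted combination can be computed as a simple mean of the 4 item responses; the 1–5 item coding is an assumption consistent with the 5-point index described in the Results.

```python
def ease_of_use_index(item_scores: list[float]) -> float:
    """Equally weighted mean of the 4 CAHPS ease-of-use items, each
    assumed scored 1 (very difficult) to 5 (very easy), so that a
    high score indicates greater ease of use."""
    assert len(item_scores) == 4, "one rating per CAHPS component measure"
    return sum(item_scores) / len(item_scores)

# A participant rating the components 5, 4, 5, and 4 averages 4.5,
# the threshold for the top group reported in the Results.
print(ease_of_use_index([5, 4, 5, 4]))  # 4.5
```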


Measuring Consumer Perceptions of CAHPS

Consumers may choose not to use CAHPS results if they regard the information as unhelpful or untrustworthy.11–12 We assessed whether CAHPS results were considered “useful” in selecting among clinicians and perceived as trustworthy measures of physician practices through the use of 4-point categorical response scales (ranging from “very useful/trustworthy” to “not at all useful/trustworthy”) and by examining open-ended responses elicited from respondents. A content analysis of the open-ended responses showed that the majority of concerns fall into 4 categories: (1) generalizability (eg, survey respondents are not representative of all patients); (2) salience (eg, survey results are less useful than personal experience for assessing quality); (3) documentation (eg, methods are not well described); and (4) data source (eg, certain sources may appear biased).


Assessing the Impact of Prior Experience

To explore the impact of prior experience with health care and report cards, we asked participants (before exposure to the SelectMD site) whether they had encountered report cards for clinicians, hospitals, or health plans in the past 12 months. Those indicating that they had seen quality information for clinicians were then asked whether the information included CAHPS-like measures or other systematic metrics of performance, as opposed to patients’ comments on physicians or their practices.

Controlling for Other Factors Related to Experiential Measures: Measures based on experience are likely to be correlated with other participant attributes that can influence how consumers make sense of performance metrics. To distinguish the influence of these other factors from the relationship of prior experiences to CAHPS use and assessment, we estimated a set of regression models that included as covariates participants’ sex, age, education, race and ethnicity, presence of chronic health conditions, and number of visits to a doctor in the past year, all suggested by past research to influence whether consumers use quality reports. Because our outcome variables were dichotomous, categorical, or counts of concerns, we estimated these models as logistic (or ordered logistic) regressions.
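As a minimal sketch of one such model, the fragment below fits a logistic regression with statsmodels (our choice of library; the file and variable names are hypothetical) and exponentiates the coefficients to obtain odds ratios of the kind reported in Table 4; an ordered categorical outcome would use statsmodels’ OrderedModel instead.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis file; column names are illustrative only.
df = pd.read_csv("participants.csv")

# Dichotomous outcome: e.g., whether the participant drilled down
# to the disaggregated CAHPS measures.
y = df["used_cahps_drilldown"]

# Covariates named in the text, plus prior report-card exposure.
X = sm.add_constant(df[[
    "saw_physician_report",  # prior exposure to clinician report cards
    "female", "age", "college_educated", "nonwhite",
    "chronic_condition", "doctor_visits_past_year",
]])

fit = sm.Logit(y, X).fit()
print(np.exp(fit.params))  # exponentiated coefficients = odds ratios
```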


RESULTS

A total of 849 respondents participated in the overall experiment; excluding those in the 2 arms not relevant to this study yielded 555 participants who completed both preexposure and postexposure surveys. Item nonresponse ranged from 3 to 11 participants for each outcome. (To maximize statistical power, regression estimates reported in Table 4 are based on the full sample of experimental participants.)

Because our sample was restricted to participants who had computer-based access to the Internet, it is skewed slightly toward higher socioeconomic status and older participants. For example, 62% of those using SelectMD had at least some college education, compared with 59% of the American population aged 25–64 (census data from 2008). Forty-nine percent of our sample was age 45 and older, compared with 45% of the American population aged 25–64. As a result of this older skew, our experimental sample had a somewhat larger proportion of participants with chronic illness than does the working-age public: 39% compared with 36%.


Assessments of CAHPS

CAHPS survey results were considered easy to use: Forty percent of participants gave scores averaging 4.5 or higher on the 5-point ease-of-use index, and 82% gave scores averaging 3.5 or higher. Roughly 80% of participants reported that each of the individual components of the CAHPS scores was very or somewhat easy to use. By comparison, the percentage of participants reporting that the clinical quality (HEDIS) measures were very or somewhat easy to use averaged about 70%.

Many participants also considered CAHPS results both useful and trustworthy. Thirty-eight percent reported that CAHPS results were “very useful,” with another 53% reporting “somewhat useful.” The distribution of perceived trustworthiness of CAHPS survey results was similar. The correlation between the 2 measures was 0.55, suggesting that some who judged survey results useful did not see them as trustworthy (and vice versa).

Concerns about generalizability were the most common threat to perceived usefulness, expressed by about 29% of participants who judged the data not to be useful. Each of the other 3 categories of concerns was reported by <10% of those who questioned the usefulness of CAHPS. Concerns about generalizability were an even more common barrier to trustworthiness (cited by 38% of participants who judged the data to be untrustworthy), with concerns about documentation reported by 13% of those who considered CAHPS untrustworthy. Two other sources of concern about trustworthiness—salience and the source of the data—were each mentioned by <10% of those who questioned CAHPS trustworthiness.


The Impact of Complexity

Comparisons across experimental arms are presented in the left-hand portion of Table 2. When clinical quality measures are reported with CAHPS measures (comparing arm 1 with 2 and arm 3 with 4), use of the CAHPS measures is reduced, even though there is no corresponding reduction in the perceived trustworthiness or utility of that information. Some of these differences are more pronounced in the more complex choice context (arms 3 and 4), but others are more pronounced in the simple choice context.

TABLE 2


The Impact of Prior Experience

The association between CAHPS use and evaluation and past experience with quality reports is conveyed through the bivariate findings presented in the right-hand portion of Table 2. Past exposure to report cards seems to be associated with heightened use and greater perceived reliability of CAHPS information, although these effects are largely confined to prior experience with physician report cards, rather than reports on the performance of hospitals or health plans.


Benchmarking Against Other Predictors of CAHPS Use

Past research indicates that better-educated and sicker people are more likely to have seen and used health care report cards.13 This was also true for our experimental sample (Table 3). Participants who were more experienced with health care (either due to a chronic medical condition or frequent visits to the doctor) viewed CAHPS results as somewhat easier to use and substantially more reliable. By contrast, better-educated participants seem to make greater use of CAHPS information and are more skeptical about its reliability.

TABLE 3

The magnitude of the differences identified in Table 3 is about on par with the magnitude of the differences identified in Table 2 for complexity and learning. Thus, the increased complexity of the information seems to have as large an effect on the use and perceptions of CAHPS as do individual characteristics that have long been known to affect the use of report cards.


Experiential Findings Derived From Multivariate Models

Because both past exposure to report cards and past experience with health care seem to be related to the use and perceptions of CAHPS results, we parsed out their distinctive associations in the multivariate models presented in Table 4. For simplicity, we report only 4 sets of findings (2 related to past exposures to report cards and 2 for health experiences), although they are derived from models with a larger number of explanatory variables. (Complete results from these logistic regression models are presented in Technical Appendix B, Supplementary Digital Content 1 http://links.lww.com/MLR/A339.) These are reported as odds ratios to simplify their interpretation and comparison across the different explanatory variables.

TABLE 4

The findings reported in Table 4 suggest that past exposure to physician quality reports is associated with greater use and greater perceived reliability of CAHPS results. This pattern is consistent with other sources of learning, such as coming into regular contact with the health care system due to a chronic condition. However, the anomalous results involving participants’ education persist in these multivariate models, with more educated consumers making greater use of CAHPS but remaining more skeptical of its trustworthiness.


DISCUSSION

In a randomized sample of working-age Americans, participants presented with both CAHPS and HEDIS information relied less on CAHPS when selecting among clinicians than did those presented with CAHPS results only—even though the first group judged CAHPS information to be no less useful for choosing a physician. For participants presented with simple choice sets (fewer doctors, less ancillary information), combining HEDIS with CAHPS data within a report also led participants to judge CAHPS data to be less easy to use.

These results are consistent with several possible explanations deriving from the conceptual model of bounded rationality. Multiple measures in a single report may have increased the cognitive burden of selecting a physician, thereby increasing the difficulty of using CAHPS information and reducing its use. Alternatively, the additional measures may have simply pushed the time requirements of the task closer to participants’ limits, reducing attention to all measures. In an earlier simulated choice experiment, Spranca et al14 found that when plan disenrollment information was added to CAHPS and HEDIS data on Medicare health plans, participants reduced their time with the CAHPS and HEDIS data rather than increasing their time reviewing the report. Our results do, however, tend to rule out the possibility that presenting HEDIS measures leads consumers to devalue CAHPS measures, because an equally large number of participants across all the experimental arms reported CAHPS measures as very or somewhat useful. These findings do not speak to how people trade off service quality and technical quality in choosing among clinicians but demonstrate that, when presented in combination, both aspects of quality are valued by most consumers.

Our findings suggest that prior exposure to quality reports is associated with both greater use and greater perceived usefulness of CAHPS measures. The fact that this effect was observed only for report cards about physicians suggests that this was a specific effect of exposure or prior use rather than a consequence of some consumers’ being more interested in performance measurement related to health care.

These findings need to be considered in the light of methodological features and limitations of our experiment. It is unclear to what extent the design elements of the SelectMD Website affected consumers’ ability to make sense of CAHPS versus other performance metrics. Although SelectMD incorporated the modal features of most existing Websites, perhaps other ways of presenting performance measures might alter consumers’ acceptance or understanding of CAHPS.

To choose a physician, participants in this experiment engaged in a role-playing exercise. Although we went to great lengths to make the Website realistic, the motivation that participants bring to hypothetical decisions cannot match that associated with choices having real consequences. Moreover, it seems unlikely that consumers would seek the kind of social support and advice from family members and friends for a simulated choice that they frequently seek out in the real world.15 These differences may have affected the choice process, although not in ways that would be expected to vary across experimental arms. In contrast, it seems likely that the in-home setting in which participants examined the available information is similar to a real-life environment for decision-making, complete with domestic distractions. In this respect, our study has greater verisimilitude than laboratory experiments.

Our experiment was designed to answer the question of how increasing the complexity of the information environment by including multiple metrics affects consumers’ evaluations and use of CAHPS. We did not examine the parallel question of how presenting CAHPS information affects consumers’ perceptions and use of HEDIS data. The effects of increasing the complexity of contextual information may be similar for both CAHPS and HEDIS measures, but our data do not speak to that point. Moreover, the most rapidly proliferating form of performance information involves narrative commentary from consumers.16 Researchers need to explore the ways in which free-form comments affect the use and interpretation of CAHPS results specifically (eg, a physician with a low CAHPS communication rating might be described by patients as seeming distracted) and consumers’ understanding of various aspects of provider quality more generally (eg, a clinician with a low rating for preventive care might be described by patients as always seeming rushed).

The impact of past exposure to health care report cards seems to depend on what sort of performance information consumers have been exposed to; although we asked about this, it is difficult to be certain what this past exposure entailed. Other research suggests that recent exposure is skewed toward more anecdotal reports and less standardized performance measures, but we need further study of how consumers learn to use (or misuse) performance reports.17

Finally, this study did not explore how much CAHPS measures matter for choice. Past research suggests that there is sometimes a gap between how consumers assess CAHPS results and how much exposure to CAHPS information actually affects choice. The linkage between consumers’ attitudes toward information and their actual use of information deserves greater attention, especially in a more complex information environment.12 Before making choices, consumers often indicate an interest in a multiplicity of quality measures—yet may be unable to use them sensibly.


Implications for Consumer Reporting and Broader Health Policy

The complexity of the information environment does affect the way in which consumers perceive and process CAHPS results. This finding, especially in view of the increasingly complex reports now being published on health care providers, highlights the need to improve methods of reporting quality information to consumers. A number of approaches might be pursued, although each has potential challenges.

Several approaches involve how performance ratings are presented; others involve ways to help consumers interpret those ratings. One presentation strategy that is often recommended is simplification.18,19 Recognizing that a proliferating number of performance measures may lead to “cognitive overload”—when the challenges of assessing a choice set are so daunting that consumers just give up and select at random—some report designers have begun to “roll up” multidimensional metrics into a single score. However, this approach may obscure important details about health care quality. CAHPS reporting guidelines recommend conveying a multidimensional picture of quality by providing measures of several important domains of patient experience. But when using SelectMD, 80% of users started by focusing on rolled-up performance measures; fewer than half ever “drilled down” to the disaggregated CAHPS scores so that they might observe that some clinicians seemed stronger on certain aspects of service quality than on others.

In addition, interactive reports (those on Websites) can offer consumers the option of “filtering” choice sets based on various clinician characteristics, such as experience, sex, or office location. Our results suggest that when consumers consider smaller choice sets, they have a greater propensity to drill down into disaggregated CAHPS measures, thereby gaining access to richer information and offsetting the major liability of rolled-up scores. But only about 20% of those using SelectMD took advantage of the filtering option, which suggests that most consumers will not navigate beyond rolled-up performance metrics unless effective ways are found to encourage them to do so.

Both researchers and report designers should place a high priority on finding ways to help consumers integrate information without losing the contours of what is being integrated. Quality reports serve both to inform and to educate; facilitating consumers’ choices should not entirely overshadow the goal of enhancing their understanding. Striking an appropriate balance between the 2 objectives will require going beyond the design of reports to consider ways of assisting consumers so that they can more fully comprehend complex performance metrics.

Our findings suggest that 1 way to do that involves exposing the public to reports on clinician quality on a more regular basis. Users of SelectMD who had previously seen reports on clinician quality judged CAHPS ratings of individual physicians to be more useful and meaningful. Yet only about 1 in 10 Americans have seen a report on clinician quality in the past year, and this has not changed since the mid 1990s.8 To be sure, much of the public may have such limited interest in comparative quality ratings that they would ignore quality reports even if freely and widely available. Yet about 40% of SelectMD users indicated that they believed there to be large differences in quality among primary care clinicians, suggesting that a substantial portion of the public might see such reports as salient and valuable.

More broadly, inherent limitations in the extent to which the cognitive burden of decision-making can be “pushed” onto consumers suggest a need for increased use of intermediaries who can actively help people make use of comparative performance information. The health insurance navigators mandated in the Patient Protection and Affordable Care Act (PPACA) are a step in this direction, intended to assist with choices among health plans.20 To date, however, no comparable arrangements exist for assisting with choices among clinicians, as the equivalent “patient navigator” initiatives have been focused on maintaining patients’ connections with the health care system rather than on facilitating choice among providers.21 The track record of these initiatives does suggest, however, that policy interventions can encourage the creation of an infrastructure of trusted intermediaries who can help consumers make sense of complicated performance information and complex choices.

Our results also indicate that people with chronic illness perceive CAHPS results as more useful and more trustworthy than do those without chronic illness. This is encouraging, because it suggests that the domains of patient experience addressed by CAHPS are especially likely to resonate for consumers with the greatest experience with the health care system; all the more so since >120 million Americans live with a chronic health condition. Consumers with chronic illness stand to benefit by using CAHPS to aid their selection of providers at important transition points and to periodically assess whether their current treatment arrangements are adequate.

If CAHPS seems to work especially well for particular segments of the consumer population, report designers should consider how best to adapt the survey or its presentation to the needs of those groups. This might include reporting results disaggregated into subgroups, although that raises some challenging issues of adequate sampling and may exacerbate problems related to complexity, because condition-specific measures, in order to be sufficiently reliable, will almost always need to be presented at the clinician-group level rather than for individual clinicians.22


CONCLUSIONS

In this study, we assessed the effects of providing complex information on multiple dimensions of physician quality in a single report. Although providing additional relevant information should at least in theory be helpful to choice, our results suggest that any benefits of the additional information are at least partially offset by the disadvantages posed by increased cognitive burden and processing time for consumers.

The quest for better informed medical consumers is fraught with challenges and laden with embedded ironies. Consumers seek report cards that can simplify difficult and highly consequential choices, but report designers recognize the need to educate consumers about the multiple facets of quality even while trying to facilitate choice. Consumers turn to report cards hoping to simply and unambiguously identify the “best” providers and plans, yet top performers on some dimensions of quality often fall short on other dimensions. Advocates of medical consumerism extol the value of patient choice, yet such choices often involve trade-offs that many consumers would prefer not to face.

Although the US health care system is clearly still in the midst of a transition toward greater consumer empowerment and informed decision-making, these tensions are inherent in consumer choice in medical settings and will not go away, even if more Americans are exposed more frequently to performance metrics. It is essential that health services researchers and policymakers recognize these persisting tensions and approach the task of consumer empowerment with realistic expectations and thoughtful responses to such persistent challenges.


ACKNOWLEDGMENTS

The authors thank Debra Dean of Westat and Jeff Rabkin and Will Garrison of Wowza for their assistance on this project, as well as the staff at Knowledge Networks for fielding this survey experiment.


TECHNICAL APPENDIX A: A More Detailed Description of the SelectMD Website

The Website was designed to achieve a “modal” level of content, format, and functionality. The intent was to include many of the basic presentation and navigation features commonly found in contemporary real-world web-based reports, so as to be representative of what consumers encounter. For example, the home page of the SelectMD site (which is constant across all experimental conditions) included the ability to specify type of doctor (internist or family practitioner) and distance the user is willing to travel. Both features are common to current physician search Websites.

Like many sites with quality information, the SelectMD site had a simple overall look and feel, with a spare design, neutral colors, and a conventional typeface and font size. To indicate sponsorship, the site displayed a SelectMD logo, and a copyright footnote credited “The Better Health Coalition,” a nonprofit, independent organization receiving no funding from health care organizations or employers.

Once past the home page, study participants saw a page labeled “Performance Overview” that presented summary scores for “Service Quality” and “Technical Quality” for a set of primary care doctors. To indicate how each doctor compared with others in the same geographic area, their performance was represented by 1–5 stars, where 3 stars was average. As on many similar Websites, participants were able to sort doctors by level of performance on each measure available in that experimental condition and to “drill down” within a given performance category to view component measures contributing to the summary score. To see more detailed results, participants could click on hyperlinks or a tab.

Other modal features of public reporting Websites adopted in constructing the SelectMD Website include:

  • Use of stars as symbols of relative performance.
  • Use of emoticons to signal a positive, neutral, or negative patient review.
  • A “scroll-over” function to learn more about the definition of performance measure subcategories.

Certain features and functions of the SelectMD Website were not typical of today’s reports. Two of these were meant to expand on current efforts to enable users to narrow down their choices: participants could both filter the list of doctors by sex and/or years of experience and highlight any number of doctors for comparison purposes. A third atypical feature present in two-thirds of the experimental conditions was the ability to read patient reviews alongside performance metrics. Although a growing number of Websites today offer patient reviews of physicians, we had not found any when we initiated this study that combined comments with comparative information based on standardized measures. To enhance the verisimilitude of the patient reviews on the SelectMD site, users had the ability to add their own comments.

We conducted 2 rounds of usability testing during the course of Website development, which led to significant revisions to improve navigation, comprehension, and ease of use. More specifically, it became clear that because users could not visit the Website repeatedly to learn how it functioned, it was crucial to provide participants with a brief tutorial to orient them to the Website’s content and functionality and to explain the doctor selection step at the end of their session. The tutorial consisted of a series of 4 screenshots describing the performance measures that might be included in the participant’s session, as well as the sorting, drill down, and highlighting functions.

Interested readers may view the experimental Website starting with the tutorial at http://www.selectmd.org/site/intro/. The experimental arms are assigned randomly upon reaching the homepage.


REFERENCES

1. Agency for Healthcare Research and Quality. Health Care Report Card Compendium. 2011. Available at: https://www.talkingquality.ahrq.gov/content/reportcard/search.aspx. Accessed October 12, 2011
2. Schlesinger M. Choice cuts: parsing policymakers’ pursuit of patient empowerment from an individual perspective. Health Econ Policy Law. 2010;5:365–387
3. Hanoch Y, Rice T. Can limiting choice increase social welfare? The elderly and health insurance. Milbank Q. 2006;84:37–73
4. Schneider C. After autonomy. Wake Forest Law Rev. 2006;41:411–444
5. Hibbard JH, Slovic P, Peters EM, et al. Strategies for reporting health plan performance information to consumers: evidence from controlled studies. Health Serv Res. 2002;37:291–313
6. Fung CH, Elliott MN, Hays RD, et al. Patients’ preferences for technical versus interpersonal quality when selecting a primary care physician. Health Serv Res. 2005;40:957–977
7. Anderson C. The psychology of doing nothing: forms of decision avoidance result from reason and emotion. Psychol Bull. 2003;129:139–166
8. Kaiser Family Foundation. 2008 Update on Consumers’ Views of Patient Safety and Quality Information. Publication #7819. Menlo Park, CA: Kaiser Family Foundation; 2008
9. Baker L, Bundorf MK, Singer S, et al. Validity of the Survey of Health and Internet and Knowledge Network’s Panel and Sampling. Unpublished manuscript, Stanford University, 2003
10. Chang L, Krosnick JA. National surveys via RDD telephone interviewing versus the internet: comparing sample representativeness and response quality. Public Opin Q. 2009;73:641–678
11. Alexander JA, Hearld LR, Hasnain-Wynia R, et al. Trust in sources of physician quality information. Med Care Res Rev. 2011;68:421–440
12. Faber M, Bosch M, Wollersheim H, et al. Public reporting in health care: how do consumers use quality-of-care information? A systematic review. Med Care. 2009;47:1–9
13. Lubalin J, Harris-Kojetin L. What do consumers want and need to know in making health care choices? Med Care Res Rev. 1999;56(suppl 1):67–102
14. Spranca M, Elliott MN, Shaw R, et al. Disenrollment information and Medicare plan choice: is more information better? Health Care Financ Rev. 2007;28:47–59
15. Marshall M, McLoughlin V. How do patients use information on providers? BMJ. 2010;341:1255–1257
16. Lagu T, Hannon NS, Rothberg MB, et al. Patients’ evaluations of health care providers in the era of social networking: an analysis of physician-rating websites. J Gen Intern Med. 2010;25:942–946
17. Sick B, Abraham J. Seek and ye shall find: consumer search for objective health care cost and quality information. Am J Med Qual. 2011;26:433–440
18. Fasolo B, Reutskaja E, Dixon A, et al. Helping patients choose: how to improve the design of comparative scorecards of hospital quality. Patient Educ Counsel. 2010;78:344–349
19. Uhrig JD, Harris-Kojetin L, Bann C, et al. Do content and format affect older consumers’ use of comparative information in a Medicare health plan choice? Results from a controlled experiment. Med Care Res Rev. 2006;63:701–718
20. Rosenbaum S. Realigning the social order: the patient protection and affordable care act and the U.S. health care system. J Health Biomed Law. 2011;7:1–31
21. Natale-Pereira A, Enard KR, Nevarez L, et al. The role of patient navigators in eliminating health disparities. Cancer. 2011;117(suppl):3543–3552
22. Martino SC, Kanouse DE, Elliott MN, et al. A field experiment on the impact of physician-level performance data on consumers’ choice of physician. Med Care. 2012; this issue
Keywords:

CAHPS; choice of physicians; physician quality report; bounded rationality; experiment

© 2012 Lippincott Williams & Wilkins, Inc.