Public reporting of information on the comparative performance of health care providers has become increasingly common.1 One of the main objectives of providing this information to the public is to support consumers in choosing among providers.2–4 Until recently, most public reports on health care quality have focused on hospitals, health plans, and large physician groups,1 and few public data have been available to inform decision making about individual physicians. This is not due to a lack of demand; studies consistently find that consumers are most interested in individual physician-level quality information.5–7
Momentum toward public reporting of data on the quality of individual physicians is starting to build. For example, the Centers for Medicare and Medicaid Services is soliciting physician-quality data as part of its Physician Compare Initiative (http://www.medicare.gov/find-a-doctor/provider-search.aspx), with plans to report these data online. The Physician Compare web site will present quality-of-care and patient-experience data to help Medicare and non-Medicare patients and their families assess the quality of providers. Consumer Reports has recently begun providing quality information on individual physicians for a fee through its ConsumerReportsHealth.org web site (http://www.consumerreports.org/health/home.htm?loginMethod=auto). Two medium-sized health plans have recently begun publishing web-based quality reports on individual physicians. The report issued by one of these plans, HealthPlus of Michigan, is the focus of our study.
Considering the significant interest in public reporting of physician-level performance data and the potential stakes for consumers and providers, it is critical to know whether publicly displaying these data has the intended effects on consumer choice. There are, however, few published evaluations of physician-level quality reports. A recent review of the evidence for publicly reported performance data as a means to affect decisions about individual providers identified 7 studies on this issue.8 Together, these studies provide inconsistent evidence for an association between public reporting and selection of individual providers. It is difficult to generalize from these studies, however, as all focused on a single reporting system—the New York State Cardiac Surgery Reporting System—which publishes clinical outcome data (eg, mortality rates) for individual cardiac surgeons in New York. There are substantially more published evaluations of reporting at other levels (ie, health plans and hospitals). These studies, too, provide little evidence that quality information drives consumers to select higher quality health plans and hospitals.3,8,9 One cannot assume, however, that results for health plan and hospital choice will necessarily match those for individual provider choice, as the context of these decisions can be very different.9
In interpreting the results of studies that examine the impact of publicly reported quality data on provider choice, researchers must consider both whether the research design allows one to make causal inferences about the effects of reporting and how closely the decisions being studied resemble those made in the real world. Some researchers have used experimental designs to isolate the effects of quality information on consumer decision making.10–12 Although experimental studies offer a high level of control over the many other factors that may influence provider choice, they are limited in their external validity. Because participants in these studies make only hypothetical choices, they may lack the incentives, emotions, and engagement of consumers making such choices in the real world. A handful of studies have used real-world “field” experiments to investigate the impact of health care quality data on consumer decision making.13–15 In these studies, individuals needing to select a new health plan are randomly assigned to receive or not receive comparative data on the set of plans under consideration. Consumers’ plan choices are then analyzed as a function of experimental condition. Although studies like these have enhanced external validity over studies conducted in the laboratory, they are impractical in situations in which exposure to quality data cannot be randomly assigned, for example, when the data are already publicly available.
There has been significant progress in methods to rigorously evaluate treatment effects when participants cannot be randomly assigned.16 With nonrandom exposure to treatment, people who use health care quality information may be predisposed to selecting higher quality providers than those who do not seek out or use such information. One way around the potential self-selection bias is a randomized trial of an intervention that increases exposure to the quality data among some participants. In such a randomized encouragement design,17–19 a randomly selected group receives extra encouragement or incentives to undertake a treatment (in our case, to access and use physician-level performance data). Under certain assumptions that we discuss below,16,17 this design may enable unbiased estimates of the effect of comparative quality data on consumer choice of providers.
HealthPlus of Michigan is an independent, nonprofit plan that contracts with over 900 primary care providers (PCPs) to serve approximately 72,000 adult, commercial health maintenance organization (HMO) members in central/east Michigan. In the current study, we used a randomized encouragement design to investigate the impact of an online physician performance report on new HMO members’ choice of HealthPlus PCPs. Specifically, beyond the standard information the health plan provides to all new members, we offered a randomly selected half of all adult new members who were required to select a PCP extra encouragement to use the online physician performance report, and we then tracked use of the report and the quality of the PCPs selected by all participants.
In keeping with the vast majority of current public reporting efforts,3 HealthPlus of Michigan disseminates its physician performance data through the Internet (on its publicly accessible web site). Dissemination of comparative health care quality data through the Internet is an attractive option in that it is relatively inexpensive and can allow consumers to customize data display to fit their preferences and concerns.20 At the same time, information provided through the Internet will not be seen by those who are unaware of its existence, and accessing and using this information may prove challenging for those who do not use computers regularly or do not have friends or family who are willing and able to assist them. Thus, in our study, we also examined participants’ Internet savvy as a potential moderating factor in the relationship of interest.
This study used a randomized encouragement design to investigate the influence of physician performance data on new health plan members’ choice of a PCP. By randomizing encouragement and tracking both exposure to the treatment and outcomes for all those who do and do not receive the encouragement, it is possible to obtain unbiased estimates of the effects of the encouragement and—under certain assumptions discussed below—of the treatment itself. The effect of encouragement on the outcome can be analyzed directly as randomized. If the encouragement’s only effect on the outcome is by increased uptake of the treatment, then the effect of the treatment can be estimated using the Wald estimator.21,22
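Concretely, the Wald estimator divides the intent-to-treat effect of encouragement on the outcome by the encouragement-induced difference in treatment uptake. The following Python sketch is illustrative only; the variable names and toy data are ours, not the study’s:

```python
def wald_estimate(y_encouraged, y_control, d_encouraged, d_control):
    """Wald (instrumental-variable) estimator of the treatment effect.

    y_*: outcome values per group (e.g., quality score of the chosen PCP)
    d_*: 0/1 treatment uptake per group (e.g., viewed the report or not)
    """
    mean = lambda xs: sum(xs) / len(xs)
    # Intent-to-treat effect: difference in mean outcome by assignment.
    itt = mean(y_encouraged) - mean(y_control)
    # First-stage effect: difference in treatment uptake by assignment.
    uptake_diff = mean(d_encouraged) - mean(d_control)
    # Ratio = effect of treatment among those induced to take it up,
    # valid only if encouragement affects outcomes solely through uptake.
    return itt / uptake_diff
```

If encouragement raises mean outcomes by 1 point while raising uptake by 50 percentage points, the implied treatment effect is 2 points.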
This variation of an experimental design is useful when it is impractical or unethical to exert full control over which study participants are exposed to a treatment. For example, randomized encouragement designs have been used to estimate the effectiveness of an influenza vaccine in reducing morbidity in high-risk adults23 and to study the effectiveness of physician adherence to treatment guidelines in improving the survival rate of patients with heart failure.24 In our case, random assignment to treatment (exposure to physician performance data) was both impractical (the physician performance report is publicly available to all who wish to access it) and unethical (it would have required withholding from a subset of new health plan members data that could facilitate selection of a high-quality PCP).
Participants and Procedure
During its new-member-enrollment period in the fall-winter of 2009–2010, HealthPlus of Michigan informed all new commercial HMO members of the availability of physician performance data on its web site. This notification was provided in the information packet mailed to all new-member households. Information about the online physician performance data was also disseminated more broadly through a press release issued by HealthPlus in the fall of 2009.
Each year during new-member enrollment, HealthPlus mails a notice to all new commercial HMO members who have not designated a PCP upon enrollment that they need to do so as soon as possible. When these mail contacts were made between October 2009 and January 2010, a randomly selected half of new members were given extra encouragement to access the online physician performance report and use the information it contained to select a PCP. (New enrollees were randomized in batches, via urn randomization, on a weekly basis throughout this 4-month enrollment period.) This extra encouragement was provided in part by a 1-page letter—signed by the chief medical officer at the health plan—that explained how to access and use the online report and emphasized the importance of doing so. In particular, new members assigned to the encouragement condition were told that, “choosing a primary care doctor who provides quality care and meets your specific needs is important to your health.” Furthermore, they were told that the online physician performance reports contain “important information about the quality of the doctors you are considering, including what patients like you say about their experiences with doctors and their staff.” Approximately 1 week after the mail contact, new members assigned to the encouragement condition additionally received a brief (54 s), automated phone call reminding them about the online physician performance data and further reinforcing their potential usefulness.
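Urn randomization biases each successive assignment toward the under-represented arm while keeping allocation random, which helps balance arm sizes within small weekly batches. The study does not describe its exact urn parameters, so the following Python sketch of a generic Wei-style urn design UD(alpha, beta) is illustrative only:

```python
import random

def urn_randomize(batch, alpha=1, beta=1, rng=None):
    """Assign a batch of enrollees to study arms via an urn design.

    Start with `alpha` balls per arm. After each draw, return the ball
    and add `beta` balls of the OPPOSITE arm, so the next draw is nudged
    toward whichever arm has been assigned less often.
    """
    rng = rng or random.Random()
    urn = {"encouragement": alpha, "control": alpha}
    assignments = {}
    for member in batch:
        total = urn["encouragement"] + urn["control"]
        arm = ("encouragement"
               if rng.random() < urn["encouragement"] / total
               else "control")
        assignments[member] = arm
        other = "control" if arm == "encouragement" else "encouragement"
        urn[other] += beta  # strengthen the under-assigned arm
    return assignments
```

With alpha = beta = 1 this reduces to Wei’s UD(1, 1) design; larger beta enforces tighter balance.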
Approximately 2 weeks after the new-member–enrollment period ended, in February 2010, all 1347 new commercial HMO members assigned to either the encouragement or control condition were mailed a survey that asked about their exposure to and use of the physician performance report in selecting their PCP. Reminder postcards were mailed a week later, and replacement surveys were mailed 1 month later to those who did not respond to the survey sent initially. In addition to information on participants’ use of the physician performance data, the mail survey elicited demographic data and information on participants’ use of the Internet. HealthPlus provided data on the quality of each PCP (as reported in the online provider directory) selected by new enrollees assigned to a condition of our study regardless of whether they completed a survey. Although the majority of plan members designated a PCP within the first few months of enrollment, some had not done so even after a full year. Those who did not select a PCP within a year of enrollment were excluded from our analyses of the effects of encouragement on the quality of PCP selected. All procedures for this study were approved by RAND’s Human Subjects Protection Committee.
Nature of the Treatment
Data on physician performance are embedded within HealthPlus of Michigan’s online provider directory (http://www.healthplus.org/templates/quicklink.aspx?id=2142). This directory shows, in a grid format, individual provider names, practice addresses and phone numbers, HealthPlus product affiliations (eg, HMO or preferred provider organization), specialty, and scores on “overall clinical quality” and “member satisfaction.” In the online directory, a hyperlink is provided to information about the data underlying the clinical quality and member satisfaction scores. Users who click on this link are told that the overall member satisfaction score summarizes (through equal weighting) 3 composite measures (quality of doctor communication; courteous and helpful office staff; and timeliness of appointments, care, and information) and an overall rating of the doctor, all derived from the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Clinician and Group Survey (CG-CAHPS). Further information is presented on each of these underlying measures, including their rating scales and the individual items in each of the composite measures. Likewise, users are told that the overall clinical quality score summarizes (through equal weighting) Healthcare Effectiveness Data and Information Set indicators for asthma care; breast, cervical, and colorectal cancer screening; and diabetes management. Information is also provided on the benchmarks used to establish physician rankings on the overall member satisfaction and clinical quality scores that are displayed.
Users of the directory are able to restrict the number of providers displayed by limiting them to a certain geographic location, specialty, or product affiliation. The web site also offers the capability to sort providers on any of the variables in the display, including the 2 physician performance scores. Each of the physician performance scores is presented using a row of 1–5 symbols, where the symbol is HealthPlus’s corporate logo, a variation of a “plus” symbol. A legend at the top of the data table explains that 5 “plus” symbols is the highest score a doctor can achieve and that 1 “plus” symbol is the lowest. The legend also explains several missing data indicators that may appear in the columns of the grid that present the performance scores. “N/A” indicates that data were available from too few plan members to calculate a reliable score for the physician. “New to network” indicates that data are not yet available for the physician as s/he only recently joined the health plan's network of physicians. “Under review” indicates that data have only recently been collected on the provider and are not yet available for display.
Quality of Provider Chosen
Our main outcome variable is the quality of the PCP chosen by plan members, as indicated by the provider’s member satisfaction and overall clinical quality scores displayed in the provider directory. As noted above, scores on each of these measures ranged from 1 to 5 (displayed as 1–5 “plus” symbols in the provider directory), with higher scores indicating higher quality. Table 1 shows the distribution of scores on these 2 measures among all PCPs listed in the provider directory at the time of our study. Scores on these 2 measures were uncorrelated, r(460)=0.04, so we analyzed them separately.
Use of the Physician Performance Data to Select a PCP
As part of the mail survey, we asked participants to indicate whether they had accessed the online provider directory, and if so, whether they had accessed the directory before or after selecting a PCP.
Participants reported how often they connected to the Internet (never to at least once/day) and whether in the past 12 months they had used the Internet to (a) make travel plans; (b) find information on a hobby or favorite activity; (c) find information on health insurance; (d) find information on a possible purchase; (e) find information on doctors or hospitals; (f) find information on a health condition; and (g) find information about a medical treatment or procedure. Participants who said they accessed the Internet at least weekly, or who said they accessed the Internet at least monthly and had used it for at least 4 of the 7 stated purposes in the past year (78% of participants), were classified as regular Internet users via an indicator variable. Because those who are not regular Internet users may be less affected by encouragement to use Internet-based information, the overall analyses are followed by analyses restricted to regular Internet users.
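As a sketch, the classification rule just described can be expressed as a simple predicate. The frequency labels below are hypothetical stand-ins; the survey’s exact response options are not reproduced here:

```python
def is_regular_internet_user(access_frequency, purposes_last_year):
    """Classify a respondent as a 'regular Internet user'.

    access_frequency: one of 'daily', 'weekly', 'monthly', 'rarely', 'never'
                      (illustrative labels, not the survey's actual wording)
    purposes_last_year: how many of the 7 listed purposes were used (0-7)
    """
    # At-least-weekly access qualifies on its own.
    if access_frequency in ("daily", "weekly"):
        return True
    # At-least-monthly access qualifies only with 4+ of the 7 purposes.
    return access_frequency == "monthly" and purposes_last_year >= 4
```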
We began by comparing survey respondents and nonrespondents to investigate the possibility of selection bias. Information on the latter group was limited to data on age and sex from the health plan’s administrative records. In addition, we used data from HealthPlus’s 2010 CAHPS survey to compare the racial/ethnic composition of the health plan’s total commercial HMO membership with that of our survey respondents (who provided data on race/ethnicity as part of this study’s mail survey). Finally, we compared survey respondents and nonrespondents on whether or not they selected a PCP within a year of enrolling in the health plan, information that was also available through the health plan’s administrative records.
Next, we conducted a simple bivariate analysis to test whether participants in the encouragement condition were more likely than those in the control condition to access the provider directory before selecting a PCP. A key assumption underlying our study is that encouragement makes plan members more likely to access the online provider directory than they would have in the absence of encouragement. This assumption was testable only among survey respondents as the survey was our sole source of information about plan members’ use of the online provider directory.
In contrast, data on the quality of the PCP selected by new enrollees were available for all plan members assigned to a condition of our study, regardless of whether they returned a completed survey. Thus, we began our analysis of PCP selection by looking among the entire study population to see whether PCPs selected by plan members in the encouragement condition were of higher quality than PCPs selected by plan members in the control condition. Next, we used ordinary least-squares regression to predict, separately, the member satisfaction and clinical quality scores of selected PCPs from encouragement condition, directory use (whether a participant reported looking at the online provider directory before selecting a PCP), and their interaction. In this model, which could only be tested among new enrollees who returned a survey, the encouragement coefficient estimates the effect of encouragement among those not accessing the directory, the directory use coefficient estimates the effect of directory use among those in the control condition, and the interaction coefficient estimates the amount by which the treatment (directory use) effect on quality is greater among those who were encouraged than among those who were not.
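In our own notation (the symbols are ours, not the authors’), with Q_i the quality score of the PCP selected by enrollee i, E_i an indicator for assignment to the encouragement condition, and D_i an indicator for reported directory use, the model just described is:

```latex
Q_i = \beta_0 + \beta_1 E_i + \beta_2 D_i + \beta_3 (E_i \times D_i) + \varepsilon_i
```

Here \beta_1 is the effect of encouragement among nonusers of the directory, \beta_2 is the effect of directory use in the control condition, and \beta_3 is the interaction.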
Wald estimation of the effects of treatment (directory use) on physician quality requires that all effects of encouragement on the outcome (physician quality) occur through increased uptake of the treatment. First, there must be no effect of encouragement on physician quality in the absence of directory use, and thus the encouragement coefficient in the model must not differ significantly from zero. Second, the effect of treatment (directory use) on quality must not differ as a function of encouragement, and thus the interaction term must not differ significantly from zero. If either of these assumptions is not met, there is evidence of direct effects of encouragement on quality that are not “fully mediated” by increased directory use.
Analysis of Survey Nonresponse
Of the 1347 new plan members assigned to a study condition, 693 (337 control, 356 encouragement) returned a completed survey, a 51% participation rate (Fig. 1). The likelihood that a new plan member returned a completed survey did not differ by study condition, χ²(1)=1.13, P=0.29. Compared with survey nonrespondents, a greater proportion of survey respondents were female (57% vs. 50%, χ²(1)=6.88, P=0.009) and age 55 years or older (23% vs. 11%, χ²(1)=35.14, P<0.001), and a smaller proportion were age 34 or younger (27% vs. 44%, χ²(1)=44.45, P<0.001). Compared with HealthPlus’s entire commercial HMO membership, similar proportions of survey respondents were non-Hispanic white (87% of survey respondents vs. 88% of total membership), non-Hispanic African American (6% vs. 7%), Hispanic (3% vs. 3%), and other race/ethnicity (4% vs. 2%). Of the 1347 new plan members assigned to a study condition, 923 (69%) selected a PCP within a year of enrollment (Fig. 1). The likelihood of selecting a PCP within a year of enrollment differed between survey respondents and nonrespondents: 81% of survey respondents selected a PCP within a year of enrollment versus 56% of survey nonrespondents (χ²(1)=87.95, P<0.001), a point that we address in the Discussion section.
Effect of Encouragement on Exposure to the Physician Performance Data
Of the 693 survey respondents, 25 (4%) did not provide data on whether they accessed the provider directory before selecting a PCP. Missingness on this variable was unrelated to study condition, χ²(1)=0.22, P=0.64. In the control condition, 22% of participants reported looking at the provider directory before selecting a PCP. In the encouragement condition, 28% of participants reported doing so, a 27% relative increase. A χ² test of the difference between these 2 proportions produced a marginally significant result, χ²(1)=2.88, P (2-tailed)=0.09, P (1-tailed)=0.045.
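The test of a difference between two proportions used here is the Pearson chi-square (1 df, no continuity correction) on the underlying 2×2 table, which can be computed directly from the shortcut formula. This is a sketch; the counts used in the test below are arbitrary, not the study’s:

```python
def chi2_two_proportions(x1, n1, x2, n2):
    """Pearson chi-square statistic (1 df, uncorrected) comparing the
    proportions x1/n1 and x2/n2 via the standard 2x2 shortcut formula:
    chi2 = n(ad - bc)^2 / [(a+b)(c+d)(a+c)(b+d)]."""
    a, b = x1, n1 - x1  # group 1: successes, failures
    c, d = x2, n2 - x2  # group 2: successes, failures
    n = n1 + n2
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

Comparing the statistic against the 1-df chi-square distribution then yields the 2-tailed P value; halving it gives the 1-tailed P for a directional hypothesis.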
Effect of Encouragement on Quality of Provider Selected: All New Plan Enrollees
At the time of our study, 38% of all commercial PCPs listed in the online provider directory had nonmissing data on at least 1 of these 2 measures of physician quality. Of the 923 new plan enrollees who selected a PCP, 517 (56%) selected a PCP for whom data on quality appeared in the online provider directory. Study condition was unrelated to whether a participant selected a PCP, χ²(1)=0.03, P=0.84, and whether a participant chose a PCP for whom quality data appeared in the provider directory, χ²(1)=1.22, P=0.27. Among new plan enrollees who picked a PCP for whom data on the member satisfaction (CAHPS) measure appeared in the provider directory, study condition was unrelated to the member satisfaction score of selected PCPs [M encouragement=3.24 (SD=0.89), M control=3.22 (SD=0.89), t(461)=0.28, P=0.78]. Similarly, among new plan enrollees who picked a PCP for whom data on the clinical quality (Healthcare Effectiveness Data and Information Set) measure appeared in the provider directory, study condition was unrelated to the clinical quality score of selected PCPs [M encouragement=3.76 (SD=0.87), M control=3.78 (SD=0.95), t(512)=−0.26, P=0.80].
Effect of Encouragement on Quality of Provider Selected: Survey Respondents Only
The regression results shown in Table 2 are based on survey respondents (n=315) who chose a PCP for whom performance data were reported in the online provider directory. As this table shows, among this subset of participants, there was no evidence of an effect of encouragement or exposure to physician performance data on the overall clinical quality scores of selected PCPs. There was also no evidence that participants who viewed physician performance data selected a PCP scoring higher on the measure of member satisfaction (P=0.49). There was, however, a significant effect of encouragement (P=0.04) on the member satisfaction scores of selected PCPs. In particular, survey respondents in the encouragement condition selected PCPs with higher scores on member satisfaction (M=3.31, SD=0.89) than did survey respondents in the control condition (M=3.17, SD=0.86; Cohen d=0.16). As the nonsignificant (P=0.83) interaction indicates, there is no evidence that the effect of encouragement differed between those who did versus those who did not access the physician performance data before selecting a PCP. In other words, among survey respondents, encouragement appears to have affected the quality of PCP selected, but that effect did not result from the hypothesized mechanisms of increased use of the physician directory among those encouraged and selection of higher quality PCPs among all users of the directory.
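For reference, the reported effect size can be reproduced from the published means and standard deviations. The sketch below pools the SDs assuming roughly equal group sizes, a simplification; the authors may have weighted by exact group ns:

```python
def cohens_d(m1, sd1, m2, sd2):
    """Cohen's d: standardized mean difference, with the pooled SD
    approximated as the root mean square of the two group SDs
    (exact only when the two groups are the same size)."""
    pooled_sd = ((sd1 ** 2 + sd2 ** 2) / 2) ** 0.5
    return (m1 - m2) / pooled_sd

# Member satisfaction scores of selected PCPs, encouragement vs. control:
d = cohens_d(3.31, 0.89, 3.17, 0.86)  # ≈ 0.16, matching the reported d
```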
Table 3 shows these same regression models for the subset of survey respondents categorized as regular Internet users. This table shows the same pattern of results as seen among the entire sample of survey respondents. That is, there was no evidence of an effect of encouragement or exposure to the physician performance data on the overall clinical quality of PCPs selected. There was, however, a significant (P=0.002) effect of encouragement on the member satisfaction scores of selected PCPs. This effect, which was more than twice as large as that observed among the entire sample of survey respondents (Cohen d=0.36), did not result from the hypothesized mechanisms of increased use of the physician directory among those encouraged and selection of higher quality PCPs among all users of the directory.
Our study was primarily intended to evaluate the effect of publicly reported physician-quality data on PCP choice. We investigated this association in the real world, among a population for whom data on individual provider quality were highly relevant—new health plan members who were required to select a PCP—and we controlled for possible selection effects by using a randomized encouragement design. Yet, our study produced no evidence that physician performance reports affect consumer decision making. Although these findings could be read simply as a failure to demonstrate the utility of comparative quality data for provider choice, such a conclusion may not be warranted.
A growing body of research suggests that data accessibility and display issues play a vital role in determining whether consumers use quality data once they are aware of them.12,15,25–28 Our encouragement messages directed new enrollees to the introductory page of the HealthPlus provider directory. The steps necessary to get from that page to an actual data display are not obvious, which may have frustrated many study participants. In particular, there are many input fields that users may complete (using pull-down menus) as a way of customizing their search for a provider. The response options in the pull-down menus are not always clear. Some of the fields are required and some are not; again, the distinction is not always clear. Despite these problems, which are not uncommon in online reports of provider quality (http://www.innovations.ahrq.gov/content.aspx?id=577), there is value in assessing the effect of HealthPlus’s physician performance report as it is one of very few examples of physician-level reporting to date.
A second issue is the amount of missing data on physician quality in the HealthPlus provider directory. Nearly two thirds of the physicians listed in the directory at the time of our study did not have data on the performance measures reported there. The high rate of missing data may have eroded consumers’ confidence in the data as a whole, thereby limiting the effects of exposure to quality information on new members’ choice of a PCP. However, our study does provide evidence about how consumers behave in response to a report in which data are missing for a substantial proportion of health care options. Consumers’ responses to and use of physician ratings in the context of abundant missing data may differ from how they would respond to ratings in the context of more complete data.
The impact of missing information on consumer choice, especially when such information is missing in a large proportion of cases, is poorly understood and is an issue of significant policy concern that is just beginning to get the research attention that it deserves.29,30 Consumers facing such data may doubt the reliability or usefulness of the data as a whole and may ignore them even where they do exist. Consumers who do not have such a reaction must make the difficult inference about how to compare the quality of physicians with no data to the quality of physicians with data. They might, for example, assume that the absence of data for some providers means that those providers are of lower quality.29 Such an assumption would lead to an impact of reporting that is substantially different from the impact of a report with more complete data. In our study, a majority (55%) of participants who reported consulting the provider directory before choosing a PCP picked a provider with missing data. This suggests that many participants may have ignored the quality data when making their decision.
The most common reason for missing data in the HealthPlus provider directory is that there are a large number of providers in the health plan’s network for whom HealthPlus members constitute only a small fraction of their total pool of patients (and thus provide too little data from which to estimate reliable performance scores). Among physicians in HealthPlus’s “core service area,” in which the plan has long-standing membership and established physician networks, the problem of missing data is much less severe: 68% of physicians in the plan’s core service area had nonmissing quality data in the online directory at the time of our study versus 38% of all commercial primary care physicians. However, because physicians in the core service area are not reported separately from physicians in outlying areas, users of the online directory—including those looking for a physician in the core service area—must contend with large amounts of missing data when trying to compare the quality of PCPs.
The problem of missing data is likely to be common to all but the largest health plans, and suggests an important challenge for publicly reporting quality data at the individual physician level. Small sample sizes are a problem in general in public reporting,31 and there is general agreement that it is better to indicate missing data in a report than to include unreliable data based on small sample sizes. To minimize the problem of having too few patients per provider for reliable reporting, it may be necessary to report individual provider-level data at other than the health plan level (eg, statewide or by all-payer claims databases).
Another possible explanation for why participants who visited the online directory were not swayed by the performance data is that they did not fully understand the measures of physician quality. In general, consumers have difficulty understanding comparative quality data.32,33 Although little is known about how consumers understand roll-up scores such as those reported in HealthPlus’s physician performance report, there is reason to believe that consumers may have particular difficulty understanding these scores. A roll-up combines multiple measures that are not necessarily related conceptually into a single score. In principle, roll-up scores should make it easier to arrive at an overall evaluation by reducing the number of dimensions that people need to consider; however, consumers may have little understanding of the dimensions that are rolled up and thus little motivation to use roll-ups for decision making.
Although our study does not provide evidence that publicly reported data on physician quality affect the quality of PCPs selected by consumers, it did identify a low-cost means of encouraging new plan members to access these data. This is an important finding given that one of the most important challenges in public reporting (at any level) has been promoting awareness of these data.15,34–36 This finding also suggests the value of strategies directing consumers to physician-quality data at a point when they are most likely to be interested in seeing it.37,38
Among those who responded to our survey, our encouragement manipulation seems to have done more than just draw them to the physician performance data—it may have led them to choose PCPs with higher ratings on member satisfaction. That encouragement appears to have influenced the quality of providers chosen by survey respondents, even when they did not access the data on physician quality, suggests that encouragement had an effect that was not dependent on viewing the intended source of quality information. It is plausible that the encouragement intervention enhanced the salience of physician quality, which might then have activated diffuse information-seeking behavior, such as asking friends, family members, or colleagues for recommendations, or consulting for-profit web sites (eg, vitals.com, Angie’s List) that present information on individual providers. If so, then it is also plausible that the experiences underlying those recommendations would be predictive of patient satisfaction but not of clinical measures of quality. The encouragement effect was strongest among regular Internet users, who would have had greater access to information on doctors besides what was presented in HealthPlus’s directory. More research is needed to understand the mechanism or mechanisms underlying this effect.
That the effect of encouragement on the member satisfaction (CAHPS) scores of selected PCPs was not evident among the entire study population suggests that respondents to our survey may have been a select subgroup of new enrollees. Although a 51% response rate is similar to those observed in surveys of outpatient and inpatient experiences,39,40 and response rates tend to be only weakly associated with nonresponse bias in well-conducted probability samples,41 the possibility of nonresponse bias remains. We observed some evidence of demographic nonrepresentativeness, with women and older people being more likely to return a survey; these are standard patterns in survey response.42,43 We also observed that survey respondents were more likely than nonrespondents to select a PCP within 1 year of enrolling in the health plan, suggesting that survey respondents may be a more activated group of health care consumers, in greater need of health care, or perhaps more conscientious generally. Thus, caution is warranted in making inferences about prevalence based on our sample data. Even so, our randomized encouragement design should protect us against bias in comparing across study conditions, and equal response rates across conditions suggest that there was no differential response by condition.
Our study had other important limitations. First, nearly a third of new plan members assigned to a condition of our study did not select a PCP within a year of enrollment. Although an interesting finding in its own right and not indicative of selection bias per se, it reduced the sample size available to test for effects of encouragement and exposure on quality of PCP selection. Thus, some caution is merited in interpreting nonsignificant results. Second, our study sample was necessarily limited to those new enrollees who did not predesignate a PCP on their enrollment form. Many of those who predesignate a PCP are switching plans and already have a doctor with whom they are satisfied and who is available through the new plan. Our study excludes this subset of new enrollees, although how this may affect our results is unclear. Third, information about exposure to the physician-quality report was limited to participants’ self-reports about whether and when they accessed the provider directory. To the extent that participants in the encouragement condition felt compelled to report accessing the directory even if they had not, our results may be biased toward finding an effect of encouragement on this outcome. Other studies that use a similar design should consider more covert ways to collect this information. Finally, it is important to note that our model of PCP choice does not account for many of the reasons why people choose a particular PCP, including word-of-mouth reputation, availability, and location of the provider’s office.44,45 However, our randomized encouragement design should protect us against bias that might result from the exclusion of such factors.
A number of conditions must be met for health care quality reports to be effective. Consumers must know about the data, have access to them, and be in a state of readiness to make a decision.35,38 The data must be understandable, seen as trustworthy, and relevant to their choice options.44,45 If any one of these conditions is not met, exposure to performance data is likely to have little or no (detectable) effect, even if it has an effect under optimal conditions. In this study, we were able to provide consumers with access to information at the time they needed it to make a decision, but the performance data covered only a fraction of their choice options. This highlights challenges in finding ways to minimize missing data and provide appropriate information about why data are missing so that consumers’ confidence in the data is not undermined.29 Our study identifies encouragement as a possible “salience intervention” that may have value independent of consumer reporting, and which, under ideal circumstances, may synergistically enhance the effects of reporting.
1. O’Neil S, Schurrer J, Simon S. Environmental Scan of Public Reporting Programs and Analysis. Cambridge, MA: Mathematica Policy Research; 2010
2. Berwick DM, James B, Coye MJ. Connections between quality measurement and improvement. Med Care. 2003;41:I30–I38
3. Harris KM, Buntin MB. Choosing a Health Care Provider: The Role of Quality Information. Research Synthesis Report No. 14. Princeton, NJ: The Robert Wood Johnson Foundation; 2008
4. Marshall MN, Shekelle PG, Leatherman S, et al. The public release of performance data: what do we expect to gain? A review of the evidence. JAMA. 2000;283:1866–1874
5. Hibbard JH, Jewett JJ. What type of quality information do consumers want in a health care report card? Med Care Res Rev. 1996;53:28–47
6. Isaacs SL. Consumer’s information needs: results of a national survey. Health Aff (Millwood). 1996;15:31–41
7. Longo DR, Everett KD. Health care consumer reports: an evaluation of consumer perspectives. J Health Care Finance. 2003;30:65–71
8. Fung CH, Lim YW, Mattke S, et al. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008;148:111–123
9. Faber M, Bosch M, Wollersheim H, et al. Public reporting in health care: how do consumers use quality-of-care information? A systematic review. Med Care. 2009;47:1–8
10. Schoenbaum M, Spranca M, Elliott M, et al. Health plan choice and information about out-of-pocket costs: an experimental analysis. Inquiry. 2001;38:35–48
11. Spranca M, Kanouse DE, Elliott M, et al. Do consumer reports of health plan quality affect health plan selection? Health Serv Res. 2000;35:933–947
12. Uhrig JD, Short PF. Testing the effect of quality reports on the health plan choices of Medicare beneficiaries. Inquiry. 2002;39:355–371
13. Farley DO, Short PF, Elliott MN, et al. Effects of CAHPS health plan performance information on plan choices by New Jersey Medicaid beneficiaries. Health Serv Res. 2002;37:985–1007
14. Farley DO, Elliott MN, Short PF, et al. Effect of CAHPS performance information on health plan choices by Iowa Medicaid beneficiaries. Med Care Res Rev. 2002;59:319–336
15. Hibbard JH, Berkman N, McCormack LA, et al. The impact of a CAHPS report on employee knowledge, beliefs, and decisions. Med Care Res Rev. 2002;59:104–116
16. West SG, Duan N, Pequegnat W, et al. Alternatives to the randomized controlled trial. Am J Public Health. 2008;98:1359–1366
17. Bradlow ET. Encouragement designs: an approach to self-selected samples in an experimental design. Market Lett. 1998;9:383–391
18. Hirano K, Imbens GW, Rubin DB, et al. Causal inference in encouragement designs with covariates. Biostatistics. 2000;1:69–88
19. Holland PW. Causal inference, path analysis, and recursive structural equations models. Sociol Methodol. 1988;18:449–484
20. Marshall M, McLoughlin V. How do patients use information on providers? BMJ. 2010;341:1255–1257
21. Imbens GW, Angrist JD. Identification and estimation of local average treatment effects. Econometrica. 1994;62:467–475
22. Imbens GW, Rubin DB. Bayesian inference for causal effects in randomized experiments with noncompliance. Ann Stat. 1997;25:305–327
23. Zhou XH, Li SM. ITT analysis of randomized encouragement design studies with missing data. Stat Med. 2006;25:2737–2761
24. Subramanian U, Fihn SD, Weinberger M. A controlled trial of including symptom data in computer-based care suggestions for managing patients with chronic heart failure. Am J Med. 2004;116:375–384
25. Harris-Kojetin LD, McCormack LA, Jael EF, et al. Creating more effective health plan quality reports for consumers: Lessons learned from a synthesis of qualitative testing. Health Serv Res. 2001;36:447–476
26. Hibbard JH, Greene J, Daniel D. What is quality anyway? Performance reports that clearly communicate to consumers the meaning of quality of care. Med Care Res Rev. 2010;67:275–293
27. Robinowitz DL, Dudley RA. Public reporting of provider performance: can its impact be made greater? Annu Rev Public Health. 2006;27:517–536
28. Uhrig JD, Harris-Kojetin LD, Bann C, et al. Do content and format affect older consumers’ use of comparative information in a Medicare health plan choice? Results from a controlled experiment. Med Care Res Rev. 2006;63:701–718
29. American Institutes for Research. How to Present Missing Data Clearly. Princeton, NJ: The Robert Wood Johnson Foundation; 2012
30. American Institutes for Research. Three Reasons for Missing Data. Princeton, NJ: The Robert Wood Johnson Foundation; 2012
31. Elliott MN, Zaslavsky AM, Cleary PD. Are finite population corrections appropriate when profiling institutions? Health Serv Outcomes Res Methodol. 2006;6:153–156
32. Hibbard JH, Slovic P, Peters E, et al. Strategies for reporting health plan performance information to consumers: evidence from controlled studies. Health Serv Res. 2002;37:291–313
33. Marshall MN, Romano PS, Davies HTO. How do we maximize the impact of the public reporting of quality of care? Int J Qual Health Care. 2004;16:I57–I63
34. Schwartz LM, Woloshin S, Birkmeyer JD. How do elderly patients decide where to go for major surgery? Telephone interview survey. BMJ. 2005;331:821–824
35. Sofaer S. Commentary on “what is quality anyway?”. Med Care Res Rev. 2010;67:297–300
36. Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA. 2005;293:1239–1244
37. Shaller D, Kanouse D, Schlesinger M. Meeting Consumers Halfway: Context-Driven Strategies for Engaging Consumers to Use Public Reports on Health Care Providers. Peer-reviewed background paper prepared for the AHRQ National Summit on Public Reporting for Consumers; March 2011
38. Fanjiang G, von Glahn T, Chang H, et al. Providing patients web-based data to inform physician choice: if you build it, will they come? J Gen Intern Med. 2007;22:1463–1466
39. Goldstein E, Elliott MN, Lehrman WG, et al. Racial/ethnic differences in patients’ perceptions of inpatient care using the HCAHPS survey. Med Care Res Rev. 2010;67:74–92
40. Roland M, Elliott M, Lyratzopoulos G, et al. Reliability of patient responses in pay for performance schemes: analysis of National General Practitioner Patient Survey data in England. BMJ. 2009;339:1756–1833
41. Groves R, Peytcheva E. The impact of nonresponse rates on nonresponse bias: a meta-analysis. Public Opin Quart. 2008;72:167–189
42. Elliott MN, Edwards C, Angeles J, et al. Patterns of unit and item nonresponse in the CAHPS Hospital Survey. Health Serv Res. 2005;40:2096–2119
43. Klein DJ, Elliott MN, Haviland AM, et al. Understanding nonresponse to the 2007 Medicare CAHPS survey. Gerontologist. 2011;51:843–855
44. Abraham J, Sick B, Anderson J, et al. Selecting a provider: what factors influence patients’ decision making? J Healthc Manag. 2011;56:99–114
45. Hibbard JH. What can we say about the impact of public reporting? Inconsistent execution yields variable results. Ann Intern Med. 2008;148:160–161