Health research policy is changing. Governments and funding agencies increasingly seek to enhance the relevance and quality of research results, increase the public’s engagement with the research enterprise, and create “health research systems” that are better able to advance science along with health and wealth.1 For the academic medicine community, these developments make it more difficult to assign incentives and value to faculty conducting basic research and to ensure that findings are appropriately translated into improved health products and broad social and economic benefits.2
Substantial investment in biomedical research by members of the public through taxation supports basic discovery science.3,4 Governments have long justified this investment because of its potential to support economic and social prosperity.5 However, the ways in which biomedical research can contribute to tangible improvements in health, along with broader social and economic returns, have received increased attention in recent years.6
Today, the traditional biomedical research enterprise faces three converging changes. First, research funders increasingly use impact frameworks and assessment processes to analyze the return on investment of public monies.7 Second, governments are placing heavier emphasis on the ultimate social and economic outcomes of these investments, such as increased employment or decreased social isolation.8 For example, the United Kingdom’s Research Excellence Framework now places a 20% weight on the economic, social, or cultural outcomes of research that are felt beyond the halls of academia.9 Third, policy makers are calling for enhanced public engagement so that they can better understand and prioritize the needs of the public, the end users and ultimate beneficiaries of health research.10
For the academic medicine community, these changes have added to the challenge of sustaining the tripartite mission of research, patient care, and education11–13 because the goals of these reforms may not align with the mission of academic medicine. For example, is academic medicine’s commitment to excellence in research and education sustainable alongside the push for increased economic outcomes? Are increased economic outcomes and more consumer-focused results congruent?14 Indeed, concern about the impact of this potential misalignment seems to underpin some researchers’ resistance to policy reforms.15
As governments and funding agencies design and implement initiatives to maximize the outcomes and return on investment of health research,9,16,17 the academic medicine community must understand how different stakeholders value the various outcomes of basic biomedical research. To date, most studies have focused on health services and clinical research and have used qualitative methods.10,15 Thus, to quantify key stakeholders’ preferences for the different outcomes of basic biomedical research and to identify possible misalignments in priorities among stakeholders, we conducted a stated preference discrete choice experiment across two groups in Canada. Specifically, we explored the preferences of basic biomedical researchers, who—through peer review—assess research outcomes and set research priorities and are themselves important users of the immediate outputs of research (e.g., knowledge, hypotheses, research tools), and laypeople, who are the payers and end beneficiaries of research.
Sample and data collection
In autumn 2010, we administered a cross-sectional national survey to compare the stated preferences of academic biomedical researchers with those of lay members of the public in Canada. We recruited all basic biomedical researchers (principal investigators and doctoral or postdoctoral trainees) funded by the Canadian Institutes of Health Research in fiscal year 2009–2010 for whom we could identify a publicly available Canadian e-mail and/or postal address. Following Dillman’s tailored design method for mixed-mode surveys, we contacted participants up to five times (four times to complete an online version of the survey; once to complete a paper-based version instead).18 As a token of our thanks for participating, we entered respondents who completed the questionnaire into a draw for an iPad.
We recruited lay members of the public (whom we refer to in this report as citizens) through an Internet survey panel (i.e., a standing panel of respondents) provided by Research Now, which hosts such online panels to support market and academic research.19 To achieve a representative sample, Research Now recruited panelists by age, gender, and region of residence, in accordance with 2006 Statistics Canada data.20 The survey was offered in English and French in Québec and in English in the rest of Canada. To ensure panel integrity and to recognize the time that the panelists invested, Research Now provided an incentive (e.g., sweepstakes, prize, or cash, in accordance with respondents’ preferences) to eligible panelists who completed the questionnaire.
Choice experiments, like ours, elicit individual respondents’ preferences by presenting alternatives (each described by a number of attributes) and asking respondents to select the option they most prefer. The underlying assumption in choice experiments is that respondents derive utility from the attributes of an alternative,21 not from the alternative per se, and therefore weigh the attributes against one another to maximize their utility. The merits of choice experiments have led to a growing number of applications of this method in health economics and health policy in recent years.22
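The attribute-based utility assumption above can be illustrated with a minimal sketch. The attribute names and weights below are hypothetical, for illustration only (our actual attributes and estimates appear in the Results); the sketch simply shows how, under a logit specification, choice probabilities over two funding proposals follow from weighted sums of their attribute levels.

```python
import math

# Minimal sketch of the random utility assumption behind a choice
# experiment. Attribute names and weights are hypothetical.

def utility(attributes, weights):
    """Deterministic utility: a weighted sum of attribute levels."""
    return sum(weights[name] * level for name, level in attributes.items())

def choice_probabilities(alternatives, weights):
    """Logit choice probabilities over a set of alternatives."""
    exp_utilities = [math.exp(utility(a, weights)) for a in alternatives]
    total = sum(exp_utilities)
    return [e / total for e in exp_utilities]

# Hypothetical weights: papers and trainees add utility; cost reduces it.
weights = {"papers": 0.8, "trainees": 0.6, "cost_millions": -0.05}
team_a = {"papers": 1, "trainees": 1, "cost_millions": 15}  # produced papers
team_b = {"papers": 0, "trainees": 1, "cost_millions": 10}  # no papers
probs = choice_probabilities([team_a, team_b], weights)
```

Note that the weighted attribute sums, not the labels “team A” or “team B,” drive the predicted choice; this is the sense in which respondents judge attributes rather than alternatives per se.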
We developed our questionnaire from a review of the academic and policy literature on research evaluation.23–28 Our questionnaire included items to build and assess knowledge among citizen respondents and to measure selected attitudes and demographics among both citizen and researcher respondents. For the choice experiment, we asked respondents to decide whether to grant a five-year renewal of funding to one of two large research teams, each of which had already received five years of funding (because these were renewals, many of each team’s initial outcomes could be assessed). To reduce the cognitive burden of the questionnaire and in recognition of the involvement of citizens, we specified that all proposals were of high scientific quality.29,30 Finally, we informed respondents that the funding agency had not identified which outcomes were more important and that they (the respondents) were to select their own preferred outcomes.
Specifically, we asked respondents: “Imagine that you have been asked to help review and assess the impacts [i.e., outcomes] of academic biomedical research teams that were funded by a large public agency in Canada. You—the reviewer—must choose which team should be funded again.” We emphasized five key elements of the choice scenario in a text box to ensure that the respondents understood these points: “You are helping to review academic biomedical research teams”; “Research teams have been funded once and now report their impacts”; “Research teams are applying to continue their research”; “All research teams are doing good science and can be funded”; and “The two teams do similar kinds of science (similar topics or methods).”
We adapted our lists of attributes and the levels of those attributes from the impact framework developed by the Canadian Academy of Health Sciences (see Appendix 1).23 We modified the questionnaire after conducting cognitive pretest interviews with seven researchers (key informants known to our research team members) and nine citizen respondents (recruited through online advertisements); these interviews helped ensure face and content validity.
We used a fractional factorial D-optimal design to reduce the total number of possible combinations of attributes to 18 pairwise choices, plus an opt-out option (fund neither). We further divided this set of 18 choices into three blocks of six, so each respondent answered only six questions; we randomly assigned the blocks to respondents.31 To assess the stability of respondents’ preferences, we also included consistency tests (two repeated questions for citizens and one dominance test for researchers). We used the built-in experimental design capabilities of SAS 9.1.3 (SAS Institute, Cary, North Carolina) for all our design work.31
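The blocking step can be sketched as follows. This is an illustration only; the D-optimal design itself was generated in SAS, as noted above, and the task identifiers below are placeholders.

```python
import random

# Illustrative sketch of blocking: 18 pairwise choice tasks from the
# reduced design are split into three blocks of six, and each respondent
# is randomly assigned one block. Task identifiers are placeholders.

CHOICE_TASKS = list(range(1, 19))                          # 18 pairwise tasks
BLOCKS = [CHOICE_TASKS[i:i + 6] for i in range(0, 18, 6)]  # 3 blocks of 6

def assign_block(rng=random):
    """Randomly assign a respondent one block of six choice tasks."""
    return rng.choice(BLOCKS)
```

Blocking reduces each respondent's burden from 18 questions to 6 while the full design is still covered across respondents.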
Model estimations and data analysis
For our estimations, we used nested logit models.22 We effects-coded all categorical variables. We used differences in predicted probabilities to infer the relative importance of the attributes within each group.32 Because willingness-to-pay values are scale free, we could compare them between the two groups; we used t statistics to assess these differences.32 We then tested for differences in underlying preferences and estimated parameters between the two groups with a likelihood ratio test.33 We performed all analyses using Stata 12 (StataCorp, College Station, Texas).
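Two of the steps above, effects coding and the willingness-to-pay calculation, can be sketched as follows. All coefficient values are hypothetical, not our estimates; the sketch shows why the scale parameter cancels out of the willingness-to-pay ratio.

```python
# Hypothetical sketch of two analysis steps: effects coding of a
# categorical attribute and the willingness-to-pay (WTP) calculation.
# Coefficient values are illustrative, not the study's estimates.

# Effects coding for a three-level attribute: the omitted reference level
# is coded -1 on every column, so level effects sum to zero across levels.
EFFECTS_CODES = {
    "very_influential_papers": (1, 0),
    "influential_papers": (0, 1),
    "no_papers": (-1, -1),  # reference level
}

def willingness_to_pay(beta_attribute, beta_cost):
    """WTP = -beta_attribute / beta_cost.

    The logit scale parameter multiplies both coefficients, so it cancels
    in the ratio; this is why WTP values can be compared across groups
    estimated on different scales.
    """
    return -beta_attribute / beta_cost

# With a hypothetical attribute coefficient of 0.725 and a cost
# coefficient of -0.0001 per $1,000 of funding, WTP is about 7,250
# (i.e., roughly $7.25 million of extra funding for this attribute).
wtp = willingness_to_pay(0.725, -0.0001)
```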
Of 3,260 potential researchers, 1,749 completed our questionnaire (response rate 53.65%); after excluding ineligible respondents (i.e., nonbasic biomedical researchers) and those with missing data, our final sample included 1,602 researchers. We received completed questionnaires from 1,002 citizens (as is typical for Internet surveys that generate samples representing the population on key demographic criteria, no response rate was available).34 Although our sample was representative of all Canadians by region, age, and gender, citizen respondents were significantly better educated than the average Canadian.20 Further, although the mean income of our citizen respondents did not differ from the national average, our sample significantly underrepresented those who make either less than $20,000 or more than $150,000 (results not shown). See Supplemental Digital Appendix 1 (https://links.lww.com/ACADMED/A118) for additional demographics data.
A minority of researchers (45/1,602; 2.8%) and citizens (213/1,002; 21.3%) failed our consistency tests. In a logit regression, no researcher characteristics were significantly associated with the probability of passing our consistency tests; for citizens, only age had a significant effect (P = .012; results not shown). We used only data from respondents who passed our consistency tests in our estimations (i.e., 1,557 researchers and 789 citizens).
Nested logit estimation results and predicted probabilities
Few individual characteristics had a statistically significant effect on the probability that a respondent would grant renewal funding (see Table 1). On average, and irrespective of the attributes of the proposal, both groups more often granted the renewal funding than opted out of doing so (see Table 1).
In Table 2, we present the differences, by attribute, in the predicted probabilities of respondents granting renewal funding when comparing an alternative scenario with a baseline scenario. We also present the percentage change in the probability of respondents granting renewal funding attributable to each attribute. We present the probability of funding for each of the four baselines (two each for citizens and researchers); we compared all alternative scenarios with the respective baseline, where each alternative scenario differs only in the level of the attribute indicated. Citizens were very likely to fund the baseline scenario, so each attribute produced only small increases in their probability of granting funding. For researchers, baseline probabilities were small relative to the increases that each attribute produced in the probability of granting renewal funding. In addition, the changes in probability resulting from increased cost were small.
For researchers, the attribute most likely to predict granting renewal funding was influential scientific papers (relative to no papers), followed by trainees. For citizens, that attribute was outstanding trainees, followed by papers. For both groups, the next most predictive attribute was patents licensed by industry (relative to no patents or patents not licensed by industry). Research targeting economic priorities (relative to health priorities) decreased the probability of granting renewal funding in both groups. Unlike researchers, citizens placed no significant value on proposals targeting scientific priorities (i.e., the parameter was statistically insignificant). Neither group distinguished between patents not licensed by industry and no patents at all. Using likelihood ratio tests (with a model including gender as the only demographic in common), we rejected the equality of all the estimated parameters between the two groups.
Willingness-to-pay values indicate the extra dollars by which a respondent is willing to increase or decrease funding for each attribute included in the proposal. We found most willingness-to-pay values to be strongly significant (see Table 3). Willingness-to-pay values for the attributes of no papers and no trainees demonstrated a consistent pattern: respondents were willing to reduce funding for proposals at reduced attribute levels (no papers, no trainees) by more than they were willing to increase funding for proposals with outstanding attribute levels (very influential papers, outstanding trainees). For example, they were willing to reduce funding by $8,195,000 for a proposal that resulted in no papers, compared with being willing to increase funding by $7,253,000 for a proposal that resulted in very influential papers.
Comparing willingness-to-pay values for researchers and citizens, t tests indicated some statistically significant differences. In particular, researchers valued very influential papers more than citizens did; they also disvalued the absence of papers more than citizens did. Similarly, citizens valued patents licensed by industry more than researchers did, and they also disvalued the attribute of no patents more than researchers did. Another difference was the value researchers gave to proposals targeting scientific priorities; for citizens, this attribute was statistically insignificant. In contrast, researchers and citizens equally disvalued proposals for research targeted at economic priorities. Finally, both groups had equal willingness-to-pay values for trainees, with no statistically significant differences between researchers and citizens for any of the three levels of the attribute.
Our findings suggest that the preferences of citizens and researchers for health research outcomes are fundamentally aligned. First, both researchers and citizens prioritized high-quality scientific outcomes over all others. Further, in penalizing the absence of traditional scientific outcomes (no papers, no trainees) more heavily than they rewarded the presence of their high-quality counterparts (very influential scientific papers, outstanding trainees), respondents signaled that they consider these outputs essential to biomedical research. Second, both groups valued patents licensed by industry, implying a common assessment of the value of technology transfer (i.e., efforts to move discoveries from academia to health products and services through commercialization). Third, both groups prioritized research targeted at health priorities and disvalued research targeted at economic priorities, the latter serving as an imperfect proxy for economic outcomes, which are unlikely to be realized within the time frames of the funded research teams.
Despite these similarities, we found differences in the preferences, and in the strength of preferences, between researchers and citizens. In particular, each group prioritized the attributes of traditional scientific outcomes in a different order. Similarly, whereas researchers valued proposals targeting scientific priorities, citizens placed no value on this attribute (though they did not disvalue it). Finally, citizens valued patents licensed by industry more than researchers did, as evidenced by their greater willingness both to pay for this attribute and to penalize its absence.
Our novel approach substantially advances our understanding of the attitudes of the lay public and of researchers toward the outcomes of basic biomedical research by identifying the order, strength, and differences in their preferences. Our results confirm decades of public opinion research showing that citizens value basic biomedical research,35 while demonstrating that, for both groups, traditional return-on-investment outcomes take priority over other outcomes. In showing that researchers and citizens equally prioritized efforts to use patents to commercialize academic research, our findings add perspective both to researchers’ concerns about the potential negative consequences of patents on the research enterprise36 and to citizens’ reduced trust in private and commercial science.37 Our findings regarding a shared prioritization of research targeted at health outcomes highlight an unexpected alignment between the groups regarding future strategic directions for the research enterprise.38 Our results showing that neither group highly valued research targeted at economic outcomes suggest a lack of interest in both communities in the economic returns of medical research, a perspective that may accurately reflect the research enterprise’s economic potential39 even as it departs from current policy priorities.9,16,17 Finally, despite the fundamental alignment in preferences between researchers and citizens, the remaining differences in the order and strength of preferences suggest that the public’s future engagement in setting the research agenda is likely to lead to some changes in priorities.34
We acknowledge a few potential limitations to our study. First, respondents’ high willingness-to-pay values could result from the small weight that respondents assigned to the cost attribute. Authors of past studies have identified nonattendance to the cost attribute as a reason for such findings,40 suggesting that the monetary values presented in choice experiments might be too small to influence individuals’ decision making. Second, limitations in the representativeness of our samples warrant caution in generalizing our findings. The 54% researcher response rate indicates that our results reflect the views of a large number of basic biomedical researchers, but it limits our ability to generalize to the population as a whole. In addition, our sample of Canadian citizens was better educated than average and did not fully represent the distribution of household incomes. Further, Internet panelists may not have been fully engaged with our online questionnaire; supporting this concern are our findings that citizens were more likely to grant renewal funding, the moderate magnitude of their preferences, and the relatively large percentage of citizen respondents who failed our stringent consistency tests. Yet less developed preferences are common in social discrete choice experiments,41 and respondents may fail consistency tests because their preferences evolve. Further, any concerns about research using Internet panelists must be balanced against the usefulness of the Internet as a medium for both communicating complex concepts and administering questionnaires.
Our findings have several implications for the academic medicine community. For impact frameworks, our findings highlight the importance of measures of quality (i.e., high-quality papers, outstanding trainees, licensed patents) over quantity (i.e., numbers of papers, trainees, or patents) in scientific outcomes, despite the fact that quantity is easier to measure than quality.42 Further, given the different values that researchers and citizens placed on patents (as a measure of technology transfer) and on research targeted at economic priorities (as a measure of economic outcomes), our findings support published concerns about the widespread use of patents as a metric for economic outcomes.43 That biomedical research can produce health and wealth may be helpful, but it is the health outcomes, not the wealth outcomes, that key stakeholders actively seek. Further, our findings have implications for policies aimed at enhancing the public’s engagement in research priority setting.10 The fundamental alignment of preferences between researchers and citizens may ease researchers’ concerns that increased public engagement will lead to the devaluation of basic science, even as it reminds them that some differences in priorities remain. Finally, by showing that citizens have clear preferences for, and will trade among, different biomedical research outcomes, we have contributed to the literature on social discrete choice experiments, through which respondents make judgments as arbiters of a public good that affects others as well as themselves.41
Funding/Support: This research was funded through an investigator-initiated grant (F.A.M.) from the Canadian Institutes of Health Research (FRN no. 81195).
Other disclosures: None.
Ethical approval: This study was approved by the University of Toronto health sciences research ethics board.
Previous presentations: An earlier version of this report was presented at the International Health Economics Association annual meeting, Toronto, Canada, July 11, 2011.
1. Pang T, Sadana R, Hanney S, Bhutta ZA, Hyder AA, Simon J. Knowledge for better health: A conceptual framework and foundation for health research systems. Bull World Health Organ. 2003;81:815–820
2. Dorsey ER, Thompson JP, Carrasco M, et al. Financing of U.S. biomedical research and new drug approvals across therapeutic areas. PLoS ONE. 2009;4:e7015
3. Dorsey ER, de Roulet J, Thompson JP, et al. Funding of US biomedical research, 2003–2008. JAMA. 2010;303:137–143
4. Kneller R. The importance of new companies for drug discovery: Origins of a decade of new drugs. Nat Rev Drug Discov. 2010;9:867–882
5. Kleinman DL. Politics on the Endless Frontier: Postwar Research Policy in the United States. Durham, NC: Duke University Press; 1995
6. Moses H 3rd, Martin JB. Biomedical research and health advances. N Engl J Med. 2011;364:567–571
7. Kuruvilla S, Mays N, Pleasant A, Walt G. Describing the impact of health research: A research impact framework. BMC Health Serv Res. 2006;6:134
8. Buxton M, Hanney S, Jones T. Estimating the economic value to societies of the impact of health research: A critical review. Bull World Health Organ. 2004;82:733–739
9. Department for Employment and Learning; Higher Education Funding Council for England. Decisions on Assessing Research Impact. 2011. www.ref.ac.uk/pubs/2011-01/. Accessed December 6, 2012
10. Oliver S, Clarke-Jones L, Rees R, et al. Involving consumers in research and development agenda setting for the NHS: Developing an evidence-based approach. Health Technol Assess. 2004;8:1–148, III
11. Clark J, Tugwell P. Does academic medicine matter? PLoS Med. 2006;3:e340
12. Awasthi S, Beardmore J, Clark J, et al; International Campaign to Revitalise Academic Medicine. Five futures for academic medicine. PLoS Med. 2005;2:e207
13. Smith T, Whitchurch C. The future of the tripartite mission: Re-examining the relationship linking universities, medical schools and health systems. Higher Educ Manag Policy. 2002;14:39–52
14. Lehoux P, Williams-Jones B, Miller F, Urbach D, Tailliez S. What leads to better health care innovation? Arguments for an integrated policy-oriented research agenda. J Health Serv Res Policy. 2008;13:251–254
15. Robinson L, Newton J, Dawson P. Professionals and the public: Power or partnership in health research? J Eval Clin Pract. 2012;18:276–282
16. Canadian Institutes of Health Research. International Review Panel Report. www.cihr-irsc.gc.ca/e/43993.html. Accessed December 6, 2012
17. Department of Innovation, Science and Industry, Australian Government. Focusing Australia’s Publicly Funded Research Review: Maximizing the Innovation Dividend: Review Key Findings and Future Directions. http://www.innovation.gov.au/Research/Documents/ReviewAdvicePaper.pdf. Accessed December 6, 2012
18. Dillman DA, Smyth JD, Christian LM. Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method. 3rd ed. Hoboken, NJ: John Wiley & Sons; 2009
19. Research Now. http://www.researchnow.com/en-gb.aspx. Accessed December 6, 2012
20. Statistics Canada. 2006 Census of Population. www12.statcan.ca/census-recensement/2006/index-eng.cfm. Accessed December 6, 2012
21. Lancaster KJ. A new approach to consumer theory. J Polit Econ. 1966;74:132–157
22. Ryan M, Gerard K, Amaya-Amaya M. Using Discrete Choice Experiments to Value Health and Health Care. Dordrecht, Netherlands: Springer; 2008
23. Panel on Return on Investment in Health Research, Canadian Academy of Health Sciences. Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research. http://www.cahs-acss.ca/wp-content/uploads/2011/09/ROI_FullReport.pdf. Accessed December 6, 2012
24. Glaser BE, Bero LA. Attitudes of academic and clinical researchers toward financial ties in research: A systematic review. Sci Eng Ethics. 2005;11:553–573
25. Lipton S, Boyd EA, Bero LA. Conflicts of interest in academic research: Policies, processes, and attitudes. Account Res. 2004;11:83–102
26. Pardo R, Calvo F. Mapping perceptions of science in end-of-century Europe. Sci Commun. 2006;28:3–46
27. Landry R, Amara N, Ouimet M. Determinants of knowledge transfer: Evidence from Canadian university researchers in natural sciences and engineering. J Technol Transf. 2007;32:561–592
28. Landry R, Amara N, Rherrard I. Why are some university researchers more likely to create spin-offs than others? Evidence from Canadian universities. Res Policy. 2006;35:1599–1615
29. Saunders C, Girgis A, Butow P, Crossing S, Penman A. From inclusion to independence—Training consumers to review research. Health Res Policy Syst. 2008;6:3
30. Resnik D. Setting biomedical research priorities: Justice, science, and public participation. Kennedy Inst Ethics J. 2001;11:181–204
31. Kuhfeld WF. Marketing Research Methods in SAS. SAS Technical Paper MR2010. Cary, NC: SAS Institute Inc.; 2010
32. Lancsar E, Louviere J, Flynn T. Several methods to investigate relative attribute impact in stated preference experiments. Soc Sci Med. 2007;64:1738–1753
33. Swait J, Louviere J. The role of the scale parameter in the estimation and comparison of multinomial logit models. J Market Res. 1993;30:305–314
34. Stewart RJ, Caird J, Oliver K, Oliver S. Patients’ and clinicians’ research priorities. Health Expect. 2011;14:439–448
35. Miller JD. Public understanding of, and attitudes toward, scientific research: What we know and what we need to know. Public Understand Sci. 2004;13:273–294
36. Johns MM, Barnes M, Florencio PS. Restoring balance to industry–academia relationships in an era of institutional financial conflicts of interest: Promoting research while maintaining trust. JAMA. 2003;289:741–746
37. Johnston J, Wasunna AA; Hastings Center. Patents, biomedical research, and treatments: Examining concerns, canvassing solutions. Hastings Cent Rep. 2007;37:S1–S36
38. Tallon D, Chard J, Dieppe P. Relation between agendas of the research community and the research consumer. Lancet. 2000;355:2037–2040
39. Macilwain C. Science economics: What science is really worth. Nature. 2010;465:682–684
40. Hensher DA, Rose JM, Greene WH. The implications on willingness to pay of respondents ignoring specific attributes. Transportation. 2005;32:203–222
41. Green C, Gerard K. Exploring the social value of health-care interventions: A stated preference discrete choice experiment. Health Econ. 2009;18:951–976
42. Lindsey D. Using citation counts as a measure of quality in science: Measuring what’s measurable rather than what’s valid. Scientometrics. 1989;15:189–203
43. Mars MM, Bercovitz J, James BE. Toward measuring the social and economic value of university innovation: A survey of the literature. Adv Stud Entrep Innov Econ Growth. 2009;19:1–25