To discover ways that the biomedical research community can foster the public's trust that sustains the research enterprise, we pursued two strategies for identifying lessons and practices from outside health care and biomedical research that can be applied to the biomedical research setting. First, we convened a group of national leaders from sectors outside of academic science and health care that also depend on the public's trust. Second, we reviewed reports in the literature on efforts under way in the health care setting to adapt strategies employed in other sectors to improve the safety of health care. In the rest of this article, we report what the national leaders told us their sectors do to earn the public's trust that is applicable to academic biomedical research institutions, their insights into how academic research institutions should respond to crises that have the potential to diminish the public's trust, and what we learned from our literature review. On the basis of what we learned at the workshop and from the relevant literature, we propose recommendations for fostering a culture of trustworthiness in biomedical research and suggest benchmarks that can be used to measure progress toward that goal.
The public's trust is essential to the biomedical research enterprise. Lack of trust could lead to a number of undesirable research-related outcomes, including a shortage of volunteers for clinical studies, concerns about the validity of published investigational results, increased regulation of research, and decreased public funding. These and other consequences of erosion in the public's trust have led to a growing consensus that the biomedical research community must foster the public's confidence in biomedical research.1 However, it is not always obvious what institutions and researchers should be doing to prove themselves worthy of this trust.
To date, many efforts to build trust have been largely reactive, driven by external oversight and concerns about public reactions, such as those associated with conspicuous acts of fraud2 and deaths of research volunteers.3,4 Federal funding and regulatory agencies have become more compliance oriented, requiring that institutions bolster their educational programs for trainees in responsible conduct of research and examine the practices of institutional review boards (IRBs) charged with protecting human subjects. In the wake of renewed federal oversight, institutions have established more rigorous procedures to ensure compliance with research regulations.5 Concerns about financial conflicts of interest have also prompted federal sponsors to require institutions to implement policies that require disclosure of certain financial relationships of their investigators.6,7 There have also been calls for reform of peer review and other editorial policies, resulting in changes to publication practices.8
These efforts to bolster the public's trust in biomedical research by avoiding misconduct and preventing misbehavior can help enhance the integrity of the research enterprise, but they do not include activities that sectors outside of biomedical research have successfully used to promote a culture of trustworthiness. Other public sectors have experienced crises and have had to work hard to recover the public's trust. For example, the meat industry has struggled with outbreaks of contamination within the food supply; the nuclear power industry has had to deal with the aftermath of Three Mile Island; the airline industry has had to address high-profile accidents; and the judiciary, with the introduction of DNA evidence, faces growing skepticism about objectivity and fairness. The scientific research community can learn from these other sectors and, possibly, avoid the kinds of crises from which they have had to recover.
Insights From Nonmedical Industries: From Compliance to Cultural Change
To identify trust-promoting strategies used successfully in other sectors and to consider their potential application in biomedical research, we conducted a two-day workshop in October 2005. The workshop was planned by members of the TIES (Trust, Integrity, and Ethics in Science) Project working group; we and those listed in this article's acknowledgements are the members of that group. We conducted the workshop so that all of us could meet with national leaders from sectors that, like biomedical research, depend on the public's trust. Because we wanted to hear from a cross-section of such sectors, we invited leaders from the food and aviation industries, which require the public's trust on a daily basis to sell their products; a representative of the nuclear power industry, a sector that had suffered a serious breach in the public's trust in the past; and, to gain the perspective of a government enterprise, a leader from a judicial organization. Everyone invited readily agreed to participate. We also invited two communications experts, one with a long career in the energy industry and another who heads a national communications firm, both of whom specialize in managing crises in the public's trust and confidence. There were 18 participants in all; attendance was limited to the members of the TIES Project working group, the invitees from the nonhealth sector, and a representative of one of the sponsoring organizations. To ensure as much candor as possible during discussions, we told all of the invitees from the nonhealth sector that we would not publish their names or the names of their institutions or organizations in this report.
The workshop was held in Aspen, Colorado, at the Given Institute of the University of Colorado. The workshop invitees were briefed about the aims of the workshop in advance and received semistructured guidance for preparing formal remarks. The guidance instructed the invitees to discuss how their organizations or sectors define trust and what its perceived importance is, strategies they employ for maintaining the public's trust, and approaches they use to restore trust after a breach. In addition to formal presentations and extensive discussion about invitees' relevant experiences in their respective sectors, we presented them with two controversial cases from biomedical research and engaged them in unstructured group discussion about the cases. The cases discussed were the Jesse Gelsinger case, which involved a research participant who died while enrolled in an experimental gene therapy protocol,3 and a lead paint abatement study conducted in low-income housing.9
List 1 presents benchmarks identified during the deliberations and discussions conducted at the workshop. These benchmarks focus on two recurring themes that emerged at the workshop that reflect the priorities of trustworthy organizations: (1) attending to multiple types of relationships and (2) maintaining multiple levels of accountability.
Empirically derived benchmarks of cultural evolution: Relationships and accountability
The relationships our invitees identified as critical to promoting the public's trust consisted both of internal relationships within an organization and external relationships with varied stakeholders, such as local residents, national trade organizations, the public at large, and regulators. When building these relationships, invitees suggested that identifying shared values and organizational commitments was important. The invitees also stressed that communication was best done within each industry in the spirit of “We are all in this together” rather than as a top-down strategy or public relations messaging, creating relationships in the process that were less hierarchical and which promoted buy-in.
Another aspect of relationship building had to do with developing communication practices that identify the industry's relevance to the local community. As one invitee urged, “If the public does not understand why your industry is important and what it does, they have no motivation to trust you.” This underscored agreement among the invitees that communication efforts centered on drawing attention to major accomplishments and awards received (common strategies employed at many academic centers) do very little to convey an institution's actual relevance to the community. They agreed that the point of communication strategies is not to call attention to how special any given institution is but, rather, to how important the entire enterprise is because of the social good it pursues. Invitees also noted that communication ought not be unidirectional, highlighting the need for institutions to have mechanisms in place to listen to their publics and solicit feedback from them. Community-based advisory panels are one such mechanism, and our invitees reported that they were used to greatest effect by the sectors that interacted directly with the public, as many biomedical research institutions have already discovered.10,11
The invitees consistently mentioned the importance of candor in communication. A common trust-promoting strategy suggested by these leaders was to strive to demonstrate openness and honesty about the inherent risks of one's enterprise and to foster transparency about procedures in place to minimize risks and promote safety. They also identified the need for humility and responsiveness in times of public concern so that sincere effort is made to get to the bottom of the causes of incidents. Invitees were adamant about the need for institutional leaders to acknowledge their concerns, fears, and outrage when a disaster occurs. They also stressed the need to release emerging information quickly in times of crisis and maintained that doing so is critical for restoring trust. Three Mile Island was discussed in this context, specifically with regard to how errors had occurred at other nuclear power plants before events at Three Mile Island and how there had been too little communication between organizations at that time to try to avoid subsequent problems. This “code of silence” within the nuclear industry subsequently fueled skepticism about the industry's credibility, producing an iconic image of mistrust.
In addition to using appropriate communication strategies to foster ongoing relationships, invitees discussed how building good relationships takes will and intentionality. Sustaining relationships must not be approached as if the effort were little more than window dressing. Instead, it requires careful planning and oversight. Invitees reported that this same degree of effort must be exerted to build productive relationships within organizations, because internal relationships are just as important in the effort to build trust as relationships with external constituents. As one invitee noted, “More than 90% of infractions reported are reported from within.”
Whereas current norms of accountability in biomedical research are based largely on adherence to the scientific method itself, along with professional norms, peer review, IRB oversight of human subjects research, and disclosure of financial conflicts of interest, the participants from non-health-care sectors stressed the importance of a much more proactive and expansive approach to accountability. A preoccupation with regulatory compliance was conspicuously absent from the comments of our invitees, even though the three commercial industries represented at the workshop are all highly regulated. Instead, all made it clear that statutory and other regulations set a bare minimum for their accountability practices. As one of our invitees observed, “You can follow all the rules and still not get it right.” We also heard how employees from one of the industries care as much about their entire industry's reputation as they do about the reputation of their particular company, just one comment of many that was indicative of an aspirational approach to accountability.
When faced with issues of vital importance to the public's trust—such as product or worker safety—invitees unanimously emphasized the importance of self-regulation over and above that which is required by external mandates. We learned that, even in highly competitive markets, different companies within the same industry declared that such issues are noncompetitive and that they work collectively to establish national and international safety standards and to develop practices and procedures for implementing and monitoring standards.
Setting self-imposed standards independent of government regulations highlighted for us the importance of leadership's involvement in the effort to be accountable. Leaders in the sectors we heard from recognized that industries themselves are in a better position than external agencies to identify workable approaches to accountability. For example, one invitee commented, “If you don't have entities in place to help you achieve excellence and standards, you need to create them.” With the engagement of leadership in these industries, they were able to develop and test standards that made doing the right thing achievable.
Related to the role of leadership on standard setting is a need for standards to be responsive. Invitees discussed the importance of moving quickly and decisively when a need is identified, demonstrating how the need will be addressed, and then presenting data about improvements that occur as a result. They reported how responsiveness on this scale requires feedback mechanisms that are widely used. When referencing significant past failures, one invitee described how “arrogance gave way to humility and responsiveness” as the sector transitioned from a reluctance to look at and discuss mistakes to embracing greater transparency about risks and how they are managed.
Another key feature of responsiveness reported by the invitees was the propensity of organizations to develop mechanisms for prospectively assessing risks and for taking action to ameliorate risks before harm occurred. A key element of being able to be this responsive was taking the necessary steps to empower multiple stakeholders to solve problems. The industry invitees recognized that problems and, therefore, solutions, looked different depending on where one worked in an organization and that a nonhierarchical system of reporting must supplant old hierarchies. For example, we learned that an important shift in the culture of the airline industry occurred when it expanded ownership of flight safety to both flight and ground crews. The end result of this shift is that everyone now assumes some degree of ownership of the decision to release a plane from the gate because it is safe to fly.
Conceptually derived signs of cultural maturity
There is precedent within health care for looking to outside sectors for useful lessons and strategies, including a fairly extensive conceptual literature describing the experiences of high-risk nonmedical industries that have undergone transformation in response to a crisis and adopted a “culture of safety.”12–21 One of the most significant contributions of this literature is the appreciation that adverse events—in this case, medical mistakes—and “near misses” result from system-wide problems. Rather than isolated occurrences that can be blamed on individual negligence, such events are caused by institutional failures of one sort or another. There is little disagreement that safety requires a shift in institutional culture toward more trust and openness.12 Major safety developments are characterized by open, objective reporting systems14 and respect for the legitimacy of others' viewpoints,12 points all reiterated by our invitees. There also is general consensus that adverse events are preceded by a “continuum of cascade effects,”15 some of which may seem trivial.22
Prevention of such events requires an understanding of all the “weak links” in the chain; that understanding is best obtained through a root-cause analysis, a tool that was championed by some of our invitees. Although some health care organizations have begun to make great strides toward eliminating their weak links to create a culture of safety,21 the majority of efforts in health care (e.g., promoting patient safety) are still at an early stage of development compared with those in other sectors, such as the aviation and oil and gas industries. Commentators argue12,17 that a barrier to creating organizational change within health care stems from excessive reliance on professional expertise and a punitive environment whose focus on individual responsibility leads to secrecy. These same characteristics are also hallmarks of the culture of biomedical research, and they help explain why “near misses” within biomedical research have rarely been subjected to root-cause analysis. So, there is much progress still to be made in the effort to change the culture of biomedical research to promote the public's trust and confidence in its work.
Hudson17 presents a model of cultural maturity for the assessment of safety cultures (Figure 1) that we propose be used to review progress to date by the research community to change its culture and practices. According to Hudson's model, the culture of human subjects protections currently is situated somewhere between Hudson's calculative and proactive stages of development. Federal regulations that reflect a commitment to managing and preventing hazards associated with research have been promulgated, and research institutions have systems in place to comply with those regulations (e.g., IRBs). However, there are both perceived and real risks of reporting adverse events and “near misses” in many research institutions. Communication is often neither sufficiently open nor blame-free. This may be particularly true among junior investigators and research staff who are concerned that “blowing the whistle” on senior investigators will have adverse effects on their careers. At the level of individual researchers, many consider the system of human subjects protections as onerous. One can characterize human subjects research as a “culture of compliance” and, thus, one that falls short of a fully mature safety culture.
However, even if the culture of human subjects protections could “mature” to more advanced stages of development, what we learned from our workshop invitees is that fostering the public's trust requires more than a primary or exclusive focus on safety. There is another spectrum of activities, having to do with the responsible conduct of research, that speaks to the integrity of the enterprise itself and that we worry is at or even below “reactive” on Hudson's developmental model. Unlike the domain of human subjects protections, review and sanction of events that call into question the integrity of other components of the research enterprise are usually limited to the most egregious of events, such as data falsification, data fabrication, and plagiarism, which constitute the Office of Research Integrity's definition of research misconduct.23 Although lapses in integrity are more common than is widely believed,22,24 the excessively narrow definition of misconduct, coupled with institutions' mainly focusing on regulatory compliance, has meant that behaviors that promote or threaten integrity on more everyday scales, such as mentoring and teamwork, as well as activities that seek to identify and reduce errors in data entry, labeling, and image reproductions, have had inadequate attention. Thus, if accountability practices in biomedical research are to reach more advanced stages, such that they more closely resemble the accountability practices described by some of our invitees, the biomedical research community must find approaches that maximize identification and correction of questionable practices that undermine the integrity of the research enterprise. It will be apparent when this “maturation” is under way because the focus will have shifted from isolated cases of blaming individuals for misconduct to continuous efforts aimed at improving the institutional climate to foster best practices.
Recommendations for Advancing the Culture of Scientific Research
Drawing on both the empirical and conceptual considerations above, we created Table 1 to present specific recommendations from the non-health-care participants, in the domains of relationships and accountability, for promoting and restoring trust in biomedical research. Institutions that rely on compliance activities, along with university-based practices that respond to instances of pathology, such as review of allegations of data falsification, are “stuck” well down on Hudson's developmental scale. Ways must therefore be found to overcome the inertia that has prevented the broader research community from making the developmental progress that other sectors dependent on the public's trust have achieved.
The benchmarks of cultural change derived from our workshop participants (List 1) can be used to assess progress along this path. Although a culture of safety is necessary for promoting trust in biomedical research, it is not sufficient for enhancing the trustworthiness of the entire research enterprise. This highlights the need for additional work to determine what more must be done to move biomedical research from a culture that today culminates mostly in compliance to a more mature and proactive culture that culminates in trustworthiness.
Transitioning the research community to this more mature culture will prove difficult. Individual researchers and research institutions have historically focused primarily on their own reputations. It will be a challenging, but necessary, shift for this community to consider a mutual investment in the research enterprise as a whole that will be comparable with the mutual investments other sectors have made. We offer a few suggestions for how such a shift might be initiated through relationship building and accountability practices, both to build and restore trust.
Institutions' engagement with the public
We learned at the workshop that public education sessions have served the court system well in building the public's trust. We think such sessions could also be used to great effect by biomedical research institutions and organizations. Just as members of the public might struggle to initially understand the role of the adversarial system in the pursuit of justice at trial, so too might they struggle with understanding the ubiquitous financial ties between academia and the pharmaceutical industry. Research institutions need to proactively justify to their local communities why such ties are necessary at times and how institutions are able to discern the difference between manageable and inappropriate financial conflicts of interest. Even though evidence suggests that appropriate disclosure in informed consent forms can help mitigate these conflicts for prospective research participants,25 research institutions should not blindly trust that the public at large will be able to discriminate between legitimate and illicit financial conflicts of interest when they are exposed to information about potentially inflammatory financial ties by their local media.
With a credible community-based advisory board in place, institutional leaders could work with those boards to develop a public education curriculum that addresses a broad range of important topics about modern biomedical research and use the board members' connections to local organizations to offer the sessions in settings and ways that will be accessible to their constituents. The community engagement cores in research institutions that have received Clinical and Translational Science Awards from the National Institutes of Health are obvious mechanisms through which to convene such boards and plan such curricula.
Relationships can be used with equal effect to help institutions and research organizations restore trust after a crisis as well. For example, if a strong advisory board is in place and if institutional leaders truly value the board's members and their dedication to the local community's well-being, then the institution will be better positioned to understand the local concerns of the public in a time of a crisis in confidence and empathize with those concerns. Board members can also help identify what information needs to be conveyed and how it is best conveyed and, thereby, possibly avoid missteps in efforts to restore trust in the institution or organization.
Individual research teams
Individual research teams, as well as the organizations where they conduct their investigations, can also implement measures to increase accountability. Just as is the case regarding the role of relationships, these measures can be preventive and, thereby, help to build trust, and they can be implemented after a crisis to help restore trust. On the prevention side, measures can be as simple as team members' working together to identify ways that research data could be tainted, either through mishandling or misidentification, and then implementing procedures to guard against those occurrences. Such efforts could have the added bonus of assessing whether the culture of the team permits the nonhierarchical and blame-free reporting that a culture of true accountability requires.
Research institutions can help restore trust by using root-cause analysis to better understand why research lapses occur and be better positioned to avoid similar lapses in the future. For example, to the extent that considerations of confidentiality would permit it, an institution's research misconduct committee could conduct root-cause analyses on closed cases of a variety of alleged instances of research misconduct. Once an analysis is completed, the institution would then be able to work with various parties to help identify and implement solutions to the problems the analysis uncovered. Such exercises should help move the accountability culture in an institution from one that waits for misconduct to occur and then punishes the misconduct to one that prevents it instead.
If fostering the public's trust and confidence truly is a maturation process as Hudson's model suggests, there can be no “quick fix” for the growing crisis of public confidence in research. Rather, as other sectors have demonstrated, transforming the culture of research is an evolutionary process that involves and affects all people at all levels of organizations and institutions. No one should underestimate the difficulty of pursuing the transformations required to create a pervasive culture of trustworthiness in biomedical research. It will require major changes in practices and behaviors from individual scientists and from investigative teams, as well as from research institutions and their leaders. All aspects of the research community will need to create both greater openness and greater vigilance while simultaneously reaching out to and working with communities in multiple new ways. We hope that the benchmarks and recommendations identified at our workshop and the examples drawn from them will help advance the needed changes and, thereby, promote greater trustworthiness in the biomedical research enterprise.
The authors would like to thank the anonymous reviewers and editorial staff members for their suggested revisions to the manuscript. Funding for the workshop was received from the Office of Research Integrity (HHSP233200500943P) and the Colorado Health Foundation. Other members of the TIES (Trust, Integrity, and Ethics in Science) working group who participated in the workshop and helped summarize the workshop findings are Wylie Burke, MD, PhD, Rick Carlson, JD, Marilyn Coors, PhD, Alison Lakin, RN, PhD, Debra Lappin, JD, Arnold Levinson, PhD, and Jeremy Sugarman, MD, MPH, MA.
1Association of American Medical Colleges. Clinical Research: A Reaffirmation of Trust Between Medical Science and the Public, Proclamation and Pledge of Academic, Scientific, and Patient Health Organizations. Available at: (http://www.aamc.org/newsroom/pressrel/2000/000608b.htm). Accessed February 2, 2009.
5Phillips DF. IRBs search for answers and support during a time of institutional change. JAMA. 2000;283:729–730.
6Public Health Service Policies on Research Misconduct; Final Rule 42 CFR Part 50 and 93. U.S. Department of Health and Human Services, Federal Register, May 17, 2005.
7Schulman KA, Seils DM, Timbie JW, et al. A national survey of provisions in clinical-trial agreements between medical schools and industry sponsors. N Engl J Med. 2002;347:1335–1341.
8Kennedy D. Responding to fraud. Science. 2006;314:1353.
9Glantz LH. Nontherapeutic research with children: Grimes v Kennedy Krieger Institute. Am J Public Health. 2002;92:1070–1073.
11U.S. Department of Housing and Urban Development. Office of University Partnerships Web site. Available at: (http://www.oup.org). Accessed December 18, 2008.
12Carroll JS, Rudolph JW, Hatakenaka S. Lessons learned from non-medical industries: Root cause analysis as culture change at a chemical plant. Qual Saf Health Care. 2002;11:266–269.
13Dubreuil GH, Lochard J, Girard P, et al. Chernobyl post-accident management: The ETHOS Project. Health Phys. 1999;77:361–372.
14Smart K. Credible investigation of air accidents. J Hazard Mater. 2004;111:111–114.
15Barach P, Small SD. Reporting and preventing medical mishaps: Lessons from non-medical near miss reporting systems. BMJ. 2000;320:759–763.
16Grogan EL, Stiles RA, France DJ, et al. The impact of aviation-based teamwork training on the attitudes of health care professionals. J Am Coll Surg. 2004;199:843–848.
17Hudson P. Applying the lessons of high risk industries to health care. Qual Saf Health Care. 2003;12(suppl 1):i7–i12.
18Ilan R, Fowler R. Brief history of patient safety culture and science. J Crit Care. 2005;20:2–5.
19Nolan TW. System changes to improve patient safety. BMJ. 2000;320:771–773.
20Ruchlin HS, Dubbs NL, Callahan MA. The role of leadership in instilling a culture of safety: Lessons from the literature. J Healthc Manag. 2004;49:47–58.
21Berenholtz SM, Pronovost PJ. Monitoring patient safety. Crit Care Clin. 2007;23:659–673.
22Martinson BC, Anderson MS, de Vries R. Scientists behaving badly. Nature. 2005;435:737–738.
24Titus SL, Wells JA, Rhoades LJ. Commentary: Repairing research integrity. Nature. 2008;453:980–982.
25Weinfurt KP, Hall MA, Dinan MA, et al. Effects of disclosing financial interests on attitudes toward clinical research. J Gen Intern Med. 2008;23:860–866.