Financial conflicts of interest in academic biomedical research first burst into public consciousness during the 1980s, hand in hand with a series of widely publicized and politically irresistible episodes of scientific misconduct, in some of which faculty investigators were accused of having fabricated or falsified research data about therapeutic products in which they were found to have had substantial financial interests. The single episode most clearly demonstrating a causal relation between financial self-interest and research fraud involved a Harvard Medical School ophthalmologist at the affiliated Massachusetts Eye and Ear Infirmary, who falsified research results in order to promote a worthless eye ointment in which he held a large financial interest that he had liquidated just prior to the revelation of fraud.
The linkage between related financial interests in research and high-profile cases of scientific misconduct was unfortunate because it seems to have imprinted indelibly in the minds of the Congress and the media the belief that such interests inevitably create conflicts, are inherently wrong, and are often accompanied by scientific misrepresentation or misconduct.
It is notable that the primary concerns at that time were focused on research misconduct and the communal threat to public health posed by falsified scientific information, and not so much on the welfare of and relationship of trust with individual research subjects. By 1988 Congress had heard enough and mandated new regulation of federally sponsored research, but it is worth recalling that the mandate proved much easier than its implementation, which would take nearly seven years to accomplish. These tawdry matters were summarized in the 1990 report of the House Committee on Government Operations entitled Are Scientific Misconduct and Conflicts of Interest Hazardous to Our Health?1
Today, financial conflicts of interest in biomedical research again confront academic medicine and the clinical research enterprise more generally, and they recur in the context of growing skepticism about the adequacy of the entire federal system for protecting the welfare of human research subjects. The larger topic has been addressed in well-publicized reports from the General Accounting Office and the Office of the Inspector General, Department of Health and Human Services (DHHS), and has drawn attention from Congress, the White House, the DHHS Secretary, the National Institutes of Health (NIH), the Office for Protection from Research Risks (OPRR) and its successor, the Office for Human Research Protections (OHRP), and the National Bioethics Advisory Commission.
In this climate of widening concern, the signal event that brought financial conflicts of interest to the forefront of public consciousness was, of course, the death of Jesse Gelsinger. This tragedy was soon followed by the revelation of massive under-reporting to the NIH of adverse events in related gene-transfer experiments in settings in which investigators, and often their academic institutions, were either alleged or shown to hold significant financial interests. Adding fuel to the fire have been multiple reports in the scientific literature and the media alleging industry suppression or manipulation of data obtained in clinical trials directed by academic investigators, as well as flagrant misbehavior by investigators conducting clinical trials, in some of which the linkage between the prospect of financial rewards and professional misconduct was persuasive. Violations of federal regulations led the OPRR, the OHRP, and the FDA to suspend all human-subjects research at some of the nation's most prestigious academic medical centers, including Duke University and the University of Colorado, among others, and generated national headlines.
Amidst a growing consensus that the system of federal oversight of human-subjects research must be strengthened, there are calls for new laws and expanded regulations that promise to be prescriptive and burdensome. At the very least, they will lead to greater federal interposition into the conduct of biomedical research on our campuses and restriction of the behaviors and traditional privileges of faculty investigators. Just how burdensome the new rules will be may well depend on whether and how the academic community responds.
At times like these, it becomes too easy to lose perspective about conflicts of interest. We forget the reasons why the public is so generous in its support of biomedical research, and why it endorses federal policy created explicitly to facilitate the translation of research findings into new products to relieve public suffering and disability. Absence of context is dangerous: it can lead to proposed remedies that could damage both the scientific process and the reduction of scientific inventions into public benefit.
However, it is troubling that our nation's medical schools and teaching hospitals, which remain the fount of medical innovation, may have responded inadequately to the profound changes that have transformed the culture of academic medicine since the birth of recombinant DNA technology in the early 1970s and the passage of the Bayh-Dole Act in 1980 (vide infra), and thereby allowed themselves to become vulnerable to corrosive public skepticism.
INDIVIDUAL CONFLICTS OF INTEREST
First, some perspective. Conflicts of interest are ubiquitous and inevitable in academic life, indeed in all professional life. The challenge for academic medicine is not to eradicate them, which is fanciful and would be inimical to public policy goals, but to recognize and manage them sensibly, effectively, and in a manner that can withstand public scrutiny. Successful scientists cannot be totally dispassionate about their work, nor can academic medical researchers be immune from the jumbled and often intense, conflicting, and non-financial motivations that characterize the contemporary academic milieu. These include the desires for faculty advancement; to compete successfully for sponsored research funding; to receive accolades from professional peers; and to alleviate human pain and suffering. The last may be the most enduring motivation of all, the one that first led the researcher to choose an arduous academic career and then to persist in it in spite of its demands, uncertainties, and disappointments. All of these can generate conflicts of interest by creating strong bias toward positive results, and all may influence faculty behavior more powerfully than prospects of financial enrichment.
These kinds of pervasive academic conflicts are of little note to the public but well recognized within the academy, and to manage them, institutional policies and procedures, as well as scientific processes and the scientific method itself, have long been in place. In contrast, financial conflicts tend to be unrecognized unless disclosed, but can be alarming to the public and for this reason pose a special risk to the credibility of academic institutions. Non-financial and financial conflicts that may affect research differ in another very important way: The oversight of the former has been left traditionally to the academy and the professions, but the latter has during the past decade become a shared, and importantly, a contingent responsibility of the academy and the federal government.
Nearly seven years elapsed between congressional mandate of regulations governing financial conflicts of interest in federally funded research and the issuance of the present regulations. These years were marked by intense negotiations between the government and the academic community over the extent to which federal trespass into the traditional academic sanctuary would be tolerated. The lengthy process proved essential to arriving at a mutually acceptable accommodation. In the end, the academy reluctantly had to acknowledge the government's legitimate interest in the issue. But it argued successfully that that interest is bounded, and should be satisfied by receiving the assurance of the awardee institution that research is conducted with integrity and in compliance with federal law and regulations, and that data supporting decisions that affect the health of the public are sound and trustworthy. We must always remember, however, that the boundary is not fixed, but remains contingent on the academy's diligence in living up to the obligations that attend its fiercely defended claim to the privileges of self-governance and academic freedom.
A signal feature of U.S. science policy during the past 50 years has been the relatively light hand of federal oversight of the scientific process, and the deference shown to scientific and academic self-governance, which, in turn, rests on sustained trust in the integrity of faculty and scientists. It has helped that the vast majority of federal funding for basic science has flowed through universities, which have benefited enormously from their public image as independent and disinterested creators and arbiters of knowledge. As the American Association of University Professors stated in 1915:
All true universities, whether public or private, are public trusts designed to advance knowledge by safeguarding the free inquiry of impartial teachers and scholars. Their independence is essential because the university provides knowledge not only to its students, but also to the public agency in need of expert guidance and the general society in need of greater knowledge; and … these latter clients have a stake in disinterested professional opinion, stated without fear or favor, which the institution is morally required to respect.2
Trust in faculty integrity has also been the foundation of university policies governing faculty behavior, which have likewise tended to be light-handed and minimally intrusive.
Today, there is good reason for concern that this idealistic image of academic virtue and the public's willingness to trust in it may both be tottering. Why has this happened? Most would agree that the etiology can be traced to three profoundly catalytic events that together forever altered the culture of the academic medical center and the research university. The first was the invention of recombinant DNA technology, which spawned the biotechnology industry, the scientific agenda of which remains deeply intertwined with academic biomedical research and researchers. The second was the seminal holding of the U.S. Supreme Court in 1980 (Diamond v. Chakrabarty) that a recombinant bacterium created for its ability to digest petroleum products was patentable subject matter, stating in its opinion that “anything under the sun” whose invention involved the hand of man was patentable.3 By sweeping living organisms under the reach of patentability, the Court deemed a vast expanse of biotechnology eligible for intellectual-property protection, an expanse whose boundaries continue to expand and be hotly contested in the U.S. Patent and Trademark Office and the courts. The third event, of course, was the enactment of the Bayh-Dole Act, also in 1980, which gave recipients of federal research funds both the right to patent federally funded research inventions and the obligation to spur the translation of those inventions into public benefit. The intent of the Act plainly was to stimulate use of the patent system to enhance commercialization of federally funded research results. Of note, the preamble to the Act lists, among its other objectives, the promotion of collaboration between commercial concerns and nonprofit organizations.
A measure of the success of the Bayh-Dole Act can be found in the recently released annual report from the Association of University Technology Managers, entitled AUTM Licensing Survey: FY 2000.4 The AUTM reports that in FY 2000 research inventions from 190 universities, teaching hospitals, and independent research institutes, including 93 of the nation's 100 top research universities, led to the introduction of nearly 350 new products; spawned 454 new companies, in 82% of which the institutions held equity interests; reported over 13,000 invention disclosures; filed nearly 6,400 new patent applications and received nearly 3,800 patents; executed 4,362 new licenses, 50% of which were exclusive; and received adjusted gross license income of $1.3 billion. The report notes that in FY 2000 nearly 21,000 licenses and options were active, and that since 1980 about 3,400 new companies had been formed based on a license from an academic institution. The AUTM's FY 1999 report roughly estimated that academic technology transfer had generated $40 billion worth of economic activity, supported about 270,000 jobs, and contributed about $5 billion in federal, state, and local taxes.
The Bayh-Dole Act has been enormously successful in achieving its goals, including stimulation of small businesses, and it can be argued that the public is receiving a robust return on its generous investment of federal research dollars. But the Act has its dark side. While dramatically increasing the flow of revenues into research institutions, and driving the interests of both institutions and faculty toward ever-more-vigorous commercialization of their intellectual property, it may have created a troubling intoxication with and dependency upon “cashing in” on academic biomedical research. The result has been deepening entanglement of research universities with industry and progressive blurring of the boundaries that once reasonably, albeit never perfectly, demarcated academic interests and values from those of the world of commerce.
Some have argued that the ethos of the academy is being replaced by that of business, and that the rewards of increased commercial activity have come at the expense of the very characteristics that make universities uniquely important and socially privileged as the sites of unfettered research and the disinterested, and therefore trustworthy, sources of new knowledge. Others have asked whether academia is busily bartering its very soul for the prospects of material enrichment.
Nowhere in academe have these changes been deeper, or generated greater concerns, than in medicine, which, while continuing to serve as a major source of biotechnologic innovation and the generator of an insatiable public appetite and impatience for ever-more-wondrous treatments and cures, has, in the eyes of some observers, created a veritable pandemic of financial conflicts of interest in faculty and their institutions.
Admittedly, the engagement of academe with commerce is not limited to biomedicine, or, indeed, even to research. This was most recently illustrated at Harvard University, when an eminent professor of law prepared a videotape of one of his standard courses to sell to a new, Web-based virtual law school, thereby causing consternation within the Harvard administration, followed by revision of the university's faculty conflict-of-interest policy. But when faculty or institutional conflicts occur outside medicine, they typically do not generate front-page stories or become featured on the six-o'clock news.
Simply put, the relationship between the public and academic medicine is special and different in kind from any other in academe. It is rooted in trust that is nowhere more evident or fragile than in medical research involving the participation of human subjects, where even the perception that faculty investigators or their institutions may have conflicting financial interests that might compromise their independence and credibility can be demoralizing. This is especially so, and trust is especially violated, when those interests have not been openly disclosed up front. Admittedly, this sets a very high standard for academic medicine, much more stringent than that faced by any other faculty, or indeed, most members of society. But academic medicine and medical research have flourished in this country since World War II in a unique state of grace, in a special status resting on public confidence and trust that demands that a very high standard be met.
Like most federal oversight of the conduct of research, that of financial conflicts of interest has been accomplished by gentle regulation rather than harsh prescription, and managed through the mechanism of institutional assurance. But as noted earlier, such gentleness is neither pre-ordained nor guaranteed. Recall that when financial conflicts were last in the Congressional cross-hairs over a decade ago, the initial draft regulations proposed by the DHHS were roundly denounced by the scientific community, academic medical centers, and universities alike for being overly sweeping and prescriptive, and unacceptably intrusive into matters of faculty behavior traditionally reserved to the academy and the professions. The outcry was so intense that then-Secretary Louis Sullivan ordered the proposal withdrawn and sent the NIH back to the drawing board.
In 1995, the DHHS issued the more deferential regulations in place today, the most prescriptive feature of which is the establishment of a federal threshold to define “significant financial interest.” Under these regulations, responsibility for the disclosure and management of faculty members' financial conflicts is retained entirely within awardee organizations. The organization's obligation to the federal sponsor is limited to assurance that policies are in place and being implemented, and to notification of the sponsor, prior to expending any awarded funds, that a conflicting interest exists (but not the nature of the interest or any details) and that the conflict has been managed, reduced, or eliminated.
In light of the deep and extensive financial entanglements that have come to exist between medical school researchers—and, often, medical schools and their parent universities—and industry, it is fair to ask whether the present federal regulations are still sufficient, whether disclosure alone still suffices for purposes of institutional management and public reassurance, and whether medical schools and teaching hospitals have been diligent in adjusting and enforcing their institutional policies.
INSTITUTIONAL CONFLICTS OF INTEREST
Concerns with institutional financial conflicts of interest are of relatively recent vintage; they did not surface during the 1980s, and they are not addressed in current federal regulations. In contrast to individual conflicting interests, which universities and the federal government have been deliberating for over a decade and tend to understand, and which are addressed by a rather substantial developing literature, institutional conflicting interests in research are very much unexplored territory. The literature is exceedingly sparse; the topic, until very recently, had been little if at all deliberated within the academic community; and there is certainly no consensus about, nor even a workable definition of, these institutional conflicts, let alone how they should be managed. Moreover, this is terra incognita that is likely to be heavily mined, for the topic deals with matters of institutional resource management and investment policy that are central to principles of institutional self-governance and autonomy.
On the other hand, one must acknowledge that the concerns are legitimate, for even the appearance, let alone the reality, of institutional self-interest in research strikes to the heart of institutional credibility and public accountability, the pillars on which our entire system of federal oversight of research integrity rests.
Although systematic data remain fragmentary, certainly some forms of alleged financial conflicts of interest, both individual and institutional, that have recently come to public attention would seem to fall beyond the pale of acceptability. Yet the very idea of “forbidding” faculty, let alone institutional, behavior always raises difficult cultural and policy issues for academia. Universities and their academic medical centers, at least the private institutions, have typically managed their faculties' outside professional interests with circumspection by limiting time spent but not money earned or the nature of the outside professional activity, with two common exceptions. The first has to do with teaching, to which the employer-university may commonly lay claim by forbidding a faculty member from teaching his or her course in another institution. The second dates to the creation of the system of full-time faculty appointment in clinical disciplines and the later establishment of faculty practice plans, and typically involves limitation or outright prohibition of faculty earnings from outside clinical practice.
Given this cultural reluctance to “prohibit,” it is noteworthy that among academic medical institutions, Harvard Medical School may have been the first to circumscribe the historic research prerogatives of its faculty when in 1990 it set hard limits on the amount of financial interest and the kinds of commercial relationships that could be held by a full-time faculty investigator engaged in clinical research. More recently, in response to the Gelsinger tragedy and evidence of rampant under-reporting of adverse events in gene-transfer experiments, the American Society of Gene Therapy (ASGT) went further by declaring off-limits certain kinds of financial arrangements for major participants in such trials. Whether one agrees with the details of these actions is beside the point. What is important is that these entities, working from within the profession, drew lines that prescribe boundaries of acceptability in research involving human subjects.
The question of how best the academic community and the federal government should respond to public concerns about conflicting financial interests, and strengthen confidence in the integrity and credibility of biomedical research, is not easy to answer, but overreaction assuredly is. And challenging the search for the right solutions is the complicated American ecology of technology transfer, to which I now turn.
THE ECOLOGY OF TECHNOLOGY TRANSFER
Public discourse about financial conflicts of interest in biomedical research is badly confounded by deep-seated conflicts between the public's understanding and its expectations of how biomedical research and development are accomplished in our society. The most salient of these “conflicts of public interest” are, first, that research universities are being forced to walk an unprecedentedly fine line between meeting society's expectation that they serve as local and regional engines of economic development and remaining scrupulously uncontaminated by their commercial interactions and motivations. The apocryphal accomplishments of Silicon Valley and Route 128 (outside Boston), and more recently the flourishing centers of biotechnology in the San Francisco Bay area and Montgomery County, Maryland, in each instance attributed to proximity and ease of interaction with major publicly funded research institutions, are widely admired. Indeed, they have become mythic in the minds of local, state, and federal political leaders, all of whom are eager to bring similar bounties of socioeconomic improvement and public recognition to their communities.
Yet at the same time, the public maintains a puritanical intolerance of any tinge of suspicion that the academy's deepening engagements with industry might distort the conduct or color the reporting of research. Nowhere is this contradiction, and the dilemma and exposure it creates, greater than in academic medicine, which finds itself struggling to find a precarious equipoise between Bayh-Dole and by-God.
Second, both the public and congressional supporters of biomedical research are impatient for novel medicinal products, disease preventions, and dramatic cures. But they fail to understand or too easily forget that our capitalistic economy decries any federal initiative that smacks of “industrial policy,” and that the translation of research invention into public benefit is totally dependent on private-sector funds. For very early or novel inventions, where the risks of successful commercial development are large, that translational pathway is often dependent on venture capital or small business investment, the availability of which commonly requires the active participation of the academic inventors. The rule here tends to be simple: no participation, no money. Moreover, in these circumstances limited financial resources favor the structuring of technology-transfer agreements around equity rather than dollars.
It is worth recalling that one of the explicit objectives of the Bayh-Dole Act is to encourage maximum participation of small business firms (which, of course, includes start-ups) in the development of federally funded research inventions. It is noteworthy that according to the most recent AUTM Licensing Survey, in FY 2000 two thirds of the new licenses and options executed by the respondent academic institutions fell into this category.4 I suggest that this feature of the ecology of technology transfer, perhaps more than any other single cause, drives the dramatic increase in medical faculty entrepreneurship, and as well, the increasing tendency of research universities to structure their licensing deals around equity interests rather than royalty payments.
Third, we must remember that faculty are not indentured; to the contrary, they can be—and are—highly mobile. We must therefore guard against creating within our institutions a climate of suspicion and over-regulation that will drive the most innovative and able faculty into industry, or into small not-for-profit organizations that may be created especially for them by start-ups or small businesses. Such organizations are fully eligible for federal research funding, and their proximity to research universities provides them with access to graduate students and postdoctoral fellows. Such an adverse climate and resulting forced migration would serve the best interests of neither academia nor the public.
Proposers of new remedies to deal with financial conflicts of interest in academic biomedical research should take care that in their zeal to attain an idealized state of virtue in which all financial conflicts of interest are eliminated, they not interdict a robust developmental pathway of immense social benefit.
Since these conflicts of public understanding and expectation will not disappear, the academic community and professional societies must join together to enhance public understanding of their profoundly changing relationships with the world of commerce. At the same time, they must embrace and enforce uniformly high standards of individual and organizational behavior that the public will understand and find credible, while working much harder to explain the process by which tax dollars invested in academic biomedical research yield beneficial products. For academia, the problem is perforce a community responsibility, for lapses or transgressions by any single member inevitably result both in communal punishment and shaken public confidence and trust in the entire enterprise. Recent publications5–10 have revealed inconsistencies and inadequacies in university systems of protection of human research subjects, and sounded a clarion call to the academic medical community—indeed to all of academe—to get its messy house in order before it's too late.
How is academia responding? First, during the past two years both the AAMC and the AAU have had task forces working on financial conflicts of interest. The AAU Task Force on Research Accountability, composed of university presidents, chancellors, and other academic leaders, has addressed both individual and institutional conflicts from the campus-wide perspective of the university leadership. The AAMC Task Force on Financial Conflicts of Interest in Clinical Research, with a more circumscribed charge that focuses on research involving human subjects, is intended to be complementary to the AAU effort. The roster of the AAMC task force has prominent representation from all of the stakeholder groups, including academic medicine, law, industry, bioethics, patients, the media, and the public. The task force issued its first report, Protecting Subjects, Preserving Trust, Promoting Progress: Policy and Guidelines for the Oversight of Individual Financial Interests in Human Subjects Research, in December 2001 and issued its final report, Protecting Subjects, Preserving Trust, Promoting Progress II: Policy and Recommendations for the Oversight of Institutional Financial Interests in Human Subjects Research, in October 2002 (both reports will be published in Academic Medicine in February 2003).
Second, the AAMC, working with six sister organizations representing university leadership (the Association of American Universities—AAU—and the National Association of State Universities and Land Grant Colleges—NASULGC), biomedical, behavioral, and social scientists (the Federation of American Societies of Experimental Biology—FASEB—and the Consortium of Social Science Associations—COSSA), and patient advocacy groups (the National Health Council—NHC—and Public Responsibility in Medicine and Research—PRIM&R), played a leadership role in creating a new, independent, nonprofit entity, the Association for the Accreditation of Human Research Protection Programs (AAHRPP). This entity will conduct a voluntary, peer-driven, educationally focused program of accreditation of academic institutions and other organizations that conduct (or oversee) research involving human subjects. It has been created to raise the bar of human research protection programs, to achieve greater consistency across the research community, and, as articulated by Dr. Greg Koski, Director of the OHRP, to change the culture of our community from one of compliance to one of conscience and responsibility.
Third, the editors of 13 major medical journals in the United States, Canada, and abroad recently published an editorial entitled “Sponsorship, Authorship, and Accountability” that decries undue sponsor influence on the design, conduct, or reporting of research.11 The editorial stresses that “authorship means both accountability and independence,” and many of the editors have agreed to ask that each responsible author sign a statement indicating that he or she accepts full responsibility for the conduct of the trial, has had access to the data, and controlled the decision to publish.
So I would argue that, yes, there is evidence that the academic community and the professions have heard the call and have begun to respond on several fronts, all of which can work synergistically to recognize, minimize, and avoid wherever possible financial conflicts of interest in human-subjects research, protect research integrity, enhance the welfare of human research subjects, and preserve the public confidence and trust on which our enterprise depends. Whether these responses will prove sufficient to meet these lofty goals will depend on our institutions, our professional societies, and every one of us.