Reynolds and colleagues describe much that is worthy and promising in psychiatry as clinical neuroscience, and their particulars offer little with which to disagree. However, psychiatric trainees learn quickly that what is not said is often as important as what is said. In the spirit of offering what has not been said before, I discuss three themes prominent in the history of psychiatry—stigma, conscience, and science—themes that address the future of psychiatry from a different perspective. Developing an historical trajectory for the field, I will look back, consider the present, and offer a provisional look forward for each of these motifs. Because of the broad sweep of this essay, I base the following historical discussions on eight core sources.1–8
If, in medicine, any historical universals exist, the stigmatizing of the mentally ill would be among them. From ancient texts to the Western present, the “mad” have been feared, misunderstood, excluded, and punished. Before the Western medical subspecialty of psychiatry emerged in the 18th century, now-archaic names for the mentally ill symbolized both the moral judgments and the social exclusions perpetrated upon them: “drunkards,” “idiots,” “lunatics,” “witches,” “maniacs,” and “vagabonds,” to name a few. The mad whom society did not lock up, excommunicate, drown, or execute might have found temperate care in church communities, conscientious monitoring by neighbors, or management by family members. As Western Europe industrialized and colonial America grew from small communities and farms into towns and cities, the management of the mad through local community effort became untenable; the mad came to be lumped with the criminal and deviant and seen as a social, not a local, problem. A variety of social institutions developed around the turn of the 19th century to deal with the mad problem, and the “mentally ill” formed as a population. The mad obtained services in asylums, the idiots and imbeciles were taught in special residential schools, the homeless, orphaned, and impoverished resided in almshouses and orphanages, and the delinquents and adult criminals were served in workhouses, jails, prisons, and, still later, juvenile detention facilities.
A number of factors, however, diluted the status gained by the mad as ill, rather than dangerous and deviant. One was the failure of the institutions for the deviant to develop classification systems that enabled the competent sorting of services and public policies for the sick, the poor, the criminal, and the intellectually impaired. Then, as now, substantial subgroups of these populations qualified for multiple services. Another diluting factor was the waxing and waning of public attitudes toward the deviant, reflected in variable largesse for the institutions that cared for them. The difficulty in sorting services for the mentally ill, the intellectually impaired, and the criminally deviant led to institutional confusion and compounded costs that persist into the present, a fragmentation that produced both unnecessary duplication of services and gaps in services. By the latter half of the 20th century, stigma persisted in public attitudes, but increasing awareness within the psychiatric and patient communities resulted in collaborative efforts to combat stigma.
The field of psychiatry resonates with stigma awareness. In medicine, the development of increasingly effective medications and psychosocial treatments for the mentally ill, an intelligent research infrastructure within the mental health field, and the standardization of diagnosis and treatment regimens have contributed to the empowerment of patients and their families as a growing political force, perhaps for the first time in history. Equally important to antistigma trends has been the proliferation of diverse mental health advocacy communities pushing for better access, better treatments, and better doctors. Many of these new consumers, service users, and psychiatric survivors are critical of prevailing clinical practices and of psychiatry’s coercive powers, and an historically oppressed population is gaining dignity in choice and voice. Psychiatry’s lobbying with advocacy groups has contributed to the dismantling of longstanding political symbols of stigma, most notably the lack of parity—inequities in insurance coverage for mental, relative to physical, disorders.
Despite these positive trends, stigma remains culturally evident in today’s newspapers: shame and the adverse consequences of care-seeking prevent our own medical students from seeking desperately needed mental health services; war veterans with posttraumatic stress disorder are denied Purple Heart commendations; and access to and quality of care in both the public and private sectors are often poor because of fears of unjustified expenses. Further, many American cities grudgingly tolerate a catastrophic problem of homelessness among the mentally ill. The criminal justice and prison systems dispense a disproportionately large share of mental health care and too often simply incarcerate countless mentally ill people. Public attitudes are improving but remain wary.9
Given stigma’s persistence over millennia, claiming the dawn of a stigma-free era would be an historical conceit. While neuroscientific advances as described by Reynolds and colleagues will undoubtedly play a role in reducing stigma, I would argue that stigma requires much more moral, social, educational, and political attention than neurobiological attention. Cures for mental disorders are a long way off. In the meantime, substantive moral, educational, and social questions, if addressed, could have immediate impact in the public sphere. Can our public (and medical) education about mental illness effectively address stigma? Will our religious institutions distinguish sin from disorder, and our justice system distinguish crime from illness? Can the American public sustain the political willpower to humanely care for the mentally ill on par with the physically ill? Will mental health advocates become powerful political forces in sustaining good and humane care? All of these are open questions for an uncertain future.
The history of psychiatry is a rich historical drama, both from the perspective of the historical stories themselves and from the vigorous debate that historians conduct around them. Nonetheless, I have attempted to be noncontentious here.
Historical accounts of the moral conundrums faced by asylum doctors have often led to grim appraisals of psychiatry. I would argue, as I expect many historians of psychiatry would, that the controversy lies in the very nature of dealing with the moral, personal, and social complexity of mental illness. The history of conscience in psychiatry is the history of psychiatry. For brevity’s sake, this section considers crises of conscience, periods in which psychiatry experienced moral summonses to action.
Before the advent of psychiatrists, some courageous individuals dared to question the received opinion of what madness was and what it represented. The Old Testament teaches that madness could be a punishment from God; later, in the Middle Ages, madness was demonic possession or, more generously, represented capitulation to Satanic temptations. In the 16th and early 17th centuries, bold individuals such as Weyer and Jorden challenged the prevailing mores, denying witchery in hysterics and madwomen—they were sick, not evil. These early challenges to madness-as-moral-failing set the stage for the “medicalizing” of madness, and during the Enlightenment, madness was redefined as illness. However, the plight of the Enlightenment mad did not improve much, considering their confinement in towers and dungeons, scathingly exemplified by Hogarth’s engraving A Rake’s Progress (1735).
In the early days of the American colonies, families and the community locally tended the mad, along with the indigent, drunk, orphaned, and criminal; social service institutions like those available today (psychiatric hospitals, homeless shelters, prisons, jails, foster care) had not yet developed. To emerge, these institutions required cities, crowding, mobility, the emergence of medical science, and social activism—all developments of the 1800s. These factors and others created the need for asylums and other social service institutions (almshouses, jails, schools for idiocy), as the public increasingly viewed the disenfranchised deviant as a burden or threat. The asylums and the profession of psychiatry emerged in a symbiotic relationship, each justifying the other in moral and civic-need terms.
In England and the American colonies during the 1700s and early 1800s, well-meaning private citizens offered their homes as humane alternatives to the prisonlike asylums. Custodial and largely nonmedical, the early madhouses in England came under increasing public criticism because, despite good intentions, even the moneyed bourgeoisie were subjected to often atrocious conditions. In contrast, conscientious innovators such as Battie, Tuke, and Brigham, among others, established therapeutic communities based upon principles of charity, regularity, constructive work, and encouragement—i.e., “moral therapy.” Reformers like Pinel in France and Chiarugi in Italy pioneered similar moral reforms.
Reform spread across the Atlantic with the founding of American asylums based on optimistic medical and moral principles. Institutions with names still familiar today—the McLean Asylum, the Hartford Retreat, and Worcester State Hospital, along with institutions for treatment and research like the New York State Psychiatric Institute—resulted from those reforms.
The managers of these institutions and asylums organized in 1844 as the Association of Medical Superintendents of American Institutions for the Insane, which ultimately became the American Psychiatric Association. The early years of their work brought significantly more humane treatment of the ill, and even some therapeutic success, carried along by a national ethos of reform and optimism not just for asylums but also for prisons, schools, and hospitals as solutions for social deviance. Indeed, the enthusiasms of many of the new psychiatrists led them to overstate therapeutic successes, promising and claiming cures when evidence was scant or flawed. Expanding the medical domain, some psychiatrists claimed criminality itself was an illness in need of the emerging psychiatric therapies.
This combination of social optimism and professional hubris set the stage for the backlash following the Progressive Era in the early American 20th century. As the promised cures failed to materialize and hospital rolls swelled, the humanistic ideals of many psychiatrists crumbled when confronted with the custodial realities they had originally decried. Psychiatric theory grew more pessimistic about the prognosis of core diseases, embracing cynical etiological notions such as atavism and degeneration that fed discussions of eugenics. Public and political trust in the mental health mission waned, followed by a corresponding shrinking of mental health budgets, snowballing into indecent treatment and public outrage once again.
The new leaders, postwar psychiatrists such as Grinker and Menninger, encouraged by the successes of dynamic psychiatry in returning shell-shocked WWII soldiers to the front lines, left the hospitals and migrated to outpatient psychoanalytic psychiatry, the medical schools, and the treatment of neuroses. By the mid-1950s, the dawn of the psychopharmacology era, confidence in asylum psychiatry was low and criticism of the mental hospitals was high; the exciting prospect of new chemotherapeutic agents and social psychiatric interventions for serious mental illness opened the possibility of emptying the crowded state hospitals. The progressive administrations of the 1960s instituted well-intended (and now historically familiar) federal reforms to deinstitutionalize the state hospitals, moving the seriously mentally ill back into their communities and families, where medications, rehabilitation programs, housing assistance, and education efforts would reintegrate the patients into society.
On paper, this reintegration sounded wonderful, but other social issues preoccupied the public: the war against communist expansionism in Vietnam, the countercultural antipsychiatry movement that exposed abuses and distrusted psychiatric powers, and other social changes that captured the American imagination (civil rights, feminism, Head Start, Medicare, etc.). Medicine’s successes in prolonging life produced a growing population of debilitated aged whose need for custodial care further burdened the overwhelmed state mental hospitals. Deinstitutionalization foundered because federal encouragements to develop state-based comprehensive community mental health care never bore fruit. The consequences were a large homeless mentally ill population, prisons increasingly filled with untreated psychiatric patients, and enormous variations in mental health services among the states. The growth of for-profit psychiatric hospitals in the late 20th century, and the greedy practice of first draining insurance coverage and then dumping patients, provided a psychiatric contribution to cost-containment worries, adding momentum to the managed-care revolution of the 1990s. The failed deinstitutionalization mission, the exploitation of for-profit care, and the tolerance of a criminalized mentally ill population—all failures of conscience—joined with stigma in contributing to public distrust of, and wary attitudes about, mental health today.
In sum, the history of psychiatrists and psychiatry in the United States could be characterized as a series of crises of conscience, wavering between the extremes of idealism and social reform, on the positive side, and hubris and self-interest, on the negative. Psychiatry, depending on the era and Zeitgeist, could be an engine of humanistic reform as well as an institution in need of humanistic reform.
Psychiatry today faces crises of conscience reminiscent of its history. Psychiatrists and other mental health professionals have allied themselves with patients, families, and other advocates, accomplishing unprecedented successes in support of good mental health care. Public support for mental health research has never been stronger. On the other hand, an extraordinary portion of psychiatric patients occupy jails and prisons. Reductions in the political misuse of mental health care after the Cold War have foundered in the face of Islamic extremism and the involvement of mental health professionals in interrogations and, possibly, torture. Psychiatric hubris and greed have reemerged in current public scandals about influence-peddling in cahoots with the pharmaceutical industry. Regulatory failures at the Food and Drug Administration (FDA) and elsewhere have cast doubt on (particularly, but not only) psychopharmacological clinical trials. Extravagant claims of drug efficacy and tolerability have reappeared, followed by discoveries of longer-term complications (e.g., metabolic syndrome with new-generation antipsychotics, suicide potential with antidepressants) and of limited differential efficacy in comprehensive treatment-comparison studies (e.g., the CATIE, CUtLASS, and STAR*D studies).10–11
Regardless of our growing abilities to explain and understand them, mental disorders will remain complex conditions interwoven with the personal identities, values, attitudes, and belief systems of patients as complex human beings.12 The nature of mental disorder is to afflict the self, and attitudes toward which portions of self should be remade, and which preserved, will remain a matter of debate in homes, communities, courts, the media, professional meetings, and consulting rooms. Ever controversial, psychiatry must nurture the public trust by doing the right thing even when no one is looking, and practice humility, not hubris. While cures and effective prevention may be decades away, optimism about the future must be tempered by immediate attention to the humanistic reforms, steadiness of virtue, and high ideals that have always exemplified the noblest features of psychiatrists and psychiatry.
In Western history, the Enlightenment seems a logical place to locate the beginnings of psychiatric science. Wallace and Gach (2008) identify four fundamental psychiatric orientations that have been operative since the beginnings of psychiatry: humanitarian, psychological, sociocultural, and biological. Early research, which followed the methods of clinicopathological correlation, syndrome description, and the tracking of longitudinal course pioneered by Western Europeans such as Sydenham, Pinel, Esquirol, and Kraepelin, was confined to clinical settings and the reserve energies of intelligent practitioners.
The early psychiatric researchers sought to link brain lesions to psychosis, an unreachable goal with the technology of the time. The failure to identify such lesions by the latter 1800s led to the formulation of the functional mental disorder—defined as a malady that lacked a lesion. Concepts like neurosis and psychopathy soon followed, though even Freud dreamt of biomechanistic explanations for mental disorders, creating psychoanalysis only as a stopgap.
An extraordinary diversity of schools of thought, methodological practices, and research settings has always characterized psychiatric science. From the organic holism of Griesinger to the neuropsychiatry of Westphal, and from neuropathologists such as Nissl, Golgi, Wernicke, and Ramón y Cajal to degeneration- and heredity-based theorists such as Lombroso and Moreau, early scientists attempted to establish etiological theories of mental disorder.
Treatment studies were similarly diverse, though most were simple, uncontrolled trials. Early medications involved substances like alcohol, hashish, and cocaine, while somatic interventions involved water treatments (i.e., hydrotherapy), bloodletting, and creative physical restraints like Benjamin Rush’s “tranquilizing chair.” Later, as the science of chemical synthesis developed, psychiatrists introduced compounds such as bromides and barbital. More violent, but mostly short-lived, treatments like castration, oophorectomy, trephination, insulin coma, and prefrontal lobotomy were tested, promulgated, and eventually spurned.
Early psychiatric science occurred in the asylums, gradually migrating to laboratories, homes, and consulting rooms as outpatient psychiatric practice grew. The university-based research paradigm scientists take for granted today had not yet formed, as medical schools were mostly private institutions run by experienced physicians. These for-profit medical schools of the late 1800s varied widely in the adequacy of their facilities, humane care of patients, and the application of the developing scientific methods of medicine.
In the early 20th century, the often deplorable conditions of private medical schools in the U.S. provoked the then-new American Medical Association to join with the Carnegie Foundation to develop the influential Flexner Report (1910), which situated medical education and medical research in universities, with corresponding scientific principles and practices stipulated by credentialing laws of state governments. The principles of the Flexner Report represent an extraordinary reform in medicine unmatched to this day—a partnership of professional medicine with public interest.
Lacking philosophical discernment and scientific techniques such as the controlled clinical trial, some prominent psychiatrists fell victim to hubris and uncritically advocated invasive treatments (e.g., convulsive therapy and lobotomies) for a wide variety of conditions, much to the detriment of many patients. The federal drug regulation begun in 1906, forerunner of the contemporary FDA, complemented the reforms of the Flexner Report, seeking to standardize the purity and, eventually, the safety and efficacy of drugs used in the U.S. The pharmacological revolution in psychiatry delivered not just new drugs with promising effects, but new scientific methods to more convincingly evaluate them: controlled clinical trials, use of placebos, open scientific publication of data, and rigorous peer review. The discovery of a number of psychotherapeutic drugs, largely by circumstance, in the 1950s led to the psychopharmacology movement recognizable today. The early psychopharmacologists saw the use of medications such as chlorpromazine, imipramine, and lithium not simply as potential therapies, but as exploratory tools in understanding brain function, generating the potential for bioetiological theories of mental disorders. The neuropharmacology of late midcentury became the foundation for today’s basic neuroscience and molecular biology of mental disorders.
By the time of the founding of the National Institute of Mental Health (NIMH) in 1949, the aforementioned psychodynamically oriented psychiatrists (Menninger, Grinker) dominated American psychiatry, and research and treatment focused on psychological and environmental manipulations. However, the early promise of biological psychiatry came to dominate psychiatric research in the latter half of the 20th century, and the dominance of psychodynamic psychiatry in U.S. medical schools waned. By the latter half of the 20th century, pharmaceutical companies had established themselves as a dominant force in developing compounds and performing clinical trials of new drugs, leaving the NIMH and other funding sources to support research into unpatentable medications like lithium, and to finance a more diverse research agenda. Psychiatric researchers had transformed from curious practitioners in the early 20th century into highly regulated academic entrepreneurs in the late 20th century—the latter consumed by fundraising for increasingly expensive research paradigms, as medical schools progressively retreated from funding research with hard money. By the early 21st century, the paradigm of basic-translational-clinical research had been widely established, shortly followed by a template for reform within the NIMH funding philosophy.11
Many of the most important features of psychiatric science today are not unique to psychiatry. On the plus side, psychiatry enjoys a comprehensive and conscientious research infrastructure unparalleled in the world, a system of undergraduate and graduate medical education that attracts the best and brightest from around the globe, and the sympathy and engagement of a public that has consistently recognized the value of medical science. On the minus side, psychiatry, too, is limited by the dominance of a medical-industrial complex, which influences research priorities and clinical nomenclature. Psychiatry suffers from an intellectual fragmentation of the clinical, educational, and research missions of medical schools and their faculties into smaller and smaller superspecialties; and, within research, from the unresolved role of ethical physician-industry relations, the failure to publish negative research findings, the failure to perform long-term safety and efficacy studies, and the dissolution of big-picture clinical understanding into a dizzying myriad of facts of small effect size. This intellectual fragmentation is the 21st-century equivalent of the fragmentation of care systems that crippled 19th- and 20th-century psychiatry, in which multiple institutions (mental health, penal, juvenile justice, schools for the intellectually impaired) maintained mostly independent tracks, competed with one another for funding, and operated with little coordinating oversight. I believe that psychiatric science today is in trouble as a result not only of these concerns but also, especially for psychiatry, of a progressive disengagement from the patients and families whom the scientific endeavor serves. This latter trend, however, does show promising signs of reversal in recent years.
I purposefully saved psychiatric science for last because an understanding of stigma and the psychiatric conscience is important in considering the future of psychiatric science. The crisis in psychiatric science today derives from a number of factors, many of which can be understood through reading the recently released NIMH Strategic Plan.13 The Plan is essential reading for the future of psychiatry, precisely because of the economic power of this institution and its influence on the academic-medical-industrial complex. Acknowledging the limited success of etiological and treatment research in psychiatry, the Plan prioritizes funding within the NIMH into four objectives that emphasize basic science, developmental trajectories, diversity of intervention with people (i.e., personalized medicine), and increased public health applications. These priorities will address some of the current components of the crisis in psychiatric science. Missing, however, are three essential ingredients—coping with information overload, attention to emerging ethical issues, and explicit stakeholder involvement—whose absence, I believe, will undermine the success of this new mission if they remain unaddressed and unfunded.
The first concerns the problem created by the wonderful human capacity to generate information. As the findings of the new NIMH mission roll in, who will tell practitioners what these findings mean? Who will be the interpreters of these interdisciplinary reams of data, translating fragmented technoscience into elegant treatments and policy within the public sphere? Importantly, how will those individuals interested in doing this essential work be funded? One might argue that the NIMH objective of increased public health applications could accomplish this interdisciplinary task, although the strategic plan does not describe how integrated syntheses of data will be assembled—only how they will be disseminated and utilized.
Second, recent experiences with scandals about, and unforeseen sequelae of, new drugs and treatments seem a marginal priority under the new NIMH Plan, as do the ethical disputes inevitably encountered in rolling out new clinical interventions in diverse communities. Perhaps NIMH (or NIH in general) needs its own Ethical, Legal, and Social Implications program as did the Human Genome Project. If NIMH does not address and fund research into these concerns, the current exploding fragmentation of research-into-practice will only expand, and psychiatry and mental health will be crippled by additional burdens of choice for the public (Shall we pay for this? Is this the right way to do things? Do I want this for myself?), increased political heat for the NIMH, and more bleeding surprises on the cutting edges of treatment.
Lastly, one of the NIMH Public Health priorities will be to “improve dialogue to provide a clearer understanding of stakeholders’ needs.”13 Will stakeholders (patients, families, clinicians, administrators, policymakers, industry, health care payers) themselves participate through their own funding mechanisms? What is unique today in the history of psychiatry is the rise of an increasingly empowered patient base with the promise of real social, educational, and political impact. Will NIMH enhance that clout through a de-marginalizing of patients’ voices, leading them beyond recovery to empowerment?
While the growth of psychiatry as clinical neuroscience will undoubtedly contribute to a more positive long-term future for psychiatry, the social sciences, philosophy, and ethics address the issues that face psychiatry today as well as tomorrow. The promissory reach of clinical neuroscience may be shortened, slowed, and overregulated if psychiatry ignores or marginalizes other priorities: the dismantling of stigma through social science research, the funding of integrative reflection and conscience-seeking within mental health, and the boosting of not just stakeholder dialogues but also stakeholder power.