Joel D. Howell, MD, PhD
What (or who) is the “health care workforce”? What should it be? Where should it be? The answers to these questions have always been grounded in specific social, political, and economic contexts. Ideas about the health care workforce must continue to change as those contexts are transformed. Briefly examining the history of some ideas about the health care workforce may help us not only to embrace the change that is inevitable but also to consider a wider range of possible solutions to the challenges facing today’s workforce.
From the 19th-Century Free Market to Medical School Reform
Andrew Jackson was the seventh U.S. president, serving between 1829 and 1837. During this “Jacksonian era,” the ethos of a free society and a free market reigned supreme. This was most strikingly manifest for health care in the complete absence of licensing laws—anyone could legally practice medicine regardless of knowledge, training, skill, or experience. This model of health care reflected prevailing social and economic realities. Despite the nascent idea of a unified nation (the “United States” had only recently come to be seen as a singular concept), most people lived lives of rural isolation. Going to see a health care provider, or even contacting one, was no easy feat. Transportation was on foot, horseback, or perhaps on a boat if you lived along a river or near the newly completed Erie Canal. Communication traveled no faster than a person could move.
It was in this context that the Knoxville medical practitioner John C. Gunn, who, like most U.S. physicians, had never gone to medical school, chose to dedicate to Andrew Jackson his 1830 handbook Gunn’s Domestic Medicine.1 This wildly popular book, which eventually went through 234 editions, can offer insight into who was seen as making up the health care workforce. In the book, Gunn1 frankly declared, “Every man ought to be his own physician.” He sought to operationalize this principle by explaining how to diagnose and treat a wide range of ailments. His handbook not only described how to use a variety of drugs but also told how to perform an assortment of procedures, such as inserting a male urinary catheter, repairing a “hare lip,” or delivering twins. Gunn’s instructions explicitly assumed that medicine at home could be practiced by anyone with common sense. Thus, in 1830 the “health care workforce” could be seen to include any American who could read.
A few decades later, the massive casualties of the Civil War—including more than 600,000 soldiers and 50,000 civilians dead—helped bring to the fore questions about who really should be able to practice as a physician. As one trenchant 1864 observer noted, the health care workforce (although he did not use that term) was a mélange of people with diverse and at times mutually contradictory ideas, including
allopaths of every class in allopathy; homeopaths of high and low dilutions; hydropaths mild and heroic; chrono-thermalists, Thomsonians, Mesmerists, herbalists, Indian doctors, clairvoyants, spiritualists with healing gifts, and I know not what besides. What is worse, perhaps, is the fact that there is no standard—no real science of medicine—no absolute or acknowledged authority. Every one may do what is right in his own eyes.2
In an attempt to regulate a confused health care workforce, states started to enact physician licensing laws. These laws promptly led to a glut of low-quality, proprietary medical schools and what many physicians saw as a rapidly expanding, overcrowded, and underpaid workforce. During the severe depression of the 1890s, increasing numbers of practicing physicians joined the American Medical Association (AMA) in hopes that the AMA might be able to limit the ability of new trainees to gain access to the field. At the same time, a reform in allopathic medical education, soon aided by the Flexner Report in 1910 and philanthropic support, especially from the Carnegie and Rockefeller Foundations, led to the closing of poor-quality, proprietary medical schools and a more rigorous, standardized curriculum in the surviving institutions.3 Science became seen as the standard for defining accepted care. Hospitals became established sites for medical care and education. Nursing started to become professionalized.
Policy Decisions Informed by National Studies
The United States emerged from World War I (WWI) as the world’s leading economy and, arguably for the first time, the international leader in medical education. In 1927, the Committee on the Costs of Medical Care (CCMC) was created to offer the first national assessment of how best to provide medical care for all persons in the United States. The CCMC issued its landmark final report in 1932.4 The committee based its conclusions in part on a survey of over one million members of the health care workforce, including a catholic range of not only physicians and nurses but also naturopaths and religious healers. Not all practitioners were seen as valuable; the CCMC singled out “chiropractors, osteopaths and the like” as providing inferior treatment.5
A key workforce question has always been “How many do we need?” Successes in the fight against communicable diseases had reduced the numbers of sick people. This accomplishment resulted in less work and lower incomes for physicians; one-third made less than $2,500 per year (the equivalent of about $34,000 in 2013).6 There appeared to be too many physicians. In his 1934 presidential address, the AMA president pleaded with medical schools to cut the number of graduates in half.7
But the answer to “how many do we need” depended on where one lived. Physicians tended to move not where they were most needed but, in the words of Evans Clark, director of the Twentieth Century Fund (which had supported the CCMC), “almost in direct proportion to the estimated per capita wealth.”8 As the general population moved increasingly to the cities from the countryside, rural areas were left with shortages of nurses and physicians. One potential solution was based on technology, albeit not the sort of technology that we usually think of as medical. With the proliferation of automobiles, better roads, and telephones, patients could have access to nurses and physicians even if they did not live near a doctor’s office or hospital.
Specialization emerged as a new concern about the physician workforce in the period following WWI. Although 85% of patients did not need specialty care, physicians persisted in trying to become specialists. They did so for many reasons, but the most important seemed to be that specialists earned more than 250% of the salary of a general practitioner.9 Just as entrance to the physician workforce had come to be controlled by state licensure, the same approach could have been taken to control access to becoming a specialist. Some states introduced legislation to do just that, which would have provided a way to control the size of the workforce. Instead, specialists came to be certified by private specialty boards. But before World War II (WWII), specialty training and board certification were distinctly uncommon, and the dominant image of the ideal physician remained resolutely that of the generalist.10
The Rise of Specialization
WWII marked a dramatic watershed in ideas about health care. Wartime workforce needs led to domestic worker shortages and the creation of a national War Manpower Commission. Medical schools increased their output by compressing a four-year curriculum into three. The military medical services grew dramatically; at its peak, the medical department included 50,000 physicians and over 700,000 people, making it three times the size of the entire army in 1939.11 The military used board certification to assign higher rank and pay to military physicians. During the war, medical personnel saw the value of science-based medicine manifest in such advances as penicillin and blood banking. Afterwards, those who wished to pursue specialty training saw their opportunities rapidly expand. The number of residency positions doubled between 1941 and 1947.12 The GI Bill of Rights and increasing numbers of Department of Veterans Affairs (VA) hospitals provided additional opportunities. The National Institutes of Health used specialty-based study sections to allocate exponentially increasing extramural funding, which helped to establish medical specialization as the norm. Those concerned about the physician workforce ceased asking whether specialization itself was a good idea and focused instead on how many specialists were needed.13
During the postwar period, many of the same questions persisted as before, although some of the answers changed. As wartime needs abated, a New York Academy of Medicine report suggested that the United States now had too many physicians, which might lead to patients receiving unnecessary services. That same report noted that almost half of the U.S. population now lived in cities, and although some believed “that the salubrious conditions of the countryside made medical facilities less necessary in rural communities than in the cities,” rural health was a continued concern.14(p 123)
This was especially true for minority populations. In 1940, 83% of “Negro” live births in Mississippi were not attended by a physician. This failing was attributed to hospital discrimination and a lack of “Negro” physicians.14(p 133) During WWII, the distinguished African American surgeon (and blood-bank pioneer) Charles Drew noted that “Negro” health suffered because only about 2% of physicians were African American.15 The lack of African American physicians was thus seen as important in part because of its effects on health in the African American community. Drew also observed the injustice in having “Negro physicians … go forth to rid the world of terror in far places, [yet] emancipation is not yet complete at home.” A definitive 1952 Brookings Institution report on “Health Resources in the United States” remarked on the continued paucity of African American physicians. It also explicitly noted the dramatic gender imbalance in physicians—95% male—and described but did not comment on barriers to female practice, which were identified as marital status and patient preference.16
Looming since the end of WWII, the Cold War turned hot on the Korean peninsula in 1950. Only two years before, President Harry Truman had gone to the American Association for the Advancement of Science to stress the central importance of science for national defense. In 1950, he declared a national emergency and called for the nation to combat “world conquest by communist imperialism.”17 One of the key weapons in this global conflict was the physician workforce. National medical organizations responded to fears of a national emergency, of another world war, perhaps (probably) even a nuclear one, by creating new committees. The Association of American Medical Colleges (AAMC) created a Committee on Preparedness for War, and the AMA appointed a Council on National Emergency Medical Service. Medical schools did their part in the crisis by once again accelerating training programs.18
In the early 1960s, the AAMC moved its headquarters to Washington, DC, and started to play a larger role in national discussions of the health care workforce. By the 1970s, there was general agreement that an inadequate supply of physicians had risen to the level of a full-fledged crisis, one worthy of congressional and presidential attention. From 1960 to 1980, the number of annual medical school graduates more than doubled. But the pendulum swung back, and by 1981 the influential report of the Graduate Medical Education National Advisory Committee (the GMENAC report) predicted an overall surplus of some types of physicians.19 Many workforce reports have followed.
Looking Forward, Looking Backward
History is not prophecy. Historical analysis cannot tell us which ways of estimating workforce needs are correct, other than to point out that the pendulum has swung consistently between “too many” and “too few.” But historical analysis can highlight some of the implicit policy choices we have made, many of which have profound implications for the workforce, such as the decision to license medical practitioners as physicians but not as specialists. Some issues show remarkable continuity over the centuries. Rural and economically disadvantaged communities continue to struggle with access to health care providers. Wars lead to major workforce transformations, often in ways that last long after the conflict has ended. Some issues are relatively new, such as explicit concern about the ethnic and gender makeup of the physician workforce.
We continue to try to determine how to provide an adequate workforce to deliver primary care. Part of the solution may come from redefining the workforce. For example, nurse practitioners and physician assistants are potentially underused as providers of primary care, although broadening these providers’ scope of practice is a controversial idea. Garson and others20 have proposed training laypeople with some health care background to care for people in their homes as “Grand-Aides,” connecting patients with their team of care providers through transitional care, chronic disease management, and preventive interventions. Almost two centuries ago, John Gunn also offered advice for how to care for people at home. It is instructive to contrast these two ideas. Gunn wrote his handbook when families were larger than they are today and multiple generations tended to live together, but when transportation and communication were often insurmountable hurdles. Garson and colleagues’ suggestion comes at a time when households are smaller and fewer family members remain at home, but when transportation is much easier and telemedicine offers the possibility of nearly instantaneous communication and collaboration between patients and providers. Both Gunn and Garson offer solutions that respond to a specific moment in time. To be successful, future health care workforce innovations will also need to reflect the changing social, political, and economic contexts of health care.
1. Gunn JC. Gunn’s Domestic Medicine [facsimile]. Knoxville, Tenn: University of Tennessee Press; 1986.
2. Nichols TL. Forty Years of American Life. Vol 1. London, UK: J. Maxwell and Company; 1864:364–365.
3. Ludmerer K. Learning to Heal: The Development of American Medical Education. New York, NY: Basic Books; 1985.
4. Starr P. The Social Transformation of American Medicine. New York, NY: Basic Books; 1982.
5. The costs of medical care. N Engl J Med. 1930;203:135.
6. Moore HH. American Medicine and the People’s Health. New York, NY: D. Appleton and Company; 1927.
7. Bierring WL. The family doctor and the changing order. JAMA. 1934;102:1995–1998.
8. Clark E. How to Budget Health. New York, NY: Harper & Brothers; 1933.
9. Brown EL. Physicians and Medical Care. New York, NY: Russell Sage Foundation; 1937.
10. Howell JD. Historical reflections on the past and future of primary care. Health Aff (Millwood). 2010;29:760–765.
11. McMinn JH. Personnel in World War II. Washington, DC: USGPO; 1963.
12. Johnson V. Implications of current trends towards specialization. In: Ashford M, ed. Trends in Medical Education. New York, NY: Commonwealth Fund; 1949:173–178.
13. Ashford M, ed. Trends in Medical Education. New York, NY: Commonwealth Fund; 1949.
14. Stern BJ. American Medical Practice in the Perspectives of a Century. New York, NY: Commonwealth Fund; 1945.
16. Bachman GW, and Associates. Health Resources in the United States: Personnel, Facilities, and Services. Menasha, Wisc: Brookings Institution; 1952.
18. Hinsey J. Medical education in this national emergency. Med Educ. 1951;26:81–90.
19. Report of the Graduate Medical Education National Advisory Committee to the Secretary, Department of Health and Human Services. Vols 1–7. Washington, DC: USGPO; 1981.
20. Garson A, Green DM, Rodriguez L, Beech R, Nye C. A new corps of trained grand-aides has the potential to extend reach of primary care workforce and save money. Health Aff (Millwood). 2012;31:1016–1021.