Smith, Karen MD, MPH; Jarris, Paul E. MD, MBA; Inglesby, Thomas MD; Hatchett, Richard MD; Kellermann, Arthur L. MD, MPH
Health and Human Services Agency, Napa County, California (Dr Smith); Association of State and Territorial Health Officials, Arlington, Virginia (Dr Jarris); Center for Biosecurity, University of Pittsburgh Medical Center, Baltimore, Maryland (Dr Inglesby); Biomedical Advanced Research and Development Authority, US Department of Health and Human Services, Washington, District of Columbia (Dr Hatchett); and RAND Corporation, Arlington, Virginia (Dr Kellermann).
Correspondence: Arthur L. Kellermann, MD, MPH, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814-4712 (email@example.com).
The views expressed are those of the panelists and do not necessarily reflect the views of their organizations or agencies.
The authors declare no conflicts of interest.
This commentary on the scope, content, translation, and policy utility of research is shaped by the authors' perspectives from federal, state, and local levels and national policy making. The reflections presented here were offered in response to presentations at the Dynamics of Preparedness Conference in Pittsburgh, October 22-24, 2012, many of which are included in this journal supplement issue.
Perspective on the Science of Preparedness
In trying to think about such events scientifically, we face what Don Burke, a leading proponent of modeling for preparedness, might call an epistemological problem: “How do we learn from samples of one or fewer?”1 In a Commentary in this issue,2 he describes the march from data to information to knowledge to wisdom and, ultimately, to understanding. Other articles in this issue map onto that paradigm. The conceptual problem is how to create models that describe the behavior we are seeking to understand. The metrics problem is how to sift through masses of data to identify the critical indicators of preparedness. Another important research question is how to couple modeling methodologies and analyses of dynamic behavior with empirical descriptions of real-world events.
To move the practice of disaster preparedness and response closer to something that could be considered a disaster science, we need to get beyond viewing every disaster as a one-off event and adopt evidence-based approaches. But identifying what drives such deeply complex events is extremely difficult. Some of our most valuable insights may come from fields that seem only remotely related to the present topic, such as ecology, where the dynamics of systems have long been studied. Ecologists understand equilibria: how the systems they study move from one equilibrium to another when a predatory or invasive species is introduced into an ecosystem or when a species is removed. Analogies like these are needed to understand community resilience and to determine the core capabilities required to ensure that communities heal swiftly after major disruptions. Quoting Burke: “If we don't understand dynamics, we don't actually know what matters. Without knowing what matters, we don't know what to measure.” Taking a system dynamics perspective is imperative to identify the most appropriate measures and determine how to use them. And what gets measured gets done. Analytics, therefore, form the link between model building and our desire to operationalize the steps required to enhance preparedness.
Perspective on Research Priorities
A letter report of the Institute of Medicine in which one of us (P.E.J.) participated proposed priorities for preparedness research in 2008.3 Now that 5 years have passed, it is time to revisit those recommendations. Two recommendations—the importance of translational research and the need to work with vulnerable populations—remain vitally important. Another was the need to generate better metrics to measure the effectiveness of what we do. State health officials are constantly asked, “We've invested all this money, what have you got to show for it?” They must be able to confidently answer, “Here is what we set out to achieve, and here is how we measure our progress toward it.” The Institute of Medicine also noted the importance of “creating and maintaining sustainable preparedness and response systems.” This continues to be a huge challenge. Between 2005 and 2012, supposedly the era of preparedness, funding for both Public Health Emergency Preparedness and the Hospital Preparedness Program decreased by approximately 26%. With sequestration,4 Public Health Emergency Preparedness and Hospital Preparedness Program funding will be reduced by an additional 5% at the federal level, and the Office of Management and Budget states that the effective percentage reductions may ultimately be as high as 9%. The public needs to understand what they will lose with these cutbacks: public health emergencies will not go away because this funding was cut.
The most important reason to develop good measures is to drive continuous quality improvement. Considering the public health research agenda for the future, a good place to start is with the consensus statement on quality and public health issued by the US Department of Health and Human Services in 2008: “Quality in public health is the degree to which policies, programs, services, and research for the population increase desired health outcomes and conditions in which the population can be healthy” [emphasis added]. Research is needed to guide better decisions. To that end, measures are needed that are economical and fast, so they can affect decision making in real time. “Plan-Do-Study-Act” cycles for continuous quality improvement cannot wait years for the results of a research project. Information in real time is required. The Preparedness Critical Incident Registry project of Klaiman et al5 moves us in that direction.
Perspectives on Translating Research Into Practice
To ensure that new findings are swiftly implemented, it is important to engage the practice community in framing the research questions. That way, the time between producing new findings and translating them into practice can be minimized. Ultimately, the quality of research should be judged by the extent to which it leads to improved health outcomes.
It is generally accepted that it often takes 15 to 20 years for clinical research findings to spread widely into practice. But in public health preparedness, we are told: “Make it happen yesterday.” Although the Institute of Medicine Committee3 decided that case study methodology is not a valid form of science, the fact is that we learn some of our most valuable lessons from experience. What we must do now is embed research into practice to produce improvements in real time.
A Perspective on Local Realities
The daily reality of the local public health practice community is one of rapidly shrinking resources and an ever-expanding scope of responsibility. In light of these constraints, answers are needed to some very practical questions. What preparation is needed for a public health emergency? What are the critical activities that every health department, large or small, must be able to accomplish to protect the public's health?
The next question is how these activities should be accomplished. What is the most effective way to carry them out? What are the component parts that build the core skills, knowledge, and abilities our workforce needs before the next big event? Are there critical infrastructure elements we must invest in now to perform these activities in the future?
Finally, who in our communities is at greatest risk of harm from the various disasters we may be required to address? What makes communities resilient, so they bounce back from significant damage? How can public health agencies foster resilience at the local level?
“Stories” of past responses need to be translated into useful information to guide future actions. Improvement will result if we take the lessons of each event to heart. Those who fund our work need to learn as well. Given the current economic climate, programmatic silos for preparedness activities are no longer viable—they are not only wasteful but also counterproductive. By their very nature, emergency and disaster responses at the local level require the participation of the entire public health workforce, regardless of their everyday roles. Many local agency staff members have detailed knowledge of the populations and communities they serve, and they bring these partnerships to the table. The modest preparedness funding from federal and state governments should come with the flexibility to be used in the most effective way, even if, on a daily basis, that use does not look much like emergency preparedness.
Perspectives on Policy Priorities
From a policy perspective, 4 priorities top the list. The first is, “Can the preparedness community continue to show that public health preparedness saves lives, reduces costs, and helps communities recover from disasters?” Around the country, it is obvious what firefighters and police do, but sometimes less obvious how the public health preparedness community saves lives. As a result, some in Washington feel that the urgency of public health preparedness has come and gone. Policy makers who are not persuaded of the ongoing relevance and value of these programs will not support them. So, it is crucial to generate evidence that preparedness saves lives, reduces costs, and helps communities recover from disasters.
The second priority is to keep demonstrating the value of research. A political leader in Washington recently said, “Enough research. It's time to get on to putting these things into practice. Let's go to operations. We don't have time for more research.” If we stop doing preparedness research, we will lack answers to important questions such as the value of nonpharmaceutical countermeasures in epidemic response, the evidence required to optimize mass distribution strategies, and the best strategy for communicating with the public during a population health emergency.
The third priority is to calculate the return on investment we have generated from spending on public health preparedness to date. Policy makers from the executive branch and from the legislative branch, representing both sides of the aisle, are asking, “Are we spending money wisely?” The National Health Security Preparedness Index project has as its core purpose generating an overall estimate of preparedness for states and the nation. A community resilience index is in the works as well. These kinds of research activities are really important.
The fourth priority is to continue to conduct research on specific, high-value questions. For example, should schools be closed during the next pandemic? How effective are existing biosurveillance systems? How can we promote cross-disciplinary teamwork for an effective response? If we can answer these questions, the public will be safer.
Several articles in this supplement issue present modeling studies. Two challenges go beyond the research problem of building more precise models. One is communicating results to nonmodelers in understandable ways. Those who actually make the big decisions in disasters are not public health people, scientists, or modelers; they are governors, mayors, or county commissioners. Public health is simply an input into what is essentially a political decision, and political decisions are often based on economics, politics, and power. Therefore, we must be able to translate our models into the language of politics and economics, because that is what motivates decision makers. The second challenge is developing a process, during a crisis, for reconciling conflicting results from different models—to determine which model is best for which question.
© 2013 Lippincott Williams & Wilkins, Inc.