Journal of Public Health Management & Practice: May/June 2010 - Volume 16 - Issue 3
doi: 10.1097/PHH.0b013e3181e030d3
Commentary

Toward a Taxonomy of Public Health Error

De Ville, Kenneth PhD, JD; Novick, Lloyd F. MD, MPH

Author Information

Kenneth De Ville, PhD, JD, is Professor, Brody School of Medicine, East Carolina University, Greenville, North Carolina.

Lloyd F. Novick, MD, MPH, is Professor and Chair, Department of Public Health, Brody School of Medicine, East Carolina University, Greenville, North Carolina.

Corresponding Author: Kenneth De Ville, PhD, JD, Brody School of Medicine, East Carolina University, 2S-17 Brody, Greenville, NC 27858 (devillek@ecu.edu).

In his groundbreaking 1984 New England Journal of Medicine article titled "Facing Our Mistakes," David Hilfiker encouraged physicians to come to terms with the inevitability of medical mistakes and with the implications of the profession's apparent inability to acknowledge the limitations of medical science and art.1 Twenty-five years later, in this issue of the Journal of Public Health Management and Practice, David R. Holtgrave, in "Public Health Errors: Costing Lives, Millions at a Time," makes a parallel plea to public health professionals and policy makers.2 Drawing on the analogy to medical error, Holtgrave introduces a new term, public health error. The so-called "evidence-based medicine" movement stimulated the Council on Linkages between Academia and Public Health Practice to explore the usefulness and feasibility of "evidence-based public health," culminating in the formation of the USPHS Task Force on Community Preventive Services. A similar examination of the benefits and applications of this new terminology is warranted.3 Will it lead to improvements in public health practice? On a larger scale, will the intent to reduce these errors in turn motivate policy and guide resource allocation, leading to health improvement in our communities? Regardless of how these downstream consequences ultimately evolve, we applaud Professor Holtgrave's recognition that the time has clearly come to inaugurate a robust dialogue on the definition, identification, and meaning of public health error.

The Taxonomy of Public Health Error

Holtgrave subdivides public health error into 3 general “categories”:

Category 1. Errors of deliberate commission

Category 2. Errors of willful omission

Category 3. Errors of complacency

It is too early in this nascent discussion to settle irrevocably on a particular terminology used to label each category of error or to steadfastly assert that a particular categorization is the final word on what constitutes public health error. But a few preliminary observations are appropriate. There is, Holtgrave posits, almost certainly not 1 type or category of error, but many. And, each category is likely to have different implications for public health practitioners and policy. Finally, some of the resulting categories of error may ultimately have no particular significance for the quality movement or public health practice directly, but the identification of even these more abstract categories of error may be important for conceptual clarity and consistency.

Intentional Acts Versus Error

The common feature in Holtgrave's three categories of errors is "an intent to do harm, or at least the lack of caring about fully discharging one's public health duty to serve the public good." His first 2 categories explicitly classify decisions or actions that result from deliberate commission or willful omission as errors.

In contrast, these qualities alone do not mean that intentional acts and omissions should be classified as errors. Indeed, it may be inapt, and perhaps unwise, to include intentional departures from accepted standards as errors. Dictionary definitions and definitions of error proposed in various other contexts may be instructive, even though they typically do not distinguish between types of error and sometimes rely on overlapping and self-referential language. The Oxford English Dictionary defines error as "something incorrectly done through ignorance or inadvertence; a mistake, e.g. in calculation, judgment, speech, writing, action, etc."; or as "the condition of erring in opinion; the holding of mistaken notions or beliefs."4 Professor Holtgrave himself cites the definition offered by the Institute of Medicine, which identifies medical errors as "the failure of a planned action to be completed as intended (error of execution) or the use of the wrong plan to achieve an aim (an error of planning)."5

In contrast to these definitions, the "common feature" of Holtgrave's three categories of errors is "an intent to do harm, or at least the lack of caring about fully discharging one's public health duty to serve the public good." We agree that willful commissions and omissions that either cause harm or create an unacceptably high risk of harm clearly represent actions for which the individual decision makers should be held accountable in one fashion or another. But intentional acts of departure from accepted norms and standards should not be categorized as errors; they are more accurately characterized as unjustified violations or breaches. In almost every other professional context, the designation of error is reserved for acts of carelessness, negligence, inattention, lack of skill, lack of knowledge, or misjudgment that lead to a result that is not intended. Violations and breaches are easy to identify and are rightfully sanctioned because there are existing standards that have been intentionally subverted. Consequently, the actors should be held accountable and responsible. (As we discuss later, our views on assigning culpability for "nonintentional errors" are quite different.) While intentional acts that are contrary to accepted practice standards cannot be condoned in public health practice, we believe that the label of error should be reserved for other categories of action and inaction.

The questions we raise regarding whether intentional departures from clear standards should be considered errors under Holtgrave's category 1 designation, or instead violations or breaches, should not obscure the potential benefits of clearly identifying and labeling in some fashion this class of what should be considered unacceptable activity by public health actors. The use of what Holtgrave has designated category 1 and category 2 errors does have exciting potential for practice effectiveness and methods of quality assurance. Holtgrave provides examples of this type of error, including providing substandard tuberculosis treatment or divulging names of persons with human immunodeficiency virus infection from a confidential dataset. Numerous other examples of this type of error might be envisioned: failure to perform contact investigations in tuberculosis or syphilis, inadequate notification of those at risk following a food-borne outbreak, lack of abatement activities after a child is discovered with an elevated blood lead level (Holtgrave identified this as a category 3 error; we might argue for its inclusion within category 1), absent or inadequate child seat or bicycle helmet programs, and inadequate public notification after documented microbial or chemical contamination of drinking water. These are outcome indicators relevant to all local and state health departments in the United States.

A set of outcome and performance indicators is now used by The Joint Commission on Accreditation of Healthcare Organizations, supplementing its previous sole use of structural and process standards. Could such a method, stimulated by Holtgrave's category 1 public health error, become a central component of quality assurance for health departments? In January 2010, the Journal of Public Health Management and Practice published a special issue on quality assurance in public health. With the possible exception of the article by Jeff Gunzenhauser, describing comprehensive quality improvement efforts at the Los Angeles Department of Health, this series of state-of-the-art quality assurance activities in public health measured quality using the proxies of performance management and accreditation standards. At the Los Angeles Health Department, quality improvement efforts included professional practice and public health science as well as performance measurement. Holtgrave's public health error concept, regardless of the precise labels ultimately applied to the various categories, might contribute to an indicator-based method of measurement that yields results with high impact.

Who Decides?

Holtgrave's proposal of category 1 and category 2 errors ("errors of deliberate commission" and "willful omission") raises additional problems with respect to who is empowered to define what is omitted or neglected. What criteria are used in making such determinations? Who is held accountable? Holtgrave uses the example of failure to implement a needle exchange in a community where intravenous drug use is contributing to human immunodeficiency virus infection. One of the authors (L.F.N.) has worked in one community to implement a needle exchange and in another community where he could not implement such a program because the elected officials who set policies and allocate resources believed that needle exchange would actually encourage intravenous drug use. Clearly, those of us in public health regard this latter view as unenlightened. But should such a strategic shortcoming be viewed as "error" or merely as the product of a different set of values and priorities held by duly elected political representatives? Needle exchange may not be the best example of this dilemma; evidence, after all, does support the effectiveness of such programs. But in other settings, public health officials might want to ban trans fat, remove French fries from fast food restaurants, and require motorcycle riders to wear helmets. All such measures can be justified for public health reasons. Despite the existence of such public health justifications, different conclusions have been reached on each of these issues in different regions of the country. Varying positions on public health issues are frequently based not only on a misreading of the available empirical evidence but also on the prevailing ideological predispositions of the decision makers and of the communities served. Public health officials who ignore such realities may be, and have been, accused of "paternalism" for insisting that their view or value system is the one that should prevail.

Some of these same limitations apply to what Holtgrave refers to as category 3 inertia. Should public health dictate prioritization with respect to resource allocation? Resources are finite, and additional resources allocated to public health may reduce funds for other societal needs, including housing, job training, and education, all of which themselves have an impact on public health.

“Bad Outcomes Might Not Be Errors, Strictly Speaking”

We second Holtgrave's conclusion that a public health decision or action that results in a bad or less than ideal outcome does not automatically connote error. However, it may be useful to conceptualize some such decisions as errors, albeit not culpable or preventable ones. Consider a potential error category labeled "flawed state-of-the-art" error. "Flawed state-of-the-art" errors might be viewed as those decisions or actions that retrospectively prove mistaken or incorrect, even when the decision maker could not have decided or acted in a more advantageous way given the then-current state of the art and existing knowledge. For example, one might suggest that Newton's laws of motion were mistaken or incorrect because they were not consistent with special relativity. Physics in general, and measurement techniques in particular, had not advanced sufficiently far to allow Newton to understand the limitations of his equations.6 At the same time, however, Newton's equations were consistent with the best data and judgment at the time, even though they were ultimately in error.

On one hand, it is certainly possible to argue, as we believe Holtgrave would, that actions and decisions based on the correct evaluation and implementation of all existing knowledge should not be counted as errors because the actor lived up to the highest possible standard. In other words, "flawed state-of-the-art errors" should not be counted as errors because the actor could not have acted otherwise and was not responsible for the mistake. On the other hand, there may be good reasons to argue that "flawed state-of-the-art errors" should in fact be counted as errors. These incidents, after all, appear to fit the common-language definition of error, even if they should not be counted as culpable or preventable errors. Consider, for example, the physicians of the 1950s who provided hyperoxygenation to premature infants to save their lives. Little did clinicians know or suspect that the uncritical use of high levels of oxygen could lead to retrolental fibroplasia and blindness. In some sense, however, it seems clearly inaccurate to consider treatment with such burdensome effects as error-free or appropriate care.

The prevailing reluctance to conceptualize "flawed state-of-the-art" shortcomings as mistakes is related to the persistent problem of conflating 3 related but distinct concepts: (1) preventability, (2) culpability, and (3) error. If one can separate the notions of culpability and wrongful conduct from the notion of error, then there may be a range of benefits in identifying a category of error related to flawed state-of-the-art decisions and actions. The open recognition and transparent labeling of such oversights as "errors" can lead to improvements in care and to advancements in public health science, practice, and policy. Once those errors have been identified and publicized, they might later be viewed as culpable acts or omissions if public health practitioners do not integrate such knowledge into their practice and decision-making processes. Science, research, and experience will transform error that results from an insufficiently developed state of knowledge in the field into error that originates from ignorance of, or the misapplication of, currently available knowledge. Thus, actions or inactions that constitute nonculpable flawed state-of-the-art errors at one point in time might eventually be viewed as culpable errors of a different sort once the relevant deficiencies are identified.

The Irrelevance of Culpability?

The conflation of error with culpability can undermine the overall benefits of Holtgrave's exhortation to recognize and respond to public health errors. In fact, a focus on culpability is likely to do more harm than good as academics, policy makers, and public health practitioners attempt to respond to identified errors in the various public health arenas. The goal of such a campaign, after all, is ultimately to improve the public's health, not to identify wrongdoers. The identification of error is difficult enough without complicating it with the determination of responsibility and culpability.

Errors are typically identified retrospectively, by their results. When a decision does not lead to the planned, expected, or desired outcome (an error of planning) or when an action is not completed as intended (an error of execution), we typically conclude that an error has occurred. Linking the action or decision causally with the unwanted result will be difficult, given the multiplicity of factors that typically play a role in a public health event. Even so, the identification of an action or decision that would have led to better results can play an essential role in educating and informing future actors and decision makers in similar contexts. Assume that a public health actor performs an action or makes a judgment that, after the fact, proves to be objectively incorrect or leads to bad or less than optimal results. The determination of whether that mistake was also culpable or blameworthy requires an entirely additional and complicating analysis, one that will likely confuse rather than enlighten the investigation of the incident and the development of a remedy. If culpability and accountability become the focus of the incident, the analysis will bog down in whether the incident or negative result was prospectively preventable and who was to blame, rather than in what precipitated the incident and how the results could be improved in similar future scenarios. An excessive focus on culpability also discourages disclosure and distorts the discussion and insights from the parties most likely to understand the causal nuances of the public health incident: the participants themselves.

As a result, we recommend a near "no blame" culture when it comes to the identification, disclosure, and analysis of public health error. Such a posture would encourage disclosure and nurture frank dialogue on the complicated genesis of public health error. It is true that in some instances public health actors should be held responsible for their mistakes. But we believe that in the vast majority of cases, accountability is infinitely less important than uncovering the source of the error and developing a remedy to prevent repetition in the future. While the time may have come for the development of a conceptualization of public health error, the ultimate goal of that exercise is enhanced quality in the development of public health policy and the delivery of public health services.

Sentinel Events and Public Health Practice

An alternative approach to Holtgrave's categorization of public health errors was advanced in the 1970s by David Rutstein, whose work proposed the use of sentinel health events as a basis for measuring the quality of medical care and public health surveillance.7 This method, also used by investigators in England and Sweden, counts cases of unnecessary disease and disability and unnecessary untimely deaths. Conditions are listed for which the occurrence of a single case of disease or disability, or a single untimely death, would prompt the question "Why did it happen?" In addition, indicators representing conditions with increasing rates of disease, disability, or untimely death can serve as indices of the quality of care or of public health services. These events are sentinels, or outcome measures, pointing to the need for improvement in our medical or public health activities. Rutstein refers to this process as the identification and investigation of the "airplane crashes in health." Rutstein's approach is particularly attractive because it recognizes the protean nature of accountability for health problems in our society.

In selecting a particular condition as a sentinel health event, we have assumed that if everything had gone well, the condition would have been prevented or managed. The chain of responsibility to prevent the occurrence of any unnecessary disease, disability, or untimely death may be long and complex. The failure of any single link may precipitate an unnecessary, undesirable health event. Thus, the unnecessary case of diphtheria, measles, or poliomyelitis may be the responsibility of the state legislature that neglected to appropriate the needed funds, the health officer who did not implement the program, the medical society that opposed community clinics, the physician who did not immunize his or her patient, the religious views of the family, or the mother who did not bother to take her infant for immunization.

A “Systems” Approach to Public Health Error?

The role and importance of complex systems have become nearly an article of faith among those who think and write about error in medicine, engineering, aviation, and other endeavors. The so-called systems approach recognizes that error may be viewed in 2 ways: as the failure or limitation of an individual and as the natural and likely consequence of a complex system that creates error traps in the workplace and in organizational processes.8,9 The systems approach assumes that humans are fallible and sees the primary origins of error in systems factors. In terms of public health error, the systems approach suggests that we need to make clear that administrative and organizational policies and decisions, including funding decisions, will play an important and perhaps central role in causing and preventing errors at the individual operational and public health practice levels. It is important to recognize that an administrative or organizational decision that allows or even causes errors among public health practitioners might, in some cases, itself be categorized as at least a species or source of error. Thus, while individuals frequently spark error, the mistake also frequently has its genesis in interactions among various critical actors, in organizational structures and the distribution of tasks and work responsibilities, in the functions assigned to and reserved for particular components of the organization, and in the limiting and channeling role played by applicable protocols, policies, and guidelines. Admittedly, it will likely be very difficult to identify "systems errors" in public health practice, given that many such decisions, by their nature, are based on a multitude of scientific, economic, cultural, and political considerations that must be balanced and reconciled. But in any event, we should make neither the mistake of focusing on individual error alone nor the opposite mistake of focusing disproportionately on systems. The source of error in public health, as in medicine, aviation, and other complex systems, will usually prove both individual and systemic.

Conclusion

Whatever rubric or categorization is ultimately applied, Holtgrave's coining of the term and his call for a systematic discussion of the nature and implications of "public health error" are groundbreaking and vital; they clearly represent an idea whose time has come and should be applauded. To the extent that public health professionals and other observers discuss error at all, they most frequently sound like Supreme Court Justice Potter Stewart discussing pornography: they know it when they see it, but they find it very difficult to describe in explicit terms and categories. Actors at the practical, institutional, and policy levels are frequently motivated by perceptions of error shaped by stories or particular incidents rather than by crisp categorical definitions. As a result, there remain gaping seams in the understanding of what constitutes error, culpable error, and the error that matters in public health policy, management, and practice. These conceptual and taxonomical limitations hamper the ability to construct, conduct, and interpret empirical work on the topic and to monitor performance in the public health practice arena. Put simply, without a clear definition of error and of the subcategories of error, it is extraordinarily difficult to determine how frequently errors occur, what they cost, and what reforms will decrease their prevalence. The foundational work of determining and discriminating between categories of error and creating a working taxonomy will help provide a common language for scholars, reformers, epidemiologists, and rank-and-file public health professionals. This conceptualization and application of error analysis may also serve, as Holtgrave counsels, as a new and important adjuvant approach to analyzing public health problems in a community by defining, tracking, and developing interventions to address public health errors. Those of us involved in quality assurance efforts for local public health activities need to heed his words and advice.

REFERENCES

1. Hilfiker D. Facing our mistakes. N Engl J Med. 1984;310:118–122.

2. Holtgrave DR. Public health errors: costing lives, millions at a time. J Public Health Manag Pract. 2010;16(3):211–215.

3. Novick LF. Public health practice guidelines: a case study. J Public Health Manag Pract. 1997;3(1):59–64.

4. Compact Edition of the Oxford English Dictionary. Vol 1. New York, NY: Oxford University Press; 1971:892.

5. Institute of Medicine. To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 2000:28.

6. Thomsen M, Resnik D. The effectiveness of the erratum in avoiding error propagation in physics. Sci Eng Ethics. 1995;1:231–240.

7. Rutstein DD, Berenberg W, Chalmers TC, Child CG, Fishman AP, Perrin EB. Measuring the quality of medical care: a clinical method. N Engl J Med. 1976;294(11):582–588.

8. Reason J. Human Error. New York, NY: Cambridge University Press; 1990.

9. Perrow C. Normal Accidents: Living With High-Risk Technologies. New York, NY: Basic Books; 1984.

© 2010 Lippincott Williams & Wilkins, Inc.
