Data as a Substitute for Judgment

Twa, Michael D., OD, PhD

Optometry and Vision Science: May 2019 - Volume 96 - Issue 5 - p 323–324
doi: 10.1097/OPX.0000000000001391
EDITORIALS

Editor in Chief, Birmingham, AL

Robert McNamara was the U.S. Secretary of Defense during the Vietnam War, from 1961 to 1968. He graduated from the University of California at Berkeley in 1937 with a degree in economics and later earned a master's degree in business administration from Harvard. He was trained to pursue sound management practices grounded in facts and statistical optimization. During World War II, he served in the Army Air Forces, where he taught business analytical methods and management practices that brought greater efficiency to the resources he managed. His ambition was to apply sound management practices to bring about positive change.1

After the war, McNamara's organizational and managerial skills were sought by the president of Ford Motor Company, who brought him to Dearborn, MI. McNamara helped transform the company's management practices, and after 15 years, he was appointed the first president of the company who was not a member of the Ford family. Less than 3 months later, John F. Kennedy appointed McNamara as the U.S. Secretary of Defense. It is fair to say that McNamara's expertise in management and emphasis on statistical analysis were a novel way to understand complex organizations and, in his hands, proved capable of generating transformational efficiency.

As the Secretary of Defense, McNamara was responsible for many of the strategic decisions that determined how the United States prosecuted the Vietnam War (troop deployments, bombing campaigns, etc.). His decisions were guided by many of the same principles and practices that he had developed and refined in the business world: measurement, statistical analysis, and extensive reporting. During the war, McNamara was widely criticized for his reliance on narrow measures, such as body counts, as a basis for judging broader progress and success. These counts were treated as important local intelligence and served as the guiding input for managers responsible for understanding and conducting a complex enterprise with little direct experience or local knowledge of the daily conditions and activities under their supervision. In theory, this simple metric could be used to gauge progress toward larger goals. In practice, because tangible benefits could come to those who provided desirable counts, fabrications were common, diminishing the value of the chosen metrics.

Management through metrics has become a modern fascination and pervades all of our public institutions: health care, education, policing, the military, and so on. But at what cost? When metrics are substituted for judgment and used as the basis for a reward system, devastating failures are not only possible but also likely. Rewards based on metrics can become a perverse incentive that shifts behavior toward self-interest. Surgeons who receive financial rewards based on patient survival rates tend to avoid operating on risky patients.2 When schools are funded and rewarded for standardized test scores, there is a risk that teachers and administrators will sacrifice education for superior test performance.3

In his recent book, The Tyranny of Metrics, Jerry Z. Muller explores society's historical affinity for metrics and our willingness to forgo expertise and judgment when presented with numerical summaries. Muller summarizes his thesis as follows:

“…measurement is not an alternative to judgment: measurement demands judgment: judgment about whether to measure, what to measure, how to evaluate the significance of what's been measured, whether rewards and penalties will be attached to the results, and to whom to make the measurements available.”

Evidence-based clinical practice can be susceptible to the dangers of metric fascination unless one considers all three elements that define evidence-based practice: (1) the best available evidence, (2) clinical expertise, and (3) patient values and preferences. By this definition, expert clinical judgment and experience, as well as patient preferences, are separate considerations that may modify or even override quantifiable evidence.

There is cause for a new wave of concern related to the culture of metric fixation: big data. Big data is an emerging field concerned with ways to analyze, extract, and summarize information from data sets that are too large to be handled by conventional data processing methods. Examples include genomics, meteorology, clinical informatics, economic simulations, and others. In April 2019, the annual Association for Research in Vision and Ophthalmology meeting will include a full-day educational session on one aspect of big data, artificial intelligence. More than 1000 posters are scheduled to be presented on topics related to machine learning, data mining, and other approaches to extracting knowledge and information from large data sets. Given our past susceptibility to metric fixation, big data will likely present us with many familiar and some new opportunities for misinterpretation and misdeeds. Beyond the allure of new technology capable of harnessing massive data collections, we will still have to confront our inclination to accept stories told with numbers as somehow more believable or trustworthy. As thinking people, we will have to see through dashboards of indicators that summarize small facts as indisputable truths. We will have to ask how the data were collected and what assurance we have that they are free from errors and bias. In short, we will still have to know something about the underlying information source. Metrics originating from flawed data will present distorted facts cloaked in an air of numerical authority and precision.

A healthy dose of skepticism can help us navigate these dangers. Bias is as real a concern in a big database as it is in a small clinical study. Careful sampling, rigorous experimental design, and randomization can guard against many forms of unintentional bias. Beware, too, of the numerical siren's song of pseudo-quantification: it may be possible to measure something, but numbers without context are meaningless. For example, if an author reports that 40% of the population in a community developed cataracts over a 2-year period, it would be good to know the age of that cohort; context matters. Mining large data sets to explore plausible theories can be a great way to understand associations, develop new insights, and generate testable hypotheses. However, information generated from such work must be carefully and critically scrutinized. With large samples, it is easy to find statistically significant differences that are clinically meaningless, and if you examine enough comparisons, something is bound to be significantly different by chance alone. With big data comes big responsibility. Many companies are currently vying to define future markets for the use of big data in ways that will influence clinical decision making, and significant financial interests are at stake. Because health care is such a significant proportion of the U.S. economy, clinical decision support, clinical informatics, and health care management are all viable business targets for metric-based decision making.
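A simple, hypothetical illustration of the multiple-comparisons problem makes the point: if 20 independent comparisons are each tested at the conventional 0.05 significance level and no true differences exist, the probability that at least one appears significant by chance alone is 1 − (1 − 0.05)^20 = 1 − 0.95^20 ≈ 0.64, or roughly a two-in-three chance of a spurious finding. A large data set can invite hundreds of such comparisons.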

“Not everything that is important is measurable, and much that is measurable is unimportant.” Elliot Eisner

REFERENCES

1. Rosenzweig P. Robert S. McNamara and the Evolution of Modern Management. Harvard Business Review, December 2010. Available at: https://hbr.org/2010/12/robert-s-mcnamara-and-the-evolution-of-modern-management. Accessed April 1, 2019.
2. Chatterjee P, Joynt KE. Do Cardiology Quality Measures Actually Improve Patient Outcomes? J Am Heart Assoc 2014;3:e000404.
3. Springer MG. Performance Incentives: Their Growing Impact on American K-12 Education. Washington, DC: Brookings Institution Press; 2009.
© 2019 American Academy of Optometry