Technology has without a doubt changed the way medicine is practiced, often for the better. Not all changes deserve that accolade, however, and not all traditions should be thrown out in favor of the new.
We should ask what each new advance's usefulness will be before we embrace it. What is it replacing? What will be lost or placed at risk as a result? The CT scan is but one example. Its use revolutionized medicine, and its value is unquestioned, but it was so overused that it has become the most widely criticized technology in medicine.
The expected deaths from its use seem to grow with each new publication, and I find it increasingly difficult to convince patients that it is necessary when it truly is. Their care is jeopardized by their fear. A more deliberate, thoughtful adoption of this valuable diagnostic tool might have avoided this. That approach was not adopted for a number of reasons, not the least of which was the financial gain that scanner manufacturers would experience, even as patients experienced risk of radiation-related death.
Newer scanners and techniques now reduce the risks, but that only raises the question: Why was this safer model not introduced earlier, before the media panic forced it upon us? I began practicing medicine before CT scans were available. In the early days, I would have to convince a radiologist that one was necessary. Now patients can get a whole-body screening CT at their own request and expense in spite of an FDA policy stating that such screening has no demonstrated benefit and carries known risks. (http://bit.ly/2du7bwl.) Marketing has once again triumphed over reason.
The radiation exposure associated with CTs is now widely known, but the secondary risk of false results from testing low-risk patients is not so apparent. It should have been obvious from the beginning that positive results in populations with a low prevalence of disease would likely be false positives. This was confirmed by a study of the risk of false-positive screening results among patients enrolled in a multimodal cancer screening program. After 14 tests, the cumulative risk of having at least one false-positive screening test was 60.4 percent for men and 48.8 percent for women. (Ann Fam Med 2009;7:212.)
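The arithmetic behind the low-prevalence problem is worth making concrete. The short calculation below uses purely illustrative assumptions, not figures from the cited study: a hypothetical test with 90 percent sensitivity and 95 percent specificity, a screened population with 1 percent disease prevalence, and an independent 5 percent false-positive rate per test repeated over 14 tests.

```python
# Illustrative sketch of why screening low-prevalence populations yields
# mostly false positives. All inputs are assumed values for demonstration,
# not figures from the Ann Fam Med study.

sensitivity = 0.90   # assumed: P(positive test | disease present)
specificity = 0.95   # assumed: P(negative test | disease absent)
prevalence = 0.01    # assumed: 1% of the screened population has disease

# Bayes' theorem: the chance that a positive result is a true positive
ppv = (sensitivity * prevalence) / (
    sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
)
print(f"Positive predictive value: {ppv:.1%}")

# Cumulative chance of at least one false positive over repeated tests,
# assuming each test independently carries a 5% false-positive rate
per_test_fpr = 1 - specificity
n_tests = 14
cumulative_fp = 1 - (1 - per_test_fpr) ** n_tests
print(f"Risk of at least one false positive after {n_tests} tests: {cumulative_fp:.1%}")
```

Under these assumed inputs, the positive predictive value comes out near 15 percent, meaning roughly five of every six positive results are false, and the cumulative false-positive risk comes out near 51 percent, the same order of magnitude as the 48.8 to 60.4 percent the study reported.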
These figures apply to a range of tests, not just CT scans, but the conclusion may well be valid nonetheless. Patients with false positives are subjected to further testing and procedures, with further costs and risks, where sound clinical judgment would have avoided the tests, the procedures, and the risks altogether. The real issue is that new technology was embraced and allowed to replace other valuable practices while its risks went unappreciated or were ignored. Most commonly, the practice that is replaced is clinical judgment, and no measurement can quantify its significance.
The true risk of the rush to embrace the newest technology may be the alteration of our practice. The practitioner substitutes one modality for another, and the old practice may be eliminated before the new one has been adequately assessed for risks. We used CT scans to enhance diagnostic accuracy before they had proven their benefits, and by the time their shortcomings were uncovered, we had lost the skills they replaced.
This is true for all technology but even truer for protocols and bundles. When a technology or a protocol is established as the “standard of care” by the government or insurance fiat, it not only eclipses possible alternatives, it also eliminates them. We and our patients become the victims if the goal of diagnosis and treatment is compliance with protocol and not outcome of treatment. Protocols guide us, but they also provide a shield behind which we can hide.
If the lab or imaging fits into the protocol and directs us to discharge the patient, we need do no more; in fact, we are instructed to do no more than is necessary to adhere to the protocol. The patient ceases to be our responsibility, and we need not ask if the “treatment” were effective. Fulfilling the protocol becomes the goal, not ensuring adequate care.
The sepsis protocols provide an excellent example. Multiple studies have shown that well-directed clinical management is equal to or even superior to protocol. Protocol-based “resuscitation of patients in whom septic shock was diagnosed in the emergency department did not improve outcomes,” but the protocol allows no substitution to include the use of our own judgment. (New Engl J Med 2014;370:1683.) This has not dissuaded the persistent and insistent use of protocols, however.
The trouble with clinical judgment is that it cannot be quickly learned or easily quantified. It cannot be acquired over a weekend in Las Vegas. No intense session of five or six hours will provide training, and no checklist will verify its presence on a medical record. An expert cannot write a bundle or protocol to substitute for it in any way. It cannot be measured as easily as compliance with a protocol can. It must be developed over years of training and practice.
This confounds the designers of courses and the coders of charts alike, but as with all skills, clinical judgment must be valued if it is to remain part of our practice. It is much easier to retreat behind a protocol or a new technology than to take the risk of thinking for ourselves or respecting a colleague's decision and judgment. We must start this process in our everyday practice, and then we must make sure that those who supervise our practice, the administrators, embrace the same philosophy.
Copyright © 2017 Wolters Kluwer Health, Inc. All rights reserved.