Letter to the Editor

EMRs Populate AI with Garbage Data

doi: 10.1097/01.EEM.0000657704.50922.0b

    Letters to the Editor

    Emergency Medicine News welcomes letters to the editor about any subject related to emergency medicine. Please limit your letter to 250 words, and include your full name, credentials, and city and state of residence or practice.

    Letters may be edited for content, length, and grammar. Submission of a letter constitutes the author's permission to publish on all media, including print, online, and social media, but does not guarantee publication. Letters express the views of the authors and do not necessarily reflect those of Emergency Medicine News and Wolters Kluwer.

    Letters to the editor may be sent to emn@lww.com.

    Editor:

    I read with great interest the article by Anup Salgia, DO. (“The ED as the Epicenter of Artificial Intelligence,” EMN. 2019;41[12]:29; http://bit.ly/2sD94mw.) While he holds out the promise of AI analyses of EMRs, he fails to contend with one important detail: EMRs are, by and large, garbage data. I'm not just talking about the interfaces and their time-eating, momentum-stopping nature, but about the data actually collected. It is garbage data in the computer science sense of the term.

    The garbage data come in three types. Bloat: Are the details from the 25 previous psychiatric visits part of my decision-making when the patient presents with a frank medical issue? False data: A cursory PubMed search brought up the article, “EHR Documentation: How to Keep Your Patients Safe, Keep Your Hard-Earned Money, and Stay Out of Court” (Innov Clin Neurosci. 2015;12[7-8]:34; http://bit.ly/35CPO6i), which details events such as the EMR prepopulating a negative ROS and negative physical exam when individual elements may actually have been positive, negative, or never examined at all. Hidden data: the worst of all, the kind that no method of data cleanup will ever recover.

    Why did I order the CT? Because the patient had googled something and was now unsettled, and would only rest easy once a more thorough workup was performed. Why did I admit that patient? He was giving me the heebie-jeebies. Did I document that rationale anywhere in the chart? No, and I suspect my colleagues don't either. This means the precise nature of my decision-making is deliberately obscured, which is death for an AI if it happens with any regularity. Obvious rationales, such as significantly abnormal lab values, vital signs, or imaging results, will be picked up by AI, but we don't need computers to help us with that.

    Now consider a recent article that points out how racism in our practice patterns translated into a racist AI algorithm. (“Millions of Black People Affected by Racial Bias in Health-Care Algorithms,” Nature. 26 Oct 2019; https://go.nature.com/37O9i9M.) Any bad practice we are currently implementing will be codified into “the computer said so.” The heart of the matter is that modern EMRs are not designed to be a health delivery or communication tool. They are designed, first and foremost, to enhance billing. Any AI trained on these datasets can only hope to achieve enhanced billing.

    Greg Neyman, MD

    Toms River, NJ

    Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.