Special Report: The Wired ED Closer to Reality

SoRelle, Ruth MPH

doi: 10.1097/01.EEM.0000413153.47313.01


The vision of a wired emergency department is looming ever closer on the medical horizon, and the electronic health record is the bedrock on which it is built. Its success, however, depends on technology that speeds data entry into the electronic health record and on processing that evaluates that information, identifying not only patients' health conditions but also significant trends for the department, the hospital, and the community.

Good information in can mean gold coming out, but, as the technology experts put it, garbage in means nothing better can emerge. Jeffrey Nielson, MD, the chair of the emergency medical informatics section of the American College of Emergency Physicians and a practicing emergency physician in Ohio, has focused his attention on the problems of getting data in.

Until recently, he said, hospitals typically paid transcriptionists to listen to voice dictation and convert it into words on a page (or an electronic screen). It was an effective but costly process. Now hospitals have started using voice recognition software, the gold standard being Dragon NaturallySpeaking.

“The software really doesn't add anything we didn't have before,” Dr. Nielson said. “The paid transcriptionist could take voice into text. Then someone wrote software that might be able to do that automatically. It kind of works, and it kind of doesn't.”

His hospital changed to voice recognition software about a year ago. Transcriptionists were replaced with computers that did the input through voice recognition, and then humans checked the transcript for accuracy.

“We paid fewer cents per line,” said Dr. Nielson. “For us, as emergency physicians, it was seamless, but the quality suffered immensely. The computer can make horrible mistakes, and the people who read through it often do not have the eye for detail that physicians and trained transcriptionists had.”

Recently, he was reading through a chart that made the startling assertion that a woman had presented with “a six-inch vagina on her face.” Obviously, there was a lapse between what the doctor said and what appeared on the computer screen.

“The second way that people use it is that they have the physician editing in real time while they are dictating,” said Dr. Nielson. “Our emergency department started that three months ago.”

The economics of the decision seemed clear: the hospital could pay $1 million a year for transcription services or invest $250,000 in the hardware and license for voice recognition software, plus 10 percent of that each year to maintain its licenses.

“So far, it's been rough,” said Dr. Nielson. “My doctors complained all the time. One major complaint is that the computer does not understand the context of their words. If you say something that is not related to the history of the present illness, a person would be smart enough to know what I would be talking about. The problem is the voice recognition system doesn't know what it's doing. It predicts the word based on sound. There is not a step where it thinks about what it is writing. The grammar has mistakes. I don't make grammar mistakes.”

Dr. Nielson said he wants companies to develop a system that has more understanding of the topic. The voice recognition system his hospital uses is incorporating some medical terminology, but that is only half the battle, he said. “The voice recognition people realize that without understanding the context of the information, they cannot accurately try to pick out the words from what the system ‘hears.’”

The physicians at his hospital are struggling, with dictation taking twice as long as it did with transcriptionists. They compensate by shortening the content they provide, he said, noting that “adding extra words costs them effort.”

Most physicians do not want to type in their information, and some even lack the skills to do so. Even in this computer age, Dr. Nielson said he is amazed by the number of physicians who continue to hunt and peck on the keyboard. Part of the process means teaching the computer to recognize what the physician dictates. If the software makes an error, the physician can delete the comment and repeat it. “If it cannot get it by telling it once or twice, then you type it in yourself,” he said. “You teach it as you go. It feels as though you are working with someone who doesn't really know English, not just medical terminology.”

Dr. Nielson said that although much progress has been made in voice recognition over the past decade, little improvement has been seen over the past five years. “While there might have been small improvements, its ability to improve as it goes along is stagnant.”

The system does save money, but he said that comes “at the expense of physicians, residents, and anyone else doing documentation.” He said his own emergency department may have gotten over the hump, but he still hopes for improvements in the future. Dr. Nielson said physicians will continue to document the visit, but “really good voice recognition software … might even be able to transcribe what the patient is saying. Why do we lose out on that important data?” he asked.

Alternatives such as scribes also add cost and another layer of abstraction. “They will say they don't interpret, but they do,” he said. “The patient says his belly hurts, and they circle abdominal pain. That's an abstraction.”

Voice recognition is easier to use in some situations, such as when the ED is not particularly busy, which allows doctors to spend more time on documentation. “If the doctors don't want to learn to type, voice recognition is pretty useful,” said Dr. Nielson. “I've heard of physicians in other parts of the hospital who don't open their own email. The secretary prints it out for them, and they dictate a response.”

Dr. Nielson isn't ready to give up on the software, saying that he believes in the technology and encourages other doctors to keep using it. “Using the medical version is better,” he said. “If I tell it I'm doing emergency department dictation, it will be using word frequencies for that. In that case, it does capture the context. If I'm in my office doing email, it uses standard office English.”

Having the right tools to extract data from the chart once it is dictated is key to getting the record corrected, he said. “It is best to have completely coded data where everything is in individual discrete data elements. But people think in terms of fuzzy data pieces. Pulling discrete data elements out of natural speech is difficult because we hedge all the time.”
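The gap Dr. Nielson describes between discrete data elements and hedged natural speech can be made concrete with a minimal, purely illustrative sketch. The hedge lexicon and the `extract` helper below are hypothetical stand-ins, far cruder than anything a real clinical NLP system would use:

```python
import re

# Hedged clinical prose, with a discrete element ("pneumonia") buried inside it.
note = "Patient likely has pneumonia; cannot rule out early sepsis."

# A tiny, illustrative hedge lexicon -- real systems use far larger ones
# and consider where the hedge sits relative to the concept.
HEDGES = ["likely", "possible", "probable", "cannot rule out", "suspected"]

def extract(note, concept):
    """Return the concept as a discrete element plus a crude certainty flag."""
    if not re.search(re.escape(concept), note, re.IGNORECASE):
        return None
    # Crude: any hedge word anywhere in the note marks the concept as hedged.
    hedged = any(h in note.lower() for h in HEDGES)
    return {"concept": concept, "certainty": "hedged" if hedged else "asserted"}

print(extract(note, "pneumonia"))
# -> {'concept': 'pneumonia', 'certainty': 'hedged'}
```

Even this toy shows the problem: the physician's hedging ("likely," "cannot rule out") has to be captured alongside the concept, or the discrete element misrepresents what was actually said.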

On the other side is natural language processing, a form of artificial intelligence concerned with understanding and generating the language humans use naturally, allowing people to interact with computers in ordinary language rather than computer code. Human language is frequently vague, however, and computers must be programmed to handle those ambiguities. If that can be accomplished, mining the medical record can yield important information, much of it part of the meaningful use standards currently favored by the U.S. Department of Health and Human Services.

A recent report by Harvey J. Murff, MD, MPH, of the Tennessee Valley Healthcare System and the Veterans Affairs Medical Center in Nashville, and colleagues compared natural language processing with administrative data codes for identifying postoperative complications in the electronic medical record. (JAMA 2011;306[8]:848.) They found that natural language processing had higher sensitivity and lower specificity in identifying postoperative events such as acute renal failure requiring dialysis, deep vein thrombosis, pulmonary embolism, sepsis, pneumonia, and myocardial infarction.

A second paper by two authors of the JAMA report examined natural language processing as part of biosurveillance for influenza. (Ann Intern Med 2012;156[1 Pt 1]:11.) Investigators at the Mayo Clinic evaluated the complete records of 17,243 patients tested for influenza A or B virus between January 2000 and December 2006 and compared the findings with cases identified from the chief complaint alone. Natural language processing was more effective in identifying flu cases.

“There is always the debate. There are also some folks who will say ideally everything should be entered in a structured format — checked boxes or drop-down lists,” said Dr. Murff. “I don't think you would ever get a health care buy-in.”

Structured entry of information is faster, and most systems use templates that work for parts of the medical examination, such as the review of systems. But when the physician and patient review the history of the present illness, it is more difficult to make a drop-down menu work. “Almost anyone in medicine will tell you there is no way to make a drop-down menu for that,” Dr. Murff said.

Some of the issues that arise are amusing, though they point out serious flaws in the system. “We were looking for patients admitted to the hospital for a surgical procedure who had developed sepsis,” he said. “We constructed algorithms looking for the term septic as well as looking at syntax issues. We were doing a review of false positive and negative hits. In one case where the computer had identified sepsis, there was a written note that the man was a retired septic tank repairman.”
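The septic tank anecdote shows why a bare keyword match fails and why the team looked at syntax. As a minimal sketch (these two toy notes and the negative-lookahead rule are illustrative, not the investigators' actual algorithm):

```python
import re

# Toy notes: the second reproduces the false positive described above.
notes = [
    "Postoperative day 2: patient febrile, hypotensive, concern for septic shock.",
    "Social history: patient is a retired septic tank repairman. Afebrile, doing well.",
]

# Naive approach: flag any note containing the word "septic".
def naive_flag(note):
    return bool(re.search(r"\bseptic\b", note, re.IGNORECASE))

# Slightly smarter: skip "septic" when it is immediately followed by "tank" --
# a crude stand-in for the syntax checks the investigators describe.
def context_flag(note):
    return bool(re.search(r"\bseptic\b(?!\s+tank)", note, re.IGNORECASE))

for note in notes:
    print(naive_flag(note), context_flag(note))
```

The naive matcher flags both notes; the context-aware version correctly ignores the occupational mention while still catching "septic shock."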

Sometimes accuracy is sacrificed for efficiency when the record tries to capture more complicated activities, Dr. Murff said, and technical hurdles abound during the early stages. “The initial processing of documents is data-intensive, but then you can come up with the query you want with an SQL interface. Even though we had specific outcomes we were after, we designed an algorithm, and we realized that there was a lot of flexibility in looking for different things,” he said.
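The workflow Dr. Murff describes, data-intensive processing up front and then flexible querying through an SQL interface, can be sketched with a toy schema. The table and column names below are hypothetical, not the team's actual design:

```python
import sqlite3

# Hypothetical schema: one row per concept the NLP pipeline has already
# extracted from a document, with a flag for negated mentions.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE concepts (
    doc_id INTEGER, concept TEXT, section TEXT, negated INTEGER)""")
conn.executemany(
    "INSERT INTO concepts VALUES (?, ?, ?, ?)",
    [
        (1, "sepsis", "assessment", 0),
        (2, "sepsis", "assessment", 1),  # "no evidence of sepsis"
        (3, "deep vein thrombosis", "assessment", 0),
    ],
)

# Once the heavy processing is done, changing the question is just a new
# query: here, documents where sepsis is asserted rather than ruled out.
rows = conn.execute(
    """SELECT doc_id FROM concepts
       WHERE concept = 'sepsis' AND negated = 0"""
).fetchall()
print(rows)  # -> [(1,)]
```

This is the flexibility he notes: after the expensive extraction step, looking for a different outcome is a one-line change to the query, not a rerun of the pipeline.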

Many institutions still do not have a functional electronic health record, a real barrier to fine-tuning these tools, but improved processes could speed adoption, Dr. Murff said. “We take it for granted how we use our language. A lot of time when a heart attack or myocardial infarction showed up, it was referred to in the family history or past history. There are abbreviations that we are not sure we are capturing.

“Folks are working on machine learning techniques in which a human reviewer could annotate small sections of documents and teach the computer to do a larger workload of validating. Things are being developed to make those easier. The biggest barrier would be an institution relying on paper,” he said.




Dr. Leap on EMRs

“We're drowning in data. Crushed by clicks. Smothered by unnecessary fields. Slogged down and bogged down by acting as secretaries instead of physicians,” says Dr. Edwin Leap in his Second Opinion column this month. Read about one doctor's experience with EMRs on p. 14.

© 2012 Lippincott Williams & Wilkins, Inc.