Our world is investing heavily in health information systems and “Big Data.” With computing power that was unimaginable a decade ago, it is now possible to sift through terabytes of data in what seems like a flash. The airline and financial industries have for decades been among the leaders in this arena, serving customers more efficiently and squeezing the most profit from their businesses. On a personal level, in seconds I can review all the transactions in my checking account over the past few years and can even see photos of specific checks I have written, a useful service.
In contrast, the world of medicine, medical practice in particular, has lagged well behind other industries in all aspects of information technology.
There are two major uses of information technology in medicine that I will discuss here: (1) the use of the technology as a more efficient tool for managing patient care and cutting the costs of care compared with the traditional transfer of information by charts and paper; and (2) the mining of useful information from the enormous mounds of “data” generated in hospitals and practices, which may point to opportunities for quality improvement and comparative effectiveness.
Although there is a slow but steady adoption of electronic medical records (EMR) in daily patient care, there are several hard-won lessons from the experiences of hospitals and private practices over the years.
1. Installation of a well-functioning medical record system usually costs far more than promised. It is not like installing a new version of a Mac or Windows operating system. In addition, there are major “hidden” costs of the project that, of course, do not appear on the cost estimate from the vendor: the major commitment of institutional manpower needed; the changes in the scope of the project, usually by the client; and a decrease in physician productivity by as much as 30 percent in the first several months. Also, predictions by RAND that digital health records would save money or improve outcomes have thus far failed to come to pass (Abelson and Creswell, NY Times 10 Jan 2013).
2. A major cause of failure of an EMR system is the lack of buy-in by physicians and other staff. The problems are that change is always difficult and initially, physicians and staff will spend much more time creating and searching an EMR than they did with paper records. If one's income depends on the number of patients one sees, a 30 percent decrease in productivity can be discouraging.
3. Once you sign up with a vendor, you are essentially “married,” for better or worse, for richer or poorer. Although the services and responsiveness of vendors have improved, not too many years ago a common tactic was to send all their top people to dazzle the client into signing up, but before the ink was dry the second or third string was put on the project. It is a case of caveat emptor, buyer beware. At one time the EMR industry was notorious for finding ways to charge for every little change and every call for help. If the client loses trust in the vendor, he may choose to dump them (divorce!) and hire another, at a considerable financial loss. Clients have since become more sophisticated, and vendors that provide excellent service and responsiveness are gradually winning a greater share of the business.
4. For all its warts, the EMR is not only here to stay, but in the not too distant future you won't be able to practice medicine and bill for services without it. Imagine my surprise when I learned that the IRS would no longer accept my paper tax return and would accept only an electronic one; there are exceptions for now, but it is clear where this is headed. The same will ultimately be true for exchanging or reporting clinical information and for billing.
“Big Data” has been an approach in medicine, at least conceptually, for quite some time. In the past it required going through endless paper records of, say, all patients with Stage 2 breast cancer. Back then, as now in the digital age, the premise is that if one can retrieve and sort clinical data for specific diagnoses, one can learn how most such patients are treated and what the outcomes are. Clinical trials enroll only a small percentage of patients, and because the qualifications for entry exclude many, the patients enrolled are not a representative sample of the entire patient population. This is OK when testing a scientific hypothesis, but not when trying to understand how care is given and how effective it is in ordinary practice circumstances.
The most recent, and for us oncologists the most interesting, approach is CancerLinQ, a program of the American Society of Clinical Oncology. The plan is to collect vast amounts of data from practices with EMRs that include prognostic factors and the treatment actually given, not just prescribed, such as naming a common therapy combination (e.g., MOPP, BACOP, etc., which reveals my age). The idea would be to sift and arrange the data in a format that physicians may search for patients with a specific stage, pathology, molecular features, and therapy administered.
Of course, connecting to the outcomes will be the key to understanding whether a particular approach is wise to consider for one's own patient. ASCO has already gathered data from thousands of patients and is working on the interface and organization of the data to make access useful and intuitive.
There are many potholes in the road ahead for this project, such as ensuring the reliability of the data, confirming that chemotherapy and radiation therapy doses are reported as what was given and not just prescribed, and determining how the data is filtered to make it useful. But with perseverance and lots of money, that can be done.
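To make the idea concrete, here is a minimal sketch, in Python, of the kind of query such a system might ultimately support. Every field name, value, and the filtering rule itself are invented for illustration; this is not CancerLinQ's actual data model or interface.

```python
# Hypothetical sketch of a CancerLinQ-style case search.
# All field names and values are invented for illustration only.

def find_similar_cases(records, stage, histology, regimen):
    """Return records matching a stage, histology, and administered regimen.

    Only records flagged as reporting the dose actually administered (not
    merely prescribed) are kept, reflecting the reliability concern above.
    """
    return [
        r for r in records
        if r["stage"] == stage
        and r["histology"] == histology
        and r["regimen_given"] == regimen
        and r["dose_reported_as_administered"]
    ]

# Toy data: three fictional breast cancer records.
records = [
    {"stage": "II", "histology": "ductal", "regimen_given": "AC-T",
     "dose_reported_as_administered": True, "outcome": "complete response"},
    {"stage": "II", "histology": "ductal", "regimen_given": "AC-T",
     "dose_reported_as_administered": False, "outcome": "unknown"},
    {"stage": "III", "histology": "lobular", "regimen_given": "TC",
     "dose_reported_as_administered": True, "outcome": "partial response"},
]

matches = find_similar_cases(records, stage="II", histology="ductal",
                             regimen="AC-T")
print(len(matches))  # 1: the record with an unverified dose is excluded
```

The point of the sketch is the last filter: linking cases to outcomes is only trustworthy if the data reflect what was actually administered, which is exactly the pothole described above.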
This approach is part of the “Rapid Learning” movement, often identified with and popularized by Lynn Etheredge, a writer and consultant. His works describe in detail the approach for making good use of collected information that is usually buried in some medical records system, rarely to be seen again. The Institute of Medicine has endorsed the Big Data approach as a useful, and maybe necessary, method of informing the way forward in managing patients.
But I would be cautious and learn from history. caBIG, the NCI's ill-fated attempt to develop a huge common platform for all clinical research activities, is a lesson in hubris. The tons of money and time sacrificed, and the many opportunities lost, are a haunting example of reaching too far despite the deep pockets of the NCI; blind faith in technology to solve problems is dangerous.
The Lessons of Pediatric Oncology
While I don't want to rain on anyone's parade, I would also like to make an important historical point: pediatric oncology has, in effect, been using a Rapid Learning system for many decades. A majority of pediatric cancer patients have been treated according to a protocol—particularly those with leukemia, lymphoma, and neuroblastoma, but other cancers as well. And because almost all are treated in institutions that belong to the pediatric cooperative group(s), which usually require registration of all patients whether treated by protocol or not, progress has been made largely by reviewing recent experience and modifying protocols empirically.
I will underscore this point by informing you that the last front-line chemotherapy agents for childhood leukemia were introduced in the early 1970s, when the cure rate nationally was about 20 to 30 percent. The most recent treatments have resulted in cure rates of 80 to 85 percent or more. This was accomplished by manipulating doses and treatment schedules and by adapting therapy to the initial prognostic features of patients—all this without any new frontline therapeutic agents.
So I would guess that pediatric oncologists not only would support CancerLinQ as a good thing, but they might also ask, “What took you so long?”