The increasing adoption of electronic health records (EHRs) and their “meaningful use” offer great promise to improve the quality, safety, and cost of health care.1 EHR adoption also has the potential to enhance our collective ability to advance biomedical and health care science and practice through the reuse of clinical data.2–4 This investment sets the foundation for a “learning” health care system that facilitates clinical research, quality improvement, and other data-driven efforts to improve health.5,6
At the same time, there has also been substantial federal investment in comparative effectiveness research (CER) that aims to study populations and clinical outcomes of maximal pertinence to real-world clinical practice.7 These efforts are facilitated by other investments in research infrastructure, such as the Clinical and Translational Research Award (CTSA) program of the US National Institutes of Health.8 Many institutions funded by CTSA awards are developing research data warehouses of data derived from operational systems.9 Additional federal investment has been provided by the Office of the National Coordinator for Health Information Technology (ONC) through the Strategic Health IT Advanced Research Projects (SHARP) Program, with 1 of the 4 major research areas focusing on reuse of clinical data.10
A number of successes have already been achieved. Probably the most concentrated success has come from the Electronic Medical Records and Genomics (eMERGE) Network,11 which has demonstrated the ability to validate existing research results and generate new findings mainly in the area of genome-wide association studies that associate specific findings from the EHR (the “phenotype”) with the growing amount of genomic and related data (the “genotype”).12 Using these methods, researchers have been able to identify genomic variants associated with atrioventricular conduction abnormalities,13 red blood cell traits,14 white blood cell count abnormalities,15 and thyroid disorders.16
Other researchers have also been able to use EHR data to replicate the results of randomized controlled trials (RCTs). One large-scale effort has come from the Health Maintenance Organization Research Network’s Virtual Data Warehouse (VDW) Project.17 Using the VDW, for example, researchers were able to demonstrate a link between childhood obesity and hyperglycemia in pregnancy.18 Another demonstration of this ability has come from the longitudinal records of general practitioners in the United Kingdom. Using these data, Tannen and colleagues were able to demonstrate the ability to replicate the findings of the Women’s Health Initiative19,20 and RCTs of other cardiovascular diseases.21,22 Likewise, Danaei et al23 were able to combine subject-matter expertise, complete data, and statistical methods emulating clinical trials to replicate RCTs demonstrating the value of statin drugs in primary prevention of coronary heart disease. In addition, the Observational Medical Outcomes Partnership has been able to apply risk-identification methods to records from 10 different large health care institutions in the United States, although with a moderately high sensitivity versus specificity tradeoff.24
However, routine practice data are collected for clinical and billing uses, not research. The reuse of these data to advance clinical research can be challenging. The timing, quality, and comprehensiveness of clinical data are often not consistent with research standards.3 Research assessing information retrieval (search) systems to identify candidates for clinical studies from clinical records has shown many reasons not only why appropriate records are not retrieved but also why inappropriate ones are retrieved.25
A number of authors have explored the challenges associated with the use of EHR data for clinical research. A review of studies evaluating the data quality of EHRs for clinical research identified 5 dimensions of data quality assessed: completeness, correctness, concordance, plausibility, and currency.26 The authors identified many studies with a wide variety of techniques to assess these dimensions and, similar to previous reviews, a wide divergence of results. Other analyses have highlighted the potential value, but also the cautions, of using EHR data for research purposes.4,27
In this paper, we describe the caveats of using operational EHR data for CER and provide recommendations for moving forward. We discuss a number of specific caveats for use of EHR data for clinical research generally, with the goal of helping CER and other clinical researchers address the limitations of EHR data. We then provide an informatics framework that provides a context for better understanding of these caveats and providing a path forward toward improving data in clinical systems and their effective use.
The intuitive appeal of reusing large volumes of existing operational clinical data for clinical research, quality measurement and improvement, and other purposes for improving health care is great. Although the successes described above are noteworthy, a growing body of literature and our own analysis remind us that there are many informatics challenges and caveats associated with such approaches. Biases may be introduced at several steps along the process of the patient receiving care, including having it documented, billed for, and processed by insurers.28
Under the following headings, we describe some specific caveats that have been identified either from research or from our own observations about clinical data. In the last caveat, we further delineate issues of “data idiosyncrasies” for special attention. We view coded data as part of the EHR and, as such, include the clinical portions of administrative databases within our notion of using the EHR for clinical research, as many of the same caveats apply to that sort of data. In short, we view the entire collection of data on the patient as the EHR and recognize the caveats for its use in CER.
Caveat #1: EHRs May Contain Inaccurate (or Incorrect) Data
Accuracy (correctness) of data relies on correct and careful documentation, which is not always a top priority for busy clinicians.29 Errors in EHR records can be produced at any point. For example, data entry errors were demonstrated in a recent analysis in the English National Health Service, where yearly hospital statistics showed approximately 20,000 adults attending pediatric outpatient services, approximately 17,000 males admitted to obstetrical inpatient services, and about 8000 males admitted to gynecology inpatient services.30 Although the admission of males to obstetrical units was explained by male newborns, the other data remain more problematic and difficult to explain.31 In addition, a large sample of United States records showed that 27% of patients who were emergently intubated in an emergency department (ED) were dispositioned either as discharged or admitted to noncritical care units, a highly unlikely outcome.32 One systematic review identified 35 studies assessing data quality for reliability and validity of quality measures from EHR data.33 These studies were found to have tremendous diversity in data elements, study settings, populations, clinical conditions, and EHR systems. The authors called for further research to focus on the quality of data from specific components in the EHR and to pay attention to the granularity, timeliness, and comparability of data. A more recent analysis assessed how the EHR systems of 4 known national leaders in EHR implementation would be able to use their data for CER studies on the treatment of hypertension. Researchers at each institution determined which data elements were necessary and whether and how they might be extracted from their EHR. The analysis found 5 categories of reasons why the data were problematic. These included data that were missing, erroneous, uninterpretable, inconsistent, and/or inaccessible in text notes.34
Caveat #2: EHRs Often Do Not Tell a Complete Patient Story
EHRs, whether those of a single institution or aggregated across institutions, do not always tell the whole story; that is, patients may get care in different health care organizations or are otherwise lost to follow-up. Some estimate of the potential consequences of this incomplete picture can be gleaned from recent studies that have assessed data availability for health information exchange. One study of 3.7 million patients in Massachusetts found 31% visited ≥2 hospitals over 5 years (57% of all visits) and 1% visited ≥5 hospitals (10% of all visits).35 Another analysis of 2.8 million ED patients in Indiana found that 40% of patients had data at multiple institutions, with all 81 EDs sharing patients in common to create a completely linked network.36 In addition, a study aiming to identify patients with type 2 diabetes mellitus (DM) found that searching data from 2 medical centers in Minnesota had better predictive power than from a single center alone.37 These same researchers also found that the ability to identify DM successively increased as the time frame of assessing records was increased from 1 through 10 years of analysis.38
Other studies have shown that data recorded in a patient’s record at a single institution may be incomplete. Two systematic reviews have assessed the quantity of data that were needed for clinical research but were unavailable from EHR sources. The first assessed studies on the scope and quality of data through 2001.39 A second review focused on the use of EHR data for outcomes research through 200740 and identified 98 studies. In 55% of the studies, additional non-EHR sources of data were also used, suggesting that EHR data alone were not sufficient to answer the investigators’ questions. About 40% of the studies supplemented EHR data with patient-reported data.
As examples of other studies on data completeness, at a New York academic medical center, 48.9% of patients with an ICD-9-CM code for pancreatic cancer did not have corresponding disease documentation in pathology reports, with many data elements incompletely documented.41 In another study comparing data from a specialty-specific EHR from community oncology clinics against data from the Surveillance Epidemiology and End Results cancer registry and 2 claims databases (Medicare and commercial claims), significant proportions of data were missing from the EHR for race (40%) and tumor stage (63%).42
Evidence exists that there is significant variability in the quality of EHR data. One study, for example, found that relying solely on discrete EHR data, as opposed to data manually abstracted from the electronic record including text fields, led to persistent undercapture of clinical quality measures in a New York outpatient setting, with great variation in the amount of undercapture based on variation in clinical workflow and documentation practices.43
Additional studies have evaluated EHR data for quality measurement purposes. For example, different ways of calculating adverse drug event rates from a single institution’s EHR were associated with significantly different results.44 Likewise, quality metrics using EHR data required substantial validation to ensure accuracy.45 An additional analysis compared a manually abstracted observational study of community-acquired pneumonia to a fully EHR-based study without manual data abstraction.46 In the pure EHR study, mortality in the healthiest subjects seemed to exceed mortality in the sicker subjects due to several biases, including incomplete EHR data entry on patients who died quickly in the ED.
Determining the timing of a diagnosis from clinical data is also challenging. Not every diagnosis is recorded at every visit, and the absence of evidence is not always evidence of absence. This is just 1 example of a concern known by statisticians as censoring.47 Left censoring is the statistical property where events before the start of an observation are missed, or their timing is not known with certainty. The result is that the first appearance of a diagnosis in an electronic record may not be the incident occurrence of the disease. Related to the notion of left censoring is right censoring, which refers to missing the occurrence of events that appear after the end of the interval under observation. Although many clinical data warehouses may have millions of patients and cover many years of activity, patient turnover can be very high and individual patients in the warehouse may only have a few years’ worth of longitudinal data. The implication of patient turnover is that, for exposure-outcome pairs that take years to develop, groups of individual patients may not have a sufficiently long observation period to ascertain the degree of association. Even one of the most well-defined outcomes—death—may not be recorded in an EHR if the fatal episode occurred outside the institution.
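The interplay between a patient’s observation window and the true timing of disease onset can be made concrete with a small sketch. The patients, dates, and windows below are hypothetical and purely illustrative, not drawn from any study cited here:

```python
from datetime import date

# Hypothetical patients: true disease onset vs. the window during which
# each patient's care was captured in a single institution's EHR.
patients = [
    {"id": "A", "onset": date(2005, 3, 1),
     "window": (date(2008, 1, 1), date(2012, 1, 1))},   # onset predates window
    {"id": "B", "onset": date(2010, 6, 1),
     "window": (date(2008, 1, 1), date(2009, 12, 31))}, # onset after window ends
    {"id": "C", "onset": date(2009, 6, 1),
     "window": (date(2008, 1, 1), date(2012, 1, 1))},   # onset within window
]

def classify(p):
    """Classify how the observation window censors the onset event."""
    start, end = p["window"]
    if p["onset"] < start:
        # Left censoring: the first EHR mention is NOT the incident occurrence.
        return "left-censored"
    if p["onset"] > end:
        # Right censoring: the event falls after observation ends and is missed.
        return "right-censored"
    return "observed"

for p in patients:
    print(p["id"], classify(p))
```

Only patient C’s first recorded diagnosis can safely be treated as incident disease; A’s record understates disease duration and B’s record misses the event entirely, which is why short per-patient observation periods undermine studies of slowly developing exposure-outcome pairs.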
Caveat #3: Many of the Data Have Been Transformed/Coded for Purposes Other Than Research and Clinical Care
The most commonly known problematic transformation of data occurs when data are coded, often for billing purposes. Although the underlying data may not be missing from the medical record, they are often inaccessible, either because they are paper-based or because the electronic data are, for whatever reason, not available to researchers. This leads many researchers to rely solely on administrative (ie, “claims”) data, which a great deal of research has found to be problematic. Errors can be introduced in the clinical coding process for many reasons along the pathway of a patient’s hospitalization from admission to discharge.48 These include inadequate or incomplete documentation, lack of access to information by clinicians and/or coders, illegible writing, suboptimal training and experience of the coder, upcoding for various reasons, inadequate review by the clinician, and errors made by anyone involved in the process.
In the early 1990s, 1 study reported that claims data lacked important diagnostic and prognostic information on patients admitted for cardiac catheterization; this information was contained in the medical record.49 Later in the decade, another investigator assessed administrative data for quality measurement, finding that coding based on ICD-9-CM did not provide any clinical description beyond each code itself, including any prognostic indicators, or capture problems typical to outpatient settings, such as functional, socioeconomic, or psychosocial factors.50 More recent studies have documented disparities between claims and EHR data in surgical conditions,51 children with pharyngitis,52 pediatric EDs,53 and patients with DM.54 One recent study of patients with hospital-acquired, catheter-associated urinary tract infection, a complication denied payment from Medicare, found that claims data vastly underreported the condition.55
Researchers have also looked at coding completeness of patient assessments or their outcomes for clinical research purposes. In 1 Texas academic practice, billing data alone only identified 22.7% and 52.2%, respectively, of patients with endometrial and breast cancer, although this increased to 59.1% and 88.6%, respectively, with use of other data and a machine learning algorithm.56 A similar study found somewhat better results for identifying patients with myocardial infarction, ischemic stroke, and severe upper gastrointestinal bleed events, with improvements also seen upon refinement of the selection algorithm.57 Another study attempted to identify patients with metastatic cancer of the breast, lung, colon, and prostate using algorithmic models from claims data, finding that acceptable positive predictive value and specificity could be obtained but was only possible as a tradeoff with sensitivity.58 An additional study found that coding tended to be better at identifying variance in utilization, whereas patient reporting was better for disease burden and emotional symptoms.59
Sometimes changes in coding practices inadvertently imply clinical differences where they may not exist. For example, in a national sample of coded data, it was noted that hospitalization and inpatient mortality rates for patients with a diagnosis of pneumonia decreased, whereas hospitalizations with a diagnosis of sepsis or respiratory failure along with a secondary diagnosis of pneumonia increased and mortality declined. This analysis found, however, that when the 3 pneumonia diagnoses were combined, the decline in the hospitalization rate was much smaller and inpatient mortality was barely changed, suggesting the temporal trends were due more to differences in diagnostic coding than care factors.60 Another study in community health clinics in Oregon found variations between claims and EHR data for a variety of services used in quality measures, such as cholesterol screenings, influenza vaccinations, diabetic nephropathy screenings, and tests for hemoglobin A1c. Although some measures were found with claims data only, a much larger proportion were found with EHR data only, especially in patients who were older, male, Spanish speaking, above the federal poverty level, or who had discontinuous insurance.61
Sometimes even improvements in coding can alter reporting. For example, a transition to the more comprehensive ICD-10 coding system in the Centers for Disease Control WONDER database was determined to be the explanation for an increased rate of death from falls in the United States between 1999 and 2007.62 Furthermore, changes in the coding system itself over time can impede comparability of data, as codes undergo “semantic drift.”63 This drift has been shown to go unrecognized by researchers analyzing EHR data collected across multiple years.64
Caveat #4: Data Captured in Clinical Notes (Text) May Not Be Recoverable for CER
Many clinical data are “locked” in narrative text reports.65 This includes information-rich sources of data from the initial history and physical report through radiology, pathology, and operative and procedural reports to discharge summaries. It may also include the increasingly used summaries of care.66 One promising approach for recovering these data for research is natural language processing (NLP).67 This approach has been most successful when applied to the determination of specific data elements, such as the presence of a diagnosis or treatment. For example, the eMERGE studies described above used NLP to identify the presence of specific phenotype characteristics of the patient.12 However, although the state of the art for performance of NLP has improved dramatically over the last couple of decades, it is still far from perfect.68 Furthermore, we do not really know how good is “good enough” for NLP in data reuse for clinical research, quality measurement, and other purposes.69
Caveat #5: EHRs May Present Multiple Sources of Data That Affect Data Provenance
Another critical issue that contributes to the difficulty of reusing operational data for research purposes is data provenance, which is the understanding of the authoritative or definitive source(s) of a given measure or indicator of interest, given the existence of multiple potential sources for such a variable (ie, “knowing where your data come from”). Data provenance is concerned with establishing and systematically using a data management strategy that ensures that definitive findings are derived from multiple potential source data elements in a logical and reproducible manner.70 For example, given a scenario where we would like to determine if a patient has received a given medication, there may be multiple possible data sources, namely: (1) order entry data, which may indicate an intent to give a medication to a patient; (2) pharmacy data, which may indicate the availability of the given medication for administration to a patient; (3) the medication administration record; and (4) medication reconciliation data, which aims to reconcile what a patient is supposed to receive and actually receives. Unfortunately, none of these elements indicate the ground truth of medication administration, but rather serve as surrogate measures for such ground truth (eg, there is not a single variable that directly measures or otherwise indicates the physical administration of the medication in question).71 An example of this is illustrated in Figure 1.
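One way to make the provenance problem concrete is to sketch it as a precedence policy over surrogate sources: a query resolves each variable from the highest-ranked source that has data and records which source supplied the answer. The source names, values, and ordering below are our assumptions for illustration; any real policy would be institution specific and the result would remain a surrogate, not ground truth:

```python
# Hypothetical surrogate sources for "did the patient receive drug X?".
# None directly measures administration; each is only an indirect signal.
sources = {
    "order_entry": True,        # intent to give the medication
    "pharmacy_dispense": True,  # drug made available for administration
    "mar": None,                # medication administration record missing
    "med_reconciliation": True, # reported at medication reconciliation
}

# Assumed policy: trust sources in order of closeness to the bedside.
PRECEDENCE = ["mar", "pharmacy_dispense", "med_reconciliation", "order_entry"]

def resolve(sources):
    """Return (surrogate answer, name of the source that supplied it)."""
    for name in PRECEDENCE:
        value = sources.get(name)
        if value is not None:
            return value, name
    return None, None

value, provenance = resolve(sources)
print(value, provenance)  # prints: True pharmacy_dispense
```

Recording the provenance alongside the value is what makes the derivation reproducible: two researchers applying the same policy to the same records will not only get the same answer but will also know which surrogate produced it.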
Caveat #6: Data Granularity in EHRs May Not Match the Needs of CER
Data granularity is the level of detail or specificity used to encode or otherwise describe a measure or indicator of interest (ie, “knowing what your data mean”). At a base level, this issue is important due to the wide variation in data granularity that results from the various reasons for data capture. For example, diagnostic codes assigned for billing purposes may, due to regulatory and documentation requirements, be generalized to a broad class of diagnosis (eg, a patient with a set of complex cytogenetic and morphologic indicators of a preleukemic state would be described as having “myelodysplastic syndromes” for billing purposes—an indicator of a broad set of such conditions, rather than a specific subset). In contrast, data collected for the purposes of clinical subspecialties, intended to elucidate the etiology or contributing factors surrounding the initial diagnosis or treatment planning for a disease state, may be highly granular and provide detailed descriptions of data types and subtypes that contribute to an overall class of diagnosis (eg, extending the previous example, a research study might include specific variables corresponding to the cytogenetic and morphologic indicators underlying an overall diagnosis of myelodysplastic syndromes).
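A small sketch shows why this matters for CER: the mapping from granular clinical findings to billing codes is many-to-one, so the billing code alone cannot recover the underlying detail. The diagnosis strings and ICD-9-CM-style codes below are illustrative assumptions, not an authoritative coding reference:

```python
from collections import defaultdict

# Illustrative many-to-one mapping from granular MDS subtypes to billing codes.
GRANULAR_TO_BILLING = {
    "refractory anemia with ring sideroblasts": "238.72 (MDS, low grade)",
    "refractory cytopenia with multilineage dysplasia": "238.72 (MDS, low grade)",
    "MDS with isolated del(5q)": "238.74 (MDS with 5q deletion)",
}

# Inverting the mapping shows the information loss: one billing code
# covers several clinically distinct findings.
billing_to_granular = defaultdict(list)
for granular, billing in GRANULAR_TO_BILLING.items():
    billing_to_granular[billing].append(granular)

print(len(billing_to_granular["238.72 (MDS, low grade)"]))  # prints: 2
```

A researcher selecting a cohort on the billing code alone would mix the two low-grade subtypes, whereas a study of, say, ring sideroblast biology would need the granular subspecialty data that billing workflows never capture.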
Caveat #7: There Are Differences Between Research Protocols and Clinical Care
The research and theoretical concerns cited above show that there are many documented challenges to the reuse of clinical data for research that are related to the data themselves. There are also differences in methods and purposes between clinical care and research. Research protocols tend to be highly structured: inclusion and exclusion criteria are strictly defined, data collection is thorough and rigorous, treatment assignment is often randomized, follow-up visits are scheduled at prespecified intervals, and medication use is closely monitored. Clinical care, in contrast, is geared toward patient needs. Treatments are assigned based on clinical impression of benefit balanced with patient wishes, data collection is limited to what the clinician believes is necessary, follow-up visits are scheduled at varying intervals depending on clinical and nonclinical factors, and assessments of patient preferences are inconsistent at best. Many common “idiosyncrasies” in clinical data are enumerated and described in Table 1.
INFORMATICS FRAMEWORK FOR ADDRESSING CAVEATS
To address the full gamut of caveats related to the reuse of clinical data for CER and other types of research, it is helpful to have a framework to categorize the major issues at hand. One way to organize these findings is to think along a continuum historically used in biomedical informatics that comprises data, information, and knowledge. Fundamentally, discovery requires the collection of observations (data), making sense of these observations (making the data meaningful, or transforming data into information), and deriving justified true belief (knowledge) based on this information. For example, we can collect data regarding smoking and lung cancer across institutions, map these data to a common standard to ensure that they are compatible (information), and look for correlations to determine whether smoking is associated with lung cancer (knowledge). This provides a structure for understanding the challenges we face in reusing clinical data. Figure 2 shows the caveats and their influences on data, information, and knowledge.
Probably most critical to the success of using EHR data for CER and other types of research is the promotion of policies calling for, mandating, or providing incentives for the universal adoption of standards-based, interoperable health care data, captured seamlessly across the diverse sites where patients receive care. Other sources of data should be accessible as well, such as those in the public health system. Data use may be further enhanced by integrated personal health records and other sources of patient-collected data (eg, sensors). All of these sources, when their use is allowed by appropriate patient consent, will allow us to compare and learn what is truly effective for optimal health and treating disease.
The opportunities for using operational EHR and other clinical data for CER and other types of clinical and translational research are immense, as demonstrated by the studies cited in the introduction to this paper. If used carefully, with assessment for completeness and appropriate statistical transformation, these data can inform not only the health of individuals, but also the function of the larger health care system. However, attention must be paid to the caveats about such data that are raised in this paper. We also hope that the caveats described in this paper will lead the health care system to strive to improve the quality of data, through attention to standards, appropriate health information exchange, and usability of systems that will lead to improved data capture and its use for analysis. Development of a clinical research workforce trained to understand the nuances of clinical data and their analysis, and development of guidelines and practices for optimal data entry, structure, and extraction, should be part of a national research agenda to identify and implement optimal approaches in the use of EHR data for CER.
The authors gratefully acknowledge the support of the NIH NCATS Clinical and Translational Science Award Consortium and its CER Key Function Committee, as well as the feedback and suggestions of Rosemarie Filart, MD, of NCATS.
1. Blumenthal D, Tavenner M.The “meaningful use” regulation for electronic health records.N Engl J Med.2010;363:501–504.
2. Safran C, Bloomrosen M, Hammond WE, et al..Toward a national framework for the secondary use of health data: an American Medical Informatics Association white paper.J Am Med Inform Assoc.2007;14:1–9.
3. Weiner MG, Embi PJ.Toward reuse of clinical data for research and quality improvement: the end of the beginning.Ann Intern Med.2009;151:359–360.
4. Hripcsak G, Albers DJ.Next-generation phenotyping of electronic health records.J Am Med Inform Assoc.2012;20:117–121.
6. Smith M, Saunders R, Stuckhardt L, et al..Best Care at Lower Cost: The Path to Continuously Learning Health Care in America.2012.Washington, DC:National Academies Press.
7. Sox HC, Goodman SN.The methods of comparative effectiveness research.Annu Rev Public Health.2012;33:425–445.
9. MacKenzie SL, Wyatt MC, Schuff R, et al..Practices and perspectives on building integrated data repositories: results from a 2010 CTSA survey.J Am Med Inform Assoc.2012;19(e1):e119–e124.
10. Rea S, Pathak J, Savova G, et al..Building a robust, scalable and standards-driven infrastructure for secondary use of EHR data: the SHARPn project.J Biomed Inform.2012;45:763–771.
11. McCarty CA, Chisholm RL, Chute CG, et al..The eMERGE Network: a consortium of biorepositories linked to electronic medical records data for conducting genomic studies.BMC Med Genomics.2011;4:13. Available at: http://www.biomedcentral.com/1755-8794/4/13. Accessed May 28, 2013.
12. Denny JC.Mining Electronic Health Records in the Genomics Era.In: Kann M, Lewitter F, eds.PLOS Computational Biology: Translational Bioinformatics.2012.San Francisco, CA:Public Library of Science.
13. Denny JC, Ritchie MD, Crawford DC, et al..Identification of genomic predictors of atrioventricular conduction: using electronic medical records as a tool for genome science.Circulation.2010;122:2016–2021.
14. Kullo LJ, Ding K, Jouni H, et al..A genome-wide association study of red blood cell traits using the electronic medical record.PLoS One.2010;5:e13011.
15. Crosslin DR, McDavid A, Weston N, et al..Genetic variants associated with the white blood cell count in 13,923 subjects in the eMERGE Network.Hum Genet.2012;131:639–652.
16. Denny JC, Crawford DC, Ritchie MD, et al..Variants near FOXE1 are associated with hypothyroidism and other thyroid conditions: using electronic medical records for genome- and phenome-wide studies.Am J Hum Genet.2011;89:529–542.
17. Hornbrook MC, Hart G, Ellis JL, et al..Building a virtual cancer research organization.J Natl Cancer Inst Monogr.2005;35:12–25.
18. Hillier TA, Pedula KL, Schmidt MM, et al..Childhood obesity and metabolic imprinting: the ongoing effects of maternal hyperglycemia.Diabetes Care.2007;30:2287–2292.
19. Tannen RL, Weiner MG, Xie D, et al..A simulation using data from a primary care practice database closely replicated the Women’s Health Initiative trial.J Clin Epidemiol.2007;60:686–695.
20. Weiner MG, Barnhart K, Xie D, et al..Hormone therapy and coronary heart disease in young women.Menopause.2008;15:86–93.
21. Tannen RL, Weiner MG, Xie D.Replicated studies of two randomized trials of angiotensin-converting enzyme inhibitors: further empiric validation of the ‘prior event rate ratio’ to adjust for unmeasured confounding by indication.Pharmacoepidemiol Drug Saf.2008;17:671–685.
22. Tannen RL, Weiner MG, Xie D.Use of primary care electronic medical record database in drug efficacy research on cardiovascular outcomes: comparison of database and randomised controlled trial findings.BMJ.2009;338:b81. Available at: http://www.bmj.com/cgi/content/full/338/jan27_1/b81. Accessed May 28, 2013.
23. Danaei G, Rodríguez LA, Cantero OF, et al..Observational data for comparative effectiveness research: An emulation of randomised trials of statins and primary prevention of coronary heart disease.Stat Methods Med Res.2011;22:70–96.
24. Ryan PB, Madigan D, Stang SE, et al..Empirical assessment of methods for risk identification in healthcare data: results from the experiments of the Observational Medical Outcomes Partnership.Stat Med.2012;31:4401–4415.
25. Edinger T, Cohen AM, Bedrick S, et al..Barriers to Retrieving Patient Information From Electronic Health Record Data: Failure Analysis From the TREC Medical Records Track.2012.Chicago, IL:AMIA 2012 Annual Symposium.
26. Weiskopf NG, Weng C.Methods and dimensions of electronic health record data quality assessment: enabling reuse for clinical research.J Am Med Inform Assoc.2012;20:144–151.
27. Overhage JM, Overhage LM.Sensible use of observational clinical data.Stat Methods Med Res.2011;22:7–13.
28. Schneeweiss S, Avorn J.A review of uses of health care utilization databases for epidemiologic research on therapeutics.J Clin Epidemiol.2005;58:323–337.
29. de Lusignan S, vanWeel C.The use of routinely collected computer data for research in primary care: opportunities and challenges.Fam Pract.2005;23:253–263.
30. Brennan L, Watson M, Klaber R, et al..The importance of knowing context of hospital episode statistics when reconfiguring the NHS.BMJ.2012;344:e2432. Available at: http://www.bmj.com/content/344/bmj.e2432. Accessed May 28, 2013.
32. Green SM. Congruence of disposition after emergency department intubation in the National Hospital Ambulatory Medical Care Survey. Ann Emerg Med. 2013;61:423–426.
33. Chan KS, Fowles JB, Weiner JP. Electronic health records and reliability and validity of quality measures: a review of the literature. Med Care Res Rev. 2010;67:503–527.
34. Savitz L, Bayley KB, Masica A, et al. Challenges in using electronic health record data for CER: experience of four learning organizations. J Am Med Inform Assoc. 2012 (in review).
35. Bourgeois FC, Olson KL, Mandl KD. Patients treated at multiple acute health care facilities: quantifying information fragmentation. Arch Intern Med. 2010;170:1989–1995.
36. Finnell JT, Overhage JM, Grannis S. All health care is not local: an evaluation of the distribution of emergency department care delivered in Indiana. Washington, DC: AMIA Annual Symposium Proceedings; 2011:409–416.
37. Wei WQ, Leibson CL, Ransom JE, et al. Impact of data fragmentation across healthcare centers on the accuracy of a high-throughput clinical phenotyping algorithm for specifying subjects with type 2 diabetes mellitus. J Am Med Inform Assoc. 2012;19:219–224.
38. Wei WQ, Leibson CL, Ransom JE, et al. The absence of longitudinal data limits the accuracy of high-throughput clinical phenotyping for identifying type 2 diabetes mellitus subjects. Int J Med Inf. 2013;82:239–247.
39. Thiru K, Hassey A, Sullivan F. Systematic review of scope and quality of electronic patient record data in primary care. BMJ. 2003;326:1070.
40. Dean BB, Lam J, Natoli JL, et al. Review: use of electronic medical records for health outcomes research: a literature review. Med Care Res Rev. 2009;66:611–638.
41. Botsis T, Hartvigsen G, Chen F, et al. Secondary use of EHR: data quality issues and informatics opportunities. San Francisco, CA: AMIA Summits on Translational Science Proceedings; 2010. Available at: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3041534/. Accessed May 28, 2013.
42. Lau EC, Mowat FS, Kelsh MA, et al. Use of electronic medical records (EMR) for oncology outcomes research: assessing the comparability of EMR information to patient registry and health claims data. Clin Epidemiol. 2011;3:259–272.
43. Parsons A, McCullough C, Wang J, et al. Validity of electronic health record-derived quality measurement for performance monitoring. J Am Med Inform Assoc. 2012;19:604–609.
44. Kahn MG, Ranade D. The impact of electronic medical records data sources on an adverse drug event quality measure. J Am Med Inform Assoc. 2010;17:185–191.
45. Benin AL, Fenick A, Herrin J, et al. How good are the data? Feasible approach to validation of metrics of quality derived from an outpatient electronic health record. Am J Med Qual. 2011;26:441–451.
47. Zhang Z, Sun J. Interval censoring. Stat Methods Med Res. 2010;19:53–70.
48. O’Malley KJ, Cook KF, Price MD, et al. Measuring diagnoses: ICD code accuracy. Health Serv Res. 2005;40:1620–1639.
49. Jollis JG, Ancukiewicz M, DeLong ER, et al. Discordance of databases designed for claims payment versus clinical information systems: implications for outcomes research. Ann Intern Med. 1993;119:844–850.
50. Iezzoni LI. Assessing quality using administrative data. Ann Intern Med. 1997;127:666–674.
51. Lawson EH, Louie R, Zingmond DS, et al. A comparison of clinical registry versus administrative claims data for reporting of 30-day surgical complications. Ann Surg. 2012;256:973–981.
52. Benin AL, Vitkauskas G, Thornquist E, et al. Validity of using an electronic medical record for assessing quality of care in an outpatient setting. Med Care. 2005;43:691–698.
53. Gorelick MH, Knight S, Alessandrini EA, et al. Lack of agreement in pediatric emergency department discharge diagnoses from clinical and administrative data sources. Acad Emerg Med. 2007;14:646–652.
54. Harris SB, Glazier RH, Tompkins JW, et al. Investigating concordance in diabetes diagnosis between primary care charts (electronic medical records) and health administrative data: a retrospective cohort study. BMC Health Serv Res. 2010;10:347. Available at: http://www.biomedcentral.com/1472-6963/10/347. Accessed May 28, 2013.
55. Meddings JA, Reichert H, Rogers MA, et al. Effect of nonpayment for hospital-acquired, catheter-associated urinary tract infection: a statewide analysis. Ann Intern Med. 2012;157:305–312.
57. Wahl PM, Rodgers K, Schneeweiss S, et al. Validation of claims-based diagnostic and procedure codes for cardiovascular and gastrointestinal serious adverse events in a commercially-insured population. Pharmacoepidemiol Drug Saf. 2010;19:596–603.
58. Nordstrom BL, Whyte JL, Stolar M, et al. Identification of metastatic cancer in claims data. Pharmacoepidemiol Drug Saf. 2012;21(suppl 2):21–28.
59. Bayliss EA, Ellis JL, Shoup JA, et al. Association of patient-centered outcomes with patient-reported and ICD-9-based morbidity measures. Ann Fam Med. 2012;10:126–133.
60. Lindenauer PK, Lagu T, Shieh MS, et al. Association of diagnostic coding with trends in hospitalizations and mortality of patients with pneumonia, 2003-2009. JAMA. 2012;307:1405–1413.
61. Devoe JE, Gold R, McIntire P, et al. Electronic health records vs Medicaid claims: completeness of diabetes preventive care data in community health centers. Ann Fam Med. 2011;9:351–358.
62. Hu G, Baker SP. An explanation for the recent increase in the fall death rate among older Americans: a subgroup analysis. Public Health Rep. 2012;127:275–281.
63. Cimino JJ. Desiderata for controlled medical vocabularies in the twenty-first century. Methods Inf Med. 1998;37:394–403.
64. Yu AC, Cimino JJ. A comparison of two methods for retrieving ICD-9-CM data: the effect of using an ontology-based method for handling terminology changes. J Biomed Inform. 2011;44:289–298.
65. Hripcsak G, Friedman C, Anderson PO, et al. Unlocking clinical data from narrative reports: a study of natural language processing. Ann Intern Med. 1995;122:681–688.
66. D’Amore JD, Sittig DF, Ness RB. How the continuity of care document can advance medical research and public health. Am J Public Health. 2012;102:e1–e4.
67. Nadkarni PM, Ohno-Machado L, Chapman WW. Natural language processing: an introduction. J Am Med Inform Assoc. 2011;18:544–551.
68. Stanfill MH, Williams M, Fenton SH, et al. A systematic literature review of automated clinical coding and classification systems. J Am Med Inform Assoc. 2010;17:646–651.
69. Hersh W. Evaluation of biomedical text mining systems: lessons learned from information retrieval. Brief Bioinform. 2005;6:344–356.
70. Seiler KP, Bodycombe NE, Hawkins T, et al. Master data management: getting your house in order. Comb Chem High Throughput Screen. 2011;14:749–756.
71. de Lusignan S, Liaw ST, Krause P, et al. Key concepts to assess the readiness of data for international research: data quality, lineage and provenance, extraction and processing errors, traceability, and curation. In: Haux R, Kulikowski CA, Geissbuhler A, eds. IMIA Yearbook of Medical Informatics 2011. Stuttgart, Germany: Schattauer; 2011:112–120.