Creation and Validation of an Automated Algorithm to Determine Postoperative Ventilator Requirements After Cardiac Surgery

Gabel, Eilon MD; Hofer, Ira S. MD; Satou, Nancy RN; Grogan, Tristan MS; Shemin, Richard MD; Mahajan, Aman MD, PhD; Cannesson, Maxime MD, PhD

doi: 10.1213/ANE.0000000000001997
Cardiovascular Anesthesiology: Original Clinical Research Report

BACKGROUND: In medical practice today, clinical data registries have become a powerful tool for measuring and driving quality improvement, especially among multicenter projects. Registries face the known problem of deriving dependable and consistent metrics from electronic medical record data, which are typically scattered across the chart and of variable reliability. The Society of Thoracic Surgeons (STS) registry is one such example, and it relies on manual data collection by trained clinical staff in an effort to obtain the highest-fidelity data possible. As a possible alternative, our team designed an algorithm to test the feasibility of producing computer-derived data for the case of postoperative mechanical ventilation hours. In this article, we compare the accuracy of algorithm-derived mechanical ventilation data with that of manual data extraction.

METHODS: We created a novel algorithm that calculates mechanical ventilation duration for any postoperative patient using raw data from our EPIC electronic medical record. Using nursing documentation of airway devices; documentation of lines, drains, and airways; and respiratory therapist ventilator settings, the algorithm produced results that were then validated against the STS registry. This enabled us to compare the algorithm results with data collected by human chart review. Any discrepancies were then adjudicated by manual calculation by a research team member.

RESULTS: The STS registry contained a total of 439 University of California Los Angeles cardiac cases from April 1, 2013, to March 31, 2014. After 201 patients were excluded for missing ventilation time in the STS registry, tracheostomy use, or having 2 surgeries on the same day, 238 cases met inclusion criteria. Comparing the postoperative ventilation durations between the 2 data sources, 158 (66%) durations agreed within 1 hour, indicating a probable correct value for both sources. Among the discrepant cases, the algorithm was exclusively correct in 75 (93.8%) cases, whereas the STS value was exclusively correct once (1.3%). The remaining 4 cases were inconclusive after manual review because of a prolonged documentation gap between mechanical and spontaneous ventilation; in these cases, the STS and algorithm results differed from one another, but both fell within the transition timespan. This yields an overall accuracy of 99.6% (95% confidence interval, 98.7%–100%) for the algorithm compared with 68.5% (95% confidence interval, 62.6%–74.4%) for the STS data (P < .001).

CONCLUSIONS: There is significant appeal to a computer algorithm capable of calculating metrics such as total ventilator time, especially because manual chart review is labor intensive and prone to human error. By incorporating 3 different data sources into our algorithm and by using preprogrammed clinical judgment to overcome common data entry errors, our results proved more comprehensive and more accurate, and they required a fraction of the time needed for manual review.

From the University of California Los Angeles, David Geffen School of Medicine, Los Angeles, California.

Accepted for publication January 10, 2017.

Funding: None.

The authors declare no conflicts of interest.

Reprints will not be available from the authors.

Address correspondence to Eilon Gabel, MD, David Geffen School of Medicine, University of California Los Angeles, 757 Westwood Plaza, Suite 3325, Los Angeles, CA 90095. Address e-mail to egabel@mednet.ucla.edu.

Large clinical data registries are a widely used tool that allows for multicenter outcomes-based research and data-driven quality improvement processes. In their various forms, these registries allow for risk-adjusted comparisons between institutions,1 multicenter clinical research studies, and opportunities for providers to improve quality by learning from their peers.2 In fact, these registries have become so central to health care that Medicare is now basing some of its reimbursements on participation in qualified clinical data registries.3,4

The Society of Thoracic Surgeons (STS) maintains a national registry of adult cardiac surgical procedures that contains hundreds of data points on each procedure.5 Each data point has a highly specific definition, and dedicated personnel in each participating center manually abstract the data. The process of data collection is time consuming, resource intensive, and subject to human error, and it is made more challenging by data in the medical record that are often poorly structured, redundant, and even conflicting.6,7 In one simulation study, users asked to extract diagnoses from a medical record into a registry-type format produced multiple error types, with rates ranging from 8.5% to 31.8%.8 Despite these drawbacks, the anesthesia portion of the STS registry specifically requires manual data entry into a web form.9

The electronic medical record (EMR) chronicles large amounts of highly detailed and mostly structured data during the course of patient care. This proliferation of data may make it more difficult to locate the data needed in manual chart abstraction. However, the presence of structured data in an electronic format makes it theoretically possible to develop software to abstract these data automatically.7

The duration of postoperative mechanical ventilation is one such outcome measure in the STS registry: it is time consuming for a clinician to determine, yet it depends on only a few discrete and well-structured data points. The aim of this study was to evaluate a computer algorithm for automatic extraction of the duration of postoperative mechanical ventilation from EMR data in adult patients undergoing cardiac and thoracic surgery. The duration extracted automatically by our algorithm was compared with the duration extracted manually as part of the STS database data collection, and the superior method was determined. The main hypothesis was that automatic data extraction is more accurate than manual data extraction.


METHODS

After obtaining institutional review board (IRB) approval from the University of California Los Angeles (UCLA) Human Research Protection Program (IRB# 15-000518), we developed and validated a computer algorithm to automatically determine the total duration of mechanical ventilation (in hours) after cardiac surgery.


Manual Process for Determination of Duration of Mechanical Ventilation (STS)

One of the data fields submitted to STS is the total duration of postoperative mechanical ventilation, which is defined as the number of hours measured from operating room exit time until extubation time, plus any additional hours resulting from reintubation (http://www.sts.org/sites/default/files/documents/STSAdultCVData-SpecificationsV2_81.pdf). At our institution, this time was determined by a dedicated nurse who manually reviewed each chart and calculated the ventilation time from the clinical documentation. Charts were abstracted at some point after patient discharge and before the STS submission deadline.
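To make the definition concrete, the short sketch below computes ventilator hours for a hypothetical patient under the STS rule (hours from operating room exit to extubation, plus any reintubation hours). All timestamps are invented for illustration; this is not part of the study's tooling.

```python
from datetime import datetime

# Hypothetical timestamps, for illustration only.
or_exit = datetime(2013, 6, 1, 14, 0)           # operating room exit
first_extubation = datetime(2013, 6, 2, 2, 30)  # first extubation
# Any (reintubation, re-extubation) intervals are added on top.
reintubations = [(datetime(2013, 6, 3, 9, 0), datetime(2013, 6, 3, 21, 0))]

hours = (first_extubation - or_exit).total_seconds() / 3600
hours += sum((end - start).total_seconds() / 3600 for start, end in reintubations)
print(f"{hours:.1f} ventilator hours")  # 12.5 + 12.0 = 24.5
```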


Algorithm Development

As part of routine clinical patient care at our institution, mechanical ventilation is documented in 3 distinct locations in the EMR: (1) nursing flowsheet records of any airway devices; (2) documentation of lines, drains, and airways (LDAs); and (3) respiratory therapy (RT) flowsheet entries of ventilator settings. In general, nurses document airway presence hourly, airway devices are charted upon insertion and removal, and respiratory therapists document when ventilator settings are changed or when they make periodic rounds on patients.

The Department of Anesthesiology and Perioperative Medicine at UCLA has developed and maintains a large perioperative data warehouse (PDW) that contains all clinical data entered as part of patient care in our EMR (EPIC Systems, Verona, WI). We have described the development of the PDW previously.10 The algorithm described in this article builds on the expertise we developed in creating the PDW and utilizes its underlying data.

Figure 1.

The algorithm was designed to take advantage of the charting patterns of the 3 data sources. It first created a chronological list of all nursing airway documentation and RT documentation of mechanical ventilation for each patient. It then scanned the list and set the start time of mechanical ventilation to the first instance of documentation of mechanical ventilation by either nursing or RT. Mechanical ventilation was considered to continue until at least 2 consecutive data points indicating spontaneous ventilation were found; requiring 2 points prevented miscalculation caused by a single inadvertent mischarting of spontaneous ventilation. Because LDA documentation tended to be accurate to the minute, whereas nursing and RT documentation tended to occur at consistent intervals (eg, hourly), the calculated mechanical ventilation end time was updated to the LDA-documented endotracheal tube removal time whenever the two were within 1 hour of each other. If the LDA was never charted as removed during the hospitalization (a provider documentation error), or if the LDA time differed from the nursing/RT time by >1 hour, the LDA entry was set to null and ignored. After completion, the algorithm checked for continued spontaneous ventilation until hospital discharge (ie, no reintubations). In the event of a reintubation, the mechanical ventilation time for the additional intubation was calculated using the same methodology and added to the previous total. A graphical representation of the algorithm is shown in Figure 1.
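The following is a minimal Python sketch of the logic just described. It assumes the merged documentation has already been reduced to a chronologically sorted list of (timestamp, is_mechanical) tuples plus a list of LDA removal timestamps; these structures and the function itself are illustrative stand-ins, since the production implementation is SQL run against the PDW.

```python
from datetime import timedelta

def ventilation_hours(events, lda_removals):
    """Sketch of the duration calculation. `events` is a chronologically
    sorted list of (timestamp, is_mechanical) tuples merged from nursing
    airway and RT ventilator documentation; `lda_removals` holds any
    LDA-documented endotracheal tube removal times."""
    one_hour = timedelta(hours=1)
    total = timedelta()
    i, n = 0, len(events)
    while i < n:
        # Start of an episode: the first documentation of mechanical
        # ventilation by either nursing or RT.
        while i < n and not events[i][1]:
            i += 1
        if i == n:
            break
        start = events[i][0]
        # The episode ends at the first of 2 consecutive spontaneous
        # entries; requiring 2 guards against a single mischarted row.
        end = None
        while i < n:
            if not events[i][1] and i + 1 < n and not events[i + 1][1]:
                end = events[i][0]
                break
            i += 1
        if end is None:
            end = events[-1][0]  # ventilated through the last record
        # Snap to the charted tube-removal time when it lies within 1
        # hour of the calculated end; otherwise the LDA entry is ignored.
        for removal in lda_removals:
            if abs(removal - end) <= one_hour:
                end = removal
                break
        total += end - start
        # The outer loop resumes scanning, so any reintubation episode
        # is calculated the same way and added to the total.
    return total.total_seconds() / 3600
```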


Study Design, Inclusion and Exclusion Criteria

All patients who underwent cardiac surgery at UCLA between April 1, 2013, and March 31, 2014, were identified and included in the study. Patients were excluded if they had multiple surgeries on a single day (which prevented definitive matching), had a tracheostomy present on admission or created at any point during their hospital course (which made it impossible to determine the total duration of mechanical ventilation), or had no mechanical ventilation time in the STS submission (because of resource constraints of the nurse extracting the data).


Validation of the Gold Standard

Discrepancies in the results might reflect errors in either method; therefore, further work was necessary to validate the perceived gold standard. To determine which method yielded the more accurate result, a member of the research team performed an independent manual chart review of all cases in which the algorithm and the manual determination differed by >1 hour. One hour was chosen because most nursing documentation occurred at least hourly, implying that the 2 calculations should not differ by more than that threshold.

To get a more accurate sense of the mechanical ventilation time, the reviewing team member had access to the live EMR environment as well as the raw tabular data that the algorithm used for decision-making. Having the same information in 2 different formats decreased the likelihood of review errors.

For each case that was manually reviewed, the reviewer determined the last time the patient was documented as being on a ventilator and the first time spontaneous ventilation was documented. The algorithm or the manual review was considered correct if its value fell within the window from the last documented mechanical ventilation to the first documented spontaneous ventilation, and incorrect if it fell outside that window. For example, if the last documented mechanical ventilation occurred at 9 hours and the first documented spontaneous ventilation occurred at 11 hours, durations between 9 and 11 hours were considered correct and all other durations incorrect. This process yielded 4 possible results: algorithm correct, manual review correct, both correct, and neither correct. In each instance, the reviewer attempted to determine the reason for the error to detect any patterns.
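The adjudication rule reduces to a simple interval test. The snippet below, with illustrative names, classifies a candidate duration against the documentation window described above, using the worked example from the text.

```python
def within_window(duration, last_mechanical, first_spontaneous):
    """True when a candidate duration (hours) falls inside the window from
    the last documented mechanical ventilation to the first documented
    spontaneous ventilation."""
    return last_mechanical <= duration <= first_spontaneous

# The worked example from the text: window of 9 to 11 hours.
print(within_window(10.2, 9, 11))  # True  -> considered correct
print(within_window(8.0, 9, 11))   # False -> considered incorrect
```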


Statistical Analysis

Statistical validation of the algorithm followed method comparison study methodology: descriptive statistics, concordance, agreement, and accuracy. First, descriptive statistics for each source of mechanical ventilation time (STS, complete algorithm, nursing documentation, RT documentation, LDA documentation) were computed. Second, we assessed the concordance between methods using the Lin concordance correlation coefficient (LCCC). Third, a Bland-Altman plot was constructed between the algorithm and STS ventilator times to examine agreement and potential bias across the range of times: the difference between the methods for each record was plotted on the y-axis against the average of the 2 time estimates for that record on the x-axis, with the mean difference representing the bias. Limits of agreement (LOAs) were calculated using the standard formula: bias ± 1.96 × SD of the between-method differences. Finally, the accuracies of manual and automatic extraction were compared using the McNemar test to determine which method of ventilator time extraction was more accurate. Statistical analyses were performed using R V 3.1.2 (Vienna, Austria) and IBM SPSS V 23 (Armonk, NY).
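For reference, the 3 quantitative pieces of the analysis can be sketched in a few lines. The analysis itself was run in R and SPSS, so the Python below is an illustration under stated assumptions rather than the study code: it uses the standard LCCC and Bland-Altman formulas, and computes the exact McNemar test as a 2-sided sign test on the discordant pairs.

```python
import numpy as np
from scipy import stats

def lin_ccc(x, y):
    """Lin concordance correlation coefficient:
    2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.cov(x, y, bias=True)[0, 1]
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def bland_altman(x, y):
    """Mean bias and 95% limits of agreement: bias +/- 1.96 * SD(diff)."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def mcnemar_exact(only_a_correct, only_b_correct):
    """Exact McNemar P value from the discordant-pair counts,
    computed as a 2-sided binomial (sign) test."""
    n = only_a_correct + only_b_correct
    return stats.binomtest(only_a_correct, n, 0.5).pvalue

# Discordant pairs reported in this study: 75 vs 1.
print(mcnemar_exact(75, 1))  # << .001
```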


Power and Sample Size Calculation

A formal power calculation was not computed prior to the study because our sample size was limited to the 439 patients in the STS database accrued since the implementation of our EMR system; the inclusion/exclusion criteria further reduced that number to 238 patients. Prior to the study, we agreed that if the algorithm was at least as accurate as STS 90% of the time, the results would be clinically reliable. Requiring the lower limit of the 2-tailed 95% confidence interval (CI) to remain above the 90% threshold, we found that for example accuracy rates of 0.99, 0.98, 0.97, 0.96, and 0.95, the power would be >0.99, >0.99, 0.99, 0.94, and 0.79, respectively. In other words, we were adequately powered if the true accuracy of the algorithm was 95% or greater.
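Because the CI method behind these figures is not stated, the sketch below reproduces the spirit of the calculation under an assumed exact Clopper-Pearson interval: for n = 238, find the smallest number of correct cases whose 95% CI lower limit exceeds 90%, then compute the probability of reaching that count at each candidate accuracy. The resulting powers may differ somewhat from those reported, depending on the interval method actually used.

```python
from scipy import stats

n, floor = 238, 0.90

def cp_lower(k, n, alpha=0.05):
    """Lower limit of the exact (Clopper-Pearson) 2-sided 95% CI."""
    return stats.beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0

# Smallest count of correct cases whose CI lower limit clears 90%.
k_min = min(k for k in range(n + 1) if cp_lower(k, n) > floor)

for p in (0.99, 0.98, 0.97, 0.96, 0.95):
    power = 1 - stats.binom.cdf(k_min - 1, n, p)
    print(f"true accuracy {p:.2f}: power {power:.2f}")
```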


RESULTS

The STS database contained 439 entries representing patients who underwent cardiac surgery at UCLA between April 1, 2013, and March 31, 2014. No entries prior to April 2013 were considered because our institution's EMR went live in March 2013. A total of 180 (41%) patients were excluded for not having a mechanical ventilation time submitted to the STS registry, 14 (3%) for having a history of tracheostomy documented in the LDA or nursing documentation, and 7 (2%) for having more than 1 surgery on the same day. Patient disposition is summarized in Figure 2.

Figure 2.

Table 1.

Nursing data were present for all 238 patients (100%), RT data for 215 (90%), and LDA data for 214 (90%). The average duration of mechanical ventilation for the STS group was 13 hours (95% CI, 9.5–16.6), which did not differ significantly from the nursing data (mean = 14.7; 95% CI, 10.2–19.1; P = .31). Both RT documentation (mean = 30.0; 95% CI, 11.7–48.4) and LDA data (mean = 20.4; 95% CI, 14.4–27.8) differed significantly from the STS data (P = .04 and P = .01, respectively). The average duration of mechanical ventilation was slightly higher in the algorithm group (mean = 16.2; 95% CI, 10.6–21.7; P = .08), but the difference did not reach statistical significance. These data, as well as a description of the groups, are shown in the Table.


Relationship and Agreement Between Manual and Automatic Data Extraction

The LCCCs between the STS data and the nursing and RT documentation were 0.71 (95% CI, 0.64–0.77) and 0.29 (95% CI, 0.26–0.32), respectively. The LDA data showed a lower concordance than the nursing documentation method, with an LCCC of 0.44 (95% CI, 0.35–0.52). The combined algorithm using all 3 data sources performed best, with an LCCC of 0.73 (95% CI, 0.68–0.78) against the STS data.

Figure 3 contains a Bland-Altman plot of the final algorithm results versus the STS results. The average difference between the 2 data sources was relatively low (2.6 hours), a bias of questionable clinical significance. The plot also displays the upper and lower LOAs (bias ± 1.96 × 27.5 hours), which span approximately 100 hours: the lower LOA was −51.3 (95% CI, −57.6 to −45.1), and the upper LOA was 56.5 (95% CI, 50.3–62.8). This wide range reflects the high SD of the differences (27.5 hours), which was strongly affected by outliers whose total mechanical ventilation times were complicated to calculate because of recurring episodes of mechanical ventilation.

Figure 3.

Of the 238 study patients who met inclusion criteria, 158 (66.4%) had agreement between the 2 systems of 1 hour or less. The majority of the data cluster around the line of equality; however, the algorithm displayed a bias toward longer times (mean bias, 3.1 hours; lower LOA, −51.3 [95% CI, −57.6 to −45.1]; upper LOA, 56.5 [95% CI, 50.3–62.8] hours).


Concordance Between Manual and Automatic Data Extraction

Eighty records had mechanical ventilation times for manual and automatic extraction that differed by >1 hour. Of these, the algorithm was exclusively correct in 75 (93.8%) instances, STS was exclusively correct once (1.3%), and both were correct in 4 (5.0%) instances. This yields an overall accuracy of 99.6% (95% CI, 98.7%–100%) for the algorithm compared with 68.5% (95% CI, 62.6%–74.4%) for the STS data (McNemar P < .001). Overall, the algorithm was correct in 75 of the 76 cases for which 1 method was superior. These data are shown in Figure 2.

There was a single case in which the STS was correct and the algorithm was incorrect because of erroneous nursing documentation of spontaneous ventilation. While the algorithm is able to overlook a single erroneous nursing entry, this particular instance had 2 consecutive nursing documentations of spontaneous ventilation when the patient was still on mechanical ventilation.

In 10 (12.5%) of the discrepant instances, a postoperative reintubation was missed by the manual abstraction. In 13 (16.25%) instances, STS recorded a mechanical ventilation duration of 0 despite clear evidence to the contrary. Most significantly, in 34 (42.5%) instances, there was clear documentation (either nursing or RT) around the time of the STS calculation indicating that the STS value was incorrect.


DISCUSSION

We successfully created an algorithm that duplicated the manual process of data extraction and, by processing multiple data sources in parallel, produced more accurate results. That data obtained from different single sources may not agree well is not a new finding in the literature.11–13 In most cases, it has been assumed that the clinician-abstracted data contained in large registries are the most reliable; this study indicates that there may still be room for improvement.

Incorporating data from multiple areas of the medical record remains a complex process for clinicians. The data required to determine the length of mechanical ventilation were located in different places in the EMR and are not available on the same screen simultaneously, a common occurrence among EMR installations. There did not seem to be a single source of error in the manual abstraction process, but in a substantial number of instances, documentation contradicting the recorded time was missed (including 10 instances of reintubation and 13 instances in which a duration of 0 hours was erroneously charted). The algorithm succeeded because, unlike humans, it consistently combined the 3 distinct data sources to yield maximal accuracy.


Resources Required for Automated Versus Manual Data Extraction

A major limitation of outcomes research is the time-consuming nature of collecting data from the medical record; depending on the length of hospitalization, it can take >10 minutes per patient to determine the total time of mechanical ventilation. In fact, of the 439 patients who underwent cardiac surgery during the study period, 180 (41%) did not have ventilator data extracted because of manpower constraints, despite our institution having a full-time nurse dedicated to this work. Similarly, National Surgical Quality Improvement Program (NSQIP) participation requires a full-time nurse (approximately $100,000 per year) who abstracts only 100 cases per month. The advantage of an algorithm such as the one described here is that, once created, it can be applied to all records (both past and future) with little additional labor. In our experience, developing the algorithm required 40 hours of programming, and the data extraction took <1 minute to run on the 439 cases.

Although the algorithm reduced the manpower needed for chart review, it required specialized skills to create. We have previously described the substantial work required to create the PDW.10 In addition to the technical skill to code in SQL, development of the algorithm requires someone with clinical knowledge who understands the workflow for charting the relevant information and can make sense of the raw clinical data. For example, it requires clinical knowledge to recognize which RT and nursing documentation are associated with mechanical ventilation and how to resolve conflicting data (such as the decision to require 2 consecutive entries of spontaneous ventilation before ending an episode of mechanical ventilation). Even a concept as simple as reintubation requires a clinical understanding that many computer programmers lack.


Scaling the Concept to Automate More Data

The ultimate goal of a project such as this would be to automate the extraction of all data points associated with a particular clinical registry. Doing so would require dealing with exponentially more data points as well as different types of data (discrete, binary, categorical, etc). The STS registry alone reports >55 postoperative events, many of which are a binary distillation of a range of clinical data. Our approach would be to create detailed (and possibly hierarchical) definitions for each data point based on the clinical question associated with it. Such an approach requires not only significant technical expertise in programming but also a thorough understanding of the clinical questions. While our group is fortunate to have anesthesiologists with the requisite technical skills, many other institutions do not possess (and cannot afford to invest in) these resources.

A potential solution to the limited pool of developers with both clinical and technical expertise is to educate interested clinicians and/or to create consortiums of institutions that develop the necessary algorithms together. Fortunately, there has been a significant influx of bioinformatics training at major academic centers with the creation of clinical informatics fellowships, much like the one here at UCLA. Anesthesiologists are encouraged to take part in these fellowships and can sit for the newly created clinical informatics board examination through the American Board of Preventive Medicine. Alternatively, fostering cooperative relationships among the few existing informatics centers would address the current shortage of personnel with the necessary skills and spread the costs of participation more widely. This level of cooperation would have the additional benefit of creating clear and highly detailed technical specifications applied consistently across all institutions, improving communication and potentially facilitating multicenter trials. An ultimate goal might even be working with EMR vendors themselves to make this reporting ability part of the underlying EMR, obviating the need for institutions to invest any resources at all. EPIC has expressed a willingness to do this for some data registries when approached by interested partners such as the Multicenter Perioperative Outcome Group and the Anesthesia Quality Institute.


Implications for Interpreting Registry Data

The possibility that data submitted to clinical registries contain human errors has real implications for how we use these data in decision-making. Because of the significant manpower constraints associated with data acquisition, many registries abstract a subset of cases and assume that it accurately represents the population. However, a smaller sample amplifies the effect of any data artifact or inaccuracy such as the ones we demonstrate here. Additionally, while some registries such as NSQIP have regular audits and required data accuracy thresholds,14 others do not, leaving participants unaware of any differences that might exist between institutions. Now that physician payment may be linked to performance in qualified clinical data registries, these issues have more than academic importance.15

Automated extraction is not without its pitfalls. All static algorithms are constrained by the underlying clinical workflow. While humans will rapidly notice a change in charting patterns and seek out its source, machines must be taught to do so, or they will simply begin to generate erroneous data. There are several possible solutions to this problem. One is to require manual validation of a subset of records during each reporting period, with a required threshold of data accuracy (eg, 95% agreement). Another is to incorporate statistical process control techniques from the manufacturing industry that monitor the variation in the algorithm output, as sketched below: results that fall outside an expected range would require manual review, whereas those within the range would not; should a group of results become faulty, the algorithm would be redeveloped.
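As one concrete form of that manufacturing-style monitoring, a simple Shewhart-type rule could flag algorithm outputs for manual review. The function and its 3-SD threshold are illustrative assumptions, not part of the study.

```python
import numpy as np

def flag_for_review(durations, k=3.0):
    """Return (index, value) pairs for ventilation durations more than
    k SDs from the batch mean; flagged records get manual review."""
    x = np.asarray(durations, float)
    mu, sd = x.mean(), x.std(ddof=1)
    return [(i, v) for i, v in enumerate(x) if abs(v - mu) > k * sd]
```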


Limitations

The most significant limitation of this study is that it extracts only a single variable of the STS submission. As discussed above, a full submission to the STS database consists of dozens of variables, and this process would need to be repeated for all of them. Based on the analysis performed here, we cannot be sure that a similar extraction for those measures would be as accurate or as straightforward to develop. Similarly, the data for this study come from a single institution and are based on its workflow; institutions with different workflows and charting practices might find the data more or less straightforward to extract.

Finally, our study relies on infrastructure that was complex and time consuming to develop. The challenges of extracting data from EPIC are well known, and the construction of the PDW took nearly 2 years.10 Other institutions that lack similar infrastructure or sufficient technical expertise would find it challenging to extract these data, despite any potential benefits they might perceive. Our algorithm will not work at another institution using a typical "out of the box" implementation of the EPIC reporting infrastructure; it requires adopting the commonly structured set of tables on which the algorithm depends. However, with very minimal alteration to that base set of tables, the algorithm would be adaptable to other EPIC sites. There is an intentional degree of separation between the algorithm and the EPIC reporting infrastructure to allow for adaptation to other sites.


CONCLUSIONS

Ultimately, this algorithm is a successful proof of concept: it demonstrates that automated extraction of data for registry submission is not only possible but can also yield significant benefits in both time and accuracy. Like any proof of concept, it can only demonstrate the technical feasibility of the solution; the ultimate implementation requires integrating this technical knowledge with human components such as registry guidelines. A fully automated extraction of an entire registry would likely require not only scaling the technical solution but also redesigning the submission workflow and addressing issues we cannot yet foresee. While these issues are not trivial, they are most certainly surmountable, and surmounting them is necessary if we wish to continue to rely on large clinical data registries to guide our medical decision-making.


DISCLOSURES

Name: Eilon Gabel, MD.

Contribution: This author helped coordinate the research, acquire the data, and write and edit the manuscript.

Name: Ira S. Hofer, MD.

Contribution: This author helped advise on the methodology, acquire the data, and edit the manuscript.

Name: Nancy Satou, RN.

Contribution: This author helped advise on the methodology and edit the manuscript.

Name: Tristan Grogan, MS.

Contribution: This author helped analyze the statistics and edit the manuscript.

Name: Richard Shemin, MD.

Contribution: This author helped advise on the methodology and edit the manuscript.

Name: Aman Mahajan, MD, PhD.

Contribution: This author helped advise on the methodology, mentor, and edit the manuscript.

Name: Maxime Cannesson, MD, PhD.

Contribution: This author helped advise on the methodology, mentor, and edit the manuscript.

This manuscript was handled by: W. Scott Beattie, PhD, MD, FRCPC.


REFERENCES

1. Shroyer AL, Plomondon ME, Grover FL, Edwards FH. The 1996 coronary artery bypass risk model: the Society of Thoracic Surgeons Adult Cardiac National Database. Ann Thorac Surg. 1999;67:1205–1208.
2. Edwards FH, Shahian DM, Peterson ED, et al. The Society of Thoracic Surgeons 2008 cardiac surgery risk models: part 1—coronary artery bypass grafting surgery. Ann Thorac Surg. 2009;88:S2–S22.
3. Buntin MB, Jain SH, Blumenthal D. Health information technology: laying the infrastructure for national health reform. Health Aff (Millwood). 2010;29:1214–1219.
4. Rosenthal MB, Frank RG. What is the empirical basis for paying for quality in health care? Med Care Res Rev. 2006;63:135–157.
5. ACC/AHA/STS statement on the future of registries and the performance measurement enterprise: a report of the American College of Cardiology/American Heart Association Task Force on Performance Measures and the Society of Thoracic Surgeons. J Am Coll Cardiol. 2015;66:2230–2245.
6. Hayrinen K, Saranto K, Nykanen P. Definition, structure, content, use and impacts of electronic health records: a review of the research literature. Int J Med Inform. 2008;77:291–304.
7. Wanderer JP, Shaw AD, Ehrenfeld JM. Automated data transmission for the Society of Thoracic Surgeons' adult cardiac anesthesia module. Anesth Analg. 2014;119:1221–1222.
8. Lorenzoni L, Da Cas R, Aparo UL. The quality of abstracting medical information from the medical record: the impact of training programmes. Int J Qual Health Care. 1999;11:209–213.
9. Aronson S, Mathew JP, Cheung AT, Shore-Lesserson L, Troianos CA, Reeves S. The rationale and development of an adult cardiac anesthesia module to supplement the Society of Thoracic Surgeons National Database: using data to drive quality. Anesth Analg. 2014;118:925–932.
10. Hofer IS, Gabel E, Pfeffer M, Mahbouba M, Mahajan A. A systematic approach to creation of a perioperative data warehouse. Anesth Analg. 2016;122:1880–1884.
11. Shiloach M, Frencher SK, Steeger JE, et al. Toward robust information: data quality and inter-rater reliability in the American College of Surgeons National Surgical Quality Improvement Program. J Am Coll Surg. 2010;210:6–16.
12. McIsaac DI, Gershon A, Wijeysundera D, Bryson GL, Badner N, van Walraven C. Identifying obstructive sleep apnea in administrative data: a study of diagnostic accuracy. Anesthesiology. 2015;123:253–263.
13. Quach S, Blais C, Quan H. Administrative data have high variation in validity for recording heart failure. Can J Cardiol. 2010;26:306–312.
14. Chow WB, Ko CY, Rosenthal RA, Esnaola NF. ACS NSQIP/AGS Best Practice Guidelines: Optimal Preoperative Assessment of the Geriatric Surgical Patient.
15. Manchikanti L, Hirsch JA. Regulatory burdens of the Affordable Care Act. Harv Health Policy Rev. 2012;13:9–12.
Copyright © 2017 International Anesthesia Research Society