WHAT IS BIG DATA?
Clinical research seeks to address clinical problems that have a significant impact on health care systems and patients. However, clinical studies frequently lack the ability to reliably answer their research questions because of inadequate sample sizes. Underpowered studies are subject to multiple sources of bias, may not represent the larger population, and are regularly unable to detect differences between treatment groups. Most importantly, an underpowered study can lead to incorrect conclusions. Big data can be used to address many of these concerns, enabling researchers to answer questions with increased certainty and less likelihood of bias.
Prospective cohort studies, large clinical trials, national registries, and administrative claims records are all common sources of big data. National registries can be designed to capture entire population data, act as a surveillance tool, or monitor longitudinal trends. A robust registry can provide true population estimates and can detect small differences with remarkable statistical power. The success of a registry relies on multiple participating centers and the collection of clinically relevant data. The field of orthopaedics has numerous models of highly effective registries, including the US National Trauma Data Bank and the UK National Trauma Audit and Research Network.1,2 Although registry data are collected prospectively, research questions that use these registries are typically answered using retrospective designs. In contrast, big data can also be achieved within a prospective cohort study. The advantage of this design is the ability to ensure that all relevant variables are included during data collection. The Prevalence of Abuse and Intimate Partner Violence Surgical Evaluation (PRAISE) and the International Orthopaedic Multicenter Study in Fracture Care (INORMUS) studies are examples of 2 large prospective cohort studies designed to measure the prevalence and incidence of important observational events, respectively.3,4 Finally, when the objective of the study is to determine treatment efficacy, the randomized controlled trial (RCT) is the gold standard. Well-designed RCTs provide definitive evidence of treatment efficacy but also require extensive resources. Obtaining big data with an RCT design can present an array of administrative and technical challenges. If these challenges can be met, the findings of a study such as a large orthopaedic fracture trial can be transformative, as was the case with the Fluid Lavage of Open Wounds (FLOW) study and its impact on open fracture protocols.5
Technological advancements have unlocked new possibilities for efficient data capture and widespread opportunities to merge massive datasets, particularly in the setting of national registries and administrative data. The remainder of this article describes examples of successful initiatives and the limitations of large available datasets.
NHFD: WHAT IT CAN AND CANNOT DO
The National Hip Fracture Database (NHFD) in the United Kingdom is clinically led, web-based, commissioned by the Healthcare Quality Improvement Partnership, and managed by the Royal College of Physicians.6 In total, 182 eligible hospitals in England, Wales, and Northern Ireland are currently submitting data to this initiative. The NHFD is considered the largest hip fracture database in the world, with over 300,000 cases recorded since its launch in 2007. Approximately 5,700 new patient records are being added every month.
The NHFD currently operates as a comprehensive quality improvement initiative. It captures important data such as descriptions of the facilities and practice in different units around the United Kingdom; audits of practice against the National Institute for Health and Care Excellence quality standard for hip fracture; performance evaluations to support Monitor's Best Practice Tariff; support for clinical governance in individual hospitals; metrics to support patient safety monitoring; identification of outlier hospitals with respect to patient outcome; a framework to support local and national audit work; an infrastructure for scientific and research work; and, finally, a resource of specialist information, expertise, and networking. Captured information for the NHFD is shown in Supplemental Digital Content 1 (see Table, http://links.lww.com/BOT/A548). The NHFD also supports the Department of Health's Best Practice Tariff initiative, which rewards the achievement of specified standards (see Table, Supplemental Digital Content, http://links.lww.com/BOT/A548).
Different hospitals' outcomes are compared on 2 key measures: 30-day mortality and return to one's own home by 30 days. For each measure, expected performance ranges are derived from the mean and standard deviation, adjusted for the size of the unit. For hospitals that are outliers on 30-day mortality, data quality is assessed and feedback is provided to help them review their clinical service.
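The comparison logic described above can be sketched as follows. This is an illustrative model only: the binomial variance assumption, the 1.96 z-threshold, and the example rates are assumptions for demonstration, not the NHFD's published methodology.

```python
import math

def mortality_limits(national_rate, n_cases, z=1.96):
    """Approximate control limits for a unit's 30-day mortality,
    assuming binomial variation around the national rate.
    Larger units get tighter limits, as in a funnel plot."""
    se = math.sqrt(national_rate * (1 - national_rate) / n_cases)
    return national_rate - z * se, national_rate + z * se

def is_outlier(observed_rate, national_rate, n_cases, z=1.96):
    """Flag a unit whose observed rate falls outside its limits."""
    lo, hi = mortality_limits(national_rate, n_cases, z)
    return observed_rate < lo or observed_rate > hi

# A small unit (60 cases) with 12% mortality vs. a national rate of 8%:
print(is_outlier(0.12, 0.08, 60))    # wide limits, not flagged -> False
# The same rate in a large unit (1,000 cases) is flagged:
print(is_outlier(0.12, 0.08, 1000))  # -> True
```

The key point the sketch illustrates is that the same observed rate can be unremarkable in a small unit but a clear signal in a large one, which is why limits must be scaled to unit size before feedback is given.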
Overall, the NHFD has become a valuable documentation and audit tool in England.7,8 It has had a major impact on the improvements made in the care of elderly patients, and has been a useful tool for policy makers, chief executives, commissioners, and clinical staff. It allows clinicians to consider the strengths and weaknesses of their own service, as identified in the interhospital comparison charts, regional tables, and funnel plots published annually. Identification of appropriate measures may lead to better performance and patient outcomes. Similar to any other database, it has limitations in the data captured (see Table, Supplemental Digital Content, http://links.lww.com/BOT/A548); however, it is expected that these limitations will be addressed in the foreseeable future.
SCANDINAVIAN DATA: ONGOING CHALLENGES AND SUCCESSES
Scandinavia is home to the world's first patient registry.9 Owned by the clinical communities, Scandinavian medical registries combine the aims of public health monitoring and the collection of research data. The Swedish Hip Arthroplasty Registry was created in 1975, and Scandinavian registries have since been vital to the study of reoperation and mortality rates, with numerous clinically relevant publications. The databases also have the ability to inform surgeons of underperforming implants and techniques.10
Hip fracture registries have more recently been established in Scandinavia following the arthroplasty registry paradigm. Primary surgeries are well reported within the registry; however, data on reoperations and complications are often lacking (see Table, Supplemental Digital Content, http://links.lww.com/BOT/A548).11 Crude outcomes such as mortality and reoperations, which are collected by most trauma registries in Scandinavia, are typically complemented by additional patient-related outcomes obtained through mail.12 Other registries for specific trauma diagnoses have also been established, and nationwide general fracture registries are in the startup phase. However, the value of these new registries is still undetermined.
High-quality RCTs, especially clinician-led hip fracture trials, can be branded a Scandinavian success.13 Yet it is increasingly difficult for the many small Scandinavian hospitals to run a sufficiently powered high-quality RCT, because of an increasingly complex regulatory system that demands more resources and more research staff. If the number of randomized trials continues to decline, too much reliance may be placed on registry data. This threatens the valuable synergy between clinical trials and registry data, as illustrated by the comparison of internal fixation with arthroplasty (see Table, Supplemental Digital Content, http://links.lww.com/BOT/A548)12,14,15 or the decision to cement hemiarthroplasties.16–20
The most complete Scandinavian registries give an excellent overview of patient epidemiology and treatment choices at a relatively low cost. However, in addition to the limitations presented by underreporting of reoperations and low response rates on patient outcome questionnaires, registries remain vulnerable to bias from regional patient differences and surgeon preferences. Whenever possible, registry data must be compared with clinical trials, which remain the best source of unbiased knowledge, even in Scandinavia.
DESIGNING STUDIES THAT UTILIZE LARGE DATABASES: THE BASICS
Large databases provide a unique opportunity for observational orthopaedic research to examine research questions that cannot be examined in most RCTs. Observational database research typically provides nationwide information from a variety of patients, providers, and settings. In contrast to RCTs, which often have relatively small samples, strict patient selection criteria, and select providers, database studies provide real-world health care information from a broader, population-based perspective.
Orthopaedic database studies commonly use administrative data, that is, data collected for nonclinical purposes such as the payment of medical bills, program enrollment (eg, Medicare), or government monitoring and reporting. Administrative data research can be used to examine trends and health care variations; identify problem outcomes such as hospital readmissions, mortality, or significant adverse events; generate hypotheses; and refine research questions for future studies.
Administrative data studies offer broad generalizability, large numbers of patient records, and less attrition than clinical trials; they are faster and less costly than primary data collection and can often be linked with other datasets, such as Medicare with US Census data. The limitations of database studies relate to both the observational design and the datasets themselves. Because these studies are not randomized, sample selection bias is the greatest design limitation, whereby observed outcomes may differ because patients differed at baseline on factors other than treatment. Preanalytic efforts to restrict the sample to desired cases are essential, and analytic methods such as stratification or propensity score matching must additionally be used to balance observed factors across groups. Important baseline variables known to impact outcomes must be available and reliable, and variables can sometimes be combined to obtain information that is not available from a single variable. Other limitations include relatively few variables per dataset, the inability to determine surgical or treatment quality, limited detail on patient case mix, lack of functional outcomes, and the potential inability to follow patients over time (deidentified datasets).
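To make the matching idea concrete, the following is a minimal sketch of greedy 1:1 nearest-neighbor matching on a propensity score. The patient identifiers, scores, and the 0.05 caliper are hypothetical; real analyses would first estimate the score from baseline covariates (eg, by logistic regression) and use more sophisticated matching.

```python
def greedy_match(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching on propensity score.
    `treated` and `controls` are lists of (patient_id, score) pairs;
    each control is used at most once, and only within the caliper."""
    available = dict(controls)  # control id -> score
    pairs = []
    for tid, t_score in sorted(treated, key=lambda p: p[1]):
        # Find the unused control whose score is closest to this patient's
        best = min(available.items(),
                   key=lambda c: abs(c[1] - t_score),
                   default=None)
        if best and abs(best[1] - t_score) <= caliper:
            pairs.append((tid, best[0]))
            del available[best[0]]  # each control matched at most once
    return pairs

# Hypothetical scores: two treated patients, three potential controls
treated = [("T1", 0.30), ("T2", 0.62)]
controls = [("C1", 0.28), ("C2", 0.61), ("C3", 0.90)]
print(greedy_match(treated, controls))  # [('T1', 'C1'), ('T2', 'C2')]
```

After matching, outcomes are compared only within the matched pairs, so that treated and control groups are similar on the observed baseline factors that went into the score. Unmeasured factors, of course, remain unbalanced.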
The range of research questions that can be examined is directly related to the quality and complexity of the data, which is positively associated with the cost of the data. For example, datasets such as Medicare that can follow patients over time and across settings are more costly, harder to program, and more difficult to obtain than public-use deidentified inpatient hospital data (such as the Healthcare Cost and Utilization Project Nationwide Inpatient Sample).
Orthopaedic database studies require a professional team of researchers, surgeons, a biostatistician, a programmer, and a billing coder specific to the dataset (hospital vs. physician coder). All datasets have limitations, and users need to understand the limitations before deciding if the dataset can answer a given research question. The best research questions for large datasets are focused, specific, and determined before any data programming. Data cleaning is crucial; statistics cannot correct for poor data cleaning. One of the most common errors in orthopaedic database studies is failure to exclude unintended cases through meticulous data cleaning. Despite large samples, statistical power is not unlimited and studies of rare event outcomes, such as pulmonary embolism, may require multiple years of data for sufficient power.
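The point about rare events can be illustrated with a standard two-proportion sample size approximation. The pulmonary embolism rates below (0.5% vs. 0.3%) are hypothetical, chosen only to show the order of magnitude involved.

```python
import math

def n_per_group(p1, p2):
    """Approximate sample size per group to compare two proportions
    (normal approximation, two-sided alpha = 0.05, power = 0.80).
    Illustrates why rare outcomes demand very large samples."""
    z_alpha = 1.96  # two-sided alpha = 0.05
    z_beta = 0.84   # power = 0.80
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a drop in a rare event rate from 0.5% to 0.3%
# requires over 15,000 patients per group:
print(n_per_group(0.005, 0.003))
```

Even a single year of a large national dataset may fall short of such numbers once exclusions are applied, which is why studies of rare outcomes often pool multiple years of data.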
When carefully planned and conducted, large database studies are a powerful research tool for select clinical questions and for most health policy analyses conducted in the United States, both of which can inform health care and assist in planning future studies.
THE FUTURE OF LARGE-SCALE DATABASES: WILL THEY REPLACE THE CLINICAL TRIAL?
Archie Cochrane inspired us to ask the following questions when considering a new technology for clinical application: Can it work? Will it work? Is it worth it? In a perfect world, the answers to each of these questions would be found using data from high-quality experimental research. In reality, the majority of health policy decisions are made with imperfect information. An RCT cannot be completed for every new technology, drug, or device. However, a wealth of data collection is already automated into current and future health care delivery, and it should be utilized. Large computerized databases with millions of observations of procedures and outcomes may be used to assess the effectiveness and safety of routine care without the delays and prohibitive costs of RCTs. These data sources comprise the administrative by-products of financial transactions and medical records. The prospect of merging claims data with electronic health records shows great promise; however, methods for merging these while ensuring both quality control and privacy protection remain undefined.
The advantages of using these large-scale databases include timely and clinically relevant information. The huge samples provide statistical precision, external validity through less stringent selection of study subjects, and an economical means of investigating clinical questions. All forms of observational study design can be employed, including cross-sectional, cohort, case-control, case-crossover, and interrupted time series. Unfortunately, each of these designs suffers from the same limitations as all nonexperimental research; namely, the presence of unmeasured confounding that cannot be adjusted for. Perhaps an even greater danger that could arise from empirical reasoning in the absence of hypotheses is the increasing likelihood of false discoveries. Database research is also more prone to selection bias, where the likelihood of patient selection (or censoring) is affected by the treatment-outcome relationship. Measurement errors that lead to the misclassification of treatment, covariates, or outcome can also significantly bias results. To minimize these impediments to drawing valid inferences, specific scientific best practices should be adopted.21 These include generation of a priori hypotheses in a written protocol, detailed analytical plans noting specific methods and safeguards against bias, and transparent reporting with justification of any changes in plans. Potential clinically important effects should be defined a priori and the results discussed accordingly.
Similar to guidelines for the reporting of clinical trials and cohort studies, checklists have been developed for secondary analysis of large datasets to raise the standards for reporting of this research.22 In addition to describing the study design, target population, and how sources of bias were addressed, the interpretation should take into account the suitability of the database for the hypotheses tested, and caution should be taken when contradicting evidence from high-quality randomized trials. In conclusion, the integration of large databases for secondary analysis offers the opportunity to efficiently mine extant data for the purposes of comparative effectiveness research, but must be done rigorously to avoid the many potential pitfalls of bias.
1. National Trauma Data Bank. Chicago, IL: American College of Surgeons; 2015.
2. The Trauma Audit and Research Network. Salford, United Kingdom: NHS Foundation Trust; 2015.
3. PRAISE Investigators. Prevalence of abuse and intimate partner violence surgical evaluation (PRAISE) in orthopaedic fracture clinics: a multinational prevalence study. Lancet. 2013;382:866–876.
4. Foote CJ, Mundi R, Sancheti P, et al. Musculoskeletal trauma and all-cause mortality in India: a multicentre prospective cohort study. Lancet. 2015;385:S30.
5. FLOW Investigators. Fluid lavage of open wounds (FLOW): design and rationale for a large, multicenter collaborative 2 x 3 factorial trial of irrigating pressures and solutions in patients with open fractures. BMC Musculoskelet Disord. 2010;11:85.
6. The National Hip Fracture Database. London, United Kingdom: Royal College of Physicians. Available at: http://www.nhfd.co.uk/20/hipfractureR.nsf/welcome?readform. Accessed June 15, 2015.
7. Patel NK, Sarraf KM, Joseph S, et al. Implementing the national hip fracture database: an audit of care. Injury. 2013;44:1934–1939.
8. Horriat S, Hamilton PD, Sott AH. Financial aspects of arthroplasty options for intra-capsular neck of femur fractures: a cost analysis study to review the financial impacts of implementing NICE guidelines in the NHS organisations. Injury. 2015;46:363–365.
9. Irgens LM, Bjerkedal T. Epidemiology of leprosy in Norway: the history of the National Leprosy Registry of Norway from 1856 until today. Int J Epidemiol. 1973;2:81–89.
10. Havelin LI, Robertsson O, Fenstad AM, et al. A Scandinavian experience of register collaboration: the Nordic Arthroplasty Register Association (NARA). J Bone Joint Surg Am. 2011;93(suppl 3):13–19.
11. Gjertsen JE, Fenstad AM, Leonardsson O, et al. Hemiarthroplasties after hip fractures in Norway and Sweden: a collaboration between the Norwegian and Swedish national registries. Hip Int. 2014;24:223–230.
12. Gjertsen JE, Vinje T, Engesaeter LB, et al. Internal screw fixation compared with bipolar hemiarthroplasty for treatment of displaced femoral neck fractures in elderly patients. J Bone Joint Surg Am. 2010;92:619–628.
13. Yeung M, Bhandari M. Uneven global distribution of randomized trials in hip fracture surgery. Acta Orthop. 2012;83:328–333.
14. Frihagen F, Nordsletten L, Madsen JE. Hemiarthroplasty or internal fixation for intracapsular displaced femoral neck fractures: randomised controlled trial. BMJ. 2007;335:1251–1254.
15. Tidermark J, Ponzer S, Svensson O, et al. Internal fixation compared with total hip replacement for displaced femoral neck fractures in the elderly. A randomised, controlled trial. J Bone Joint Surg Br. 2003;85:380–388.
16. Talsnes O, Vinje T, Gjertsen JE, et al. Perioperative mortality in hip fracture patients treated with cemented and uncemented hemiprosthesis: a register study of 11,210 patients. Int Orthop. 2013;37:1135–1140.
17. Talsnes O, Hjelmstedt F, Pripp AH, et al. No difference in mortality between cemented and uncemented hemiprosthesis for elderly patients with cervical hip fracture. A prospective randomized study on 334 patients over 75 years. Arch Orthop Trauma Surg. 2013;133:805–809.
18. Gjertsen JE, Lie SA, Vinje T, et al. More re-operations after uncemented than cemented hemiarthroplasty used in the treatment of displaced fractures of the femoral neck: an observational study of 11,116 hemiarthroplasties from a national register. J Bone Joint Surg Br. 2012;94:1113–1119.
19. Rogmark C, Fenstad AM, Leonardsson O, et al. Posterior approach and uncemented stems increases the risk of reoperation after hemiarthroplasties in elderly hip fracture patients. Acta Orthop. 2014;85:18–25.
20. Langslet E, Frihagen F, Opland V, et al. Cemented versus uncemented hemiarthroplasty for displaced femoral neck fractures: 5-year followup of a randomized trial. Clin Orthop Relat Res. 2014;472:1291–1299.
21. Berger ML, Mamdani M, Atkins D, et al. Good research practices for comparative effectiveness research: defining, reporting and interpreting nonrandomized studies of treatment effects using secondary data sources: the ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report—Part I. Value Health. 2009;12:1044–1052.
22. Motheral B, Brooks J, Clark MA, et al. A checklist for retrospective database studies—report of the ISPOR Task Force on Retrospective Databases. Value Health. 2003;6:90–97.