Clinical Trials in Orthopaedics Research. Part I. Cultural and Practical Barriers to Randomized Trials in Orthopaedics

Wright, James G., MD, MPH, FRCSC1; Katz, Jeffrey N., MD, MSc2; Losina, Elena, PhD2

doi: 10.2106/JBJS.J.00229
The Orthopaedic Forum

Randomized clinical trials are the most rigorous clinical research design. However, trials are expensive, time-consuming, and challenging to design and complete. In May 2009, the Clinical Trials in Orthopaedics Research Symposium, sponsored by the American Academy of Orthopaedic Surgeons (AAOS), the Orthopaedic Research and Education Foundation (OREF), and the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), brought together multiple disciplines to define a randomized clinical trials research agenda, by focusing on important clinical questions in each subspecialty, and to debate the major methodological, cultural, and practical barriers to performing more randomized clinical trials in orthopaedics. We defined barriers as any challenge that makes a randomized clinical trial difficult to design or perform. We plan to report the deliberations of the Clinical Trials in Orthopaedics Research Symposium in three publications. The purpose of this first article is to present the cultural and practical barriers and to highlight the key infrastructure needed to support randomized trials in orthopaedics. We largely focused on randomized clinical trials but recognized that methodologically sound prospective cohort studies also provide important information1. Our deliberations were not exhaustive, and readers can refer to texts for basic information about randomized clinical trials not addressed during the symposium2. We have included the names of the symposium speakers in parentheses after the title of each section.

Cultural Issues (Marc Swiontkowski and James Wright)

Randomized clinical trials have been infrequently performed in surgery3. We use the term culture in this article to reflect how orthopaedic surgeons resolve clinical questions, respond to clinical controversies, express their enthusiasm for participation in trials, and respond to the results of randomized clinical trials. Culture is one potential explanation for the paucity of trials in orthopaedics and may be a barrier to more trials.

Each surgical procedure presents challenges that require ingenuity and often require deviations from standard practice. Evidence-based medicine, in contrast, proposes a standardized approach in which treatment is based on the best evidence, ideally from randomized trials4. The culture of orthopaedic surgery is one of acceptance and even promotion of divergence in opinions and treatment recommendations. Surgeons also strongly value personal experience. This preference is reflected in the predominant form of clinical research, the case series, which is usually a report of many years of experience. The reliance on experience also explains why surgeons often defer to senior or expert colleagues. However, an uncontrolled case series seldom resolves a clinical question definitively for many reasons, including uncertainty about surgical proficiency, differences among patient groups, and the effects of cointerventions. Furthermore, there is as much disagreement among experienced or expert clinicians as there is within the surgical literature. When randomized trials are available, surgeons often vigorously defend certain procedures even in the face of evidence of no benefit. Some surgeons view randomized trials with suspicion, even to the point of regarding trials that challenge the benefit of surgical procedures as an attack on the specialty. The end result is our frequent inability to resolve clinical controversies, leading to wide geographic variations in practice patterns5. Given that all of the treatment options are unlikely to provide equivalent outcomes, some patients may not be receiving ideal treatment.

If evidence-based medicine and randomized trials are held as an alternative paradigm, it is important to ask, “What is the value of randomized clinical trials?” One particularly cogent example comes from pediatric oncology6. Over the past thirty years, the death rate for pediatric patients with a malignant tumor has decreased substantially. This dramatic improvement in outcome has been attributed almost exclusively to the results of randomized trials. The Children’s Oncology Group, involving thousands of oncologists across North America, has approximately fifty trials ongoing at any one time, and the majority of children with cancer in North America are entered into a randomized clinical trial6. The challenge for orthopaedic surgeons is to provide similar leaps in outcome.

Why is orthopaedics different from internal medicine or pediatric oncology? Whether through selection or training, surgeons are fiercely independent. Individual surgeons feel compelled to “know” the right answer for every patient. While individual surgeons know the correct answer for themselves, they acknowledge that other surgeons may have a different answer for the same patient. The culture of orthopaedics no doubt begins in residency. The training of surgeons is still largely an apprenticeship model and highly hierarchical. As role models, surgeons convey little impression of doubt or uncertainty. For good reason, operating rooms leave little room for uncertainty. Furthermore, for surgeons, equipoise—comfort in recommending either of two treatment options—is seen as a sign of weakness, given the surgeon’s need to provide and reinforce definitive answers. Patients also want to know the “right” answer. As discussed above, surgeons are slow to accept the results of randomized trials. Surgeons have many explanations for rejecting study results that challenge their views, including “my patients are different,” “the surgeons in the trial don’t have sufficient skill,” “the authors set up the study to get this result,” and “the results don’t reflect my experience.” Residency programs contain little or no training in research methods, and few surgeons have specific training in randomized clinical trials. Education in randomized clinical trials during residency is hard to develop because of the primary focus on clinical skills and the additional time required. After surgeons become faculty members or clinical practitioners, reimbursement often penalizes those who perform research. While randomized clinical trials are often of high impact, because they usually span a minimum of five years, they often result in only one or two publications and a curriculum vitae that does not fill up quickly. Thus, the low ratio of publications to the time and effort expended may serve as a deterrent.

As discussed later, participation in randomized trials involves a different set of frustrations, including the exacting requirements of institutional review boards, lack of hospital support, trial costs, reduced efficiency in providing patient care, and difficulty in enrolling patients. The surgeons with the largest practices are often the most unwilling to join. Rather than struggle through all of the practical requirements of randomized trials, many surgeons find it easier simply to operate. Culture thus appears to be both a barrier to orthopaedic surgeons’ participation in randomized clinical trials and an influence on how they respond to the results of such trials.

At the symposium, leadership was proposed as a solution for the cultural barriers to the performance of randomized trials. Surgical leaders must admit uncertainty and acknowledge equipoise. Leaders also need to promote research, particularly randomized clinical trials. In doing so, leaders will influence their peers and, more importantly, the trainees who are the next generation of orthopaedic surgeons. Participation in trials takes time. While some may advocate or accept a financial penalty, a better model is to change compensation so that leading or participating in randomized trials does not penalize individual surgeon faculty members. While a select few will commit to the necessary training to design randomized trials, an option for most surgeons is to become a team member rather than the principal investigator of a randomized trial. Finally, leaders need to advocate within their specialties. There are many activities within specialty societies that can promote randomized clinical trials, including establishing formal clinical trials committees, providing instructional courses on randomized trial design, and providing grant funding7. While we have certainly not caught up to pediatric oncology, the number of randomized trials in orthopaedics is on the rise. In The Journal of Bone and Joint Surgery (American Volume), since 1975, the percentage of all published studies that are randomized clinical trials has increased from 4% to 21% and the percentage that are case series has decreased from 81% to 48%8. In summary, the culture of orthopaedics is slowly but clearly changing to an evidence-based approach.

Training and Experience of the Investigator (Robert Marx)

Throughout the symposium, the need for specific training in randomized clinical trials was apparent. The lack of sufficient skill or training in randomized clinical trials may be a barrier because it is difficult to design a randomized clinical trial without formal training such as an MSc, MPH, or PhD. The OREF-AAOS Health Services Research fellowship, discontinued for lack of funding, has trained many of the current leaders in orthopaedic clinical research, and this type of training is critical. Research training is difficult to fit into orthopaedic residency, and funding for research training is problematic. Even if a surgeon has research training, conducting a successful randomized trial requires the involvement of multiple disciplines with appropriate experience. For those new to running a randomized trial, mentorship, such as including a coinvestigator who has run a trial, is important. Experienced trials personnel, such as research coordinators and data management experts, are invaluable. It is extremely useful for institutions to have infrastructure such as research nurses and individuals with data management and biostatistical expertise. Even with support, execution is time-consuming, enrolling patients is frustrating and slower than expected, and maintaining morale is a constant challenge. Assembling a sufficient sample size frequently requires the participation of other investigators and/or centers, which may not enroll enough patients, may not obtain follow-up for all patients, or may deviate from the protocol.

At the symposium, we heard that the solution is to have formal training in randomized clinical trials and to include experienced personnel. To address the issues that arise in multicenter trials, regular and/or frequent contact with participating investigators, including providing and reviewing enrollment numbers by center and/or participant, is required. Realistically, running a randomized trial takes a minimum of one to three days per week of the principal investigator’s time. This commitment takes away from clinical time, with possible financial consequences that need to be addressed by leaders. In summary, appropriate expertise and experience are essential to a successful trial.

Barriers to Randomized Clinical Trials: An Academic Perspective (Daniel Berry)

Institutions and/or departments face several potential barriers to performing randomized clinical trials, including cost, time, surgeon resistance, questionable relevance and/or value, conflicts of interest, and patient preferences. Funding for randomized trials often does not cover the entire cost of the trial. While direct costs should include clinical resources, equipment, and the costs of research personnel, trial costs may be even greater when no infrastructure exists before the onset of the trial. Randomized trials also involve physician costs, including study administration time to obtain informed consent and the additional clinical time for study patients. Surgeons have entrenched beliefs that may interfere with trial participation and enrolling patients. New techniques may also serve as a barrier to surgeon participation because of the surgeon’s unfamiliarity and lack of proficiency. Conflict of interest may interfere with participation when surgeons are involved in the treatment(s) being evaluated or in potential alternatives to those evaluated in the randomized trial. Finally, patients come with their own preferences and/or reluctance toward randomization.

The solution is to consider potential revenue associated with studies, including the ability to increase business on the basis of reputation or expertise. Randomized clinical trials, particularly those involving new technologies, may serve as a drawing point for doctors and patients. In the design of clinical trials, the logistical aspects should be planned to minimize the burden on the doctor and the hospital. Department chairs have a prominent role in recognizing and promoting the academic value of randomized clinical trials to staff and hospital administration. Department chairs also have a role in focusing clinical researchers on important and feasible clinical questions. Conflict of interest needs to be managed institutionally to maximize surgeon participation. While the preference of patients with regard to treatment choices must be respected, institutions can help patients to understand that randomized clinical trials are a common feature of many academic institutions and that uncertainty about the best treatment is the rationale for a randomized trial.

Infrastructural Requirements (Michael Bosse)

The larger the randomized trial, the more complex the required infrastructure—often overwhelming the investigator and serving as a barrier to trial initiation. While not all randomized trials require all of the infrastructure shown in Figure 1, all trials require that all of the tasks be performed. The executive committee, chaired by the principal investigator, oversees the conduct of the randomized trial, including financing decisions, issues that arise at each participating site, and interactions with the data safety monitoring board and the granting agencies. The data coordinating center manages day-to-day operations, including training of staff, maintaining a manual of operations, managing data, monitoring regulatory compliance, maintaining institutional review board approvals, managing the budget, and overseeing web sites, if appropriate. The steering committee, comprising all site leads, reviews recruitment and any changes to the protocol. The adjudication committee, which must be at arm’s length from the investigators, evaluates individual patient eligibility when it is uncertain, determines outcomes when they are unclear, and identifies protocol violations. Protocol violations in turn must be reported to the institutional review board. The publication committee determines which papers will be written, the potential authorship, and the order of authorship. While decisions on authorship often cannot be finalized until the papers are finished, the earlier the discussion occurs, the less likely there will be disputes and damaged relationships. A database management center, essential for large multicenter studies, determines data elements and monitors the transfer, completeness, and quality of data. Finally, most funding agencies require a data safety monitoring board, which is often constituted by the granting agency and in other cases is formed by the institutional review board. The data safety monitoring board must maintain an arm’s-length relationship with the investigators while assessing the progress of the randomized trial and the safety of trial participants. While a data safety monitoring board can often serve in an advisory role, its primary function is to determine whether a trial should continue or be stopped early because of factors such as low recruitment, safety issues, or overwhelming efficacy. Institutional review boards also have the power to stop a randomized controlled trial. Finally, trial sites are responsible for local institutional review board approval, grants and contracts, patient enrollment, data collection, patient follow-up, and reporting of adverse events. In summary, a well-organized randomized trial is necessary to complete all of the required tasks. The solution is to consult early and widely with experienced trialists. Ideally, an experienced trialist should be part of the team.

Fig. 1

Data Management (Christine Chaisson)

The data management team may be one of the most underappreciated aspects of trial management. Poorly performed data management can be a serious barrier: the trial may be complete, but the data may be inaccurate or missing. The data management team designs forms, collects data, enters data, cleans and verifies data, tracks data reporting, creates databases, and produces final reports. A critical issue for all randomized trials is ensuring the integrity and security of the data. A separate set of safeguards is needed to protect personal health information. The data management team must be involved early in the randomized trial, ideally even prior to grant submission, to ensure appropriate funding for data capture. Electronic data capture, often complicated and expensive, may not be the best strategy for every trial. For many randomized trials, particularly smaller ones, paper forms are less complex and less expensive.

Decisions about what data to collect need to be made early. The focus should be on collecting only essential data. Excessive data collection is expensive and may affect recruitment if there are too many patient forms. Data collection forms, critical to successful data collection, need to be designed in advance and pilot-tested, and it is important that they be modified as the study progresses and as experience with the forms reveals flaws. Several principles guide the design of data collection forms. First, forms with open-ended questions are hard to code. Second, forms need to be uncluttered, with clear questions and specific, comprehensive, and nonoverlapping responses. Third, forms should be simple and pilot-tested for clarity. Data must be constantly monitored for accuracy and completeness, with appropriate coding for when and why data are missing. Finally, data collection personnel need training and retraining.
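
As a purely illustrative sketch (not part of the symposium material), the following Python fragment shows one way a data management team might encode the principles above: every missing value carries an explicit reason code, and simple range checks flag implausible entries for query. The field names, reason codes, and acceptable ranges are hypothetical assumptions for the example.

# Illustrative only: hypothetical field names, reason codes, and ranges.
# Missing values are never left blank; each carries an explicit reason code
# so that "not done" can be distinguished from "refused" or "lost to follow-up".

MISSING_REASONS = {
    "ND": "not done",
    "RF": "patient refused",
    "LF": "lost to follow-up",
}

# Hypothetical acceptable ranges for simple field-level edit checks.
RANGES = {
    "age_years": (18, 100),
    "pain_score": (0, 20),  # assumed 0-20 subscale for this example
}

def check_record(record):
    """Return a list of data queries for one case report form record."""
    queries = []
    for field, (low, high) in RANGES.items():
        value = record.get(field)
        if value is None:
            reason = record.get(field + "_missing_reason")
            if reason not in MISSING_REASONS:
                queries.append(f"{field}: missing without a valid reason code")
        elif not (low <= value <= high):
            queries.append(f"{field}: value {value} outside {low}-{high}")
    return queries

# Example: one record with a missing outcome (reason coded) and an implausible age.
patient = {"age_years": 213, "pain_score": None, "pain_score_missing_reason": "RF"}
for query in check_record(patient):
    print(query)

Checks of this kind are typically run on each incoming batch of forms so that data queries can go back to the sites while patients can still be recontacted.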

Data need to be reviewed by the principal investigator and the biostatistician while the trial is ongoing. Databases are often complex and need formal programming; database development typically exceeds the capability of the Excel spreadsheet program (Microsoft, Redmond, Washington). Furthermore, the biostatistician who will ultimately be responsible for the data analyses needs to be involved. We heard at the symposium that the solution is to consult an experienced data management center early in the design phase. With appropriate help, correctly performed data management saves time, money, and headaches.
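
To make concrete why trial databases usually need formal programming rather than a single flat spreadsheet, the minimal sketch below (again hypothetical, using Python’s built-in sqlite3 module) separates enrollment data from repeated follow-up visits and lets the database itself enforce that every visit belongs to an enrolled patient.

import sqlite3

# Minimal illustrative schema, not a production trial database:
# enrollment facts live in one table, repeated follow-up visits in another,
# and a foreign key ties every visit to an enrolled patient.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE patients (
    patient_id   TEXT PRIMARY KEY,
    site_id      TEXT NOT NULL,
    arm          TEXT NOT NULL CHECK (arm IN ('A', 'B')),
    enrolled_on  TEXT NOT NULL
);
CREATE TABLE visits (
    patient_id   TEXT NOT NULL REFERENCES patients(patient_id),
    visit_week   INTEGER NOT NULL,
    outcome      REAL,           -- NULL allowed, with the reason captured below
    missing_code TEXT,           -- e.g. 'LF' for lost to follow-up
    PRIMARY KEY (patient_id, visit_week)
);
""")

conn.execute("INSERT INTO patients VALUES ('P001', 'SITE01', 'A', '2009-05-07')")
conn.execute("INSERT INTO visits VALUES ('P001', 12, 4.5, NULL)")

# A visit recorded for an unenrolled patient is rejected by the database itself.
try:
    conn.execute("INSERT INTO visits VALUES ('P999', 12, 3.0, NULL)")
except sqlite3.IntegrityError as error:
    print("Rejected:", error)

Even a schema this small captures relationships (one patient, many visits) and constraints that a flat spreadsheet cannot enforce, which is what formal database programming contributes to a trial.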

Institutional Review Boards (Jeff Katz)

Institutional review boards are often seen as a barrier to randomized clinical trials, but they have important duties, including beneficence (to maximize benefits for science, humanity, and research subjects and to minimize risk or harm), respect (to protect the autonomy and privacy rights of participants), and justice (to ensure fair distribution of the costs and benefits of research among persons and groups). However, individual institutional review boards interpret state and federal regulations locally, sometimes leading to frustration in gaining approval and to increased complexity in multicenter trials, with different recommendations on the study protocol at each center. Regulations may also change frequently during the conduct of a trial. Essential aspects of an ethical study are that subjects are informed and that consent is not coerced. Institutional review boards must ensure that consent forms are complete and understandable, that health information is kept private and confidential, and that consent is obtained by appropriate individuals in a noncoercive fashion. As the randomized trial progresses, the institutional review board needs to be informed of protocol changes, to receive current revisions of consent forms, and occasionally to notify regulatory agencies and patients of adverse events. The frustration for many investigators is the sometimes bureaucratic nature of the institutional review board process, including the completion of many forms, prolonged waits for approvals, and entire studies hinging on institutional review board approval. Furthermore, the institutional review board is often remote from investigators, faceless and formal in correspondence, and foreign in process. The result is that the institutional review board is viewed as a barrier at minimum and an “enemy” at worst.

The solution lies in budgeting time and money to engage in and complete the institutional review board process. Develop a personal relationship with the individual in charge of reviewing protocols and with the chair of the institutional review board. Attend an institutional review board meeting or join the committee to understand the process, work with the institutional review board to move the process along as quickly as possible, and feel free to call and speak with the staff. In summary, investigators can look to the institutional review board for help with these essential processes.

Funding for Trials (James Panagis)

Few trials can succeed without funding. Approximately 5% of the NIAMS appropriation is devoted to randomized clinical trials, which currently cost approximately $28 million annually. Of the current forty-six trials funded by the National Institutes of Health (NIH) in 2009, eight were orthopaedic (NIAMS).

A frequently successful strategy is to use the Randomized Clinical Trials Planning Grant (R34) before the R01 application. Currently, the R34 provides $100,000 for twelve months to finalize the Manual of Operations and Procedures, resolve gender and/or minority inclusion, develop collaborative arrangements, standardize procedures, develop data collection and management tools, and finalize training plans (but it does not allow pilot collection of data). While the NIH has supported orthopaedic randomized clinical trials, other potential sources of trial funding include the OREF, disease-specific organizations, and specialty orthopaedic societies.

Early conversations with NIH staff are needed to discuss the appropriateness of an idea and to ensure compliance with NIH protocols. There are specific restrictions on grants with budgets of >$500,000 per year in direct costs. Research studies with these large budgets are accepted only at certain times of the year and require approval for submission on the basis of preapproval protocols submitted three months in advance of the deadline for the full grant submission. Some of the criteria for acceptance of these large studies include relevance to the NIAMS mission, potential for new information, public health importance, potential to change clinical practice, feasibility, portfolio balance, and availability of funds. Granting agencies are looking for research that is relevant to their mission, to improving health, and to changing clinical practice and that demonstrates feasibility while being scientifically sound. The solution is early consultation with the NIH director responsible for randomized clinical trials, who can provide advice about the clinical question and trial feasibility as well as guide the investigator through the application process.

Regulation of Trials (Jonette Fox)

Evaluating or bringing new devices to market inevitably involves the U.S. Food and Drug Administration (FDA)9. As with any bureaucratic requirement, the FDA may serve as a perceived or true barrier to randomized clinical trials. The FDA has a legislative mandate to evaluate new orthopaedic devices. These trials are designed to demonstrate the safety and noninferiority of new devices compared with existing devices. While this poses some hurdles, early and collaborative involvement will aid the clinical investigator. The aim of the Center for Devices and Radiological Health, a center within the FDA, is to get safe and effective devices to market as quickly as possible, to ensure that devices currently on the market remain safe and effective, and to help the public obtain accurate information. The Office of Device Evaluation, an office within the Center for Devices and Radiological Health (Fig. 2), ensures that the basic safety of a product has been demonstrated prior to initiation of a trial, that the trial will address important safety and effectiveness questions, that the risks and benefits to participants have been noted, that patients are informed, and finally that the trial size and methodology are appropriate. Trials are appropriate for a new device or a new use of a legally marketed device. An investigational device exemption is the route for the collection of safety and effectiveness information.

Fig. 2

The decision about allowing a device on the market requires an appraisal of the evidence by the FDA. While evidence of all types (except isolated case reports, opinions, or random experience) is useful to the Center for Devices and Radiological Health, randomized trials provide the most compelling information. Important considerations in trial design include the purpose of the study and the intended use of the device, the study population and controls, the number of patients and sites, inclusion and exclusion criteria, adequate case report forms, appropriate monitoring, adequate statistical analyses including consideration of covariates, and appropriate end points for effectiveness and safety. It is important to recognize that while certain study designs may be satisfactory for the FDA, they may not be sufficient for the Centers for Medicare and Medicaid Services (CMS) when it considers coverage. The investigator must ensure that the investigation is conducted according to the research plan and protects the rights, safety, and welfare of participants; obtain informed consent; control the use of the device under investigation; and report adverse events. Issues that need to be resolved at the approval stage, on the basis of the preapproval studies, include whether the studies were performed on narrow or broad samples of patients, whether the intervention was performed by experienced or inexperienced clinicians, and whether genetic idiosyncrasies were considered. The challenges for the investigator in designing trials for the FDA include the need to establish a minimal clinically important difference, the understanding that not all safety end points are equal, and the need to ensure that risks and benefits are balanced and that the evidence reflects the so-called real-world situation. The daunting task of managing the application and approval process may deter many potential investigators. The solution we heard at the symposium is to have an early discussion with the FDA, even before submission of the investigational device exemption for a randomized clinical trial. Furthermore, it is possible, and sometimes preferable, to invite the NIH and CMS to these early discussions to make the entire process less daunting.

The Role of Journals (James Heckman)

The intent of journals is to enhance patient care. However, journals may be perceived as barriers to the publication of trial results. Journals and reviewers tend to favor positive results, leading to potential publication bias. The involvement of sponsoring companies has led to concerns about conflicts of interest. For both of these reasons, registration of trials, such as on the web site ClinicalTrials.gov, must occur at the initiation of any randomized trial. In some circumstances, such as trials involving products that are subject to FDA regulations, registration is required by law.

The journey from data to wisdom has several steps, and the obligation of journals is to publish high-quality research that pushes surgeons to do more rigorous work. Journals also serve as a readily accessible repository of quality information, including research on study methodology, for readers and researchers. As noted above, JBJS has more than quintupled the percentage of Level I and Level II studies since 19758. JBJS has more than 800 volunteer peer reviewers and many editors, all devoted to publishing the best research. Several initiatives have been developed to encourage high-quality studies and to educate the readership of JBJS, including the evidence-based medicine section (three annotated abstracts published quarterly with expert commentary), the What’s New subspecialty section reviewing new developments with a focus on Level I studies, level-of-evidence ratings provided for every published scientific article, and grades of recommendation for treatment recommendations in review articles. Finally, JBJS began publishing evidence-based guidelines from the AAOS in July 2009. A high-quality design does not ensure a clinically meaningful question; both the design and the clinical question need to be considered when evaluating a study. While peer review is not perfect, it is the best system to ensure that the highest-quality, relevant, and new information is published. The solution for surgeons hoping to have their randomized clinical trials published is to design the trial rigorously, consult experienced colleagues, and format the randomized trial report with use of established guidelines such as CONSORT (Consolidated Standards of Reporting Trials) or CLEAR NPT (Checklist to Evaluate a Report of a Nonpharmacological Trial)10.

In summary, while randomized clinical trials pose many practical challenges, the momentum has clearly shifted from “Why don’t we do more randomized trials?” to “What clinical questions should be addressed and how can we make randomized trials better?”

∗This report is based on the Clinical Trials in Orthopaedics Research Symposium sponsored by the American Academy of Orthopaedic Surgeons and the Orthopaedic Research Society, Albuquerque, New Mexico, May 7, 8, and 9, 2009.

Disclosure: In support of their research for or preparation of this work, one or more of the authors received, in any one year, outside funding or grants in excess of $10,000 from the American Academy of Orthopaedic Surgeons, Orthopaedic Research and Education Foundation, and National Institutes of Health. Neither they nor a member of their immediate families received payments or other benefits or a commitment or agreement to provide such benefits from a commercial entity.

References

1. Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med. 2000;342:1887-92.
2. Jadad AR, Enkin M. Randomized controlled trials: questions, answers, and musings. London: BMJ Publishing Group; 2007.
3. Solomon MJ, McLeod RS. Should we be performing more randomized controlled trials evaluating surgical operations? Surgery. 1995;118:459-67.
4. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ. 1996;312:71-2.
5. Weinstein JN, Bronner KK, Morgan TS, Wennberg JE. Trends and geographic variations in major surgery for degenerative diseases of the hip, knee, and spine. Health Aff (Millwood). 2004;Suppl Web Exclusives:VAR81-9.
6. Wittes RE. Therapies for cancer in children—past successes, future challenges. N Engl J Med. 2003;348:747-9.
7. Wright JG, Gebhardt MC. Multicenter clinical trials in orthopaedics: time for musculoskeletal specialty societies to take action. J Bone Joint Surg Am. 2005;87:214-7.
8. Hanzlik S, Mahabir RC, Baynosa RC, Khiabani KT. Levels of evidence in research published in The Journal of Bone and Joint Surgery (American Volume) over the last thirty years. J Bone Joint Surg Am. 2009;91:425-8.
9. Feinsod M, Chambers WA. Trials and tribulations: a primer on successfully navigating the waters of the Food and Drug Administration. Ophthalmology. 2004;111:1801-6.
10. Chan S, Bhandari M. The quality of reporting of orthopaedic randomized trials with use of a checklist for nonpharmacological therapies. J Bone Joint Surg Am. 2007;89:1970-8.
Copyright © 2011 by The Journal of Bone and Joint Surgery, Incorporated