On Clinical Trials

Jones-Jordan, Lisa A. PhD, FAAO; Twa, Michael D. OD, PhD, FAAO

Optometry and Vision Science: April 2021 - Volume 98 - Issue 4 - p 307-309
doi: 10.1097/OPX.0000000000001685

The progression of knowledge in biomedical research often begins with anecdotal observations or clinical associations and culminates with systematic experimental studies of cause and effect—clinical trials. When properly conducted, clinical trials are the foundation of reliable evidence that clinical practitioners use as the basis for treatment decisions, risk-benefit assessments, prognostications, diagnostic interventions, and guidance on standards of practice. Their essential function—establishing the knowledge base that supports clinical decision making—underscores the importance of reflecting on the purpose and challenges related to clinical trial design, conduct, analysis, and reporting.

A current illustration of the importance of clinical trials is the rapid development and clinical testing of messenger RNA vaccines for the coronavirus. As the world has waited and suffered over the past year, researchers and manufacturers have been scrambling to develop vaccines that are safe and effective against the coronavirus. In record time, vaccine researchers, pharmaceutical manufacturers, and clinical investigators decoded the genetic sequence of the virus responsible for a global pandemic, constructed an entirely new class of vaccines based on messenger RNA, conducted clinical trials to assess safety and efficacy, and then mass-produced billions of doses of vaccines that are about 95% effective—an astounding result that ultimately came down to the design and conduct of clinical trials.

One of the most essential design elements of a good clinical trial is careful consideration and control of bias—and the risk of bias is everywhere. Bias is the great hazard of clinical trial design, and it requires constant vigilance. Clinical equipoise is a genuine state of uncertainty about the best treatment or method to put into clinical practice. It is a position of fairness, free from bias, balanced, and with open consideration for any intervention as the best possible treatment. When there is bias or a lack of clinical equipoise, it can influence decisions about study design, conduct, and analyses, which can each undermine the validity of clinical trial results.


Over time, the number of different clinical trial designs has grown and so has their complexity. The most common clinical trial designs are placebo-controlled trials, crossover trials, and noninferiority trials.1 When designing a clinical trial, the structure is usually driven by the specific research question, the outcome measures, characteristics of the disease or therapies under study, the availability of subjects for study, and the availability of funding.

Whatever the design constraints, the genesis of a clinical trial normally starts with a well-phrased question. From the concepts of evidence-based clinical practice, good questions are formulated to address the PICO construct2:

  • P: patient, problem, or population of interest
  • I: intervention; treatment trials may involve drugs; diagnostic trials may involve clinical imaging or laboratory tests
  • C: comparison or control group
  • O: outcome measure

In each case, the trial is designed to address a clinical question, and each element of the structured question receives critical scrutiny to refine what precisely is of interest and how best to specifically address the clinical question. For example, the Age-Related Eye Disease Study was designed “to evaluate the effects of pharmacologic doses of (1) antioxidants and zinc on the progression of AMD and (2) antioxidants on the development and progression of lens opacities.”3,4 The participants were adults between 55 and 80 years of age. In the third arm of the Age-Related Eye Disease Study, participants were randomized to a treatment intervention receiving dietary supplements (500 mg vitamin C, 400 IU vitamin E, 15 mg β-carotene, 80 mg zinc oxide, and 2 mg cupric oxide) or a nonintervention (placebo) comparison group. The primary outcome measure was defined as progression of AMD. The study was designed to find environmental, epidemiologic, and personal risk factors for developing macular degeneration and to observe possible protective effects of dietary supplements.

One can continue with the PICO framework to dissect some of the potential sources of bias in clinical trial design.5 Participant selection is a critical point for bias control. Ideally, the results from well-designed clinical trials would translate to clinical practice everywhere. However, practically speaking, this is difficult to ensure. If trial participants differ from the population of patients seen in a clinical practice implementing the clinical trial guidance (e.g., they differ by age, sex, disease severity, or other factors), the trial recommendations may not apply because of an inherent design bias.

Randomization in the assignment of study interventions is a fundamental study design method used to control bias. If investigators or participants have the potential to influence who is assigned to intervention or observation, it is easy for subjective bias to influence study outcomes. A parent who has suffered from a lifetime of severe refractive error would be highly motivated to seek interventions designed to keep their child from sharing the same fate. Investigators who know the family history may be inclined to prefer one group assignment over another, thus biasing the outcome. To work, randomization must be designed to avoid bias; it must truly be random assignment.
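One common way to achieve assignment that no investigator can predict or influence, while keeping the groups the same size, is permuted-block randomization. The following minimal sketch (the function name, block size, and group labels are ours, for illustration only) shows the idea:

```python
import random

def permuted_block_randomization(n_participants, block_size=4, seed=None):
    """Assign participants to 'treatment' or 'control' in shuffled blocks.

    Each block contains an equal number of each arm, so group sizes stay
    balanced throughout enrollment, and shuffling within a block keeps the
    next assignment unpredictable to investigators and participants.
    """
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        block = (["treatment"] * (block_size // 2)
                 + ["control"] * (block_size // 2))
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]

schedule = permuted_block_randomization(12, block_size=4, seed=1)
print(schedule.count("treatment"), schedule.count("control"))  # 6 6, by construction
```

In practice, the assignment schedule would be generated and held by a coordinating center, not by the examiners, so that knowledge of upcoming assignments cannot bias enrollment.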

Comparison groups—or control groups—by design provide contrast for the trial's intervention group and should be drawn from the same representative population as the intervention group and in sufficient numbers to ensure the statistical power needed to answer the study questions. Again, randomized assignments are a common protection from group-wise bias. However, there are several ways to create comparison groups and many different comparisons that may be of interest for a particular study question that will drive study design considerations. The most basic design is provided by treatment and nontreatment groups. However, this may be unethical in some clinical scenarios, and multiple interventions compared with one another may provide an ethical study design and a way forward. Bias can come from knowing or having control over group assignments, and good trial design can help to minimize these sources of bias as well by masking or restricting who makes the assignments, who has access to the assignment information (examiners, analysts, data safety boards, etc.), who decides when assignments will be revealed, and so on.
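The requirement of "sufficient numbers to ensure the statistical power needed" can be made concrete with the standard normal-approximation sample-size formula for comparing two group means, n = 2(z_{1-α/2} + z_{power})^2 (σ/δ)^2 per arm. A minimal sketch (the function name and example values are ours):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Approximate participants needed per arm to detect a true mean
    difference `delta` between two groups with outcome SD `sigma`,
    using the two-sample normal-approximation formula:
        n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sigma / delta)^2
    """
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    return ceil(2 * (z * sigma / delta) ** 2)

# Detecting a half-SD difference at alpha = .05 with 80% power:
print(n_per_arm(delta=0.5, sigma=1.0))  # 63 per arm
```

Note how the required n grows with the square of σ/δ: halving the detectable difference quadruples the sample size, which is why small expected treatment effects drive large trials.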

If essential trial design elements are missing, the validity and integrity of the entire trial are undermined. Imagine the implications if shortcuts, mistakes, or omissions were allowed when developing the coronavirus vaccine. Although the stakes for vision science may be less dramatic, the rules that guide clinical trial design and conduct are important, nonetheless. These clinical trial standards are the basis of public trust and that trust is essential for supporting and improving public health.


Modern clinical trials often address biases that can arise during the conduct of the trial. A basic mantra of any research study is to follow the protocol, but it is up to the research team to do so. The way clinical measures are performed can have an impact on outcome measures. For example, encouraging participants to try harder before stopping during visual acuity measurements can result in better acuity and more repeatable estimates when compared with measures where subjects are permitted to stop whenever they like.6 These types of errors can be avoided by developing and effectively implementing standardized procedures and protocols. This should include preliminary studies of any key clinical measures to understand the quality and consistency of those measures. Moreover, these studies can provide the basis for standard protocols included in personnel training. Each of these steps can help provide more accurate and consistent clinical measures that are less susceptible to examiner or subject bias during trial conduct. Masking examiners, when feasible, is another useful way to help minimize the effects of performance bias. Attrition bias can occur when subjects leave a study and their results or outcome measures are no longer available. This may be due to many factors, such as a lack of perceived benefit or treatment complications.


Bias in clinical trials can also come from how the data are analyzed. When a trial is designed, a statistical analysis plan is written with input from a data safety and monitoring board; it will often include a plan for interim analyses, early looks at the accumulating results. The interim analysis is meant to address two potential issues related to safety. The first is straightforward: ensuring that the treatment under study does not carry an increased risk of harm. The second seems equally straightforward but is often not implemented properly. Monitoring for treatment effect is based on the assumption that there is a level of effectiveness at which the study must be stopped because it is doing harm to those not receiving the preferred treatment. This is where the a priori analysis plan and equipoise are critical.

When designing the statistical analysis plan, the primary hypothesis driving the study, including the length of time until ascertainment of the primary outcome, determines the number and timing of the interim analyses. Using the chosen monitoring method, the P value required to stop the study for a treatment effect is calculated, and it is not simply α = .05. So that interim analyses do not inflate the type 1 error, a frequently used method determines an adjusted P value for stopping the study early using an α-spending function.7 This assigns a small P value at each interim look, balancing the type 1 error against the potential for early termination because of a treatment effect. Consequently, early stopping for a treatment effect is an uncommon occurrence.
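As an illustration, the Lan-DeMets O'Brien-Fleming-type spending function cited above7 can be evaluated in a few lines; the sketch below (the function name is ours) shows how little of the overall α is "spent" at early looks, using only the Python standard library:

```python
from statistics import NormalDist

def obrien_fleming_spent(t, alpha=0.05):
    """Cumulative two-sided type 1 error spent by information fraction t
    (0 < t <= 1) under the Lan-DeMets O'Brien-Fleming spending function:
        alpha*(t) = 2 * (1 - Phi(z_{alpha/2} / sqrt(t)))
    """
    z = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = .05
    return 2 * (1 - NormalDist().cdf(z / t ** 0.5))

for t in (0.25, 0.50, 0.75, 1.00):
    print(f"information fraction {t:.2f}: alpha spent ~ {obrien_fleming_spent(t):.4f}")
```

Because almost none of the α = .05 budget is spent at early interim looks (well under 1% of outcomes observed halfway through spends less than .01), the adjusted P value needed to stop early is very small, which is why early termination for a treatment effect is rare.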

Another core statistical principle for well-designed clinical trials is the implementation of an intent-to-treat analysis.8 This analysis uses all the data, with participants analyzed in the treatment group to which they were originally randomized, regardless of whether they left their assigned treatment group, complied with treatment, or completed all their visits. The estimates generated from these analyses are conservative and are more likely to reflect treatment effects that may be seen in clinical practice, where patients are not perfectly compliant. This makes it the preferable approach for answering the primary question of whether a treatment works, because it does not artificially inflate the estimate of effect.

The alternative analysis is a per-protocol analysis. This analysis includes the subset of individuals who followed the treatment plan as prescribed, either excluding those who crossed over or dropped out or analyzing them in the treatment group into which they crossed over. This analysis plan ignores the benefits achieved by randomization and introduces bias. In addition, because it uses the participants who may have received the maximum treatment, this analysis gives the largest treatment effect, if one exists. Although an advocate may argue that this is the estimate that should be reported, its lack of representativeness makes it a suboptimal portrayal of the primary outcome of a clinical trial.
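The contrast between the two analyses can be seen in a toy simulation, assuming a true benefit that only compliant treated participants receive (all names and numbers below are illustrative, not drawn from any real trial):

```python
import random
from statistics import mean

def simulate_trial(n_per_arm=5000, effect=1.0, compliance=0.7, seed=0):
    """Toy two-arm trial: the outcome improves by `effect` only for
    compliant treated participants. Returns (intent-to-treat estimate,
    per-protocol estimate) of the treatment effect."""
    rng = random.Random(seed)
    control = [rng.gauss(0, 1) for _ in range(n_per_arm)]
    treated, compliant = [], []
    for _ in range(n_per_arm):
        c = rng.random() < compliance            # did they follow the protocol?
        compliant.append(c)
        treated.append(rng.gauss(effect if c else 0.0, 1))
    # ITT: everyone analyzed as randomized, compliant or not
    itt = mean(treated) - mean(control)
    # Per-protocol: noncompliers excluded from the treated arm
    pp = mean(y for y, c in zip(treated, compliant) if c) - mean(control)
    return itt, pp

itt, pp = simulate_trial()
print(f"intent-to-treat ~ {itt:.2f}, per-protocol ~ {pp:.2f}")
```

With 70% compliance, the intent-to-treat estimate is diluted toward roughly compliance × effect, the benefit a clinic with similarly imperfect compliance would actually see, whereas the per-protocol estimate recovers the full effect among compliers and so overstates what routine practice can expect.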


Once conducted, clinical trial results are normally reported based on a common set of guidelines defined in the CONSORT statement.9 The results should be reported in a manner consistent with the original study aims. Investigators should respect the original study design, reporting outcomes by groups, treatment levels, or other original subject groupings. The a priori analysis plan should drive the way in which study results are reported. Above all else, investigators should be up-front about what was done and what outcomes resulted. To do otherwise is a breach of an investigator's ethical duty to the participants, their funding agencies, and the broader public who depends on investigators for integrity and unbiased reporting.

Investigators should make clear the clinical implications of their findings. Statistical significance may be interesting to find, but clinically relevant differences between participant groups are far more important. For example, if investigators find a small but statistically significant difference in axial length after treatments designed for myopia, it is important to translate the size and impact of this result in terms of routine clinical findings. Likewise, when reporting results that may be confounded by multiple factors, it is important to show both adjusted and unadjusted data to help others understand the impact of any confounding variables.

At no time during an interim analysis should results be unmasked for anyone, except for someone responsible for data coordination and analysis. Study leaders, examiners, patients, and the data safety and monitoring board all remain masked until the conclusion of the study. At no time do interim analyses lead to results that are publishable or presentable, unless they are associated with termination of the trial for the primary hypothesis. The notion that keeping some people masked allows for the presentation or publication of results is misguided. Part of the implicit agreement between participants and investigators, made when the consent to be in a study is signed, is that the participants will be notified of the results before any results are published. Maintaining the illusion of keeping a trial “randomized” to allow for results to be published is just that, an illusion. This undermines investigator equipoise and can inject bias into the study by changing the behaviors of investigators, examiners, or participants. The protocol and statistical analysis plan exist to minimize bias and maintain the confidence of the scientific community and the public in the rigor of the study.

Minimizing bias and maintaining rigor in a well-conducted clinical trial are a continuous investment, requiring attention to the details from the design of the protocol to the last publication. As vision researchers, we all do our part to support public trust in science while improving our public health.


1. Piantadosi S. Clinical Trials: A Methodologic Perspective, 3rd ed. Hoboken, NJ: Wiley; 2017.
2. Twa MD. Evidence-based Clinical Practice: Asking Focused Questions (PICO). Optom Vis Sci 2016;93:1187–8.
3. Chew EY, Clemons TE, Agron E, et al. Ten-year Follow-up of Age-related Macular Degeneration in the Age-Related Eye Disease Study: AREDS Report No. 36. JAMA Ophthalmol 2014;132:272–7.
4. Chew EY, Sperduto RD, Milton RC, et al. Risk of Advanced Age-related Macular Degeneration After Cataract Surgery in the Age-Related Eye Disease Study: AREDS Report 25. Ophthalmology 2009;116:297–303.
5. Evans SR. Fundamentals of Clinical Trial Design. J Exp Stroke Transl Med 2010;3:19–27.
6. Gordon MO, Schechtman KB, Davis LJ, et al. Visual Acuity Repeatability in Keratoconus: Impact on Sample Size. Collaborative Longitudinal Evaluation of Keratoconus (CLEK) Study Group. Optom Vis Sci 1998;75:249–57.
7. Lan KK, DeMets DL. Discrete Sequential Boundaries for Clinical Trials. Biometrika 1983;70:659–63.
8. Pocock SJ. Clinical Trials: A Practical Approach. Hoboken, NJ: Wiley; 1983.
9. CONSORT 2010. Lancet 2010;375:1136.
Copyright © 2021 American Academy of Optometry