
What is “Evidence-Based” Strength and Conditioning?

English, Kirk L., MA (1); Amonette, William E., PhD, CSCS*D (2); Graham, Marilynn, MS, CSCS (3); Spiering, Barry A., PhD, CSCS (4)

Strength & Conditioning Journal: June 2012 - Volume 34 - Issue 3 - p 19–24
doi: 10.1519/SSC.0b013e318255053d


(1) University of Texas Medical Branch, Galveston, Texas

(2) University of Houston–Clear Lake, Houston, Texas

(3) University of Houston, Houston, Texas

(4) California State University, Fullerton, California



Kirk L. English is a PhD candidate at the University of Texas Medical Branch and an Exercise Physiologist with JES Tech at NASA-Johnson Space Center.



William E. Amonette is an assistant professor in the Fitness and Human Performance Program at the University of Houston–Clear Lake.



Marilynn Graham is an adjunct instructor at the University of Houston.



Barry A. Spiering is a Research Physiologist in the Military Performance Division at the United States Army Research Institute of Environmental Medicine.



Recently, the term “evidence-based” has begun appearing in the field of strength and conditioning. This term has been used with increasing frequency at the last several National Strength and Conditioning Association (NSCA) National Conferences and in recent issues of the Strength and Conditioning Journal. Because the term “evidence-based” has yet to be formally introduced to many strength and conditioning practitioners, it is at risk of being misinterpreted and, unfortunately, misused to promote products and concepts.

The term “evidence-based” originated in the field of medicine in the early 1990s. Evidence-based medicine, the forerunner to evidence-based practice (EBP), was largely conceived and guided by Sackett et al. (5,8–12) in response to contentions that less than half of all medical decisions were supported by research evidence (3,14). The realization that critical, potentially life-altering clinical decisions were being made based on outdated medical textbooks, information obtained while in medical school decades prior, and practices and preferences handed down from mentors and senior physicians drove Sackett to formulate a systematic process by which physicians could incorporate “best evidence” (e.g., cutting-edge research) to augment professional knowledge and experience and inform their everyday clinical practice.

In light of what we perceive as the great potential of EBP in the field of strength and conditioning, it is essential to provide a clear understanding of the EBP process and precisely define what is “evidence-based” strength and conditioning. Thus, the purpose of this article is to (a) clearly define EBP as it relates to the field of strength and conditioning, (b) briefly describe the 5 steps of the EBP process, (c) discuss the utility of EBP in modern strength and conditioning, and (d) provide a few recommendations for integrating science and experience to improve practice.

WHAT IS EVIDENCE-BASED PRACTICE?


EBP in the context of health care has been defined as the use of a systematic approach based on evidence, professional reasoning, and patient preferences to improve patient outcomes (12,13). We propose a refined definition of EBP, adapted for the field of strength and conditioning: a systematic approach to the training of athletes and clients based on the current best evidence from peer-reviewed research and professional reasoning. This approach should be used within the context of a specific needs analysis.

This definition contains several important components. First, EBP is a systematic process that requires a conscientious and judicious search of available research to find the current best evidence for a given topic (12). EBP does not blindly follow the recommendations of experts or base decisions on the casual reading of a few scientific abstracts. It is a continuous process that requires a long-term commitment to learning in-depth information about a variety of topics to make the best decisions for athletes and clients. Furthermore, when attempting to find evidence to support or refute a training technique, exercise device, or nutritional supplement, strength and conditioning professionals must keep an open mind. The practitioner must weigh the evidence, giving fair and equal treatment to both sides.

Second, professionals should not simply search for evidence—they must search for the current best evidence. Regardless of the question, it is likely that there is at least some information both supporting and refuting a given practice. The challenge the evidence-based practitioner faces is uncovering the “best evidence.” It is also important to realize that the current best evidence may change over time—EBP is a continuous process. The ultimate goal of an evidence-based strength and conditioning program is to provide athletes and clients with the best training plan based on the current state of knowledge.

Third, experienced professional reasoning will always be an integral part of strength and conditioning practice. Often, there is no specific evidence available for a new training technique. If practitioners limit their training to only those techniques for which research has reached a consensus, training will never progress. Although all programs should be based on proven techniques originating from a conscientious and judicious evaluation of the current best evidence, professional reasoning is necessary to fill in the gaps and drive performance to higher levels. In turn, practical experiences will drive research to new levels.

Finally, a program will only be effective if it is created in the context of specific team/athlete/client needs. Programs that address these specific needs using the current best evidence, and professional reasoning when solid evidence is lacking, will maximize strength and conditioning outcomes.

THE 5 STEPS OF EVIDENCE-BASED PRACTICE


EBP, as defined by Sackett et al. (9), is a 5-step process used to find and integrate research evidence into daily practice. These steps are as follows: develop a question, find evidence, evaluate the evidence, integrate the evidence into practice, and reevaluate the evidence (Table 1). We have previously discussed in detail the steps in EBP as it pertains to exercise science (1,2); here we simply wish to highlight a few important points.

Table 1. The 5 steps of the evidence-based practice process: develop a question, find evidence, evaluate the evidence, integrate the evidence into practice, and reevaluate the evidence.

STEP 1: DEVELOP A QUESTION


The EBP process begins with a very practical question, for instance: Is Exercise A superior to Exercise B for improving lower-body power? Does Supplement X improve recovery after exhaustive exercise? However, to obtain useful answers, it is essential to define the question precisely. The acronym "PICOT" (Population, Intervention, Comparison, Outcome, Time) is a helpful tool for ensuring that the question is fully specified (6). Using the PICOT acronym, a precisely defined question would be: Is performing 3 sets per exercise superior to performing 1 set per exercise over the course of an 8- to 12-week resistance training program for improving back squat one-repetition maximum performance in previously untrained healthy females? This question specifically defines the population (previously untrained healthy females), the intervention (resistance training program), the comparison (3 sets per exercise versus 1 set per exercise), the outcome (back squat one-repetition maximum [1RM]), and the time (8–12 weeks).
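
To make the structure of a PICOT question concrete, the following minimal Python sketch encodes the 5 components and assembles them into a single answerable question. The class and field names are illustrative assumptions, not part of any published EBP tool.

```python
from dataclasses import dataclass

@dataclass
class PICOTQuestion:
    population: str    # P: who is being trained or studied?
    intervention: str  # I: what is being done?
    comparison: str    # C: compared against what?
    outcome: str       # O: what is measured?
    time: str          # T: over what period?

    def as_sentence(self) -> str:
        """Assemble the 5 components into one answerable question."""
        return (f"For {self.population} undergoing {self.intervention}, "
                f"is {self.comparison} more effective for improving "
                f"{self.outcome} over {self.time}?")

# The worked example from the text:
question = PICOTQuestion(
    population="previously untrained healthy females",
    intervention="a resistance training program",
    comparison="3 sets per exercise versus 1 set per exercise",
    outcome="back squat one-repetition maximum (1RM)",
    time="8-12 weeks",
)
print(question.as_sentence())
```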

STEP 2: FIND EVIDENCE


Strength and conditioning practitioners can obtain evidence from 2 important sources: (a) professional experience and (b) scientific research. This sentiment is reflected in the editorial mission of the Strength and Conditioning Journal: "to publish articles that combine the practical applications of previously published peer-reviewed research findings and the knowledge of experienced professionals." Most strength and conditioning practitioners obtain professional experience via training athletes and clients, interacting with colleagues, and attending practitioner-oriented conferences (e.g., the NSCA Coaches Conference). Alternatively, scientific evidence is obtained by reading peer-reviewed publications (e.g., Journal of Strength and Conditioning Research) and attending research-based conferences (e.g., the NSCA National Conference). An initial online literature search of relevant keywords using a database or search engine such as PubMed, SPORTDiscus, or Google Scholar, followed by cross-referencing and manual searches, should yield most of the existing literature on a given subject.
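
To illustrate the initial keyword search, the following minimal Python sketch queries PubMed through NCBI's public E-utilities interface. It assumes the third-party requests package is installed; the query string is only an example and, as noted above, an automated search would still be followed by cross-referencing and manual searches.

```python
# A minimal sketch of an initial keyword search against PubMed using
# NCBI's public E-utilities API. Assumes the third-party `requests`
# package; the query string is illustrative only.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(query: str, max_results: int = 20) -> list:
    """Return a list of PubMed IDs (PMIDs) matching the query."""
    params = {
        "db": "pubmed",         # search the PubMed database
        "term": query,          # the keyword query
        "retmax": max_results,  # cap on the number of IDs returned
        "retmode": "json",      # request a JSON response
    }
    response = requests.get(ESEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["esearchresult"]["idlist"]

# Example query built from the PICOT question in Step 1:
pmids = search_pubmed("resistance training AND single set AND multiple sets AND 1RM")
print(f"Found {len(pmids)} candidate articles: {pmids}")
```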

STEP 3: EVALUATE THE EVIDENCE


This is perhaps the most difficult step because available evidence can be conflicting. Therefore, coaches and practitioners must carefully consider the strengths and weaknesses of each form of evidence to determine which ones should be given more weight due to their greater validity. EBP provides a systematic, unbiased, and reproducible method of reconciling these differences while removing emotionally driven ties that may lead to subjective programming decisions. This is accomplished by ranking the evidence according to its validity (Table 2; (7)). This ranking system is referred to as "levels of evidence." The interested reader is referred to our previous publication (1) for the relevant details; it suffices to say that the least-biased, most objective forms of evidence (e.g., randomized controlled trials with consistent findings) receive a higher ranking, whereas the most potentially biased, least objective forms (e.g., expert opinion) receive a lower ranking. As an example: an "expert" strength and conditioning coach might insist that all athletes take supplement X to improve strength gains (Level D evidence), whereas several peer-reviewed research studies demonstrate that supplement X is clearly ineffective (Level A, B, or C evidence, depending on the study design). An evidence-based practitioner would resolve this conflict by relying on the least-biased, most objective data (the peer-reviewed studies) as a basis for decision making, thus rejecting the supplement. However, in the absence of research evidence (Level A, B, or C), a practitioner would rightfully choose to use supplement X if expert opinion suggests that it is effective. Thus, practitioners should never be afraid to make decisions when research evidence is lacking; this is particularly relevant for cutting-edge and newly emerging training devices, programs, and nutritional supplements.
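
The conflict-resolution logic described above can be sketched in a few lines. In the following Python fragment (the data structures and function are illustrative assumptions), the least-biased level of evidence present always wins, and expert opinion guides the decision only when no research evidence exists.

```python
# Levels of evidence per Table 2: "A" is least biased, "D" is expert opinion.
LEVEL_RANK = {"A": 0, "B": 1, "C": 2, "D": 3}  # lower rank = stronger evidence

def best_available_evidence(evidence_items):
    """Return the items at the strongest (least-biased) level present.

    Each item is a (level, conclusion) tuple, e.g., ("B", "ineffective").
    """
    if not evidence_items:
        return []  # no evidence at all: rely on professional reasoning
    strongest = min(LEVEL_RANK[level] for level, _ in evidence_items)
    return [item for item in evidence_items if LEVEL_RANK[item[0]] == strongest]

# The supplement X example from the text: expert opinion endorses it,
# but peer-reviewed trials show it is ineffective.
evidence = [("D", "effective"), ("B", "ineffective"), ("A", "ineffective")]
print(best_available_evidence(evidence))              # -> [('A', 'ineffective')]

# With no research evidence, expert opinion rightfully guides the decision:
print(best_available_evidence([("D", "effective")]))  # -> [('D', 'effective')]
```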

Table 2. Levels of evidence used to rank the validity of research findings (adapted from (7)).

STEP 4: INTEGRATE THE EVIDENCE INTO PRACTICE


The decision on the most reasonable and effective way to integrate evidence into daily practice is based on the interrelationship among 4 components (a schematic sketch of weighing these follows the list):

  • The strength of the available research evidence. If strong evidence is available, then coaches should seek to incorporate the evidence into practice. However, if only weak or inconsistent evidence is available, then perhaps the time and resources should be devoted to other training practices.
  • The specific needs of the athlete or client. These needs (time limitations, previous injuries, dietary preferences, etc.) should be considered before integrating evidence into practice, rather than developing a "one size fits all" approach.
  • Budgetary restrictions. When faced with budgetary constraints and facility limitations, we recommend allocating resources to the training practices supported with strong evidence.
  • Professional expertise. Given the considerations listed above, strength and conditioning professionals can generate pragmatic and creative solutions for providing the “best” possible training program for their athletes and clients.
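
The following schematic Python sketch shows one way these 4 components might be weighed. The thresholds and boolean inputs are illustrative assumptions; in reality, professional expertise resists such simple encoding.

```python
from dataclasses import dataclass

@dataclass
class AdoptionDecision:
    evidence_level: str         # "A" (strongest) through "D" (expert opinion)
    meets_specific_needs: bool  # addresses the athlete's/client's needs analysis?
    within_budget: bool         # affordable given budget and facility limits?

    def recommend(self) -> str:
        if not self.meets_specific_needs:
            return "skip: does not address this athlete's specific needs"
        if not self.within_budget:
            return "defer: allocate resources to strongly supported practices first"
        if self.evidence_level in ("A", "B"):
            return "adopt: strong evidence, fits needs, affordable"
        return "trial cautiously: weak evidence, lean on professional expertise"

print(AdoptionDecision("A", True, True).recommend())  # strong case for adoption
print(AdoptionDecision("D", True, True).recommend())  # expert opinion only
```
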
STEP 5: REEVALUATE THE EVIDENCE


The final step is reevaluating the evidence. EBP is a continuous process and emphasizes constant, usually subtle, shifts to bring practice in line with the most recent scientific findings. For example, a company might try to sell an expensive new training device. If no published research exists, then a coach might choose to refrain from purchasing the device based on lack of evidence. However, if, 5 years later, several studies clearly show the device's effectiveness, then the coach might wish to reassess the decision.
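
As an illustration of this continuous reevaluation, the following minimal Python sketch (all names are assumed) treats a programming decision as a record that is revisited whenever new evidence is published, echoing the training-device example above.

```python
from dataclasses import dataclass, field

@dataclass
class ProgrammingDecision:
    topic: str
    decision: str
    evidence_levels: list = field(default_factory=list)  # e.g., ["B", "C"]

    def reevaluate(self, new_levels: list) -> str:
        """Fold newly published evidence into the standing decision."""
        self.evidence_levels.extend(new_levels)
        if any(level in ("A", "B") for level in self.evidence_levels):
            return f"reassess '{self.decision}' on {self.topic}: strong evidence now exists"
        return f"keep '{self.decision}' on {self.topic}: evidence still weak or absent"

# No published research at the time of purchase; several trials appear 5 years later.
device = ProgrammingDecision("new training device", "do not purchase")
print(device.reevaluate([]))          # still no evidence: the decision stands
print(device.reevaluate(["A", "B"]))  # effectiveness now shown: revisit the decision
```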

THE UTILITY OF EVIDENCE-BASED PRACTICE


Besides guiding decision making, EBP provides strength and conditioning professionals with “strength of certainty” regarding their programming decisions. The number of existing studies, the consistency of their findings, and their quality (i.e., validity) determine the strength of certainty. When numerous well-controlled studies exist to support or refute a given practice, it increases the practitioner's strength of certainty regarding the decision whether or not to incorporate the practice into their programming. When little or no research has been conducted to evaluate a particular device or training technique, strength of certainty is relatively low. Although this does not preclude the use of cutting-edge devices, programs, and supplements, practitioners should critically assess these novel items and, ideally, work with researchers to formally evaluate them to establish a higher level of evidence for their efficacy (or lack thereof).
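
To make this reasoning concrete, the following entirely illustrative heuristic (no such formula appears in the EBP literature) derives a certainty label from the number of studies, their agreement, and the best available level of evidence.

```python
def strength_of_certainty(n_studies: int, agreeing: int, best_level: str) -> str:
    """Label certainty from study count, consistency, and best evidence level."""
    if n_studies == 0:
        return "low: no research evidence; rely on professional reasoning"
    consistency = agreeing / n_studies  # fraction of studies that agree
    if best_level == "A" and n_studies >= 3 and consistency >= 0.8:
        return "high: numerous consistent, well-controlled studies"
    if best_level in ("A", "B") and consistency >= 0.6:
        return "moderate: supportive but limited or mixed evidence"
    return "low: sparse or conflicting evidence"

print(strength_of_certainty(5, 5, "A"))  # -> high
print(strength_of_certainty(2, 1, "C"))  # -> low
```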

This “strength of certainty” aspect of EBP is particularly useful when it is necessary to defend one's decisions or convince a skeptic of the validity of a particular strength and conditioning practice. For instance, if the incoming strength and conditioning coach at a small college wished to purchase platforms to perform Olympic-style lifting, then he/she could provide a large body of research literature to support the efficacy of Olympic-style lifting, its effect on power, and its subsequent effect on athletic performance; perhaps this sound body of evidence could even help convince the administration to provide the necessary equipment/facilities. Similarly, an athlete who is reluctant to give up an ineffective or counterproductive training practice that he/she has used for a long period might be better convinced by a large body of research evidence that demonstrates the merit (or lack thereof) of the practice.

All exercise programs are constructed using a combination of research and professional experience (both an individual practitioner's and others'). It takes many years and much trial and error for a single practitioner or group of practitioners to personally evaluate numerous varied training practices. Ideas from other practitioners are often the primary sources of new knowledge. However, all new ideas should be systematically evaluated based on the available evidence; replication and reproducibility lend strength and validity to a given practice.

Professional experience is particularly important because it enables practitioners to make prudent decisions in the absence of solid research evidence. Strength and conditioning professionals can increase the soundness of decisions made on the basis of professional experience by keeping comprehensive programming records. Detailed records of athlete and client needs, training protocols, and test results increase the strength of certainty of experience-based programming decisions for which peer-reviewed research is unavailable or inconclusive. Records can also be used as pilot data to develop research studies (with or without the collaboration of other researchers) that can provide more definitive conclusions about emerging or controversial topics.
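
As an illustration of record keeping that can double as pilot data, the following minimal Python sketch (the CSV schema is an assumption) appends structured training records to a log file.

```python
import csv
import os
from datetime import date

# Columns for a simple programming record; the schema is illustrative.
FIELDS = ["date", "athlete_id", "exercise", "sets", "reps",
          "load_kg", "test_result", "notes"]

def log_session(path: str, rows: list) -> None:
    """Append training records to a CSV file, writing a header if the file is new."""
    is_new = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerows(rows)

log_session("training_log.csv", [{
    "date": date.today().isoformat(), "athlete_id": "A01",
    "exercise": "back squat", "sets": 3, "reps": 5,
    "load_kg": 100, "test_result": "", "notes": "bar speed good",
}])
```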

The confidence placed in a new training technique or device should match the evidence for it. Practitioners typically (and appropriately) focus on individual athletes or clients, not on conducting unbiased and carefully controlled experiments. EBP allows individual strength and conditioning professionals to immediately benefit from the work of other practitioners and researchers (whose sole job is to systematically evaluate various training programs, devices, and nutritional supplements), in addition to their own, often extensive, experience.

When no research has been conducted to evaluate, for example, a particular training device, then practitioners should move forward with a decision based on professional reasoning extending from the existing body of research and knowledge from experience. Theories that seemingly contradict the available evidence should be viewed with extreme caution, no matter who promotes them. Thus, the best and most currently available evidence should always provide the basis for programming.

RECOMMENDATIONS FOR INTEGRATING SCIENCE AND PRACTICE


To base programming decisions on the best and most current evidence, practitioners must be consumers of research. Generally, professionals in strength and conditioning work either as scientists, who generate research, or as practitioners (e.g., coaches), who apply it. EBP does not demand that practitioners become research scientists; rather, it requires them to regularly peruse the research literature to remain informed of cutting-edge knowledge that is pertinent to the athletes and clients they train.

There are several ways that practitioners can "consume" research. First, they should regularly search the literature for studies relevant to their daily exercise programming responsibilities; for instance, coaches responsible for football teams should remain abreast of research pertinent to enhancing the performance of young male strength/power athletes. Second, coaching staffs can hold a regular (weekly or monthly) journal club in which they discuss and digest a scientific article relevant to their needs. Finally, practitioners can consult the literature when presented with a unique situation outside their typical knowledge base. A recent article in this journal detailed a helpful method for practitioners to search and evaluate scientific literature in areas with which they might be unfamiliar (4).

Scientists can facilitate this process by "speaking the language" of EBP. When giving lectures and consulting with practitioners, scientists can frame their recommendations using the levels of evidence to help practitioners appreciate the strength of certainty associated with the available evidence. By following these recommendations, scientists and practitioners can together "bridge the gap between science and application" (NSCA Mission Statement).

Although systematic review and implementation of research provides the foundation for EBP, professional reasoning and evaluation of specific needs are also integral parts of the process (13). Randomized controlled trials provide the control necessary to establish causation but are unable to capture the variability in the needs of different athlete and client populations and their complex training requirements. Thus, while research evidence should provide the foundation for all programming, experienced professional judgment will always be an integral part of strength and conditioning practice. The key to sound practice is to build a program's framework around proven techniques and then to fill the gaps based on the individual practitioner's knowledge and experience as well as the unique needs of athletes/teams and clients.

Finally, research and practical application should be a bidirectional process. Both scientific evidence and practical experience are necessary and complementary components of furthering EBP. Scientific research should provide the basis for practice, but practical experience should also lead to applicable research. A practitioner can build a solid framework based on available research; however, if new techniques are never attempted, then training will become stagnant. On the other side of the coin, practical experiences should drive researchers to investigate new training approaches to help substantiate or refute current practices. It is this interplay of science and practice that encourages the evolution of sound programming concepts that can be applied in a practical manner.

SUMMARY


How do we as strength and conditioning professionals design effective, top-notch programs for our athletes and clients? On what do we base our decisions to perform squats instead of leg extensions, to rest between sets for 3 minutes instead of 1 minute, and to train at 90% 1RM instead of 70%? What if a reputable coach suggests a novel training approach or a major athletic equipment manufacturer begins selling an unusual new gadget?

Answers to these questions can be found in the very mission statement of the NSCA: to “support and disseminate research-based knowledge and its practical application to improve athletic performance and fitness.” As a discipline built on science, strength and conditioning must never be a field in which we do things solely because an authority “said so” or “it's how we used to train back in the old days.” Such ungrounded rationales are incongruous with an objective, science-based field like strength and conditioning.

The process of EBP is a defined way to separate the proverbial wheat from the chaff—a clear and reproducible method of distinguishing between legitimate advances in training knowledge and the latest gimmick. EBP is an established paradigm that will permit strength and conditioning professionals to objectively evaluate the “evidence” for the efficacy of any given program, device, or training technique as well as elucidate “best practices” for the training of our athletes and clients. We strongly appeal to practitioners to adopt this 5-step approach to evidence-based strength and conditioning (summarized in Table 1).

REFERENCES


1. Amonette WE, English KL, Ottenbacher KJ. Nullius in verba: A call for the incorporation of evidence-based practice into the discipline of exercise science. Sports Med 40: 449–457, 2010.
2. Amonette WE, English KL, Spiering BA, Kraemer WJ. Evidence-based practice in strength and conditioning. In: Conditioning for Strength and Human Performance (2nd ed). Chandler TJ, Brown LE, eds. Baltimore, MD: Lippincott Williams & Wilkins, 2012.
3. Ellis J, Mulligan I, Rowe J, Sackett DL. Inpatient general medicine is evidence based. A-Team, Nuffield Department of Clinical Medicine. Lancet 346: 407–410, 1995.
4. Galpin AJ, Bagley JR. Guiding coaches through scientific articles by examining human growth hormone research. Strength Cond J 33: 62–66, 2011.
5. Guyatt GH, Sackett DL, Cook DJ. Users' guides to the medical literature. II. How to use an article about therapy or prevention. A. Are the results of the study valid? Evidence-Based Medicine Working Group. JAMA 270: 2598–2601, 1993.
6. Lou JQ, Durando P. Asking clinical questions and searching for the evidence. In: Evidence-Based Rehabilitation: A Guide to Practice. Law M, MacDermid J, eds. Thorofare, NJ: Slack Incorporated, 2008. pp. 95–117.
7. National Institutes of Health and National Heart, Lung, and Blood Institute. Clinical guidelines on the identification, evaluation, and treatment of overweight and obesity in adults: The evidence report. NIH Publication No. 98-4083. September 1998. Retrieved April 4, 2012.
8. Sackett DL. A science for the art of consensus. J Natl Cancer Inst 89: 1003–1005, 1997.
9. Sackett DL, Richardson WS, Rosenberg WM, Haynes RB. Evidence-Based Medicine: How to Practice & Teach EBM. New York, NY: Pearson Professional Limited, 1997.
10. Sackett DL, Rosenberg WM. The need for evidence-based medicine. J R Soc Med 88: 620–624, 1995.
11. Sackett DL, Rosenberg WM. On the need for evidence-based medicine. J Public Health Med 17: 330–334, 1995.
12. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: What it is and what it isn't. BMJ 312: 71–72, 1996.
13. Thomas A, Saroyan A, Dauphinee WD. Evidence-based practice: A review of theoretical assumptions and effectiveness of teaching and assessment interventions in health professions. Adv Health Sci Educ Theory Pract 16: 253–276, 2011.
14. Wegscheider K. Evidence-based medicine—Dead-end or setting off for new shores? Herzschr Elektrophys 11: II/1–II/7, 2000.

KEYWORDS: evidence-based practice; EBP; levels of evidence; strength of certainty; professional reasoning; specific needs analysis

© 2012 National Strength and Conditioning Association