Plastic & Reconstructive Surgery: July 2010 - Volume 126 - Issue 1
doi: 10.1097/PRS.0b013e3181dc54ee
EBM Special Topic

How to Practice Evidence-Based Medicine

Swanson, Jennifer A. B.S., M.Ed.; Schmitz, DeLaine R.N., M.S.H.L.; Chung, Kevin C. M.D., M.S.

Author Information

Arlington Heights, Ill.; and Ann Arbor, Mich.

From the American Society of Plastic Surgeons and the Section of Plastic Surgery, Department of Surgery, University of Michigan Health System.

Received for publication November 23, 2009; accepted December 22, 2009.

Disclosure: The authors have no conflicts of interest related to the content of this article.

Kevin C. Chung, M.D., M.S., Section of Plastic Surgery, Department of Surgery, University of Michigan Health System, 1500 East Medical Center Drive, 2130 Taubman Center, SPC 5340, Ann Arbor, Mich. 48109-5340, kecchung@med.umich.edu

Abstract

Summary: Evidence-based medicine is defined as the conscientious, explicit, and judicious use of current best evidence, combined with individual clinical expertise and patient preferences and values, in making decisions about the care of individual patients. In an effort to emphasize the importance of evidence-based medicine in plastic surgery, the American Society of Plastic Surgeons and Plastic and Reconstructive Surgery have launched an initiative to improve the understanding of evidence-based medicine concepts and provide tools for implementing evidence-based medicine in practice. Through a series of special articles aimed at educating plastic surgeons, the authors' hope is that readers will be compelled to learn more about evidence-based medicine and incorporate its principles into their own practices. As the first of the series, this article provides a brief overview of the evolution, current application, and practice of evidence-based medicine.

Evidence-based medicine is rooted in the words of Archie Cochrane (1909–1988), a British epidemiologist, who understood the importance of synthesizing high-quality evidence to inform clinical decisions.1 In 1979, he wrote: “It is surely a great criticism of our profession that we have not organised a critical summary, by specialty or subspecialty, adapted periodically, of all relevant randomised controlled trials.” However, it was not until the early 1990s that the term “evidence-based medicine” first appeared in the medical literature. In a 1992 JAMA article,2 the Evidence-Based Medicine Working Group introduced evidence-based medicine to the wider medical community:

A new paradigm for medical practice is emerging. Evidence-based medicine deemphasizes intuition, unsystematic clinical experience, and pathophysiologic rationale as sufficient grounds for clinical decision making and stresses the examination of evidence from clinical research. Evidence-based medicine requires new skills of the physician, including efficient literature searching and the application of formal rules of evidence evaluating the clinical literature.

Shortly thereafter, the Users' Guides to the Medical Literature series was published by JAMA3; the Cochrane Collaboration, a group aimed at publishing a database of systematic reviews, was founded; and evidence-based medicine was on its way to becoming the next revolution in modern medicine.4,5 In an effort to emphasize the importance of evidence-based medicine in plastic surgery, the American Society of Plastic Surgeons and Plastic and Reconstructive Surgery have launched an initiative to improve the understanding of evidence-based medicine concepts and provide tools for implementing evidence-based medicine in practice. Since 2007, the American Society of Plastic Surgeons has published two evidence-based practice guidelines and six evidence-based patient safety advisory documents. In 2009, "Outcomes" appeared on the Plastic and Reconstructive Surgery masthead, and the editorial, "Introducing Evidence-Based Medicine to Plastic and Reconstructive Surgery,"6 was published. Moving forward, the American Society of Plastic Surgeons and Plastic and Reconstructive Surgery will collaborate on a series of special articles aimed at educating plastic surgeons about several evidence-based medicine topics, such as research design, research bias, biostatistics, research reporting guidelines, and critical appraisal of research studies. In addition, the American Society of Plastic Surgeons and Plastic and Reconstructive Surgery are partnering with the American Board of Plastic Surgery to write a series of Maintenance of Certification articles on a variety of common plastic surgery topics, with each of the directors synthesizing the best available evidence in the literature to guide practice. These efforts will showcase high-quality articles and systematic reviews on various topics in plastic surgery. Through these articles, our hope is that readers will be compelled to learn more about evidence-based medicine and to incorporate its principles into their own practices.

EVOLUTION OF EVIDENCE-BASED MEDICINE

Modern evidence-based medicine is composed of five main components (Table 1)7 and is defined as the conscientious, explicit, and judicious use of current best evidence, combined with individual clinical expertise and patient preferences and values, in making decisions about the care of individual patients.8 Although evidence-based medicine may be considered a relatively modern concept in health care, the practice is far from new. An example of early evidence-based medicine is James Lind's (1716–1794) treatment of scurvy, an ailment that often plagued sailors during the eighteenth century. In his 1753 publication, A Treatise of the Scurvy, Lind describes his experience as the ship's surgeon aboard the HMS Salisbury, where he designed a study to compare six remedies being used to treat scurvy. He chose 12 men with similar cases of the illness and divided them into six groups of two. Each group was given a particular treatment: cider; elixir of vitriol (i.e., sulfuric acid); vinegar; seawater; citrus (oranges and lemons); or nutmeg. The groups were treated for 14 days. Although this trial was small, Lind's results suggested that citrus was superior to the other scurvy treatments, even those recommended by the Royal College of Physicians (sulfuric acid) and the Admiralty (vinegar).9,10 Thus, this trial serves not only as an early account of randomization and a defined treatment period, but also as an example of a fair test that refuted expert opinion.

Table 1

Early scientific methods are also found in surgical references. In the eighteenth century, the British surgeon William Cheselden (1688–1752) introduced a new method of lithotomy, a surgical procedure used to remove bladder stones, and is credited with recognizing another important feature of valid evidence: comparable treatment groups. Cheselden put forth considerable effort to keep accurate records of his operations. In what would now be considered a case series, he recorded the ages and dates of operation for all patients undergoing lithotomy between March of 1727 and July of 1730. In 1740, he wrote:

What success I have had in my private practice I have kept no account of, because I had no intention to publish it, that not being sufficiently witnessed. Publickly in St. Thomas's Hospital I have cut two hundred and thirteen; of the first fifty, only three died; of the second fifty, three; of the third fifty, eight, and of the last sixty-three, six.

Evaluating the increase in mortality rates over time, Cheselden noticed that the average age of patients in the later operative groups was higher than that of the earlier groups, noting that in the later groups "even the most aged and most miserable cases expected to be saved by it." After Cheselden's realization that dissimilarities in patients' ages could contribute to differences in treatment outcomes, John Yelloly (1774–1842), another British physician, emphasized that the sex of patients and the size of bladder stones should also be documented, as these characteristics could also influence mortality rates after lithotomy.9,11 Comparability of treatment groups is now a critical measure of a study's validity.
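
Cheselden's own figures make the trend easy to verify. The following minimal sketch (in Python; the grouping and percentages are our arithmetic on the counts quoted above, not Cheselden's) computes the mortality rate in each operative group:

    # Cheselden's reported lithotomy outcomes at St. Thomas's Hospital:
    # (group, deaths, patients), taken from the 1740 passage quoted above.
    groups = [
        ("first fifty", 3, 50),
        ("second fifty", 3, 50),
        ("third fifty", 8, 50),
        ("last sixty-three", 6, 63),
    ]

    for label, deaths, n in groups:
        print(f"{label}: {deaths}/{n} = {deaths / n:.1%} mortality")
    # first fifty: 6.0% ... third fifty: 16.0% ... last sixty-three: 9.5%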

Numerous examples of early evidence-based medicine exist in the literature, but despite the innovative ideas of our predecessors, treatments and practices are still being recommended without evidence that they actually improve outcomes.10 In their book, Testing Treatments, Evans et al. describe several contemporary examples that shed light on the consequences of using unproven practices.12 We can certainly remember the devastating example of expert opinion gone wrong, when Dr. Benjamin Spock (1903–1998), the American childcare specialist and author of the best-selling book Baby and Child Care, recommended that infants sleep in the prone position. Dr. Spock was considered an "expert" in child care, and his reasoning seemed quite logical: infants sleeping on their backs might be more likely to choke on vomit. Without question, millions of health care workers and families began following Dr. Spock's advice, and placing babies to sleep in the prone position became standard practice. Unfortunately, no conclusive evidence existed that sleeping on the stomach was safer for infants than sleeping on the back, and as a result of this untested practice, thousands of children died of sudden infant death syndrome.13

Examples of untested treatments are also present in the surgical literature. In this particular example, a commonly used invasive treatment was later found to provide no better outcome than less invasive treatments. Radical mastectomy, developed in the late nineteenth century by William Halsted (1852–1922), was long the most common method for treating breast cancer. At the time, cancer specialists believed that breast cancer grew slowly from the tumor outward toward the lymph nodes and that extensive removal of the affected area should cure the cancer. Based on the belief that "more is better," the radical mastectomy involved complete removal of the affected breast and pectoralis muscles and, in the most severe cases, splitting of the breastbone and removal of ribs to access the lymph nodes. Unfortunately, after widespread use of this extremely invasive procedure, survival rates did not improve. This caused cancer specialists to revise their original theory, prompting the use of lumpectomy, a less invasive surgical procedure, followed by additional treatments such as irradiation and chemotherapy. However, even with this new theory, many surgeons still advocated for the radical procedure, and it was not until the mid-1950s that the less invasive treatments became widely accepted. Two American surgeons, George Crile and Bernard Fisher, were credited with bringing this issue to the forefront. While Crile was promoting the less radical procedures, Fisher and his colleagues began conducting randomized controlled trials to compare the effectiveness of radical mastectomy with that of lumpectomy followed by irradiation. After a 20-year follow-up, their results suggested that lumpectomy followed by irradiation was as effective as radical mastectomy in treating breast cancer. If not for this newfound evidence, many more women would have undergone the highly mutilating procedure without added benefit. After Fisher's work, additional trials were conducted in the United Kingdom, Sweden, and Italy, paving the way for the very first systematic review on breast cancer treatment.12

These and many other examples emphasize the importance of using valid evidence to inform clinical decisions. Although it would be incorrect to assume that “pre–evidence-based medicine” was unscientific, modern evidence-based medicine provides a framework and cultural standard for applying the evidence, and this guidance is necessary for all specialties, including plastic surgery.

CURRENT APPLICATION OF EVIDENCE-BASED MEDICINE

Evidence-based medicine is now widespread throughout the United States and is used in multiple ways by legislators, policy makers, and payers. Government- and employer-sponsored health plans are driving these initiatives. In a June 2009 Health Care Reform Survey conducted by Aon Consulting, evidence-based medicine was cited as a top initiative for improving the quality of care. Of the 1100 United States–based employers surveyed, 80 percent of all respondents, and 94 percent of those with more than 10,000 employees, agreed that provider reimbursement should be based on evidence-based medicine.14

Evidence-based medicine is often a key component of pay-for-performance programs that reward physicians for meeting predetermined outcomes or performance measures. There is no universal set of performance measures shared by all payers. However, one of the best known pay-for-performance programs is the Centers for Medicare and Medicaid Services Physician Quality Reporting Initiative. Physicians who meet the Physician Quality Reporting Initiative requirements and report their performance measures through claim submission or a qualified Physician Quality Reporting Initiative registry are eligible for incentive payments. In 2010, physicians who meet the Physician Quality Reporting Initiative reporting requirements are eligible to earn bonuses of up to 2 percent of their total Centers for Medicare and Medicaid Services charges. It is anticipated that physicians who do not meet Physician Quality Reporting Initiative requirements will face reduced Medicare payments.
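
As a back-of-the-envelope illustration of how the incentive works (the charge amount below is an assumption for illustration, not a program figure):

    # Hypothetical 2010 PQRI bonus: up to 2 percent of a physician's total
    # Medicare charges, contingent on meeting the reporting requirements.
    total_medicare_charges = 250_000  # assumed annual charges, in dollars
    max_bonus = 0.02 * total_medicare_charges
    print(f"Maximum incentive payment: ${max_bonus:,.0f}")  # $5,000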

Health plan benefit design is another area where evidence-based medicine is playing an increasingly important role. Health plans, both public and private, are using evidence-based guidelines to determine which clinical procedures, therapies, medical devices, and drugs will be covered. Comparative effectiveness research takes this one step further by comparing treatment options to determine the appropriateness of treatments for a specific condition or disease, analyzing the medical benefits, risks, and costs associated with each treatment. Comparative effectiveness research is likely to have a substantial impact in the future, as the American Recovery and Reinvestment Act of 2009 invested $1.1 billion in this federal initiative. Because of the up-front costs associated with generating, coordinating, and disseminating comparative effectiveness research findings, it is unclear whether, or how soon, cost savings will be realized.
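
One standard way such analyses weigh added benefit against added cost is the incremental cost-effectiveness ratio; the sketch below uses invented figures, as the article does not endorse any particular metric:

    def icer(cost_new, cost_standard, effect_new, effect_standard):
        """Incremental cost-effectiveness ratio: extra cost per unit of
        extra benefit (e.g., per quality-adjusted life-year, QALY)."""
        return (cost_new - cost_standard) / (effect_new - effect_standard)

    # Hypothetical comparison of a new treatment with standard care.
    ratio = icer(cost_new=48_000, cost_standard=30_000,
                 effect_new=6.5, effect_standard=6.0)
    print(f"${ratio:,.0f} per QALY gained")  # $36,000 per QALY gained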

Evidence-based medicine is also playing a prominent role in the development of continuing medical education content. The Accreditation Council for Continuing Medical Education guidelines require that educational activities address gaps in practice, physician education, patient care, or patient education so as to change physician competence, performance, and patient outcomes. Educational objectives and patient care recommendations are poised to strengthen the effectiveness of continuing medical education activities and provide physicians with practical tools to improve their practices.

Despite the push by institutions and organizations, some clinicians are still reluctant to practice evidence-based medicine. Implementing evidence-based medicine is no doubt a daunting task—biomedical publications contain an overwhelming amount of information, only a fraction of which is valid, important, and applicable to clinical care. However, numerous resources on evidence-based medicine are now available, including books, critical appraisal checklists, web tutorials, and workshops. Table 2 provides several useful resources for learning and practicing evidence-based medicine skills.

Table 2

PRACTICE OF EVIDENCE-BASED MEDICINE

The first step toward becoming an effective practitioner of evidence-based medicine is determining what is meant by “best evidence.” Although the randomized controlled trial is often touted as the be-all and end-all of clinical evidence, one can still practice evidence-based medicine without such information. In fact, evidence-based medicine involves using the best available evidence at the time, and what qualifies as “best evidence” differs by clinical question. Randomized controlled trials, though desirable for clinical questions about therapy, may not be appropriate for all clinical questions.15 For example, to investigate whether smoking increases the risk of lung cancer, researchers cannot ethically randomize one group of patients to smoking and one to placebo. Thus, questions about risk are usually best answered by observational studies (e.g., a study comparing people who already smoke to those who do not).
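
For risk questions of this kind, the usual effect measure is the relative risk from a cohort study; the following sketch uses illustrative numbers, not data from any actual smoking study:

    def relative_risk(exposed_cases, exposed_total,
                      unexposed_cases, unexposed_total):
        """Risk ratio from a cohort study: risk among the exposed
        divided by risk among the unexposed."""
        return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

    # Hypothetical cohort: lung cancer incidence in smokers vs. nonsmokers.
    rr = relative_risk(exposed_cases=90, exposed_total=1000,
                       unexposed_cases=10, unexposed_total=1000)
    print(f"Relative risk of lung cancer in smokers: {rr:.1f}")  # 9.0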

Therefore, various types of evidence can be used to develop the best treatment plan for a patient, and this evidence is ranked by its strength, or level of evidence; the more rigorous the study design, the higher the level of evidence. Moreover, this evidence can be synthesized into practice recommendations that are graded according to the strength of the supporting evidence. In their 1989 article, "Rules of Evidence and Clinical Recommendations on the Use of Antithrombotic Agents," Sackett et al. published the very first scales for rating levels of evidence and grading recommendations.8 As more specialties adopted evidence-based medicine, the scales were modified for each specialty. Tables 3 through 6 depict the American Society of Plastic Surgeons evidence rating scales, which were modeled after the scales published by the Journal of Bone and Joint Surgery16 and the Centre for Evidence Based Medicine.17 Even though most rating scales are relatively similar, they differ both in their ranking systems (alphabetic, A through D; numeric, I through V; or alphanumeric, e.g., 1a, 1b) and in the qualifying evidence at each level. Therefore, level I evidence on one scale may not equate to level I evidence on another. In addition, scales do not always account for differences in the type of clinical question that the evidence is attempting to answer. Study designs can be assigned different levels of evidence depending on the type of clinical question. For example, a well-designed prospective cohort study about therapy would be level II evidence on the American Society of Plastic Surgeons therapeutic scale, whereas the same design used for a study about prognosis or risk would be level I on the American Society of Plastic Surgeons prognosis/risk scale. Inconsistencies can also be found in scales for grading practice recommendations. Therefore, developers of evidence-based articles should include a clear description of the rating scales used to rate levels of evidence and grade recommendations.

Tables 3 through 6
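
The question-specific nature of these ratings can be made concrete with a simple lookup. In the sketch below, the two prospective cohort entries restate the example from the text; the other entries are illustrative assumptions, not the official American Society of Plastic Surgeons scales:

    # Level of evidence depends on the pair (clinical question, study design).
    LEVEL_OF_EVIDENCE = {
        ("therapy", "randomized controlled trial"): "I",       # assumed
        ("therapy", "prospective cohort study"): "II",         # per the text
        ("prognosis/risk", "prospective cohort study"): "I",   # per the text
        ("prognosis/risk", "case series"): "IV",               # assumed
    }

    def rate(question, design):
        """Look up the level a given design earns for a given question type."""
        return LEVEL_OF_EVIDENCE.get((question, design), "unrated")

    # The same design earns different levels for different questions:
    print(rate("therapy", "prospective cohort study"))         # II
    print(rate("prognosis/risk", "prospective cohort study"))  # I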

Importantly, level of evidence depends not only on the study design, but also on the methodologic quality. All studies, even randomized controlled trials, are susceptible to some form of bias; thus, it is necessary to appraise each study for potential biases and overall validity.18 Table 7 includes a list of questions that can be used to evaluate the quality of a randomized controlled trial. Additional tools are available for appraising other study designs. Similar to evidence rating scales, inconsistencies also exist in the critical appraisal process. In a recent study assessing the ability of orthopedic surgeons to rate their own research, Schmidt et al.19 found substantial inconsistencies in the levels of evidence assigned to research articles by different reviewers. In addition, authors often rated their own studies more favorably than independent reviewers. Lack of interrater reliability may be attributable to inadequate training in critical appraisal skills, ambiguity in evidence rating scales, whether reviewers appraised the full-text articles or the abstracts only, or poorly written articles with inadequate Methods sections. Numerous organizations have developed critical appraisal tools and tutorials aimed at standardizing the process; however, even with these tools, there remains some subjectivity. Therefore, to reduce inconsistency and bias in the critical appraisal process, studies should be appraised by several reviewers who can then come to a consensus on the final rating. In addition, critical appraisal may become easier as authors begin to consider new standards for reporting their research. The EQUATOR network—Enhancing the Quality and Transparency of Health Research—aims to improve the transparency and reporting of original research. Reporting standards for randomized controlled trials (Consolidated Standards of Reporting Trials),20 observational studies (Strengthening the Reporting of Observational Studies in Epidemiology),21 and many other research designs can be found on the EQUATOR Web site.22

Table 7
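
Interrater agreement of the kind Schmidt et al. examined is commonly summarized with Cohen's kappa, which corrects raw agreement for chance; the ratings below are invented for illustration, not their data:

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Chance-corrected agreement between two raters."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
        return (observed - expected) / (1 - expected)

    # Levels of evidence (I-V) assigned to ten articles by an author
    # and by an independent reviewer (illustrative data only).
    author   = ["I", "I", "II", "II", "III", "I", "II", "IV", "III", "II"]
    reviewer = ["II", "I", "III", "II", "III", "II", "II", "IV", "IV", "II"]
    print(f"kappa = {cohens_kappa(author, reviewer):.2f}")  # 0.44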

At times, and especially in surgery, clinicians are faced with clinical questions for which no high-level evidence exists. Although "expert opinions" may seem obsolete in the realm of evidence-based medicine, they do qualify as evidence and can be very helpful when no other evidence is available. However, when relying on expert opinion in clinical decision making, one must consider how the opinions were developed, not how persuasive the experts are. Expert opinions qualify as evidence when they are developed with an unbiased method for evaluating facts (i.e., clinicians' experiences or observations) and for forming conclusions that those facts support. Developers of evidence-based guidelines are now required to use a formal consensus process (e.g., the Delphi method or the Nominal Group Technique) for developing expert opinion recommendations.
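
A Delphi process iterates anonymous ratings with structured feedback until the panel converges; the sketch below shows only the summary step, with invented panel ratings:

    import statistics

    def summarize_round(ratings):
        """Feedback returned to the panel after a Delphi round:
        the group median and the spread of opinion."""
        return statistics.median(ratings), max(ratings) - min(ratings)

    # Hypothetical 1-to-9 appropriateness ratings from a nine-expert panel,
    # converging across rounds as each expert sees the group summary.
    rounds = [
        [2, 5, 7, 8, 4, 9, 6, 3, 7],  # round 1: wide disagreement
        [5, 6, 7, 7, 6, 8, 6, 5, 7],  # round 2: opinions narrow
        [6, 7, 7, 7, 6, 7, 6, 6, 7],  # round 3: near consensus
    ]
    for i, ratings in enumerate(rounds, 1):
        median, spread = summarize_round(ratings)
        print(f"Round {i}: median = {median}, spread = {spread}")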

Even with the best tools for practicing evidence-based medicine, there is not enough time in the day for the busy clinician to acquire and appraise research studies for every clinical question. However, several efforts are underway to streamline the process. Systematic reviews, meta-analyses, and evidence-based guidelines can save considerable time and, when developed with a prospective, transparent, and reproducible method, can be powerful tools in evidence-based practice.23,24 Initiatives for improving the quality of these documents are in full force, including reporting standards for systematic reviews and meta-analyses, such as the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (formerly the Quality of Reporting of Meta-Analyses),25 and groups such as the Grading of Recommendations Assessment, Development, and Evaluation Working Group24 and the Physician Consortium for Performance Improvement for guideline development. Although guidelines are often criticized as "cookbook" medicine,8,26 evidence alone cannot answer clinical questions about individual patients; clinical expertise and patient values and preferences are key elements of evidence-based medicine and are equally important in clinical decision making. Therefore, when it comes to practice guidelines, one size does not fit all, and recommendations will not apply to every patient; nevertheless, well-developed guidelines can be helpful for developing individualized treatment plans.

As with any new process, evidence-based medicine is no stranger to growing pains, and the increase in evidence-based medicine practice has revealed obstacles to implementation. Knowledge translation, the act of integrating the best available evidence into practice, is the newest challenge in evidence-based medicine.10,27,28 Even when evidence is available, there is often a lag between its discovery and actual practice. Iain Chalmers, Editor of the James Lind Library, wrote, "Although science is cumulative, scientists rarely cumulate scientifically," and emphasized that even our predecessors had difficulty in this area, as James Lind's "proven" treatment of scurvy took 42 years to become standard practice.29 Clinicians may be hesitant to implement evidence for various reasons, including institutional issues such as reimbursement, time constraints, liability, and organizational standards, or their own knowledge of and attitudes about patient care, such as lack of self-confidence in clinical skills or inability to appraise and/or apply the evidence. Current research is aimed at developing guidelines to help clinicians translate evidence into practice effectively. The Johns Hopkins Quality and Safety Research Group has developed a large-scale model for knowledge translation that not only provides the evidence, but also envisions how the evidence can be implemented within the entire health care system.28 By engaging and educating all stakeholders about the new intervention, identifying barriers to implementation, providing actual tools for executing the intervention, and measuring performance, this model promotes a collaborative culture, a necessary element for effecting change. Therefore, individual clinicians must learn evidence-based medicine skills, and institutions and organizations must provide them with the essential tools for practicing evidence-based medicine in the real world.

As the newest revolution in modern medicine, evidence-based medicine has the potential to improve patient care. Past experience has shown us that better outcomes can be achieved with better knowledge. However, as we embark on this new journey, we must be cautious. Evidence-based medicine can be a useful tool when practiced properly, but it can also be dangerous if attempted hastily. Therefore, we must be mindful that the information on which we base our decisions is not always created equal, and misinformation can certainly be worse than no information. We also must realize the limitations of evidence-based medicine and understand what it can and cannot do. Nevertheless, we should be ever vigilant in our quest to identify the best evidence and improve patient care. Silverman,30 citing the philosopher Karl Popper (1902–1994), observes:

There is no way to know when our observations about complex events in nature are complete. Our knowledge is finite, but our ignorance is infinite. In medicine, we can never be certain about the consequences of our interventions, we can only narrow the area of uncertainty.

It is time to embrace this new direction in medicine. Even small steps toward learning and practicing evidence-based medicine will bring us closer to the truth.

ACKNOWLEDGMENTS

This work was supported in part by a Midcareer Investigator Award in Patient-Oriented Research (K24 AR053120) from the National Institute of Arthritis and Musculoskeletal and Skin Diseases (to K.C.C.). The authors thank Karie O'Connor of the American Society of Plastic Surgeons for assistance with research for this project.

REFERENCES

1.Shah HM, Chung KC. Archie Cochrane and his vision for evidence-based medicine. Plast Reconstr Surg. 2009;124:982–988.

2.The Evidence-Based Medicine Working Group. Evidence-based medicine: A new approach to teaching the practice of medicine. JAMA. 1992;268:2420–2425.

3.The Evidence-Based Medicine Working Group. Users' Guides to the Medical Literature: Essentials of Evidence-Based Clinical Practice. Chicago: American Medical Association; 2002.

4.Montori VM, Guyatt GH. Progress in evidence-based medicine. JAMA. 2008;300:1814–1816.

5.Chung KC, Ram AN. Evidence-based medicine: The fourth revolution in American medicine? Plast Reconstr Surg. 2009;123:389–398.

6.Chung KC, Swanson JA, Schmitz D, Sullivan D, Rohrich RJ. Introducing evidence-based medicine to plastic and reconstructive surgery. Plast Reconstr Surg. 2009;123:1385–1389.

7.Straus SE, Richardson WS, Glasziou P, Haynes RB. Evidence-Based Medicine: How to Practice and Teach EBM. 3rd ed. Philadelphia: Elsevier Churchill Livingstone; 2005.

8.Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: What it is and what it isn't. BMJ. 1996;312:71–72.

9.Claridge JA, Fabian TC. History and development of evidence-based medicine. World J Surg. 2005;29:547–553.

10.Doherty S. History of evidence-based medicine: Oranges, chloride of lime and leeches. Barriers to teaching old dogs new tricks. Emerg Med Australas. 2005;17:314–321.

11.Tröhler U. Cheselden's 1740 presentation of data on age-specific mortality after lithotomy. Available at: http://www.jameslindlibrary.org/trial_records/17th_18th_Century/cheselden/cheselden_commentary.html. Accessed October 28, 2009.

12.Evans I, Thornton H, Chalmers I. Testing Treatments: Better Research for Better Healthcare. London: The British Library; 2006.

13.Gilbert R, Salanti G, Harden M, See S. Infant sleeping position and the sudden infant death syndrome: Systematic review of observational studies and historical review of recommendations from 1940 to 2002. Int J Epidemiol. 2005;34:874–887.

14.Aon Consulting. Health Care Reform Survey Report 2009. Available at: http://www.aon.com. Accessed November 13, 2009.

15.Fletcher AE. Controversy over “contradiction”: Should randomized trials always trump observational studies? Am J Ophthalmol. 2009;147:384–386.

16.Wright JG, Swiontkowski MF, Heckman JD. Introducing Levels of Evidence to The Journal. Available at: http://www.ejbjs.org/journalclub/1_85-1-1.pdf. Accessed October 28, 2009.

17.Centre for Evidence Based Medicine. Levels of evidence and grades of recommendations. Available at: http://www.cebm.net/index.aspx. Accessed April 30, 2007.

18.French J, Gronseth G. Lost in a jungle of evidence: We need a compass. Neurology. 2008;71:1634–1638.

19.Schmidt AH, Zhao G, Turkelson C. Levels of evidence at the AAOS meeting: Can authors rate their own submissions, and do other raters agree? J Bone Joint Surg Am. 2009;91:867–873.

20.The CONSORT Group. The CONSORT Statement. Available at: http://www.consort-statement.org/consort-statement/. Accessed October 28, 2009.

21.The STROBE Group. STROBE Statement: Strengthening the Reporting of Observational Studies in Epidemiology. Available at: http://www.strobe-statement.org/index.html. Accessed October 28, 2009.

22.EQUATOR Network. Introduction to Reporting Guidelines. Available at: http://www.equator-network.org/index.aspx?o=1032. Accessed October 28, 2009.

23.Margaliot Z, Chung KC. Systematic reviews: A primer for plastic surgery research. Plast Reconstr Surg. 2007;120:1834–1841.

24.Guyatt GH, Oxman AD, Vist GE, et al. GRADE: An emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336:924–926.

25.Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta-analyses of randomised controlled trials: The QUOROM statement. QUOROM Group. Br J Surg. 2000;87:1448–1454.

26.Henley MB, Turkelson C, Jacobs JJ, Haralson RH. AOA symposium: Evidence-based medicine, the quality initiative, and P4P: Performance or paperwork? J Bone Joint Surg Am. 2008;90:2781–2790.

27.Grol R, Grimshaw J. From best evidence to best practice: Effective implementation of change in patients' care. Lancet. 2003;362:1225–1230.

28.Pronovost PJ, Berenholtz SM, Needham DM. Translating evidence into practice: A model for large scale knowledge translation. BMJ. 2008;337:a1714.

29.Tröhler U. James Lind and scurvy: 1747 to 1795. Available at: http://www.jameslindlibrary.org/trial_records/17th_18th_Century/lind/lind_1753_commentary.pdf. Accessed August 11, 2009.

30.Silverman WA. Where's the Evidence? Oxford: Oxford University Press; 1998:165.



©2010 American Society of Plastic Surgeons
