Nurse practitioners (NPs) who work in either acute care or community-based primary care settings have, as an integral part of their professional role, the evaluation and synthesis of the best available evidence for guiding patient care decisions and providing optimal care. Sixteen years ago, Sackett, Rosenberg, Gray, Haynes, and Richardson (1996) defined evidence-based medicine as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients” by “integrating individual clinical expertise with the best available external clinical evidence from systematic research” (p. 71). Achieving the goal of practicing evidence-based health care is not always easy. NPs may struggle with how to best manage the constant flood of new studies, recommendations, and practice guidelines. However, even amidst the demands of a busy practice, integrating best evidence is still possible.
Locating evidence in research studies through broad literature searches is often insufficient and ineffective in terms of ensuring that the most current and/or high-quality evidence is obtained. Subsequent appraisals of multiple individual studies by an NP can be impractical because of the considerable time and effort involved in the process. NPs still need to maintain a strong working knowledge of research methods combined with study appraisal expertise. However, because newer and faster tools for obtaining and evaluating available studies have moved to the forefront, making practice decisions based on best evidence can now be a more efficient and rapid process (Alper, White, & Ge, 2005; Keahey & Goldgar, 2008).
Representing a paradigm shift in the way one finds the best current evidence, these sources have been referred to as preappraised because they "use explicit review processes to find and appraise evidence about the management of a wide range of clinical problems" (DiCenso, Bayley, & Haynes, 2009, p. 100). Accessing evidence summaries provided by a preappraised source can minimize the amount of time required to locate and evaluate the relevant research evidence effectively in relation to various conditions and interventions for either individual patients or subpopulation profiles. By consulting expert preappraised sources, the NP can locate information that will facilitate a clear decision as to whether a practice revision or the initiation of a new intervention or care protocol is indicated.
The purpose of this article is twofold. The first is to describe preappraised evidence sources, or a “top-down” approach to obtaining the best available evidence. A description of the benefits and limitations of these sources is provided as well as a comparison of some of the product types in terms of availability, comprehensiveness, and quality. The second purpose of the article is to describe how to incorporate preappraised evidence into clinical decision making with the Best Practice Decision Guide (Figure 1). The guide begins with a query of preappraised evidence sources followed by steps assisting NPs to seek and utilize the current body of evidence to guide best practice decisions.
Step 1: Frame a question and consult a preappraised evidence source
The first step in the Best Practice Decision Guide is to frame a question and consult a preappraised evidence source. The use of electronic preappraised evidence summary sources is a viable starting point for NPs to seek and evaluate appropriate evidence. Technologies such as personal digital devices and smart phones permit ease of access to evidence sources at the point of care (Stroud, Smith, & Erkel, 2010).
Varying circumstances lead NPs to seek evidence from these sources during the course of everyday practice. Questions may arise because of a unique patient presentation, or because there has been a series of gradual, but obvious and consistent changes occurring in some outcome associated with a specific patient subpopulation.
Questions about the actual benefits of a current practice or intervention also frequently occur. For example, it has been common practice in the intensive care unit to treat a patient for fever over 100.9°F. A valid question would be: for a patient with fever, will treatment with antipyretics versus no treatment improve outcomes in the critically ill patient? Beginning a search of the Web using Google is unlikely to provide a concise or complete search of credible medical and nursing information to address this question. Although a new study published on the topic may surface in such a search, reliance on the results of a single study, even one that is highly publicized in a well-respected journal, does not replace a full, systematic evidence search.
A CINAHL or PubMed search on the term antipyretics is an option to research the fever question, but could involve a lengthy process of reviewing many studies and systematic reviews of multiple studies. Instead, a starting point in the evidence search process is to construct a question to submit to a preappraised summary source. Phrasing of the question can be similar to the PICO mnemonic using patient, intervention, comparison (which may be optional), and outcomes. The PICO format for the prior question would be: for a patient with fever (Patient), will treatment with antipyretics (Intervention) versus no treatment (Comparison) improve outcomes (Outcomes) in the critically ill patient? Another example of a question involves a common clinical problem in primary care: acute sinusitis. NPs often must decide whether a patient's symptoms warrant treatment with antibiotics. The question for the second case: for an adult patient with acute sinusitis (Patient), when should antibiotics (Intervention) be given to resolve symptoms (Outcome)?
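The PICO elements are structured enough that assembling them into a question can be shown mechanically. The following is a minimal, hypothetical sketch (the function name and exact phrasing are illustrative, not part of any evidence source) of how the patient, intervention, optional comparison, and outcome elements combine:

```python
# Hypothetical illustration of the PICO mnemonic: Patient, Intervention,
# (optional) Comparison, and Outcome combine into a single clinical question.
def pico_question(patient, intervention, outcome, comparison=None):
    """Return a PICO-formatted clinical question; comparison may be omitted."""
    question = f"For {patient}, will {intervention}"
    if comparison:
        question += f" versus {comparison}"  # the optional C element
    return question + f" {outcome}?"

# The fever example from the text, with the comparison element included:
fever_q = pico_question(
    patient="a critically ill patient with fever",
    intervention="treatment with antipyretics",
    comparison="no treatment",
    outcome="improve outcomes",
)
print(fever_q)

# The sinusitis example, where the comparison is omitted:
sinusitis_q = pico_question(
    patient="an adult patient with acute sinusitis",
    intervention="antibiotics",
    outcome="resolve symptoms",
)
print(sinusitis_q)
```

Phrased this way, the question translates directly into search terms for a preappraised summary source.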
Figures 2 and 3 provide findings from DynaMed, the preappraised evidence source used here to answer the questions by completing the steps in the Best Practice Decision Guide. Figure 2 portrays a synopsis of the findings from a DynaMed search addressing the fever question: for a patient with fever, will treatment with antipyretics versus no treatment improve outcomes in the critically ill patient? To query DynaMed, a search term such as fever or another clinical diagnosis is used. In this case, the topic most closely related to the initial PICO question was "Fever Without Apparent Source in Critically Ill Adults" (Fever, 2011). Figure 3 involves a search using the research question "For an adult patient with acute sinusitis, when should antibiotics be given to resolve symptoms?"
Although DynaMed was used to address the two questions, UpToDate, ACP's PIER (Physicians' Information and Education Resource), and Clinical Evidence are other examples of preappraised sources increasingly used to facilitate integration of valid and reliable evidence into clinical decision making (Alper et al., 2005; Keahey & Goldgar, 2008). The editors of the preappraised sources scour the medical, health, and nursing literature frequently and create evidence summaries of the latest and highest-quality research findings related to a broad spectrum of medical topics (DiCenso, Bayley, & Haynes, 2009). The evidence sources vary in format, scope, extent of content coverage, and features, and, of course, not all clinical questions will be addressed in each. For this reason, searching more than one database may be necessary. Within the best preappraised sources, evidence summaries are prepared by editorial teams of international clinical and academic physicians, nurses, pharmacologists, and representatives from other disciplines chosen for their expertise in a particular area. Therefore, what constitutes high-quality evidence is dependent on the criteria and opinions of the editorial board and the designated health science appraiser or reviewer teams, hence the term preappraised evidence.
After submitting the question to the evidence source, a review of the background and treatment recommendations will be visible. The evidence will be composed of recommended systematic reviews, randomized controlled trials (RCTs), prospective cohort studies, diagnostic studies, and clinical practice guidelines (CPGs). CPGs are usually published by subspecialty or disease-specific experts, often from well-established international and national professional organizations and societies. While CPGs may be very useful in establishing patient care protocols, it is important to note that not all are based on experimental studies. A review of American College of Cardiology and American Heart Association practice guidelines showed that many were based not strictly on RCTs, but also on expert opinion and individual case studies, considered lower levels of evidence (Tricoci, Allen, Kramer, Califf, & Smith, 2009).
Two major sources for guidelines include the Guidelines International Network database (www.g-i-n.net), with more than 3700 CPGs from 39 countries, and the National Guideline Clearinghouse or NGC (www.guideline.gov), with over 2700 (Institute of Medicine [IOM], 2011). NGC is a free web-based database maintained by the U.S. Agency for Healthcare Research and Quality or AHRQ (www.ahrq.gov). The guidelines in the NGC are created by approximately 300 healthcare organizations and groups, and only those written or updated in the last 5 years are included. Complementing the NGC guidelines are the sometimes controversial recommendations of the U.S. Preventive Services Task Force or USPSTF (www.USPreventiveServicesTaskForce.org). For example, the USPSTF (2009) released revised guidelines for mammogram screening. Previous recommendations for women to have routine yearly mammograms after the age of 40 were changed to biennial screening beginning at age 50 in the USPSTF guideline. These new recommendations were based on recent public health evidence that widespread routine screening did not lead to a significant reduction in mortality, and often led to the costs and psychological ramifications of false-positive mammogram results. The USPSTF graded the new recommendation as Grade B. As defined in this rating scale, Grade B indicated "high certainty that the net benefit is moderate or there is moderate certainty that the net benefit is moderate to substantial" (USPSTF, 2009). Meanwhile, conflicting recommendations and guidelines for screening continue to be defended by other groups who take a view oriented more toward the individual patient than toward the epidemiological perspective represented in the USPSTF guidelines (USPSTF, 2009).
CPG concerns raised by the IOM (2011) have included potential guideline committee biases based on a material interest in the recommendations, narrow committee memberships, and a lack of appropriate public participation. In order to ensure trustworthiness and relevance to patients and clinicians, the IOM has put forth standards of trusted guidelines that can be used as a basis for review (IOM Standards, 2011). The full set of guidelines and descriptions can be found at: www.iom.edu/Reports/2011/Clinical-Practice-Guidelines, and include a focus on:
- establishing transparency;
- management of conflict;
- CPG-systematic review intersection;
- establishing evidence foundations for and rating strength of recommendations;
- articulation of recommendations;
- external review; and
- updating.
Attention to these standards will demand more complete transparency regarding the design process used by the CPG development group. In particular, how any conflict of interest related to an individual member (e.g., ties to a pharmaceutical firm-sponsored study relevant to the guidelines) was handled will need to be described (IOM, 2011, Brief Report). Other difficulties faced by CPG groups may include the lack of updated and comprehensive systematic reviews of evidence to use in CPG development. Either the group will need to provide a systematic review or be clear about the limitations of the CPG without one. Another issue may surround how to handle the lack of available data on care alternatives or health outcomes targeting various patient subgroups, or situations in which studies addressing patient preferences are limited or inadequate for incorporation into the CPG. Again, clarity of the group's findings within the written document will be essential for others to be able to judge a CPG's integrity and value. The IOM recommends obtaining an external review of any CPG prior to dissemination. Also recommended prior to dissemination is an increased emphasis on structuring all the information contained within the CPG for use in computer-aided clinical decision support systems for end users, that is, both healthcare providers and patients. With this level of attention, the goal is to support quality and consistency in the approach to CPG development and, ultimately, to increase the quality of care and improve health outcomes.
CPGs still remain one significant piece of the evidence-based medicine picture, although good guidelines, as the term implies, represent guidance rather than prescription. A clear understanding of CPG benefits and limitations remains an important consideration for NPs in all specialties.
Step 2: Appraise the accumulated body of evidence for quality and currency
The second step in the Best Practice Guide is to appraise the accumulated body of evidence for quality and currency. As the number and type of evidence sources expands, it is important to distinguish the similarities and differences of each. Sources include preappraised sources, such as UpToDate and DynaMed; e-textbooks, such as Harrison's online; and another category of what are simply called "filtered" sources, such as eMedicine (Table 1). Filtered sources will often include newer studies addressing the question of interest, but are generally not at the level of preappraised sources in terms of using a defined review process and incorporating thorough evidence summaries based on a systematic appraisal of high-quality studies. It should also be noted that although the term point of care (POC) product is sometimes used to represent both preappraised and filtered sources in the published literature, these should not be confused with computer decision support systems (CDSS). A wide variation in CDSS designs exists, but the general framework is the integration of individual patient information (the electronic health record or EHR) with a computerized knowledge database.
As marketed, many of the evidence sources appear similar, but under closer scrutiny, equivalence in terms of currency, comprehensiveness, or intended use varies. In a review of over 16 POC products, Banzi, Liberati, Moschetti, Tagliabue, and Moja (2010) employed a rank ordering procedure based on the extent (volume) of medical condition coverage, editorial quality, and use of evidence-based methodology of each. The overall findings indicated wide variability among the products included in the review. The implication for NPs is the need to assess the strengths and weaknesses of sources consulted to ensure that the one used for the question of interest meets the standards of a high-quality preappraised evidence source.
A good example of a high-quality preappraised evidence source is the aforementioned DynaMed database of evidence summaries. DynaMed received one of the highest overall scores in the review by Banzi et al. (2010) and had the largest proportion of current references in another comparative analysis of electronic evidence databases (Ketchum, Saleh, & Jeong, 2011). DynaMed covers more than 3000 topics and, most importantly, was found to lead other POC products in speed of updating with new research information (Banzi et al., 2011). The format is user-friendly by providing relevant subtopics, helpful background information on diagnoses and common treatments, and bulleted clinical recommendations. DynaMed searches more than 500 sources including the National Guideline Clearinghouse, the Cochrane Database of Systematic Reviews, and hundreds of key medical journals, continually incorporating new research evidence into its topical summaries (Connor, 2007).
The Cochrane Database of Systematic Reviews (Cochrane Library, 2011) is an important component of the body of evidence, but must be recognized as limited to a collection of systematic reviews of individual studies; it does not by itself constitute a comprehensive preappraised evidence summary. Also, while systematic reviews can be informative, not all clinical questions have been addressed by the Cochrane Collaboration. In the past, many health professionals relied on the Cochrane Database as the first "go to" source of relevant evidence on a topic, but this has now changed with the availability of the newer electronic evidence sources recommended in this article.
UpToDate, another frequently accessed preappraised evidence summary source, is formatted differently from DynaMed: it is written in a narrative textbook fashion, with an extensive background on most topics presented first, followed by a synthesis of contemporary knowledge from medical research. The summaries are updated quarterly. In a review of POC products, UpToDate received a high rating for editorial quality and evidence-based methodology (Banzi et al., 2010).
Accessibility to some of the recommended evidence sources can be an issue for the NP: some are free via the Internet, such as Medscape, while others, with varying costs, require an individual subscription or are typically available through a health science or medical library. An option for NPs who are not affiliated with a medical library is to contact a local hospital, medical center, or Area Health Education Center (AHEC) to seek a cooperative arrangement that enables access to fee-based preappraised evidence sources (National AHEC Organization, 2012). Most states have AHECs in which library resources may be made available to community health professionals.
Levels of evidence
Depending on the sources accessed, the NP should examine how much (if any) high-quality evidence resulted from the search. When systematic reviews and individual studies are included in the evidence summary, some preappraised sources, for example, BMJ's Clinical Evidence and DynaMed, will designate the strength of that evidence by assigning a level of evidence (LOE). In most evidence hierarchies, the top level, usually Level 1 (primarily randomized controlled trials [RCTs] and meta-analyses or systematic reviews of RCTs), is the highest level of reliable evidence upon which to make clinical judgments. Ensuring that the highest levels of evidence are always included in an evidence summary is the goal, but this element is known to vary among evidence sources. Investigators recently reported results from a comparison of the references and evidence found within five commonly used POC products (Ketchum et al., 2011). Basing their findings on just four clinical topics (asthma, hypertension, hyperlipidemia, and carbon monoxide poisoning), the investigators found wide variations not only in content, but also in the quality of evidence types cited.
Grades of recommendation
Likewise, some of the preappraised evidence sources, including PIER and DynaMed, provide grades for recommendations based on how the reviewers gauge the strength of the evidence for a treatment or intervention. Grades such as A, B, or C are often provided, referring to the strength of the recommendation arising from the evidence. DynaMed employs the Strength Of Recommendation Taxonomy (SORT), Clinical Evidence employs the GRADE system, while PIER utilizes its own brief A–C system. An example using SORT criteria is "consistent, good quality, patient-oriented evidence" for an A rating, versus "inconsistent or limited-quality patient-oriented evidence" for a B rating. Preappraised sources such as DynaMed increasingly report on guidelines using the grading system employed by the original source if available, that is, the guideline creators' system. Grading is important, yet the nonstandardization of the graded evidence hierarchies within and across individual subspecialties of nursing, medicine, and other health professions can be confusing. Fortunately, only a small number of standard grading systems are widely used (Table 2). The critical point for the NP is to know which grading system was applied to the evidence and to understand the definitions of the reported levels contained within each system.
For the second clinical example, treatment of acute sinusitis, a search in DynaMed identified a recently published CPG by the Infectious Diseases Society of America (IDSA) on the treatment of acute sinusitis. This CPG employs the GRADE system and carries a strong recommendation with moderate-quality evidence, which means that it can be used with most patients in most situations (Chow et al., 2012). It advises treating with antibiotics only if the patient meets the criteria for acute bacterial sinusitis rather than viral or noninfectious sinusitis (Figure 3).
Step 3: Weigh the body of evidence for adequacy and relevance to your patient population
The third step in the Best Practice Decision Guide is to weigh the body of evidence for adequacy and relevance to your patient population. How current is the evidence? It is possible, in relation to the use of a high-risk treatment, that the lack of the latest study results due to variability in the timing of database updates could be significant to a practice decision. Ketchum et al. (2011) found that the currency of references varied across their four sample topics within each of the evidence summaries examined. The frequency of updating differs, from daily updates in DynaMed to a strategy introduced by Clinical Evidence in which updating varies depending on multiple factors, including the emergence of new RCTs or systematic reviews, or sometimes even the popularity of the topic. If not satisfied with how current or comprehensive the evidence is as presented in one or more databases, the NP can conduct an additional search of PubMed and/or CINAHL for any new studies or articles. However, since neither of these databases is a preappraised or even filtered source, the quality of any studies found will need to be appraised. For practicing NPs who have not had as much opportunity to engage in the process of evaluating the quality of studies, numerous resources such as The Joanna Briggs Institute (www.joannabriggs.edu.au) provide information and tutorials on critiquing studies.
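A supplementary PubMed search of this kind can also be scripted against NCBI's public E-utilities service. The sketch below only constructs the search URL; the esearch endpoint is NCBI's real public service, but the search terms and publication-type filters are illustrative assumptions, and no request is actually sent:

```python
from urllib.parse import urlencode

# NCBI's public E-utilities search endpoint for PubMed (a real service);
# the terms and filters passed to it below are illustrative examples only.
EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(terms, publication_types=None, retmax=20):
    """Build an esearch URL that ANDs the given terms and optionally
    restricts results to certain publication types (PubMed's [pt] field)."""
    query = " AND ".join(terms)
    if publication_types:
        filters = " OR ".join(f'"{pt}"[pt]' for pt in publication_types)
        query = f"({query}) AND ({filters})"
    params = {"db": "pubmed", "term": query, "retmax": retmax}
    return f"{EUTILS_ESEARCH}?{urlencode(params)}"

# Follow-up search for the fever question, limited to higher-level evidence:
url = pubmed_search_url(
    ["antipyretics", "critical illness"],
    publication_types=["Randomized Controlled Trial", "Meta-Analysis"],
)
print(url)  # paste into a browser, or fetch and parse the returned XML
```

The same approach works for CINAHL-style queries only conceptually; CINAHL has no comparable free public API, so its searches remain interactive.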
Another resource, the Consolidated Standards of Reporting Trials or CONSORT 2010 Statement, is a well-established tool for appraising the quality of RCTs (Schulz, Altman, & Moher, 2012). Developed by a select but evolving group of international trialists, methodologists, and medical editors, it is currently endorsed by many health professionals as the standard when publishing the results of RCTs in most science journals. A checklist of items is provided for the investigator, including aspects of trial design, analysis, and interpretation. Based on adherence to the checklist, the reader can then appraise the relative merit of the study report, which will consequently aid in evaluating the quality and value of the study findings. Similarly, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses or PRISMA Statement can be referred to when appraising systematic reviews and meta-analysis studies (Liberati et al., 2009).
Research studies other than high-quality RCTs found in evidence summaries or through a literature search might be the only current evidence available upon which to base practice. This finding may diminish confidence in the search results, but the reality is that not all treatments are amenable to an RCT design due to ethical, practical, or financial reasons. For example, to determine neonatal outcomes in relation to cesarean versus vaginal delivery, the ability to use random assignment to one group or the other would be difficult due to the risk differential and patient preferences associated with each type of delivery. Therefore, many medical and nursing studies are limited to nonexperimental, retrospective, and prospective observational studies with the treatment choice determined by the patient and healthcare provider.
An observational study design creates more risk of biased outcomes than an RCT design, but there are several instances in which this approach can contribute important evidence. Reliance on these studies is therefore likely to increase, and they will be included more often in systematic reviews and guidelines in the future (Guyatt et al., 2011). Observational studies are employed as the study design of choice when "larger studies are needed to understand the real-world benefits of different dosing and routes of administering a drug; when patient's adherence to treatment differs in real-world settings and RCTs; when a treatment is delivered with different results by providers with different training; and when treatments are off-label—that is, a drug or device is used in ways that have not yet been specifically approved" (Dreyer et al., 2010, p. 1821). Observational studies are also used to "provide information that can be derived only through larger studies or long-term follow-up" (Dreyer et al., 2010, p. 1820) that is not always possible with an RCT. The GRACE Initiative (Grace Principles, 2010) has been recommended as a source for guiding decision makers in the evaluation of observational studies of comparative effectiveness of treatments and key elements of good study design.
Even when the evidence summaries are current and comprehensive, an additional PubMed search may sometimes be required if individual patient or subpopulation factors are not addressed fully in the studies presented. Another factor, cost analysis, may be provided in some preappraised evidence summaries, but not always. According to the Congressional Budget Office: “hard evidence is often unavailable about which treatments work best for which patients and whether the added benefits of more effective but more expensive services are sufficient to warrant their added costs” (Congress of the United States. Congressional Budget Office, 2007, p. 1).
Characteristics of a particular patient subpopulation may need to be considered in assessing the evidence associated with one treatment over another. In the earlier example, the evidence weighing the potential adverse effects of antipyretic therapy for fever against its benefits was strong, and did not support treatment except in the case of neurological pathology or temperatures exceeding 104°F (Fever, 2011). In the sinusitis example, antibiotic treatment is recommended only after symptoms have been present for 10 days (DynaMed, 2012). An additional search of the literature could also be important when the studies included in the evidence summary did not address specific comorbidities of interest. Racial differences in a clinical setting may raise concerns about whether an intervention will affect the same outcomes as it did in a clinical trial sample; thus, additional evidence may be needed. The key is to be aware of potential intervening factors and to know when to seek additional study outcome data that adequately address these factors.
Step 4: Make the decision on the best practice for the patient(s)
The fourth step in the Best Practice Decision Guide is to make the decision on the best practice for the patient(s). When the preappraised source provides a recommendation based on how its reviewers weighed the evidence, the task remains with the NP to decide whether the quantity of the evidence and the overall consistency and strength of the results across studies are adequate to inform clinical decisions. For example, to what extent are the results from a 2006 systematic review of 10 studies similar to or different from those of a 2010 large, multisite RCT? Consistency between the two might strengthen the view of adequacy, but if the two have dissimilar findings, one may be less satisfied with the weight of the evidence in moving to the next steps toward a practice change. Discrepancies and controversies between the actual findings and the interpretation of findings by acknowledged experts in the field, particularly with observational studies, can also complicate how the evidence is weighed.
With many clinical questions, decisions based on the evidence presented in a preappraised summary may be straightforward and uncomplicated because the evidence is clear and the risk/benefit ratio is considered acceptable. This means that the available body of evidence, that is, CPGs or recommendations from the preappraised sources, can be relied upon either to change practice or not to make any changes. For example, with the fever question, the outcome of the decision process may be to change the current protocol of administering antipyretic therapy for most critically ill patients, since no supportive evidence was available for this intervention in most patients. However, if there are remaining concerns about the evidence, for example, when no clear advantage of one treatment over another is shown, the decision may rest on the NP's own clinical expertise/experience, often in conjunction with the values and preferences of the patient.
To keep informed about evolving clinical practice controversies that can trigger the need for finding current and reliable evidence, it is recommended that NPs bookmark and regularly track the websites whose content is most relevant to their current practice setting and patient population. Additionally, automated alerts for new evidence on common practice questions may be set up through subscribed preappraised sources and through evidence-based websites such as BioMed Central (www.biomedcentral.com) and Lippincott's Nursing Center (www.nursingcenter.com). Tracking these sites on a regular basis will not eliminate the need for using updated evidence summaries when specific questions arise, but can alert NPs to the necessity of following up on topics relevant to their current practice setting.
It may also be helpful to organize a local or regional peer collaborative to participate in the decision process. The concept of quality improvement or learning collaboratives (QICs) has been presented as a gathering of multidisciplinary teams, either from multiple health departments or organizations, to strategize care changes systematically across settings using best practices (Sorensen & Bernard, 2009). In a recent study aimed at improving colorectal cancer screening rates involving five primary care practices, one of the most important elements found to advance practice changes was the opportunity for communication and shared learning among the members (Shaw, Chase, Howard, Nutting, & Crabtree, 2012). Although our recommendation for the use of a collaborative would be for the initial discussion of the evidence in relation to a potential practice change, it is possible that this beginning step could lead to the development of a quality improvement structure linking communications across settings and/or individual practices. Our vision of this type of collaborative includes not only NPs, but also appropriate interdisciplinary colleagues as regular members to strengthen the appraisal process after choosing a question of shared interest. This learning collaborative could also provide needed experience for both new graduates and those NPs who have had fewer opportunities to use electronic databases or who want to further develop their evidence appraisal skills.
The intent of this article is to describe preappraised evidence sources for obtaining the best available evidence, and how to incorporate preappraised evidence into clinical decision making. It is understood that NPs make numerous practice decisions on a daily basis without needing to engage in each element outlined in our decision guide. At the same time, with the rapid advances in healthcare research and technology worldwide, providing the best quality care in either acute or community clinical settings is possible only with an increased understanding of available resources and trends. Foremost is the ability to efficiently obtain the most current and high-quality evidence. Preappraised evidence summary sources can include CPGs within their evidence summaries, and the NP also needs to be aware of the benefits and limitations of these, including adequacy relative to patient preferences. NPs need to be skilled in both obtaining and appraising the quality of studies, particularly when the electronic evidence sources are insufficient and further searching for studies becomes necessary. With the aid of the many resources now available to support development of both of these skills, NPs can enhance their ability to make evidence-based practice decisions.
Agency for Healthcare Research and Quality (AHRQ). (n.d.). National guideline clearinghouse. Retrieved from http://www.guideline.gov
Agency for Healthcare Research and Quality (AHRQ). (2002, April). Rating the strength of scientific research findings (Fact sheet, AHRQ Publication No. 02-P0022). Retrieved from http://archive.ahrq.gov/clinic/epcsums/strenfact.htm
Alper, B. S., White, D. S., & Ge, B. (2005). Physicians answer more clinical questions and change clinical decisions more often with synthesized evidence: A randomized trial in primary care. Annals of Family Medicine
Banzi, R., Cinquini, M., Liberati, A., Moschetti, I., Pecoraro, V., Tagliabue, L., & Moja, L. (2011). Speed of updating online evidence based point of care summaries: Prospective cohort analysis. British Medical Journal, 343, d5856. doi: 10.1136/bmj.d5856
Banzi, R., Liberati, A., Moschetti, I., Tagliabue, L., & Moja, L. (2010). A review of online evidence-based practice point-of-care information summary providers. Journal of Medical Internet Research
BioMed Central. (2011). Retrieved from http://www.biomedcentral.com/
Chow, A. W., Benninger, M. S., Brook, I., Brozek, J. L., Goldstein, E. J., Hicks, L. A., … File, T. M., Jr. (2012). IDSA clinical practice guideline for acute bacterial rhinosinusitis in children and adults. Clinical Infectious Diseases, e72–e112. doi: 10.1093/cid/cir1043
Cochrane Library. (2011). Cochrane reviews: What are Cochrane reviews? Retrieved from http://www.cochrane.org/cochrane-reviews
Congress of the United States, Congressional Budget Office. (2007, December). Research on the comparative effectiveness of medical treatments. Retrieved from http://www.cbo.gov/publication/41655
Connor, E. (2007). Interview with Brian S. Alper of DynaMed. Journal of Electronic Resources in Medical Libraries
DiCenso, A., Bayley, L., & Haynes, R. B. (2009). Accessing pre-appraised evidence: Fine-tuning the 5S model into a 6S model. Evidence-Based Nursing
Dreyer, N. A., Tunis, S. R., Berger, M., Ollendorf, D., Mattox, P., & Gliklich, R. (2010). Why observational studies should be among the tools used in comparative effectiveness research. Health Affairs
Fineout-Overholt, E., Melnyk, B. M., Stillwell, S. B., & Williamson, K. M. (2010). Evidence-based practice, step by step: Critical appraisal of the evidence: Part III. American Journal of Nursing, 110(11), 43–51. doi: 10.1097/01.NAJ.0000390523.99066.b5
Garbutt, J. M., Banister, C., Spitznagel, E., & Piccirillo, J. (2012). Amoxicillin for acute rhinosinusitis: A randomized controlled trial. Journal of the American Medical Association
GRACE Principles. (2010). Good research for comparative effectiveness. Retrieved from http://www.graceprinciples.org/
Guyatt, G. H., Oxman, A. D., Sultan, S., Glasziou, P., Akl, E. A., Alonso-Coello, P., … Schünemann, H. J. (2011). GRADE guidelines 9: Rating up the quality of evidence. Journal of Clinical Epidemiology
Institute of Medicine, Committee on Standards for Developing Trustworthy Clinical Practice Guidelines. (2011, March 23). Clinical practice guidelines we can trust: Standards for developing trustworthy clinical practice guidelines (CPGs). Retrieved from http://www.iom.edu/Reports/2011/insert
Institute of Medicine, Committee on Standards for Developing Trustworthy Clinical Practice Guidelines. (2011, March 23). Clinical practice guidelines we can trust: Report brief. Retrieved from http://www.iom.edu/Reports/2011/Clinical-Practice-Guidelines-We-Can-Trust/Report-Brief.aspx
Keahey, D., & Goldgar, C. (2008). Evidence-based medicine resources for physician assistant faculty: DynaMed. Journal of Physician Assistant Education
Ketchum, A. M., Saleh, A. A., & Jeong, K. (2011). Type of evidence behind point-of-care clinical information products: A bibliometric analysis. Journal of Medical Internet Research, 13(1), e21. doi: 10.2196/jmir.1539
Laupland, K. B. (2009). Fever in the critically ill medical patient. Critical Care Medicine, 37(7 Suppl), S273–S278.
Liberati, A., Altman, D. G., Tetzlaff, J., Mulrow, C., Gotzsche, P. C., Ioannidis, J. P. A., … Moher, D. (2009). The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. PLoS Medicine, 6(7), e1000100. Retrieved September 6, 2012, from http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.1000100
National AHEC Organization. (n.d.). Initiatives supporting health professionals. Retrieved from http://www.nationalahec.org/programs/SupportingHealthProfessionals.asp
O'Grady, N. P., Barie, P. S., Bartlett, J. G., Bleck, T., Carroll, K., Kalil, A. C., … Masur, H. (2008). Guidelines for evaluation of new fever in critically ill adult patients: 2008 update from the American College of Critical Care Medicine and the Infectious Diseases Society of America. Critical Care Medicine
Sackett, D. L., Rosenberg, W. M. C., Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. British Medical Journal
Schulz, K. F., Altman, D. G., & Moher, D., for the CONSORT Group. (2010). CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. Trials
Shaw, E. K., Chase, S. M., Howard, J., Nutting, P. A., & Crabtree, B. F. (2012). More black box to explore: How quality improvement collaboratives shape practice change. Journal of the American Board of Family Medicine
Smith, S. R., Montgomery, L. G., & Williams, J. W. (2012). Less is more: Treatment of mild to moderate sinusitis. Archives of Internal Medicine
Sorensen, A. V., & Bernard, S. L. (2012). Accelerating what works: Using qualitative research methods in developing a change package for a learning collaborative. The Joint Commission Journal on Quality and Patient Safety
Stroud, S. D., Smith, C. A., & Erkel, E. A. (2009). Personal digital assistant use by nurse practitioners: A descriptive study. Journal of the American Academy of Nurse Practitioners
Terracciano, L., Brozek, J., Compalati, E., & Schünemann, H. (2010). GRADE System: New paradigm. Current Opinion in Allergy and Clinical Immunology
Tricoci, P., Allen, J. M., Kramer, J. M., Califf, R. M., & Smith, S. C. (2009). Scientific evidence underlying the ACC/AHA clinical practice guidelines. Journal of the American Medical Association
The Joanna Briggs Institute. (n.d.). Appraise evidence. Retrieved from http://notari.joannabriggs.edu.au/Appraise_Evidence
TRIP database: Clinical search engine. (n.d.). Trip Database Ltd. Retrieved from http://www.tripdatabase.com/
U.S. Preventive Services Task Force. (2009). Screening for breast cancer. Rockville, MD: Agency for Healthcare Research and Quality. Retrieved from http://www.guideline.gov/content.aspx?id=15429&search=uspstf00B1mammogram