Electronic Knowledge Resources and Point-of-Care Learning

A Scoping Review

Aakre, Christopher A., MD, MS; Pencille, Laurie J., CCRP; Sorensen, Kristi J., MS; Shellum, Jane L., MHA, MAS; Del Fiol, Guilherme, MD, PhD; Maggio, Lauren A., MS, PhD; Prokop, Larry J., MLIS; Cook, David A., MD, MHPE

doi: 10.1097/ACM.0000000000002375

Purpose The authors conducted a scoping review to summarize quantitative and qualitative research addressing electronic knowledge resources and point-of-care learning.

Method The authors searched MEDLINE, Embase, PsycINFO, and the Cochrane Database for studies addressing electronic knowledge resources and point-of-care learning. They iteratively revised inclusion criteria and operational definitions of study features and research themes of interest. Two reviewers independently performed each phase of study selection and data extraction.

Results Of 10,811 studies identified, 305 were included and reviewed. Most studies (225; 74%) included physicians or medical students. The most frequently mentioned electronic resources were UpToDate (88; 29%), Micromedex (59; 19%), Epocrates (50; 16%), WebMD (46; 15%), MD Consult (32; 10%), and LexiComp (31; 10%). Eight studies (3%) evaluated electronic resources or point-of-care learning using outcomes of patient effects, and 36 studies (12%) reported objectively measured clinician behaviors. Twenty-five studies (8%) examined the clinical or educational impact of electronic knowledge resource use on patient care or clinician knowledge, 124 (41%) compared use rates of various knowledge resources, 69 (23%) examined the quality of knowledge resource content, and 115 (38%) explored the process of point-of-care learning. Two conceptual clarifications were identified: distinguishing impact on clinical or educational outcomes from impact on test setting decision support, and distinguishing the quality of information content from the correctness of information obtained by a clinician–user.

Conclusions Research on electronic knowledge resources is dominated by studies involving physicians and evaluating use rates. Studies involving nonphysician users, and evaluating resource impact and implementation, are needed.

C.A. Aakre is assistant professor of medicine and senior associate consultant, Division of General Internal Medicine, Mayo Clinic College of Medicine and Science, Rochester, Minnesota.

L.J. Pencille is program coordinator, Knowledge and Delivery Center, Center for Translational Informatics and Knowledge Management, Mayo Clinic, Rochester, Minnesota.

K.J. Sorensen is assistant professor of medical education and unit head, Knowledge Management Technologies, Center for Translational Informatics and Knowledge Management, Mayo Clinic, Rochester, Minnesota.

J.L. Shellum is section head, Knowledge and Delivery Center, Center for Translational Informatics and Knowledge Management, Mayo Clinic, Rochester, Minnesota.

G. Del Fiol is assistant professor of biomedical informatics, University of Utah School of Medicine, Salt Lake City, Utah, and co-chair, Clinical Decision Support Work Group at Health Level Seven (HL7).

L.A. Maggio is associate professor of medicine and associate director of technology and distributed learning, Department of Medicine, Uniformed Services University, Bethesda, Maryland.

L.J. Prokop is reference librarian, Plummer Library, Mayo Clinic, Rochester, Minnesota.

D.A. Cook is professor of medicine and medical education; researcher, Center for Translational Informatics and Knowledge Management; associate director, Office of Applied Scholarship and Education Science; and consultant, Division of General Internal Medicine, Mayo Clinic College of Medicine and Science, Rochester, Minnesota.

Funding/Support: Author G.D.F. was funded by National Library of Medicine grant 1R01LM011416.

Other disclosures: In 2016, author L.A.M. received travel funds to deliver a lecture on evidence-based medicine for employees of Ebsco, the parent company of DynaMed; Ebsco did not have any involvement in the conduct of this study. The authors are not aware of any other conflicts of interest.

Ethical approval: Reported as not applicable.

Disclaimer: The views expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Uniformed Services University of the Health Sciences, the Department of Defense, or the U.S. Government.

Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A577.

Correspondence should be addressed to David A. Cook, Division of General Internal Medicine, Mayo Clinic College of Medicine, Mayo 17-W, 200 First St. S.W., Rochester, MN 55905; telephone: (507) 284-2269; e-mail: cook.david33@mayo.edu.

Written work prepared by employees of the Federal Government as part of their official duties is, under the U.S. Copyright Act, a "work of the United States Government" for which copyright protection under Title 17 of the United States Code is not available. As such, copyright does not extend to the contributions of employees of the Federal Government.

Clinicians at all stages of training very frequently identify patient-related questions1 and often seek answers during or immediately following a clinical encounter (i.e., point-of-care learning). Yet clinicians typically seek answers to only a minority of their questions because of barriers that include insufficient time, inadequate search skills, lack of reliable resources, excessive information, and belief that an answer is not available.1–8 Moreover, answers may guide immediate actions (point-of-care decision support) without translating to actual learning (i.e., retention of knowledge).9,10

Various electronic knowledge resources have been developed in an effort to help clinicians quickly find credible answers by synthesizing and curating relevant information, including commercial products such as UpToDate and Micromedex and locally developed products such as McMaster-Plus11 and AskMayoExpert.12 These resources are now widely available and widely used in clinical practice.13–16 Search tools such as Google16,17 and PubMed can also help clinicians find answers on the Internet and in peer-reviewed literature18,19 but lack the synthesis provided by purpose-built knowledge resources.

A better understanding of how clinicians seek information during clinical activities, and the resources from which they seek and obtain information, would help support these activities. Systematic reviews of health information resources have generally focused on systems that directly support clinical decisions (e.g., alerts, order facilitators, medication dosing supports, and expert systems)20–25 rather than knowledge resources. Some reviews touched on knowledge resources20,26 but did not examine these tools in depth. We are aware of only two reviews of point-of-care learning—a systematic review of the type and frequency of clinical questions,1 and a narrative review that listed several information sources without elaborating on the nature or benefits of these sources.27 As a first step in identifying and synthesizing the evidence in this field, a scoping review could summarize the “overall state of research activity”28(p21) and help researchers “clarify a complex concept and refine subsequent research inquiries.”29(p1)

We conducted a scoping review to identify and summarize key aspects of quantitative and qualitative research addressing electronic knowledge resources and point-of-care learning. We specifically sought to:

  1. Clarify and quantify the range and nature of knowledge resources used, outcomes reported, and research themes (questions and bottom-line messages) addressed;
  2. Generate operational definitions for key study features; and
  3. Identify themes warranting more intensive systematic review.

Method

This scoping review is the first stage of a large systematic review of knowledge resources and point-of-care learning that was planned and conducted in adherence to standards of quality for systematic reviews30 and scoping reviews.28,29 All of the authors have extensive experience in developing or studying knowledge resources for point-of-care learning and decision support, and all were involved in developing operational definitions for selection and/or data charting.


Study identification

With support from an experienced reference librarian, we created a strategy to search MEDLINE, Embase, PsycINFO, and the Cochrane Database for quantitative and qualitative studies of electronic knowledge resources and point-of-care learning. We used existing reviews1,20,26,31 and authors’ files to iteratively evaluate and refine the search strategy (Supplemental Digital Appendix 1, http://links.lww.com/ACADMED/A577). We conducted the search on February 14, 2017.


Study inclusion criteria and selection

We included all original research studies that addressed clinicians’ use of electronic knowledge resources or point-of-care learning. Two reviewers (paired combinations of C.A.A., L.J. Pencille, K.J.S., J.L.S., D.A.C.), working independently, screened each identified study for inclusion, first reviewing the title and abstract (phase 1) and then reviewing the full text if needed (phase 2; interreviewer reliability, kappa = 0.75). All disagreements were resolved by consensus.
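
For context on the agreement statistic reported above, Cohen's kappa adjusts raw interreviewer agreement for the agreement expected by chance. The formula below is general statistical background, not a computation drawn from this study's data:

\[
\kappa = \frac{p_o - p_e}{1 - p_e}
\]

where \(p_o\) is the observed proportion of screening decisions on which the two reviewers agreed and \(p_e\) is the proportion of agreement expected by chance given each reviewer's overall include/exclude rates. Values between 0.61 and 0.80, such as the 0.75 reported here, are conventionally interpreted as substantial agreement.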

We iteratively revised the inclusion criteria and operational definitions throughout the selection process. During phase 1 (title/abstract review) we erred on the side of inclusion, then made final selection decisions during phase 2 (full-text review) once criteria had been finalized. Ultimately, we included both quantitative comparative studies (evaluating a specific intervention in comparison with another intervention or resource, a no-intervention group, a preintervention time point, or across clinician subgroups) and rigorous qualitative studies. We included knowledge resource studies conducted in either real patient care or classroom/research settings (e.g., using written case scenarios). We included point-of-care learning studies that explicitly addressed learning during real patient care. We made no exclusions based on language. Because very old studies would likely be irrelevant to the design, implementation, and outcomes of contemporary electronic knowledge resources, we limited our search to studies published after January 1, 1991 (the year in which the World Wide Web was first described).

In defining electronic clinical knowledge resource we started with the definition used by Lobach et al20 and iteratively revised this during phase 1. Ultimately, we defined electronic clinical knowledge resource as an electronic (computer-based) resource comprising distilled (synthesized) or curated information that allows clinicians to select content germane to a specific patient to facilitate medical decision making. This definition excluded decision support tools that provide popup alerts or push notifications, resources containing only unsynthesized information (e.g., journals and journal databases such as MEDLINE), websites and tools such as infobuttons31 containing only links to other sites, and online versions of print texts unless they were specifically adapted to optimize online use.

Although consulting other clinicians (curbside consultation32,33) is a common and important part of clinical practice, in this review we focused on how clinicians seek information from composed materials. Thus, we defined point-of-care learning as seeking information from a nonhuman resource to address a clinical question that arises while performing routine clinical tasks in the care of a specific, real patient. This included clinician interactions with computer and paper knowledge resources. We excluded studies of seeking information about the specific patient (e.g., from the medical record), about hospital policies, or for nonclinical purposes (e.g., research, preparing educational materials, or studying for a test).

We defined clinicians as practitioners with direct responsibility for patient-related decisions, and students in that profession. This included (but was not limited to) physicians, dentists, nurse practitioners, certified nurse anesthetists, midwives, physician assistants, pharmacists, and psychologists.


Data charting and synthesis

Two reviewers (C.A.A., D.A.C.) independently abstracted data from all included studies. Throughout study selection we listed study features of potential interest, including population, electronic and nonelectronic knowledge resources, study designs, outcomes, and bottom-line messages. In consultation with all team members, we used this list to create an electronic data extraction form that was iteratively revised. We paused after every 5 to 15 studies for the first 50, and as needed thereafter, to discuss and revise items and operational definitions. After extracting data from all included studies we identified areas of conceptual disagreement, revised operational definitions as needed, and then recoded all studies for selected items using updated definitions. We then discussed and came to agreement on all final codes.


Results

Trial flow

We identified 10,811 studies, of which 305 were eligible for inclusion in this scoping review (302 from our database search, and 3 from our review of bibliographies); see Figure 1. Six included studies were published in languages other than English (Croatian, French, German, Portuguese, and Spanish); we translated these for data extraction. Supplemental Digital Appendix 2, available at http://links.lww.com/ACADMED/A577, contains a full list of included studies.

Figure 1


Participants and context

Most studies (225; 74%) included physicians at various stages of training and practice, including practicing physicians (150; 49%), physicians in postgraduate training (101; 33%), and medical students (46; 15%). Twenty-six studies (9%) involved practicing and/or student nurse practitioners, 20 (7%) involved practicing and/or student pharmacists, and 9 (3%) involved practicing and/or student physician assistants. In 60 studies (20%), users were members of the investigator team. See Table 1 for additional information on participants.

Table 1

About half the studies (142; 47%) focused on general medicine or mixed medical topics. The most common other topics were pharmacy (48; 16%), medical subspecialty (e.g., cardiology, neurology, or dermatology [40; 13%]), pediatrics (29; 10%), and surgery (18; 6%). About one-fourth (86; 28%) were conducted in an authentic clinical environment with real patients; in another 39 (13%), clinicians used a knowledge resource to respond to clinical vignettes. Surveys, focus groups, or interviews without a clear clinical context were used in 137 (45%) studies.

About one-third of studies (96; 31%) did not report the year in which the study was completed; for these, we substituted the year of publication. Only 9 studies were completed (or published) before 1995; this increased to 96 for the period 2005–2009 and 100 for 2010–2014. Nearly two-thirds of studies were conducted in the United States (155; 51%) or Canada (40; 13%); see Table 2 for other publication year and geographic data.

Table 2


Knowledge resources

UpToDate was the most frequently mentioned electronic knowledge resource, referenced in nearly one-third of studies (88; 29%). Pharmacy-related resources (Micromedex [59; 19%], Epocrates [50; 16%], and LexiComp [31; 10%]) were also commonly reported; see Table 1. We noted that a number of resources (e.g., WebMD/Medscape [46 studies; 15%], MD Consult/ClinicalKey [32 studies; 10%]) were actually collections containing other electronic resources (e.g., Micromedex). At least 108 reports (35%) described such “resource aggregators.” Wikipedia, a free, open resource comprising content collaboratively created by users (“crowdsourced”), was referenced in 21 studies (7%). Five studies reported tools for machine-automated synthesis of evidence. We note that some resources, such as Virtual Preceptor34 and KnowledgeLink,35 are no longer available. Seventy-three studies (24%) used a mobile platform (e.g., smartphone) for at least one resource.

Among resources that did not meet our definition of an electronic knowledge resource, literature databases such as MEDLINE were used most often (94; 31%), followed by textbooks (75; 25%); journals (72; 24%); other online resources (55; 18%) such as professional organization websites (e.g., www.acog.org), government websites (e.g., www.fda.gov, www.guidelines.gov), nonprofit organization websites (e.g., www.teratology.org, www.crohnscolitisfoundation.org), and YouTube; and human resources such as other clinicians (52; 17%) or librarians (7; 2%). Google was specifically mentioned in 43 studies (14%), and other Internet search tools were mentioned in 25 (8%).


Study design and outcomes

About one-third of the studies (110; 36%) reported clinician-reported cross-sectional (single-time-point) data obtained from a survey, focus group, or interview. Quantitative studies of this type typically described the self-reported use of various knowledge resources or made comparisons among clinician demographic subgroups. Forty-four studies (14%) reported objectively measured cross-sectional data from, for example, computer log files or direct observation of behavior in a study setting. Sixty-eight studies (22%) compared two or more clinician groups receiving different interventions (e.g., use vs. nonuse of a knowledge resource, two different resources, or training vs. no training in using a resource). Fifteen studies (5%) compared one group before and after an intervention, while 6 (2%) measured outcomes such as use at various time points without a specific intervention. Sixty-three studies (21%) evaluated resource accuracy against an investigator-approved “correct” answer. Sixty-two studies (20%) used qualitative data and analysis.

Eight studies (3%) evaluated knowledge resources or point-of-care learning using outcomes of patient effects such as successful cardioversion36 or hospital length of stay.37 Thirty-six studies (12%) reported objectively measured clinician behaviors, such as prescription patterns,38 test ordering,39 and potential drug–drug interactions40; and 22 studies (7%) reported clinician-reported behaviors. Forty-three studies (14%) used a clinician-reported information-seeking outcome linked to a specific clinical question-and-search when caring for real patients, such as “found an answer” or “this answer changed patient care.”


Key research messages

We coded the research theme or message(s) of each study (see Table 3 for definitions). Twenty-five studies (8%) examined the clinical or sustained educational impact of electronic knowledge resource use. Researchers measured clinical impact through, for example, brief clinician–user surveys completed immediately after using the resource35,41 or review of charts to identify referral patterns or potential drug–drug interactions.39,42 Researchers measured sustained educational impact (after a period of resource use) through knowledge or skill tests completed without using the resource itself.43,44 We distinguished sustained educational impact from concurrent educational impact (“test setting decision support”) measured while using a knowledge resource to answer questions in a test setting (21 studies; 7%).

Table 3

The most common message (124 studies; 41%) addressed the comparative use of various knowledge resources. Over half of these studies (64; 52%) measured use rates through retrospective surveys. Other studies used real-time record keeping, analysis of computer logs, or direct observation of behavior with real patients or in a test setting.

Several studies examined the quality or accuracy of information from a knowledge resource (i.e., focusing on resource content), usually in comparison with an investigator-approved “correct” answer. Members of the investigator team usually completed these information searches themselves (69 studies; 23%). Less often (12 studies; 4%), noninvestigator study participants responded to clinical vignettes while using the resource, a design that evaluates human factors and human–computer interactions in addition to content.

One hundred fifteen studies (38%) explored the process of point-of-care learning itself, such as the frequency of asking and answering questions, the resources used, clinicians’ preferred physical location, and different approaches to information seeking (e.g., preference for synthesized vs. unsynthesized information). Nearly half of these studies (54; 47%) used qualitative data collection and analysis (e.g., focus groups).

Sixteen studies (5%) examined point-of-care learning specifically in the context of clinician education. For example, one study evaluated how attending physicians’ information-seeking activities changed when students were present.45 Other studies contrasted the information-seeking practices of different trainees (e.g., attendings vs. residents46), examined how trainees integrated point-of-care learning into their overall education,47 or used point-of-care learning as a teaching or assessment activity.48 We specifically sought examples of point-of-care learning as part of a formal continuing medical education program, and found no instances.

Eighty-five studies (28%) identified barriers to and facilitators of electronic knowledge resource use and/or point-of-care learning. Data sources for these studies included focus groups and interviews, surveys, and usability studies. Twenty-two studies (7%) examined training interventions or systems-level changes to enhance or promote knowledge resource use or point-of-care information seeking.


Discussion

In this scoping review of 305 studies addressing electronic knowledge resources and point-of-care learning, we found that most studies focused on practicing physicians, and nearly three-fourths included physicians in training or in practice. UpToDate was the most frequently mentioned resource, followed by two pharmaceutical resources. Resource “aggregators” were used in over one-third of studies. Only a small minority of studies quantitatively compared two or more groups; most studies employed a single-time-point or single-group pre/postintervention design. Only 25 studies evaluated the educational or clinical impact of knowledge resources, and only one-third were conducted in the context of real patient care. The most frequent research themes were resource use rates, the process of point-of-care learning, the accuracy of resource content, and barriers to and facilitators of information seeking. Nearly two-thirds of the studies were conducted in North America.


Limitations

Implications for practice are limited by the paucity of evidence and by the scoping review approach (which does not appraise study quality or extract specific study outcomes28,29). We included studies of clinicians responsible for patient-related decisions, including physicians, nurse practitioners, and pharmacists; but we did not include nurses or allied health professionals, whose information needs may differ from those of clinicians. Knowledge resources are continually evolving; older studies may have less relevance today, and studies of some new resources and technologies have yet to be published. The number of publications should not be construed to reflect the effectiveness or even the popularity of a given resource, as these numbers are influenced by product life, sponsorship, and researcher factors (e.g., familiarity with or preference toward a given product).


Implications

The pace of research in this field may be slowing, with 19 studies published in the last 26 months of this review compared with 100 in the 5 years preceding. This does not necessarily indicate a decline in interest or scientific achievement; for example, it is possible that recent studies reflect higher quality or greater clinical relevance. More important, given the ever-accelerating growth of medical knowledge,49 and the consequent need for effective knowledge synthesis and translation to practice,10,50–52 we see substantial room for high-quality research in this field. We trust that our findings will enable investigators to more effectively build on prior work and address key gaps in evidence regarding the design, implementation, and impact of electronic knowledge resources. Although studies of impact on clinical practice (behaviors and patient care outcomes) are essential, we believe studies in a test setting (in particular, usability studies and studies of the information obtained by the user) offer important complementary insights.

The operational definitions we developed for key terms and for research themes will provide conceptual clarity to us and others going forward. Notably, we offer definitions of knowledge resource and point-of-care learning. Perhaps more important, we have identified two novel areas of conceptual clarity—namely, our distinction of impact on clinical or educational outcomes versus test setting decision support, and our distinction of the accuracy/quality of information content versus the correctness of information obtained by a clinician–user.

The research themes (messages) we identified each warrant intensive review, to more clearly understand the study quality, direction and magnitude of effects, actionable implications, and extant gaps. We anticipate pursuing systematic reviews focused on the themes of impact, accuracy, barriers/enablers, and educational uses of electronic knowledge resources and point-of-care learning. However, this scoping review already highlights several deficiencies in the evidence base. For example, we found few controlled studies, evaluations of clinical and educational impact, empiric determinations of barriers (e.g., usability studies), objectively determined outcomes, or studies conducted in authentic patient contexts; and no studies evaluating point-of-care learning as part of formal continuing medical education. Studies addressing one or more of these gaps would advance the field.

Finally, although not explicitly coded, we noted the general absence of conceptual frameworks and theoretical models to guide the development and implementation of knowledge resources. This limits generalizability of study findings across institutions and contexts, and precludes guidance regarding effective strategies in future implementations. Frameworks and theories related to information seeking, evidence appraisal and application, human–computer interactions, and innovation implementation may all find relevance and enable researchers and developers to more effectively build upon prior work.


References

1. Del Fiol G, Workman TE, Gorman PN. Clinical questions raised by clinicians at the point of care: A systematic review. JAMA Intern Med. 2014;174:710–718.
2. Cook DA, Sorensen KJ, Wilkinson JM, Berger RA. Barriers and decisions when answering clinical questions at the point of care: A grounded theory study. JAMA Intern Med. 2013;173:1962–1969.
3. Davies K, Harrison J. The information-seeking behaviour of doctors: A review of the evidence. Health Info Libr J. 2007;24:78–94.
4. Ely JW, Osheroff JA, Ebell MH, et al. Obstacles to answering doctors’ questions about patient care with evidence: Qualitative study. BMJ. 2002;324:710.
5. Ely JW, Osheroff JA, Chambliss ML, Ebell MH, Rosenbaum ME. Answering physicians’ clinical questions: Obstacles and potential solutions. J Am Med Inform Assoc. 2005;12:217–224.
6. Green ML, Ruff TR. Why do residents fail to answer their clinical questions? A qualitative study of barriers to practicing evidence-based medicine. Acad Med. 2005;80:176–182.
7. Bennett NL, Casebeer LL, Zheng S, Kristofco R. Information-seeking behaviors and reflective practice. J Contin Educ Health Prof. 2006;26:120–127.
8. Revere D, Turner AM, Madhavan A, et al. Understanding the information needs of public health practitioners: A literature review to inform design of an interactive digital knowledge management system. J Biomed Inform. 2007;40:410–421.
9. Regehr G, Mylopoulos M. Maintaining competence in the field: Learning about practice, through practice, in practice. J Contin Educ Health Prof. 2008;28(suppl 1):S19–S23.
10. Cook DA, Sorensen KJ, Linderbaum JA, Pencille LJ, Rhodes DJ. Information needs of generalists and specialists using online best-practice algorithms to answer clinical questions. J Am Med Inform Assoc. 2017;24:754–761.
11. Haynes RB, Holland J, Cotoi C, et al. McMaster PLUS: A cluster randomized clinical trial of an intervention to accelerate clinical use of evidence-based information from digital libraries. J Am Med Inform Assoc. 2006;13:593–600.
12. Cook DA, Sorensen KJ, Nishimura RA, Ommen SR, Lloyd FJ. A comprehensive system to support physician learning at the point of care. Acad Med. 2015;90:33–39.
13. Salinas GD. Trends in physician preferences for and use of sources of medical information in response to questions arising at the point of care: 2009–2013. J Contin Educ Health Prof. 2014;34(suppl 1):S11–S16.
14. Addison J, Whitcombe J, William Glover S. How doctors make use of online, point-of-care clinical decision support systems: A case study of UpToDate©. Health Info Libr J. 2013;30:13–22.
15. Shariff SZ, Bejaimal SA, Sontrop JM, et al. Searching for medical information online: A survey of Canadian nephrologists. J Nephrol. 2011;24:723–732.
16. Duran-Nelson A, Gladding S, Beattie J, Nixon LJ. Should we Google it? Resource use by internal medicine residents for point-of-care clinical decision making. Acad Med. 2013;88:788–794.
17. Tang H, Ng JH. Googling for a diagnosis—Use of Google as a diagnostic aid: Internet based study. BMJ. 2006;333:1143–1145.
18. Sayyah Ensan L, Faghankhani M, Javanbakht A, Ahmadi SF, Baradaran HR. To compare PubMed Clinical Queries and UpToDate in teaching information mastery to clinical residents: A crossover randomized controlled trial. PLoS One. 2011;6:e23487.
19. Thiele RH, Poiro NC, Scalzo DC, Nemergut EC. Speed, accuracy, and confidence in Google, Ovid, PubMed, and UpToDate: Results of a randomised trial. Postgrad Med J. 2010;86:459–465.
20. Lobach D, Sanders GD, Bright TJ, et al. Enabling Health Care Decisionmaking Through Clinical Decision Support and Knowledge Management. Evidence report no. 203. Rockville, MD: Agency for Healthcare Research and Quality; 2012.
21. Bright TJ, Wong A, Dhurjati R, et al. Effect of clinical decision-support systems: A systematic review. Ann Intern Med. 2012;157:29–43.
22. Chaudhry B, Wang J, Wu S, et al. Systematic review: Impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144:742–752.
23. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: A systematic review of trials to identify features critical to success. BMJ. 2005;330:765.
24. Roshanov PS, Fernandes N, Wilczynski JM, et al. Features of effective computerised clinical decision support systems: Meta-regression of 162 randomised trials. BMJ. 2013;346:f657.
25. Jones SS, Rudin RS, Perry T, Shekelle PG. Health information technology: An updated systematic review with a focus on meaningful use. Ann Intern Med. 2014;160:48–54.
26. Gagnon MP, Pluye P, Desmartis M, et al. A systematic review of interventions promoting clinical information retrieval technology (CIRT) adoption by healthcare professionals. Int J Med Inform. 2010;79:669–680.
27. Clarke MA, Belden JL, Koopman RJ, et al. Information needs and information-seeking behaviour analysis of primary care physicians and nurses: A literature review. Health Info Libr J. 2013;30:178–190.
28. Arksey H, O’Malley L. Scoping studies: Towards a methodological framework. Int J Soc Res Methodol. 2005;8:19–32.
29. Levac D, Colquhoun H, O’Brien KK. Scoping studies: Advancing the methodology. Implement Sci. 2010;5:69.
30. Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann Intern Med. 2009;151:264–269, W64.
31. Cook DA, Teixeira MT, Heale BS, Cimino JJ, Del Fiol G. Context-sensitive decision support (infobuttons) in electronic health records: A systematic review. J Am Med Inform Assoc. 2017;24:460–468.
32. Keating NL, Zaslavsky AM, Ayanian JZ. Physicians’ experiences and beliefs regarding informal consultation. JAMA. 1998;280:900–904.
33. Cook DA, Sorensen KJ, Wilkinson JM. Value and process of curbside consultations in clinical practice: A grounded theory study. Mayo Clin Proc. 2014;89:602–614.
34. Adler MD, Duggan A, Ogborn CJ, Johnson KB. Assessment of a computer-aided instructional program for the pediatric emergency department. AMIA Annu Symp Proc. 2003;2003:6–10.
35. Maviglia SM, Yoon CS, Bates DW, Kuperman G. KnowledgeLink: Impact of context-sensitive information retrieval on clinicians’ information needs. J Am Med Inform Assoc. 2006;13:67–73.
36. Wang S, Xie J, Sada M, Doherty TM, French WJ. TACHY: An expert system for the management of supraventricular tachycardia in the elderly. Am Heart J. 1998;135:82–87.
37. Bonis PA, Pickens GT, Rind DM, Foster DA. Association of a clinical knowledge support system with improved patient safety, reduced complications and shorter length of stay among Medicare beneficiaries in acute care hospitals in the United States. Int J Med Inform. 2008;77:745–753.
38. King WJ, Le Saux N, Sampson M, Gaboury I, Norris M, Moher D. Effect of point of care information on inpatient management of bronchiolitis. BMC Pediatr. 2007;7:4.
39. Greiver M, Drummond N, White D, Weshler J, Moineddin R; North Toronto Primary Care Research Network (Nortren). Angina on the Palm: Randomized controlled pilot trial of Palm PDA software for referrals for cardiac testing. Can Fam Physician. 2005;51:382–383.
40. Ramnarayan P, Winrow A, Coren M, et al. Diagnostic omission errors in acute paediatric practice: Impact of a reminder system on decision-making. BMC Med Inform Decis Mak. 2006;6:37.
41. Alper BS, White DS, Ge B. Physicians answer more clinical questions and change clinical decisions more often with synthesized evidence: A randomized trial in primary care. Ann Fam Med. 2005;3:507–513.
42. Smithburger PL, Kane-Gill SL, Seybert AL. Drug–drug interactions in cardiac and cardiothoracic intensive care units: An analysis of patients in an academic medical centre in the US. Drug Saf. 2010;33:879–888.
43. Reed DA, West CP, Holmboe ES, et al. Relationship of electronic medical knowledge resource use and practice characteristics with internal medicine maintenance of certification examination scores. J Gen Intern Med. 2012;27:917–923.
44. Bochicchio GV, Smit PA, Moore R, et al; POC-IT Group. Pilot study of a web-based antibiotic decision management guide. J Am Coll Surg. 2006;202:459–467.
45. Cogdill KW, Friedman CP, Jenkins CG, Mays BE, Sharp MC. Information needs and information seeking in community medical education. Acad Med. 2000;75:484–486.
46. Anton B, Woodson SM, Twose C, Roderer NK. The persistence of clinical questions across shifts on an intensive care unit: An observational pilot study. J Med Libr Assoc. 2014;102:201–205.
47. McCord G, Smucker WD, Selius BA, et al. Answering questions at the point of care: Do residents practice EBM or manage information sources? Acad Med. 2007;82:298–303.
48. Kongerud IC, Vandvik PO. Work files as learning tools in knowledge management. Tidsskr Nor Laegeforen. 2013;133:1587–1590.
49. Alper BS, Hand JA, Elliott SG, et al. How much effort is needed to keep up with the literature relevant for primary care? J Med Libr Assoc. 2004;92:429–437.
50. Banzi R, Cinquini M, Liberati A, et al. Speed of updating online evidence based point of care summaries: Prospective cohort analysis. BMJ. 2011;343:d5856.
51. Davis D, Evans M, Jadad A, et al. The case for knowledge translation: Shortening the journey from evidence to effect. BMJ. 2003;327:33–35.
52. Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci. 2012;7:50.

© 2018 by the Association of American Medical Colleges