Usability Evaluation and Implementation of a Health Information Technology Dashboard of Evidence-Based Quality Indicators

Schall, Mark Christopher Jr PhD, AEP; Cullen, Laura DNP, RN, FAAN; Pennathur, Priyadarshini PhD; Chen, Howard MS; Burrell, Keith BA; Matthews, Grace MSN, RN-BC

CIN: Computers, Informatics, Nursing: June 2017 - Volume 35 - Issue 6 - p 281-288
doi: 10.1097/CIN.0000000000000325

INTRODUCTION

Recent estimates indicate that more than 400,000 Americans die each year of complications arising from preventable medical errors and associated adverse events.1 This positions medical errors as the third leading cause of death in the United States.2 The burden of preventable medical errors extends to costs estimated to total approximately $1 trillion.3 Care providers identify fast-paced work and feelings of being overwhelmed as top contributors to errors.4

Health information technology (HIT) systems that provide interactive prompts and information to support clinicians during stressful situations5–8 have been observed to improve patient safety, organizational efficiency, and speed of monitoring.9–16 Clinical and quality “dashboards” are two forms of HIT that provide important patient care information. Clinical dashboards are designed for use by individual clinicians for surveillance and to guide practice decisions at the point of care by displaying relevant, timely, and usable data.17 Quality dashboards demonstrate areas for practice improvement, often with retrospective data display, for administrative management of a unit or organization.17,18

Use of HIT’s capabilities is growing rapidly. Reports of systematic development and evaluation of dashboards, however, are largely limited to addressing a single clinical issue or screening for a single disease.19–27 The reality of hospital care is that patients rarely have a single illness or single risk that may affect quality care. Combining clinical and quality dashboards is an essential next step for improving patient care provided by interprofessional teams.12–14 Before HIT dashboards can create an opportunity to affect care processes and outcomes, however, they must be well received, require little additional time, contain evidence-based recommendations, improve quality, and be integrated into clinicians’ workflow.17,24,25,28–30

Complementary to HIT systems, evidence-based quality indicators provide objective measures for care providers to assess healthcare structures, processes, and outcomes.7,31–33 For instance, Malone et al16 developed a core set of quality indicators to promote interprofessional team planning for hospitalized older adults. The challenge, however, is how to integrate such evidence-based indicators within clinician workflow given the complexity of their clinical priorities and patient needs.7,34 Despite the potential benefits of incorporating evidence-based clinical indicators into an HIT system, we are aware of few efforts to systematically develop and evaluate such systems.

Recently, Schall et al35 developed a prototype dashboard designed to summarize and display quality indicators associated with patient risks at a large, Midwestern academic medical center. A focus group of nurse managers, physicians, and hospital quality professionals identified design criteria for the dashboard. Indicators were selected from among core quality metrics that were evidence based, used existing data not requiring additional data input by clinicians, and were associated with value-based purchasing or national standards. Pseudodata mimicking dynamic process data from a medical-surgical unit were used to simulate electronic health record (EHR) patient information and develop the application. A preliminary system usability evaluation suggested that the dashboard was “good” according to the System Usability Scale (SUS) criteria,35–39 but with opportunity for improvement. The objective of the current study was to modify the prototype dashboard to improve upon its design and enable its function within an inpatient EHR system for interprofessional planning (ie, “rounding”). Usability testing techniques were then used to evaluate use of the EHR functional dashboard in typical clinical scenarios. This article describes the process of developing, evaluating, and implementing the EHR functional dashboard to inform others interested in the benefits of an innovative approach to healthcare delivery.

METHODS

Converting Prototype Dashboard Into Electronic Health Record Functional Dashboard

Design improvement specifications from the prototype’s usability evaluation and knowledge of the existing EHR software were used to develop a list of specifications for converting the prototype into an EHR functional dashboard (Table 1). The majority of these specifications focused on methods to reduce the amount of “clutter” presented, as cluttered displays have been observed to result in substantial performance decrements.40,41 In addition, a list of quality indicators,42 with each indicator’s relevant thresholds or scores and data location (cell or row) within the EHR, was specified (Table 2). A query was formulated to display a list of all active inpatients in rows, with data available to display in columns similar to a spreadsheet (eg, patient identifiers, room location, quality indicators). The display columns were built according to each quality indicator’s unique specifications (ie, real-time scores, trending data, information on specific catheters and the patient’s provider). Data elements were linked from patient records using a “pointer” (ie, custom programming linking the data element within the EHR, such as clinical assessments and orders, to be displayed). A focus group of care providers reviewed and confirmed the display style of thresholds for each data element (eg, fall risk used a different scale than delirium scores) to identify elevated patient risk. Each score was treated as binary (ie, favorable or unfavorable) or ranked (acceptable/normal risk, marginal risk, elevated risk) based on evidence-based thresholds or thresholds established by clinical experts.

Table 1: Specifications for Converting Prototype to EHR Functional Dashboard
Table 2: Quality Indicators and Score Thresholds Included in the EHR Functional Dashboard
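
To make the binary and ranked scoring concrete, below is a minimal sketch of how a raw indicator value might be mapped to a display category. The indicator name, cutoffs, and record format are illustrative assumptions, not the medical center’s actual specifications.

```python
# Hypothetical sketch of the threshold logic described above; the indicator
# names, cutoffs, and value semantics are invented, not the actual rules.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IndicatorSpec:
    name: str
    kind: str                          # "binary" or "ranked"
    marginal: Optional[float] = None   # lower cutoff for ranked indicators
    elevated: Optional[float] = None   # upper cutoff for ranked indicators

def classify(spec: IndicatorSpec, value) -> str:
    """Map a raw indicator value to a display category."""
    if spec.kind == "binary":
        return "unfavorable" if value else "favorable"
    if value >= spec.elevated:
        return "elevated risk"
    if value >= spec.marginal:
        return "marginal risk"
    return "acceptable/normal risk"

# Example: a ranked fall-risk score (cutoffs invented for illustration).
fall_risk = IndicatorSpec("fall risk", "ranked", marginal=45, elevated=60)
print(classify(fall_risk, 52))  # -> "marginal risk"
```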

Customization requirements, such as the ability to set thresholds, together with the lack of icon graphics for color-coded display, the lack of timely automatic refresh, and space constraints, led to development of the EHR functional dashboard in the EHR’s reporting module. A significant advantage of this decision was the availability of icons as a substitute for displaying numerical values and the ability to customize unique thresholds for each quality indicator against both numeric and string or text values. An additional benefit was the ability to display subelements of each of the quality indicators (eg, fall risk factors used to create the risk score, a list of all catheters). Users could compile and view the report at their convenience, avoiding the pitfalls of EHR alerts.
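
As an illustration of the icon substitution described above, the sketch below maps both numeric scores and string or text values to icon keys for a report column. The icon names and matching rules are assumptions made for this example; the actual reporting-module configuration is vendor specific.

```python
# Illustrative only: substituting an icon for a raw cell value, handling both
# numeric thresholds and string/text matches. Icon names are invented.
ICONS = {"normal": "green-circle", "marginal": "yellow-triangle", "elevated": "red-square"}

def icon_for(value, numeric_cutoffs=None, flagged_strings=None):
    """Return an icon key to display in place of the value itself."""
    if isinstance(value, str):
        # String/text values: flag exact matches (eg, a catheter type).
        flagged = flagged_strings or set()
        return ICONS["elevated"] if value.lower() in flagged else ICONS["normal"]
    marginal, elevated = numeric_cutoffs
    if value >= elevated:
        return ICONS["elevated"]
    return ICONS["marginal"] if value >= marginal else ICONS["normal"]

print(icon_for(62, numeric_cutoffs=(45, 60)))                                     # red-square
print(icon_for("indwelling catheter", flagged_strings={"indwelling catheter"}))   # red-square
```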

Development Challenges

Displaying trending data (ie, changes in patient condition) was one of the important original specifications from users.35 However, the existing EHR software did not allow display of trended data within the reporting module, leading to an alternative programming approach. Building display columns for the central venous and urinary catheters was also difficult because of charting omissions. If a catheter was not documented as discontinued upon discharge, catheter days continued to accumulate, sometimes for years, and appeared on a subsequent admission. As an alternative (and to report catheter days accurately), the EHR functional dashboard was programmed to display catheter days only from admission to discharge during a single inpatient stay.
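
A short sketch makes the catheter-day fix concrete: counting is clipped to the current admission window, so a catheter never documented as removed during a prior stay cannot accumulate days across admissions. The field names and date handling are assumptions for illustration.

```python
# Minimal sketch of clipping catheter days to a single inpatient stay.
from datetime import date
from typing import Optional

def catheter_days(placed: date, removed: Optional[date],
                  admitted: date, discharged: Optional[date]) -> int:
    """Catheter days bounded by the current admission window."""
    today = date.today()
    start = max(placed, admitted)                      # ignore days before this admission
    end = min(removed or today, discharged or today)   # stop at removal or discharge
    return max((end - start).days, 0)

# A catheter placed during a prior stay and never discontinued contributes
# days only from the current admission date forward.
print(catheter_days(placed=date(2016, 1, 2), removed=None,
                    admitted=date(2016, 2, 1), discharged=None))
```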

Preliminary Review of EHR Functional Dashboard

Following conversion of the prototype dashboard into an EHR functional version, a focus group of nurse managers, physicians, and hospital quality professionals met again to discuss the clinical accuracy of data elements. During these meetings, care providers discussed whether the dashboard display of all inpatients would be specific enough for their needs. Physicians were concerned about having enough information on patients they were specifically “rounding on” or who were cared for by their team. Nurses needed information only on patients within their units, within specific geographic proximity, or within their patient assignment. A difficulty arose in making the report dynamic and exclusive to each user role based on each clinician’s log-in identification. The task was to find a data element within the EHR that both groups used and that identified their roles. Both providers and nurses assign themselves to the patient’s treatment team on admission or at the beginning of their shift. The functionality to identify an individual patient’s care team already existed within the current system and facilitated remote communication among clinicians. A filter was therefore added to the query using the treatment team, with the attending physician added to the list of licensed independent practitioners and nurses already active on the patient’s treatment team. This allowed clinicians or teams to view only the patients they were responsible for treating. Use of icons and color coding was also refined at this stage to improve intuitive interpretation of threshold scores. An image of the EHR functional dashboard with patient-identifying cells collapsed is provided as Figure 1.

FIGURE 1: The HIT dashboard with patient-identifying information collapsed. Used with permission from Epic Systems Corporation, 2016.
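
The treatment-team filter lends itself to a brief sketch, shown below under an assumed record layout: the patient list is restricted to patients whose treatment team, with the attending physician added, contains the logged-in clinician’s identifier.

```python
# Hypothetical sketch of the treatment-team filter; the record layout and
# identifier scheme are assumptions for illustration.
def patients_for_user(patients, user_id):
    """Return only the patients the logged-in clinician is responsible for treating."""
    visible = []
    for p in patients:
        team = set(p["treatment_team"])          # LIPs and nurses on the team
        team.add(p.get("attending_physician"))   # attending added to the filter
        if user_id in team:
            visible.append(p)
    return visible

patients = [
    {"name": "Test A", "treatment_team": ["rn_104", "md_222"], "attending_physician": "md_301"},
    {"name": "Test B", "treatment_team": ["rn_777"], "attending_physician": "md_301"},
]
print([p["name"] for p in patients_for_user(patients, "md_301")])  # both, via attending
```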

Electronic Health Record Functional Dashboard Evaluation

Evaluation followed refinement and integration of the EHR functional dashboard into the medical center EHR. Three pairs of nurses and one physician from medical-surgical areas volunteered to participate. This sample size has been identified as more than sufficient for usability studies.44,45 All participants were proficient EHR users. Participants were asked to perform a series of tasks using the conventional EHR interface and the new EHR functional dashboard. The tasks were selected to include commonly used evidence-based interventions for the associated quality indicators and matched practices previously implemented through the organization’s quality improvement program. The order of tasks and the order in which the two systems were presented were both randomized to prevent training effects. Participants’ interactions with the two systems were recorded through audio, screen recording, and real-time observation and annotation of usability issues using Morae (TechSmith Corporation, Okemos, MI), a usability testing platform. Institutional review board approval was sought prior to usability testing, and the protocol was determined not to be human subject research.

Procedure

Participants were provided with a list of “test” patients (with pseudo–patient records for training purposes) visible on the computer screen, using an EHR feature that is familiar and part of their usual workflow. A brief orientation to the dashboard was provided with instructions about the tasks to complete. Participants were asked to perform a series of tasks using both the dashboard and conventional EHR display (Table 3). The objective of the task-based evaluation was to assess potential differences in the time to complete a task and the percentage of tasks completed without error between the dashboard and conventional system. Participants were instructed to speak aloud as they completed tasks and maneuvered through the system. This helped gather insights on the difficulties participants encountered as they used the system. Participant pairs alternated task completion with observation and discussion.

Table 3: Evaluation Tasks

Immediately following completion of the task-based evaluation, each of the seven participants completed a paper-based SUS and Poststudy System Usability Questionnaire (PSSUQ).36,38,39,47 The SUS is a 10-item survey used to evaluate an individual’s assessment of a system’s usability. Each item has five response options ranging from “strongly disagree” to “strongly agree.” Scoring of the SUS yields a composite measure between 0 and 100 that represents the overall usability of the system being studied. While various scoring criteria have been developed, products with SUS scores greater than 85 are generally considered highly usable (ie, among the top 10% of products).38,48 Previous work has shown that the SUS is reliable, correlates well with other usability scales, and is a useful metric for overall product usability.49,50 The PSSUQ consists of 19 items that measure users’ perceived satisfaction with a product. Factor analysis of the PSSUQ has indicated an overall satisfaction scale (average of responses to Items 1–19) and three subscales: system usefulness (Items 1–8), information quality (Items 9–15), and interface quality (Items 16–18).47 Each item has seven response options ranging from “strongly agree” to “strongly disagree.” Scores represent the mean response to the items in each subscale, ranging from 1 (good usability) to 7 (poor usability). The PSSUQ has been observed to be highly reliable and valid.47,51
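
For readers unfamiliar with the instruments, the standard SUS scoring rule described by Brooke38,39 and a PSSUQ subscale mean can be computed as follows (a minimal sketch; the response data are invented).

```python
# Standard SUS scoring: odd items contribute (response - 1), even items
# contribute (5 - response), and the sum is scaled by 2.5 to give 0-100.
def sus_score(responses):
    """responses: 10 integers, 1 (strongly disagree) .. 5 (strongly agree)."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # indexes 0, 2, 4, ... are items 1, 3, 5, ...
                for i, r in enumerate(responses))
    return total * 2.5

def pssuq_subscale(responses, items):
    """Mean response over the 1-indexed PSSUQ items in a subscale (1 good .. 7 poor)."""
    return sum(responses[i - 1] for i in items) / len(items)

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0, the best possible score
print(pssuq_subscale([2] * 19, range(1, 9)))      # system usefulness subscale = 2.0
```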

RESULTS

Our evaluation indicated that, with the exception of Tasks 1 and 8 (identifying and intervening for urinary catheter and pressure ulcer risks, respectively), participants completed all tasks faster and with greater accuracy (no errors) using the dashboard than using the conventional EHR system (Figures 2 and 3). Across all seven participants, the dashboard received a mean SUS score of 87.5 (SD, 9.6) and an overall PSSUQ score of 1.7 (SD, 0.5). Component scores of the PSSUQ included a system usefulness score of 1.5 (SD, 0.4), an information quality score of 1.8 (SD, 0.8), and an interface quality score of 1.8 (SD, 0.8), suggesting that the dashboard had good usability.

FIGURE 2: Time on task (in minutes) for the conventional EHR and dashboard (error bars represent SD). See Table 3 for further description of tasks. Note that observed differences were not evaluated for statistical significance.
FIGURE 3: Percentage of tasks completed without error. See Table 3 for further description of tasks. Note that observed differences were not evaluated for statistical significance.

The SUS score improved by 4.5 points in comparison to the prototype dashboard, suggesting that changes made during the iterative development of the EHR functional dashboard were beneficial.35 In both evaluations, participants had little difficulty operating the system. Specifically, participant ratings of SUS Statements 4 and 10 were generally very positive, suggesting good perceived learnability.37 One participant noted that “The system was mostly self-explanatory, hence easy to use and learn.” The overall PSSUQ and individual component scores, although not collected as part of the prototype evaluation, also suggested that the dashboard was intuitive to use. Several participants noted that the “visual aspect of the system provided information at a glance for multiple patients.” One participant volunteered that they were color blind and still found the color display easily interpretable.

Despite generally positive results, several opportunities to improve the dashboard were identified. The most consistent recommendations from participants involved inclusion of additional quality indicators and patient warning “flags.” For example, one participant recommended including bariatric status, whereas another recommended adding the presence or absence of bed alarms. Unique patient characteristics, such as a patient having a different primary language or using a service dog, were also suggested for inclusion. Other recommendations included making the dashboard touch-screen capable and revising the layout to fit all the information on one screen.

DISCUSSION AND CONCLUSIONS

Our findings on reduced task completion times and error rates suggest that the HIT dashboard could be a promising tool for improving the speed and accuracy of clinician decisions in the complex patient care environment. Observed improvements in participant performance are likely the result of keeping displays simple and avoiding visual “clutter” common to conventional EHR displays.40,41 Several recent reviews of the scientific literature have suggested that cluttered displays result in substantial performance decrements including degraded monitoring and change detection, delayed visual search times, increased memory loading, and negative effects on situational awareness.40,41 Faster, more accurate decisions while using the HIT dashboard may support the reduction of preventable medical errors and associated adverse events.

A multifaceted implementation plan has been used to promote use of the dashboard on 11 inpatient units at the medical center,52 with pilot use planned for several months. Knowledge about use is being promoted through demonstrations at meetings with individuals or teams in the clinical areas. Tip sheets provide guidance for each displayed indicator to promote early interprofessional planning. Local change agents on each unit are responsible for using the dashboard during interprofessional daily huddles. Integration of dashboard use in practice is being promoted by updating practice reminders, providing actionable feedback of quality improvement data, and reporting to senior leaders. The functionality of the EHR dashboard continues to evolve, leading to ongoing updates within the system and training of clinician users. Installation of additional flat-screen monitors to improve convenient access for interprofessional care teams is a component of this implementation. Rollout to other patient populations is under development, with identification of key quality indicators for population-specific dashboards (eg, behavioral health or critical care). Evaluation will use statistical process control charts to determine whether significant improvements are achieved in the quality outcomes displayed as indicators within the dashboard.
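
As a sketch of the kind of statistical process control computation such an evaluation could use, the following computes the center line and 3-sigma limits of a standard p-chart for a monthly proportion (eg, the fraction of patients with an indicator out of compliance). The counts and denominators are invented for illustration.

```python
# Standard p-chart: center line p-bar and 3-sigma limits per subgroup size n.
import math

def p_chart_limits(events, denominators):
    """Return the center line and per-month (lower, upper) control limits."""
    p_bar = sum(events) / sum(denominators)
    limits = []
    for n in denominators:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(p_bar - 3 * sigma, 0.0), min(p_bar + 3 * sigma, 1.0)))
    return p_bar, limits

p_bar, limits = p_chart_limits(events=[12, 9, 14, 7], denominators=[120, 115, 130, 118])
print(round(p_bar, 3), [(round(lo, 3), round(hi, 3)) for lo, hi in limits])
```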

In the meantime, some early improvements have already been established. Among the anticipated quality improvements was accurate documentation of restraint use. Prior to dashboard implementation, patients transferred across inpatient units (eg, from ICU to a general care unit) whose restraints were removed during the transfer were occasionally not documented as having restraint use discontinued. Using the dashboard during rounding to identify patients whose restraints were still documented as active led nurses to correct the patient record and prompted the restraint committee to implement additional targeted quality improvement activities. Rounding teams are also finding similar improvements in care coordination and documentation for other displayed indicators (eg, urinary catheters). One unanticipated benefit was clear identification of vulnerable patients, whose numerous indicators displayed as “red” or at risk were easy to spot. Elderly patients waiting for discharge to a skilled facility, for example, were often not recognized as being at high risk of contracting hospital-acquired conditions. These patients’ needs are now considered when staffing assignments are made, to maintain acute-level nursing coverage. Reporting on use of the dashboard is now occurring within the quality program across disciplines in the organization. Additional postpilot evaluation will indicate whether the dashboard and implementation strategies were effective.

There are several limitations to this work. First, the conventional EHR interface used as a reference comparison during evaluation of the dashboard was not customizable to each participant. The EHR has many options for personalizing information display; to provide a consistent platform across all participants, personalized EHR interfaces were not included, and participants may have been slowed by slight differences from their personal EHR displays. In addition, participants were given only 10 minutes to train with the dashboard system and ask questions immediately prior to completing the task-based evaluation. Although participants received this training, their performance may have been affected by their relative inexperience with the new system.

While participants noted during evaluations that the dashboard was much easier than the conventional EHR for identifying information quickly, tasks that required changing documentation or placing orders (eg, Tasks 1 and 8) often took longer or led to more errors when using the dashboard because of its limited linkage to ordering functionality. Further steps to better integrate the dashboard within the EHR are necessary.

Finally, the scope of this study was small, limiting the generalizability of the results. Moreover, the usability evaluations were based on self-report, which is subject to bias. The inability to blind participants to the outcome measure is a drawback that further limits generalizability. Regardless, this study represents one of the first attempts to design and evaluate the effects of an EHR-integrated quality dashboard, providing an example for other institutions to apply a similar structure and design principles.

References

1. James JT. A new, evidence-based estimate of patient harms associated with hospital care. J Patient Saf. 2013;9(3): 122–128.
2. Centers for Disease Control and Prevention. Leading causes of death. http://www.cdc.gov/nchs/fastats/leading-causes-of-death.htm. Accessed May 13, 2015.
3. Andel C, Davidow SL, Hollander M, Moreno DA. The economics of health care quality and medical errors. J Health Care Finance. 2012;39(1): 39–50.
4. Roth C, Wieck KL, Fountain R, Haas BK. Hospital nurses’ perceptions of human factors contributing to nursing errors. J Nurs Adm. 2015;45(5): 263–269.
5. Blumenthal D. Launching HITECH. N Engl J Med. 2010;362(5): 382–385.
6. Haux R. Health information systems—past, present, future. Int J Med Inform. 2006;75(3–4): 268–281.
7. IOM (Institute of Medicine). Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington, DC: The National Academies Press; 2013.
8. Arditi C, Rège-Walther M, Wyatt JC, Durieux P, Burnand B. Computer-generated reminders delivered on paper to healthcare professionals: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2012;12: CD001175.
9. Buntin MB, Burke MF, Hoaglin MC, Blumenthal D. The benefits of health information technology: a review of the recent literature shows predominantly positive results. Health Aff (Millwood). 2011;30(3): 464–471.
10. Bates DW, Gawande AA. Improving safety with information technology. N Engl J Med. 2003;348(25): 2526–2534.
11. Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144(10): 742–752.
12. Jensen J. United hospital increases capacity usage, efficiency with patient-flow management system. J Healthc Inf Manag. 2004;18(3): 26–31.
13. Kohli S, Waldron J, Feng K, et al. Utilizing the electronic emergency whiteboard to track and manage emergency patients. Medinfo. 2004;1688.
14. Poon EG, Jha AK, Christino M, et al. Assessing the level of healthcare information technology adoption in the United States: a snapshot. BMC Med Inform Decis Mak. 2006;6: 1.
15. Anderson D, Zlateva I, Khatri K, Ciaburri N. Using health information technology to improve adherence to opioid prescribing guidelines in primary care. Clin J Pain. 2015;31(6): 573–579.
16. Malone ML, Vollbrecht M, Stephenson J, Burke L, Pagel P, Goodwin JS. AcuteCare for Elders (ACE) tracker and e-Geriatrician: methods to disseminate ACE concepts to hospitals with no geriatricians on staff. J Am Geriatr Soc. 2010;58(1): 161–167.
17. Dowding D, Randell R, Gardner P, et al. Dashboards for improving patient care: review of the literature. Int J Med Inform. 2015;84(2): 87–100.
18. Render ML, Freyberg RW, Hasselbeck R, et al. Infrastructure for quality transformation: measurement and reporting in veterans administration intensive care units. BMJ Qual Saf. 2011;20(6): 498–507.
19. Kim H, Chung H, Wang S, Jiang X, Choi J. SAPPIRE: a prototype mobile tool for pressure ulcer risk assessment. Stud Health Technol Inform. 2014;201: 433–440.
20. Madan A, Mahoney J, Allen JG, et al. Utility of an integrated electronic suicide alert system in a psychiatric Hospital. Qual Manag Health Care. 2015;24(2): 79–83.
21. Mapp ID, Davis LL, Krowchuk H. Prevention of unplanned intensive care unit admissions and hospital mortality by early warning systems. Dimens Crit Care Nurs. 2013;32(6): 300–309.
22. Nwulu U, Brooks H, Richardson S, McFarland L, Coleman JJ. Electronic risk assessment for venous thromboembolism: investigating physicians’ rationale for bypassing clinical decision support recommendations. BMJ Open. 2014;4(9): e005647.
23. Dunsmuir DT, Payne BA, Cloete G, et al. Development of mHealth applications for pre-eclampsia triage. IEEE J Biomed Health Inform. 2014;18(6): 1857–1864.
24. Li AC, Kannry JL, Kushniruk A, et al. Integrating usability testing and think-aloud protocol analysis with “near-live” clinical simulations in evaluating clinical decision support. Int J Med Inform. 2012;81(11): 761–772.
25. Matui P, Wyatt JC, Pinnock H, Sheikh A, McLean S. Computer decision support systems for asthma: a systematic review. NPJ Prim Care Respir Med. 2014;24: 14005.
26. Raghu A, Praveen D, Peiris D, Tarassenko L, Clifford G. Engineering a mobile health tool for resource-poor settings to assess and manage cardiovascular disease risk: SMARThealth study. BMC Med Inform Decis Mak. 2015;15: 36.
27. Linder JA, Schnipper JL, Tsurikova R, et al. Electronic health record feedback to improve antibiotic prescribing for acute respiratory infections. Am J Manag Care. 2010;16(12 suppl HIT): e311–e319.
28. Dikomitis L, Green T, Macleod U. Embedding electronic decision-support tools for suspected cancer in primary care: a qualitative study of GPs’ experiences. Prim Health Care Res Dev. 2015;16(6): 548–555.
29. Meulendijk M, Spruit M, Drenth-van Maanen C, Numans M, Brinkkemper S, Jansen P. General practitioners’ attitudes towards decision-supported prescribing: an analysis of the Dutch primary care sector. Health Informatics J. 2013;19(4): 247–263.
30. Piscotty RJ Jr, Kalisch B, Gracey-Thomas A. Impact of healthcare information technology on nursing practice. J Nurs Scholarsh. 2015;47(4):287–293.
31. Grube MM, Dohle C, Djouchadar D, et al. Evidence-based quality indicators for stroke rehabilitation. Stroke. 2012;43(1): 142–146.
32. Fischer C, Anema HA, Klazinga NS. The validity of indicators for assessing quality of care: a review of the European literature on hospital readmission rate. Eur J Public Health. 2012;22(4): 484–491.
33. Mainz J. Defining and classifying clinical indicators for quality improvement. Int J Qual Health Care. 2003;15(6): 523–530.
34. Hayes CW, Batalden PB, Goldmann D. A ‘work smarter, not harder’ approach to improving healthcare quality. BMJ Qual Saf. 2015;24(2): 100–102.
35. Schall MC Jr, Chen H, Pennathur PR, Cullen L. Development and evaluation of a health information technology dashboard of quality indicators. Paper presented at the Human Factors and Ergonomics Society 59th Annual Meeting; October 26–30, 2015; Los Angeles, CA: 461–465.
36. Lewis JR. IBM computer usability satisfaction questionnaires: psychometric evaluation and instructions for use. Int J Human Comput Interact. 1995;7(1): 57–78.
37. Lewis JR, Sauro J. The factor structure of the system usability scale. In: Human Centered Design. Berlin, Heidelberg: Springer-Verlag; 2009: 94–103.
38. Brooke J. SUS: a retrospective. J Usability Stud. 2013;8(2): 29–40.
39. Brooke J. SUS—a quick and dirty usability scale. Usability Eval Ind. 1996;189(194): 4–7.
40. Moacdieh N, Sarter N. Display clutter: a review of definitions and measurement techniques. Hum Factors. 2015;57(1): 61–100.
41. Moacdieh N, Sarter N. Clutter in electronic medical records: examining its performance and attentional costs using eye tracking. Hum Factors. 2015;57(4): 591–606.
42. AHRQ National Quality Measure Clearinghouse. https://www.qualitymeasures.ahrq.gov/index.aspx. Accessed February 9, 2016.
43. van Walraven C, Dhalla IA, Bell C, et al. Derivation and validation of an index to predict early death or unplanned readmission after discharge from hospital to the community. Can Med Assoc J. 2010;182(6): 551–557.
44. Turner CW, Lewis JR, Nielsen J. Determining usability test sample size. Int Encycl Ergon Human Factors. 2006;3(2): 3084–3088.
45. Lewis JR. Sample sizes for usability studies: additional considerations. Hum Factors. 1994;36(2): 368–378.
46. American Geriatrics Society 2012 Beers Criteria Update Expert Panel. American Geriatrics Society updated Beers criteria for potentially inappropriate medication use in older adults. J Am Geriatr Soc. 2012;60(4): 616–631.
47. Lewis JR. Psychometric evaluation of the PSSUQ using data from five years of usability studies. Int J Human Comput Interact. 2002;14(3–4): 463–488.
48. Orfanou K, Tselios N, Katsanos C. Perceived usability evaluation of learning management systems: empirical evaluation of the System Usability Scale. Int Rev Res Open Distributed Learn. 2015;16(2).
49. Bangor A, Kortum P, Miller J. Determining what individual SUS scores mean: adding an adjective rating scale. J Usability Stud. 2009;4(3): 114–123.
50. Bangor A, Kortum PT, Miller JT. An empirical evaluation of the system usability scale. Int J Human Comput Interact. 2008;24(6): 574–594.
51. Fruhling A, Lee S. Assessing the reliability, validity and adaptability of PSSUQ. Proceedings of the Eleventh Americas Conference on Information Systems, Omaha, NE; August 11–14, 2005;378.
52. Cullen L, Adams SL. Planning for implementation of evidence-based practice. J Nurs Adm. 2012;42(4): 222–230.
Keywords:

Evidence-based practice; Health information technology; Nurse-sensitive indicators; Quality indicators; Usability

Copyright © 2017 Wolters Kluwer Health, Inc. All rights reserved.