Journal of Nursing Administration:
doi: 10.1097/NNA.0000000000000093

Development of a Tool to Measure User Experience Following Electronic Health Record Implementation

Xiao, Yan PhD; Montgomery, Donna Cook RN-BC, BSN, MBA; Philpot, Lindsey M. PhD, MPH; Barnes, Sunni A. PhD; Compton, Jan MSHA, BSN, RN; Kennerly, Donald MD, PhD


Author Information

Author Affiliations: Director of Human Factors and Patient Safety Research (Dr Xiao), System Director of Nursing and Patient Care Informatics (Ms Montgomery), Director of Survey Research (Dr Barnes), Health Services Researcher, STEEEP Measurement, Analytics, and Reporting (Ms Philpot), Chief Patient Safety Officer (Ms Compton), and Associate Chief Quality Officer (Dr Kennerly), Baylor Scott & White Health—North Division, Dallas, Texas.

The survey development was supported in part by Baylor Health Care System and by a grant (10510592) for Patient-Centered Cognitive Support under the Strategic Health IT Advanced Research Projects from the Office of the National Coordinator for Health Information Technology.

A presentation about the 2011 survey tool and results was made to the Healthcare Information and Management Systems Society Usability Taskforce in January 2012.

The opinions expressed here do not necessarily reflect the official position of the sponsors.

The authors declare no conflicts of interest.

Correspondence: Dr Xiao, 8080 North Central Expy, Ste 500, Dallas, TX 75206.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s Web site.

Abstract
OBJECTIVE: The aim of this study was to develop a survey tool to assess electronic health record (EHR) implementation to guide improvement initiatives.

BACKGROUND: Survey tools are needed for ongoing improvement and have not been developed for aspects of EHR implementation.

METHODS: The Baylor EHR User Experience (UX) survey was developed to capture 5 concept domains: training and competency, usability, infrastructure, usefulness, and end-user support. Validation efforts included content validity assessment, a pilot study, and analysis of 606 nurse respondents. The revised tool was sent to randomly sampled EHR nurse-users in 11 acute care facilities.

RESULTS: A total of 1,301 nurses responded (37%). Internal consistency of the survey tool was excellent (Cronbach’s α = .892). Survey responses including 1,819 open comments were used to identify and prioritize improvement efforts in areas such as education, support, optimization of EHR functions, and vendor change requests.

CONCLUSION: The Baylor EHR UX survey was a valid tool that can be useful for prioritizing improvement efforts in relation to EHR implementation.

Electronic health records (EHRs) are being implemented at a rapid pace. In hospitals, nurses are the single largest group of EHR users and are often the first frontline clinical staff to incorporate the EHR into their workflow. Although the positive impact of EHRs on quality of care and nursing workload has been documented,1-3 implementing an EHR has been challenging. Reported difficulties include limited usability that impedes efficiency, effectiveness, and user satisfaction.4,5 Some reports have attributed negative impacts on patient care, such as errors in order entry, documentation in the wrong patients’ charts, and delays in receiving patient allergy information, to EHR implementation.6,7

Like other large, complex information technology systems, an EHR implementation is an ongoing process that can extend years beyond a one-time go-live event. In addition to initial adoption strategies, education, and user support, ongoing optimization and integration efforts must occur to maximize the use and applicability of the system. User experience solicited across time and across departments can provide valuable input for improvement activities related to the EHR and its associated processes.

Baylor Scott & White Health is an integrated healthcare delivery system. Since 2009, the organization has implemented a commercial, third-party EHR across the 14 hospitals in its North Texas Division in the Dallas–Fort Worth area. Although focus groups, informal discussions, and meetings were used to gather data on user experience, a systematic and structured means was needed to collect anonymous feedback from nurses across different departments. Surveys have previously been used to assess acceptance of EHRs,8 perceived effectiveness of EHRs,9 and user experience10,11; however, no published survey tools were available to help healthcare organizations continuously improve user experience12 after an implementation decision has been made. In this article, we report the development of the Baylor EHR User Experience (UX) survey, the psychometric properties of the instrument, the process of UX survey implementation, and reporting for tracking and continuous improvement during the multifacility rollout of an EHR system.

Methods

Instrument Development

A literature search was conducted on MEDLINE for survey tools published between January 1, 2005, and December 31, 2010, using search terms including user satisfaction survey, nurse user experience, EHR, electronic medical record, and computerized clinical documentation. One physician-centric11 and 4 nursing-centric survey tools were identified.9,13-15 Edsall and Adler11,16 conducted a yearly EHR user satisfaction survey among family physicians to guide EHR selection along 4 domains (vendor support, ease of learning, effect on productivity, and overall satisfaction). Otieno et al9 reported a tool to understand the views of hospital-based nurses on the use, quality, and end-user satisfaction with EHR systems in Japan. Lee et al13 reported a tool as part of a multimethod evaluation technique to understand the impact of EHR systems on nurses’ day-to-day practice. Dykes et al14 reported the Impact of Health Information Technology Scale, focused on the communication aspects of EHR use among nurses in an acute care setting. Chisolm et al15 examined the implementation of an EHR in a pediatric emergency department and, based on 71 responses, recommended that improvements in training and support were important for user satisfaction. These published survey tools established a number of concept domains for assessing user experience: (1) ease of use; (2) perceived usefulness of the EHR system; (3) perceived ability to gain access to the EHR system when needed; (4) system processing speed, response time, and system delays; (5) previous experience with EHR systems, increased workload, concern over impact on patient care, completeness of the content found in the EHR system, learning time associated with new EHR systems, and time taken away from direct patient care during the learning experience; and (6) the impact on working relationships among nurses and between nurses and physicians.

We compiled the items from these surveys and created new candidate questions about EHR infrastructure, education, and user support that were considered key for guiding improvement decisions by healthcare organizations. We aimed for a short survey instrument that would generate meaningful and actionable information. A multidisciplinary group of experts in patient safety, safety culture surveys, human factors, survey design, statistics, nursing informatics, and hospital administration used an iterative process to assess content validity, the priority of each item, wording, and the appropriateness of grouping into domains, and to revise the instrument. The group assigned the question items to 5 concept domains: training and competency, usability, infrastructure, usefulness, and end-user support.

The survey validation, testing, and implementation were approved by Baylor Research Institute’s institutional review board. Initial versions of the survey instrument were pilot tested with more than 20 informatics-trained nurses to gather their input regarding the coverage of the survey items, the response scales used, and ease of survey completion.

Internal Consistency Assessment and Survey Revision

The Baylor EHR UX survey was sent by e-mail to 1,688 randomly selected nurses with patient care responsibilities in the 6 hospitals in our organization that had implemented the EHR system in the fall of 2011. Based on the responses received from 606 nurses (response rate, 36%), 4 of the 5 concept domains had good internal consistency (Cronbach’s α = .72-.76). The infrastructure domain had lower internal consistency (Cronbach’s α = .61). We made changes to the response scale and removed the item about downtime because the majority of the planned downtimes were scheduled during weekends, and the number of downtimes was already captured. The revised instrument has 29 items, all on a 5-point frequency or agreement Likert-type scale (Table 1).
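
The domain-level internal consistency figures above can be reproduced with the standard Cronbach’s α computation. The sketch below is illustrative only: the data are simulated 5-point Likert responses, not the survey responses.

```python
# Cronbach's alpha for a block of Likert items (illustrative sketch;
# the data below are simulated, not the Baylor EHR UX survey data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of numeric Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the domain
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated 4-item domain: items share a common component, so α is high
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(100, 1))
domain = np.clip(base + rng.integers(-1, 2, size=(100, 4)), 1, 5)
alpha = cronbach_alpha(domain)  # alpha for the simulated domain
```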

Table 1

In addition to the Likert-type scale, the survey instrument has 7 fields for open comments about specific aspects in the EHR system. Four of the fields were triggered by negative responses (Table 1).

Survey Implementation

The revised Baylor EHR UX survey was administered during a 3-week data collection period in late May and early June 2013 across 11 hospitals with the EHR.

Respondent Sampling

A simple random sample was taken at hospitals with more than 300 employed frontline nurses, and a census sample was taken at hospitals with 300 or fewer employed frontline nurses. Care was taken to ensure that only nurses who engage in direct patient interaction were included in the sample. Nurses excluded from the sampling group included those with purely administrative roles, those who functioned as technicians for laboratory and other services, and those who also served a different clinical role, such as case management. The number of respondents, response rate, hospital characteristics, and length of time on the EHR are reported in Table, Supplemental Digital Content 1.
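
The sampling rule above (simple random sample above 300 frontline nurses, census at or below) can be sketched as follows. Hospital names, nurse identifiers, and the per-hospital sample size are hypothetical; the article does not report a per-hospital sample size.

```python
# Sketch of the sampling rule described above. All names and the
# sample_size parameter are hypothetical, for illustration only.
import random

def sample_nurses(nurses_by_hospital, threshold=300, sample_size=150, seed=42):
    """nurses_by_hospital: dict mapping hospital -> list of eligible nurse IDs
    (nurses with direct patient care; administrative and technician roles
    are assumed to be excluded upstream)."""
    rng = random.Random(seed)
    sampled = {}
    for hospital, nurses in nurses_by_hospital.items():
        if len(nurses) > threshold:
            sampled[hospital] = rng.sample(nurses, sample_size)  # simple random sample
        else:
            sampled[hospital] = list(nurses)                     # census: everyone
    return sampled

# Hypothetical example: one hospital above and one below the threshold
pool = {
    "Hospital A": [f"nurseA{i}" for i in range(400)],  # >300: random sample
    "Hospital B": [f"nurseB{i}" for i in range(250)],  # <=300: census
}
sampled = sample_nurses(pool)
```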

Survey Management

Snap Surveys (Portsmouth, New Hampshire), an online survey management tool, was used to send invitations to participants’ employer-based e-mail address with a link to the survey. Participants were given 2 reminders and 21 business days to complete the survey. Interim response rate reports were provided to nursing administration at each facility to encourage participation.

Statistical Analysis

Internal consistency of the survey instrument and of the 5 predefined domains was assessed by Cronbach’s α. The Pearson product-moment correlation of individual domains with respondents’ overall rating of the EHR system (“Overall I am satisfied with the EHR”) was used to identify the domains and survey items most reflective of respondents’ overall sense of satisfaction.
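
The item-level side of this analysis is a Pearson product-moment correlation of each item against the overall-satisfaction item. A minimal sketch, using simulated responses rather than the study data:

```python
# Pearson correlation of a survey item with the overall-satisfaction
# rating (illustrative sketch; data are simulated, not the study data).
import numpy as np

def pearson_r(x, y) -> float:
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()          # center both variables
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Simulated responses: an item that tracks the overall rating with noise
rng = np.random.default_rng(1)
overall = rng.integers(1, 6, size=200).astype(float)
item = np.clip(overall + rng.normal(0.0, 1.0, size=200), 1, 5)
r = pearson_r(item, overall)  # positive, since the item tracks the rating
```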

Results

A total of 1,301 responses were received from nurses (response rate, 37%). The Cronbach’s α of the survey instrument as a whole indicated excellent internal consistency (α = .892). Satisfactory internal consistency was also observed for all domains (α = .65-.89; Table 2). The usability and usefulness domains were correlated with the overall rating (r = 0.74 and r = 0.84, respectively), more strongly than the infrastructure domain (r = 0.250). The individual question item with the highest correlation with the overall rating was “The content is laid out in an understandable way” (r = 0.756), followed by “Documentation through the EHR has improved patient safety” (r = 0.719) and “The EHR allows me to spend more time with my patients” (r = 0.695). The items with the lowest correlations were “I document directly into the computer without writing on paper 1st” (r = 0.279) and “Duplicating entries” (r = 0.226).

Table 2

Large numbers of open comments were provided (a total of 81 pages containing 1,819 comments). Thematic analysis was conducted first individually by 2 informatics-trained nurses and 2 human factors specialists, and then jointly to reach consensus. Respondents described concerns with the fragmentation of documentation across different parts of the EHR (eg, skin assessments, ear/nose/throat assessments) and with missing components (eg, neurovascular assessment). Respondents indicated difficulty in correcting errors during charting and inconsistency from nurse to nurse in where information was entered in the EHR. Respondents specified areas of the EHR where the screen display was not easy to read (eg, laboratory results) and situations where they documented on paper before entering data into the EHR system (eg, physician orders, vital signs). Positive experiences with the current EHR system compared with previous systems included clear and precise order entry, a decreased need to verify orders with physicians because of handwriting issues or unclear order requests, quick log-in to access patient lists and records, and general ease of use. Reported negative experiences with the EHR compared with other EHR systems included difficulty of use, the length of time required for charting, and frequent system updates/upgrades.

Discussion

Few information technology systems are as impactful as the EHR in transforming nursing practice and thus the profession. Safe and effective implementation of an EHR depends on multiple key factors.17 While increasing emphasis has been placed on vendors to improve the usability of their products,5 continuing improvement after initial implementation has been a priority for the majority of organizations with EHRs.18 In addition to other channels of feedback from EHR end users, a survey can provide a data-based way to assess and guide improvement efforts informed by a large number of users across different units.12 Quantitative analysis of survey data can help prioritize specific areas of dissatisfaction among users.

The 5 domains in the Baylor EHR UX survey are consistent with published key variables for information system success19 and were designed to encourage healthcare organizations to look for opportunities to improve user experience, beyond an assessment of satisfaction.20

We believe that the survey tool fills an important gap in our ability to assess opportunities in hospitals to improve user experience. In contrast to EHR vendors, most hospitals do not have the resources to significantly modify EHR design to improve user experience. However, hospital leaders may allocate resources for education, user support, infrastructure improvement, customization of EHR functionalities, technology integration, and, perhaps most importantly, procedure and policy changes for clinical practices in an EHR-centric environment. User experience surveys and feedback can give hospital leaders valuable data both on the high-level view of end-user attitudes concerning day-to-day use of the EHR and on specific improvement opportunities.

The comparison among different facilities, each at a different stage of implementation, prepares organizations and systems to anticipate future challenges. For example, survey results in this project indicated issues with infrastructure, as reflected by access challenges. The results were used to initiate studies to determine where the challenges were, guiding the investment of resources in infrastructure and in configuration changes to nurse workstations. As another example, time spent in the EHR was flagged as a key concern in the survey results. Time-motion studies were subsequently carried out to quantify the issues. Documentation policies were clarified, and education initiatives were implemented to streamline and standardize clinical documentation.

Open comments in the Baylor EHR UX survey were also an important source of information. Although the organization has an active incident reporting system and a strong shared-governance culture, anonymous open comments focused on the EHR complement these other communication channels. Organizational leaders valued the results from the survey and the themes identified from the comments, which were used to help prioritize EHR changes and supporting activities. The information services department and informatics teams partnered to identify the changes that could be made in the EHR based on the survey domains found to be most significant: usefulness and usability. Three examples of changes made toward usefulness included a new lines, tubes, and drains documentation module; enhancements to blood utilization documentation; and the ability to publish personalized templates. Three examples of changes made toward usability included linking fall risk assessment to evidence-based interventions, implementing a hard stop on allergy entry during admission assessment, and creating a faster and more efficient search capability for note templates. Multiple other changes have been made to support nursing, pharmacy, and physician workflows and practice needs. Based on the value of the first deployment of the survey tool in 2011 and the subsequent deployment in 2013, our organization plans to administer the survey tool periodically.

Conclusions

The Baylor EHR UX survey demonstrated acceptable to good internal consistency and was successful in obtaining a large amount of specific feedback from the 11 facilities with the EHR. Survey results were used in multipronged efforts to improve patient safety and nurse satisfaction. The key domains of user experience include training and competency, usability, infrastructure, usefulness, and end-user support. These domains, all of which were included in our survey instrument, had acceptable or better internal consistency in this study. Both quantitative and qualitative survey data are essential for prioritizing resources to improve EHR implementation. Periodic surveys in hospitals with EHRs are justified by the EHR’s vital role in, and significant impact on, quality of care and the nursing work environment. They can also direct organizational leaders’ attention to trends so that improvement efforts and future plans can be made.

Acknowledgments

The authors thank Kelli R. Trungale, MLS, ELS, for editorial assistance. Drs Linda Harrington, Jiajie Zhang, and Muhammad Walji contributed to the development of the survey tool. The authors thank all nurses who participated in the survey and Dr C. Adam Probst for his input.

References

1. Korst LM, Eusebio-Angeja AC, Chamorro T, Aydin CE, Gregory KD. Nursing documentation time during implementation of an electronic medical record. J Nurs Adm. 2003; 33 (1): 24–30.

2. Smith K, Smith V, Krugman M, Oman K. Evaluating the impact of computerized clinical documentation. Comput Inform Nurs. 2005; 23 (3): 132–138.

3. Darbyshire P. ‘Rage against the machine?’: nurses’ and midwives’ experiences of using computerized patient information systems for clinical information. J Clin Nurs. 2004; 13 (1): 17–25.

4. Koppel R, Kreda DA. Healthcare IT usability and suitability for clinical needs: challenges of design, workflow, and contractual relations. Stud Health Technol Inform. 2010; 157: 7–14.

5. Middleton B, Bloomrosen M, Dente MA, et al. Enhancing patient safety and quality of care by improving the usability of electronic health record systems: recommendations from AMIA. J Am Med Inform Assoc. 2013; 20 (e1): e2–e8.

6. Ash JS, Berg M, Coiera E. Some unintended consequences of information technology in health care: the nature of patient care information system-related errors. J Am Med Inform Assoc. 2004; 11 (2): 104–112.

7. Harrison MI, Koppel R, Bar-Lev S. Unintended consequences of information technologies in health care—an interactive sociotechnical analysis. J Am Med Inform Assoc. 2007; 14 (5): 542–549.

8. Hurley AC, Bane A, Fotakis S, et al. Nurses’ satisfaction with medication administration point-of-care technology. J Nurs Adm. 2007; 37 (7-8): 343–349.

9. Otieno GO, Hinako T, Motohiro A, Daisuke K, Keiko N. Measuring effectiveness of electronic medical records systems: towards building a composite index for benchmarking hospitals. Int J Med Inform. 2008; 77 (10): 657–669.

10. Lee TT, Mills ME, Lu MH. The multimethod evaluation of a nursing information system in Taiwan. Comput Inform Nurs. 2009; 27 (4): 245–253.

11. Edsall RL, Adler KG. An EHR user-satisfaction survey: advice from 408 family physicians. Fam Pract Manag. 2005; 12 (9): 29–35.

12. Hoonakker PL, Carayon P, Brown RL, Cartmill RS, Wetterneck TB, Walker JM. Changes in end-user satisfaction with computerized provider order entry over time among nurses and providers in intensive care units. J Am Med Inform Assoc. 2013; 20 (2): 252–259.

13. Lee F, Teich JM, Spurr CD, Bates DW. Implementation of physician order entry: user satisfaction and self-reported usage patterns. J Am Med Inform Assoc. 1996; 3 (1): 42–55.

14. Dykes PC, Hurley A, Cashen M, Bakken S, Duffy ME. Development and psychometric evaluation of the Impact of Health Information Technology (I-HIT) scale. J Am Med Inform Assoc. 2007; 14 (4): 507–514.

15. Chisolm DJ, Purnell TS, Cohen DM, McAlearney AS. Clinician perceptions of an electronic medical record during the first year of implementation in emergency services. Pediatr Emerg Care. 2010; 26 (2): 107–110.

16. Edsall RL, Adler KG. The 2012 EHR User Satisfaction Survey: responses from 3,088 family physicians. Fam Pract Manag. 2012; 19 (6): 23–30.

17. Harrington L, Kennerly D, Johnson C. Safety issues related to the electronic medical record (EMR): synthesis of the literature from the last decade, 2000-2009. J Healthc Manag. 2011; 56 (1): 31–43; discussion 34-43.

18. Staggers N, Xiao Y, Chapman L. Debunking health IT usability myths. Appl Clin Inform. 2013; 4 (2): 241–250.

19. Messeri P, Khan S, Millery M, et al. An information systems model of the determinants of electronic health record use. Appl Clin Inform. 2013; 4 (2): 185–200.

20. Moreland PJ, Gallagher S, Bena JF, Morrison S, Albert NM. Nursing satisfaction with implementation of electronic medication administration record. Comput Inform Nurs. 2012; 30 (2): 97–103.

Supplemental Digital Content


© 2014 Wolters Kluwer Health | Lippincott Williams & Wilkins