Smith, Janis B. DNP, RN; Lacey, Susan R. PhD, RN, FAAN; Williams, Arthur R. PhD; Teasley, Susan L. RN, CCRC; Olney, Adrienne MS; Hunt, Cheri MHA, RN, NEA-BC; Cox, Karen S. PhD, RN, FAAN; Kemper, Carol PhD, RN, CPHQ
The quality of healthcare and patient safety in US hospitals became a national concern in recent years following the release of several Institute of Medicine (IOM) reports. The first report1 noted that medical errors had become a national public health issue. It estimated that 44,000 to 98,000 people die in US hospitals each year as a result of medical errors, more than the deaths attributed annually to breast cancer, AIDS, or motor vehicle accidents. The same report suggested that substantive improvements in information technology are necessary to support clinical and administrative decision-making about healthcare systems.1 A subsequent report suggested a need to "stimulate the development of a health information infrastructure to support quality measurement and reporting."2(p31) A common theme in all the reports is that broad safety and quality improvements require the development of innovative clinical information systems (CISs).
Clinical information systems cost the healthcare industry billions each year.3 Most recently, the Health Information Technology for Economic and Clinical Health (HITECH) Act, part of the American Recovery and Reinvestment Act of 2009, authorized $20 billion in incentives and grants to promote the use of electronic health records.4 But the role of information technology in improving patient safety and care quality is complex and depends on the systems and processes embedded in a CIS. The intent of implementing a comprehensive CIS is to increase quality and improve safety, but CIS implementation is confounded by human factors and perceived barriers that can impede user acceptance and use of the system.5,6 Conflicting data from large-scale studies indicate that implementing these systems may, at times, be linked to increased errors, particularly when clinicians find ways to work around the intended user interface.7
In 2004, a large Midwestern children's hospital system contracted with a CIS vendor to establish an integrated CIS throughout the institution. To enhance adoption of the new CIS, the investigators believed that tracking end-users' impressions of the system over time, and modifying the system based on their input, would improve end-users' confidence in it. Finding no instrument in the literature that offered the level of specificity desired, the investigative team, composed of a nurse informaticist (who represented clinical staff), senior nursing and organizational leaders, and a biostatistician, chose to develop and test a new CIS user perception tool that could guide meaningful system improvements. The purposes of this study were to
1. develop and validate the Information System Evaluation Tool (ISET) with clinical end-users over time,
2. modify our CIS based on the evaluation feedback from end-users of the system at the point of patient care, and
3. determine if system modifications were effective using a prestudy/poststudy design.
Prior to the beginning of the study, approval was obtained from the institutional review board responsible for social science investigation.
State of the Science
Usability, Usefulness, and Satisfaction
Two key constructs emerged in the literature related to measuring components of end-user satisfaction: usability and usefulness. Usability encompasses the extent to which end-users believe that a CIS is easy to use. Usefulness, on the other hand, describes users' sense that the system improves work performance, efficiency, or quality. In the past, usability was the primary focus of studies8-11; more recently, however, Karsh12 refocused attention on the contrast between usability and usefulness. In evaluating these 2 constructs, it is reasonable to treat them as surrogates for end-user satisfaction.
User Satisfaction With CIS: Current Tools
A number of instruments have been developed that measure user satisfaction with information technology. For example, the Technology Acceptance Model examines perceived usefulness, perceived ease of use, and the perception that others think it important to adopt the technology under evaluation.13 The Questionnaire for User Interaction Satisfaction evaluates the human-computer interface, assessing 5 dimensions: overall user satisfaction, terminology and information, learning, system capabilities, and screen layout.14 Venkatesh et al9 proposed the Unified Theory of Acceptance and Use of Technology, which measures 8 key factors in information systems adoption, including performance expectancy, or the gains in job performance the technology is perceived to enable the user to attain. Instruments used to measure user satisfaction with CISs have generally been adapted from tools developed to assess satisfaction with commercial technology.
These tools, although helpful in the area of usability, may be less applicable in the 21st century than was the case even a decade ago. Clinical information systems are sufficiently advanced to be remarkably usable, but their usefulness remains a question. In a recently reported evaluation of determinants of user satisfaction with a CIS, there were 8 to 12 times as many questions regarding perceived ease of use as there were related to perceived usefulness.15 A change in perspective is needed.
The goal of the project was to replace multiple, disparate electronic and paper-based information systems with 1 integrated system that was designed to facilitate the transfer of patient information across multiple practice settings. This conversion occurred in the summer of 2008 and included applications for patient care in the emergency department, operating rooms, intensive care units, general inpatient care areas, and (to a more limited extent) in the clinics. Pharmacy, laboratory, and radiology services were also converted. Physicians, nurses, pharmacists, ancillary health professionals, technicians, clerks, and unit/clinic secretaries were all system users.
Finding no single instrument with the level of specificity needed to guide postimplementation system improvement priorities, it was determined that to best serve clinicians who care directly for patients and families, a new instrument should be developed. The coauthors created the ISET. Items in the ISET are framed by the 6 aims for healthcare advanced by the IOM, which call for healthcare that is safe, effective, efficient, patient centered, timely, and equitable.16 One caveat: whether a system supports equitable care is difficult to determine with the ISET. However, the organization's long-term strategy is to examine care equity through retrospective study and examination of the demographic characteristics of patient care generated by system reports of standardized orders for care.
As previously stated, current tools that measure perceived usefulness tend to offer findings that are more global in nature. For example, "using the system improves the quality of care provided" is a question reported to measure system usefulness.15(p615) In creating the ISET, the investigators wanted to ascertain the specific issues related to quality and other factors critical to achieving the IOM aims.
Development and Pilot Testing
The original questions were developed with input from clinical and expert informants. These items were evaluated for content validity and readability by more than 30 individuals working in various clinical disciplines (physicians, nurses, pharmacists, information system analysts) at the study institution and by 1 external expert in psychometric testing of end-user perception of CIS. The expert was a scientist funded by the Agency for Healthcare Research and Quality17 in the Health Management and Informatics department at a large Midwestern university, but was not associated with any member of the investigative team. The tool was revised based on the feedback of these clinical and expert reviewers.
The original version of the ISET was a 61-item instrument designed to assess the 2 domains of usability and usefulness and the 5 subscales in each domain: safe, timely, effective, efficient, and patient centered. Respondents provided their perceptions on a 5-point Likert scale (from 1 = "strongly disagree" to 5 = "strongly agree"). This tool was pilot tested in March of 2007. There were 220 participants who took part in the pilot survey.
Responses to the pilot testing of the ISET were subjected to an exploratory factor analysis using squared multiple correlations as prior communality estimates.18,19 The principal factor method was used to extract factors, followed by a promax (oblique) rotation. A promax rotation was used because the constructs being measured were believed to be correlated. Using the principal factor method, each variable contributes its prior communality estimate, rather than 1 unit of variance. That estimate was less than 1; therefore, the eigenvalue criterion of 1 was not used to determine retained factors. Instead, the number of factors was determined by examining the scree plot and the proportion of variance in the data set accounted for by each factor. A cutoff of 10% was used for the proportion of variance criterion.
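The proportion-of-variance criterion described above can be sketched in a few lines of Python. The function and the eigenvalues below are invented for illustration only; they are not study data.

```python
def retained_factors(eigenvalues, cutoff=0.10):
    """Count factors whose share of total explained variance meets the cutoff."""
    total = sum(eigenvalues)
    # Proportion of variance in the data set accounted for by each factor.
    proportions = [ev / total for ev in eigenvalues]
    return sum(1 for p in proportions if p >= cutoff)

# Invented eigenvalues from a hypothetical extraction, ordered largest first.
example_eigenvalues = [6.2, 3.1, 1.8, 0.9, 0.4]
print(retained_factors(example_eigenvalues))  # the first 3 factors each explain >= 10%
```

In practice the scree plot is examined alongside this numeric rule, so the final factor count remains a judgment call rather than a purely mechanical one.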
An item was said to load on a factor if the loading was 0.40 or greater on that factor and less than 0.40 on all other factors in the rotated factor pattern. Items loading on more than 1 factor were modified or removed from the instrument. Last, the items attributed to each factor were reviewed to determine if each made sense to the investigative team, given the literature and their experience.
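The item-retention rule above (a loading of 0.40 or greater on exactly one factor) can be expressed as a short filter over a rotated loading matrix. The loadings below are invented for illustration; items that cross-load are omitted, mirroring the study's decision to modify or remove them.

```python
def loading_assignment(loadings, threshold=0.40):
    """Map item index -> factor index for items that load cleanly on one factor.

    Items loading on no factor, or on more than one, are excluded
    (in the study, such items were modified or removed).
    """
    assignments = {}
    for item, row in enumerate(loadings):
        high = [f for f, value in enumerate(row) if abs(value) >= threshold]
        if len(high) == 1:  # loads at/above threshold on exactly one factor
            assignments[item] = high[0]
    return assignments

# Invented rotated loadings for 3 items on 2 factors.
items = [
    [0.72, 0.10],  # item 0: loads cleanly on factor 0
    [0.45, 0.52],  # item 1: cross-loads, so it is dropped
    [0.05, 0.61],  # item 2: loads cleanly on factor 1
]
print(loading_assignment(items))  # {0: 0, 2: 1}
```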
Revised ISET Survey
Based on the findings of the pilot, the survey tool was revised for a second administration. The revised version was a 45-item survey. It eliminated the Likert scale in favor of a forced-choice agree/disagree response and asked respondents to rate the importance of each item on a 3-point scale (0-2). The rationale for these changes was that a forced choice would give the study team more certainty about respondents' intent and greater direction for prioritizing improvements to the CIS. The revised ISET was administered in April 2008 (time 1), prior to the hospital-wide adoption of the CIS. There were 170 participants, consisting of RNs and licensed independent providers (LIPs). Licensed independent providers included physicians and nurse practitioners (ie, those who write billable orders for patient care). (See Table, Supplemental Digital Content 1, which shows the demographics of the ISET participants at time 1, http://links.lww.com/JONA/A50.)
The 45-item ISET was again administered in October 2008 (time 2), which was 6 months after implementation of the CIS. There were 324 participants. Survey participants at time 2 included not only RNs and LIPs, but also allied health (AH) participants. (See Table, Supplemental Digital Content 1, which shows the demographics of the ISET participants at time 2, http://links.lww.com/JONA/A50.)
Confirmatory factor analysis was conducted on the data obtained at times 1 and 2. In addition, prestudy/poststudy comparisons were made for RNs and LIPs. Responses to the ISET at time 2 were arranged in a table by provider type (RN, LIP, AH) and by perception (positive, neutral, or negative). Issues were then identified as known (K), unanticipated or surprise (S), in process (IP), or unresolved (UR) at the time of analysis. This provided a guide for addressing issues related to the CIS based on end-users' responses.
Final ISET Survey Version
Based on the factor analysis for times 1 and 2, the ISET was again revised to include 42 items. Respondents were instructed to respond to each item as "agree," "neutral," "disagree," or "not applicable." The 42-item ISET was administered to staff in September 2009, 16 months after implementation (time 3). At this time, 761 participants took the survey. The demographics of the participants who took the ISET at time 3 are shown in Table 1. The findings indicated that only 30 of the 42 items remained sufficiently stable to allow comparison across the 2 postimplementation surveys (times 2 and 3). Principal components factor analysis was conducted on the time 3 results. Examples of items from the final ISET version are shown in Figure 1.
Results for the Final Version of the ISET
Analysis was completed for the revised 42-item ISET at time 3. There were 348 participants who completed 32 items without using the N/A option. The survey showed high internal consistency reliability, with a Cronbach α of .92. The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was also .92, and none of the 32 items had an individual KMO less than .83, indicating that the items were suitable for factor analysis. Seven factors accounted for 88% of the variance among survey respondents. All 7 factors had eigenvalues greater than 1.0. (See Table and Graph, Supplemental Digital Content 1, which show the eigenvalue scores and a scree plot of eigenvalues after the factor analysis, http://links.lww.com/JONA/A50.)
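For readers unfamiliar with the internal-consistency statistic reported above, Cronbach's α can be computed as α = k/(k−1) × (1 − Σ item variances / variance of total scores), where k is the number of items. The sketch below uses a tiny invented response set, not study data.

```python
def cronbach_alpha(responses):
    """Cronbach's alpha; responses is a list of respondents, each a list of item scores."""
    k = len(responses[0])  # number of items

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Variance of each item's scores across respondents.
    item_vars = [var([r[i] for r in responses]) for i in range(k)]
    # Variance of each respondent's total score.
    total_var = var([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Invented 4 respondents x 3 items on a 5-point scale.
data = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
]
print(round(cronbach_alpha(data), 2))
```

An α of .92, as found for the ISET, indicates that the retained items behave as a highly consistent scale.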
In addition to the item analysis, 2 of the investigators created a tracking grid of item responses (means) by professional group, as well as determining which specific aspects of the CIS needed modifications sooner rather than later. Table 2 provides the results for each item by category and by provider type and categorizes responses as known issue of concern (K), unanticipated or surprise (S), resolved (R), in progress of being resolved (IP), or unresolved (UR). The lead investigator, the nurse informaticist, took the results back to CIS implementation teams who regularly work with clinical end-users, to help guide modifications of the system. We prioritized CIS modifications defined as unanticipated or surprise (S), in progress (IP), and unresolved (UR).
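The tracking grid's prioritization logic (address items flagged S, IP, or UR first) can be sketched as a simple lookup. The item names and status codes below are hypothetical examples, not the study's actual grid entries.

```python
# Status codes from the tracking grid: K = known, S = unanticipated/surprise,
# R = resolved, IP = in progress, UR = unresolved.
PRIORITY_STATUSES = {"S", "IP", "UR"}

def prioritize(grid):
    """Return the items whose status marks them for prioritized CIS modification."""
    return [item for item, status in grid.items() if status in PRIORITY_STATUSES]

# Hypothetical grid entries for illustration only.
grid = {
    "medication order entry speed": "K",
    "duplicate allergy alerts": "S",
    "flowsheet navigation": "IP",
    "lab result display": "R",
    "handoff report printing": "UR",
}
print(sorted(prioritize(grid)))
```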
Comparison of Results
Aggregate (summated) means were more negative for both RNs and LIPs at time 2 than at time 1, as was the aggregate mean score when the 2 groups were combined. Licensed independent providers were more negative about the CIS than were RNs at both survey points. Although there are no between-time comparison scores for AH providers, their total mean score at time 2 was intermediate between those of LIPs and RNs. Of the 45 items, only 3 were ranked as significantly more positive at time 2 when all participant groups were combined, whereas 26 items were ranked as significantly more negative.
Because of the iterative nature of the instrument development, when comparing scores from times 2 and 3, 30 of the 42 items were sufficiently stable to allow for comparison. Perceptions, however, improved on 24 of those 30 items and declined on only 6 of the 30 items. Of the 24 items that showed improvement, 20 were statistically significant when comparing mean responses. All 6 of the items that declined were statistically significant. Results can be seen in Figures 2 and 3. Lower scores are desired for the ISET taken at time 3.
The results of the factor analysis and the final revision of the ISET suggest that it is a valid and reliable survey instrument, which should nevertheless be subject to additional critical assessment. This survey has the granularity necessary to ascertain users' perceptions of specific aspects of system functionality, performance, and impact on patient safety and care quality. It can readily identify needed system improvements, as well as modifications to the work processes to optimize the system for better performance.
We also found that users do not discriminate between the 2 constructs of usability and usefulness. What began as 2 separate constructs in the original ISET was integrated into a single construct in the revised ISET. The team concluded that although subject matter experts and the literature may partition these 2 constructs, end-users view a CIS as either supporting or not supporting their work, irrespective of predefined constructs of usability and usefulness.
Comparisons between the times of administration demonstrated user satisfaction initially decreased upon the implementation of a new CIS. Results from 6 months after implementation showed very few improvements in perception and several significant declines. However, when administered at 16 months after implementation, perceptions on a majority of items significantly improved. This is consistent with many information system evaluations that initially find a decline in user perception in the early months after implementation.
The ISET was specifically designed to capture end-users' perceptions of the CIS at a granular level. The data provided us with the information necessary to identify and prioritize needed system improvements, including training for users. We were then able to use existing change processes to address known but unresolved issues and to clarify and then address unanticipated issues. We prioritized our work based on the potential to impact safety and quality of patient care, as well as the strength of the users' perceptions of the problem.
The ISET is a valid and reliable survey of end-user perceptions of CIS. Although the IOM suggests that implementing a CIS is necessary to improving quality and safety, it is imperative that hospitals understand their end-users' perceptions of these systems. If the users are not convinced that the CIS supports their practice and improves patient care, adoption will be difficult.
One key benefit of the ISET is that its items are not vendor specific, which allows it to be used regardless of the chosen CIS product. With the resources being dedicated to purchasing, implementing, and maintaining these systems, it is paramount that their efficacy be established. Doing so will not only boost end-users' confidence that the product improves their ability to provide safe care, but also help ensure that patients are made safer by its implementation and use.
The authors would like to thank all of the participants who completed this survey.
1. Institute of Medicine. To Err Is Human: Building a Safer Health System. Kohn LT, Corrigan JM, Donaldson MS, eds. Washington, DC: National Academy Press; 2000.
2. Institute of Medicine. Envisioning the National Health Care Quality Report. Washington, DC: National Academy Press; 2001.
4. DesRoches CM, Campbell EG, Vogeli C, et al. Electronic health records' limited successes suggest more targeted uses. Health Aff.
5. Sengstack PP, Gugerty B. CPOE systems: success factors and implementation issues. J Healthc Inf Manag.
6. Saathoff A. Human factors considerations relevant to CPOE implementations. J Healthc Inf Manag.
7. Greenhalgh T, Potts HWW, Wong G, Bark P, Swinglehurst D. Tensions and paradoxes in electronic patient record research: a systematic literature review using the meta-narrative method. Milbank Q.
8. Gainer A, Pancheri K, Zhang J. Improving the human computer interface design for a physician order entry system. AMIA Annu Symp Proc.
9. Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Q.
10. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q.
11. Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. J Biomed Inform.
12. Karsh BT. Beyond usability: designing effective technology implementation systems to promote patient safety. Qual Saf Health Care.
13. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q.
14. Harper BD, Norman KL. Improving user satisfaction: the Questionnaire for User Interaction Satisfaction, version 5.5. In: Proceedings of the First Mid-Atlantic Human Factors Conference. Virginia Beach, VA.
15. Palm JM, Colombet I, Sicotte C, Degoulet P. Determinants of user satisfaction with a clinical information system. AMIA Annu Symp Proc.
16. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
17. Agency for Healthcare Research and Quality (AHRQ). Available at: www.ahrq.gov. Accessed November 8, 2010.
18. Kline P. An Easy Guide to Factor Analysis. London: Routledge; 1994.
19. Hatcher L. A Step-by-Step Approach to Using the SAS System for Factor Analysis and Structural Equation Modeling. Cary, NC: SAS Institute Inc; 1994.
© 2011 Lippincott Williams & Wilkins, Inc.