Communication is one of the most frequent factors contributing to poor patient outcomes.1 Communication issues can lead to missed patient care,2 increased length of stay,3 failed handoffs,4,5 falls,6,7 and medication errors.8 Often interruptions lead to miscommunication or failure to communicate.9 Implementation of the electronic medical record has altered workflow, changing how nurses communicate clinical events, both in the patient record and verbally.10,11 Further contributing to ineffective patient care unit (PCU) communication are the multiple schedules of physicians and nurses, resulting in different teams throughout a patient's stay.12
Until recently, most nursing communication research focused on dyads (nurse-nurse or physician-nurse) or team-to-team communication (handoffs or shift report).13 However, with the advent of sophisticated, but usable, social network analysis (SNA) tools, SNA has become an important technique for examining group communication.14-17 SNA allows researchers to use the links (connections) between individuals in groups of various sizes to describe network characteristics such as the speed at which information transfers across the group and the density of the communication, as well as to identify key individuals (eg, gatekeepers).18 Systematic reviews reveal SNA's increasing use in healthcare.19 In the acute care setting, SNA has been used to examine communication in emergency departments (EDs),20 neonatal intensive care units,21 and operating rooms,22 as well as to describe medication advice seeking in a renal unit.23 SNA has been used to explore the impact of information technology on healthcare organizations and teams24,25 and to study how mutual understanding develops in multidisciplinary primary healthcare teams.26 Cohen and Hilligoss27 cited researchers4 who used SNA to examine handoffs, including examining interdisciplinary handoff communication when patients transferred from an ED to acute care. Other researchers have used SNA to identify patients' actual care teams from electronic health record entries.28 Most SNA studies have been descriptive, rather than prescriptive, with few longitudinal studies or replications using multiple sites,16 and only a few SNA studies have examined the impact of nursing communication networks on patient outcomes.17
The purpose of this article is to compare, in a sample of 24 acute care medical and surgical PCUs, the impact of nursing staff information-sharing and advice networks on nurse-sensitive patient safety outcomes during four 24-hour periods over a 7-month time frame. Effken et al13,29 explored the impact of nursing staff communication patterns on patient safety outcomes using ORA,30 an SNA tool. However, that study explored information-sharing networks of only 7 PCUs in 3 hospitals, making generalization difficult. PCU staff not only share patient-related information, but also seek and give advice. The extent to which nursing staff's advice giving and receiving differ from information sharing is unknown. Therefore, the goal of this study was to evaluate differences between the 2 types of communication networks, as well as their relationships with nurse-sensitive patient safety outcomes.
Sample and Setting
The convenience sample included 24 medical-surgical PCUs from 2 not-for-profit Arizona community hospitals and 1 for-profit urban Texas hospital, selected because of their size (which provided multiple PCUs for study) and their willingness to participate. The PCUs varied in size from 12 to 51 beds (mean, 26.6). One hospital had achieved Magnet® designation. All licensed and unlicensed nursing staff who were working on the days of data collection in participating PCUs were invited to participate. Staff recruitment utilized flyers and presentations by research team members during staff meetings. We sought a 90% response rate among the staff who were working during the 24-hour period of data collection to model PCU communication networks as accurately as possible. To encourage responses, potential participants received either a snack such as a bagel or cupcake or a coupon to obtain coffee or other sweets, valued at approximately $4.00.
For comparability, we used the same outcome measures as in the study of Effken et al.13 Fall rate was defined as the number of falls, for each month in which data were collected, per 1000 patient-days. Medication errors were defined as total medication errors, for each month in which data were collected, per 1000 patient-days. Hospital-acquired pressure ulcer (HAPU) prevalence was defined as the number of HAPUs divided by the number of patients hospitalized on the unit on the day data were collected. These safety event definitions were provided to quality management departments to ensure consistency of data across hospitals.
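As a minimal illustration, the outcome definitions above reduce to simple rate arithmetic. The function names and example figures below are ours, not taken from the study:

```python
def rate_per_1000_patient_days(events, patient_days):
    """Events (falls or medication errors) per 1,000 patient-days for the month."""
    return events / patient_days * 1000

def hapu_prevalence(hapu_count, patients_on_unit):
    """HAPUs per patient hospitalized on the unit on the data collection day."""
    return hapu_count / patients_on_unit

# Hypothetical example: 3 falls in a month with 600 patient-days
fall_rate = rate_per_1000_patient_days(3, 600)   # 5.0 falls per 1,000 patient-days
```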
Creating the Networks via Communication Questionnaire Data
Data to create the 2 networks were generated from a questionnaire given to PCU staff. The questionnaire presented the staff roster for the 24-hour period in which data were being collected. Staff members were asked, for each staff member listed on the roster, “How often did you discuss patient care with each of these individuals while working on your unit during the current shift and the next shift (for day staff) or the prior shift (for night staff)?” Their answers to this question (via a response scale from 1 to 5 [never to constantly]) were used to generate the PCU's information-sharing network. Staff were also asked to rate the trustworthiness of the information gained from the people with whom they had discussed patient care, using a 1 to 5 response scale ranging from never to always. The frequency of discussion measure was adapted from Effken et al13 as a proxy for accessibility, and the trustworthiness (confidence) measure as a proxy for knowing and valuing what the other knows.31 To create the advice network, the original staff roster was presented again, and staff were asked questions adapted from Creswick and Westbrook22 as to how frequently during their just-completed shifts they went to each staff member listed for patient care–related advice or were sought out for patient care–related advice by that staff member, using the same response scale as for the discussion question. Finally, staff answered questions about their own expertise using a rating scale adapted from Benner32 and used previously by one of the researchers.33 Staff were then queried about their years of experience on the PCU, the shift they had worked that day, and whether that day's shift had been normal, better than normal, or worse than normal.
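One way to picture the network construction: each respondent's frequency ratings become weighted, directed ties. The sketch below uses invented names and ratings to show the idea under one plausible convention (a rating of 1, “never,” yields no tie); the study's actual coding rules may differ:

```python
# Hypothetical roster responses: (respondent, colleague) -> frequency rating 1-5,
# where 1 = "never" and 5 = "constantly". All names and values are invented.
responses = {
    ("RN_A", "RN_B"): 4,
    ("RN_A", "PCT_C"): 2,
    ("RN_B", "RN_A"): 5,
    ("PCT_C", "RN_B"): 1,   # "never" -> no tie in the network
}

# Weighted, directed edge list for the information-sharing network
edges = {pair: rating for pair, rating in responses.items() if rating > 1}
nodes = {name for pair in responses for name in pair}
```

The same roster and response scale, applied to the advice-seeking and advice-giving questions, would yield the second (advice) network over the same set of nodes.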
Commonly used social network metrics were used to measure network characteristics such as size (node count), communication efficiency (density, weighted density, diffusion, average distance), centrality (total degree centrality, betweenness centrality, eigenvector centrality), and clustering into small groups (clustering coefficient) (Table 1). Except for node count, which is an actual count, all metrics use a 0-to-1 scale (equivalent to a percentage). These network metrics were selected because they were stable (within 1 SD) across the 4 data collection times.
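For reference, the simplest of these metrics can be computed directly from an edge list. The sketch below uses common textbook definitions for a directed network; ORA's exact formulas may differ in detail:

```python
def density(nodes, edges):
    # Fraction of the n*(n-1) possible directed ties that are present
    n = len(nodes)
    return len(edges) / (n * (n - 1))

def total_degree_centrality(node, edges, n):
    # (in-degree + out-degree) normalized by the maximum possible, 2*(n-1)
    degree = sum(1 for u, v in edges if node in (u, v))
    return degree / (2 * (n - 1))

# Toy 3-node network: A and B communicate both ways; B also reports talking to C
nodes = {"A", "B", "C"}
edges = {("A", "B"), ("B", "A"), ("B", "C")}
```

On this toy network, density is 3 of 6 possible ties (0.5), and B, as the most connected member, has the highest total degree centrality (0.75).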
Human Subjects Protection
Institutional review board approval was obtained from The University of Arizona, Texas Woman's University, The University of Texas at Austin, and the 3 data collection sites. Contact information needed for distributing staff invitations to participate was obtained from site coordinators at each hospital. Site coordinators also coordinated data collection and served as the primary contacts for the research team. Although the SNA survey rosters required the names of staff, the data collection software replaced each staff person's name with an anonymous identification number for data transfer and storage. The link of subject anonymous identification number to name was available only to principal investigators and the database manager and stored on password-protected servers. Hospital safety outcome data were deidentified and saved on secure, password-protected servers.
PCU Staff Data Collection
Demographic and network-related data needed to define PCU information-sharing and advice networks were collected from PCU staff working on preselected days (during a 24-hour period) using an adapted communication survey.13 All licensed and unlicensed PCU staff working on the days of data collection were invited to participate. Data were collected at the end of staff shifts using a Web-based questionnaire presented on Android tablets with wireless Internet access.34 The SNA survey required providing staff rosters so that participants could quickly identify those with whom they had interacted. Rosters of those who were scheduled to work on the day and night of data collection were obtained from nurse managers and uploaded to the secure project Web site. Data collectors updated the list to reflect staffing changes prior to the beginning of the shift in which data would be collected. The updated roster was then downloaded by data collectors to each Android tablet assigned to that unit. PCU staff were able to view only their own unit roster. A disclosure form was presented as the 1st page of the questionnaire. Continuing to complete the survey indicated staff members' consent to participate. Multiple tablets were available, so staff did not have to wait for a tablet to begin the survey. Staff viewed the roster of the nursing staff assigned to work on their unit during their current or adjacent shift (for day shift, that was the upcoming night shift; for night shift, that was the previous day shift). This ensured that data collection would include a handoff. Completing the survey took 10 to 15 minutes. Data collectors then uploaded staff responses from each Android device to the secure server for storage.
Medication errors, fall rate, and HAPU prevalence were provided by hospital quality management departments to correspond with the months of PCU staff data collection.
Networks for each PCU were created in ORA,30 and their characteristics were described via the 9 network metrics shown to be stable over the 4 data collection periods. Like other SNA software, ORA uses both attribute (eg, RN or patient care technician [PCT]) and relational (who is connected to whom) data to compute numeric metrics (the 9 used in this study, for example) and generate network visualizations using a graphical user interface.18 IBM SPSS Statistics version 24 (IBM Corp, Armonk, New York) was used to analyze the data generated by ORA and the nonnetwork survey responses. Because of the small sample size (N = 24), statistical significance was set at P ≤ .10.
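The core of the subsequent analysis is a per-unit correlation of each network metric with each safety outcome. A self-contained sketch of the Pearson r computation follows; it is only illustrative (the study used SPSS), and the unit values are invented:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Invented example: density values and fall rates for 4 hypothetical PCUs;
# a negative r would mean denser communication co-occurs with fewer falls.
r = pearson_r([0.20, 0.35, 0.50, 0.65], [6.1, 5.0, 3.9, 2.8])
```

Judging significance at P ≤ .10 additionally requires a t-distribution tail probability for r with N − 2 degrees of freedom, which is omitted here for brevity.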
Response Rates and Staff Demographics
Response rates differed by PCU (mean, 84.7% [SD, 13.1%]), with only 5 PCUs from 1 hospital achieving the target rate of 90% at all 4 data collection periods (Table 2). Response rates also varied by data collection period and, on average, were lowest during M4. Of the 1578 respondents, 50% worked 7 AM to 7 PM, 42.5% worked 7 PM to 7 AM, and the remainder worked other shifts. Most respondents were RNs (67%) or PCTs (26%). Most staff had worked 1 to 3 years on the PCU and rated today's shift as “normal” and their own expertise as “proficient” (Table 3).
Comparing the Networks
Table 4 compares the statistically significant information-sharing and advice ORA network metrics related to safety outcomes at each data collection time. None of the 9 ORA metrics in either network exhibited statistically significant correlations at all 4 times data were collected, but weighted density, total degree centrality, and clustering coefficient were correlated with medication errors at baseline, M1, and M7 in the advice network. In the information-sharing network, most correlations with medication errors and falls occurred at baseline and M7. One correlation was statistically significant at M4 in the advice network; however, correlations were in the same direction as other months. No consistent explanation across all units explained why M4 differed from the other 3 data collection results.
In both networks, node count and average distance were positively associated with more medication errors, whereas density, weighted density, diffusion, total degree centrality, eigenvector centrality, and clustering coefficient were associated with fewer medication errors. A similar pattern was observed for fall rate, except for diffusion (which did not correlate in either network) and betweenness centrality (which related positively in the information-sharing network and did not correlate in the advice network).
Few metrics correlated with HAPU prevalence. Node count correlated positively, and density, weighted density, total degree centrality, and clustering coefficient correlated negatively with HAPU prevalence in the information-sharing network—but each of these correlations occurred at only 1 time period (all but one at baseline). For the advice network, HAPU prevalence exhibited 3 significant correlations, each at a different data collection period. Node count exhibited a positive correlation, whereas diffusion and clustering coefficient exhibited negative correlations.
In both networks, when there were correlations with outcomes in more than 1 data collection period, the direction of the relationship was always the same, and the magnitude of the relationship was usually similar. The strongest correlations in both networks were of node count (r = 0.62, P < .001) with falls. Node count also exhibited a positive relationship with medication errors (r = 0.52, P < .001) in both networks. Total degree centrality and eigenvector centrality were inversely related to medication errors (r = −0.37 to −0.56, P < .05) and falls (r = −0.37 to −0.53, P < .05) in both networks, whereas betweenness centrality was positively related to falls (r = 0.40, P < .10) only in the information-sharing network.
Our data collection system was developed by the research team for this project and consisted of a Web site and an Android application. Despite a learning curve for the research team, the system proved to be highly efficient and prevented errors due to manual copying from questionnaires to spreadsheets. The method also allowed for direct download to ORA for analysis, which substantially increased our efficiency.
There were 29 statistically significant correlations of information-sharing network metrics with safety outcomes: 12 each with medication errors and fall rate and 5 with HAPU prevalence. None occurred at more than 2 data collection times (nearly all at baseline and/or month 7). For medication errors, 7 of 8 metrics were statistically significant at baseline. For falls, 7 of 8 metrics were statistically significant at month 7 and 4 of 8 at baseline. The strongest correlations (r > 0.5) with medication errors occurred with node count, weighted density, and total degree centrality. The strongest correlations with fall rate were the same, but with the addition of eigenvector centrality.
There were 25 statistically significant correlations between advice network metrics and safety outcomes. Most (17) were with medication errors, and only 7 were with fall rate. Three metrics associated with medication errors (weighted density, total degree centrality, and clustering coefficient) were statistically significant 75% of the time. All were negatively correlated with medication errors.
Most statistically significant correlations occurred at either baseline (19) or month 7 (23). Only 12 statistically significant correlations occurred at month 1 (6 in each network) and 1 at month 4 (advice network). HAPUs were less prevalent than medication errors or falls, which is likely why there were only 5 significant HAPU correlations with the information-sharing network and 3 with the advice network.
Although the pattern of relationships to safety outcomes was similar in the 2 networks, there were more statistically significant correlations with medication errors in the advice network. As noted earlier, for the information-sharing network, higher node count and average distance were associated with more medication errors, whereas higher density, diffusion, centrality, and clustering coefficient metrics were associated with fewer medication errors. In the current study, the advice networks generated similar results over more than 1 data collection time for weighted density, total degree centrality, and clustering coefficient, suggesting that these may be stronger, more stable relationships. This differs from the results of the 2011 study,13 in which only higher betweenness centrality was positively associated with more adverse drug events (ADEs). This difference may be due in part to the fact that in the 2011 study13 ADEs were averaged over the 3-month period in which the staff survey was conducted, rather than during the same month. Neither study directly linked errors occurring on the same day as data collection, which would be ideal, but is not realistic because of the few, if any, errors during a single day and the expense of daily data collection.
In contrast to the results for medication errors, there were more significant network metric correlations with fall rate in the information-sharing network. Density, weighted density, diffusion, and several centrality metrics indicating more frequent communication among staff were negatively correlated with fall rates. Node count is associated with unit characteristics, such as the number of beds and patients; a larger unit will have more staff. This finding suggests that more staff, larger units (more beds), and more patients are associated with more falls. In contrast, smaller units (based on node count) showed more frequent staff interaction (density, weighted density), particularly coordinated through knowledgeable staff (total degree centrality, eigenvector centrality). In the study of Effken et al,13 falls were positively correlated with diffusion and negatively correlated with hierarchy (not used in this study because of its instability). Sample size may be partially responsible for this disparity, but node count, weighted density, and average distance were not reported in the 2011 study.
No correlation of a network metric with a safety outcome was statistically significant at all 4 data collection periods in either network. However, the direction of relationship (positive or negative) was consistent for each metric across networks. For example, with higher node counts (ie, more staff and therefore larger PCUs), there were more medication errors, falls, and HAPUs. By contrast, when density was higher, there were fewer medication errors, falls, and HAPUs. In the previous study,13 there were differences in the direction of some of the metrics when related to falls and medication errors, suggesting that a nursing intervention to fix one might not fix the other. In part, this difference may be due to the use of only stable metrics in this longitudinal study.
We acknowledge several limitations of the current study.
- Linking patient outcomes to PCU communication network characteristics is difficult because the frequency of safety outcomes varies (essentially rendering them unstable over time), making it problematic to link outcomes to the specific day of data collection. This may be why few studies have attempted to link network characteristics to patient outcomes. Benton et al16 and Chambers et al19 agree that most SNA studies in healthcare to date have been purely descriptive; future researchers must seek evidence that changing specific communication patterns can influence patient outcomes. New methods and large sample sizes are indicated.
- The current study included PCUs from only 3 hospitals. This limited the variance in PCUs.
- Having data about PCU culture would likely have been of help in understanding PCU contexts, but we erred on the side of limiting participant fatigue.
- We focused only on PCU staff, omitting other professionals, because this best represents the core nursing team. In the future, researchers should collect data from all professions who interact on the PCU.
- PCU response rates did not achieve our 90% target. Consequently, there were gaps in networks that may have limited our ability to detect some network metric relationships with safety outcomes.
- The survey questions providing data to generate the 2 networks were asked in succession—with only the question inquiring about the relative trustworthiness of the information received separating them. This may have led respondents to provide similar answers to the questions. In the future, the questions should be counterbalanced.
- We used self-report SNA data to generate the communication networks. This is the most frequently used method to collect SNA data but may not be the most accurate. More direct observation methods (eg, the dual perspectives method35) could validate self-reports, but such methods are labor intensive and may be perceived as intrusive.
Implications for Management
SNA offers managers a way to identify potentially actionable PCU-level communication issues that can affect patient safety outcomes (specifically, medication errors and patient falls). The fact that a set of network metrics has been shown to be stable over a 7-month period despite variation in individual staff36 suggests that SNA data for PCUs can be used by managers over at least a 7-month period to assess communication patterns and implement changes as long as no other major organizational changes intervene (even stable metrics can be influenced by massive change). The common direction of significance for metrics associated with the 3 safety outcomes in the current study suggests that the same systemic interventions in PCU communication networks could reduce medication errors, falls, and HAPUs. Being able to collect data by handheld devices and upload them to a Web site where they were automatically converted to the format required by ORA for SNA analysis and reports may make collecting and using SNA data more feasible for management.
In this longitudinal SNA study of 24 acute care PCUs in the Southwest United States, we compared 2 communication networks, an information-sharing network and an advice network, collecting data to create the networks via questionnaire at the end of staff shifts. Certain communication network characteristics, denoted by those network metrics shown to be stable over a 7-month period, were related to the frequency of medication errors and patient falls. Larger PCUs (high node count) were positively associated with more medication errors, whereas more patient-related communication (density, weighted density, and total degree centrality) was associated with fewer medication errors. More frequent patient-related communication was associated with lower fall rates, whereas PCU size (higher node count) was associated with higher fall rates. Of the 29 statistically significant correlations of information-sharing network metrics with safety outcomes, there were 12 each with medication errors and fall rate. This contrasts with the advice network, in which there were more correlations with medication errors (17) than with falls (7). These results suggest that the kind of communication researchers and managers should target differs by safety outcome. There were few statistically significant relationships to HAPUs, likely because of their low prevalence. Further research is needed to validate these results in larger, more diverse samples, perhaps incorporating direct observation methods as well.
2. Kalisch BJ, Landstrom G, Williams RA. Missed nursing care: errors of omission. Nurs Outlook
3. Pronovost P, Berenholtz S, Dorman T, Lipsett PA, Simmonds T, Haraden C. Improving communication in the ICU using daily goals. J Crit Care
4. Benham-Hutchins MM, Effken JA. The influence of health information technology on multiprofessional communication during a patient handoff. Int J Med Inform
5. Chaboyer W, McMurray A, Wallis M. Bedside nursing handover: a case study. Int J Nurs Pract
6. Dykes PD, Carroll D, Hurley AC. Why do patients in acute care hospitals fall? Can falls be prevented? J Nurs Admin
7. Rush KI, Robey-Williams C, Patton M, Chamberlain D, Bendyk H, Sparks T. Patient falls: acute care nurses' experiences. J Clin Nurs
8. Manojlovich M, DeCicco B. Healthy work environments, nurse-physician communication, and patients' outcomes. Am J Crit Care
9. McCurdie T, Sanderson P, Aitken LM. Applying social network analysis to the examination of interruptions in healthcare. Appl Ergon
10. Carrington JM, Effken JA. Strengths and limitations of the electronic health record for documenting clinical events. Comput Inform Nurs
11. Dudding KM, Gephart SM, Carrington JM. Neonatal nurses experience unintended consequences and risks to patient safety with electronic health records. Comput Inform Nurs
12. Stahl K, Palileo A, Schulman CI, et al. Enhancing patient safety in the trauma/surgical intensive care unit. J Trauma
13. Effken JA, Carley KM, Gephart S, et al. Using ORA to explore the relationship of nursing unit communication to patient safety and quality outcomes. Int J Med Inform
14. O'Malley J, Marsden PV. The analysis of social networks. Health Serv Outcomes Res Methodol
15. Rivera MT, Soderstrom SB, Uzzi B. Dynamics of dyads in social networks: assortative, relational, and proximity mechanisms. Annu Rev Sociol
16. Benton DC, Perez-Raya F, Fernandez-Fernandez MP, Gonzalez-Jurado MA. A systematic review of nurse-related social network analysis studies. Int Nurs Rev
17. Bae SH, Nikolaev A, Seo JY, Castner J. Health care provider social network analysis: a systematic review. Nurs Outlook
18. Scott J. Social Network Analysis. 4th ed. Los Angeles, CA: Sage; 2014.
19. Chambers D, Wilson P, Thompson C, Harden M. Social network analysis in healthcare settings: a systematic scoping review. PLoS One
20. Gray JE, Davis DA, Pursley DM, Smallcomb JE, Geva A, Chawla NV. Network analysis of team structure in the neonatal intensive care unit. Pediatrics
21. Anderson C, Talsma A. Characterizing the structure of operating room staffing using social network analysis. Nurs Res
22. Creswick N, Westbrook JI. Social network analysis of medication advice-seeking interactions among staff in an Australian hospital. Int J Med Inform
23. Aydin CE, Rice RE. Bringing social worlds together: computers as catalysts for new interactions in health-care organizations. J Health Soc Behav
24. Anderson JG, Jay SJ. Physician utilization of computers: a network analysis of the diffusion process. In: Fredericksen L, Riley A, eds. Computers, People and Productivity. New York: Haworth Press; 1985.
25. Anderson JG, Aydin CE. Evaluating the impact of health care information systems. Int J Technol Assess Health Care
26. Quinlan E, Robertson S. Mutual understanding in multi-disciplinary primary health care teams. J Interprof Care
28. Zhu X, Yao N, Mishra V, Phillips AE, Dow A, Tu SP. Identifying patient-centered care teams using electronic health records access data and social network analysis. J Hosp Med. 2016;11(supplement 1):S257–S258.
29. Effken JA, Gephart S, Carley KM. Using ORA to assess the relationship of handoffs to quality and safety outcomes. Comput Inform Nurs
30. Carley KM. ORA: a toolkit for dynamic network analysis and visualization. In: Alhajj R, Rokne J, eds. Encyclopedia of Social Network Analysis and Mining. New York: Springer; 2017:1–10.
31. Borgatti SP, Cross R. A relational view of information seeking and learning in social networks. Manag Sci
32. Benner PE. From Novice to Expert: Excellence and Power in Clinical Nursing Practice. Menlo Park, CA: Addison-Wesley; 1984.
33. Brewer BB, Wojner-Alexandrov AW, Triola N, et al. AACN Synergy model's characteristics of patients: psychometric analyses in a tertiary care health system. Am J Crit Care. 2007;16(2):158–167.
34. Benham-Hutchins M, Brewer BB, Carley K, Kowalchuk M, Effken JA. Design and implementation of a data collection system for social network analysis. Online J Nurs Inform
35. McCurdie T, Sanderson P, Aitken LM, Liu D. Two sides to every story: the dual perspectives method for examining interruptions in healthcare. Appl Ergon
36. Brewer BB, Carley KM, Benham-Hutchins M, Effken JA, Reminga J. Exploring the stability of communication network metrics in a dynamic nursing context. Soc Netw
Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.