The Glasgow Coma Scale (GCS) is a neurological assessment instrument that measures the “depth and duration of impaired consciousness” (Waterhouse, 2008, p. 492). Before the advent of the GCS, no standardized scale was used to assess level of consciousness (Ingram, 1994). A variety of terms were used to describe states of consciousness (Edwards, 2001), resulting in vague descriptions of the patient’s condition. The appeal of the GCS lies in its applicability across a wide variety of clinical situations and its ease of use by a range of healthcare staff (Baker, 2008; Ellis & Cavanagh, 1992). However, the GCS is not without weaknesses and limitations. Its very ease of use opens it to misinterpretation and misapplication (Addison & Crawford, 1999). A growing body of evidence suggests that problems are encountered when completing some aspects of the GCS and that there is potential for an “incorrect assessment” (Waterhouse, 2008, p. 492) in some clinical situations. During time spent in neuroscience wards, the authors observed that nurses’ practices in administering the GCS were at times inconsistent and contradictory, occasionally leading to an inaccurate assessment of the patient.
This observation piqued our interest and provided the impetus for conducting this primary study. Despite the propensity for incorrect assessment, the GCS remains in use in the clinical setting and enjoys an “unwarranted and privileged position” (Segatore & Way, 1992, p. 548). This poses a problem for patient care, as the GCS is an important instrument for communicating an accurate assessment of the patient’s condition between clinical staff (Holdgate, Ching, & Angonese, 2006), and inappropriate use of the scale may have “serious clinical implications” (Shoqirat, 2006, p. 46).
Hansen, Norris, and Sceriha (1992) conducted a case control study examining the effects of a lecture demonstration on nurses’ performance of the GCS. The study’s participants were first-year registered nurses (RNs) working on neurosurgery or neurology/medical wards in three major hospitals. Participants were divided into two groups, with one group attending a lecture demonstration by a nurse educator; the other group formed the control group and received no lecture. Before the lecture, both groups were assessed on their performance of the GCS, and all nurses were reassessed a week after the lecture. The difference between the educated and control groups was significant at the second assessment. In particular, the number of participants using correct methods to assess verbal and motor responses increased significantly in the educated group, whereas the control group showed no improvement. The mean discrepancy in GCS scores for the educated group also decreased significantly at the second assessment. The study minimized confounding variables by randomly allocating participants to groups and blinding the rater assessing them, and it controlled for the effect of experience by enlisting only first-year RNs working in similar neuroscience settings. However, it was not clear whether a group of raters or a single rater was used for the assessments at each time point, nor was it reported that the raters used a standardized set of criteria to assess the participants.
O’Farrell and Zou (2008) conducted a descriptive survey that investigated nurses’ perceptions of best practice guidelines and the Canadian Neurological Scale (CNS) assessment, evaluated the effect of a workshop and implementation process on nurses’ self-efficacy in using the CNS, determined whether the workshop met nurses’ needs, and evaluated the accuracy and appropriateness of CNS documentation. The CNS is an instrument analogous to the GCS. In this study, 66 RNs working in an acute care neuroscience unit were invited to participate. Surveys were administered three times: before, immediately after, and 3 months after the workshop. All 66 workshop participants completed the surveys before and immediately after the workshop; however, only 24 of the 66 completed the surveys at all three time points. Before the workshop, overall confidence was moderate. Immediately after the workshop, there was a significant increase in confidence in the overall performance of the CNS. At 3 months, there was a slight but nonsignificant decrease in confidence. The study did not explain this decrease; however, considering that the nurses used the CNS less frequently during those 3 months than immediately after the workshop (because of difficulties in using the instrument), knowledge attrition may have occurred, leading nurses to lose some confidence in using the CNS.
Heron’s (2001) study of the interrater reliability of the GCS when performed by critical care nurses from contrasting subspecialties contradicts the studies presented thus far. A convenience sample of 75 RNs from different critical care subspecialties participated. Participants conducted GCS assessments of patients recorded on videotape, and their ratings were compared against an expert criterion. The results showed that nurses with undergraduate degrees and basic diplomas were more accurate in their GCS assessments than nurses with critical care qualifications. The study does not explain this unexpected result, although it implies a link to the curricula of the respective programs and the emphasis each placed on neurological assessment. One also cannot rule out the effect of overconfidence, which may have caused the critical-care-trained nurses to be less cautious in conducting their assessments. The study’s findings should be viewed with some caution, given its use of convenience sampling, a small sample (n = 75), and a single-hospital setting.
Despite the small number of studies, it has been shown that knowledge plays an important role in the assessment of conscious level. In particular, formal training in the GCS improves assessment skills (Hansen et al., 1992) and confidence in the use of instruments assessing conscious level (O’Farrell & Zou, 2008).
Aims and Objectives
The primary aim of this study was to investigate nurses’ knowledge in using the GCS and the factors influencing knowledge of the GCS. To address the aims of this study, two research questions were constructed: (1) What is the knowledge level of nurses in using the GCS? (2) Which demographic factors affect nurses’ knowledge of the GCS?
This was a correlational study of RNs recruited from one acute care hospital in Singapore.
Participants were recruited from January to March 2010. The power of this study was estimated based on the correlation between nurses’ attitudes, knowledge, and self-confidence reported by Steginga et al. (2005). A medium positive correlation (∼0.25) was expected, and a sample of 103 was required to achieve 80% power at a 5% level of significance (MedCalc, 2009).
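The sample-size figure can be cross-checked against the standard Fisher z approximation for detecting a Pearson correlation; the sketch below uses that generic formula, not necessarily the method MedCalc implements, so its output need not match the 103 reported:

```python
import math
from statistics import NormalDist

def required_n(r, alpha=0.05, power=0.80):
    """Approximate sample size needed to detect a Pearson correlation r
    with a two-sided test, via the Fisher z transformation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    fisher_z = 0.5 * math.log((1 + r) / (1 - r))   # Fisher z of r
    return math.ceil(((z_alpha + z_beta) / fisher_z) ** 2 + 3)

print(required_n(0.25))  # → 124
```

Under this approximation, n = 103 corresponds to an expected correlation somewhat above 0.25 (roughly 0.27), suggesting MedCalc used a slightly different expected value or computation.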
The inclusion criterion was as follows:
* RNs with at least a nursing certificate.
The exclusion criteria were as follows:
* Clinical educators, nurse clinicians, or nurse managers
* RNs without prior training in the use of the GCS
* Student nurses
Convenience sampling was used, with the required sample drawn from three clinical areas: neuroscience wards, general medicine wards, and the neurointensive care unit. Of 272 possible participants, 114 were recruited for this study.
Data Collection Procedure
The researcher approached the wards during the period of least activity (12:30–4:00 P.M.) to avoid disruption. Sets of questionnaires (in English) were given to the ward managers for distribution to the participants, who were advised not to share or discuss answers with their peers. Upon completion, the questionnaires were returned to a box in the ward managers’ office; the consent forms were kept in a separate box to ensure confidentiality. The questionnaires were collected by the researcher approximately 1–2 days after they were handed to the ward managers.
The questionnaire (Table 1) had two parts. The first section collected demographic information (gender, age, staff grade, level of education, possession of an advanced diploma or post basic education, length of time in nursing, length of time in the current discipline, length of time in a neuroscience setting, and formal training on the GCS preregistration and postregistration). The second section assessed nurses’ knowledge in using the GCS. Ten of the fifteen questions were adapted from questionnaires developed by Shoqirat (2006, pp. 44–45) and Waterhouse (2008, pp. 494–495), which were designed to assess participants’ knowledge and understanding of the GCS. Five (Questions 6, 8, 13, 14, and 15) were added based on a critical review of the literature. Questions 1, 9, 10, and 11 were adapted from their original form in Shoqirat’s (2006) study into a multiple-choice format. The total score, ranging from 0 to 15, was used for data analysis; a higher score indicates better knowledge of the GCS, and a lower score indicates poorer knowledge.
Validity and Reliability of Instruments
The items in the knowledge component were examined for relevance by three experts, each with at least 10 years of working experience in a neuroscience setting. Initial results indicated that the content validity index (CVI) of the instrument was low (73.3%). To achieve a CVI of at least 80%, the instrument was amended based on the experts’ suggestions. The experts then reviewed the amended version, and the results indicated that the CVI of the instrument was 100%.
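The CVI arithmetic can be illustrated as follows. The article does not publish the expert ratings or state the exact formula, so the ratings below are hypothetical and the computation assumes the common averaging approach (the reported 73.3% is consistent with 11 of 15 items initially meeting the relevance criterion):

```python
# Hypothetical relevance ratings (1 = relevant, 0 = not relevant) from
# three experts for a 15-item instrument; the study's actual ratings
# are not reported.
ratings = [
    [1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1],  # expert 1
    [1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1],  # expert 2
    [1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1],  # expert 3
]

# Item-level CVI: proportion of experts rating each item relevant.
i_cvi = [sum(col) / len(ratings) for col in zip(*ratings)]

# Scale-level CVI (averaging method): mean of the item-level CVIs.
s_cvi = sum(i_cvi) / len(i_cvi)
print(round(s_cvi, 3))  # → 0.889
```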
Test–retest reliability was used to test the stability and consistency of the instrument. Seventeen RNs were recruited to perform the test twice within 1 week. The correlation coefficient obtained for the knowledge component was 0.71, indicating that the reliability of the instrument was satisfactory.
Descriptive statistics such as means and standard deviations were used to analyze background variables. Analysis of variance and independent t tests were used to examine differences in knowledge scores across demographic factors. Factors with p < .05 in the univariate analysis were entered into a stepwise multiple regression analysis. Variance inflation factors were used to examine collinearity between the independent variables in the regression model, and the Shapiro–Wilk test was used to examine the normality of the residuals. All p values less than .05 were taken as statistically significant. Statistical analysis was conducted using SPSS 16.0 (SPSS, Chicago, IL).
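The univariate screen for a categorical factor can be sketched with a hand-computed one-way ANOVA F statistic. The scores below are invented, not the study data, and in practice SPSS reports this alongside the p value:

```python
from statistics import fmean

def one_way_f(groups):
    """F statistic for a one-way ANOVA across k groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = fmean([x for g in groups for x in g])
    # Between-group sum of squares (variation of group means around grand mean).
    ss_between = sum(len(g) * (fmean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares (variation of scores around their group mean).
    ss_within = sum((x - fmean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented knowledge scores by clinical discipline (not the study data).
neuro_icu = [13, 12, 14, 12]
neuro_ward = [11, 12, 10, 11]
gen_med = [9, 10, 9, 10]
print(round(one_way_f([neuro_icu, neuro_ward, gen_med]), 2))  # → 16.57
```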
Ethical approval was granted by an independent review board. A participant information sheet with an explanation of the intent of the study and a consent form was provided along with the study. Each participant gave written consent before participation, and the data collected from all subjects were kept strictly confidential.
We received 114 completed questionnaires (Table 2). Of the respondents, 37.7% (n = 43) worked in neuroscience wards, 34.2% (n = 39) worked in the neurointensive care unit, and 28.1% (n = 32) worked in general medicine wards. Furthermore, 90.4% (n = 103) were women, and 18.4% (n = 21) were senior staff nurses. The largest age group comprised nurses aged 21–25 years (n = 41, 36%), followed by those aged 26–30 years (n = 29, 25.4%). The mean score for the knowledge component was 10.8.
Statistical significance was found in clinical discipline (p < .001) and length of time in neuroscience setting (p = .004; Table 3).
A stepwise multiple regression analysis was used to identify a model predicting nurses’ knowledge scores on the GCS (Table 4). The results showed that clinical discipline (β = 0.51, p < .001) and length of experience in a neuroscience setting (β = 0.22, p = .005) were significant predictors of nurses’ knowledge of the GCS.
From the data, one can infer that clinical discipline and the duration of time working in a neuroscience setting influence nurses’ knowledge of the GCS. Nurses working in the neurointensive care unit had the highest mean score (12.7) on the knowledge scale, whereas nurses from general medicine wards had the lowest (9.7). Nurses who had worked in a neuroscience setting for 6 years or more scored higher (mean = 11.9) than nurses who had worked in a neuroscience setting for less than a year (mean = 10.0).
Neurointensive care unit nurses are required to perform observations and GCS assessments more frequently than nurses in general wards. By using the instrument regularly, they gain insight into its application in different types of patients, which may not be attained by nurses in general wards (Del Bueno, 1983, cited in Gocan & Fisher, 2005).
Similarly, nurses who have worked in a neuroscience setting for 6 years or more have had greater experience in caring for neurological patients. The time spent in a neuroscience ward and the exposure to a wider variety of neurological patients requiring GCS assessment facilitate their learning of the GCS (Gladwell, 2008). This is in accordance with Shoqirat’s (2006) study, which found that student nurses working in neuroscience wards had a better understanding of the GCS than peers who did not undertake such attachments.
Knowledge enhances performance of the GCS (Waterhouse, 2008), giving meaning to the observations made during assessment. Hansen et al. (1992, p. 4) showed that “an education session had significant effect on the overall GCS,” significantly improving the performance of the educated group. Currently, GCS training forms only a small part of the induction of a new staff nurse, and staff nurses are often unaware of protocols that guide the performance of a GCS assessment. The authors recommend that GCS assessment form a larger component of the induction program, taught as a lecture demonstration by expert neuroscience nurses (Hansen et al., 1992), with refresher training conducted regularly as part of continuing education. It is also recommended that guidelines for performing GCS assessments be readily available to nurses on the ward. Implementing such guidelines represents a significant move in addressing the lack of knowledge in nursing (Wellington, 2005).
A follow-up study conducted in a variety of hospitals, with a larger sample size, would enhance generalizability, “verify the study’s validity” (Shoqirat, 2006, p. 47), and identify other variables that may further elucidate the results of the study (Heron, 2001).
The main limitation of this study was its use of convenience sampling. Because the study was conducted in only one hospital, caution must be taken in generalizing the results to other hospitals in Singapore. A follow-up study conducted in other Singaporean hospitals would enhance the generalizability of the findings.
This study used self-report questionnaires, which open it to response bias (Holbrook & Krosnick, 2010). Participants may underreport or give favorable responses to conform to societal values and avoid criticism (Mortel, 2008). Hence, the responses given may not be a true reflection of the participants’ knowledge.
The study has shown that there is a considerable disparity in knowledge of the GCS among nurses of different demographics. The GCS must be performed consistently and accurately to monitor a patient’s level of consciousness effectively, thereby ensuring the patient’s safety. It is suggested that educational interventions and guidelines for performing GCS assessment be made available to maintain clinical skills in performing the GCS (Gocan & Fisher, 2005).
The authors would like to acknowledge Ms. Zhou Wentao for her assistance during the conduct of this project.
Addison C., Crawford B. (1999). Not bad, just misunderstood. Nursing Times, 95 (43), 52–53.
Baker M. (2008). Reviewing the application of the Glasgow Coma Scale: Does it have interrater reliability? British Journal of Neuroscience Nursing, 4 (7), 342–347.
Del Bueno D. J. (1983). Doing the right thing: Nurses’ ability to make clinical decisions. Nurse Educator, 8 (3), 7–11.
Edwards S. L. (2001). Using the Glasgow Coma Scale: Analysis and limitations. British Journal of Nursing, 10 (2), 92–101.
Ellis A., Cavanagh S. J. (1992). Aspects of neurosurgical assessment using the Glasgow Coma Scale. Intensive and Critical Care Nursing, 8 (2), 94–99.
Gladwell M. (2008). Outliers: The story of success. New York, NY: Little Brown and Company.
Gocan S., Fisher A. (2005). Ontario regional stroke centres: Survey of neurological nursing assessment practices with acute stroke patients. Axone, 26 (4), 8–13.
Hansen R., Norris H., Sceriha N. (1992). The effectiveness of education on the performance of neurological observations: A collaborative study. Australasian Journal of Neuroscience, 5 (1), 1–9.
Heron R. (2001). Interrater reliability of the Glasgow Coma Scale among nurses in sub-specialties of critical care. Australian Critical Care, 14 (3), 100–105.
Holbrook A. L., Krosnick J. A. (2010). Social desirability bias in voter turnout reports: Tests using the item count technique. Public Opinion Quarterly, 74 (1), 36–37.
Holdgate A., Ching N., Angonese L. (2006). Variability in agreement between physicians and nurses when measuring the Glasgow Coma Scale in the emergency department limits its clinical usefulness. Emergency Medicine Australasia, 18 (4), 379–394.
Ingram N. (1994). Knowledge and level of consciousness: Application to nursing practice. Journal of Advanced Nursing, 20 (5), 881–884.
Mortel T. F. (2008). Faking it: Social desirability response bias in self-report research. Australian Journal of Advanced Nursing, 25 (4), 40–48.
O’Farrell B., Zou G. Y. (2008). Implementation of the Canadian Neurological Scale on an acute care neuroscience unit: A program evaluation. Journal of Neuroscience Nursing, 40 (4), 201–211.
Segatore M., Way C. (1992). The Glasgow Coma Scale: Time for change. Heart and Lung, 21 (6), 548–557.
Shoqirat N. (2006). Nursing students’ understanding of the Glasgow Coma Scale. Nursing Standard, 20 (30), 41–47.
SPSS Inc. (2007). SPSS 16.0, SPSS base application guide. Chicago, IL: Author.
Steginga S. K., Dunn J., Dewar A. M., McCarthy A., Yates P., Beadle G. (2005). Impact of an intensive nursing education course on nurses’ knowledge, confidence, attitudes, and perceived skills in the care of patients with cancer. Oncology Nursing Forum, 32 (2), 375–381.
Waterhouse C. (2008). An audit of nurses’ conduct and recording of observations using the Glasgow Coma Scale. British Journal of Neuroscience Nursing, 4 (10), 492–499.
Wellington B. (2005). Development of a guide for neurological observations. Nursing Times, 101 (39), 32–34.