Exploring NCLEX Failures and Standardized Assessments

Emory, Jan, PhD, RN, CNE

doi: 10.1097/NNE.0000000000000601

Background Nurse educators seek valid and reliable tools to assist in early identification of and intervention for students at risk of NCLEX-RN failure.

Purpose The purpose of this cross-sectional study was to use principal component analysis to explore relationships within standardized assessment (SA) scores from a sample of students who failed the NCLEX-RN on the first attempt.

Methods Standardized assessment scores were collected from prelicensure programs between 2009 and 2016 (n = 296). Principal component analysis sought to reveal SA scores that represented redundancy or duplication.

Results The principal component analysis found 2 distinct components emerging from the 8 SAs included in the study, signifying duplication in the content assessed. Within these 2 components, maternal newborn and pharmacology had the strongest correlations among the SA scores.

Conclusions Discovering those SAs that assess similar content and have the strongest correlations can provide additional information for decision making when implementing these tests throughout the nursing curriculum.

Author Affiliations: Associate Professor, Department of Nursing, College of Education and Health Professions, University of Arkansas, Fayetteville.

The study was funded by the ATI Educational Assessment Nursing Research Grant.

The author declares no conflicts of interest. The author has no relationship with Assessment Technologies Institute, LLC.

Correspondence: Dr Emory, Eleanor Mann School of Nursing, 606 Razorback Rd, Fayetteville, AR 72701 (demory@uark.edu).

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Web site (www.nurseeducatoronline.com).

Accepted for publication: July 23, 2018

Published ahead of print: September 25, 2018

Cite this article as: Emory J. Exploring NCLEX failures and standardized assessments. Nurse Educ. 2019;44(3):142-146. doi: 10.1097/NNE.0000000000000601

The National Council of State Boards of Nursing reported that the total first-time, United States–educated, passing percentage of candidates taking the National Council Licensure Examination-Registered Nurses (NCLEX-RN) in the first quarter of 2018 was 89.25%.1 This percentage shows improvement from the 2017 report of 87.11%. The 2017 percentage leaves 5421 graduates unavailable for the workforce that is projected to need more than 1 million nurses by 2030.2,3

In addition to workforce concerns, pass rates on NCLEX-RN examinations remain a major focus for educational institutions offering nursing programs. For the individual nursing program, pass rates below established benchmarks can have serious implications for the academic institution, faculty, graduates, and employers.4,5 Accrediting bodies, such as the Commission on Collegiate Nursing Education and the Commission for Nursing Education Accreditation, evaluate program pass rates as an indicator of quality.6 These realities emphasize the importance of graduates' success on the first attempt at achieving licensure.

In response to this ongoing concern, nurse educators continue to seek valid and reliable tools to help students achieve NCLEX-RN success. Consequently, many prelicensure nursing programs select commercially available end-of-program comprehensive predictor examinations in an effort to identify students predicted to fail the NCLEX-RN. Students identified as at risk may be placed in remedial activities and even denied the opportunity to take the NCLEX-RN in an effort to improve outcomes.7,8 In a national study conducted by Assessment Technologies Institute, LLC (ATI), RN comprehensive predictor scores (n = 7126) were analyzed for accuracy in predicting NCLEX-RN success. The examination predicted successful passing of the NCLEX-RN 96% of the time when students (n = 4268) scored between 90% and 100% on the comprehensive examination.9 Consistent with this finding, additional studies support high levels of accuracy in predicting NCLEX-RN success using comprehensive end-of-program exit examinations.7,10-13 The limitation of these end-of-program comprehensive predictor examinations is that recognizing at-risk students near the end of a program of study is too late in the educational process for early intervention and just-in-time remediation.

Efforts to recognize at-risk students earlier in the educational process have led to the development of standardized content-specific assessments (CSAs) administered throughout the program of study and not exclusively at the end. In recent years, standardized assessment (SA) packages have emerged to offer nurse educators another tool to recognize at-risk students in preparation for the NCLEX-RN. These packages typically offer CSAs, methods for remediation, and a comprehensive end-of-program predictor examination implemented throughout a program of study. Content-specific assessments measure students' mastery level of concepts and content in specific areas such as adult medical-surgical, pharmacology, nursing care of children, and others. Content-specific assessments are placed within the nursing program consistent with the content presented. These SA packages are made available for purchase through companies such as ATI, Health Education Systems Incorporated (HESI), Hurst, Kaplan, National League for Nursing, Mosby, and others.

The implementation of CSAs has resulted in additional research on their ability to predict NCLEX-RN outcomes, even though the commercial vendors neither claim nor appear to accept responsibility for a predictive relationship to the NCLEX-RN.14 Content-specific assessments have been studied for predicting NCLEX-RN outcomes with varying results. Adult medical-surgical, mental health, community, nursing care of children, pharmacology, and fundamentals tests were found to be significantly correlated with NCLEX-RN outcomes.10-13 In the strongest evidence found14 using CSA scores from ATI to predict NCLEX-RN outcomes, the results showed that students (n = 2440) performing at level 2 or above on all 9 CSAs were highly likely (78%) to pass the NCLEX. The predictive percentage decreased significantly when student performance at level 2 or above fell to 8 assessments (63%) and 7 assessments (44%).7

Attempts to accurately predict NCLEX-RN success or failure have been the subject of numerous studies testing a variety of variables.7,10-13 Although many studies find significance in variables related to NCLEX-RN success, the literature offers sparse evidence of relationships to student failure,5,6,10 and this area requires additional investigation. Several studies found links to NCLEX-RN failures but lacked the significance needed to suggest a strong relationship. A number of researchers studying NCLEX-RN failure found weak levels of significance or low predictability, ranging from 24% to 30.8%.5,6,15 The evidence available in the literature shows that although nursing education studies have identified strong indicators of NCLEX-RN success, the indicators for failure remain obscure. In addition, limitations of the body of work on SAs were found, including (a) the number and variety of SAs used, (b) lack of consistent placement within the program of study, (c) types of prelicensure programs sampled, and (d) focus on NCLEX-RN outcome.9

Purpose

The more traditional approaches to analyzing the significant indicators of NCLEX-RN success, such as assessment scores, have led to few new revelations over the past decade. Many studies on SAs compare passing and failing status or analyze aggregate datasets from limited sources. This study sought to analyze SA scores from multiple sources representing a variety of commercial vendors. In addition, principal component analysis (PCA) was used as an exploratory technique to analyze the scores and acquire new insight into NCLEX-RN failure for future hypothesis testing. The purpose of this cross-sectional study was to use PCA to explore relationships in SA scores from a sample of students who failed the NCLEX-RN on the first attempt. The SA scores were provided by multiple sources, including 10 programs of study and 1 commercial vendor in the United States. This seems to be the first study to use this statistical approach to analyze SA scores across multiple vendors and multiple programs of nursing.

Methods

A retrospective, cross-sectional design was used for this study. The statistical approach used PCA to explore the relationships among SA scores from students failing NCLEX-RN on the first attempt. The institutional review board at the university gave exempt status to the study.

Sampling Procedure

Standardized assessment scores from students failing the NCLEX-RN on the first attempt were sought from associate and baccalaureate nursing programs and commercial vendors through networking opportunities at conferences and telephone and e-mail solicitations. Ten programs of nursing and 1 commercial vendor, Kaplan, provided deidentified SA scores for students failing the NCLEX-RN on the first attempt between spring 2009 and spring 2016. Participating programs were provided a spreadsheet identifying 10 content areas typically used by commercial SA vendors. The sample of first-attempt SA scores was matched across vendors to 9 content-specific areas: (1) fundamentals of nursing care, (2) pediatrics, (3) maternal/newborn or obstetrics, (4) pharmacology, (5) adult medical-surgical, (6) mental health/psychiatric nursing, (7) leadership, (8) community, and (9) nutrition, along with the comprehensive end-of-program assessment. Inclusion criteria for the sample were (a) completion of a prelicensure program of nursing, (b) completion of a minimum of 3 SAs, and (c) recorded failure of the NCLEX-RN on the first attempt.

Programs were asked to provide students' scores for CSAs and the end-of-program comprehensive or exit predictor examination. The specified sampling design and study inclusion criteria resulted in retrospectively collected SA score datasets from 296 students completing either an associate or baccalaureate prelicensure nursing program, far exceeding the minimum sample size of 100 for PCA. Data provided by the 10 nursing programs reflected a variety of commercial vendors, including ATI, Kaplan, HESI, and Mosby. One commercial supplier provided first-attempt SA scores for students reported to have failed the NCLEX-RN on the first attempt from prelicensure programs using its assessments. These SA scores were from multiple programs of study across the United States, and the programs' identities were not provided.

Standardized assessment scores were merged into a Microsoft Excel file and then transformed to Z-scores. Student-level data that did not meet the inclusion criteria for the study were removed at this point in the analysis. The SA scores for nutrition and community health were not of a sufficient number to include in the analysis and were excluded. The standardized scores were also screened for outliers, with any significant outliers removed before analyzing the data. When listwise deletion was used, the final sample size was 135 complete cases.
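The preprocessing steps described above can be sketched in Python. This is a minimal illustration, not the author's actual procedure; in particular, the |z| > 3 outlier cutoff is an assumed screening rule, as the article does not specify one.

```python
import numpy as np

def preprocess(scores):
    """Standardize each assessment column to z-scores, screen outliers,
    and apply listwise deletion to incomplete cases.

    `scores` is a 2-D array (students x assessments) of raw SA scores
    with np.nan marking missing values. The |z| > 3 cutoff is an
    assumed rule for illustration only.
    """
    scores = np.asarray(scores, dtype=float)
    # Column-wise z-scores, ignoring missing values
    mean = np.nanmean(scores, axis=0)
    std = np.nanstd(scores, axis=0, ddof=1)
    z = (scores - mean) / std
    # Treat significant outliers as missing before deletion
    z[np.abs(z) > 3] = np.nan
    # Listwise deletion: keep only students with a complete set of scores
    complete = ~np.isnan(z).any(axis=1)
    return z[complete]
```

Listwise deletion is what reduces the 296 collected datasets to the 135 complete cases analyzed below.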

Statistical Analysis

Principal component analysis is a multivariate technique with the following goals: to extract important information and represent it as a new set of variables (components), compress the size of the data set by keeping only the most important information, simplify the description of the data set, and analyze the structure of the observations.15 The procedure recognizes redundancy or duplication of content through loadings, represented as correlations, that can uncover new insights about students at risk for NCLEX-RN failure.
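The extraction step can be made concrete with a short numpy sketch. This is a generic PCA on the correlation matrix of standardized scores, not the SPSS procedure used in the study; loadings are obtained by scaling each eigenvector by the square root of its eigenvalue, so each loading is the correlation between an original variable and a component.

```python
import numpy as np

def pca_loadings(z):
    """Principal component analysis of standardized scores.

    Returns eigenvalues (variance explained by each component, in
    descending order) and loadings (variable-component correlations).
    `z` is an (n_students, n_assessments) array of z-scores.
    """
    # Correlation matrix of the standardized assessment scores
    corr = np.corrcoef(z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    # eigh returns ascending order; sort components by descending eigenvalue
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # Loadings: eigenvectors scaled by the square root of their eigenvalues
    loadings = eigvecs * np.sqrt(eigvals)
    return eigvals, loadings
```

The eigenvalues sum to the number of variables, which is why the proportion of variance explained by a component is its eigenvalue divided by the number of assessments.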

Deidentified, standardized SA scores were imported from the data spreadsheet into IBM SPSS version 23.0 for PCA (IBM Corp, Armonk, New York). Varimax orthogonal rotation was performed, and comparisons were made between the rotated and the nonrotated results. Two methods were applied to correct for missing data in the standardized SA scores, listwise deletion and mean imputation. As PCA is highly impacted by missing data, the results of both were compared to detect trends in SA scores and redundancy in the 8 retained assessments in the sample of first-attempt NCLEX-RN failures.
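Of the two missing-data corrections, listwise deletion simply drops incomplete cases; mean imputation can be sketched as below. This is a generic illustration, not the SPSS routine used in the study.

```python
import numpy as np

def mean_impute(z):
    """Mean imputation: replace each missing value with the column
    (assessment) mean computed over the observed cases."""
    z = np.array(z, dtype=float)
    col_means = np.nanmean(z, axis=0)
    rows, cols = np.where(np.isnan(z))
    z[rows, cols] = col_means[cols]
    return z
```

Because mean imputation shrinks variability toward the mean while listwise deletion discards information, comparing results under both corrections is a reasonable robustness check for a PCA.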

Assumption Testing

Several criteria for the suitability of the SA score data for PCA were considered. The correlation matrix indicated that each of the 8 retained SA score variables had a significant linear correlation with at least 1 of the remaining original SA score variables at the 5% significance level. This finding indicated that significant linear correlation existed among the SA scores and that PCA was a reasonable analytical approach to recognize redundancy or duplication in the content included in each SA. The Kaiser-Meyer-Olkin measure of sampling adequacy was approximately 0.80, which greatly exceeds the minimum recommended value of 0.50. In addition, the result of the Bartlett test of sphericity was significant (χ2(28) = 335.11, P < .0001), indicating that PCA was an appropriate fit for the data. Finally, the communalities observed in the final PCA model ranged between 0.44 and 0.81, suggesting sufficient correlation among the original SA variables and suitability of the selected analysis (see Table 1, Supplemental Digital Content, http://links.lww.com/NE/A540).
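Both suitability checks can be computed directly from the data. The sketch below is a generic implementation of the Bartlett sphericity test and the KMO measure, offered only to make the formulas concrete; it does not reproduce the SPSS output reported above.

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(z):
    """Bartlett test of sphericity: the null hypothesis is that the
    correlation matrix is an identity matrix (no linear correlation)."""
    n, p = z.shape
    corr = np.corrcoef(z, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    df = p * (p - 1) / 2
    return chi2, df, stats.chi2.sf(chi2, df)

def kmo(z):
    """Kaiser-Meyer-Olkin measure of sampling adequacy: proportion of
    shared variance attributable to zero-order rather than partial
    correlations (values near 1 favor component analysis)."""
    corr = np.corrcoef(z, rowvar=False)
    inv = np.linalg.inv(corr)
    # Partial correlations from the inverse correlation matrix
    scale = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / scale
    np.fill_diagonal(corr, 0.0)
    np.fill_diagonal(partial, 0.0)
    r2, pr2 = (corr ** 2).sum(), (partial ** 2).sum()
    return r2 / (r2 + pr2)
```

A KMO near 0.80 and a significant Bartlett result, as reported in the study, jointly indicate that the variables share enough linear correlation for components to be meaningful.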

After correcting for missing values using listwise deletion, the first eigenvalue of 3.51 explained approximately 44% of the overall variance; the second eigenvalue of 1.26 explained another 16% of the overall variance (see Table 2, Supplemental Digital Content, http://links.lww.com/NE/A541). Similar results were found using mean imputation to correct for missing values, with the first 2 eigenvalues explaining 37% and 14% of overall variance, respectively.

Applying both traditional methods, the Kaiser rule and the Cattell scree test, indicated that the 2-component model was preferred, explaining approximately 60% of the total variance in SA scores. Only 2 components, components 1 and 2, had eigenvalues greater than 1.0, and the scree plot showed a “leveling off” after 2 components (see Figure, Supplemental Digital Content, http://links.lww.com/NE/A539). Although PCA was conducted initially with no rotation, varimax orthogonal rotation was then performed to reflect a simple structure in the loading values and enhance the interpretability of the extracted principal components, and comparisons were made between the rotated and nonrotated results. The nonrotated loading values are provided in Table 3, Supplemental Digital Content, http://links.lww.com/NE/A542.
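The Kaiser rule can be illustrated with the eigenvalues reported above. In the sketch below, the first 2 eigenvalues (3.51 and 1.26) come from the article; the remaining 6 are illustrative values chosen only so that the 8 eigenvalues sum to 8, as they must for a correlation-matrix PCA of 8 variables.

```python
import numpy as np

def retain_components(eigvals):
    """Kaiser rule: retain components whose eigenvalue exceeds 1.0,
    that is, components explaining more variance than a single
    standardized variable. Returns the count retained and the
    proportion of total variance they explain."""
    ev = np.asarray(eigvals, dtype=float)
    keep = int(np.sum(ev > 1.0))
    explained = ev[:keep].sum() / ev.sum()
    return keep, explained
```

With the article's first 2 eigenvalues, the retained components explain (3.51 + 1.26) / 8, approximately 60% of total variance, matching the reported result.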

The rotated loadings revealed a more defined component structure showing maternal/newborn and pharmacology as the highest loading assessments in the 2-component model. The rotated loading values are provided in the Table.
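Varimax rotation itself is a standard iterative procedure: it seeks the orthogonal rotation that maximizes the variance of the squared loadings, pushing each assessment toward a single dominant component while preserving communalities. A generic numpy sketch (not the SPSS implementation used in the study):

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Varimax orthogonal rotation of a loading matrix.

    Iteratively updates an orthogonal rotation R via the SVD of the
    varimax criterion gradient; because R is orthogonal, each
    variable's communality (row sum of squared loadings) is unchanged.
    """
    L = np.asarray(loadings, dtype=float)
    p, k = L.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        rotated = L @ R
        # Gradient of the varimax criterion (gamma = 1)
        grad = rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p
        u, s, vt = np.linalg.svd(L.T @ grad)
        R = u @ vt
        new_var = s.sum()
        if new_var - var < tol:
            break
        var = new_var
    return L @ R
```

The simple structure this produces is what makes the maternal/newborn and pharmacology loadings stand out so clearly in the rotated solution.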

Results and Discussion

The structure of the rotated loading values, represented as correlations, showed that 6 of the original SAs loaded strongly on component 1, which contributed the most overall to the variation in the SA scores for first-attempt NCLEX-RN failures. Component 1 represented content related to fundamentals of nursing care, maternal/newborn, pediatrics, mental health, leadership, and the end-of-program comprehensive examination. Component 1 related most strongly to maternal/newborn (r = 0.85), even more so than to the comprehensive predictor (r = 0.78). This is an interesting finding, given that one might expect the comprehensive predictor to load the strongest because it is thought to represent all the content expected for NCLEX-RN preparation.

The pediatric, leadership, and fundamentals CSAs also loaded strongly on component 1, but to a lesser extent than the maternal/newborn assessment. These multiple strong correlations emphasize the redundancy or duplication in the content assessed. The prominence of the maternal/newborn CSA found in this study seems inconsistent with other work analyzing SAs; however, outcomes are difficult to compare because the available studies used statistical approaches other than PCA. In a recent study with a large sample (n = 19 535) of associate and baccalaureate student CSA scores, the maternal/newborn assessment was found to correlate at a moderate level (r = 0.54) with the comprehensive predictor.7

The remaining 2 CSAs, pharmacology and adult medical-surgical, loaded strongly on component 2 and contributed greatly to the overall variation in the scores, suggesting that these assessments contained redundant or duplicated content. This finding suggests that component 2 could be measured primarily by the pharmacology CSA.16 This might be logical considering that the pharmacology examination uses patient scenarios in the context of formulating appropriate test items for the nursing profession and that many of these scenarios align with medical-surgical content. Pharmacology CSAs have been found to predict NCLEX-RN outcomes in previous work.10,13 Other studies report adult medical-surgical assessments to correlate the strongest with the comprehensive predictor score (r = 0.608).7 The mental health assessment loaded on both components 1 and 2, suggesting that this content was not well represented throughout the assessments included in this study. This finding is not surprising, since mental health content is a specialized area that uses a different skill set than physiologically based nursing care.

In summary, the analysis found the 2-component model to be appropriately guided by the scree plot, the proportion of variance accounted for by the components, and the interpretation of substantive meaning that the components contributed. The PCA indicated that 2 distinct components could account for a considerable amount of variation in 8 SA scores of students failing the NCLEX-RN on their first attempt. Most of the SAs exhibit a great deal of redundancy, with strong correlations evident among students' SA scores, suggesting that the SAs are testing the same content repeatedly.17 In addition, 2 of the SAs, pharmacology and adult medical-surgical, seem to be particularly correlated to one another and less so to the others, again suggesting that these 2 SAs are testing essentially the same content that is distinctly different from the content reflected in component 1.

The quest for early identification of students at risk for NCLEX-RN failure has resulted in an increased number of assessments and examinations that students are expected to complete, and students are reporting that too many SAs are administered throughout the nursing program. In response, some nursing programs are now opting to administer a reduced number of SAs to decrease the testing overload and mental fatigue that students can experience. Selection of SAs remains largely dependent on faculty preference, as evidence identifying the best SAs to include in a program seems to be absent from the literature. Selection decisions should instead be evidence based, assessing students' performance against national standards.18-20 Faculty have an ethical responsibility to ensure that tests are fair and based on the best available evidence.20

Even though some researchers report that all SAs contribute substantially to preparing students for NCLEX-RN success,6 this analysis shows that the SAs can largely be explained by the 2-component PCA model and that the correlations between the loaded variables in each component are strong. Principal component analysis as a reduction model warrants further consideration as a means of analyzing the relationship between SA scores and NCLEX-RN outcomes. The maternal/newborn and pharmacology assessments, which correlated the strongest on each component, can guide future studies to predict NCLEX-RN outcomes.

Strengths and Limitations

This study's cross-sectional design and retrospective data collection procedures limited participation to students meeting the inclusion criteria. Additional SA scores could have been used if the inclusion criteria were expanded to allow all scores. Another limitation is the inability to control for the changes applied to the NCLEX-RN test plan every 3 years.

The strengths of this study lie in the sample collected from multiple programs of nursing throughout the United States. Another strength is the use of PCA to explore the SA scores. This study seems to be the first to use this approach to explore relationships between the variables and NCLEX failure.

Conclusions

The extensive literature on student performance on the NCLEX-RN shows strong evidence to support the ability to predict success. Yet decades of studies have yielded little evidence to correct the problem, as national first-time pass rates have remained in the 80% range for a number of years. Recognizing that SAs are testing similar content can inform nurse educators' decision making related to the uses of content-specific and comprehensive assessments.

The findings of this study are the first in the literature to explore the utility of PCA for analyzing SA scores from students who failed the NCLEX-RN on the first attempt. This is also the first study to collect and analyze SA scores from multiple prelicensure nursing programs across the United States using a variety of commercial vendors. Future studies using the maternal/newborn and pharmacology assessment scores to predict NCLEX-RN outcomes are the next step to validate these findings. Investigation of SAs in practical nursing programs is also needed, as there is a gap in the evidence for this population. Studying the predictive relationship between the highly correlated SA scores and NCLEX-RN failure remains a priority for future work.

References

1. National Council of State Boards of Nursing. Exam statistics and publications. 2018. Available at https://www.ncsbn.org/Table_of_Pass_Rates_2018.pdf. Accessed May 28, 2018.
2. National Council of State Boards of Nursing. Exam statistics and publications. 2017. Available at https://www.ncsbn.org/Table_of_Pass_Rates_2017.pdf. Accessed May 25, 2018.
3. Buerhaus PI, Skinner LE, Auerbach DI, Staiger DO. State of the registered nurse workforce as a new era of health reform emerges. Nurs Econ. 2017;35(5):229–237.
4. Hinderer KA, DiBartolo MC, Walsh CM. HESI admission assessment (A(2)) examination scores, program progression, and NCLEX-RN success in baccalaureate nursing: an exploratory study of dependable academic indicators of success. J Prof Nurs. 2014;30(5):436–442.
5. DeLima M, London L, Manieri E. Looking at the past to change the future: a retrospective study of associate degree in nursing graduates' National Council Licensure Examination scores. Teach Learn Nurs. 2011;6(3):119–123.
6. National League for Nursing. CNEA Mission and Values. 2018. Available at http://www.nln.org/accreditation-services/the-nln-commission-for-nursing-education-accreditation-(cnea). Accessed June 9, 2018.
7. Brussow JA, Dunham M. Students' midprogram content area performance as a predictor of end-of-program NCLEX readiness. Nurse Educ. 2018;43(5):238–241.
8. Reinhardt A, Keller T, Ochart Summers L, Schultz P. Strategies for success: crisis management model for remediation of at-risk students. J Nurs Educ. 2012;51(6):305–311.
9. Assessment Technologies Institute, LLC. Evaluating the predictive power of ATI's 2010 RN Comprehensive Predictor. Available at http://www.atitesting.com/Libraries/pdf/Research_Brief_-_RN_CPtoNCLEX.sflb.ashx. Accessed June 9, 2018.
10. Emory J. Standardized mastery content assessments for predicting NCLEX-RN outcomes. Nurse Educ. 2013;38(2):66–70.
11. Harding M. Predictability associated with exit examinations: a literature review. J Nurs Educ. 2010;49(9):493–497.
12. Homard CM. Impact of a standardized test package on exit examination scores and NCLEX-RN outcomes. J Nurs Educ. 2013;52(3):175–178.
13. Yeom YJ. An investigation of predictors of NCLEX-RN outcomes among nursing content standardized tests. Nurse Educ Today. 2013;33(12):1523–1528.
14. Assessment Technologies Institute, LLC. Using RN Content Mastery Series test data to identify student needs. Available at http://www.atitesting.com/Libraries/pdf/Research_Brief_-_RN_CMS_final.sflb.ashx. Accessed June 9, 2018.
15. Schooley A, Kuhn JR. Early indicators of NCLEX-RN performance. J Nurs Educ. 2013;52(9):539–542.
16. Abdi H, Williams LJ. Principal component analysis. Wiley Interdiscip Rev Comput Stat. 2010;2(4):433–459.
17. Penn State Eberly College of Science. Interpretation of the Principal Components. 2018. Available at https://onlinecourses.science.psu.edu/stat505/node/54. Accessed June 10, 2018.
18. Lazarin M. Testing Overload in America's Schools. Center for American Progress website. 2014. Available at https://cdn.americanprogress.org/wp-content/uploads/2014/10/LazarinOvertestingReport.pdf. Accessed June 10, 2018.
19. Mee CL, Hallenbeck VJ. Selecting standardized tests in nursing education. J Prof Nurs. 2015;31(6):493–497.
20. National League for Nursing. Fair testing guidelines for nursing education. 2012. Available at http://www.nln.org/docs/default-source/default-document-library/fairtestingguidelines.pdf?sfvrsn=2. Accessed June 10, 2018.
Keywords:

NCLEX-RN; NCLEX-RN failure; nursing education; principal component analysis; standardized assessments

Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved