The Quality and Safety Education for Nurses (QSEN) competencies serve as a framework for nursing education. Developed in 2005,1 the 6 QSEN competencies emphasize behaviors consistent with patient-centered care, collaboration with other members of the health care team, the use of evidence-based practice, quality improvement, concentrated efforts to ensure patient safety, and the integrated use of informatics to support patient care. These 6 competencies align with the Institute of Medicine (now the National Academy of Medicine) competencies,2 which are required for all health care professionals. Using language that describes nursing actions, the knowledge, skills, and attitudes (KSAs) that support each competency provide clear direction for the expectation of competent clinical practice.
In prelicensure nursing education, it can be challenging to know when a competency should be introduced, when it should be emphasized, and when it should be an expectation of performance, as well as how the behavior is manifested in clinical learning. Building on previous work to establish content validation for a QSEN-based clinical evaluation instrument for prelicensure acute medical-surgical nursing,3 this article discusses the development of QSEN-based clinical evaluation instruments for each prelicensure clinical course and describes the process of leveling and establishing content validation for each instrument's items.
The purpose of this study was to establish content validation for instruments to measure competency in all prelicensure clinical nursing courses. The importance of using validated instruments for the high-stakes assessment of clinical performance cannot be overstated because of the significance of the outcome and potential consequence,4 yet many nursing programs use clinical evaluation tools that have not been tested for reliability and validity.5 The literature indicates that students frequently view clinical evaluation in prelicensure nursing programs as a subjective process6-8; therefore, the goal was to create instruments reflective of clinical practice, leveled for appropriate expectation based on course placement within the curriculum, and constructed with an objective framework on which to score student performance.
Developing the Clinical Evaluation Instruments
Development began with creation of instruments formulated from a comprehensive conceptualization of the construct based on firsthand knowledge and a comprehensive review of the literature. The 6 QSEN competencies served as the headings for each evaluation instrument. A seventh heading of professional role development was added because of its importance in clinical education. Since a gap analysis had been completed for the prior work,3 the development of these instruments began with leveling the KSAs to align with expected performance levels based on clinical course placement within the standard prelicensure nursing curriculum. The standard curriculum begins with fundamentals of nursing and is followed by semesters where specialty clinical courses of maternal-child health, pediatrics, and psychiatric nursing are corequisite with the first adult medical-surgical nursing course. The final year usually entails the second adult medical-surgical nursing and public health clinical courses.
The Delphi study9 that identified where the QSEN competencies' KSAs should be introduced, where they should be emphasized, and where they should be expected performance parameters served as the starting point for leveling items for each of the 6 instruments. Faculty colleagues reviewed each instrument based on specialty to ensure that course-specific core competencies and nurse-sensitive indicators were represented; their feedback helped to further amend items. A rigorous review of the newly developed instruments was then conducted by content experts, whose input refined items for greater clarity and authenticity.
Based on their area of teaching expertise, nurse educators were recruited for panels of 5 to 7 members each to review and score the items of each specific QSEN-based clinical evaluation. The panels were composed of a mix of doctorally prepared and master's degree–prepared educators; each team had at least 1 educator with expert knowledge of QSEN and at least 2 adjunct nurse educators with practice expertise and no direct knowledge of QSEN, although they understood quality and safety competencies from their practice positions (Table 1, Supplemental Digital Content, http://links.lww.com/NE/A604). Reviewers representing both practice and expert QSEN knowledge were purposefully selected to ensure that each clinical evaluation not only aligned with the QSEN competencies but also addressed current clinical practice. Service as a reviewer was incentivized by a $50 honorarium for each completed review; the honoraria were funded by a small grant awarded by the author's school of nursing.
Data collection occurred over 2 rounds of review that spanned a 3-month period. All content reviewers participated in both rounds. For each review, the experts were provided explicit written directions about the purpose of the review and detailed directions for scoring individual items. Reviewers were asked to rate their level of agreement with the relevance (appropriateness) of each item on the evaluation they reviewed while considering the expectation for students based on placement of the course in the beginning, middle, or end of the nursing program. Reviewers were also asked to provide detailed comments about individual items and the overall instrument being reviewed. The first round determined whether the items thoroughly addressed the domain of clinical performance evaluation and whether they correctly represented the construct of clinical performance for the specific clinical course. The second round confirmed that reviewer feedback was accurately reflected in the revised items and assessed the content validity of the items and the scale as a whole for each instrument.
The goals of the rigorous review for each course-specific instrument were to (1) achieve expert consensus that the items for each were relevant for inclusion on a QSEN-based clinical evaluation instrument specific to that course content, (2) reduce error of measurement by increasing the clarity of items, and (3) appropriately level requisite behaviors and demonstrate increasing progression in clinical performance expectation from the fundamentals course to the senior nursing courses. The Content Validity Index (CVI), a widely used method to determine content validity for multi-item scales in nursing research,10 was chosen as the process to compute consensus estimates based on ratings of relevance by each expert panel. The CVI quantifies the extent to which experts agree; a high level of agreement indicates that the instrument creates a shared understanding of a construct.
For each instrument, the CVI was used to quantify the degree of relevance of each item and to compute an overall scale value for that instrument. The expert nurse educators rated each item on a 4-point ordinal scale with the following values: 1 = not relevant, 2 = somewhat relevant, 3 = quite relevant, and 4 = highly relevant. The item CVI (I-CVI) was computed as the number of experts rating an item 3 or 4 divided by the number of experts rating that specific instrument, indicating the proportion of agreement about an item's relevance. With 4 or fewer experts, Polit and Beck10 suggest 100% agreement is required; with 5 or more experts, which each of these panels had, an item can still be considered valid with 1 rating of not relevant, allowing for a modest amount of disagreement.
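As a brief worked illustration of the I-CVI computation described above (using hypothetical ratings, not the study's data), the calculation can be sketched in a few lines of Python:

```python
def i_cvi(item_ratings):
    """Item-level CVI: proportion of experts rating the item 3 (quite relevant)
    or 4 (highly relevant) on the 4-point ordinal scale."""
    relevant = sum(1 for r in item_ratings if r >= 3)
    return relevant / len(item_ratings)

# Hypothetical ratings from a 6-expert panel
# (1 = not relevant, 2 = somewhat relevant, 3 = quite relevant, 4 = highly relevant).
ratings = [4, 3, 4, 3, 1, 4]
print(round(i_cvi(ratings), 2))  # 5 of 6 experts rated 3 or 4 -> 0.83
```

With a 6-member panel, the single not relevant rating still leaves the item above the 0.80 threshold used in this study.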
The scale-level CVI (S-CVI) for each instrument was computed using both the universal agreement (UA) method, for which the lower limit of acceptability for scale-level values is 0.80, and the averaging (Ave) method, for which the lower limit of acceptability is 0.90.10,11
Microsoft Excel was used to calculate mean scores and CVI for the items included on each specific instrument for each round of review. CVI was calculated by grouping items rated quite relevant and highly relevant (scored as 3 or 4) and grouping items rated not relevant and somewhat relevant (scored as 1 or 2). Item scores as well as focused comments from the expert reviewers served to refine and clarify some items and discard others. The CVI of the entire scale was calculated using both the universal agreement and averaging methods. The proportion of agreement across all reviewers for each of the scales was calculated.
Each instrument was reviewed by its designated panel of experts. After the first round of review for each instrument, adjustments were made to modify syntax to clarify meaning or to appropriately level the performance expectation of items. Some items were reordered to increase the clarity of the scale, and items rated as not relevant were discarded. After adjustments were made, the same reviewers for each instrument participated in a second round of review of the revised items. The second round yielded 6 course-specific clinical evaluation instruments on which nearly all items had I-CVI scores greater than 0.80, indicating the items were evaluated as excellent and appropriate for inclusion on the course-specific QSEN-based clinical evaluation. The single exception was an item on the maternal-child health clinical evaluation requiring incorporation of the school's nursing model, which achieved a score of 0.67, indicating an evaluation of fair but still appropriate for inclusion (Table 2, Supplemental Digital Content, http://links.lww.com/NE/A605).
The S-CVI was calculated for each instrument using 2 methods, UA and Ave (Table). The UA method calculates the proportion of items on a scale that all reviewers rate as quite or highly relevant (3 or 4); this proportion can range from 0 to 1. Using this method, the S-CVI/UA of all 6 instruments was 0.833 or greater; Polit et al11 recommend an S-CVI of 0.80 or greater for an instrument to be judged excellent when using this method. The Ave method computes the I-CVI for every item on a scale and averages those values. Using this method, the S-CVI/Ave of all 6 instruments was 0.968 or greater; Polit et al11 recommend an S-CVI of 0.90 or greater when using this method.
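The two scale-level methods can likewise be sketched with hypothetical panel data (a 4-item scale rated by 5 experts; not the study's data) to show how UA and Ave differ:

```python
def s_cvi_ua(matrix):
    """Universal agreement: proportion of items that every expert rated 3 or 4."""
    universal = sum(1 for item in matrix if all(r >= 3 for r in item))
    return universal / len(matrix)

def s_cvi_ave(matrix):
    """Averaging: mean of the item-level CVIs across all items on the scale."""
    i_cvis = [sum(1 for r in item if r >= 3) / len(item) for item in matrix]
    return sum(i_cvis) / len(i_cvis)

# Hypothetical 4-item scale rated by 5 experts on the 4-point relevance scale.
scale = [
    [4, 4, 3, 4, 3],  # all experts rate 3 or 4 -> counts toward UA
    [4, 3, 4, 2, 4],  # one dissent: I-CVI = 0.80, excluded from UA
    [3, 4, 4, 4, 4],
    [4, 4, 4, 3, 4],
]
print(s_cvi_ua(scale))             # 3 of 4 items -> 0.75
print(round(s_cvi_ave(scale), 2))  # mean of 1.0, 0.8, 1.0, 1.0 -> 0.95
```

Note how a single dissenting rating removes an item from the UA numerator entirely but only modestly lowers the Ave value, which is why the two methods carry different acceptability thresholds.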
Proportion of relevance, an additional index that supports scale-level content validity, was calculated for each instrument. Like the CVI, it measures the proportion of experts who agree on the relevance of all items included in the scale. The proportion of agreement on items judged as relevant across the total number of expert educators for each instrument is reported and is well above the defensible minimal standard of 0.80 (Table).
The CVI is a well-established validity index, appropriate for this expansive undertaking to establish content validation on several instruments simultaneously. Processes that strengthened this study included having colleagues review the newly developed instruments to establish face validity, the recruitment of expert nurse educator panels composed of both QSEN experts and clinical practice experts for each instrument, providing detailed instructions with each review, and using the same process and expert panels for each round of review.10,11 Expert panel characteristics were described to establish their qualifications to serve as reviewers. Implementing the development and review of these instruments simultaneously allowed the researcher to identify elements significant to progression in the complexity and sophistication of student performance expectations for instruments to be used from the first clinical nursing course to the last in a nursing education program.
Expert consensus was achieved for items included in these 6 newly developed QSEN-based clinical evaluation instruments as relevant measures of student clinical performance, addressing performance expectations appropriate for the intended course. Similar to previous work, the findings indicate the QSEN competencies provide a relevant framework for clinical practice evaluation, evidenced by the high scale-level scores for these 6 instruments, despite the varying knowledge level of the QSEN competencies by the expert nurse educator reviewers. Findings support the items for inclusion in these QSEN-based clinical evaluation instruments as valid measures for contemporary nursing education and practice.
Nursing Education Implications
Clinical evaluation is a high-stakes assessment that warrants valid and reliable instrumentation to determine student achievement of clinical competence. Seldomridge and Walsh12 call for the establishment of improved precision with clinical evaluation instruments that demonstrate how performance expectations increase in sophistication across the curriculum. They stress the use of deliberate performance criteria and language that is understandable to both faculty and students. The QSEN-based clinical evaluations address this mandate.
Establishing content validation for these instruments simultaneously supported careful attention to progression expectations across the entirety of the clinical education process. The high item- and scale-level scores for each instrument suggest the QSEN competencies provide a relevant framework for clinical practice that allows students to clearly identify areas of strength and weakness. The uniformity of the instruments reinforces standardization. These evaluation instruments, framed in consistent quality and safety language, demonstrate a progression of increasing KSAs and clarify for students the clinical performance expectations throughout their nursing education. Framing clinical evaluations in the QSEN competencies allows students to appreciate the expectations for competent practitioners.
The lower score of the performance criterion related to evidence of a nursing model in the maternal-child health evaluation instrument reflects the fact that some programs no longer apply a nursing model to the education process. Discussion with 2 reviewers of the maternal-child health evaluation revealed that they were from states where the board of nursing had recently discontinued the requirement of a nursing model for nursing education programs, which accounted for their not relevant scoring of the item; this same item had an I-CVI of 0.80 or greater on all other clinical evaluations. The item was therefore retained in the maternal-child evaluation instrument to meet the requirement of some schools to apply a nursing model to clinical education.
Establishing content validation for evaluation instruments is essential for the high-stakes assessment of student clinical performance. These QSEN-based clinical evaluation instruments provide an objective manner with which to assign a score to student competency for the KSAs associated with professional nursing practice. The organization of these instruments allows for adaptability by many nursing programs, whether clinical evaluation is conducted through the assignment of letter grades or as a pass/fail assessment. The clinical evaluations, currently organized for assignable grades with a rubric developed in the original study, are available to download at https://qsen.tcnj.edu/resources/. Framing clinical evaluation in quality and safety supports the work of nurse educators and sets the standard for competent nursing student performance throughout the nursing education program.
This study establishes content validity for prelicensure-level QSEN-based clinical evaluation instruments. Content validity is a single aspect of an effective clinical evaluation instrument. Determination of language used to demonstrate progression may vary within different institutions. Future studies can provide further validity and reliability data and explore refinement of leveling of items to demonstrate progression as expectations for clinical practice continue to evolve.
The QSEN-based clinical evaluation instruments provide standardized language in a quality and safety framework, appropriate to demonstrate evolving sophistication, complexity, and expectation of student clinical performance. Using content validated instruments to evaluate student performance is essential to support student development and success. Agreement of the expert nurse educator reviewers despite varying degrees of knowledge related to the QSEN competencies supports that the QSEN-based clinical evaluation instruments capture current clinical nursing education and contemporary nursing practice.
1. Cronenwett L, Sherwood G, Barnsteiner J, et al. Quality and safety education for nurses. Nurs Outlook
2. Institute of Medicine. Health Professions Education: A Bridge to Quality. Washington, DC: National Academies Press; 2003.
3. Altmiller G. Content validation of a quality and safety education for nurses–based clinical evaluation instrument. Nurse Educ
4. Rutherford-Hemming T. Determining content validity and reporting a Content Validity Index for simulation scenarios. Nurs Educ Perspect
5. Oermann MH, Saewert KJ, Charasika M, Yarbrough SS. Assessment and grading practices in schools of nursing: national survey findings part I. Nurs Educ Perspect
6. Altmiller G. Student perceptions of incivility in nursing education: implications for educators. Nurs Educ Perspect
7. Del Prato D. Students' voices: the lived experience of faculty incivility as a barrier to professional formation in associate degree nursing education. Nurse Educ Today
8. Lasiter S, Marchiondo L, Marchiondo K. Student narratives of faculty incivility. Nurs Outlook. 2012;60(3):121–126, 126.e1.
9. Barton AJ, Armstrong G, Preheim G, Gelmon SB, Andrus LC. A national Delphi to determine developmental progression of quality and safety competencies in nursing education. Nurs Outlook
10. Polit DF, Beck CT. Nursing Research: Generating and Assessing Evidence for Nursing Practice. 10th ed. Philadelphia, PA: Wolters Kluwer; 2017.
11. Polit DF, Beck CT, Owen SV. Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health
12. Seldomridge LA, Walsh CM. Waging the war on clinical grade inflation: the ongoing quest. Nurse Educ