Feature Articles

Content Validation of the Quality and Safety Framed Clinical Evaluation for Nurse Practitioner Students

Altmiller, Gerry EdD, APRN, ACNS-BC, ANEF, FAAN; Dugan, Mary Ann DNP, APRN, FNP-BC

doi: 10.1097/NNE.0000000000000936

Abstract

More than 28 000 new nurse practitioners (NPs) graduated in 2017 to 2018,1 a significant increase from the 9200 reported in 2010.2 Although a number of validated clinical evaluation instruments for prelicensure students are available to nurse educators, few clinical evaluation instruments for NP students have been validated for use by students, faculty, and preceptors. This study describes the process used to develop, validate, and pilot a clinical evaluation instrument for NP students framed in the Quality and Safety Education for Nurses (QSEN) competencies for advanced practice.3

Literature Review

The literature on prelicensure clinical evaluation is plentiful, but considerably less has been published on graduate-level evaluation processes and tools. A search of databases that included the Cochrane Library, EBSCO, the Cumulative Index to Nursing and Allied Health Literature, and PubMed yielded minimal results. Pearson and colleagues4 developed a progressive NP student clinical evaluation tool grounded in the underpinnings of the National Organization of Nurse Practitioner Faculties (NONPF) and the Commission on Collegiate Nursing Education, but it was intended for use by faculty, not preceptors and students.

Cotter and colleagues5 described the evaluation methods and tools for assessing NP student progress in a dual-track adult health and gerontology NP program. Evaluation methods included reflective logs and evaluation tools. Specifically, the Preceptor Evaluation Tool was completed by the preceptor and reviewed by the faculty and student. The process included a plan for a student-faculty-preceptor meeting, with the written evaluation being preceptor led and faculty reviewed. The Preceptor Evaluation Tool used a tiered approach, with the competency percentage required to meet expectations increasing as students progressed through the program.

Logan and colleagues6 investigated the obstacles shared by schools of nursing and preceptors in meeting clinical expectations for NP students. In addition to identifying barriers to securing preceptors, they explored the inconsistent expectations between preceptors and NP students, which were exacerbated by the use of different evaluation tools to determine satisfactory performance and progress, particularly in the area of documentation. The authors suggested continual evaluation of appropriate clinical experiences and clear, consistent expectations for both NP students and preceptors to improve the efficiency and safety of clinical rotations.

Tractenberg and colleagues7 developed a curriculum tool, the Mastery Rubric (MR), that captures the foundational competencies not only for the NP student but also for the NP graduate and beyond. The tenets of the MR align with the NONPF competencies while providing guidance for the self-directed adult learner. It offers a clear developmental path for students to meet competencies in a stepwise fashion, with faculty monitoring the educational path. Although this tool can serve as a resource for faculty program design and evaluation and can guide the student learner, it does not engage preceptors in the evaluative process.

The 2016 Criteria for Evaluation of NP Programs Criterion IV.B.1 emphasizes the need for NP faculty to have oversight of student clinical experiences.8 Pitts and colleagues9 published the NONPF/American Association of Nurse Practitioners checklist of faculty expectations for preceptors, which outlined a structured clinical learning plan and included a section addressing evaluation. However, it did not include an itemized evaluation, and neither organization has endorsed a standardized evaluation document.

The QSEN competencies for advanced practice were published in 2009.3 A mapping of the QSEN competencies for advanced practice to NONPF's NP Core and Practice Doctorate Competencies10 identified close congruence across the 2 sets of competencies and recommended robust integration of the QSEN knowledge, skills, and attitudes (KSAs) for advanced practice into current NP curricula. A thorough literature review did not reveal any clinical evaluation instruments for NP students framed in quality and safety competencies. A standardized evaluation tool for students, preceptors, and site visitors would support consistency among all stakeholders by making performance expectations identified, defined, transparent, and explicit. This study addresses that gap.

Instrument Development

Clinical performance evaluation is a high-stakes assessment with significant consequences; therefore, it requires a valid and reliable measure of competency. The purpose of this study was to develop a valid and reliable clinical performance evaluation instrument for NP students. The process entailed establishing item- and scale-level content validity for a newly developed instrument, followed by pilot testing among NP students in their second clinical practice nursing course. The graduate-level QSEN competencies3 were chosen as the framework for the instrument because of their applicability to both academia and practice.11

The project had 3 phases. The first was creation of the instrument based on first-hand knowledge of the construct of NP competency evaluation, examination of the updated Nurse Practitioner Core Competencies Content,12 and a thorough review of the literature. In the second phase, content experts conducted a rigorous review of the instrument's items and the scale as a whole. The third phase involved pilot testing of the content-validated instrument.

The 6 categories of the QSEN competencies served as the headings for the evaluation instrument. Because of its importance in advanced practice nursing education, an additional competency, professional role development, served as the seventh heading. The KSAs of the graduate-level QSEN competencies, chosen for their leadership lens on quality and safety, formed the basis for the 33 items placed under the 7 headings. Each item was followed by a parenthetical space for indicating the student learning outcomes of the course that aligned with that item. Following the process used in our previous clinical evaluation instrument, a 4-point scale with ratings of (1) unsatisfactory, (2) poor, (3) satisfactory, and (4) above average, along with an additional column of not observed, was placed in columns adjacent to each item to serve as its scoring basis. Once the items were placed in this evaluation format, faculty members teaching in the graduate program of the school of nursing reviewed the instrument for face validity. Their feedback guided additional refinement of the items to increase clarity before the rigorous review by content experts began.

Methods

Data Collection Procedure

A panel of 7 doctorally prepared nurse educators was recruited as content experts to score the 33 items of the Quality and Safety Framed Clinical Evaluation for NP Students. All reviewers taught in graduate programs; 4 held NP roles in clinics, and 4 were well versed in the graduate-level QSEN competencies through association with the QSEN Institute. Although 3 reviewers did not have specific knowledge of the graduate-level QSEN competencies, they were well informed about the concepts the competencies represent through their practice roles and course teaching. Reviewers with varied QSEN knowledge and clinical practice roles were purposefully selected to ensure that the clinical evaluation items aligned well with both the QSEN competencies and clinical practice. A small grant from the investigators' college funded a $50 honorarium for each completed review, which was estimated to take about 30 minutes.

Data collection occurred over 2 rounds of review; the first round was completed by all 7 recruited content experts, and the second round by 6 of the first-round reviewers. Both rounds followed the same process, with clear written instructions that included the purpose of the review and directions for scoring individual items. Reviewers were asked to rate their level of agreement with the relevance (appropriateness) of each item for inclusion in a QSEN-framed clinical evaluation instrument for NP students. Reviewers were also asked to provide detailed comments on individual items and on the instrument as a whole. The first round of review determined whether the items clearly and thoroughly addressed the construct of clinical performance evaluation for the NP student. The second round verified that reviewer feedback was accurately reflected in modifications made to items and formally assessed the content validity of the items and the overall scale.

Content Validation Process

The content validity index (CVI), which quantifies content validity from expert panel ratings of item relevance, was chosen as the assessment approach. The CVI is widely used to determine content validity for multi-item scales in nursing research.13 It computes consensus estimates, thereby quantifying the extent to which experts agree. A low level of agreement among expert reviewers would indicate that the instrument did not create a shared understanding of the construct and therefore was not a valid measure of clinical competency for NP students.

The CVI was used to determine the degree of relevance for each item and to calculate a value for the overall instrument as a scale. During the second round of review, item relevance was calculated from scores assigned by 6 expert reviewers on a 4-point ordinal scale: 1, not relevant; 2, somewhat relevant; 3, quite relevant; and 4, highly relevant. The item CVI (I-CVI) was computed as the number of experts giving a rating of 3 or 4 divided by the total number of experts; the resulting value indicates the proportion of experts agreeing on an item's relevance. Polit and Beck13 suggest that with 4 or fewer experts, 100% agreement is needed, but with 5 or more experts, a small amount of disagreement, with 1 rating of "not relevant," is acceptable for an item to still be considered valid. The scale-level CVI (S-CVI) was computed from the item CVIs using both the universal agreement (UA) and averaging (Ave) methods, with 0.80 or higher and 0.90 or higher, respectively, as the lower limits of acceptability for scale-level values.13 The proportion of agreement, an additional method to calculate expert agreement, and a modified κ statistic, a consensus index of interrater agreement that adjusts for chance agreement, were also computed.
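In symbols (our notation; the computations follow the standard formulation described by Polit and Beck13), for an item rated relevant (3 or 4) by A of N experts:

\[ \text{I-CVI} = \frac{A}{N}, \qquad \text{S-CVI/UA} = \frac{\text{number of items with I-CVI} = 1}{\text{number of items}}, \qquad \text{S-CVI/Ave} = \text{mean of the I-CVIs}, \]

and the modified κ adjusts the I-CVI for the probability of chance agreement:

\[ P_c = \binom{N}{A}(0.5)^N, \qquad \kappa^* = \frac{\text{I-CVI} - P_c}{1 - P_c}. \]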

Data Analysis

Microsoft Excel was used to calculate mean scores, standard deviations, the CVI for each item, and the S-CVI in each round of review. Items rated quite relevant and highly relevant (ratings 3 and 4) were grouped together, as were items rated not relevant and somewhat relevant (ratings 1 and 2). The item-level information, as well as the comments of the expert reviewers, was used to refine or discard items. The proportion of agreement was calculated to identify items determined to be relevant by all expert reviewers, and a modified κ coefficient was computed to adjust for chance agreement among reviewers that an item was relevant.
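For readers who prefer a scripted version, the minimal Python sketch below reproduces the item- and scale-level computations described above. The ratings matrix is hypothetical and illustrates the data format only; it is not study data.

```python
from math import comb
from statistics import mean, stdev

# Hypothetical 4-point relevance ratings (1 = not relevant ... 4 = highly
# relevant); rows are items, columns are the 6 expert reviewers.
ratings = [
    [4, 4, 3, 4, 4, 3],  # all 6 experts rate the item relevant (3 or 4)
    [4, 3, 2, 4, 4, 3],  # 1 expert dissents: I-CVI = 5/6 = 0.83
]

N_EXPERTS = len(ratings[0])

def i_cvi(item):
    """Proportion of experts rating the item 3 (quite) or 4 (highly) relevant."""
    return sum(1 for r in item if r >= 3) / len(item)

def modified_kappa(icvi, n):
    """I-CVI adjusted for the probability of chance agreement."""
    agreeing = round(icvi * n)
    p_chance = comb(n, agreeing) * 0.5 ** n
    return (icvi - p_chance) / (1 - p_chance)

i_cvis = [i_cvi(item) for item in ratings]
s_cvi_ua = sum(v == 1.0 for v in i_cvis) / len(i_cvis)  # universal agreement
s_cvi_ave = mean(i_cvis)                                # averaging method

for item, v in zip(ratings, i_cvis):
    print(f"mean={mean(item):.2f}  sd={stdev(item):.2f}  "
          f"I-CVI={v:.2f}  kappa*={modified_kappa(v, N_EXPERTS):.2f}")
print(f"S-CVI/UA={s_cvi_ua:.2f}  S-CVI/Ave={s_cvi_ave:.2f}")
```

Swapping in the full 34-item by 6-reviewer ratings matrix would reproduce the analysis reported in the Results section.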

Results

The first round of review yielded 2 of the 33 items with an I-CVI of 0.71 (of a possible 1; that is, 5 of the 7 experts rated the item relevant); those items were restructured based on feedback. Comments from reviewers guided refinement to increase the clarity and comprehensiveness of specific items. An additional item related to preparation for the clinical experience was added. Of the 7 original reviewers, 6 participated in the second round of review, which yielded 34 items with an I-CVI of 0.83 or higher, indicating that all items were content valid and relevant for inclusion in the final version of the instrument (Supplemental Digital Content, http://links.lww.com/NE/A857, Table).

The S-CVI of the 34-item instrument was computed using both the UA method and the Ave method. Universal agreement calculates the proportion of items rated 3 or 4 by all content experts and can range from 0 to 1. The S-CVI/UA for the final version of the instrument was 0.97, well above the minimal requirement of 0.80. The averaging method calculates the mean I-CVI across all 34 items; Polit and Beck13 recommend an S-CVI of 0.90 or higher for an instrument to be considered to have excellent content validity using this method. The S-CVI/Ave was 0.995 (of a possible 1). Such close agreement between the 2 calculation methods does not always occur; here it was attributable to the fact that 33 of the 34 items had an I-CVI of 1, meaning all experts scored those items as quite or highly relevant.
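As a worked check (our arithmetic, assuming the single remaining item carried an I-CVI of 0.83, ie, 5 of 6 experts, as the reported values imply):

\[ \text{S-CVI/UA} = \frac{33}{34} \approx 0.97, \qquad \text{S-CVI/Ave} = \frac{33(1) + \frac{5}{6}}{34} \approx 0.995. \]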

A modified κ coefficient was reported for each item to adjust for chance agreement among reviewers that an item is relevant. Across the 34 items, the modified κ ranged from 0.81 to 1, well above the 0.75 considered excellent.13 The proportion of agreement, a scale-level measure of the proportion of experts agreeing on the relevance of all items included in the scale, was also computed and found to be 0.995, well above the minimal standard of 0.80.13
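For concreteness, a worked example using the chance-agreement formula given earlier (our computation): with 6 experts, an item with an I-CVI of 0.83 (5 of 6 experts agreeing) yields

\[ P_c = \binom{6}{5}(0.5)^6 = \frac{6}{64} \approx 0.094, \qquad \kappa^* = \frac{0.83 - 0.094}{1 - 0.094} \approx 0.81, \]

whereas an item with an I-CVI of 1 yields \( \kappa^* = 1 \), consistent with the reported range.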

Pilot Study

The instrument was pilot tested with 16 NP students, 5 NP clinical faculty who serve as site visitors, and 25 preceptors over a 4-month period in a primary care course with a patient care clinical requirement. The student participants were 11 family NP students and 5 adult gerontology NP students. All stakeholders were informed of the pilot testing and received emailed instructions for using the evaluation instrument for the midterm and final evaluations at the beginning of the semester, at midterm, and before completion of the final evaluation.

The evaluation instrument was introduced to faculty at a graduate program meeting at the start of the semester. It was loaded into an electronic database for ease of use but was also made available as a paper form for preceptors who were unable to complete it electronically. All students, faculty, and preceptors completed the evaluation at midterm and during the final week of the semester. The completed evaluations were used to guide discussion of student progress during the midterm and final evaluation meetings that occurred between the student, preceptor, and faculty.

After the midterm and final evaluations, students, faculty, and preceptors were asked for feedback on the evaluation instrument in an open-text format. All 5 NP site visitors commented on the utility and ease of use of the instrument. Four reported the instrument as thorough, and 1 noted that it was lengthy. Two commented that the tool reflected both the academic and professional criteria expected of students.

Eight of the 25 preceptors responded to feedback requests. Most reported that the evaluation covered all pertinent areas of NP education, was thorough, and was easy to use. One former student, now a new preceptor, stated that the instrument "was so much better than what was used during my program," completed 1 year earlier. Two of the 25 preceptors reported that the instrument was lengthy. Disappointingly, no students provided feedback.

Discussion

The process of scoring clinical competency varies among schools and programs, with some using a pass/fail rating and some assigning a numerical grade based on criteria and a preset calculation. Regardless of the process, the literature indicates that clinical evaluation is perceived by students and faculty as a subjective process,14,15 creating the imperative for a valid and reliable measure of student performance.

This study used the CVI, a well-established index for determining the validity of multi-item scales in nursing education. Establishing face validity with NP faculty as a first step provided an item check. The descriptions and qualifications of the reviewers established their expertise for participating in the validation process. Inviting detailed comments served to refine items in the context of NP education and contemporary practice. All reviewers were knowledgeable about the core competencies for NP practice; including expert reviewers who were well versed in the QSEN competencies for advanced practice, as well as experts who were not, supports the alignment of the QSEN competencies for advanced practice with contemporary NP practice.

The Quality and Safety Framed Clinical Evaluation for NP Students was pilot tested over 4 months and was well received by preceptors, faculty, and students. The instrument creates a 3-dimensional lens on student progress, combining the perspectives of the student, preceptor, and faculty member; in this pilot test, all stakeholders used the same instrument to evaluate student performance, creating an environment of transparency and equity of stakeholder input. Currently, no single instrument identified in the literature provides a 3-dimensional interpretation of NP student clinical performance progress.

The items of the evaluation reflect current practice and are congruent with the core competencies identified by organizations that set the standard for NP practice and education. Of concern, 1 faculty member who participated in the pilot testing acknowledged not knowing what the Plan-Do-Study-Act cycle referenced in item 20 meant. This speaks to the need to support seasoned faculty with continuing education so that they stay informed about current practice concepts.

Nursing Implications

This clinical evaluation instrument is a comprehensive scale that levels expectations across all clinical site settings. Because the instrument captures the common denominators of NP practice, it is applicable to a variety of primary care sites, such as family, pediatric, gerontologic, and internal medicine practices, clinics, fast tracks, student health centers, and atypical sites such as prisons and occupational health settings. The items can easily be applied to telehealth or telemedicine rotations. In this study, the instrument was pilot tested in an NP primary care program with 2 tracks, family and adult gerontology practice. Students in the program complete rotations in gynecology, obstetrics, pediatrics, and specialty areas in addition to the base rotations in family practice and gerontology sites, and the instrument's items were appropriate across each of these. The instrument is easily adaptable to pediatric NP or women's health programs for evaluating their students. Other specialized programs, such as psychiatric NP or acute care NP programs, may also consider its utility in the evaluative process.

The Quality and Safety Framed Clinical Evaluation for NP Students is appropriate for use by students, faculty, and preceptors as a means of implementing a consistent performance evaluation of the NP student by all stakeholders. The quality and safety criteria serve as a template that transcends specific course foci, encompassing the varied levels of learning from the first clinical course to the final clinical course in preparation for graduation. The organization and clarity of the items provide students a transparent trajectory of their performance and allow students to participate in reflecting on their own performance.

Organizing the instrument using the graduate-level QSEN competencies provides a structure that is easily adaptable to many nursing programs. As in practice, standardization reduces variation and clarifies expectations. Standardizing a program's clinical evaluation form, framed in quality and safety competencies, serves to bridge the gap between academia and practice by clearly identifying competent practice. Establishing the content validity of an evaluation instrument used for high-stakes assessment of clinical competence is fundamental to ensuring a transparent and meaningful analysis of student competency. The evaluation instrument is available for download at https://qsicenter.tcnj.edu.

Limitations

This study established content validity for the Quality and Safety Framed Clinical Evaluation for NP Students, which is only one aspect of validity. The instrument was pilot tested for 1 semester in a single NP clinical course with family NP and adult gerontology NP students. Future studies are needed to provide further validity and reliability data and to assess its adaptability to other NP tracks.

Conclusion

Content validation is essential when developing instruments for high-stakes performance evaluation. Evidence from this study suggests that the Quality and Safety Framed Clinical Evaluation for NP Students is a content-valid instrument for evaluating the clinical performance of NP students. Framing NP clinical evaluation in the QSEN competencies for advanced nursing practice provides a clear and organized structure that reflects current practice. The high degree of agreement among experts in academia and practice, together with the feedback from the pilot test, indicates that the instrument reflects contemporary nursing education and practice and provides a relevant measure of NP student clinical competence.

References

1. Fang D, Li Y, Turinetti MD, Trautman DE. 2018-2019 Enrollment and Graduations in Baccalaureate and Graduate Programs in Nursing. Washington, DC: American Association of Colleges of Nursing; 2019.
2. Sheth S. Number of nurse practitioners increasing nationwide, but will it last for long? Health eCareers. 2011. Available at https://www.healthecareers.com/article/healthcare-news/number-of-nurse-practitioners-increasing-nationwide-but-will-it-last-for-long. Accessed January 20, 2020.
3. Cronenwett L, Sherwood G, Pohl JM, et al. Quality and safety education for advanced nursing practice. Nurs Outlook. 2009;57(6):338–348. doi:10.1016/j.outlook.2009.07.009
4. Pearson T, Garrett L, Hossler S, McConnell P, Walls J. A progressive nurse practitioner student evaluation tool. J Am Acad Nurse Pract. 2012;24(6):352–357. doi:10.1111/j.1745-7599.2012.00713.x
5. Cotter VT, Bradway CK, Cross D, Taylor MA. Clinical evaluation tools for dual track adult and gerontology nurse practitioner students. J Am Acad Nurse Pract. 2009;21(12):658–662. doi:10.1111/j.1745-7599.2009.00463.x
6. Logan BL, Kovacs KA, Barry TL. Precepting nurse practitioner students: one medical center's efforts to improve the precepting process. J Am Assoc Nurse Pract. 2015;27(12):676–682. doi:10.1002/2327-6924.12265
7. Tractenberg RE, Wilkinson MR, Bull AW, Pellathy TP, Riley JB. A developmental trajectory supporting the evaluation and achievement of competencies: articulating the Mastery Rubric for the Nurse Practitioner (MR-NP) program curriculum. PLoS One. 2019;14(11):e0224593. doi:10.1371/journal.pone.0224593
8. National Organization of Nurse Practitioner Faculties, American Association of Colleges of Nursing. 2016 Criteria for Evaluation of Nurse Practitioner Programs: A Report of the National Task Force on Quality Nurse Practitioner Education. 5th ed. NONPF; 2016.
9. Pitts C, Padden D, Knestrick J, Bigley MB. A checklist for faculty and preceptor to enhance the nurse practitioner student clinical experience. J Am Assoc Nurse Pract. 2019;31(10):591–597. doi:10.1097/JXX.0000000000000310
10. Pohl JM, Savrin C, Fiandt K, et al. Quality and safety in graduate nursing education: cross-mapping QSEN graduate competencies with NONPF's NP core and practice doctorate competencies. Nurs Outlook. 2009;57(6):349–354. doi:10.1016/j.outlook.2009.08.002
11. Altmiller G, Hopkins-Pepe L. Why Quality and Safety Education for Nurses (QSEN) matters in practice. J Contin Educ Nurs. 2019;50(5):199–200. doi:10.3928/00220124-20190416-04
12. Thomas A, Crabtree MK, Delaney K, et al. Nurse Practitioner Core Competencies Content. 2017. Available at https://cdn.ymaws.com/www.nonpf.org/resource/resmgr/competencies/20170516_NPCoreCompsContentF.pdf. Accessed March 12, 2019.
13. Polit DF, Beck CT. Nursing Research: Generating and Assessing Evidence for Nursing Practice. 11th ed. Philadelphia, PA: Wolters Kluwer; 2021.
14. Altmiller G. Student perceptions of incivility in nursing education: implications for educators. Nurs Educ Perspect. 2012;33(1):15–20. doi:10.5480/1536-5026-33.1.15
15. Cassidy S. Subjectivity and the valid assessment of pre-registration student nurse clinical learning outcomes: implications for mentors. Nurse Educ Today. 2009;29(1):33–39. doi:10.1016/j.nedt.2008.06.006
Keywords:

clinical evaluation; instrument development; nurse practitioner; QSEN; quality and safety

Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.