Systems Thinking for Transitions of Care

Reliability Testing for a Standardized Rubric

Phillips, Janet M., PhD, RN, ANEF; Stalter, Ann M., PhD, RN; Ruggiero, Jeanne S., PhD, RN, CNE; Bonnett, Pamela L., DNP, RN, CNE; Brodhead, Josette, PhD, MSHS, RN, RNC-MNN, CNE; Merriam, Deborah H., DNS, RN, CNE; Scardaville, Debra L., PhD, RN, CPNP-PC; Wiggs, Carol M., PhD, RN CNM, AHN-BC; Winegardner, Sherri, DNP, RN, MSN, MHA

doi: 10.1097/NUR.0000000000000443
Feature Article

Purpose: The purpose of this study was to develop a standardized rubric for systems thinking across transitions of care for clinical nurse specialists.

Design: The design was a mixed-methods study using the Systems Awareness Model as a framework for bridging theory to practice.

Methods: Content validity was determined using a content validity index. Reliability was established using statistical analysis with Cronbach’s α and intraclass correlation coefficient. Usability of the rubric was established using content analysis of focus group discussions about the raters’ experiences in using the rubric.

Results: Content validity was established with a content validity ratio of 1.0. Statistical analysis showed high interrater reliability (α = 0.99), and sections of the rubric showed a strong degree of reliability, with α’s ranging from 0.88 to 1.00. Content analysis revealed 3 overall themes for usability of the rubric: clarity, objectivity, and detail. The main area for improvement was adding more detail to the scholarly writing section.

Conclusion: The research team recommends using the rubric to reflect application of systems thinking across transitions of care.

Author Affiliations: Clinical Associate Professor (Dr Phillips), Indiana University School of Nursing, Indianapolis; Associate Professor (Dr Stalter), Wright State University, Dayton, Ohio; Associate Professor (Dr Ruggiero), New Jersey City University; Director (Dr Bonnett), RN-BSN Program, The University of Akron, Ohio; Full-Time Faculty and Assistant Professor of Nursing (Dr Brodhead) and Assistant Professor (Dr Merriam), Daemen College, Amherst, New York; Professor and Graduate Program Coordinator (Dr Scardaville), Department of Nursing, New Jersey City University; Associate Professor and RN-BSN Track Administrator (Dr Wiggs), The University of Texas Medical Branch, Galveston; and Director of Nursing and Associate Professor (Dr Winegardner), Bluffton University, Ohio.

The authors report no conflicts of interest.

Correspondence: Janet M. Phillips, PhD, RN, ANEF, 4002 Tolbert Place, Carmel, IN 46074 (janetphillipsrn@gmail.com; janephil@iu.edu).

Essential skills for clinical nurse specialists (CNSs) in today’s complex healthcare system include systems thinking (ST) for transitions of care. Systems-based practice is needed across the healthcare spectrum to address increasing complexity and rapid rates of change to provide safe, quality care.1 Clinical nurse specialists are poised to facilitate safe transitions of care using ST. Systems thinking is the “ability to recognize, understand, and synthesize the interactions and interdependencies in a set of components designed for a specific purpose.”2 As ST is being adopted at all levels of nursing education,3 including continuing education and practice,4 ways to evaluate learner knowledge and skills of ST are paramount in addressing patient quality and safety across transitions of care.

Accurate and consistent evaluation of ST in CNS education and practice can be accomplished by using a standardized rubric to assess learner knowledge and skills. For example, CNSs can use rubrics to rate performance of nurses through educational offerings or competency assessments. Rubric standards can be used to ensure that ST competencies are met and can also be used as tools for student reflection in areas that need improvement.5 There is a dearth of research evaluating the use of rubrics to measure ST. This study describes the development and evaluation of a standardized rubric for measuring ST for transitions of care, which can be used in all CNS education and practice.

Systems thinking across transitions of care is a critical skill for the CNS in today’s complex healthcare systems. Systems thinking is an essential part of education at all levels of nursing, from the associate degree to doctoral levels, including the advanced practice of the CNS. In practice, the CNS plays a primary role in care coordination and transitions of care, promoting quality and safety within health systems and community-based settings such as home care or hospice.6 Systems thinking has become part of clinical nursing education curricula, including continuing education.4,7 Systems thinking provides a framework for bridging theory to practice across all levels of education.8

What is a standardized analytic rubric? Rubrics, a term derived from the Latin for red ink, are a coherent set of criteria used to grade and evaluate the quality of student learning.9,10 Recently, rubrics have been used more frequently and progressively as tools for competency evaluation, linking levels of performance to sets of specific criteria.11,12 Analytic rubrics provide evaluators with the opportunity to impart explicit improvement feedback on each criterion evaluated.13 Cockett and Jackson14 completed an integrative review to explore the use of rubrics to enhance feedback in higher education. They found that rubric use enhanced student self-assessment, self-regulation, and understanding of assessment criteria.

Development of the Standardized Analytic Rubric

The researchers in this study were experienced nurse educators and members of the international Quality and Safety Education for Nurses (QSEN) RN-BSN Task Force, focusing on ST. The Task Force developed a standardized analytic rubric to be used across all levels of CNS education and practice and evaluated its content validity, reliability, and general usability.

In preparing for this study, the researchers addressed 3 components of the development of the standardized analytic rubric: (1) development of a standardized rubric for ST and transitions of care according to criteria for analytic rubrics, (2) development of mock papers with varying writing patterns, and (3) training for quantitative and qualitative data collection.

Development of the Standardized Rubric for ST and Transitions of Care

To enhance learner improvement feedback, analytic rubrics must be well constructed, encompassing 4 criteria: (1) 1 or more criteria for determining student response/performance, (2) clear definitions for each criterion, (3) a rating scale, and (4) a standard of excellence against which to level student response/performance on each criterion.15 Analytic rubric criteria should be aligned with course learning outcomes, curriculum goals, and discipline-specific rating scales. Rating scales should indicate the level of learning mastery, providing opportunities for improvement feedback on each criterion assessed.16

As the authors were preparing to develop the rubric, they discovered the complexity of integrating essential theory components with practice experience. The challenge of offering learners meaningful experiences based on academic and clinical foundations is common among faculty.17 The authors set out to create a rubric that could be used as a pathway across varying experience levels. The opportunity to employ learner improvement feedback in caring, supportive ways has a significant impact on academic progression in nursing, such as developing CNS practice roles,18 as well as offering steps to civility in professional practice.8 Advancing a meaningful academic assignment with opportunities to share ideas that boost student learning among nurses practicing across distinct experience levels was the ultimate goal.

The authors acknowledged that the rubric needed to accommodate the needs of a broad spectrum of universities, schools, programs, courses, and nurse educators in academia and practice. To address this need, they used the Systems Awareness Model (SAM) to lead the quality and safety focus, which provided a framework from which to showcase national criteria.3 Through brainstorming, the authors determined that the following 7 SAM-aligned categories were essential to the rubric: (1) Nursing Roles: Care Management & Coordination, (2) SAM Model for Transitions of Care, (3) Home Health and Hospice Nursing: Standards of Care for Reimbursement, (4) Implementing Quality and Safety to Create a Just Culture, (5) Interprofessional Models of Care to Improve Outcomes, (6) Ethical and Legal Decision-Making in Coordinating Care Transitions, and (7) Leading Care Transitions in Complex Health Care Systems. The authors expanded on ideas presented in an ST rubric developed by Stalter and Jauch19 and then adapted a rubric focused on leadership and ST20 to address ST for transitions of care.

In addition to the SAM-aligned components, the authors recognized a need to determine whether the rubric coincided with the academic preparation and hiring qualifications that practice partners perceived as relevant to transitioning care. A study by Stalter and Kaylor21 of directors of nursing at home health and hospice agencies in Ohio identified gaps in the knowledge and experience levels of nurses entering jobs where transitions of care were required.

Swider et al22 developed 3 tiers describing practice experience in relation to Quad Council23 competency domains, particularly Leadership and Systems Thinking (domain 8). The Quad Council tiers provided a bridge between academic preparation and clinical roles, reinforcing transition standards from a public health nursing perspective. These tiers made it possible to categorize outcomes according to educational preparation and clinical experience, especially as they related to the roles of the student and the CNS in home health or hospice agencies. The authors used these evidence-based competencies to develop a standardized rubric for ST and transitions of care, encompassing both undergraduate and graduate learning.

The final rubric contained 10 content-related criteria for transitions of care: (1) self-description; (2) inquiry effort specific to home health or hospice agencies; (3) site selection, rationale, and goals; (4) role, setting, and vulnerable population description; (5) role of care coordination explanation; (6) observation in situ; (7) systems perspective and ST; (8) lessons learned; (9) increased understanding of home health and hospice roles in transitions of care; and (10) maintaining American Psychological Association (APA) format–related academic requirements and academic honesty. The rubric identified 3 levels of performance (exemplary, developing, and substandard). Rating scales were developed in accordance with points associated with letter grades (Table 1).
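
As an illustration of this structure only (the criterion names, descriptors, and point values below are hypothetical and abbreviated, not the authors’ actual rubric content), an analytic rubric of this shape maps each criterion to leveled descriptors and point values:

```python
# Hypothetical sketch of an analytic rubric's structure: each criterion
# carries a descriptor and point value for each of the 3 performance levels.
rubric = {
    "Systems perspective and ST": {
        "exemplary":   {"points": 10, "descriptor": "Synthesizes system interdependencies"},
        "developing":  {"points": 7,  "descriptor": "Identifies some system components"},
        "substandard": {"points": 3,  "descriptor": "Little evidence of systems thinking"},
    },
    "APA format and academic honesty": {
        "exemplary":   {"points": 5, "descriptor": "Consistent, error-free APA format"},
        "developing":  {"points": 3, "descriptor": "Minor, recurring APA errors"},
        "substandard": {"points": 1, "descriptor": "Pervasive APA errors"},
    },
}

def score_paper(levels_assigned):
    """Total score = sum of the points at the level assigned for each criterion."""
    return sum(rubric[criterion][level]["points"]
               for criterion, level in levels_assigned.items())

# Example: a paper rated exemplary on ST but developing on APA format.
print(score_paper({"Systems perspective and ST": "exemplary",
                   "APA format and academic honesty": "developing"}))  # 13
```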

Development of Mock Papers With Varying Writing Patterns

An educator on the research team, experienced in creating rubrics, developed 3 mock papers with varying writing patterns to reflect the levels of competency for transitions of care and ST. Having a variety of papers to rate created an opportunity to yield more robust interrater reliability findings. The 3 mock papers were therefore based on the levels of performance listed on the rubric: (1) exemplary, (2) developing, and (3) substandard.

Quantitative Analysis Training and Directions

The raters, also members of the QSEN RN-BSN Task Force, were sent an email by the primary investigator (PI) inviting them to participate in the study as rubric raters. Eight Task Force members were recruited as raters. They were not incentivized to participate in the study, aside from knowing they were contributing to the science of ST and improving the quality of evaluative feedback. Raters were experienced nurse educators who collectively had experience teaching traditional bachelor of science in nursing (BSN) students, registered nurse (RN)–to–BSN students, master’s degree in nursing education students, doctoral candidates, nurse practitioners, and CNS students in face-to-face, online, clinical, and/or laboratory settings.

Once the raters volunteered to participate, they were emailed rating instructions and electronic versions of the coded rubrics and papers. The email directed raters to read the papers and complete the associated rubrics, returning them to the PI within a 14-day period. Each rubric criterion contained areas for grading the mock paper (score and feedback), rating the rubric (1 = satisfactory or 0 = unsatisfactory), and recording rater recommendations for rubric improvement (Table 2). Each rater was provided the PI’s and co-PI’s telephone numbers and email addresses to clarify questions on how to use the rubric.

At the end of the 14-day period, the raters returned all rubrics to the PI, who maintained them on a password-protected computer that was locked in a university office. After the completed rubrics were received, the PI sent a thank-you email containing an online polling link inviting the raters to participate in a 1-hour focus group session to discuss their experiences in using the rubric.

Qualitative Analysis Training and Directions

Eight participants were invited to the focus group session, and a focus group teleconference was set up. A majority of the raters (n = 6) participated in the teleconference. For the 2 raters who could not attend, a virtual focus group session was set up using a shared drive with electronic files. The PI and co-PI planned questions for the focus group session based on the data collection guidelines suggested by Krueger and Casey.24 Table 3 highlights the questions that guided the data collection from the raters regarding use and impressions of the rubric.

The focus group held a 1.5-hour phone meeting to discuss each of the questions. The PI facilitated the meeting by asking the questions. The co-PI served as the scribe and transcribed the focus group responses in a shared computer drive, where all participants could synchronously view the responses. Each focus group member agreed that the written account of the focus group discussion was accurate. In addition, the PI added comments from the virtual focus group for those who could not attend the phone meeting. All comments were transcribed into a table for content analysis using the stages of (1) decontextualization, (2) recontextualization, (3) categorization, and (4) compilation.25 A second reviewer, a Task Force member with expertise in content analysis, provided feedback on the accuracy and trustworthiness of the analysis, including its validity and reliability.

METHODS

The study was declared exempt by the institutional review board. Evaluation of the standardized rubric was accomplished through a mixed-methods design for the establishment of content validity, reliability, and usability of the rubric. Content validity was determined using a content validity index. Reliability was established using statistical analysis with Cronbach’s α and intraclass correlation coefficient (ICC). Usability of the rubric was established using content analysis of focus group discussions about the raters’ experiences in using the rubric.

Content Validity

Content validity was established by rating the 11 items on the rubric as relevant (1) or not relevant (0). Agreement across raters was determined by calculating the percentage of congruence among the raters, with a benchmark of 0.51. As exemplified by Lawshe,26 the content validity index was then determined by taking the potential number of outcomes and dividing by the number of items not meeting the benchmark.
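
For reference, Lawshe’s26 published content validity ratio for a single item (the standard formula, restated here rather than quoted from this article) is:

```latex
% Lawshe's content validity ratio (CVR) for one item, where n_e is the
% number of raters judging the item relevant/essential and N is the
% total number of raters:
\mathrm{CVR} = \frac{n_e - N/2}{N/2}
% With all 8 raters judging an item relevant: CVR = (8 - 4)/4 = 1.0,
% consistent with the ratio of 1.0 reported in the Results below.
```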

Reliability

Through statistical consultation, Cronbach’s α was used to estimate the interrater reliability of the rubric. Cronbach’s α was calculated overall using combined data for the 3 papers. Interrater reliability was also assessed with the ICC, which is an estimate of the proportion of variation attributable to differences between experimental units; 1 minus the ICC estimates the variation within an experimental unit, which in this case is the variation due to the raters. Pairwise correlations of rater scores were also examined to assess whether any of the raters differed from the others.
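
A minimal computational sketch of these 2 statistics, assuming the combined scores are arranged as a papers-by-raters matrix, appears below. This is illustrative only: the article does not report its analysis code, and it does not state which ICC form was used, so a one-way random-effects ICC is shown.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha treating raters as 'items'; rows = papers, columns = raters."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                                 # number of raters
    rater_vars = scores.var(axis=0, ddof=1).sum()       # sum of per-rater variances
    total_var = scores.sum(axis=1).var(ddof=1)          # variance of paper totals
    return (k / (k - 1)) * (1 - rater_vars / total_var)

def icc_oneway(scores):
    """One-way random-effects ICC(1) from the one-way ANOVA mean squares."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    row_means = scores.mean(axis=1)
    ms_between = k * ((row_means - scores.mean()) ** 2).sum() / (n - 1)
    ms_within = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Under this reading, an ICC of 0.89 leaves 1 − 0.89 = 0.11, that is, roughly 11% of the total variation attributable to rater differences, matching the Results reported below.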

Usability

Content analysis was used to identify themes from the focus group discussions about the usability of the rubric. Both positive and negative opinions were explored, as were recommendations for improvements and change. The PI and co-PI coded the raters, rubrics, and mock papers (ie, rater 1, paper 1), maintaining the reviewers’ anonymity; thus, each rater received 3 coded rubrics and 3 coded mock papers. Seven themes were identified, concluding with 3 overall themes.

RESULTS

Content Validity

The content validity index showed that all 11 criteria were relevant, indicating a content validity ratio of 1.0 (Table 4).

Reliability

A strong degree of interrater reliability was found across the 8 raters (Cronbach’s α = 0.99). Individual rubric criteria also showed a strong degree of reliability, with α’s ranging from 0.88 to 1.00 (Table 5). Pairwise correlations of rater scores did not reveal any particular rater as being different from the others (Table 6). The ICC was 0.89, implying that only 11% of the total variation was due to differences between raters.

Usability

Theme I: Involvement

Opening question: Why did you want to participate in the study?

The first theme identified by the focus group was that they were interested and wanted to be involved in the process of refining a rubric to objectively measure ST. Members discussed the reasons why they wanted to be involved with the rubric rating project. One rater stated:

I see faculty at my university struggle with creating rubrics all of the time, so I was interested in another faculty’s perspective. We are piloting a new electronic tool to measure program outcomes. I was curious to see how this rubric might work to measure systems thinking outcomes for BSN students.

Group members agreed that being involved in the rubric rating project would help them to understand the process of creating and refining a rubric, which is especially needed for evaluating students’ awareness of ST.

Theme II: Consistency

Introductory question: Describe a time when you have used a rubric to grade and what you liked about it.

The second theme that the focus group agreed upon was that rubrics provide structure and consistency for grading and transparency for students. One rater pointed out:

I use rubrics for all my assignments. I feel that rubrics provide structure for the student. For course sections with adjunct faculty, the rubrics provide consistency with the grading process. The student can read the feedback provided on the rubric to know exactly why they were deducted points or to know when they excelled.

Theme III: Clarity

Transition question 1: What qualities did you like about using the rubric (the one that you rated)?

The third theme identified the value of clarity in the rubric, which provided consistency across the categories with clear point distributions. One rater described:

I like how the points were distributed for each section. The subcriteria provided the student structure to develop their paper. This allows them to focus on the critical constructs.

The focus group members agreed that clarity and consistency across the definitive grading categories were critical in grading the papers successfully.

Theme IV: Assigning Points

Transition question 2: What qualities did you dislike about using the rubric (the rubric that you rated)?

The fourth theme suggested that some raters had difficulty assigning points because there were 3 categories to choose from when grading the papers: (1) exemplary, (2) developing, and (3) substandard. These raters indicated that some criteria in each of the categories could have been more specific so that there was no element of subjectivity in the grading. One rater shared:

I struggled with the point spread because the middle-ground performance was hard to rate.

All but one rater agreed that the section on grading APA needed to be more detailed.

Theme V: Precision

Key question 1: Describe the clarity of each of the criteria and levels of performance.

Each section of the rubric was analyzed individually for clarity, identifying the fifth theme. Focus group members agreed that, overall, the levels of performance in each section were clear and precise, except for the grading of APA format. One rater commented:

Each criterion provided the student with what concepts needed to be included in the assignment. The wording was clear and understandable. The levels of performance were appropriately spaced.

The raters agreed that, overall, the rubric criteria measured student learning. It was suggested that one criterion under scholarly effort, “preparation for assignment,” be eliminated because it seemed to be extraneous information that was not helpful in using the rubric to grade the mock papers.

Theme VI: Mastery

Key question 2: How did the criteria allow for learning mastery and professional growth and/or development opportunities across interprofessional levels?

In identifying theme VI, raters agreed overall that the criteria were clear for learning mastery. One rater commented:

Each criterion focused on critical concepts regarding the topic. The student can read each area and be able to identify the level of mastery he/she has in each area.

It was suggested and agreed upon by the raters that professional growth and/or development be removed from the rubric because it was beyond the scope of the paper.

Theme VII: Explicitness

Key question 3: Explain how the feedback sections provided an ability for raters to impart explicit improvement feedback on each criterion evaluated.

The feedback sections for each criterion allowed the raters to write narrative comments to the student, augmenting the scoring. In identifying theme VII, the raters indicated that the narrative feedback allowed them to address any areas of subjectivity while describing explicit improvements. One rater said:

I liked the open-ended text box, so you can tell the student anything they might need to work on in the future or what they did well. You can let them know what is reflective of their grade if they did not get the full points.

The raters agreed that narrative feedback allows them to explicitly describe the reason for the score, while also providing areas for improvement to the student.

Overall Themes: Clarity, Objectivity, Detail

Ending question 1: What 3 qualities make using the rubric effective for determining student mastery of learning?

In identifying the overall themes, raters discussed the 3 factors that allowed them to use the rubric effectively: clarity, objectivity, and detail. All raters agreed that these 3 factors must be in place when determining mastery of learning. One rater commented that the effectiveness of this rubric was evident because it was (1) clear, (2) objective, and (3) sufficiently detailed.

Ending question 2: What 3 recommendations do you have to improve the rubric?

Data from the responses to this question were not rich enough to identify a theme. However, the 3 main recommendations for improving the rubric were to (1) provide more detail in the APA section, (2) delete the scholarly effort section (students had already completed the modules), and (3) delete the professional growth/development section.

DISCUSSION

The need for nurse educators to substantiate the validity and reliability of grading rubrics across advanced practice roles such as the CNS is evident. The public health sector highlights the need to validate student learning outcomes according to degree level.22 This article focused on psychometrics (validity and reliability) and improvement of a standardized rubric for ST, expanding it beyond leadership among baccalaureate students to transitions of care for all levels of CNS education.

Standardized rubrics provide educators with objective criteria for evaluation. Regarding ST, 3 undergraduate-focused rubrics were identified in the literature, none of which had been evaluated for reliability and validity.18,19 In general, this is a common finding regarding the use of rubrics.11 Several authors assert that rubrics have historically contained poorly worded criteria and dubious rating categories.11,27–29

This study was designed to determine whether the rubric for ST and transitions of care was reliable, valid, and easy to use. A mixed-methods approach was used to evaluate the rubric. Content validity was established, indicating that content and skills were basic to the topics of care transitions and ST. Quantitative measures determined consistency of paper grading among raters. Statistical analysis indicated a strong degree of interrater reliability with minimal variation between raters. The focus group discussion provided insight into raters’ experiences in using the rubric. The overall qualitative data suggested that the rubric was clear and objective and contained key concepts for learning outcomes for ST and transitions of care. The focus group data provided helpful feedback for improving the rubric such as removing extraneous information that could not be measured (ie, preparation for assignment) and expanding the writing style criteria.

Limitation

While this study addressed measuring ST in students’ understanding of transitions of care, there is a need for further development of levels of ST for systems-based practice for graduate-level practicing nurses.

CONCLUSION

This study described the development and evaluation of a standardized rubric for measuring ST across transitions of care for CNS education and practice. Results indicated that (1) content validity was established, indicating that the content and skills were basic to ST and the topics of care transitions; (2) high interrater reliability was seen across the rubric, with sections of the rubric showing a strong degree of reliability; and (3) content analysis of the focus group feedback revealed excellent usability of the rubric. The rubric can be used to measure transitions of care to enhance systems-based practice for improvement of quality and safety, highlighting the crucial role of the CNS as a change agent in working with patients in and out of home care, home health practitioners, and health systems to improve patient outcomes.30 The rubric is available with permission from the authors.

References

1. Stalter AM, Phillips JM, Dolansky MA. QSEN Institute RN-BSN Task Force: white paper on recommendation for systems-based practice competency. J Nurs Care Qual. 2017;32(4):354–358.
2. Dolansky MA, Moore SM. Quality and Safety Education for Nurses (QSEN): the key is systems thinking. Online J Issues Nurs. 2013;18(3):9.
3. Phillips JM, Stalter AM, Dolansky MA, McKee-Lopez G. Fostering future leadership in quality and safety in health care through systems thinking. J Prof Nurs. 2016;32(1):5–24.
4. Phillips JM, Stalter AM. Integrating systems thinking into nursing education. J Contin Educ Nurs. 2016;47(9):395–397.
5. Naber JL, Theobald A. Development of a school of nursing rubric. J Nurs Educ Pract. 2015;5(9):49–53.
6. Impact of the Clinical Nurse Specialist role on the costs and quality of health care. National Association of Clinical Nurse Specialists website. https://nacns.org/2013/12/clinical-nurse-specialists-uniquely-qualified-to-ensure-high-quality-cost-effective-health-care-essential-to-meeting-nations-needs/. Published 2013. Accessed July 18, 2018.
7. Holle CL, Rudolph JL. Management of delirium across an integrated health system. Nurs Manage. 2018;49(3):24–34.
8. Phillips JM, Stalter AM, Winegardner S, Wiggs CM, Jauch A. Systems thinking and incivility in nursing practice: an integrative review. Nurs Forum. 2018;55(3):286–298.
9. Dictionary.com website. https://www.dictionary.com/. Accessed July 17, 2018.
10. Popham J. What’s wrong and what’s right with rubrics. Educ Leadersh. 1997;55(2):72–75.
11. Minnich M, Kirkpatrick AJ, Goodman JT, et al. Writing across the curriculum: reliability testing of a standardized rubric. J Nurs Educ. 2018;57(6):366–370.
12. Velasco-Martínez LC, Tójar-Hurtado JC. Competency-based evaluation in higher education—design and use of competence rubrics by university educators. Int Educ Stud. 2018;11(2):118.
13. Yune SJ, Lee SY, Im SJ, Kam BS, Baek SY. Holistic rubric vs. analytic rubric for measuring clinical performance levels in medical students. BMC Med Educ. 2018;18(1):124.
14. Cockett A, Jackson C. The use of assessment rubrics to enhance feedback in higher education: an integrative literature review. Nurse Educ Today. 2018;69:8–13.
15. Dawson P. Assessment rubrics: towards clearer and more replicable design, research and practice. Assess Eval High Ed. 2017;42(3):347–360.
16. Grainger P, Christie M, Thomas G, et al. Improving the quality of assessment by using a community of practice to explore the optimal construction of assessment rubrics. Reflective Pract. 2017;18(3):410–422.
17. Taylor EW. Transformative learning theory. In: Laros A, Fuhr T, Taylor EW, eds. Transformative Learning Meets Bildung. International Issues in Adult Education. Rotterdam, the Netherlands: SensePublishers; 2017:17–29.
18. White DA. Faculty behaviors influencing intent to pursue graduate education among RN-BSN students. Teach Learn Nurs. 2018;13(2):108–112.
19. Stalter AM, Jauch A. Systems thinking education in RN-BSN programs: a regional study. Nurse Educ. 2018.
20. Stalter AM, Phillips JM. Leadership and systems thinking to assess transitions of care and population health in home health and hospice agencies. Nursing educators innovative teaching strategy. Association of Community Health Nursing Educators website. https://www.achne.org/i4a/pages/index.cfm?pageid=3382. Published 2018. Accessed August 28, 2018.
21. Stalter AM, Kaylor MB. A work force study: attitudes, skills and knowledge attributes of the home health and hospice nurse (ASK-A-HHN). Presented at the Ohio Council of Home Care and Hospice Agencies Annual Conference; Columbus, Ohio: 2018.
22. Swider SM, Krothe J, Reyes D, Cravetz M. The Quad Council practice competencies for public health nursing. Public Health Nurs. 2013;30(6):519–536.
23. Joyce BL, Harmon M, Johnson RGH, Hicks V, et al. Community/public health nursing faculty’s knowledge, skills and attitudes of the Quad Council Competencies for public health nurses. Public Health Nurs. 2018;35(5):427–439.
24. Krueger RA, Casey MA. Focus group interviewing research methods. Richard A. Krueger website. https://richardakrueger.com/focus-group-interviewing/. Published 2015. Accessed July 8, 2018.
25. Bengtsson M. How to plan and perform a qualitative study using content analysis. NursingPlus Open. 2016;2:8–14.
26. Lawshe CH. A quantitative approach to content validity. Pers Psychol. 1975;28:563–575.
27. Oakleaf M. Using rubrics to assess information literacy: an examination of methodology and interrater reliability. J Am Soc Inf Sci Technol. 2009;60:969–983.
28. Shipman D, Roa M, Hooten J, Wang Z. Using the analytic rubric as an evaluation tool in nursing education: the positive and the negative. Nurse Educ Today. 2012;32(3):246–249.
29. Stellmack M, Konheim-Kalkstein Y, Manor J, Massey A, Schmitz J. An assessment of reliability and validity of a rubric for grading APA-style introductions. Teach Psychol. 2009;36:102–107.
30. Adams JH. The role of the clinical nurse specialist in home health. Home Healthc Now. 2015;33(1):44–48.
Keywords:

clinical nurse specialist; standardized rubric; systems thinking; transitions of care

Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved