GAPS, METHODS, AND RESULTS FROM 4 INQRI TEAMS
Team Report: Measuring the Quality of Nursing Care Related to Pain Management
Principal investigators: Susan L. Beck and Gail L. Towsley, University of Utah College of Nursing, Salt Lake City, UT.
Gaps in Measurement
Our team addressed 3 gaps in measuring the quality of pain management care: (1) pain is a prevalent problem that crosses multiple populations; (2) measuring the quality of pain management adds to measures of patient experience; and (3) pain care is relevant to nursing as well as the interdisciplinary team.3 Efforts to relate performance to pain outcomes have been global and are based on an entire hospital stay14; such measures do not connect the care of a nurse or team to the patient. We thus developed and tested a tool to measure the quality of nursing and interdisciplinary care within a specific patient encounter (a shift of care) using a patient-centered approach.
Our goal was to develop a parsimonious and clinically practical tool to measure the quality of care related to pain management in the acute care setting. We engaged adult patients in identifying the aspects of pain management that were important to them. Initially, we conducted semistructured interviews of 33 patients about their experience with pain and its management.15 Key theoretical constructs with 102 items were proposed; items addressed gaps in measurement by including positive aspects of care (eg, believing the patient’s report of pain) and wording that was applicable across populations of hospitalized patients in pain. Expert panels evaluated the content validity of candidate items.7 Subsequently, hospitalized patients (n=39) evaluated the items in cognitive interviews.7 We found that some items were related to nursing care and some were related to care provided by the interdisciplinary team. Because the focus of evaluation and the timeframe were different, 2 Pain Care Quality (PainCQ) surveys resulted: the PainCQ-N (Nursing) and the PainCQ-I (Interdisciplinary). Finally, we tested the reliability and validity of these surveys in 3 samples of hospitalized patients (total n=667), reducing the items over time from 44 to 33 to 20.8,9
Our cumulative evidence supports the reliability and validity of the final measure, which is parsimonious and includes 20 items in 2 surveys: the PainCQ-Interdisciplinary and the PainCQ-Nursing (Table 1). The PainCQ as a whole explained 19% to 31% of the variance in pain outcome scores measured by the Brief Pain Inventory (n=446).9 The PainCQ surveys, a product of the INQRI initiative, are available for use in hospital-based quality improvement in oncology and medical-surgical settings and may be particularly helpful in units trying to improve national survey scores related to pain.
Team Report: 6th Vital Sign: Hospitalized Children’s Evaluation of the Quality of Their Daily Nursing Care
Principal investigators: Nancy A. Ryan-Wenger and William Gardner, Nationwide Children’s Hospital, Columbus, OH.
Gaps in Measurement
There is a need for tested measures of patient experiences with care.3 Adult patients’ judgments about satisfaction with care and quality of care are a measurement priority16 and a function of their perceived need for care, expectations of care, and actual experiences of care.17 In pediatric hospitals, parents are routinely asked to complete postdischarge “patient satisfaction” surveys. We are not aware of any hospital that asks pediatric patients to evaluate their hospital experience or the care that they received. Nearly half of all hospitalized pediatric patients are 6 to 21 years old and most are fully capable of providing these evaluations if given the opportunity.
Our goal was to measure the quality of pediatric nursing care from the children’s perspective.10 A total of 496 hospitalized pediatric patients, aged 6 to 21 years, participated in this cross-sectional descriptive study using semistructured interviews. The children provided 1673 responses to the question “What do you like most about your nurses?” Seventy-one percent (n=354) responded to the question about nurse behaviors that they did not like.
All participants named at least 1 behavior that they liked (range=1 to 9, mode=4); those responding to the second question named 485 behaviors (range=1 to 4, mode=1) that they did not like (Table 1). Responses to both questions were inductively sorted into 12 positive and 6 negative nurse behavior categories. Positive nurse behaviors fell into categories congruent with accepted definitions of care quality.18 For example, “checking on me often” made them feel safe; “giving me what I need when I need it” represented timely and efficient care; “gives me medicine” was beneficial; and “talks and listens to me” indicated patient-centered care.
These data yielded a 10-item instrument called the 6th Vital Sign, designed to be administered once per day during a child’s hospitalization. Children’s experiences with each nursing care item are scored as 0=hardly ever, 1=sometimes, or 2=most of the time. Children’s expectations of nursing care are reflected in their response to the question, “How does that make you feel?” which is scored as 0=bad, sad, or mad; 1=OK, it doesn’t matter to me; or 2=good or happy. Quality nursing care from the children’s perspective is defined as a score of 1 or 2 on the expectation scale for each item. When children report that they feel bad, sad, or mad (score of 0), a change in nursing care may be necessary. We propose that asking children themselves about the quality of care they receive can be a new vital sign, guiding care better adapted to children’s medical and emotional needs.
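The scoring and follow-up rules described above can be sketched in code. This is an illustrative sketch only: the item names and data layout are assumptions for the example, not part of the published 6th Vital Sign instrument.

```python
# Hypothetical sketch of the 6th Vital Sign scoring rules described above.
# Item names and data layout are illustrative assumptions, not the
# published instrument.
#
# Experience scale: 0 = hardly ever, 1 = sometimes, 2 = most of the time.
# Expectation scale ("How does that make you feel?"):
#   0 = bad, sad, or mad; 1 = OK, it doesn't matter to me; 2 = good or happy.

def flag_items_for_follow_up(responses):
    """Return items where the child's expectation score is 0,
    signaling that a change in nursing care may be necessary."""
    return [item for item, scores in responses.items()
            if scores["expectation"] == 0]

# One child's responses for a single day (illustrative item names).
daily = {
    "checks on me often":      {"experience": 2, "expectation": 2},
    "talks and listens to me": {"experience": 1, "expectation": 2},
    "gives me medicine":       {"experience": 0, "expectation": 0},
}

print(flag_items_for_follow_up(daily))  # -> ['gives me medicine']
```

Administered once per day, such a rule could surface items needing a change in care in real time at the bedside.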
Team Report: Nurse Staffing, Discharge Preparation, and Postdischarge Utilization
Principal investigators: Marianne Weiss, Olga Yakusheva, and Kathleen Bobay, Marquette University, Milwaukee, WI.
Gap in Measurement
Our team investigated the gap in knowledge related to linkages between unit-level nurse staffing (structure variable), discharge teaching (care process variable), patient perception of readiness for hospital discharge (proximal outcome variable), and postdischarge utilization, defined as readmission and Emergency Department (ED) visits within 30 days after discharge (postdischarge outcome variables).13 We extended previous work on hospital-level staffing19–24 to the nursing unit as the level of analysis and incorporated a care process measure as a possible explanatory pathway for the relationship between staffing structure and patient outcomes.
We analyzed panel data from 1892 adult patients discharged from 16 medical-surgical units of 4 hospitals in a 7-month period. Questionnaires measuring quality of discharge teaching and readiness for discharge were completed within 4 hours before hospital discharge. The Quality of Discharge Teaching Scale25 measured the perceptions of discharge teaching; patients (or their family caregiver if the patient could not respond) answered questions about the amount of discharge-related teaching received from their nurses during the hospital visit and the teaching skills of the nurse. The Readiness for Hospital Discharge Scale26 measured patient perception in 4 domains: Personal Status (how I feel today), Knowledge, Perceived Coping Ability (anticipated coping at home after discharge), and Expected Support. Readmission and ED use within 30 days after discharge served as the postdischarge utilization outcome variables. Four simultaneous regression equations examined direct effects of unit-level nurse staffing on postdischarge outcomes and indirect effects through the nursing care process of discharge teaching and the proximal outcome of readiness for discharge.
Higher Registered Nurse (RN) nonovertime hours-per-patient-day (HPPD) was associated with fewer readmissions within the first 30 days after discharge, and higher RN overtime HPPD was associated with more ED visits (Table 1). The pathway from nurse staffing through discharge teaching and readiness for discharge was associated with postdischarge ED visits. The annualized net cost-saving estimate for the 16 study units from investment in a 1 SD (0.75 RN HPPD) increase in RN staffing exceeded $10 million.
The delivery of discharge teaching was more strongly related to perceptions of readiness for discharge than was the amount of content patients reported receiving (with perceived need controlled).13 In a small subset of parallel nurse and patient ratings of discharge readiness (n=162), nurse assessment of discharge readiness, but not patient perception, was associated with postdischarge utilization.11,12
Team Report: Development of a Composite Measure of Direct Care Staff Expertise
Principal investigators: Nancy Donaldson, University of California San Francisco (UCSF), San Francisco, CA, and Carolyn Aydin, Cedars-Sinai Medical Center and Burns & Allen Research Institute, Los Angeles, CA.
Gap in Measurement
The UCSF Collaborative Alliance for Nursing Outcomes (CALNOC) study was designed to address the need to effectively model the relationship of microsystem (structure) variables with process and outcomes.3 Specifically, our team’s goal was to develop an empirical predictive model examining effects of unit-level nurse workload, staff nurse characteristics, and selected risk assessment and preventive intervention processes of care on variance in acute care medical/surgical patient outcomes. We built on previous work that had linked nurse staffing characteristics (hours of care and indicators of expertise such as skill mix, years of experience, and percent of RNs with Bachelors of Science in Nursing degrees and specialty certification) with outcomes, but left a great deal of actual or potential variance unexplained, unaccounted for, or simply unaddressed.27,28
A measurement substudy, the focus of this report, examined the significance and predictive power of a unit-level composite measure of nurse expertise on key clinical processes of care and on nursing-sensitive patient-level outcomes (incidence of patient falls and injury falls, prevalence of hospital-acquired pressure ulcers and restraint use, medication administration accuracy, and peripherally inserted catheter-associated bloodstream infections). Key indicators of nurse expertise included highest degree, years of experience, and certifications. The conceptual model (Fig. 1) provides a schematic representation of the staff characteristics (effectiveness and qualifications) that were the focus of our composite measurement effort. We did not include model-of-care and organizational environment variables.
The measurement substudy was based on CALNOC RN Education/Experience Survey data for direct care staff from 144 hospital units drawn from 219 hospitals. We used factor analysis to reduce the dimensionality of the nurse expertise variables. Multiple regressions were fitted to each outcome to assess the effects of workload, RN qualifications, and processes of care, adjusted for patient and hospital characteristics.
The mean response rate for the CALNOC RN Survey was 98.51% (SD=45.36%) (Table 1); all units with a response rate of 25% or less were excluded from analysis. Despite multiple analytic strategies, we were not able to create a reliable composite measure for direct care staff expertise. No multi-item factors were found, which we attribute to the small sample of units with complete data. To further explore the benefit of a larger sample, we added data from units with small response rates (under 25%) and included data from 2008. Analysis of the larger sample identified 1 factor we labeled “workforce qualification” that included highest degree and certification status. We concluded that these findings were not reliable and likely a consequence of including outlier values resulting from units with very small response rates. Without the option of a composite metric, we examined the predictive value of individual measures. We found that patient outcomes were predicted by combinations of all elements in our model, including: unit/patient characteristics, nursing workload, individual RN expertise measures, and clinical processes; importantly, however, predictors were different for each outcome. Findings added to evidence related to how variation in structures and processes of care impacts outcomes.
These exemplar projects have addressed many of the 7 measurement gaps that drove the strategic goals of INQRI. (1) Our teams addressed the need for broad quality measures that are useful across populations, covering pain, falls, pressure ulcers, restraint use, medication administration accuracy, bloodstream infections, discharge preparation, and perceptions of daily nursing care. (2) Three teams engaged patients and families in determining and validating the important aspects of measures of patient experience of care for acutely ill adults and children. We created brief tools that can be used in the clinical setting to measure nurses’ contributions to quality of care. (3) Two teams tackled the issue of system or structure variables and tested new ways to model the relationships between structure, process, and outcome. (4) The Weiss team addressed the continuum from hospital to home, measuring the role of discharge preparation on readmissions and ED visits. (5) Measures focused on positive aspects of what nurses do: believing the patient’s pain, providing daily comfort care, and preparing patients to go home after a hospitalization. (6) Our teams augmented the evidence about the relationships between process and outcome measures, specifically between pain care quality and pain outcomes, discharge preparation and proximal and distal outcomes, and selected preventive measures and related outcomes. (7) The Beck team addressed the issue of measuring both nursing and interdisciplinary care as relevant to pain management.
The findings and the products from these examples of INQRI work add to the robust set of measures that are needed to capture nurses’ contributions to the care of hospitalized patients. In addition, the journey to complete these projects has been rich with lessons about the ongoing challenges related to measure development and large multisite investigations. This paper is neither a comprehensive review nor a summary of the work of all INQRI teams. These lessons may not be generalizable but reflect our “collective wisdom,” informed by our experiences in conducting this research to advance the science of measuring quality. The paper was revised based on input from a presentation to stakeholders at the INQRI national meeting in April 2012.
We identified persistent challenges in measurement that confronted our teams. The first challenge relates to perspective: whether to measure process from a nurse-centric or patient-centric perspective. For example, the Weiss team measured quality of discharge teaching from the patient perspective. Although this approach provided data on the efficacy of teaching as measured by amount of content received and the patient’s perception of the skills of nurses in “delivering” the teaching, it did not address the amount of discharge teaching delivered in terms of time spent or content given by the nurse. The limitation from a measurement and service delivery stance is that the amount of discharge teaching that nurses must do to achieve targeted levels of patient-perceived discharge teaching quality remains unknown and unmeasured. Therefore, we do not know how much nurse teaching time is needed, a critical question for assuring adequate allocation of nursing and material resources to achieve patient-centered outcomes. Measurement of the giving and receiving of nursing care will fill a gap in current knowledge about the causal relationships between structural dose (eg, nursing expertise) and process dose (amount of care delivered and received), and related patient outcomes. However, we need to add one more piece: patient actions following care processes. A complete model should include care delivered, care received (patient experience), patient actions (do they adhere to recommended care, and what effective and ineffective self-management strategies are implemented?), and outcomes including patient health and cost.
A second challenge, related to the measurement of “care delivered,” is how to conceptualize and measure the dose of care. The dose of individual nursing care is often measured as an amount, such as time spent with patients.29 In addition, assuming that the skill of nurses in delivering quality care should increase with education, experience, and skill mix,30,31 these structural elements of nursing knowledge contribute to an alternate conceptualization of dose of care. The Donaldson team sought to develop a composite measure of nursing expertise. They did not succeed, potentially because of limitations of sample size, and additional research is warranted. Moreover, structural dose, measured at the hospital or unit level, is a proxy measure for nursing care provided to patients as a group but does not reflect care at the individual patient level. It is important to note that Donaldson and Aydin found that patient outcomes are predicted by both structure and processes of care, suggesting that optimal patient care is the result of staffing adequacy as well as the content of staff practices.
At the individual level, measures of dose are measures of care given by the nurse and received by the patient. The Beck team tried to link nurses to patients’ experience of pain within a shift of care. It was extremely challenging to recruit enough patients under each “nurse” to have a reliable estimate, and the nurse variable, surprisingly, was not related to pain care quality.9 Point-of-care measures, integrated into the delivery system and measured in real time, are recommended.
Measuring dose becomes more complex as most patient care is not provided by a single nurse but by many nurses (from 1 or more units) and interdisciplinary team members who provide care to a patient over the course of hospitalization. The Ryan-Wenger team developed a tool to measure daily nursing care provided to children. The goal is to develop a measure that is sensitive to daily changes in children’s experience, so that the measure can provide real-time feedback when integrated as a “6th Vital Sign.” Because it is focused on the child’s experience, the measure captures the effects of the efforts of many nurses. In some aspects of care such as discharge preparation, measurement must also account for care delivered over time. Ideally, delivery and receipt of care must reflect the totality of care received over the course of hospitalization or an episode of illness.
A third challenge is how to account for the interdisciplinary aspect of patient care. Only the Beck team addressed this gap by developing a survey that evaluated the patient’s experience with care provided by the “health care team.” The validity related to measuring 1 aspect of this interdisciplinary care, involving the patients as partners in pain management, was supported but the relationship to outcomes was weak. The approach of asking about care delivered by the whole team over the hospital stay lacks specificity in terms of who delivered the care and when in relationship to pain measurement. More precise approaches are needed for explanatory models and to relate care to outcomes for quality improvement.
How do we account for all providers, resources, and components of the care process? There may be lessons from studies of practice-based evidence to inform ways to measure care delivered by multiple members of the interdisciplinary team.32 In such studies, extensive facilitated work engages providers to develop a discipline-specific measure of types of care that may be appropriate for specific patient populations.
Our findings highlight the ongoing challenges in health services research to model and interpret the complex, widely varied, unstable, and often unpredictable relationships that exist within diverse health systems.3 There is a need for a set of standardized, harmonized measures that capture nursing’s contribution to patient care quality, safety, costs, and outcomes. Our teams engaged in the rigorous and foundational research to advance the science necessary to address this need. Yet, the progress seems too slow in a dynamic health care delivery environment. How do we capture the complex nature of nursing care and the patient experience, collect meaningful information, control for important contextual variables, and keep it simple and affordable? Collecting patient-reported data must be simplified to keep the response burden on compromised patients low. Health care organizations want measures that are user friendly, are accompanied by adequate strategies for obtaining assistance, and include processes that ensure the accuracy of data collection.33
These challenges in measuring nursing’s contribution to care emerge within the context of the dramatic changes looming in health care delivery. None of our teams used the electronic health record (EHR) as a platform to design or extract our study measures. Yet, since the inception of INQRI, there has been an explosion in the development and integration of EHRs across delivery settings. The National Quality Forum has added electronic data capture to its criteria and is leading a national dialogue on the development of e-measures. The Agency for Healthcare Research & Quality has recently published version 1.2 of the nation’s Common Formats for hospitals, including key National Quality Forum nursing-sensitive measures related to patient safety.34 These advances create enormous opportunities for measuring nursing’s contribution to quality. Ideally, the work of quality measurement will be integrated into rapid learning systems that can use real-time data to measure quality and cost and make improvements for individual patients at the point of care and for populations by process improvement.35,36 The work by 1 of our teams (the Ryan-Wenger team) to use a technology-enhanced approach to real-time data collection and feedback from hospitalized children illustrates how this integration might unfold.
We have identified some key questions to guide future research and policy (Table 3). The time is now to consider how to leverage explosive developments in health information systems. We need to challenge the classical approaches to meeting psychometric standards. Alternative criteria, based on measures during real-time clinical care, can demonstrate that we can consistently measure what we want to measure and show that it changes over time in response to clinical intervention or patient behavior.
Collaborative efforts by private/public partnerships such as the Society for Behavioral Medicine and the National Cancer Institute to define behavioral health measures provide a model for advancing this work. They have created a website in which the research community can work virtually and collaboratively with colleagues to share measures and harmonize data.37
The health care community must harness the ideas and creativity of a new and diverse group of stakeholders to tackle these questions. Funding is needed to support a dialogue to address these issues and to fund projects to test innovative approaches to integrating existing quality measures in EHRs and developing new measures, using new criteria and approaches, to examine the contributions of nurses, interdisciplinary team members, patients, and their family to health outcomes.
1. Naylor MD, Volpe EM, Lustig A, et al. Interdisciplinary Nursing Quality Research Initiative. Med Care. 2013;51(suppl 2):S1–S5.
2. Naylor MD, Volpe EM, Lustig A, et al. Linkages between nursing and the quality of patient care: a 2-year comparison. Med Care. 2013;51(suppl 2):S6–S14.
3. Naylor MD. Advancing the science in the measurement of health care quality influenced by nurses. Med Care Res Rev. 2007;64:144S–169S.
4. Riehle AI, Hanold LS, Sprenger SL, et al. Specifying and standardizing performance measures for use at a national level: implications for nursing-sensitive care performance measures. Med Care Res Rev. 2007;64:64S–81S.
5. National Quality Forum. National Voluntary Consensus Standards for Nurse-Sensitive Care: An Initial Performance Measure Set. A Consensus Report. Washington, DC: National Quality Forum; 2004.
6. National Quality Forum. ABC’s of Measurement. Washington, DC: National Quality Forum; 2010.
7. Beck SL, Towsley GL, Berry PH, et al. Measuring the quality of care related to pain management: a multiple method approach to instrument development. Nurs Res. 2010;59:85–92.
8. Beck SL, Towsley GL, Pett MA, et al. Initial psychometric properties of the Pain Care Quality Survey (PainCQ). J Pain. 2010;11:1311–1319.
9. Pett MA, Guo JW, Beck SL, et al. Confirmatory factor analysis of the Pain Care Quality Survey (PainCQ©). Health Serv Res. 2012. doi:10.1111/1475-6773.12014.
10. Ryan-Wenger NA, Gardner W. Hospitalized children’s perspectives on the quality and equity of their nursing care. J Nurs Care Qual. 2012;27:35–42.
11. Bobay K, Jerofke T, Weiss M, et al. Age-related differences in perception of quality of discharge teaching and readiness for hospital discharge. Geriatr Nurs. 2010;31:178–187.
12. Weiss M, Yakusheva O, Bobay K. Nurse and patient perceptions of discharge readiness in relation to postdischarge utilization. Med Care. 2010;48:482–486.
13. Weiss ME, Yakusheva O, Bobay KL. Quality and cost analysis of nurse staffing, discharge preparation, and postdischarge utilization. Health Serv Res. 2011;46:1473–1494.
14. Miaskowski C, Nichols R, Brody R, et al. Assessment of patient satisfaction utilizing the American Pain Society’s Quality Assurance Standards on acute and cancer-related pain. J Pain Symptom Manage. 1994;9:5–11.
15. Beck SL, Towsley GL, Berry PH, et al. Core aspects of satisfaction with pain management: cancer patients’ perspectives. J Pain Symptom Manage. 2010;39:100–115.
16. Shaw G. Patient experience: help wanted. HealthLeaders. 2010:50–54. Accessed October 15, 2012.
17. Darby C. Patient/parent assessment of the quality of care. Ambul Pediatr. 2002;2:345–348.
18. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press; 2001.
19. Aiken LH, Clarke SP, Sloane DM, et al. Hospital nurse staffing and patient mortality, nurse burnout, and job dissatisfaction. JAMA. 2002;288:1987–1993.
20. Aiken LH, Clarke SP, Cheung RB, et al. Educational levels of hospital nurses and surgical patient mortality. JAMA. 2003;290:1617–1623.
21. Cho S, Ketefian S, Barkauskas VH, et al. The effects of nurse staffing on adverse events, morbidity, mortality, and medical costs. Nurs Res. 2003;52:71–79.
22. Needleman J, Buerhaus P, Mattke S, et al. Nurse-staffing levels and the quality of care in hospitals. N Engl J Med. 2002;346:1715–1722.
23. Seago JA, Williamson A, Atwood C. Longitudinal analyses of nurse staffing and patient outcomes: more about failure to rescue. J Nurs Adm. 2006;36:13–21.
24. Spetz J, Donaldson N, Aydin C, et al. How many nurses per patient? Measurements of nurse staffing in health services research. Health Serv Res. 2008;43:1674–1692.
25. Weiss ME, Piacentine LB, Lokken L, et al. Perceived readiness for hospital discharge in adult medical-surgical patients. Clin Nurse Spec. 2007;21:31–42.
26. Weiss ME, Piacentine LB. Psychometric properties of the Readiness for Hospital Discharge Scale. J Nurs Meas. 2006;14:163–180.
27. Lang TA, Hodge M, Olson V, et al. Nurse-patient ratios: a systematic review on the effects of nurse staffing on patient, nurse employee, and hospital outcomes. J Nurs Adm. 2004;34:326–337.
28. Clarke SP, Donaldson NE. Nurse staffing and patient care quality and safety. In: Hughes RG, ed. Patient Safety and Quality: An Evidence-Based Handbook for Nurses. Rockville, MD: Agency for Healthcare Research and Quality; 2008.
29. Brooten D, Youngblut JM. Nurse dose as a concept. J Nurs Scholarsh. 2006;38:94–99.
30. Manojlovich M, Sidani S. Nurse dose: what’s in a concept? Res Nurs Health. 2008;31:310–319.
31. Manojlovich M, Sidani S, Covell CL, et al. Nurse dose: linking staffing variables to adverse patient outcomes. Nurs Res. 2011;60:214–220.
32. Horn SD, Gassaway J. Practice-based evidence study design for comparative effectiveness research. Med Care. 2007;45:S50–S57.
33. Kosel K, Gelinas L, Paxson C. Nursing measures: implementation considerations: lessons learned from the field. Med Care Res Rev. 2007;64:82S–103S.
35. Etheredge LM. A rapid-learning health system. Health Aff (Millwood). 2007;26:w107–w118.
36. Etheredge LM. Creating a high-performance system for comparative effectiveness research. Health Aff (Millwood). 2010;29:1761–1767.
Keywords: quality; nursing; measurement; outcomes; nursing care; interdisciplinary; dose
© 2013 Lippincott Williams & Wilkins, Inc.