As nurse leaders, we're constantly faced with metrics and data designed to help us determine whether our organization is doing well and identify opportunities to improve performance. When we actually delve into improvement opportunities, or attempt to assess evidence that might point us to a better way of doing things, we're also faced with data and metrics. Finally, when we determine a change to be made and implement the change in our organization, we're called on to demonstrate—using data—that our change made a difference. This means that we need to be comfortable working with concepts or labels for things. We also must understand how these concepts are turned into measurements and numbers.
This two-part series will address the basic issues of defining and measuring variables to support you in assessing performance, making improvements, and ensuring that metrics are telling you what you think they're telling you. In this first column, we'll describe the process of defining concepts and turning them into metrics. We'll also address reliability and validity, which lay the groundwork for a solid measurement process.
Taking the measure
There are five basic steps in developing simple measures for use in performance improvement. (See Table 1.) As we go through these steps, we'll be using examples from two areas that most of us have to respond to in relation to our own performance: patient satisfaction and nursing staff turnover. In both cases, these metrics are familiar because of a large body of research indicating either that nursing affects the outcome (patient satisfaction) or that the variable (concept of interest) points to the likelihood of good or poor organizational outcomes (nursing turnover).
Before we can address the development of a measurement strategy that will give us reliable and valid results, we have to clearly define what it is we're trying to measure. Otherwise, we can spend effort in the measurement process without getting the data in which we're actually interested. Developing a conceptual definition—or defining our concept—means that we describe in words what we're trying to measure.
Conceptual definition: Patient satisfaction
There have been decades of research into the concept of patient satisfaction and a closely related concept: patient perceptions of the care experience. Early definitions still inform our definition of satisfaction as an evaluative attitude or perception that involves a comparison of the experience of healthcare services with expectations for service and an affective response or feelings about the experience.1-3 The focus for using satisfaction results lies in the potential to retain patients as "customers," use for employee and provider performance evaluation, and use in marketing and contracting discussions. There was a debate in the literature and among healthcare leaders about whether patients were even in a position to evaluate the quality or completeness of their care.
This skepticism about patient perceptions was dispelled as research began to demonstrate correlations between patient perceptions of care and other acknowledged metrics reflecting quality.4,5 Therefore, the definition of patient satisfaction has expanded to include perceptions of dimensions of quality. These dimensions are now reflected in the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey required of all participants in Centers for Medicare and Medicaid Services reimbursement. The concept includes the overall perception of the patient regarding care received, as well as summary evaluations such as the likelihood to recommend the hospital/provider and the likelihood to revisit the hospital/provider.6 A conceptual definition of patient satisfaction may be "an evaluative attitude formed when a patient compares expectations of care or service with the actual experience of care received."
Conceptual definition: Nursing turnover
Nursing turnover refers to the act of a nurse leaving a position of employment. It's often expressed as a rate of departure to depict the turnover seen within a specific work group, patient care unit, or organization. When we consider the movement of staff out of jobs as an organizational outcome, it's important to create a specific conceptual definition so that the measures that are taken correspond to the actual issue of interest. Turnover is most commonly reported as the number or rate of nurses leaving an organization.7 If we're assessing the adequacy of employment benefits that cut across the organization, this may be the most relevant conceptual definition.
However, if the focus of the inquiry is on the unit-level practice environment, then unit-level exits, including transfers within the organization, may provide the most useful information. In many cases, we're interested primarily in voluntary turnover, which is the rate at which nurses choose to leave a job (as opposed to nurses who are terminated or laid off).8 Thus, it's important to include the reason for measurement and the scope and targeted population in a conceptual definition such as “the rate of voluntary departure by unit for the whole nursing organization.”
Developing an operational definition
An operational definition describes how a concept will be measured. This is easiest when the conceptual definition has been specifically spelled out. Armed with a specific conceptual definition, you can search for measures that have already been developed and use data from those measures as the operational definition. It's important to look at the definition that's the basis for the measure to make sure it represents what you're interested in. Background on an existing instrument should provide evidence of validity (the measure captures the concept of interest) and reliability (the measure consistently captures the concept).
In our patient satisfaction example, the HCAHPS survey is described as collecting information about specific aspects of patients' experience with a healthcare organization and also includes summary or global questions about the patients' overall assessment of the organization and likelihood to recommend.6 There are items that specifically address nursing activities that have been shown to influence satisfaction, as well as other aspects of care. Substantial data regarding testing of validity and reliability are provided.5 The resulting operational definition would be “patient satisfaction as measured by average HCAHPS scores from patients being discharged from all med-surg units over the past 6 months.”
In our nursing turnover example, a commonly understood method for calculating a turnover rate is to divide the number of staff departing by the average number of staff on board during the measurement period. An example of an operational definition for the concept of voluntary unit turnover might be "the annual rate of voluntary departures by unit as measured by the number of exits meeting criteria for being 'voluntary' divided by the average number of staff on board at the beginning and end of the measurement period, expressed as a percentage."
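For readers who maintain their own spreadsheets or dashboards, the calculation in that operational definition can be sketched in a few lines of code. This is a minimal illustration only; the unit, staffing counts, and number of exits are hypothetical.

```python
def voluntary_turnover_rate(voluntary_exits, staff_at_start, staff_at_end):
    """Annual voluntary turnover rate as a percentage: the number of
    voluntary exits divided by the average number of staff on board
    at the beginning and end of the measurement period."""
    average_on_board = (staff_at_start + staff_at_end) / 2
    return voluntary_exits / average_on_board * 100

# Hypothetical unit: 30 nurses at the start of the year, 26 at the end,
# with 7 exits during the year that met the criteria for "voluntary."
rate = voluntary_turnover_rate(7, 30, 26)
print(f"Voluntary turnover: {rate:.1f}%")  # 7 / 28 * 100 = 25.0%
```

Note that the denominator averages the beginning and ending headcounts, exactly as the operational definition specifies; using only one of the two counts would overstate or understate the rate when staffing changes during the year.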
Reliability and validity
Finding measures that reflect the concept of interest and have evidence of reliability and validity is the first step in creating metrics that can make a difference. You'll remember the concepts of reliability and validity from graduate school or undergraduate research courses. Reliability is the degree to which a measure performs consistently over time and across subjects. Validity is the degree to which a measure actually captures the concept that it was intended to measure. If there's no consistency in measurement, you can't be sure you're measuring anything, let alone the concept you wanted to measure. Therefore, reliability is a prerequisite for validity.
Figure 1 shows the concepts of reliability and validity as hitting a target. The target itself represents the concept we're trying to measure, such as patient satisfaction or nursing turnover. If a metric isn't reliable, each measurement we take, or each subject who responds, gives us a result in a different location in relation to our target. The results aren't consistently clustered together, and we can't say what, if any, concept the measure is capturing. In the second example, the measures are reliable but not valid. The results cluster closely together (are consistent), but the cluster isn't near the center of the target. Therefore, we know we're measuring "something," but not what we had intended to measure. Finally, if we have a reliable and valid measure, all of our measurements will cluster together and hit close to the bull's-eye of the target. In research terms: If a metric is reliable and valid, it will consistently capture data that actually represent the concept in which we're interested.
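The target analogy can also be expressed numerically: reliability corresponds to low spread across repeated measurements, and validity to an average that lands close to the concept's true level. The sketch below uses three made-up sets of scores on a hypothetical 0-to-100 scale; the "true" value and all the numbers are invented for illustration.

```python
import statistics

TRUE_VALUE = 80  # the bull's-eye: the concept's actual level (hypothetical)

# Three hypothetical sets of repeated measurements of the same concept
unreliable = [55, 92, 68, 103, 74]          # scattered: not reliable
reliable_not_valid = [61, 62, 60, 63, 61]   # tight cluster, but off target
reliable_and_valid = [79, 81, 80, 78, 82]   # tight cluster near the target

for name, scores in [("not reliable", unreliable),
                     ("reliable but not valid", reliable_not_valid),
                     ("reliable and valid", reliable_and_valid)]:
    spread = statistics.stdev(scores)                  # low spread = reliable
    distance = abs(statistics.mean(scores) - TRUE_VALUE)  # low = valid
    print(f"{name}: spread = {spread:.1f}, distance from target = {distance:.1f}")
```

Only the third set shows both a small spread and a small distance from the target, mirroring the final panel of the figure, and the second set illustrates why reliability alone isn't enough: the scores agree with each other but not with the concept.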
In next month's column, we'll consider how to turn a metric into an actual measurement strategy so that you get valid and reliable data in your setting, as well as environmental influences that can affect reliability, validity, and the value of a specific measurement process.
References
1. Chang CS, Chen SY, Lan YT. Service quality, trust, and patient satisfaction in interpersonal-based medical service encounters. BMC Health Serv Res
2. Pascoe GC. Patient satisfaction in primary health care: a literature review and analysis. Eval Program Plann
3. Ware JE Jr, Snyder MK, Wright WR, Davies AR. Defining and measuring patient satisfaction with medical care. Eval Program Plann
4. Giordano LA, Elliott MN, Goldstein E, Lehrman WG, Spencer PA. Development, implementation, and public reporting of the HCAHPS survey. Med Care Res Rev
5. Jha AK, Orav EJ, Zheng J, Epstein AM. Patients' perception of hospital care in the United States. N Engl J Med
7. Coomber B, Barriball KL. Impact of job satisfaction components on intent to leave and turnover for hospital-based nurses: a review of the research literature. Int J Nurs Stud
8. Gilmartin MJ. Thirty years of nursing turnover research: looking back to move forward. Med Care Res Rev