Journal of Nursing Administration: March 2001 - Volume 31 - Issue 3
Articles

Lessons Learned While Collecting ANA Indicator Data

Jennings, Bonnie Mowinski DNSc, RN, FAAN; Loan, Lori A. PhD, RNC; DePaul, Debra MSN, RN; Brosch, Laura R. PhD, RN; Hildreth, Pamela MS, RN


Author Information

Bonnie Mowinski Jennings, DNSc, RN, FAAN, bonnie jennings@tma.osd.mil, Colonel, US Army Nurse Corps, TRICARE Management Activity, Health Program Analysis and Evaluation, Falls Church, Virginia,

Lori A. Loan, PhD, RNC, Chief, Nursing Research Service, Madigan Army Medical Center, Tacoma, Washington,

Debra DePaul, MSN, RN, Program Manager, TriService Nursing Research Program, Bethesda, Maryland,

Laura R. Brosch, PhD, RN, Lieutenant Colonel, US Army Nurse Corps, Chief, Nursing Research Service, Walter Reed Army Medical Center, Washington, DC,

Pamela Hildreth, MS, RN, Lieutenant Colonel (ret), US Army Nurse Corps, Community Services Quality Assurance Area Manager, Washington State Department of Social and Health Services, Olympia, Washington.

This report is from a study supported by the TriService Nursing Research Program, grant N97-011. When the work was conducted, all authors were members of the nursing staff at Madigan Army Medical Center, Tacoma, Washington.

The opinions expressed in this paper are entirely those of the authors and do not necessarily reflect the opinion of the Department of Defense, Department of the Army or the Army Medical Department.


Abstract

Recognizing the importance of linking nursing's contribution to quality patient care, the investigators conducted a pilot study to determine whether data regarding the quality indicators proposed by the American Nurses' Association (ANA) could be collected from five acute-care inpatient units at one medical center that is part of a multisite managed care system. Although it was determined that data regarding the ANA quality indicators could be collected at the study site, a variety of unanticipated findings emerged. These findings reflect both discrepancies and congruities between how the investigative team expected the ANA indicators to operate and what was actually experienced. The lessons learned while collecting ANA indicator data are shared to assist future users and to advance the evolution of the ANA indicators.

The triad of goals propelling the healthcare industry (cost, quality, and access) poses a complicated balancing act. The inextricable relationship between cost and quality has been acknowledged for years. And yet, the reality is that issues of healthcare costs often take center stage, overshadowing access and quality.1 It is therefore not surprising that as goals of cost effectiveness are achieved in managed care organizations, there is a coexisting need to ensure that quality is not compromised. In fact, the quality of healthcare in America is coming under increasing scrutiny.1-3

As a large part of hospital labor costs, nursing personnel are often targeted when cost reductions are needed. Intuitively, it seems that insufficient numbers of nursing personnel could compromise the quality of inpatient care. Although nurses are available to inpatients 24 hours a day, the relationship between nurse staffing and quality is somewhat ambiguous from an empirical perspective.4,5 Thus, the American Nurses' Association (ANA) developed a set of quality indicators to reflect nursing's contribution to patient care.6-8 These indicators could be used to complement existing institutional performance measures and report cards.

The real test of an indicator's utility is its ability to be measured and used in practice. As the nursing staff at a medical center in the Pacific Northwest gave serious consideration to implementing the ANA indicators as a gauge of nursing care quality, various pragmatic measurement issues slowed the momentum for institutionalizing the collection of these indicators. The nurses recognized considerable complexity surrounding the seemingly simple ANA indicators, giving rise to their uncertainty about the practical ability to routinely capture data reflecting these indicators. To address this complexity, a pilot study was conducted to explore the feasibility of collecting data specific to the ANA indicators at the medical center. Following a brief synopsis of the ANA Nursing Care Report Card project and an overview of this pilot study (the Nursing Quality Indicator Project), this article delineates lessons learned during the pilot study.


ANA Nursing Care Report Card

Advocacy for safe, high-quality patient care is an ANA priority. Thus, the paucity of nursing-sensitive quality measures on existing healthcare organization report cards served as a catalyst to formulate a nursing report card.6 The acute-care setting provides the initial focus for the ANA report card, with future plans to expand the initiative into all care settings.7 Critical to interpretation of a report card is understanding the link between patient outcomes and the quality of nursing care. Such an understanding can provide guidance to report card-based organizational restructuring efforts, decisions regarding staffing mix, and clues about how changes in care delivery may affect patients' health.8

Donabedian's9,10 classic triad of structure, process, and outcome provided the conceptual foundation for the ANA indicators. Following an extensive literature review, expert panel discussions, and focus group interviews, 21 nursing care quality indicators were identified for potential use in the report card. These 21 indicators were selected based on theoretical or established evidence of their strong link to nursing care quality in the acute-care setting.6

Interviews were then conducted with staff from hospitals and national healthcare organizations (e.g., The Joint Commission) to refine and reduce the number of indicators. Information from interviews was presented to the ANA report card advisory group. Selection of the final set of indicators was based on two factors: first, the indicators' specificity to nursing, that is, the demonstrated strength of their relationship to nursing quality; and second, their ability to be tracked, that is, the presence of existing mechanisms to capture indicator data.7 This process yielded ten indicators pertaining to structure (two indicators), process (two indicators), and outcomes (six indicators). Together, these indicators are referred to as the ANA nursing care report card.6-8 The ANA definitions of these indicators (Table 1) are essential to guide data collection, measurement, and tracking. When completed, the goal of the ANA Nursing Quality Report Card is to include indicators that have demonstrated the ability to link nursing care quality to patient outcomes.8

Table 1

Nursing Quality Indicator Pilot Study

Realizing the importance of understanding nursing's contribution to patient care quality, the investigators of this pilot study saw tremendous potential in the ANA nursing quality indicators. The ANA indicators afford a common context for benchmarking among facilities. Consequently, to help create an anchor for nursing in today's turbulent healthcare environment, it seems inherently wise for nurse administrators to consider monitoring nursing quality indicators with demonstrated links to quality. As previously noted, however, doubts surfaced regarding the practicality of routinely capturing data reflecting the ANA quality indicators.

As a consequence of a strong belief in the potential value of these indicators, a pilot study was conducted to assess the feasibility of collecting the ANA nursing quality indicators at one military medical center. This was a prospective study that used data collected over a 3-month period between June and August 1998. Data were obtained from a daily review of automated inpatient records, administration of patient satisfaction and nurse job satisfaction surveys, and a review of nursing administrative reports. Nursing care quality data were collected from automated inpatient records of adults hospitalized for greater than 24 hours on five acute-care nursing units. Nursing job satisfaction data and patient satisfaction data were also collected from the five study units. Patient satisfaction data were collected after hospital discharge.


A Priori Expectations

While it was ascertained that the ANA quality indicators could be collected at a military medical center, the potential to routinely assess quality using these indicators was tempered by other factors, such as the five a priori expectations of the ANA indicators held by the investigative team. We expected the ANA quality indicators would:

• provide standardized definitions for nursing care quality indicators that could ultimately facilitate comparisons across multiple sites;

• be easy to retrieve and affordable, using existing data sources, personnel resources, and minimal time demands;

• be relevant, believable, and acceptable to clinicians and administrators;

• be measurable to include specific details regarding measurement; and

• have a demonstrated link to nursing care.

We also assumed that the ANA indicators are sensitive to changes in nursing care quality. Each of these expectations is addressed by examining them in light of the experiences encountered by the investigators while conducting this pilot study.


Lessons Learned

Standardization
Denominator Specification

A common phrase is "the devil is in the details." For measures reflecting rates such as skin integrity, nosocomial infections, and patient injury, the common phrase might be modified to read "the devil is in the denominator." The denominator is critical to defining what constitutes the aggregate base and thus what the report card reflects of the institution. There must be a standardized approach to defining the denominator.

Although the definitions of some quality indicators denote the denominator as total number of patient days (Table 1), use among multiple sites necessitates more precise descriptions regarding the types of patients included and excluded in calculating "patient-days." For example, the acute-care report card excludes obstetrical and psychiatric patients, while intensive care and medical-surgical patients are included. Compliance with these inclusion/exclusion criteria is not difficult. However, this important detail needs to be followed meticulously and consistently among sites to ensure that like things are compared. The accurate specification of the denominator is complicated by the need to cull medical and surgical patients from aggregate hospital numbers. Failure to do so will distort the derived denominator and impair accurate assessments of quality. This problem is exacerbated, especially in small facilities, where multiple patient populations (medical, surgical, and pediatric) are co-located.
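The denominator arithmetic itself is simple; the standardization problem is applying the same inclusion/exclusion rules at every site. The following sketch illustrates one way to make those rules explicit before computing a rate (the unit names and exclusion list are illustrative, not the ANA specification):

```python
# Illustrative sketch: a standardized "patient-days" denominator.
# Unit names and exclusion rules are hypothetical examples, not ANA definitions.

EXCLUDED_UNITS = {"obstetrics", "psychiatry"}  # acute-care report card exclusions

def patient_days(census_records):
    """Sum daily census counts, skipping non-qualifying units.

    census_records: iterable of (unit_name, patient_days) pairs.
    """
    return sum(days for unit, days in census_records
               if unit not in EXCLUDED_UNITS)

def rate_per_1000_patient_days(event_count, census_records):
    """Indicator rate with the standardized denominator."""
    days = patient_days(census_records)
    return 1000.0 * event_count / days if days else 0.0

# Medical-surgical and ICU days count toward the denominator; obstetric days do not.
records = [("medical-surgical", 420), ("icu", 80), ("obstetrics", 150)]
print(patient_days(records))                   # 500, not 650
print(rate_per_1000_patient_days(4, records))  # 8.0
```

If the obstetric days were mistakenly left in, the same four events would yield a rate of about 6.2 rather than 8.0, which is exactly the kind of distortion the text warns about.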

Indicator Definitions

For certain indicators such as skin integrity, nosocomial infection, and patient injury, the definitions were sufficiently clear to support a standardized approach to data collection in the study facility. Although interfacility reliability would need to be monitored to ensure the definitions were applied correctly, there was a high level of confidence that data for these indicators could be consistently collected across multiple facilities.

For other indicators such as staff mix, the definition of full-time equivalents (FTEs) must be addressed. Despite its universal use, the definition of an FTE remains inconsistent. This simple but essential term must have a common application if comparisons are to be made within and across institutions. Furthermore, FTEs must be distinguished from positions. For example, the person who works three 12-hour shifts is filling a position, but the position is only a portion of an FTE if an FTE is defined as a 40-hour week. These distinctions are important because the actual hours worked affect the hours available to care for patients. Such issues become intertwined with productivity. Productivity and FTE data are companion pieces in calculating staffing needs.11 At present, there is a strong interest in determining how to boost nurse productivity.
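The position-versus-FTE distinction described above reduces to a small calculation. This sketch assumes, as the example in the text does, an FTE defined as a 40-hour week; that definition itself is what varies across institutions:

```python
# Illustrative sketch: positions and FTEs diverge.
# Assumes one FTE = 40 hours/week; this definition varies by institution,
# which is the standardization problem the text describes.

FTE_HOURS_PER_WEEK = 40.0

def fte(hours_worked_per_week):
    """Fraction of a full-time equivalent represented by the hours worked."""
    return hours_worked_per_week / FTE_HOURS_PER_WEEK

# A nurse filling one position with three 12-hour shifts:
hours = 3 * 12
print(fte(hours))  # 0.9 -- one position, but only 0.9 FTE
```

Comparing staffing across facilities that define an FTE as 37.5, 38, or 40 hours without converting to a common base would silently misstate the hours actually available for patient care.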

Satisfaction Indicators

The various ANA satisfaction indicators are also surprisingly lacking in certain aspects of standardization. These include specification of instruments to measure particular indicators, data collection techniques, and issues pertaining to nursing staff satisfaction.

Instruments. There are no specified instruments to measure either patient satisfaction or nurse job satisfaction. This allows unintended differences, resulting from instrument selection, to complicate the interpretation of satisfaction scores across sites. Satisfaction is a complex and important indicator. It is also the potential target of those who favor objective assessments and thus undervalue subjective perceptions. Using standard instruments with defensible psychometric properties will assist cross-facility comparisons and enhance internal acceptability among those who question the usefulness of subjective assessments.

Data Collection. The lack of standard data collection techniques is particularly evident in the assessment of patient satisfaction. Patient data might be collected by mailed or telephone survey, each approach having different merits. They might be collected at various times before, during, or after discharge. This raises many questions regarding the most valid method for collecting these data. Given that four of the six outcome indicators are derived from patient satisfaction, there is considerable urgency to designate standardized techniques to govern data collection. Data interpretation and potential interfacility comparisons will be enhanced by knowing whether the recommended approach to data collection was followed. Reducing variation in measurement techniques will support the ultimate goal of creating a nursing care report card.

Nursing Staff Satisfaction. There is a need to clarify who is expected to complete the survey: only registered nurses (RNs) or all nursing personnel? The recommended ANA definitions7,8 and instructions in the implementation guide8 expressly target RNs as the focus for these data. However, in today's healthcare environment, patient care teams are composed of a mix of nursing personnel, including RNs, licensed practical nurses (LPNs), and unlicensed personnel. Clarifying the nursing population intended to provide staff satisfaction data must be reconciled with the theoretical conceptualization of the relationship between nursing job satisfaction and patient care quality. What affects patient care quality? Is it only the job satisfaction of the RN or the satisfaction of the entire nursing team?

Another standardization issue affecting nurse satisfaction concerns obtaining an adequate response rate. A low response rate undermines the representativeness of the nursing job satisfaction data. Response rate and, therefore, data quality might be enhanced or compromised depending upon who endorses its collection. An implicit or explicit push from the chief nurse executive might yield different findings than if a more removed source, such as nurse researchers or a university, serves as the requesting agent. Additional factors could affect data quality. Honesty and completeness of the data could be influenced by the degree of trust in the environment, concerns regarding possible repercussions, and an understanding of how the data will be used. There could also be concerns regarding mandatory completion of the surveys or issues of anonymity and the likelihood of tracing responses.

Retrieval

Many lessons were learned regarding retrievability. Ease of retrieval refers to whether indicator data collection is affordable with respect to people, equipment, and time. It also refers to whether it is possible to use existing data or whether additional data collection is required. Patient injury data, for example, were easy to retrieve. Because injuries were reportable, there were existing databases to track this indicator. Therein lies a key discovery, one that is so obvious in its statement yet not entirely evident until made explicit. The extent to which databases are in place to capture any of these indicator data reduces the need for labor-intensive chart abstraction and thus enhances the ease of retrievability.

Collection of skin integrity data underscores this issue. If skin assessments, including pressure ulcers, are only reported in narrative notes in the patient record, retrieving these data is extraordinarily time consuming. If, however, there is a central repository for reporting pressure ulcers that are grade II and higher, then collecting the data is simplified greatly. This discovery alone has already led to an important process change in the facility where the pilot study was conducted through the creation of a central reporting process and database for pressure ulcer data.

An alternate approach to collecting skin integrity data is to perform pressure ulcer surveillance. By designating a particular day each month to examine all inpatients for pressure ulcers, prevalence data can be gathered. Looking at prevalence is a shift away from the current ANA definition as reflected in Table 1. Although there are merits to using a prevalence approach, there are pros and cons to both methods of skin integrity data collection. A focused debate on this topic would be beneficial to outline the merits and drawbacks of each approach, leading to an informed decision that could then be applied consistently to collect skin integrity data.

Both staff mix and total nursing care hours (TNCH), the structure indicators, are easy to retrieve. There is a price to collecting these data, however. The price pertains to the accuracy of the data. If collected on a monthly basis, these indicators are reasonably simple to collect, but accuracy may be compromised. Accuracy can be improved by increasing the frequency of data collection, a choice that increases the cost of these data. A high level of accuracy is achieved if data are retrieved daily to ensure float staff and agency personnel are appropriately represented as well as capturing whether the nurse manager is working in a patient care role or an administrative role. These then become very expensive indicator data.

It is noteworthy that prior reports are not enthusiastic about the value of collecting structural indicator data because their connection to quality outcomes is vague at best.4,12 More recent reports are finding nurse staffing, particularly higher RN skill mix, to have an important relationship to better patient outcomes.13-16 Poor data quality may distort the influence of the structural indicators on patient outcomes. Perhaps the influence of structure has been understated in the past due to poor data quality. By specifying the frequency for collecting the structural indicator data, a common level of data accuracy might prevail among all studies using the ANA quality indicators.

Retrieval issues also influence patient satisfaction data. That is, the ease of patient satisfaction data collection must be balanced against the bias that might be introduced by the timing of survey distribution. Although it may be easier to collect the data while patients are in the hospital, a more accurate representation of their satisfaction may be obtained after discharge. If patients receive the surveys after discharge, however, multiple mailings may be required to achieve a sufficient return rate, thus increasing the complexity and cost of data collection. The challenge of when to collect patient satisfaction data is one that pervades the entire healthcare industry. At present, no one method is acknowledged as superior.

Relevance

To maximize the usefulness of the ANA Nursing Quality Indicators, it is important that nurses as well as individuals from other disciplines embrace each indicator as a relevant reflection of the quality of nursing care. The current composite of indicators was assumed to be a relevant reflection of nursing care quality. Questions related to this assumption surfaced, however, when indicator data collection was implemented for the pilot study.

Tracking Throughout the Care Continuum

Key among the relevance issues is the inability to track outcomes that occur after the patient leaves the hospital. For example, undesirable outcomes for the skin integrity and nosocomial infection indicators may surface after discharge. These undesirable outcomes may be a consequence of poor inpatient nursing care quality. If they are not detected and reported as part of the inpatient experience, reflections of quality may be falsely favorable.

Shorter Lengths of Stay

Another group of questions pertaining to relevance emerged with regard to setting 72 hours as the parameter for examining nosocomial infections and skin integrity. The substantial reduction in lengths of stay has resulted in very few admissions of 72 hours or more. The 72-hour period may be an appropriate timeframe for ill patients with long stays, but it is irrelevant for inpatients with short hospital stays. Those patients who meet the 72-hour criterion are often exceedingly ill, and certainly infections and skin breakdown are important aspects of quality for them. Quality indicators that are more relevant to patients with short stays must also be defined.

Nosocomial Infections

Although urinary tract infection rates initially defined nosocomial infections as an ANA indicator, bacteremia rates are now used to calculate the ANA's nosocomial infection indicator. Despite adopting Centers for Disease Control and Prevention (CDC) procedures for collecting bacteremia data, questions of relevance remain. For example, nosocomial infections may be less relevant if the focus remains on bacteremia from central lines. Physicians usually insert these lines, and the possibility of contamination on insertion cannot be discounted. Unfortunately, data for other forms of nosocomial infections or for other sources of bacteremia are not routinely collected. Expanding the definition beyond central lines does not circumvent the problem, however.

The larger relevance question is whether nosocomial infections are an acceptable indicator of nursing care quality. Considering the variety of healthcare team members who contact patients, it is difficult to place nosocomial infections exclusively within the realm of nursing's influence. This indicator may not pass the relevant, believable, acceptable litmus test given the difficulty in collecting infection rate data for discharged patients, the potentially weak link between nursing care and the development of infection, and the strong likelihood that the infection may have been initiated from another source.

Patient Injury

Another indicator, patient injury (specifically falls), is considered to reflect nursing care quality because nurses are responsible for implementing measures to prevent patient falls. During the pilot study, the performance of this indicator elicited concerns regarding both the relevance and comparability of falls data. Because there may be patient characteristics independently associated with the likelihood of falling, such as age and cognitive impairment, it may be important to risk-adjust patient injury data. If there were a surge in falls because of an atypical increase in admissions of patients with a predilection to falling, for instance, then the facility might appear to provide a lower level of quality if risk adjustment were not used. Although there are many sophisticated risk adjustment systems, a simple adjustment such as reporting fall rates by age cohort rather than as an aggregate rate would enhance interfacility comparisons and help focus quality improvement efforts. Fortunately, falls were a rare occurrence in the pilot study. However, patient outcomes that reflect rare occurrences may not vary sufficiently with deviations in nursing care quality to provide valuable information.
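The simple cohort adjustment suggested above can be sketched in a few lines. The cohort boundaries and counts here are hypothetical illustrations, not data from the pilot study:

```python
# Illustrative sketch: fall rates per 1000 patient-days by age cohort,
# a simple alternative to an aggregate rate. All numbers are hypothetical.

def fall_rates_by_cohort(falls, patient_days):
    """falls and patient_days are dicts keyed by cohort label.

    Returns falls per 1000 patient-days for each cohort with nonzero days.
    """
    return {cohort: 1000.0 * falls.get(cohort, 0) / days
            for cohort, days in patient_days.items() if days}

falls = {"<65": 2, "65-79": 5, "80+": 6}
days = {"<65": 2000, "65-79": 1000, "80+": 500}
print(fall_rates_by_cohort(falls, days))
# {'<65': 1.0, '65-79': 5.0, '80+': 12.0}
```

The aggregate rate for the same hypothetical data (13 falls over 3,500 patient-days, about 3.7 per 1,000) would mask the concentration of falls in the oldest cohort, illustrating why an aggregate rate can misrepresent quality when the case mix shifts.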

Nursing Job Satisfaction

A very interesting experience in this pilot study was the response of the nursing staff to the job satisfaction questionnaire. In this work, the McCloskey/Mueller Satisfaction Scale (MMSS) was used after very careful and extensive examination of possible nurse job satisfaction instruments.17 Anecdotal reports suggested the nursing staff viewed completing the survey as somewhat of a bother, overall, with many individuals remarking they were tired of filling out surveys. Pilot study respondents revealed the staff was more interested in questions pertaining to quality of care than questions addressing job satisfaction. In an era of constrained nursing resources, when every moment is consumed with patient care activities, perhaps completing a survey ranks low on their priority list. Furthermore, the expressed interest in care quality rather than personal satisfaction is an interesting value statement.

Staff Mix and Total Nursing Care Hours

The structural indicators of staff mix and total nursing care hours (TNCH) were viewed as relevant but with some caveats. For instance, the use of a single data point to reflect staffing over a 24-hour period was found to be too restrictive. It may not represent the reality of today's inpatient care setting where staffing fluctuates not only among shifts but within shifts.

Questions also arise regarding who is counted as staff. For example, is the nurse manager counted among staff caring for patients? The answer to this particular question may depend upon role definition, but clear guidance to ensure a standard decision tree for categorizing the nurse manager would be useful. This need for clarity also applies to staff who support patient care across several units. For example, IV nurses, lactation consultants, and clinical nurse specialists (CNSs) all supplement the fixed staff of a particular unit in countless ways. In the absence of these types of nursing care providers, existing staff must absorb these specialized aspects of nursing care to ensure patient needs are met. The presence or absence of support staff such as phlebotomists and escort service personnel may also need to be addressed.

In addition, these indicators do not reflect whether the aggregate number of nursing personnel caring for patients is too few or too many. In the pilot study, on many daily reports, the staff mix data simply did not seem to reflect the complexity of the staffing issues. For example, on a high-census day, two non-RN staff members called in sick. The staffing concern was remedied by calling in one unscheduled RN staff member, thus rendering a staff mix with a higher ratio of RNs to non-RNs and implying a higher level of nursing care quality. In reality, the unit was short one staff member. The ANA staff mix indicator fails to take into account the overall adequacy of staffing. As reflected in the preceding example, on a day of high census and reduced staffing, the RN ratio per patient was maintained, while the staff's ability to provide quality care to patients was uncertain. A better indicator might be one that captures expected staffing compared with actual staffing.
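The sick-call example above can be made concrete with a small sketch. The planned staffing numbers are hypothetical; the point is that the skill mix ratio can rise on the same day that total staffing falls short of plan:

```python
# Illustrative sketch of the sick-call example: RN skill mix improves even
# though the unit is one staff member short. Staffing numbers are hypothetical.

def staffing_summary(rn, non_rn, expected_total):
    total = rn + non_rn
    return {
        "rn_mix": rn / total if total else 0.0,  # what an ANA-style mix captures
        "shortfall": expected_total - total,     # what the mix ratio hides
    }

# Planned: 4 RNs + 4 non-RNs. Two non-RNs call in sick; one extra RN is called in.
planned = staffing_summary(rn=4, non_rn=4, expected_total=8)
actual = staffing_summary(rn=5, non_rn=2, expected_total=8)
print(planned)  # rn_mix 0.5, shortfall 0
print(actual)   # rn_mix ~0.71, shortfall 1 -- mix rose, yet the unit is short
```

An expected-versus-actual comparison like the "shortfall" field is one hypothetical way to capture the adequacy dimension the staff mix indicator misses.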

Similarly, a factor measuring the intensity of required nursing care is also needed to reflect that very ill patients may consume more nursing resources due to their tremendous care requirements as compared to less ill patients. As severity of illness was not a part of either the staff mix or the TNCH indicators, the relevance and believability of these indicators should be questioned.

Measurable

The fourth expectation was that the indicators would be measurable. Patient satisfaction illustrates issues regarding measurement, the preeminent one being that no specific tool is recommended. Considerable time was spent selecting a patient satisfaction tool, and a number of interesting factors were identified during the search. Of paramount concern is the limited number of satisfaction instruments that have reported psychometric properties and also address the ANA patient satisfaction indicators: satisfaction with nursing care, pain management, educational information, and overall care. This concern is heightened when considering that four of the six ANA outcome indicators relate to aspects of patient satisfaction.

Ultimately the Patient Satisfaction with Nursing Care (PSNC) instrument was used in this pilot study.18 The PSNC operated well insofar as capturing information about patient satisfaction with education. However, because patient education is provided by multiple disciplines at the facility where the pilot study was conducted, it is impossible to determine whether measured satisfaction with educational information can be traced exclusively to nurses. To measure patient satisfaction with educational information specifically provided by nurses, the items on the instrument must be modified. This same concern applies to the patient satisfaction indicator pertaining to pain management.

In attempting to complete their satisfaction surveys, patients raised additional questions and frustrations that are important from a measurement standpoint. Some had difficulty consolidating their satisfaction with care into one score. The survey did not allow them to distinguish among the various nursing units on which they received care during a single inpatient stay. It is not uncommon, for example, for patients to move from intensive care to a step-down unit to a general surgery floor. Should the patient, therefore, respond based upon the last shift or unit where they received care, or should their responses reflect an amalgam of all nurses and units? Do really memorable experiences, either bad or good, overstate the gestalt of the whole inpatient experience?

Link to Nursing

Fundamental to the interpretation of the meaning of the quality indicators is the assertion that these variables represent aspects of care that can be linked to nursing. Creating such a link is vitally important to address the often-asked questions regarding what difference nurses make, along with what kind of nursing personnel are needed (e.g., RNs or nursing assistants) and how many.

Structure of Care Indicators

Staff mix and TNCH are important structural variables that are gaining prominence and reflect a link between nursing and patient outcomes. Nonetheless, previously addressed issues that complicate measuring these important variables remain to be reconciled. Fundamental among these issues is that staffing must be adequate to begin with, so a key question becomes, "What is adequate staffing?" This is not to suggest that mandated staffing ratios are the right response to the question. However, addressing the question remains tremendously important.

Process of Care Indicators

The process of care delivery is infinitely more complicated than the two ANA process indicators suggest. In fact, most process indicators are very likely the result of a complex set of interdisciplinary interactions. This aside, maintenance of skin integrity seems indisputably linked to nursing care quality. Nursing staff satisfaction, however, while important, is questionable as a process indicator. There are numerous intervening factors, such as the experience level of the nurses and the functioning of the entire care team. Fundamentally, it must be acknowledged that staff satisfaction is not a guarantee of positive patient outcomes.

Outcome Indicators

The six outcome indicators vary in the strength of their link to nursing. Patient injury, in particular fall rates, is logically associated with nursing care quality and with having an adequate number of nursing personnel to ensure patient safety. Conversely, nosocomial infection rates are problematic for the various reasons previously stated. If these rates are deemed appropriate as nursing quality indicators, the next question is whether nosocomial infections are best viewed as an outcome indicator or are more appropriately conceptualized as a process measure.

Considerable weight of the ANA indicators resides within the concept of patient satisfaction, giving rise to two important considerations. The first concerns measuring patient satisfaction. At present, patient satisfaction scores do not demonstrate the variability needed to reflect fluctuations in nursing care quality; most patient satisfaction data are skewed toward the high end of satisfaction. The relevance of patient satisfaction is not the question, because care is indeed in the eye of the person experiencing it. Rather, the question is how to capture this indicator in a more meaningful way. The second concern relates to expanding the indicators in this category, because high-quality care and outcomes involve more than patient satisfaction. Hence, there is a need to include outcome indicators that are less diffuse than patient satisfaction. A number of holistic patient outcomes within nursing's purview, from functional status to quality of life, are existing candidates to complete this category.


Conclusion

Today's turbulent, uncertain, unpredictable healthcare environment is not likely to stabilize in the near term. Having data to inform clinical, administrative, and policy decisions is essential to ensure that the competing demands of access, cost, and quality are balanced. The lessons learned and pragmatic issues raised here were generated during the conduct of a pilot study to determine whether the ANA Quality Indicators could be measured at a military medical center. These issues have implications far beyond the walls of military healthcare settings, however. They need to be addressed to support the more widespread use of the ANA Quality Indicators and to make informed assessments regarding the relationship between nursing and the quality of care delivered. Clarifying issues such as those addressed here is an important advance needed to support the addition of indicators sensitive to nursing care to healthcare organization report cards.


© 2001 Lippincott Williams & Wilkins, Inc.
