Michael Simon is a postdoctoral fellow, Yevhen Yankovskyy is a senior research analyst, and Nancy Dunton is a research professor at the National Database of Nursing Quality Indicators (NDNQI), University of Kansas Medical Center School of Nursing, Kansas City, Kan.
Patient days and midnight census are deeply interrelated measures widely used in nursing administration and health services research. The patient day measure is an important indicator that characterizes the patient load on hospital units or the exposure of patients to certain risks or treatments. Whereas "patient days" is a concept, the midnight census is the method most often used to collect the data from which patient days are calculated. In the National Database of Nursing Quality Indicators (NDNQI), patient days are the denominator of nursing hours per patient day (NHPPD) and the patient fall indicators—both nursing-sensitive quality indicators endorsed by the National Quality Forum (NQF). But what's the right data collection method?
Data collection methods
The NQF has endorsed five methods to collect patient day data in connection with NHPPD:1,2
* M1 represents the daily midnight census
* M2 is the midnight census with additional patient days from actual hours for short stay patients (SSPs)
* M3 utilizes midnight census with additional patient days from average hours for SSPs
* M4 employs patient days from actual hours for inpatients and SSPs
* M5 uses patient days from multiple census reports
Four of the five methods are census based. Although M1 solely relies on the midnight census, M2, M3, and M5 integrate some kind of adjustment for SSPs. M4 calculates patient days from actual admission and discharge times, making it the only method that doesn't use a census. It's the most accurate but least used method among NDNQI participants. (See Table 1.) More than half of all units use the midnight census (M1), followed by a quarter of units using M2, which adjusts with actual hours from SSPs. The remaining fifth of units use M3, M4, or M5. A current draft of the implementation guidelines suggests removing M3 from the set of eligible methods.
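The difference between the census-based and actual-hours methods can be sketched in code. The stays, dates, and function names below are invented for illustration and aren't from the NDNQI specification; the sketch simply counts patients present at each midnight (M1) versus summing actual hours of stay (M4).

```python
from datetime import datetime, timedelta

# Hypothetical illustration: comparing the midnight census method (M1)
# with the actual-hours method (M4) on a small set of stays.
# Each stay is an (admission, discharge) pair; all values are invented.
stays = [
    (datetime(2010, 3, 1, 9, 0),  datetime(2010, 3, 3, 14, 0)),  # 53 hours
    (datetime(2010, 3, 1, 10, 0), datetime(2010, 3, 1, 18, 0)),  # short stay, 8 hours
    (datetime(2010, 3, 2, 7, 0),  datetime(2010, 3, 4, 11, 0)),  # 52 hours
]

def patient_days_m1(stays, start, end):
    """M1: count each patient present at each midnight census."""
    days = 0
    census_time = start
    while census_time <= end:
        days += sum(adm <= census_time < dis for adm, dis in stays)
        census_time += timedelta(days=1)
    return days

def patient_days_m4(stays):
    """M4 (gold standard): sum actual hours of stay, divided by 24."""
    return sum((dis - adm).total_seconds() / 3600 for adm, dis in stays) / 24

census_start = datetime(2010, 3, 2, 0, 0)   # first midnight census
census_end = datetime(2010, 3, 5, 0, 0)

print(patient_days_m1(stays, census_start, census_end))  # 4: the 8-hour stay is missed entirely
print(round(patient_days_m4(stays), 2))                  # 4.71 (113 hours / 24)
```

The short stay admitted and discharged between midnights contributes nothing to M1 but 8 hours to M4, which is exactly the SSP problem that M2, M3, and M5 try to correct for.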
Table 1: Five patient day data collection methods (table not shown)
Taking the distribution of the patient day methods into account, two questions arise: What drives the decision to pick a certain data collection method, and are the methods biased? At this point we can only speculate about why units choose one method over another. A straightforward explanation for the current distribution is the availability of recording systems or ease of use. Whereas the census-based methods "only" require the midnight census, M4 requires more effort, such as querying admission and discharge times from an electronic system.
Are patient day methods biased?
In addition to practical issues that may drive the decision to pick one of the patient day data collection methods, it's important to know if the method itself introduces bias: underestimating or overestimating patient days. One way to investigate this question is to apply all the methods to the same set of patients. However, this requires having exact admission and discharge times from each patient in the unit, which aren't available from data sources such as NDNQI or the Agency for Healthcare Research and Quality's Healthcare Cost and Utilization Project (HCUP). Another way to investigate the bias related to the patient day data collection method is to simulate data based on predefined assumptions.
To this end, we conducted a simulation study of an "average" surgical unit with 225 patient stays over a period of 30 days. To get a reasonable length of stay (LOS) estimate, we used a mix of 38 common surgical procedures from the HCUPnet website with a mean LOS of 63.9 hours (2.6 days) and a standard deviation of 44.8 hours (1.86 days). Additionally, we assumed that 80% of the patients were admitted during the daytime (8 a.m. to 8 p.m.) and varied the proportion of SSPs from 0% to 50%. Simulations were iterated 10,000 times. We used M4, which derives patient days from actual hours of each patient, as the gold standard against which to assess the performance of the other measures. For M3, which uses average SSP hours derived from a pilot study, we simulated the pilot by taking the average hours of SSPs from a randomly chosen quarter (7.5 days) of the month under investigation. For M5 we used multiple censuses taken every 4 hours. Furthermore, we applied another multiple-census approach (called M6), which consisted of a noon and midnight census.
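One iteration of such a simulation can be sketched under the stated assumptions: 225 stays over 30 days, LOS drawn from a normal distribution with mean 63.9 hours and standard deviation 44.8 hours (truncated here at a 1-hour minimum, an assumption of this sketch), and 80% daytime admissions. The actual study's distributions, SSP handling, and 10,000-iteration design are richer than this; all function names are invented.

```python
import random
from datetime import datetime, timedelta

random.seed(1)
START = datetime(2010, 3, 1)

def simulate_stays(n=225, days=30):
    """Generate hypothetical stays: 80% daytime admissions, normal LOS."""
    stays = []
    for _ in range(n):
        day = random.randrange(days)
        if random.random() < 0.8:                    # daytime, 8 a.m.-8 p.m.
            hour = random.uniform(8, 20)
        else:                                        # nighttime admission
            hour = random.choice([random.uniform(0, 8), random.uniform(20, 24)])
        adm = START + timedelta(days=day, hours=hour)
        los = max(1.0, random.gauss(63.9, 44.8))     # LOS in hours, 1 h minimum
        stays.append((adm, adm + timedelta(hours=los)))
    return stays

def census_days(stays, interval_h):
    """Patient days from censuses every interval_h hours (M1 when 24)."""
    t, end, count = START, START + timedelta(days=30), 0
    while t < end:
        count += sum(adm <= t < dis for adm, dis in stays)
        t += timedelta(hours=interval_h)
    return count * interval_h / 24

def actual_days(stays):
    """M4: actual hours within the 30-day window, divided by 24."""
    end = START + timedelta(days=30)
    hrs = sum((min(dis, end) - max(adm, START)).total_seconds() / 3600
              for adm, dis in stays if dis > START and adm < end)
    return hrs / 24

stays = simulate_stays()
m4 = actual_days(stays)
for label, ivl in [("M1 (24 h)", 24), ("M5 (4 h)", 4)]:
    bias = census_days(stays, ivl) - m4
    print(f"{label}: bias = {bias:+.1f} patient days")
```

Finer census intervals approximate the actual-hours integral more closely, which is why the multiple-census M5 tracks the M4 gold standard better than the once-daily M1 in the results below.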
Results from the simulation study showed that for all methods, the coefficient of variation increased with increasing percentages of SSPs. (See Table 2.) This indicates that the higher the number of SSPs, the less exactly all methods capture the "true" number of patient days. Looking at the mean bias, we can see a more diverse picture, with increasing biases for M1 and M2; a negative bias for M3 with low numbers of SSPs and a positive bias for high numbers of SSPs; and finally decreasing biases for M5 and M6. The boxplots in Figure 1 show how strongly the data collection method drives the variance of the bias distribution. The larger the boxes of the boxplots, the wider the bias range. M3 has relatively more outliers (indicated by the dots above and below the whiskers) than any of the other methods, whereas M5 and M6 have considerably smaller interquartile ranges (smaller boxes) than M1 and M2.
The results from the simulation indicate that there are consistently negative biases for M1 and M2 (an underestimation of patient days), an overestimation or underestimation of patient days due to SSPs for M3, and small positive biases for M5 and M6. More important than the mean bias are the volatility of the distribution and the presence of outliers. M3 seems to be highly susceptible to producing outliers, which isn't a desirable attribute. M5 (every 4 hours) and M6 (noon and midnight) consistently have the smallest bias and fewer outliers and should, along with the gold standard M4, be regarded as the methods of choice.
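The two summary statistics behind these comparisons, mean bias against the M4 gold standard and the coefficient of variation, are straightforward to compute from simulation output. The estimates below are invented values standing in for a handful of iterations, not numbers from the study.

```python
import statistics

def summarize(estimates, gold):
    """Mean bias (method minus gold standard) and coefficient of variation."""
    biases = [e - g for e, g in zip(estimates, gold)]
    mean_bias = statistics.mean(biases)
    # CV: sample standard deviation relative to the mean of the estimates
    cv = statistics.stdev(estimates) / statistics.mean(estimates)
    return mean_bias, cv

# Invented per-iteration patient day totals for one method (e.g., M1)
# and the corresponding M4 gold-standard totals.
m1_estimates = [596.0, 601.5, 589.0, 598.5, 594.0]
m4_gold = [600.2, 603.1, 592.4, 601.0, 597.3]

mean_bias, cv = summarize(m1_estimates, m4_gold)
print(f"mean bias = {mean_bias:+.2f} patient days, CV = {cv:.4f}")
```

A negative mean bias, as in this invented example, corresponds to the systematic underestimation reported for M1 and M2; the CV captures the volatility that distinguishes M5 and M6 from the other census-based methods.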
Choosing the right method
The connection of patient days with nursing-sensitive indicators such as NHPPD and patient fall indicators stresses the importance of accurate patient day measurement. Nurse managers should aim to use actual admission and discharge times derived from electronic systems or multiple census approaches to collect patient day data.
1. The Joint Commission. Implementation Guide for the NQF Endorsed Nursing-Sensitive Care Performance Measures—Appendix F. Oakbrook Terrace, IL: The Joint Commission; 2005.
2. National Database of Nursing Quality Indicators. Guidelines for Data Collection and Submission on Quarterly Indicators. Kansas City, KS: The University of Kansas School of Nursing; 2009:49f.
© 2010 by Lippincott Williams & Wilkins, Inc.