Statewide Longitudinal Progression of the Whole-Patient Measure of Safety in South Carolina

Turley, Christine B.; Brittingham, Jordan; Moonan, Aunyika; Davis, Dianne; Chakraborty, Hrishikesh

doi: 10.1097/JHQ.0000000000000092
Original Article

ABSTRACT Meaningful improvement in patient safety encompasses a vast number of quality metrics, but a single measure to represent the overall level of safety is challenging to produce. Recently, Perla et al. established the Whole-Patient Measure of Safety (WPMoS) to reflect the concept of global risk assessment at the patient level. We evaluated the WPMoS across an entire state to understand the impact of urban/rural setting, academic status, and hospital size on patient safety outcomes. The population included all South Carolina (SC) inpatient discharges from January 1, 2008, through December 31, 2013, evaluated using established definitions of highly undesirable events (HUEs). Over the study period, the proportion of hospital discharges with at least one HUE decreased significantly from 9.7% to 8.8%, including significant reductions in nine of the 14 HUEs. Academic, large, and urban hospitals had a significantly lower proportion of hospital discharges with at least one HUE in 2008, but only urban hospitals remained significantly lower by 2013. Results indicate that there has been a decrease in harm events captured through administrative coded data over this 6-year period. A composite measure, such as the WPMoS, is necessary for hospitals to evaluate their progress toward reducing preventable harm.

For more information on this article, contact Christine B. Turley at christine.turley@uscmed.sc.edu.

Supported by a grant from The Duke Endowment. Data were provided by South Carolina Revenue and Fiscal Affairs Office (RFA), Health and Demographics division.

The authors declare no conflicts of interest.

This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.

Introduction

The national focus in healthcare on reducing preventable errors has been growing steadily since the publication of the Institute of Medicine report, Crossing the Quality Chasm.1 Intensive work to determine best practices for preventing medical harm continues to be a driving factor in the transformation of healthcare.2,3 Despite myriad activities, it remains quite challenging to capture patient safety in a single measure.4–6 Best practices in one area may not translate to improvements in preventable errors in other areas or engage the same members of the healthcare team.7,8 Furthermore, a patient-centric approach to evaluating progress in safety is inherently limited when models are based on fragmented metrics. Given that any adverse safety event is a highly undesirable event (HUE) for patients and their caregivers, there is broad interest in composite measures of safety that create a patient-centered approach to reducing harm.6,9

There are several efforts to characterize the safety of hospitals, including Hospital Compare, Healthgrades' Best Hospitals, and the Leapfrog Hospital Safety Score rankings.4,6,10–12 Each of these relies on a combination of data including patient survey results, metrics on timeliness, use of imaging, surgical outcomes, volume of certain types of patients, disease-specific measures, voluntary submission, and self-reported measures of care.10–12 Others have focused strictly on voluntary reporting of adverse events in hospitals but have identified important challenges in underreporting with this approach.13–17 An important limitation of these approaches is that they may fail to provide a clear picture of overall hospital care as it is experienced by most patients, who may have conditions or receive services that cross into domains not assessed by these tools.5,9,18 Discordance between major quality reporting systems makes these data even more challenging for systems, as well as for the public, to interpret.4,6 A readily understandable, patient-centered metric that extends beyond disease-based measures remains an important gap in improving patient safety.

Perla et al.9 have worked to develop a measure of safety that can be applied to any patient receiving care in a hospital setting. Termed the “Whole-Patient Measure of Safety (WPMoS),” this measure takes a patient-centered, all-or-nothing approach to quantifying global preventable harm: a hospitalization counts as harmed if any one of 14 clearly defined HUEs occurs. Each HUE is identifiable from administrative data and represents a preventable adverse event.19–23 The WPMoS is thus a patient-level measure of harm that tracks total occurrences of HUEs in discharges over time.

In South Carolina hospitals, as in the rest of the country, numerous quality improvement (QI) activities are underway to address preventable errors, and best practices are being implemented across hospitals and systems.24 These improvement activities are necessary in the journey to improve safety in healthcare and to address change in the risk of harm for patients in a medical setting.

This study was designed to extend the WPMoS concept by evaluating its use across an entire state with varied types of hospitals, in an effort to understand the impact of urban/rural setting, academic status, and hospital size on safety outcomes over time. In addition, it seeks to assess the use of administrative claims data, a universally available data source, to establish a composite baseline regional rate of change in safety events.

Methods

Data for this study were obtained from the SC Revenue and Fiscal Affairs Office (RFA), which has received healthcare data from in-state medical facilities since 1996, in accordance with SC state law. The study population included all inpatient hospital discharges within SC from January 1, 2008, through December 31, 2013. This project was a secondary analysis of deidentified data and was deemed not to be human subjects research; as such, IRB approval was not required.

Descriptive statistics were used to examine the study population in terms of both discharges and patients. Demographic characteristics included age (18–35, 36–64, 65+ years), sex (male or female), race (Caucasian, African American, Other), and payer (commercial, Medicaid, Medicare, self-pay/uninsured). Additional characteristics included surgical patient (yes/no), length of stay, intensive care unit (ICU) days, total charges (dollars), and severity of illness 3 and 4. If a patient had multiple discharges over the study period, including at least one surgical discharge, they were counted as a surgical patient regardless of whether the surgical stay was their first discharge. Length of stay was measured in days from the admission date to the date of discharge, and ICU days were the number of days a patient spent in the ICU during a single hospital stay. Total charges were measured as billed charges, in dollars, for all services rendered during a hospital stay. Severity of illness 3 (major) and 4 (extreme) were defined using the University HealthSystem Consortium (UHC) classification.25

In addition, hospital characteristics including academic status (academic/nonacademic), location (urban, rural, missing/out of state), and size (small, medium, large) were described. A hospital was classified as academic if it was affiliated with a medical school and/or university. Hospitals were classified as urban or rural based on their metropolitan statistical area according to the Office of Management and Budget (OMB).26 Small hospitals were those with ≤99 beds, medium hospitals had between 100 and 299 beds, and large hospitals had 300 or more beds.

The WPMoS and associated HUEs were derived according to the methods outlined by Perla et al.9 The WPMoS was calculated as the proportion of hospitalized patients who experienced at least one of 14 HUEs during their episode of care. The 14 HUEs in this study, based on Centers for Medicare & Medicaid Services (CMS) and Agency for Healthcare Research and Quality (AHRQ) guidelines, included air embolism, blood incompatibility, pressure ulcers, falls and trauma, catheter-associated urinary tract infection (UTI), central venous catheter-related bloodstream infection (CLABSI), manifestation of poor glycemic control, admission risk of mortality = 1 and expired, death in low-mortality diagnosis-related group (DRG), deep vein thrombosis/pulmonary embolism (DVT/PE), iatrogenic pneumothorax, accidental puncture or laceration, all-cause readmission within 72 hours, and hospital-acquired infection/surgical care improvement project. Frequencies and percentages were used to summarize hospitalizations with at least one of the 14 HUEs, individually and in total.
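
As a concrete illustration of this all-or-nothing calculation, the minimal Python sketch below computes the WPMoS for a collection of discharges, each represented as the set of HUE flags it triggered. The flag names and data representation are hypothetical conveniences for illustration, not the study's actual coding logic, which applies the CMS/AHRQ definitions to administrative claims codes.

```python
from typing import Iterable, Set

# Hypothetical flag names for the 14 HUEs; the study derives these from
# CMS/AHRQ definitions applied to administrative claims data.
HUE_FLAGS = {
    "air_embolism", "blood_incompatibility", "pressure_ulcer",
    "falls_and_trauma", "catheter_associated_uti", "clabsi",
    "poor_glycemic_control", "admission_mortality_risk_1_expired",
    "death_in_low_mortality_drg", "dvt_pe", "iatrogenic_pneumothorax",
    "accidental_puncture_laceration", "readmission_within_72h",
    "hospital_acquired_infection",
}

def wpmos(discharges: Iterable[Set[str]]) -> float:
    """All-or-nothing WPMoS: the proportion of discharges with >=1 HUE."""
    total = harmed = 0
    for hues in discharges:
        total += 1
        if hues & HUE_FLAGS:  # any single HUE marks the whole stay as harmed
            harmed += 1
    return harmed / total if total else 0.0

# Three discharges, one involving a fall -> WPMoS = 1/3
print(wpmos([set(), {"falls_and_trauma"}, set()]))
```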

A two-proportion Z-test was used to determine whether there was a significant change in the HUEs over the study period. This method tested whether the proportion of hospitalizations with at least one HUE in 2008 differed significantly from the proportion in 2013, relative to the standard error of the sampling distribution. The same method was used to compare the difference in the proportion of hospitalizations with at least one HUE by urban/rural setting, academic status, and hospital size for each year. To quantify the change in the proportion of hospitalizations with at least one HUE from 2008 to 2013, we used percent change, defined as the change in proportion divided by the absolute value of the original proportion, multiplied by 100. An alpha level of 0.05 was used to determine significance. All data analyses were performed using SAS for Windows, version 9.4 (SAS Institute Inc, Cary, NC).
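
For readers who want to reproduce these comparisons, the sketch below implements a pooled two-proportion Z-test and the percent-change formula in Python (the study itself used SAS). The counts in the example are illustrative placeholders, not the study's actual numerators and denominators.

```python
import math

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Pooled two-proportion Z-test; returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                             # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # standard error
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))                 # two-sided normal tail
    return z, p_value

def percent_change(p_old: float, p_new: float) -> float:
    """Percent change: change divided by |original value|, times 100."""
    return (p_new - p_old) / abs(p_old) * 100.0

# Illustrative (hypothetical) counts for 2008 vs. 2013:
z, p = two_proportion_z_test(x1=44_000, n1=455_000, x2=38_645, n2=440_000)
print(f"z = {z:.2f}, p = {p:.4f}")

# With the rounded rates quoted in the text this gives about -9.3%; the
# paper reports a 9.2% reduction, computed from unrounded proportions.
print(f"{percent_change(9.7, 8.8):.1f}%")
```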

Results

Characteristics of the study population described by discharges and patients are summarized in Table 1. Over the 6-year study period, there were a total of 1,251,030 patients with a combined 2,738,461 discharges (annual mean, 456,410). Most discharges were from nonacademic institutions (62.6%), most were based in an urban setting (71.1%), and hospital size was distributed fairly evenly between medium (43.8%) and large (47.8%) classifications.

Table 1

Table 2 presents yearly aggregate frequencies of HUEs. All-cause readmission within 72 hours was the most common individual HUE across all hospitals, followed by hospital-acquired infections. The least common HUEs were air embolism and blood incompatibility. The total number of hospitalizations with at least one HUE ranged from 38,645 (8.8%) in 2013 to 45,321 (9.8%) in 2009.

Table 2

Nine of the 14 HUEs, including pressure ulcers, falls and trauma, catheter-associated UTI, CLABSI, admission risk of mortality = 1 and expired, death in low-mortality DRG, accidental puncture or laceration, all-cause readmissions within 72 hours, and hospital-acquired infection, significantly decreased over the study period (p < .05). Accordingly, the proportion of hospitalizations with at least one of the 14 HUEs significantly decreased from 9.7% in 2008 to 8.8% in 2013 (p < .01), a reduction of 9.2% overall. CLABSI demonstrated a notable decrease from 413 (0.09%) hospitalizations in 2008 to 48 (0.01%) hospitalizations in 2013, a reduction of 87.8%. Other HUEs displaying substantial decreases over the study period included pressure ulcers, accidental puncture or laceration, and death in low-mortality DRG, with reductions of 55.6%, 41.8%, and 41.6%, respectively.

Figure 1 displays the proportion of annual hospitalizations experiencing at least one HUE, in total and by hospital characteristics including academic status, location, and size. As shown in Table 2, the proportion of hospitalizations with at least one HUE significantly decreased from 2008 to 2013. This was the case for both academic (p < .01) and nonacademic hospitals (p < .01), with percent decreases of 3.3% and 12.0%, respectively. Academic hospitals had a significantly lower proportion of hospitalizations with at least one HUE in 2008 compared with nonacademic hospitals (p < .01), but the proportion decreased significantly more in nonacademic hospitals over the timeframe (p = .032). As a result, there was no difference in the proportion of hospitalizations with at least one HUE by academic status in 2013 (p = .58).

Figure 1

The proportion of hospitalizations experiencing at least one HUE also significantly decreased in urban (p < .01) and rural (p < .01) hospitals from 2008 to 2013, with percent decreases of 8.5% and 9.5%, respectively. The percent decrease by hospital location was not significantly different between urban hospitals and rural hospitals (p = .71). In 2008, the proportion of hospitalizations with at least one HUE was significantly lower in urban hospitals compared with rural hospitals (p < .01) and remained significantly lower (p < .01) in 2013.

Over the study period, the proportion of hospitalizations experiencing at least one HUE significantly decreased for small (p < .01), medium (p < .01), and large hospitals (p < .01), with percent decreases of 16.7%, 11.1%, and 5.4%, respectively. The proportion of hospitalizations with at least one HUE decreased significantly more in small hospitals compared with large hospitals (p = .02); however, the decrease was not significantly different for medium hospitals compared with small hospitals (p = .12) or large hospitals (p = .17). In 2008, the proportion in small hospitals was significantly higher compared with both medium (p < .01) and large (p < .01) hospitals, and the proportion for medium hospitals was also significantly higher compared with large hospitals (p < .01). In 2013, however, the proportion was not significantly different by hospital size.

Limitations

There are several limitations to this study. First, when compared with manual chart abstraction, administrative data have well-documented limitations for identifying all preventable harm occurring during hospitalization.27–31 Although administrative data do not capture every instance of harm, each of the events they do capture is a definite HUE, creating a means of rapidly identifying target areas. Capturing the incidence of every harmful event in hospitals would require manual chart abstraction and dual-level review and is not practical for widespread safety monitoring. Administrative data were chosen because they are consistently available across systems and can be evaluated within a standardized framework, creating a scalable model for establishing both benchmarks and monitoring systems. An additional consideration is that the primary outcome of interest, change in safety event rate over time, may reflect changing coding practices32 alongside true temporal trends in safety events. With the ongoing shift from volume to value, many QI, coding, and patient safety initiatives are occurring in each healthcare institution.

Discussion

When considered in aggregate, these data indicate a decrease in total harm events captured through administrative coded data over this 6-year period. Using the WPMoS, hospitalizations with HUEs in SC decreased significantly, by 9.2%, an important change that provides context for understanding the safety of healthcare in the region.

Over this period of major transformation in healthcare systems, the wide differences between academic and nonacademic hospitals at the start of the period narrowed, converging to the same rate for both types of institutions. This may highlight that the complexity of care in the academic environment is an important and challenging landscape for implementation science, and one that can serve as a crucial driver of improving the overall safety of care.33,34 Hospitals of all sizes had decreases in HUEs over the 6-year period, with small institutions showing remarkable improvement (a 16.7% decrease in HUEs) overall. Analyzing the data from this perspective revealed that differences between small, medium, and large hospitals at the beginning of the study had converged by 2013. In South Carolina, as in much of the United States, larger institutions are home to academic training programs of varying types and provide an important opportunity for primary education, a path to dissemination, and a testing ground for new innovation in preventing patient harm.

The convergence demonstrated by both size and institutional type is an interesting observation. Smaller, nonacademic, and rural programs had steep declines in HUEs during this period. This may represent increased prioritization of QI in these organizations because of multiple factors, increased knowledge of best practices and support to implement them, or both. In addition, it may reflect that in a smaller setting with fewer learners, permanent team members can be directly engaged in implementing changes, which may allow improvement to occur more rapidly and successfully over time.

Nine of the 14 individual HUEs showed significant reductions that align with specific initiatives of the Partnership for Patients, a national initiative sponsored by CMS focused on making care safer and improving care transitions.35 Of note were marked decreases in CLABSI, pressure ulcers, and hospital-acquired infections. The two HUEs accounting for the highest percentage of occurrences were readmissions and hospital-acquired infections, both of which showed small but significant improvement over the study period. Many of these areas have been subject to intense local, regional, and national work, establishment of best practices, and engagement by payers in the value-based purchasing movement. Other HUEs, including iatrogenic pneumothorax and DVT/PE, showed little or no improvement. These areas of preventable harm likely represent opportunities for organizations to examine the evidence and establish QI targets.

The WPMoS approach to reporting harm is important for several reasons. With recent reports estimating medical error as the third leading cause of death in the United States, patient-level tools for understanding risk of harm are needed.36 Currently, there are no standardized, objective measures in use, although all-or-none measures have been proposed as among the most effective tools for improving safety.37 Available systems evaluate particular types of care or subsets of care practices by means of a combination of self-reported and publicly available data.10–12 Using widely accepted patient safety indicators together with administrative data, although not capturing all errors or all harm, nevertheless provides a means of assessing trends in risk of harm. Taken as an all-or-nothing measure of harm, the WPMoS may provide a means of risk assessment and the context needed to fuel or accelerate QI activity.

Although earlier efforts to evaluate regional change in rates of patient harm did not demonstrate progress,38 there has been an ever-increasing emphasis on patient safety in SC and in the United States generally. This study demonstrates a further effort to establish the context of change in safety events across a region as a means of evaluating healthcare progress in this area. Improvements in single areas of HUEs do not directly translate to improvements in other areas. To understand directly the outcomes of the safety culture development activity occurring in hospitals and health systems across the United States, a composite and reproducible measure of harm is necessary.39 Through the WPMoS assessment, significant decreases in coded safety events were noted across all settings. To set appropriate targets in this context, organizations will need a composite score to understand their rates of improvement relative to the changing benchmark; this will enable systems to keep pace or accelerate their work. Based on the initial work by Perla et al,9 institutions are relatively internally consistent over a 2-year period, and thus the rate of change over time may ultimately prove useful at the organizational level. Although there may be wide variation at the local level in the degree and effectiveness of implementation and dissemination of each of these activities, we would expect these changes to occur with the same relative variability across all systems. Establishing this background rate of change is important to understanding the full context of uptake of safety evidence.

Conclusions

A person-based model of risk of harm is an important step forward in considering a patient-centered approach to healthcare safety. This project extends the evaluation of the WPMoS, recognizing that patients experience healthcare systems as single units in which every aspect of the system must work well, or the overall risk to the individual remains high. A meaningful measure of risk at the patient level that is not disease based is crucial because that is how individual patients experience healthcare systems. The airline industry safety model is often used in healthcare training. It would be inconceivable for travelers to weigh the risk of an engine malfunction against the risk of running out of fuel when choosing a particular airline. Taken further, the industry would not forgo preventing a plane crash because of current limitations in its ability to track every individual element perfectly while better monitoring systems were being built. A person-based measure of safety, although less than perfect, is a crucial step forward while healthcare safety monitoring systems are improved, because no current single measure based on any individual procedure or disease is sufficient to understand patient risk.

Implications

Establishing a benchmark rate of change in overall hospital safety is a crucial step for individual hospitals to evaluate their own progress toward safer, highly reliable care within the national healthcare system. Understanding larger trends in hospital safety events is an important aspect of the healthcare transformation occurring nationally, and a critical aspect of both framing progress and driving future work to increase the safety of healthcare.

References

1. Institute of Medicine (IOM). Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
2. Berwick DM. Disseminating innovations in health care. JAMA. 2003;289(15):1969–1975.
3. Pronovost PJ, Goeschel CA, Marsteller JA, Sexton JB, Pham JC, Berenholtz SM. Framework for patient safety research and improvement. Circulation. 2009;119(2):330–337.
4. Halasyamani LK, Davis MM. Conflicting measures of hospital quality: Ratings from “hospital compare” versus “best hospitals.” J Hosp Med. 2007;2(3):128–134.
5. Chassin MR, Loeb JM. High-reliability health care: Getting there from here. Milbank Q. 2013;91(3):459–490.
6. Austin JM, Jha AK, Romano PS, et al. National hospital ratings systems share few common scores and may generate confusion instead of clarity. Health Aff. 2015;34(3):423–430.
7. Mackenzie SJ, Goldmann DA, Perla RJ, Parry GJ. Measuring hospital-wide mortality-pitfalls and potential. J Healthc Qual. 2016;38(3):187–194.
8. Thomas AN, Taylor RJ. An analysis of patient safety incidents associated with medications reported from critical care units in the North West of England between 2009 and 2012. Anaesthesia. 2014;69(7):735–745.
9. Perla RJ, Hohmann SF, Annis K. Whole-patient measure of safety: Using administrative data to assess the probability of highly undesirable events during hospitalization. J Healthc Qual. 2013;35(5):20–31.
10. Hospital Compare. Centers for Medicare & Medicaid Services website. http://medicare.gov/hospitalcompare/search.html. Accessed 8 May 2016.
11. About the Score. Hospital Safety Score, The Leapfrog Group website. http://www.hospitalsafetyscore.org/your-hospitals-safety-score/about-the-score. Accessed 8 May 2016.
12. Healthgrades mortality and complications outcomes 2016 methodology. Healthgrades Website. http://www.healthgrades.com/quality/methodology-mortality-and-complications-outcomes. Accessed 8 May 2016.
13. Levinson DR; Department of Health and Human Services, Office of Inspector General. Adverse events in hospitals: State reporting systems. http://oig.hhs.gov/oei/reports/oei-06-07-00471. Updated December 2008. Accessed 8 May 2016.
14. Levinson DR; Department of Health and Human Services, Office of Inspector General. Adverse events in hospitals: Overview of key issues. http://oig.hhs.gov/oei/reports/oei-06-07-00470.pdf. Updated December 2008. Accessed 8 May 2016.
15. Levinson DR; Department of Health and Human Services, Office of Inspector General. Adverse events in Hospitals: Methods for identifying events. http://oig.hhs.gov/oei/reports/oei-06-08-00221.pdf. Updated March 2010. Accessed 8 May 2016.
16. Levinson DR; Agency for Healthcare Research and Quality. Hospital incident reporting systems do not capture most patient harm. https://psnet.ahrq.gov/resources/resource/23842/hospital-incident-reporting-systems-do-not-capture-most-patient-harm. OEI-06-09-00091. Updated January 2012. Accessed 8 May 2016.
17. Noble DJ, Pronovost PJ. Underreporting of patient safety incidents reduces health care's ability to quantify and accurately measure harm reduction. J Pat Saf. 2010;6(4):247–250.
18. Shekelle PG, Pronovost PJ, Wachter RM, et al. Advancing the science of patient safety. Ann Intern Med. 2011;154(10):693–696.
19. Zhan C, Miller MR. Administrative data based patient safety research: A critical review. Qual Saf Health Care. 2003;12(suppl 2):ii58–ii63.
20. Rivard PE, Luther SL, Christiansen CL, et al. Using patient safety indicators to estimate the impact of potential adverse events on outcomes. Med Care Res Rev. 2008;65(1):67–87.
21. Agency for Healthcare Research and Quality. AHRQ quality indicators toolkit for hospitals 2014. http://www.ahrq.gov/professionals/systems/hospital/qitoolkit/index.html. Accessed 8 May 2016.
22. McDonald KM, Romano PS, Geppert J, et al. Measure of Patient Safety Based on Hospital Administrative Data–the Patient Safety Indicators (Technical Review 5) 2002. Rockville, MD: Agency for Healthcare Research and Quality; 2002.
23. Fact Sheet on Patient Safety Indicators. Agency for Healthcare Research and Quality: Quality Indicators Toolkit. 2014. http://www.ahrq.gov/sites/default/files/wysiwyg/professionals/systems/hospital/qitoolkit/a1b_psifactsheet.pdf. Accessed 8 May 2016.
24. South Carolina Safe Care Commitment. South Carolina hospital association website. http://www.scha.org/south-carolina-safe-care-commitment. Accessed 8 May 2016.
25. Meurer SJ. Mortality Risk Adjustment Methodology for University Health System's Clinical Data Base. 2011. http://archive.ahrq.gov/professionals/quality-patient-safety/quality-resources/tools/mortality/Meurer.pdf. Accessed 8 May 2016.
26. 2010 Standards for Delineating Metropolitan and Micropolitan statistical areas. 2010. http://www.whitehouse.gov/sites/default/files/omb/assets/fedreg_2010/06282010_metro_standards-Complete.pdf. Accessed 8 May 2016.
27. Blumenthal D, Ferris TG. Safety in the academic medical center: Transforming challenges into ingredients for improvement. Acad Med. 2006;81(9):817–822.
28. Keroack MA, Youngberg BJ, Cerese JL, Krsek C, Prellwitz LW, Trevelyan EW. Organizational factors associated with high performance in quality and safety in academic medical centers. Acad Med. 2007;82:1178–1186.
29. About the Partnership for Patients Center for Medicare and Medicaid Services. 2010. https://partnershipforpatients.cms.gov/about-the-partnership/aboutthepartnershipforpatients.html. Accessed May 2016.
30. Makary MA, Daniel M. Medical error—the third leading cause of death in the US. BMJ. 2016;353:i2139.
31. Nolan T, Berwick DM. All-or-none measurement raises the bar on performance. JAMA. 2006;295(10):1168–1170.
32. Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med. 2010;363:2124–2134.
33. Kaplan HC, Brady PW, Dritz MC, et al. The influence of context on quality improvement success in health care: A systematic review of the literature. Milbank Q. 2010;88(4):500–559.
34. Classen D, Resar R, Griffin F, et al. Global “trigger tool” shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff (Millwood). 2011;30:581–589.
35. Groene O, Kristensen S, Arah OA, et al. Feasibility of using administrative data to compare hospital performance in the EU. Int J Qual Health Care. 2014;26(suppl 1):108–115.
36. Masheter CJ, Hougland P, Xu W. Detection of inpatient health care-associated injuries: Comparing two ICD-9-CM code classifications. In: Henriksen K, Battles JB, Marks ES, et al, eds. Advances in Patient Safety: From Research to Implementation (Volume 1: Research Findings). Rockville, MD: Agency for Healthcare Research and Quality; 2005.
37. Patrick SW, Davis MM, Sedman AB, et al. Accuracy of hospital administrative data in reporting central line-associated bloodstream infections in newborns. Pediatrics. 2013;131(suppl 1):S75–S80.
38. O'Leary KJ, Devisetty VK, Patel AR, et al. Comparison of traditional trigger tool to data warehouse based screening for identifying hospital adverse events. BMJ Qual Saf. 2013;22:130–138.
39. Sjoding MW, Iwashyna TJ, Dimick JB, et al. Gaming hospital-level pneumonia 30-day mortality and readmission measures by legitimate changes to diagnostic coding. Crit Care Med. 2015;43(5):989–995.

Authors' Biographies

Christine B. Turley, MD, is chief medical officer at Health Sciences South Carolina, a faculty member at the University of South Carolina School of Medicine, and a pediatrician in the Department of Pediatrics at Palmetto Health Richland. She has a decade of experience as PI on clinical trials and clinical research and is PI on grants from private foundations, the NIH, PCORI, and SCTR.

Jordan Brittingham, MSPH, is a biostatistician at Health Sciences South Carolina, Columbia, SC. There, he manages and conducts in-depth longitudinal analysis of clinical data from four of the largest health systems in South Carolina, stored in the clinical data warehouse (CDW). He works as a biostatistician within the Biostatistics Collaborative Research Core (BCRC) at the University of South Carolina where he assists with data management, consultation, and statistical analysis.

Aunyika Moonan, PhD, MSPH, CPHQ, is the Director of Quality Improvement for the South Carolina Hospital Association in Columbia, South Carolina, a not-for-profit organization made up of some 100 member hospitals and health systems and about 900 personal members associated with its institutional members.

Dianne Davis, BS, works for the South Carolina Revenue and Fiscal Affairs Office (RFA). Her work involves researching, maintaining, and providing independent, professional analysis and information to state and local officials regarding demographic, economic, health, and other data used in developing public policy and effectively administering programs.

Hrishikesh Chakraborty, DrPH, is an associate professor in the Department of Epidemiology and Biostatistics at the University of South Carolina and Director of the Biostatistics Collaborative Research Core, where he provides collaborative and consulting biostatistical support to researchers across South Carolina. He is also the Director of Epidemiology and Biostatistics at Health Sciences South Carolina, Columbia, SC, and has more than 20 years of collaborative, consulting, and secondary data analysis research experience in public health, biomedical, clinical, epidemiologic, population, and social sciences.

Keywords:

healthcare safety; inpatient quality measures; medical errors; reducing preventable harm

© 2018 National Association for Healthcare Quality