
Statistical Process Control

No Hits, No Runs, No Errors?

Vetter, Thomas R., MD, MPH*; Morrice, Douglas, PhD

doi: 10.1213/ANE.0000000000003977
General Articles: Special Article

A novel intervention or new clinical program must achieve and sustain its operational and clinical goals. To demonstrate successfully optimizing health care value, providers and other stakeholders must longitudinally measure and report these relevant tracked outcomes. This includes clinicians and perioperative health services researchers who choose to participate in these process improvement and quality improvement efforts (“play in this space”). Statistical process control is a branch of statistics that combines rigorous sequential, time-based analysis methods with graphical presentation of performance and quality data. Statistical process control and its primary tool—the control chart—provide researchers and practitioners with a method of better understanding and communicating data from health care performance and quality improvement efforts. Statistical process control presents performance and quality data in a format that is typically more understandable to practicing clinicians, administrators, and health care decision makers and often more readily generates actionable insights and conclusions. Health care quality improvement is predicated on statistical process control. Undertaking, achieving, and reporting continuous quality improvement in anesthesiology, critical care, perioperative medicine, and acute and chronic pain management all fundamentally rely on applying statistical process control methods and tools. Thus, the present basic statistical tutorial focuses on the germane topic of statistical process control, including random (common) causes of variation versus assignable (special) causes of variation; Six Sigma versus Lean versus Lean Six Sigma; levels of quality management; run chart; control charts; selecting the applicable type of control chart; and analyzing a control chart. Specific attention is focused on quasi-experimental study designs, which are particularly applicable to process improvement and quality improvement efforts.

From the *Department of Surgery and Perioperative Care, Dell Medical School at the University of Texas at Austin, Austin, Texas.

Bobbie and Coulter R. Sublett Centennial Professorship in Business, Department of Information, Risk, and Operations Management, Red McCombs School of Business at the University of Texas at Austin, Austin, Texas.

Published ahead of print 6 November 2018.

Accepted for publication November 6, 2018.

Funding: None.

The authors declare no conflicts of interest.

Reprints will not be available from the author.

Address correspondence to Thomas R. Vetter, MD, MPH, Department of Surgery and Perioperative Care, Dell Medical School at the University of Texas at Austin, Health Discovery Bldg, Room 6.812, 1701 Trinity St, Austin, TX 78712. Address e-mail to thomas.vetter@austin.utexas.edu.

In God we trust; all others bring data.

Without data, you’re just another person with an opinion.

—W. Edwards Deming (1900–1993), American engineer, statistician, professor, author, lecturer, and management consultant

Increasing emphasis is being placed on delivering value-based health care, in which value can be defined as health outcomes achieved per dollar spent.1 In this value-based health care quotient, the numerator of achieved health outcomes encompasses quality and safety as well as patient and provider satisfaction.2–4

Significant changes in population demographics and health policy are mandating these new value-based models of health care delivery—including for surgical patient care.5,6 Anesthesiologists are well positioned to assume a broader yet continued highly collaborative role in the perioperative care of surgical patients.7

Furthermore, anesthesiologists have considerable experience and demonstrated expertise in the fields of performance improvement and quality improvement. They can thus serve as leaders in this new perioperative care environment,5,7 including in perioperative population health management.8–11

A novel intervention or new clinical program must achieve and sustain its operational and clinical goals.12 To demonstrate successfully optimizing health care value, providers and other stakeholders must longitudinally measure and report these relevant tracked outcomes.13 This includes anesthesiologists and perioperative health services researchers who choose to participate in these process improvement and quality improvement efforts (“play in this space”).14–16

Previous tutorials in this ongoing series in Anesthesia & Analgesia dealt with types of clinical study design17,18 and data analysis.19–23 The present basic statistical tutorial focuses on the related and equally germane topic of statistical process control. It is not intended to provide in-depth coverage24–26 but instead to familiarize the reader with these specific concepts and techniques:

  • Random (common) causes of variation versus assignable (special) causes of variation
  • Six Sigma versus Lean versus Lean Six Sigma
  • Levels of quality management
  • Run chart
  • Control charts
  • Selecting the applicable type of control chart
  • Analyzing a control chart

RESEARCH STUDY DESIGN CLASSIFICATION

As noted in 2 previous tutorials, depending on the circumstances, there are various study designs that are appropriate to apply in conducting clinical or health services research, including process improvement and quality improvement efforts.17,18

These various study designs are conventionally classified as experimental, quasi-experimental, or observational, with observational studies being further divided into descriptive and analytic subcategories (Figure 1).17,18,27–30 In this current tutorial, specific attention is focused on quasi-experimental study designs, which are particularly applicable to process improvement and quality improvement efforts.

QUASI-EXPERIMENTAL STUDY DESIGNS

A quasi-experimental study is typically performed when there are practical and/or ethical barriers to conducting a randomized controlled trial.17,31 A quasi-experimental study design can be applied in practice-based research on performance improvement or quality improvement with a new intervention or health care delivery program.17,31–33 The reader is referred to the work of Shadish et al34 for in-depth coverage of quasi-experimental study designs.

Uncontrolled Before and After Study

An uncontrolled before and after study simply measures ≥1 performance or quality variables before and after the introduction of an intervention in the same study population and at the same care delivery site(s). Any observed before versus after difference in the quality metric or key performance indicator is assumed to be due to the intervention.12,17,31,32,34,35

The findings of an uncontrolled before and after study, which focuses on performance or quality improvement, are often presented graphically using statistical process control methods, including a single Shewhart-style, statistical process control chart for the pre- and postintervention time periods with their respective sequential time points and measurements.17,36–40

Controlled Before and After Study

In a controlled before and after study, a control population with similar demographics is identified. This control population is expected to demonstrate an underlying temporal trend in other performance or behavior that is also similar to the active study population.12,17,31,32,34,41

A sample of performance or quality data is collected simultaneously from both populations before and after the intervention or a new health care delivery program is introduced into the active study population. These performance or quality data are compared, and any observed differences are assumed to be due to the process change.31,32,34,41

The findings of a controlled before and after study, which focuses on performance or quality improvement, can also be presented graphically using statistical process control methods, including 2 Shewhart-style, statistical process control charts for the pre- and postintervention time periods and their respective sequential time points and measurements—1 chart for the active intervention group and the other chart for the control group.17,36–40

Interrupted Time Series

Interrupted time series is considered the strongest quasi-experimental research design for evaluating the longitudinal effects of an intervention.17,34,42–47 An interrupted time series design assesses whether an intervention had a significantly greater effect than the underlying temporal or background trend.12,17,31,32,34,41,44,46,47 This design can be appropriate for evaluating the effects of a wide-scale, system-wide guideline implementation or care delivery process change—situations in which it is difficult to identify a valid control group or randomize study participants.17,31,32,34,41,44,45

Outcomes data are consistently collected at ≥20 approximately equally spaced time points before and then at ≥20 approximately equally spaced time points after the intervention. The multiple time points before implementing the intervention allow the underlying trend to be estimated, whereas the multiple time points after implementing the intervention allow the effect of the intervention to be estimated, potentially accounting for any continued underlying or background trend in the outcome.12,17,31,34,41,46–48

The key distinction is that the results of an interrupted time series are reported as 2 sequential, graphically adjacent, yet distinct scatter plots with their respective trend lines shown—not simply with a conventional Shewhart-style, statistical process control chart for the pre- and postintervention time periods.46,47,49

However, an interrupted time series design may still not adequately compensate for the effects of other known or unknown interventions or events occurring concurrently with the study intervention, which might also affect the performance or quality outcome measure(s).31,32,34 Segmented (also known as hockey stick, piecewise, or broken stick) regression can mitigate this internal validity risk, and its use with an interrupted time series design is now so strongly recommended as to be essentially a prerequisite.41–43,50,51
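For readers who wish to see how such a segmented regression might be specified in practice, the following sketch (in Python, using the statsmodels package) fits the standard level-change and slope-change model to hypothetical monthly outcome data; the data, intervention point, and coefficient values are invented for illustration and are not drawn from this article.

```python
# A minimal segmented-regression sketch for an interrupted time series,
# using simulated monthly data and the statsmodels OLS formula API.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

n_pre, n_post = 24, 24                               # >=20 time points before and after
time = np.arange(1, n_pre + n_post + 1)              # 1, 2, ..., 48 (months)
post = (time > n_pre).astype(int)                    # 0 before, 1 after the intervention
time_after = np.where(post == 1, time - n_pre, 0)    # months elapsed since intervention

# Hypothetical outcome: baseline trend, an immediate level drop, a slope change, plus noise.
outcome = 50 + 0.3 * time - 6.0 * post - 0.2 * time_after + rng.normal(0, 2, time.size)

df = pd.DataFrame({"outcome": outcome, "time": time,
                   "post": post, "time_after": time_after})

# Segmented (piecewise) regression:
#   outcome = b0 + b1*time + b2*post + b3*time_after + error
# b1 = underlying preintervention trend, b2 = immediate level change,
# b3 = change in trend after the intervention.
model = smf.ols("outcome ~ time + post + time_after", data=df).fit()
print(model.summary())
```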

WHAT IS STATISTICAL PROCESS CONTROL?

Statistical process control is a branch of statistics that combines rigorous sequential, time-based analysis methods with graphical presentation of performance and quality data. Statistical process control and its primary tool—the control chart—provide researchers and practitioners with a method of better understanding and communicating data from health care performance and quality improvement efforts.39

The basic theory of statistical process control was developed in the late 1920s by Walter Shewhart, a statistician at the AT&T Bell Laboratories in the United States.52,53 It was then applied in post-World War II Japan and eventually disseminated worldwide by W. Edwards Deming.53,54

Shewhart and Deming originally worked with manufacturing processes, but both recognized that their methodology could be applied to any sort of process.39 In its broader sense, statistical process control is concerned with repeated measurements of a process exhibiting variation. In this broader context, statistical process control has major applicability in clinical care and health care–related quality improvement.

WHY SHOULD YOU CARE ABOUT STATISTICAL PROCESS CONTROL?

Statistical process control (1) presents performance and quality data in a format that is typically more understandable to practicing clinicians, administrators, and health care decision makers; and (2) often more readily generates actionable insights and conclusions.39

Health care quality improvement is predicated on statistical process control.24,25,55 Undertaking, achieving, and reporting continuous quality improvement in anesthesiology, critical care, perioperative medicine, and acute and chronic pain management all fundamentally rely on applying statistical process control methods and tools.

RANDOM OR COMMON CAUSES OF VARIATION VERSUS ASSIGNABLE OR SPECIAL CAUSES OF VARIATION

The notion of random or common causes of variation versus assignable or special causes of variation has its roots in the earliest, seminal work by Shewhart on quality control.56 All processes display variation. The distinction between these 2 types of variation is fundamental to statistical process control.

Random or common causes of variation refer to the random variation or noise inherent in any process, which yields unpredictable outcomes. This means that the outputs of any process, whether in health care, manufacturing, or some other application, are not identically repeatable due to naturally occurring variation in, for example, materials or operating conditions.

Assignable or special causes of variation refer to process variation identifiable and assignable to specific causes. Furthermore, this type of variation can be corrected and controlled. Examples include human error due to lack of training or absence of standard operating procedures and malfunctioning equipment that needs adjustment or repair. Once these sources of variation are detected, appropriate corrective actions eliminate them and restore the process to control. Essentially, statistical process control accounts for common causes of variation and then detects and corrects assignable causes of variation.

Statistical process control relies on the statistical theory of hypothesis testing.57 The null hypothesis assumes that an in-control process displays common causes of variation according to a specific probability distribution. Typically, process observations are sample averages that rely on the Central Limit Theorem, resulting in a normal or Gaussian (“bell shaped”) probability distribution.58

If sampling of the process yields variation with a low probability of being explained by common causes of variation, one rejects the null hypothesis and considers the process out of control. This rejection of the null hypothesis represents the detection step of assignable causes of variation.
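As a concrete illustration of this hypothesis-testing logic, the brief sketch below (using scipy) computes the probability that a normally distributed, in-control statistic falls outside ±3 sigma units by chance alone, which is the conventional false-alarm rate associated with Shewhart-style control limits.

```python
# Probability that an in-control (null hypothesis true), normally distributed
# statistic falls outside +/-3 sigma units by chance alone.
from scipy import stats

false_alarm = 2 * stats.norm.sf(3)          # two-sided tail area beyond 3 SD
print(f"P(|Z| > 3) = {false_alarm:.5f}")    # about 0.0027, ie, roughly 1 in 370 samples
```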

The subsequent correction step requires further investigation to understand the specific causes (eg, personnel make errors on a task for which they have not been adequately trained) and to ensure that corrective action occurs (eg, personnel receive proper training and management institutes standard operating procedures to eliminate these errors).

The distinction between common and assignable causes of variation may be one of degree. As quality management matures, organizations seek to improve their processes by adopting continuous improvement approaches. Under continuous improvement, these organizations take rigorous steps to make more of the variation in the process assignable rather than simply categorizing it as random and unpredictable.59 In the next section, we discuss specific approaches for continuous improvement.

SIX SIGMA VERSUS LEAN VERSUS LEAN SIX SIGMA

Six Sigma and Lean represent 2 continuous quality improvement approaches. These processes have been applied and to varying degrees validated in complex health care environments to eliminate error and effect change.16,60–65

Developed by Motorola and championed by General Electric, Six Sigma exemplifies a philosophical framework and statistical methodology designed to eliminate defects in products and processes.66 We focus on the Motorola approach because it provides the statistical foundation for programs used in practice.

Six Sigma consists of repeated cycling through the following 5 steps: (1) defining the process, (2) measuring outcomes, (3) analyzing the measurements, (4) making improvements, and (5) controlling the improved process.66 For analysis and improvement, Six Sigma relies on comparing the process variability embodied in common cause variation, which is used to construct statistical process control chart limits, against the process specification limits or design tolerances set by managers or customers of the process. Defects result from process outputs that fall outside the specification limits.

Six Sigma achieves continuous improvement by reducing process variability relative to the specification limits—such that ±6 sigma units of process variation fall within these specification limits. This ideally translates to a defect rate of 2 parts per billion opportunities (eg, clinical events). It does so by persistent process improvement, shifting more and more common causes of variation to assignable causes of variation for correction and control. As a result, the probability distribution associated with common cause variation narrows relative to the specification limits, occasioning fewer defects.
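To make the relationship between common cause variation and the specification limits more tangible, the following sketch (hypothetical and simplified, assuming a centered, normally distributed process) tabulates the expected defect rate when the specification limits sit ±3 to ±6 sigma units from the process mean.

```python
# Expected defect (out-of-specification) rate for a normally distributed, centered
# process whose specification limits sit k sigma units from the mean. A hypothetical
# illustration of narrowing common cause variation relative to fixed specification limits.
from scipy import stats

for k in (3, 4, 5, 6):
    defect_rate = 2 * stats.norm.sf(k)   # probability of falling outside +/-k sigma
    print(f"+/-{k} sigma: {defect_rate * 1e9:,.1f} defects per billion opportunities")
```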

The Lean approach has its foundations in the Toyota Production System.67 The overarching idea of waste reduction drives the principles, organizational practices, and methods found in this comprehensive approach to quality management. The principles include a set of ideals to eliminate defects, inventories, setups, breakdowns, and nonvalue-adding activities in processes. Although aspirational, these principles help organizations know where to focus their continuous quality improvement efforts and apply the statistical techniques we discuss in this article, along with process improvement and inventory control practices from the field of operations management.67

Organizationally, the Lean approach empowers and rewards people directly involved in the processes to take responsibility for quality and its continuous improvement. Whenever possible, front-line workers detect and correct quality problems when they occur. They conduct root cause analysis, participate on cross-functional teams to address longer term quality issues, and continually look for ways to improve quality. Lean also involves the standardization of work, flexible capacity, and specialized production methods centered on the idea of “pull” or “Just-in-Time,”68 by which one produces only the required amounts when needed.

Lean Six Sigma represents the amalgamation of these 2 systems. It merges the management practices of Lean with the scientifically based, statistical rigor of Six Sigma to eliminate waste and reduce variation to improve quality in a continuous manner.69 While Lean and Six Sigma both originated in manufacturing, their scope has expanded to eliminate waste in all aspects of an enterprise along the value chain, including product development, supply chain management, and sales and maintenance.70 Additionally, Lean Six Sigma has many different applications across various industries, including health care.65 , 71–73

LEVELS OF QUALITY MANAGEMENT

In practice, quality management programs differ in their levels of maturity, from inspection through process control and, ultimately, to continuous improvement (Figure 2).74

The most rudimentary approaches involve inspection of the quality of outputs generated by a process. The inspection typically entails selecting a sample of process outputs and then determining whether a high enough percentage of the sample passes quality inspection. If the observed passing percentage exceeds a threshold, as determined by inferential statistical analysis, then the procedure deems all process outputs of sufficient quality and accepts them. Otherwise, it rejects them. Hence, the inspection approach screens for bad quality after the fact but does nothing to control or improve it. Consequently, as the old adage goes, “You cannot inspect in quality.”75
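As a rough, hypothetical illustration of this inspection logic (essentially acceptance sampling), the sketch below uses a one-sided binomial test to decide whether to accept a batch of process outputs; the sample size, counts, required pass rate, and significance level are all invented for illustration.

```python
# A rough acceptance-sampling sketch: accept the batch only if the observed passing
# percentage in a random sample is statistically consistent with a required quality
# level. Sample size, counts, and the quality requirement are hypothetical.
from scipy import stats

sample_size = 200
passed = 188                  # items in the sample that passed inspection
required_pass_rate = 0.95     # required long-run passing proportion
alpha = 0.05

# One-sided binomial test: is the true pass rate credibly below the requirement?
result = stats.binomtest(passed, sample_size, required_pass_rate, alternative="less")
decision = "reject batch" if result.pvalue < alpha else "accept batch"
print(f"Observed pass rate = {passed / sample_size:.3f}, "
      f"p = {result.pvalue:.3f} -> {decision}")
```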

Process control involves monitoring a process using statistical process control techniques. If monitoring indicates that the process is producing poor quality (ie, the process has gone out of control), then corrective action seeks to fix the problem and bring the process back into control. This approach keeps the quality of the process outputs within an acceptable range of variation. While process control represents a major advancement beyond inspection, it maintains the quality status quo.

Continuous improvement challenges the quality status quo and leads to ever-improving quality. Consequently, it is the most mature quality program applied in practice. One such approach, namely, Six Sigma, operationalizes continuous improvement by comparing the process variation to the process specifications. The latter are design tolerances set by process managers or consumers of the process outputs. Six Sigma achieves ever-improving quality by relentlessly pursuing process improvement to force more and more process variation inside the process specifications. The approach may also entail setting the standards higher by tightening the specifications to achieve even greater levels of quality.

Because of their more progressive and rigorous nature, process control and continuous improvement have greater relevance and applicability to outputs (outcomes) in health care settings.

RUN CHART

The run chart is the simplest yet still a useful chart for statistical process control. The basic run chart is a plot of the observations of an individual variable in time order. Thus, it is synonymous with a time-series plot. A run chart can detect overall trends, variation, and patterns in the time-ordered data.16,76–79 Use of a run chart has several advantages76,80:

  • It requires no statistical calculations, computer hardware, or software.
  • It can be constructed literally by hand with paper and a pen or pencil.
  • It can be used with any type of process (clinical, financial, or operational).
  • It can be used with virtually any type of data (discrete measurements, counts of events, percentages, or ratios).
  • It can be easily and thus widely understood.

Unlike a control chart, a run chart can be used for data that do not display a normal (Gaussian) distribution. A run chart usually includes a centerline, which represents the median of all the observed values.76–78
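Although a run chart can be drawn entirely by hand, the short sketch below (hypothetical daily data, plotted with matplotlib) illustrates the basic construction: observations in time order with the median as the centerline.

```python
# A minimal run chart sketch: hypothetical daily first-case start delays (minutes),
# plotted in time order with the median as the centerline.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
delays = rng.normal(loc=15, scale=4, size=30)    # hypothetical data
days = np.arange(1, delays.size + 1)

plt.plot(days, delays, marker="o", linestyle="-")
plt.axhline(np.median(delays), color="gray", linestyle="--", label="Median (centerline)")
plt.xlabel("Day")
plt.ylabel("First-case start delay (min)")
plt.title("Run chart (hypothetical data)")
plt.legend()
plt.show()
```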

Run charts have been used to report diverse quality improvement efforts in the perioperative setting, including decreasing operating room turnover,81 improving clinician hand hygiene,82 monitoring changes in wireless hands-free communication patterns during electronic health record system implementation,83 and detecting opioid abuse among anesthesiologists.84

CONTROL CHART

Both a run chart and a control chart are used to distinguish random (common) causes of variation versus assignable (special) causes of variation in the outcomes generated by and data collected about a process. However, the control chart is considered a more sensitive and powerful tool than the run chart.85

Like a run chart, a control chart is a graphic representation of data over time. To the casual observer, control charts can look similar to run charts. Data are plotted with time on the horizontal axis and the process measure on the vertical axis. Like a run chart, a control chart has a centerline, but this centerline represents the mean rather than the median of all the observed values in that time period. A control chart also includes an upper control limit and a lower control limit.16,76,78,80,86

The upper control limit and lower control limit correspond to ±3 SD or sigma units from the mean for the observed sample.78,80 However, the upper control limit and lower control limit of a control chart should not be confused with the 99% upper and lower confidence interval limits of the sample data distribution. The control limits describe the variability in the process, whereas confidence limits describe the variability in a distribution of data.80,85
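The sketch below (hypothetical data) illustrates these basic elements as just described: a time-ordered plot with a mean centerline and upper and lower control limits placed at ±3 SD of the observed sample.

```python
# Minimal control chart structure: time-ordered data with a mean centerline
# and upper/lower control limits at +/-3 SD of the observed sample. Data are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
values = rng.normal(loc=45, scale=5, size=30)    # eg, case turnover times in minutes
t = np.arange(1, values.size + 1)

center = values.mean()
sigma = values.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma

plt.plot(t, values, marker="o")
plt.axhline(center, color="green", label="Mean (centerline)")
plt.axhline(ucl, color="red", linestyle="--", label="UCL (+3 SD)")
plt.axhline(lcl, color="red", linestyle="--", label="LCL (-3 SD)")
plt.xlabel("Observation number")
plt.ylabel("Turnover time (min)")
plt.legend()
plt.show()
```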

Control charts have been applied to report diverse quality improvement efforts in the perioperative setting, including intraoperative glucose monitoring to reduce surgical site infections in patients with diabetes,87 modifications to the electronic medical record to improve the administration of the second antibiotic dose,88 the use of an anesthesia medication template to reduce medication errors during anesthesia,89 and the benefits of a distraction-free pediatric induction zone.90

BASIC TYPES OF CONTROL CHARTS

Control charts are conventionally divided into those applicable for continuous or variables data and those intended for discrete or attributes data. Continuous or variables data can take on different measurement values on a continuous scale. Discrete or attributes data are counts of events that can be aggregated into typically dichotomous (binary, yes/no) categories. With 1 type of discrete data, one can count both the occurrences and nonoccurrences and then calculate the percentage of “defectives.” With the other type of discrete data, one can only count the occurrences, which are regarded as “defects.”78 , 85

I-Chart

The I-chart or X-chart, where X stands for a data value, is simply a run chart with an upper control limit and a lower control limit added. The “I” in I-chart stands for “individual”: each subgroup and its data point comprises a single observation. Like a run chart, an I-chart displays the individual values of the process observations in a time-ordered sequence. Also like a run chart, the primary use of an I-chart is to provide initial insights for a process or quality improvement project, particularly by revealing a nonrandom pattern in, and thus an influence on, the outcome data.78,85,86,91
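In practice, the control limits of an I-chart are commonly derived from the average moving range between successive observations rather than from the overall SD; the sketch below (hypothetical data) uses the conventional 2.66 multiplier described in standard statistical process control texts.

```python
# I-chart (individuals chart) limits estimated from the average moving range,
# a conventional approach: limits = mean +/- 2.66 * average moving range.
# Data are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=30, scale=3, size=25)    # individual observations

moving_range = np.abs(np.diff(x))           # |x_i - x_(i-1)|
mr_bar = moving_range.mean()

center = x.mean()
ucl = center + 2.66 * mr_bar
lcl = center - 2.66 * mr_bar
print(f"Centerline = {center:.2f}, LCL = {lcl:.2f}, UCL = {ucl:.2f}")
```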

X-Bar Chart

The X-bar chart is an extension of the I-chart. X-bar stands for the mean value. With an X-bar chart, each subgroup and its corresponding data point contains >1 observation; the outcome data are measured, and their mean (average) value is thus calculated.78,85,86,92
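A minimal sketch of X-bar chart limits follows (hypothetical subgroups of equal size); for simplicity it uses the grand mean ±3 × (pooled within-subgroup SD/√n) and ignores the small-sample bias-correction constants tabulated in standard statistical process control texts.

```python
# X-bar chart sketch: each plotted point is the mean of a subgroup of observations.
# Limits here use grand mean +/- 3 * (pooled within-subgroup SD / sqrt(n)), a simplified
# form that omits the small-sample bias-correction constants found in standard tables.
# Data are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n_subgroups, n = 20, 5
data = rng.normal(loc=100, scale=8, size=(n_subgroups, n))    # rows = subgroups

subgroup_means = data.mean(axis=1)
grand_mean = subgroup_means.mean()
pooled_sd = np.sqrt(data.var(axis=1, ddof=1).mean())          # within-subgroup variation

ucl = grand_mean + 3 * pooled_sd / np.sqrt(n)
lcl = grand_mean - 3 * pooled_sd / np.sqrt(n)
print(f"Centerline = {grand_mean:.2f}, LCL = {lcl:.2f}, UCL = {ucl:.2f}")
```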

P-Chart

The p-chart is considered the most easily understood and most often applied control chart. For each event, its dichotomous outcome is considered “special” or “not special,” with the special outcome being typically (but not always) adverse or unfavorable. The plotted values are calculated by dividing the count of special outcome events (numerator) by the total event count (denominator). Hence, the “p” in p-chart stands for either “percent” or “proportion.”78,85,86,93 The np-chart is a variation of the p-chart, in which the count of “special outcomes” is plotted; however, the p-chart reportedly always suffices instead.93
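The p-chart control limits vary with each subgroup's denominator; a minimal sketch (hypothetical monthly counts and caseloads) is shown below, using the conventional limits p-bar ± 3 × sqrt(p-bar × (1 − p-bar)/n).

```python
# p-chart sketch: proportion of "special" (eg, adverse) outcomes per period.
# Limits per period i: p_bar +/- 3 * sqrt(p_bar * (1 - p_bar) / n_i).
# Counts and denominators are hypothetical.
import numpy as np

events = np.array([4, 6, 3, 8, 5, 7, 2, 6])                    # special-outcome counts per month
totals = np.array([120, 135, 110, 150, 128, 140, 115, 132])    # total cases per month

p = events / totals
p_bar = events.sum() / totals.sum()                            # overall (weighted) proportion

ucl = p_bar + 3 * np.sqrt(p_bar * (1 - p_bar) / totals)
lcl = np.clip(p_bar - 3 * np.sqrt(p_bar * (1 - p_bar) / totals), 0, None)
for i, (pi, u, l) in enumerate(zip(p, ucl, lcl), start=1):
    print(f"Month {i}: p = {pi:.3f}, LCL = {l:.3f}, UCL = {u:.3f}")
```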

C-Chart

The c-chart is the simplest attribute control chart. Each plotted point is the c value or the count of occurrences of defects for each sampled period. A count cannot exist without an area of opportunity or “exposure.” This so-called area of opportunity is the subgroup size, which for a c-chart is equal and constant (fixed) for each measured time period and its subgroup. The “c” in c-chart stands for count or “constant area of opportunity.”78,85,86,94
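A minimal c-chart sketch (hypothetical weekly counts) follows, using the conventional limits c-bar ± 3 × sqrt(c-bar).

```python
# c-chart sketch: counts of defects per period with a constant area of opportunity.
# Limits: c_bar +/- 3 * sqrt(c_bar). Counts are hypothetical.
import numpy as np

counts = np.array([3, 5, 2, 4, 6, 3, 1, 4, 5, 2])    # eg, medication errors per week
c_bar = counts.mean()

ucl = c_bar + 3 * np.sqrt(c_bar)
lcl = max(c_bar - 3 * np.sqrt(c_bar), 0)             # lower limit cannot fall below 0
print(f"Centerline = {c_bar:.2f}, LCL = {lcl:.2f}, UCL = {ucl:.2f}")
```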

U-Chart

For a u-chart, the area of opportunity or subgroup sample size does not need to be constant. If the subgroup sample sizes vary considerably, a u-chart rather than a c-chart is applicable. The “u” in u-chart stands for “unit” or “unequal area of opportunity.” The sampled subgroups must be compared “on a level playing field.” This is accomplished by dividing the subgroup defect count by the subgroup size, where this size is expressed in the most logical unit of measure. The u-chart is thus also called a “count per unit” chart.78,85,86,94
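A minimal u-chart sketch (hypothetical counts and unit sizes) follows, with limits that vary by subgroup: u-bar ± 3 × sqrt(u-bar/n).

```python
# u-chart sketch: defect counts per period divided by a varying area of opportunity
# (eg, errors per 100 anesthetics). Limits per period i: u_bar +/- 3 * sqrt(u_bar / n_i).
# Counts and unit sizes are hypothetical.
import numpy as np

defects = np.array([7, 4, 9, 5, 6, 8])               # counts per month
units = np.array([3.2, 2.5, 4.1, 2.8, 3.0, 3.6])     # eg, hundreds of anesthetics per month

u = defects / units
u_bar = defects.sum() / units.sum()

ucl = u_bar + 3 * np.sqrt(u_bar / units)
lcl = np.clip(u_bar - 3 * np.sqrt(u_bar / units), 0, None)
for i, (ui, l, uclim) in enumerate(zip(u, lcl, ucl), start=1):
    print(f"Month {i}: u = {ui:.2f}, LCL = {l:.2f}, UCL = {uclim:.2f}")
```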

SELECTING THE APPLICABLE TYPE OF CONTROL CHART

After determining the type of collected data, the next step is to examine a control chart decision algorithm (Figure 3). This algorithm can be exemplified using operating room turnaround time (“wheels out to wheels in”) (Table). Charts for continuous or variables data are more powerful in detecting assignable or special causes of variation than charts for discrete or attributes data. The X-bar chart is similarly more powerful than the I-chart. The c-chart or u-chart is likewise more powerful than the p-chart. Thus, whenever possible, one should seek to measure an activity rather than count events.85,95 Anesthesiologists and perioperative health services researchers undertaking process and quality improvement efforts should attempt to collect their outcomes data such that they will be able to use the better chart, not simply a correct chart.85
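The gist of such a decision algorithm can be expressed as a simple function; the sketch below follows the general branching logic described in this section (continuous versus attribute data, subgroup size, whether only occurrences can be counted, and whether the area of opportunity is constant) and is an illustrative simplification rather than a reproduction of Figure 3.

```python
# A simplified sketch of control chart selection, following the general logic
# described in the text (not a reproduction of Figure 3).
def select_control_chart(continuous: bool,
                         subgroup_size: int = 1,
                         occurrences_only: bool = False,
                         constant_opportunity: bool = True) -> str:
    if continuous:
        # Continuous (variables) data: individual values vs subgroup means.
        return "I-chart" if subgroup_size == 1 else "X-bar chart"
    if not occurrences_only:
        # Both occurrences and nonoccurrences can be counted -> proportion defective.
        return "p-chart"
    # Only occurrences ("defects") can be counted.
    return "c-chart" if constant_opportunity else "u-chart"

# Example: operating room turnaround time measured in minutes, one case at a time.
print(select_control_chart(continuous=True, subgroup_size=1))   # -> I-chart
```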

ANALYZING A CONTROL CHART

Both a run chart and a control chart are intended to distinguish random or common causes of variation versus assignable or special causes of variation in the outcome data produced by a process.85 As noted earlier, assignable or special causes of variation refer to process variation that can be corrected and controlled by an intervention. While beyond the intended scope of this tutorial, a series of rules and pattern recognition criteria has been promulgated for identifying and classifying process variation in run and control charts.16,78,96–98
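As a minimal, illustrative example of the kind of rules referred to here, the sketch below flags 2 commonly cited signals (a single point beyond the ±3 sigma limits and a run of 8 consecutive points on one side of the centerline); the specific rule sets in the cited references differ in their details.

```python
# Two commonly cited control chart signals (a minimal illustration; published rule
# sets such as the Western Electric or Nelson rules differ in their details):
#   1. Any single point beyond the +/-3 sigma control limits.
#   2. A run of 8 consecutive points on the same side of the centerline.
import numpy as np

def flag_signals(values: np.ndarray) -> dict:
    center = values.mean()
    sigma = values.std(ddof=1)
    beyond_limits = np.abs(values - center) > 3 * sigma

    above = values > center
    run_of_eight = np.zeros(values.size, dtype=bool)
    for i in range(7, values.size):
        window = above[i - 7:i + 1]          # the current point and the 7 before it
        if window.all() or (~window).all():
            run_of_eight[i] = True
    return {"beyond_3_sigma": np.flatnonzero(beyond_limits),
            "run_of_8": np.flatnonzero(run_of_eight)}

rng = np.random.default_rng(4)
data = rng.normal(50, 5, size=40)            # hypothetical in-control data
print(flag_signals(data))
```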

CONCLUSIONS

This statistical tutorial focuses on the basics of statistical process control. While statistical process control is principally the content expertise and ostensibly the domain of engineering, manufacturing, and management science, it is also pertinent to patient care, education, and research in anesthesiology, perioperative medicine, critical care, and pain medicine. Thus, our other goal here is to raise awareness of the importance of statistical process control in patient care and clinical research in anesthesiology, perioperative medicine, critical care, and pain medicine.

Even though we provide a conventional, simplistic algorithm for choosing a specific type of control chart (Figure 3), we do not promote a cookbook approach to statistical process control. Moreover, this tutorial is not intended to provide in-depth coverage of the rather expansive field of continuous quality improvement. The so-inclined reader is referred to one of a number of textbooks with in-depth coverage of the rationale and methodology of continuous quality improvement.24,25,99–101

DISCLOSURES

Name: Thomas R. Vetter, MD, MPH.

Contribution: This author helped write and revise the manuscript.

Name: Douglas Morrice, PhD.

Contribution: This author helped write and revise the manuscript.

This manuscript was handled by: Jean-Francois Pittet, MD.

REFERENCES

1. Porter ME. What is value in health care? N Engl J Med. 2010;363:2477–2481.
2. Vetter TR, Ivankova NV, Pittet JF. Patient satisfaction with anesthesia: beauty is in the eye of the consumer. Anesthesiology. 2013;119:245–247.
3. Mohammed K, Nolan MB, Rajjo T. Creating a patient-centered health care delivery system: a systematic review of health care quality from the patient perspective. Am J Med Qual. 2016;31:12–21.
4. Friedberg MW, Chen PG, Van Busum KR, et al. Factors affecting physician professional satisfaction and their implications for patient care, health systems, and health policy. 2013. Available at: www.rand.org/pubs/research_reports/RR439.html. Accessed August 19, 2018.
5. Holt NF. Trends in healthcare and the role of the anesthesiologist in the perioperative surgical home: the US perspective. Curr Opin Anaesthesiol. 2014;27:371–376.
6. Vetter TR, Boudreaux AM, Jones KA, Hunter JM Jr, Pittet JF. The perioperative surgical home: how anesthesiology can collaboratively achieve and leverage the triple aim in health care. Anesth Analg. 2014;118:1131–1136.
7. Prielipp RC, Cohen NH. The future of anesthesiology: implications of the changing healthcare environment. Curr Opin Anaesthesiol. 2016;29:198–205.
8. Boudreaux AM, Vetter TR. A primer on population health management and its perioperative application. Anesth Analg. 2016;123:63–70.
9. Peden CJ, Mythen MG, Vetter TR. Population health management and perioperative medicine: the expanding role of the anesthesiologist. Anesth Analg. 2018;126:397–399.
10. Aronson S, Westover J, Guinn N. A perioperative medicine model for population health: an integrated approach for an evolving clinical science. Anesth Analg. 2018;126:682–690.
11. Aronson S, Sangvai D, McClellan MB. Why a proactive perioperative medicine policy is crucial for a sustainable population health strategy. Anesth Analg. 2018;126:710–712.
12. Shojania KG, Grimshaw JM. Evidence-based quality improvement: the state of the science. Health Aff (Millwood). 2005;24:138–150.
13. Winegar AL, Moxham J, Erlinger TP, Bozic KJ. Value-based healthcare: measuring what matters-engaging surgeons to make measures meaningful and improve clinical practice. Clin Orthop Relat Res. 2018;476:1704–1706.
14. Jones E, Lees N, Martin G, Dixon-Woods M. Describing methods and interventions: a protocol for the systematic analysis of the perioperative quality improvement literature. Syst Rev. 2014;3:98.
15. Jones EL, Lees N, Martin G, Dixon-Woods M. How well is quality improvement described in the perioperative care literature? A systematic review. Jt Comm J Qual Patient Saf. 2016;42:196–206.
16. Valentine EA, Falk SA. Quality improvement in anesthesiology: leveraging data and analytics to optimize outcomes. Anesthesiol Clin. 2018;36:31–44.
17. Vetter TR. Magic mirror, on the wall-which is the right study design of them all? Part I. Anesth Analg. 2017;124:2068–2073.
18. Vetter TR. Magic mirror, on the wall-which is the right study design of them all? Part II. Anesth Analg. 2017;125:328–332.
19. Vetter TR, Mascha EJ. Unadjusted bivariate two-group comparisons: when simpler is better. Anesth Analg. 2018;126:338–342.
20. Schober P, Vetter TR. Repeated measures designs and analysis of longitudinal data: if at first you do not succeed-try, try again. Anesth Analg. 2018;127:569–575.
21. Vetter TR, Schober P. Agreement analysis: what he said, she said versus you said. Anesth Analg. 2018;126:2123–2128.
22. Vetter TR, Schober P. Regression: the apple does not fall far from the tree. Anesth Analg. 2018;127:277–283.
23. Schober P, Vetter TR. Survival analysis and interpretation of time-to-event data: the tortoise and the hare. Anesth Analg. 2018;127:792–798.
24. Carey RG. Improving Healthcare with Control Charts: Basic and Advanced SPC Methods and Case Studies. 2003.Milwaukee, WI: ASQ Quality Press.
25. Carey RG, Lloyd RC. Measuring Quality Improvement in Healthcare: A Guide to Statistical Process Control Applications. 2001.Milwaukee, WI: ASQ Quality Press.
26. Hart MK, Hart RF. Statistical Process Control for Health Care. 2002.Pacific Grove, CA: Duxbury/Thomson Learning.
27. Grimes DA, Schulz KF. An overview of clinical research: the lay of the land. Lancet. 2002;359:57–61.
28. DiPietro NA. Methods in epidemiology: observational study designs. Pharmacotherapy. 2010;30:973–984.
29. Grimes DA, Schulz KF. Clinical research in obstetrics and gynecology: a Baedeker for busy clinicians. Obstet Gynecol Surv. 2002;57:S35–S53.
30. Vetter TR, Chou R. Clinical trial design methodology for pain outcome studies. In: Benzon H, Rathmell J, Wu CM, Turk D, Argoff C, Hurley R, eds. Practical Management of Pain. 2013:5th ed. Philadelphia, PA: Elsevier Inc; 1057–1065.
31. Grimshaw J, Campbell M, Eccles M, Steen N. Experimental and quasi-experimental designs for evaluating guideline implementation strategies. Fam Pract. 2000;17(suppl 1):S11–S16.
32. Eccles M, Grimshaw J, Campbell M, Ramsay C. Research designs for studies evaluating the effectiveness of change and improvement strategies. Qual Saf Health Care. 2003;12:47–52.
33. Handley MA, Schillinger D, Shiboski S. Quasi-experimental designs in practice-based research settings: design and implementation considerations. J Am Board Fam Med. 2011;24:589–596.
34. Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. 2002.Belmont, CA: Wadsworth.
35. Sedgwick P. Before and after study designs. BMJ. 2014;349:g5074.
36. Andersson Hagiwara M, Andersson Gäre B, Elg M. Interrupted time series versus statistical process control in quality improvement projects. J Nurs Care Qual. 2016;31:E1–E8.
37. Langley GJ, Moen R, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2014.2nd ed. San Francisco, CA: Jossey-Bass.
38. Provost LP, Murray SK. The Health Care Data Guide: Learning From Data for Improvement. 2011.San Francisco, CA: Jossey-Bass.
39. Benneyan JC, Lloyd RC, Plsek PE. Statistical process control as a tool for research and healthcare improvement. Qual Saf Health Care. 2003;12:458–464.
40. Thor J, Lundberg J, Ask J. Application of statistical process control in healthcare improvement: systematic review. Qual Saf Health Care. 2007;16:387–399.
41. Handley MA, Lyles CR, McCulloch C, Cattamanchi A. Selecting and improving quasi-experimental designs in effectiveness and implementation research. Annu Rev Public Health. 2018;39:5–25.
42. Penfold RB, Zhang F. Use of interrupted time series analysis in evaluating health care quality improvements. Acad Pediatr. 2013;13:S38–S44.
43. Wagner AK, Soumerai SB, Zhang F, Ross-Degnan D. Segmented regression analysis of interrupted time series studies in medication use research. J Clin Pharm Ther. 2002;27:299–309.
44. Fretheim A, Zhang F, Ross-Degnan D. A reanalysis of cluster randomized trials showed interrupted time-series studies were valuable in health system evaluation. J Clin Epidemiol. 2015;68:324–333.
45. Ewusie JE, Blondal E, Soobiah C. Methods, applications, interpretations and challenges of interrupted time series (ITS) data: protocol for a scoping review. BMJ Open. 2017;7:e016018.
46. Bernal JL, Cummins S, Gasparrini A. Interrupted time series regression for the evaluation of public health interventions: a tutorial. Int J Epidemiol. 2017;46:348–355.
47. Jandoc R, Burden AM, Mamdani M, Lévesque LE, Cadarette SM. Interrupted time series analysis in drug utilization research is increasing: systematic review and recommendations. J Clin Epidemiol. 2015;68:950–956.
48. Crabtree BF, Ray SC, Schmidt PM, O’Connor PJ, Schmidt DD. The individual over time: time series applications in health care research. J Clin Epidemiol. 1990;43:241–260.
49. Fretheim A, Tomic O. Statistical process control and interrupted time series: a golden opportunity for impact evaluation in quality improvement. BMJ Qual Saf. 2015;24:748–752.
50. Taljaard M, McKenzie JE, Ramsay CR, Grimshaw JM. The use of segmented regression in analysing interrupted time series studies: an example in pre-hospital ambulance care. Implement Sci. 2014;9:77.
51. Kontopantelis E, Doran T, Springate DA, Buchan I, Reeves D. Regression based quasi-experimental approach when randomisation is not an option: interrupted time series analysis. BMJ. 2015;350:h2750.
52. Shewhart WA. The Economic Control of Quality of Manufactured Product. 1931.New York, NY: D Van Nostrand.
53. Ryan TP. Introduction. In: Statistical Methods for Quality Improvement. 2011:Oxford, United Kingdom: Wiley-Blackwell; 3–12.
54. Deming WE. Out of the Crisis. 2000.Cambridge, MA: The MIT Press.
55. Mohammed MA. Using statistical process control to improve the quality of health care. Qual Saf Health Care. 2004;13:243–245.
56. Shewhart WA. Scientific basis for control. In: The Economic Control of Quality of Manufactured Product. 1931:New York, NY: D Van Nostrand; 8–25.
57. Vetter TR, Mascha EJ. In the beginning-there is the introduction-and your study hypothesis. Anesth Analg. 2017;124:1709–1711.
58. Mascha EJ, Vetter TR. Significance, errors, power, and sample size: the blocking and tackling of statistics. Anesth Analg. 2018;126:691–698.
59. Cachon G, Terwiesch C. Quality management, statistical process control, and six-sigma capability. In: Matching Supply With Demand: An Introduction to Operations Management. 2013:New York, NY: McGraw-Hill Education; 198–221.
60. Mason SE, Nicolay CR, Darzi A. The use of Lean and Six Sigma methodologies in surgery: a systematic review. Surgeon. 2015;13:91–100.
61. Sedlack JD. The utilization of six sigma and statistical process control techniques in surgical quality improvement. J Healthc Qual. 2010;32:18–26.
62. DelliFraine JL, Langabeer JR II, Nembhard IM. Assessing the evidence of Six Sigma and Lean in the health care industry. Qual Manag Health Care. 2010;19:211–225.
63. Deblois S, Lepanto L. Lean and Six Sigma in acute care: a systematic review of reviews. Int J Health Care Qual Assur. 2016;29:192–208.
64. DelliFraine JL, Wang Z, McCaughey D, Langabeer JR II, Erwin CO. The use of Six Sigma in health care management: are we using it to its full potential? Qual Manag Health Care. 2013;22:210–223.
65. Nicolay CR, Purkayastha S, Greenhalgh A. Systematic review of the application of quality improvement methodologies from the manufacturing industry to surgical healthcare. Br J Surg. 2012;99:324–335.
66. Montgomery DC, Woodall WH. An overview of Six Sigma. Int Stat Rev. 2008;76:329–346.
67. Womack JP, Jones DT, Roos D. The rise of lean production. In: The Machine That Changed the World: The Story of Lean Production. 1991:New York, NY: Harper Perennial; 47–70.
68. Ohno T. Starting from need. In: Toyota Production System: Beyond Large-Scale Production. 1988:Portland, OR: Productivity Press; 1–16.
69. Furterer SL. Lean Six Sigma roadmap. In: Lean Six Sigma Case Studies in the Healthcare Enterprise. 2014:New York, NY: Springer; 11–62.
70. Womack J, Jones D. From lean production to the lean enterprise. Harvard Bus Rev. 1994;72:93–103.
71. Cole B. Lean-Six Sigma for the Public Sector: Leveraging Continuous Process Improvement to Build Better Governments. 2011.Milwaukee, WI: ASQ Quality Press.
72. Arthur J. Lean Six Sigma for Hospitals: Improving Patient Safety, Patient Flow and the Bottom Line. 2016.New York, NY: McGraw-Hill Education.
73. Furterer SL. Lean Six Sigma Case Studies in the Healthcare Enterprise. 2014.New York, NY: Springer.
74. Stevenson WJ. Quality control. In: Operations Management. 2009:10th ed. New York, NY: McGraw-Hill/Irwin; 456–508.
75. Deming WE. Principles for transformation of western management. In: Out of Crisis. 2000.Cambridge, MA: The MIT Press.
76. Carey RG. Basic SPC concepts and the run chart. In: Improving Healthcare with Control Charts: Basic and Advanced SPC Methods and Case Studies. 2003:Milwaukee, WI: ASQ Quality Press; 3–12.
77. Hart MK, Hart RF. The run chart for time-ordered variables data. In: Statistical Process Control for Health Care. 2002:Pacific Grove, CA: Duxbury/Thomson Learning; 45–55.
78. Carey RG, Lloyd RC. Using run and control charts to analyze process variation. In: Measuring Quality Improvement in Healthcare: A Guide to Statistical Process Control Applications. 2001:Milwaukee, WI: ASQ Quality Press; 53–77.
79. Provost LP, Murray SK. Understanding variation using run charts. In: The Health Care Data Guide: Learning From Data for Improvement. 2011:San Francisco, CA: Jossey-Bass; 62–106.
80. Varughese AM, Hagerman NS, Kurth CD. Quality in pediatric anesthesia. Paediatr Anaesth. 2010;20:684–696.
81. Fletcher D, Edwards D, Tolchard S, Baker R, Berstock J. Improving theatre turnaround time. BMJ Qual Improv Rep. 2017 Feb 10;6.
82. Pimentel MPT, Feng AY, Piszcz R, Urman RD, Lekowski RWJ, Nascimben L. Resident-driven quality improvement project in perioperative hand hygiene. J Patient Saf. 2017 [Epub ahead of print].
83. Friend TH, Jennings SJ, Levine WC. Communication patterns in the perioperative environment during epic electronic health record system implementation. J Med Syst. 2017;41:22.
84. Chisholm AB, Harrison MJ. Opioid abuse amongst anaesthetists: a system to detect personal usage. Anaesth Intensive Care. 2009;37:267–271.
85. Carey RG. Control chart theory simplified. In: Improving Healthcare with Control Charts: Basic and Advanced SPC Methods and Case Studies. 2003:Milwaukee, WI: ASQ Quality Press; 13–26.
86. Provost LP, Murray SK. Understanding variation using Shewhart charts. In: The Health Care Data Guide: Learning From Data for Improvement. 2011:San Francisco, CA: Jossey-Bass; 149–191.
87. Ehrenfeld JM, Wanderer JP, Terekhov M, Rothman BS, Sandberg WS. A perioperative systems design to improve intraoperative glucose monitoring is associated with a reduction in surgical site infections in a diabetic patient population. Anesthesiology. 2017;126:431–440.
88. Hincker A, Ben Abdallah A, Avidan M, Candelario P, Helsten D. Electronic medical record interventions and recurrent perioperative antibiotic administration: a before-and-after study. Can J Anaesth. 2017;64:716–723.
89. Grigg EB, Martin LD, Ross FJ. Assessing the impact of the anesthesia medication template on medication errors during anesthesia: a prospective study. Anesth Analg. 2017;124:1617–1625.
90. Crockett CJ, Donahue BS, Vandivier DC. Distraction-free induction zone: a quality improvement initiative at a large academic children’s hospital to improve the quality and safety of anesthetic care for our patients. Anesth Analg. 2018 [Epub ahead of print].
91. Hart MK, Hart RF. Control chart theory and the I chart for time-ordered data. In: Statistical Process Control for Health Care. 2002:Pacific Grove, CA: Duxbury/Thomson Learning; 57–94.
92. Hart MK, Hart RF. The Xbar and s chart. In: Statistical Process Control for Health Care. 2002:Pacific Grove, CA: Duxbury/Thomson Learning; 95–129.
93. Hart MK, Hart RF. Using attribute data: the p chart. In: Statistical Process Control for Health Care. 2002:Pacific Grove, CA: Duxbury/Thomson Learning; 189–241.
94. Hart MK, Hart RF. Using attribute data: the c chart and the u chart. In: Statistical Process Control for Health Care. 2002:Pacific Grove, CA: Duxbury/Thomson Learning; 157–187.
95. Carey RG. Limitations of attribute charts. In: Improving Healthcare With Control Charts: Basic and Advanced SPC Methods and Case Studies. 2003:Milwaukee, WI: ASQ Quality Press; 71–93.
96. Carey RG. Drilling down into aggregated data. In: Improving Healthcare With Control Charts: Basic and Advanced SPC Methods and Case Studies. 2003:Milwaukee, WI: ASQ Quality Press; 29–51.
97. George ML, Rowlands D, Price M, Jaminet P. Variation analysis. In: The Lean Six Sigma Pocket Toolbook: A Quick Reference Guide to Nearly 100 Tools for Improving Process Quality, Speed, and Complexity. 2005:New York, NY: McGraw-Hill; 117–140.
98. Provost LP, Murray SK. Learning from variation in data. In: The Health Care Data Guide: Learning From Data for Improvement. 2011:San Francisco, CA: Jossey-Bass; 107–148.
99. Sollecito WA, Johnson JK. Mclaughlin and Kaluzny’s Continuous Quality Improvement in Health Care. 2018.5th ed. Sudbury, MA: Jones & Bartlett Learning.
100. Norman CL, Nolan TW, Moen R, Provost LP, Nolan KM, Langley GJ. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2009.2nd ed. San Francisco, CA: Jossey-Bass.
101. Joshi M, Ransom ER, Nash DB, Ransom SB. The Healthcare Quality Book: Vision, Strategy, and Tools. 2014.3rd ed. Chicago, IL: Health Administration Press.
© 2019 International Anesthesia Research Society