Original Article

Improving Timeliness of Internal Medicine Consults in the Emergency Department: A Quality Improvement Initiative

Beckerleg, Weiwei; Hasimja-Saraqini, Delvina; Kwok, Edmund S. H.; Hamdy, Noha; Battram, Erica; Wooller, Krista R.

doi: 10.1097/JHQ.0000000000000235



Emergency department (ED) wait time is an important quality indicator in our health system. Overcrowding in the ED is associated with increased morbidity and mortality, prolonged inpatient stay, and decreased patient satisfaction.1–4 In addition, dissatisfaction with the care provided, overcrowding, and long ED wait times may lead patients to leave without being seen, placing them at greater risk of poorer clinical outcomes.5 For these reasons, government jurisdictions have made reducing emergency wait times a top priority. The Canadian Association of Emergency Physicians has set a median of 8 hours as the target for total length of stay (LOS) in the emergency department.6

Waiting for a consultant or admitting service is one component that can increase the overall LOS in the emergency department,7–9 making up 33–54% of total ED LOS.10,11 Consultations occur in up to 40% of all ED visits.12 General internal medicine (GIM) is the most frequently consulted specialty at our institution, and the mean consult to decision time (CTDT), defined as the time between when a consultation is received by GIM and when a disposition decision is reached, was 4.6 hours for the last 2 quarters of 2016. This was above our institutional target of a CTDT under 3 hours for 70% of admitted patients.

Reducing CTDT poses unique challenges at teaching centers because trainees often require more time to perform assessments and must review with more senior staff before a disposition decision is reached. Bernstein et al2 have shown a reduction in ED LOS when consultation is performed by more senior clinicians. Several studies have demonstrated strategies that can decrease the CTDT specifically in teaching hospitals. Soong et al13 implemented an education program focused on internal medicine residents in an academic teaching hospital. In that study, a 20-minute orientation stressing the importance of triage and the rationale behind decreasing ED wait times was combined with resident-specific audit and feedback throughout the year-long intervention. They achieved a decrease of approximately one hour in the CTDT as well as a small overall decrease in ED LOS for admitted patients. Kachra et al14 used a resident-driven standardized admissions protocol at 3 tertiary care teaching hospitals and achieved a decrease in CTDT of 1.3 hours. Wells et al15 reduced CTDT by restructuring hospitalist work schedules, and Faryniuk and Hochman16 reported a decrease in the delay in consultant arrival after introducing a dedicated acute general surgery team for non-trauma-related cases. Other studies have looked at implementing guidelines,17 reminder texting and daily audit and feedback to hospital leaders,18 and direct consultation to senior physicians,19 with varying degrees of improvement.

The purpose of this quality improvement project was to identify delays in the consultation process for GIM and to trial interventions aimed at reducing the CTDT for GIM patients. Drawing on successful experiences reported in the literature, we planned to trial multiple interventions sequentially, using the Model for Improvement proposed by the Institute for Healthcare Improvement (IHI).20 Figure 1 shows the project outline in accordance with the IHI quality improvement framework. A process mapping exercise was planned first to raise awareness of the current delay in CTDT and to better understand the factors contributing to it. By understanding the components contributing to prolonged CTDT for GIM patients at our institution, interventions could be adopted and modified to address them. The primary aim of the project was to reduce the CTDT to less than 3 hours for 50% of admitted patients on the GIM service. This target was chosen because only 25% of admissions had a CTDT of less than 3 hours before implementation of the project.

Figure 1.
Figure 1.:
Project outline in accordance with IHI quality improvement framework. IHI = Institute for Healthcare Improvement.


Study Design, Setting, and Population

An observational preintervention and postintervention study was conducted to reduce the CTDT of patients admitted to GIM from the EDs of a large tertiary teaching hospital with 2 inpatient campuses and over 1,100 inpatient beds. The quality improvement team included a medicine chief resident, two internal medicine attending physicians who were both quality improvement (QI) leads at their respective sites, two quality improvement specialists from the hospital quality department, and an ED physician who was also a QI lead. The team had the support of both the internal medicine division head and the ED chief. The internal medicine service has inpatient wards at both campuses. During the day, admissions from the EDs are done by a consult team consisting of senior medicine residents (SMRs) under the supervision of an attending physician; at night, the admission team consists of junior medicine residents (JMRs) and medical students supervised by an SMR. On weekdays, the day shift takes place between 08:00 and 17:00, and the night shift between 16:00 and 08:00. During weekends and holidays, residents cover 24-hour shifts, typically from 09:00 to 09:00. All admissions to GIM began with a consult. All internal medicine residents rotate through both sites. For the 2016–2017 year, there were 44,567 admissions to the hospital, of which 7,728 were to GIM. Occupancy in the inpatient units has been rising, averaging 102.2% over 2016 and 2017. This project was reviewed by the institutional research ethics board and exempted from full review due to its observational and quality improvement nature.


Interventions

Interventions took the form of sequential Plan-Do-Study-Act (PDSA) cycles, which tested change ideas while the outcome of CTDT was monitored. The timeline for all steps of the process is summarized in Figure 2. During the first PDSA cycle, a process mapping exercise was held with the participation of SMRs, JMRs, and attending physicians to capture the full process, from the referral of a patient to internal medicine to the submission of admission or discharge orders. In addition, awareness of the current state of prolonged CTDT was raised and its potential impact on patient care emphasized. Advice was sought from participants on how efficiency could be improved. Time stamps associated with each step were also collected during this period from the SMR handover tool, paper sheets submitted at the end of a shift on a voluntary basis. A member of the quality improvement team observed several SMR on-call shifts at both campuses to verify the accurate documentation of time stamps, and comparisons were made with the steps obtained from the process mapping exercise. During the process mapping exercise, many attending physicians and residents reported being unaware of the prolonged CTDT at our institution. In addition, a main concern expressed by residents was the imbalance of staff distribution between daytime and overnight shifts. It was therefore determined that improving education for stakeholders and increasing staff for the overnight shift would be trialed as subsequent interventions.

Figure 2.
Figure 2.:
Flowchart documenting project timeline. *The numbering of weeks denotes the specific time during which interventions were active. For example, PDSA Cycle 3 took place between the 33rd and 39th week of 2017. PDSA = Plan-Do-Study-Act.

In the second PDSA cycle, education sessions (n = 8) for residents on the importance of CTDT were held monthly over 4 months at the two hospital sites. Internal medicine staff met with SMRs at the beginning of the GIM consults rotation to discuss strategies to shorten the CTDT and reduce inefficiencies in the admission process. These strategies were obtained from chief residents and staff physicians. The education session was also given to the new SMRs on orientation day. During these sessions, the adverse effects of prolonged CTDT and ED LOS on patient safety were emphasized, and tips on improving efficiency on consult and triage were given. These included assigning consults to JMRs and students as soon as they are received, setting expectations for junior trainees on time to completion of consults, and improving documentation efficiency by using templates and dictation tools, among others. The same education sessions were given throughout this period to ensure consistency, each lasting 20–30 minutes. Over 90% of SMRs attended these sessions.

In the third PDSA cycle, the quality improvement team emailed weekly audit and feedback of CTDT to residents and attending physicians on service over a period of 8 weeks. The weekly feedback included the CTDT for the GIM consult team as a group as well as one tip of the week on efficient triaging.

Changes to staffing patterns were made in the fourth PDSA cycle to match peak demand. After surveying the resident body and collaborating with the residency program, an evening swing shift (from 4 to 11 pm) for SMRs was added to help cover peak consult volume. This evening shift was piloted at 1 campus first for 4 weeks to confirm feasibility, and it was eventually trialed at both campuses over 8 weeks.

The quality improvement team surveyed residents and attending physicians every 1–2 weeks during PDSA cycles 2–4. Survey questions included the following: (1) opinions on the start and end times for the swing shift, (2) the number of consults seen during the swing shift, (3) the number of times a resident had to remain past the finishing time of the swing shift, and (4) the number of consults left to review for discharge with the attending physicians the next morning. This feedback on the new processes allowed the team to monitor for potential negative consequences.


Measures

Outcome measures included the mean CTDT for patients admitted to GIM and the proportion of admitted patients with a CTDT of less than 3 hours. Data on CTDT were supplied weekly by the hospital's data warehouse and included only patients referred to GIM. Emergency department physicians enter the time of consultation and clerks input the time of admission electronically. Process measures were the number of educational sessions scheduled and their attendance, the number of weekly feedback reports sent to the GIM consult team, and the total number of swing shifts scheduled. Resident satisfaction, the number of patients arriving on the ward with incomplete admission orders, and the percentage of admissions with inpatient LOS <24 hours were balancing measures. The latter was added to assess for a potential increase in "soft admissions" in an effort to decrease CTDT. Resident satisfaction and patients without orders were captured through weekly surveys answered on a voluntary basis, and the percentage of admissions with LOS <24 hours was obtained from the hospital data warehouse.


Analysis

Statistical analyses were performed using Microsoft Excel and SAS version 9.4. T-tests were used to compare mean values because all data were normally distributed, with Tukey adjustment applied for multiple comparisons. Linear regression was used to assess the influence of consult volume as a confounder, because the number of consults GIM receives from the ED is independent of the time spent on them, and the magnitude of change in CTDT might have been influenced by a change in consult volume. A p value of <.05 was chosen to represent statistical significance. We examined the influence of the interventions on CTDT variation over time with a statistical process control chart (X-bar chart).21 Upper and lower control limits were set at 3 standard deviations above and below the mean, respectively. Eight consecutive points above or below the mean signaled special cause variation. Survey responses were summarized as qualitative data, and major themes were extracted from them.
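The control-limit and run-length rules described above can be sketched in code. This is a minimal illustration, not the study's analysis code: it estimates sigma from the spread of the weekly means rather than from within-subgroup variation as a formal X-bar chart would, and the function names are ours.

```python
import numpy as np

def xbar_limits(weekly_means):
    """Center line and 3-sigma control limits for a series of weekly mean CTDTs.

    Simplified: sigma is taken from the spread of the weekly means themselves,
    not from within-week (subgroup) variation as in a textbook X-bar chart.
    """
    x = np.asarray(weekly_means, dtype=float)
    center = x.mean()
    sigma = x.std(ddof=1)
    return center, center - 3 * sigma, center + 3 * sigma

def special_cause_runs(weekly_means, center, run_length=8):
    """Return indices at which a run of `run_length` consecutive points on the
    same side of the center line is in progress (the rule used in the study)."""
    flags = []
    run, side = 0, 0
    for i, v in enumerate(weekly_means):
        s = 1 if v > center else (-1 if v < center else 0)
        run = run + 1 if (s == side and s != 0) else (1 if s != 0 else 0)
        side = s
        if run >= run_length:
            flags.append(i)
    return flags
```

With weekly CTDT means as input, eight or more consecutive weeks below the preintervention center line, as observed after PDSA Cycle 2, would be flagged by `special_cause_runs`.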


Results

Changes in mean CTDT are listed in Table 1, with their respective 95% confidence intervals and p values representing comparisons of mean values between each PDSA cycle and the previous intervention period. Consult to decision time during all the intervention periods combined was also compared with the preintervention period, and overall there was a statistically significant decrease in CTDT (p < .0001). There was a statistically significant reduction in CTDT during PDSA Cycle 1 when compared with the previous period. The adjusted p value for PDSA Cycle 2 showed a trend toward statistical significance when compared with the previous cycle. The proportion of admissions meeting the institutional target of 3 hours' CTDT also increased overall, from 25% before intervention to 33% after intervention (p < .0001).

Table 1. Consult to Decision Time Before and After Intervention

              Before         Process mapping  Education       Weekly          Swing           After intervention
              intervention   exercise         sessions        feedback        shift           (overall)
Mean (hr)     4.61           4.24             3.89            4.16            4.20            4.18
95% CI (hr)   4.46–4.77      4.10–4.38        3.69–4.08       3.97–4.34       3.94–4.47       4.10–4.26
p value       —              .0002; .0021a    .0093; .0698a   .1321; .5564a   .8456; .9997a   <.0001

Note: All p values denote a comparison of CTDT with the period before it, except for after intervention (overall), which denotes a comparison with before intervention.
aAdjusted p value (Tukey adjustment for multiple comparisons); all comparisons were adjusted for consult volume.
CI = confidence interval.

Figure 3 shows a downward shift in both the upper and lower control limits of the mean CTDT in the postintervention period. Regular education sessions in PDSA Cycle 2 appeared to have a significant impact, with more than eight consecutive data points below the mean demonstrated on the statistical process control chart. An outlier occurred at Week 44 of 2017, when the mean CTDT was noticeably higher than in the preceding weeks. An analysis of results showed that this increase was due to a prolonged CTDT of 5.5 hours at Campus 1. The consult volume during Week 44 was comparable with that in other weeks; however, the consult team was short staffed that week because of the unexpected absence of two residents.

Figure 3.
Figure 3.:
Statistical process control chart depicting mean CTDT before and after intervention. *Special cause variation was detected between Weeks 16 and 29 (after PDSA #2), with 8 consecutive data points below the mean. CTDT = consult to decision time. PDSA = Plan-Do-Study-Act; CL = control limit; UCL = upper control limit; LCL = lower control limit.

The weekly consult volume increased over the study period; the mean weekly admissions rose from 104 before interventions to 112 afterward (p = .015). When CTDT and percentage of admissions done under 3 hours were adjusted against a change in consult volume using linear regression, the differences compared with the preintervention period remained similar.
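The volume adjustment described above can be sketched as an ordinary least-squares model with a post-intervention indicator. This is a hypothetical illustration under our own variable names and synthetic data, not the study's SAS code.

```python
import numpy as np

def adjusted_period_effect(ctdt, volume, post_flag):
    """Estimate the pre/post difference in weekly mean CTDT while controlling
    for weekly consult volume, via the model:
        ctdt = b0 + b1 * volume + b2 * post_flag
    Returns b2, the volume-adjusted post-intervention effect (hours)."""
    y = np.asarray(ctdt, dtype=float)
    X = np.column_stack([
        np.ones_like(y),                      # intercept
        np.asarray(volume, dtype=float),      # weekly consult volume
        np.asarray(post_flag, dtype=float),   # 0 = preintervention, 1 = post
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[2]
```

A negative coefficient on the indicator would mean the reduction in CTDT persists after accounting for the rise in weekly consult volume, which is the conclusion the study reports.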

There were no obvious negative unintended consequences identified during the interventions. Residents were receptive to educational sessions and the majority expressed enthusiasm in trialing a change in shift structure. The response rate for weekly surveys conducted during the swing shift trial was approximately 50%. Responses during this cycle were mixed; although many appreciated the improvement in flow and alleviation of stress from having more personnel during the night shift, variability in consult volume and missing formal teaching sessions conducted during the day shift by attending physicians were identified as two major concerns. There were no reports of patients arriving on the floor with incomplete admission orders. The percentages of admissions with inpatient LOS <24 hours were similar, with 6.2% before intervention and 4.5% after intervention.


Limitations

There are several important limitations to our study. First, our study did not include a control group, so it is not possible to draw cause-and-effect associations from the results. Second, our initial process maps were developed from self-reported data, which may be biased. Although the time stamps collected matched those recorded by the observer on randomly selected shifts, the Hawthorne effect, the change in behavior caused by the knowledge of being observed,22 might have been at play. In addition, due to resource limitations, only one observer was available to monitor the accuracy of the time stamps, and only for a minority of shifts; thus, interobserver agreement could not be assessed. We could not rely solely on electronic records for data collection because they did not encompass many of the time points of interest. Despite the above, the mean CTDT calculated from the hospital data warehouse matched that from the self-reported data, with only a slight discrepancy in median CTDT. Third, specific changes in resident behavior after the implementation of interventions could not be determined because postintervention time stamps were not collected due to insufficient manpower. Fourth, other potential confounders, such as patient age and presenting complaints, were not factored into the analyses; they were assumed to be stable over the study period given the absence of major epidemics or population changes in the region. Fifth, due to limitations in resources, this QI initiative focused on CTDT alone and did not consider other important outcomes such as ED LOS, mortality rates, and cost reduction. Finally, this project was conducted at a single center, and only patients referred to GIM were included in the analysis, limiting its generalizability.


Discussion

Mean CTDT was reduced by 0.43 hours (25.8 minutes) after intervention, and the proportion of admitted patients meeting the institutional target of 3 hours' CTDT increased significantly, from 25% before intervention to 33% after intervention. However, we did not meet our initial aim of a CTDT of less than 3 hours for 50% of patients.

The process mapping exercise resulted in a significant decrease in mean CTDT of 22.2 minutes compared with the previous period. Although not statistically lower than that of PDSA Cycle 1, the lowest mean CTDT was observed during PDSA Cycle 2. In addition, special cause variation was apparent during PDSA Cycle 2, with more than 8 consecutive data points below the mean. Although the mean CTDT remained below the preintervention value during PDSA cycles 3 and 4, the magnitude of effect was reduced, as shown in Table 1. Educational interventions are often considered the weakest improvement intervention23,24; however, our study showed that the initial education session followed by targeted monthly education had the greatest impact on reducing CTDT (Figure 2). Given the absence of other changes in the residency program's academic curriculum and trainees over the same period, the decrease in CTDT was likely directly related to the educational interventions. For the impact of the education sessions to be sustained, they would need to be continued into the future; as new residents become SMRs, the importance of efficient consult and triage will have to be reinforced.

There are several possible explanations for this finding. First, trainees may benefit more from educational interventions than established physicians because they are still learning and may take more away from education sessions. Second, the interventions trialed in the later PDSA cycles may have been less effective. Weekly group feedback, as trialed in PDSA Cycle 3, may be less effective than individual feedback. During PDSA Cycle 4, consult volume was variable, and the addition of personnel may have had less impact on workflow during shifts with fewer admissions. On average, 2 patients were seen by SMRs working the evening shift, with a range of 0 to 4 based on survey responses. Due to insufficient human resources, the swing shift could only be scheduled for a small proportion of shifts. Moreover, the swing shift initiative was stopped after a 2-month trial because of concerns over residents missing daytime education sessions. As such, there may not have been enough swing shifts scheduled to affect CTDT in a meaningful manner.


Conclusions

Overall, the interventions implemented throughout the project led to a sustained decrease in CTDT over a 12-month period despite an increase in consult volume. Educational interventions had the greatest impact on CTDT and are also the easiest to implement, increasing their transferability to other settings. Academic teaching centers may improve performance on quality measures by focusing on sustainable educational interventions for trainees.


Implications

Findings from this study show the effectiveness and sustainability of educational interventions in reducing CTDT at an academic teaching hospital. These results will help inform future research efforts on reducing CTDT and, in turn, ED LOS, particularly in teaching centers where unique challenges are present.

Authors' Biographies

Weiwei Beckerleg, MD, is currently a resident in General Internal Medicine (GIM) at the University of Ottawa Faculty of Medicine.

Delvina Hasimja-Saraqini, MD, is an attending physician in GIM at the Ottawa Hospital, and the quality improvement lead for the Division of GIM.

Edmund S. H. Kwok, MD, is an attending physician and the director of quality improvement and patient safety in the Department of Emergency Medicine at the Ottawa Hospital. He is also an assistant professor at the University of Ottawa Faculty of Medicine.

Noha Hamdy, BASc, is a process engineer in the Quality and Patient Safety Department at the Ottawa Hospital.

Erica Battram, BScN, MSc, is a registered nurse and a manager in the Quality and Patient Safety Department at the Ottawa Hospital.

Krista R. Wooller, MD, is an attending physician and the site director for GIM at the Civic Campus of the Ottawa Hospital. She is also an assistant professor at the University of Ottawa Faculty of Medicine.


The authors thank Hanna Kuk and Kylie McNeill (research methodologist, Department of Medicine, The Ottawa Hospital) for their assistance with editing of the manuscript, and Tinghua Zhang (methodologist, The Ottawa Hospital Research Institute) for her help with data analysis.


1. Geelhoed GC, de Klerk NH. Emergency department overcrowding, mortality and the 4-hour rule in Western Australia. Med J Aust. 2012;196:122-126. Erratum in: Med J Aust. 2012 Mar 5;196(4):245.
2. Bernstein SL, Aronsky D, Duseja R, et al. The effect of emergency department crowding on clinically oriented outcomes. Acad Emerg Med. 2009;16(1):1-10.
3. Wang H, Kline JA, Jackson BE, et al. The role of patient perception of crowding in the determination of real-time patient satisfaction at Emergency Department. Int J Qual Health Care. 2017;29(5):722-727.
4. Nippak PM, Isaac WW, Ikeda-Douglas CJ, Marion AM, VandenBroek M. Is there a relation between emergency department and inpatient lengths of stay? Can J Rural Med. 2014;19(1):12-20.
5. Hitti E, Hadid D, Tamim H, Al Hariri M, El Sayed M. Left without being seen in a hybrid point of service collection model emergency department. Am J Emerg Med. 2019 May 18. [Epub ahead of print].
6. Affleck A, Parks P, Drummond A, Rowe BH, Ovens HJ. Emergency department overcrowding and access block. CJEM. 2013;15(6):359-370.
7. Brouns SH, Stassen PM, Lambooij SL, Dieleman J, Vanderfeesten IT, Haak HR. Organisational factors induce prolonged emergency department length of stay in elderly patients: A retrospective cohort study. PLoS One. 2015;10(8):e0135066.
8. Erenler AK, Akbulut S, Guzel M, et al. Reasons for overcrowding in the emergency department: Experiences and suggestions of an education and research hospital. Turk J Emerg Med. 2016;14(2):59-63.
9. van der Veen D, Heringhaus C, de Groot B. Appropriateness, reasons and independent predictors of consultations in the emergency department (ED) of a Dutch tertiary care center: A prospective cohort study. PLoS One. 2016;11(2):e0149079.
10. Lee PA, Rowe BH, Innes G, et al. Assessment of consultation impact on emergency department operations through novel metrics of responsiveness and decision-making efficiency. CJEM. 2014;16(3):185-192.
11. Brick C, Lowes J, Lovstrom L, et al. The impact of consultation on length of stay in tertiary care emergency departments. Emerg Med J. 2014;31(2):134-138.
12. Woods RA, Lee R, Ospina MB, et al. Consultation outcomes in the emergency department: Exploring rates and complexity. CJEM. 2008;10(1):25-31.
13. Soong C, High S, Morgan MW, et al. A novel approach to improving emergency department consultant response times. BMJ Qual Saf. 2013;22:299-305.
14. Kachra R, Walzak A, Hall S, et al. Resident-driven quality improvement pre-post intervention targeting reduction of emergency department decision to admit time. Can J GIM. 2016;11(2):14-20.
15. Wells M, Coates E, Williams B, Blackmore C. Restructuring hospitalist work schedules to improve care timeliness and efficiency. BMJ Open Qual. 2017;6(2):e000028.
16. Faryniuk AM, Hochman DJ. Effect of an acute care surgical service on the timeliness of care. Can J Surg. 2013;56(3):187-191.
17. Geskey JM, Geeting G, West C, Hollenbeak CS. Improved physician consult response times in an academic emergency department after implementation of an institutional guideline. J Emerg Med. 2013;44(5):999-1006.
18. Horng S, Pezzella L, Tibbles CD, Wolfe RE, Hurst JM, Nathanson LA. Prospective evaluation of daily performance metrics to reduce emergency department length of stay for surgical consults. J Emerg Med. 2013;44(2):519-525.
19. Shin S, Lee SH, Kim DH, et al. The impact of the improvement in internal medicine consultation process on ED length of stay. Am J Emerg Med. 2018;36(4):620-624.
20. Institute for Healthcare Improvement. How to Improve. Accessed September 9, 2018.
21. Vetter TR, Morrice D. Statistical process control: No hits, No runs, No errors? Anesth Analg. 2019;128(2):374-382.
22. Sedgwick P, Greenwood N. Understanding the Hawthorne effect. BMJ. 2015;351:h4672.
23. Mostofian F, Ruban C, Simunovic N, Bhandari M. Changing physician behavior: What works? Am J Manag Care. 2015;21(1):75-84.
24. Davis D, O'Brien MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A. Impact of formal continuing medical education: Do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA. 1999;282(9):867-874.

Journal for Healthcare Quality is pleased to offer the opportunity to earn continuing education (CE) credit to those who read this article and take the online posttest. This continuing education offering, JHQ 286 (September/October 2020), will provide 1 hour of CE credit to those who complete it appropriately.

Core CPHQ Examination Content Area

IV. Performance Measurement and Improvement

CE Objectives and Posttest Questions: Improving Timeliness of Internal Medicine Consults in the Emergency Department: A Quality Improvement Initiative

Learning Objectives:

  • 1. Describe the primary aim of the CTDT initiative.
  • 2. Identify the process and balancing measures used in the CTDT initiative.
  • 3. Describe the PDSA cycles in the CTDT initiative and identify the most effective intervention.


  • 1. What is the correct sequence of components in the model for improvement put forth by the Institute for Healthcare Improvement (IHI)?
    1. Aim, measures, changes
    2. Measures, aim, changes
    3. Changes, measures, aim
    4. None of the above
  • 2. What is the primary aim of the CTDT initiative?
    1. Reducing the ED length of stay to 3 hours for 50% of admitted patients for the GIM service.
    2. Reducing the ED length of stay to 3 hours for 70% of admitted patients for the GIM service.
    3. Reducing the CTDT to less than 3 hours for 70% of admitted patients for the GIM service.
    4. Reducing the CTDT to less than 3 hours for 50% of admitted patients for the GIM service.
  • 3. Which of the following best describes process measures in a QI initiative?
    1. Metrics that must be tracked to ensure an improvement in one area is not negatively impacting another area.
    2. Specific steps in a process that lead to a particular outcome metric.
    3. The ultimate quality and/or cost targets for the improvement initiative.
    4. None of the above.
  • 4. Which of the following best describes balancing measures in a QI initiative?
    1. Metrics that must be tracked to ensure an improvement in one area is not negatively impacting another area.
    2. Specific steps in a process that lead to a particular outcome metric.
    3. The ultimate quality and/or cost targets for the improvement initiative.
    4. None of the above.
  • 5. Which of the following is an example of a process measure in our CTDT initiative?
    1. Mean CTDT for patients admitted to GIM.
    2. Proportion of admitted patients with CTDT of less than 3 hours.
    3. Number of patients arriving on the ward with incomplete admission orders.
    4. Number of education sessions scheduled for residents.
  • 6. Which of the following is an example of a balancing measure in our CTDT initiative?
    1. Mean CTDT for patients admitted to GIM.
    2. Proportion of admitted patients with CTDT of less than 3 hours.
    3. Number of patients arriving on the ward with incomplete admission orders.
    4. Number of education sessions scheduled for residents.
  • 7. What usually takes place in the fourth stage of the Plan Do Study Act cycle (i.e., the Act portion of the PDSA cycle)?
    1. Describe modifications for the next cycle based on what you learned.
    2. Describe what actually happened when you ran the test.
    3. Describe the measured results and how they compared to the predictions and what you learned from the cycle.
    4. List the tasks needed to set up this test of change.
  • 8. Which one of the following correctly describes the difference between a run chart and a statistical process control chart?
    1. A run chart has a central line for average, and upper and lower lines for upper and lower control limits.
    2. A statistical control chart has a central line for the average, and upper and lower lines for upper and lower control limits.
    3. A statistical control chart allows you to spot upward and downward trends, but a run chart does not.
    4. Run charts and statistical process control charts are different names of the same thing.
  • 9. Which of the following best describes special cause variation?
    1. Special cause variation is a source of variation caused by factors that result in a steady but random distribution of output around the average of the data.
    2. Special cause variation is also called random variation, noise or non-controllable variation.
    3. Special cause variation is caused by factors that result in non-random distribution of output around the average of the data.
    4. None of the above.
  • 10. Which of the following PDSA cycles tried in the study led to special cause variation?
    1. PDSA cycle 1: process mapping exercise.
    2. PDSA cycle 2: education sessions.
    3. PDSA cycle 3: audit and feedback.
    4. PDSA cycle 4: swing shift.


quality improvement; consult to decision time; ED length of stay; PDSA cycles

© 2019 National Association for Healthcare Quality