
The Surgical Care Improvement Project Antibiotic Guidelines: Should We Expect More Than Good Intentions?

Schonberger, Robert B. MD, MA; Barash, Paul G. MD; Lagasse, Robert S. MD

doi: 10.1213/ANE.0000000000000735
Patient Safety: Special Article

Since 2006, the Surgical Care Improvement Project (SCIP) has promoted 3 perioperative antibiotic recommendations designed to reduce the incidence of surgical site infections. Despite good evidence for the efficacy of these recommendations, the efforts of SCIP have not measurably improved the rates of surgical site infections. We offer 3 arguments as to why SCIP has fallen short of expectations. We then suggest a reorientation of quality improvement efforts to focus less on reporting and incentivizing adherence to imperfect metrics, and more on creating local and regional quality collaboratives to educate clinicians about how to improve practice. Ultimately, successful quality improvement projects are behavioral interventions that will only succeed to the degree that they motivate individual clinicians, practicing within a particular context, to do the difficult work of identifying failures and iteratively working toward excellence.

From the Department of Anesthesiology, Yale School of Medicine, New Haven, Connecticut.

Accepted for publication January 24, 2015.

Funding: Dr. Schonberger is supported in part by the National Heart Lung and Blood Institute (NHLBI) of the National Institutes of Health (NIH) under grant award K23HL116641. The views expressed herein are those of the authors and do not necessarily represent the views of NHLBI, NIH, or the United States Government.

The authors declare no conflicts of interest.

Reprints will not be available from the authors.

Address correspondence to Robert B. Schonberger, MD, MA, Department of Anesthesiology, Yale School of Medicine, 333 Cedar St., TMP-3, P.O. Box 208051, New Haven, CT 06520. Address e-mail to

Since its inception in 2006, the Surgical Care Improvement Project (SCIP) has promoted 3 perioperative antibiotic recommendations as one component of an ambitious goal to reduce overall surgical complication rates by 25% before 2011.1 Although SCIP based its antibiotic recommendations on several high-quality studies demonstrating good efficacy, the project fell short. Indeed, a 2011 article concluded that far from contributing to a portion of the hoped-for 25% reduction in complications, “SCIP infection prevention measures did not yield measurable improvement in [surgical-site infections].”2 In the present article, we begin with a brief overview of the historical development of SCIP and then explore why its perioperative prophylactic antibiotic recommendations have failed to improve surgical outcomes. Although SCIP began with a set of well-validated patient care guidelines, numerous factors likely contributed to its failure to meet its own benchmark. Contributing factors may have included an aging population with greater comorbid burdens, increases in antibiotic resistance, and the reluctance of individual practitioners to adopt best practices. However, amid the multitude of explanations for the failure of SCIP, we believe that 3 factors in particular have been important determinants of its limited achievement:

  1. The delayed launch of SCIP in relation to the initial adoption by clinicians of the underlying care measures.
  2. The inability of SCIP to quantify the true spectrum of quality care through dichotomous (i.e., all-or-none) process measures.
  3. The dependence of SCIP on unvalidated performance data that are subject to corruption by pay-for-performance incentives.

Each of these factors has impaired the effectiveness of SCIP. Together, the ongoing costs of SCIP and its limited ability to improve outcomes provide a cautionary tale regarding national top-down quality improvement efforts in general. We conclude with some suggestions for reconceiving future quality improvement programs with a focus on: (a) local and regional specificity; and (b) the critical importance of inspiring a culture of improvement among institutions with an ethic of care and among the individuals who work in them.



In 1999, the Centers for Disease Control and Prevention (CDC), as part of a multipronged effort to reduce the public health burden of surgical site infections (SSIs), published a 30-page guideline that included recommendations for targeted perioperative antibiotic prophylaxis.3 The small proportion of the 1999 CDC guidelines that discussed perioperative antibiotics emerged out of a rich scientific literature that included multiple randomized controlled trials, prospective observational trials, and meta-analyses.4–8 This literature established the efficacy of preoperative antibiotic administration in the 2 hours before incision as an important component of reducing SSIs in selected populations. The clinical data were buttressed by laboratory data offering additional scientific rationale for the importance of achieving therapeutic levels of antibiotics in a timely manner when treating tissues with a high risk of surgical infection.9–11

In the ensuing years, the CDC joined with the Centers for Medicare and Medicaid Services (CMS) to create the Surgical Infection Prevention (SIP) Project in 2002, which led to SCIP in 2006. In 2004, the SIP Project included appropriate perioperative antibiotic administration in its core recommendations to reduce surgical morbidity. To this day, the original CDC guidelines regarding perioperative antibiotics continue in the form of SCIP’s present prophylactic antibiotic recommendations (Table 1).

Table 1

The studies on which the CDC guidelines were based included a variety of populations and outcome measures. A representative subset of the CDC evidence is listed in Table 2. As would be expected, baseline rates of infection and the absolute reductions in rates of SSIs varied widely, but all of the listed studies showed that timely antibiotic prophylaxis was superior to no antibiotic prophylaxis. The number needed to treat (NNT), a measure frequently used in evidence-based medicine (and throughout the present article) to convey the clinical relevance of a given intervention,12 ranged from 5.3 to 37 patients to prevent 1 SSI across the different surgical populations. This evidence led to well-intended SCIP recommendations, but at least 3 factors have markedly impaired the apparent effectiveness of their implementation.
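The arithmetic behind the number needed to treat is simple: NNT is the reciprocal of the absolute risk reduction (ARR). A minimal sketch in Python, using hypothetical infection rates chosen only to illustrate the 5.3 and 37 endpoints quoted above (the rates themselves are not drawn from the cited studies):

```python
def number_needed_to_treat(control_rate, treated_rate):
    """NNT = 1 / ARR, where ARR is the absolute risk reduction
    (control event rate minus treated event rate)."""
    arr = control_rate - treated_rate
    if arr <= 0:
        raise ValueError("intervention shows no absolute risk reduction")
    return 1.0 / arr

# Hypothetical rates: an ARR of ~18.9% implies an NNT near 5.3,
# whereas an ARR of ~2.7% implies an NNT near 37.
print(round(number_needed_to_treat(0.25, 0.061), 1))  # ~5.3
print(round(number_needed_to_treat(0.05, 0.023), 1))  # ~37.0
```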

Table 2



Delayed Launch of SCIP in Relation to the Widespread Adoption by Clinicians of the Underlying Care Measures

In the case of antibiotic prophylaxis, clinical practice by anesthesiologists began to change before the formation of the quality partnership of SCIP. By the time SCIP was inaugurated in 2006, the original 1999 CDC guidelines, and the studies on which they were based, had already led to a fairly high rate of adoption of clinical standards for perioperative antibiotic prophylaxis. Documented adherence to SCIP infection measure 1 (INF-1, regarding timely antibiotic administration) was 55.7% in 2001, rising to as high as 76% in 2002.13,14 Regarding SCIP INF-2 (regarding appropriate antibiotic selection), studies found that adherence ranged from 90% to 92.6% in the early 2000s. Although these numbers left room for improvement, such high baseline adherence rates inevitably diluted the apparent benefit of improved adherence to the SCIP antibiotic prophylaxis recommendations in follow-up effectiveness trials. As an illustration, assuming that the effect sizes for reduction in SSIs in Table 2 remain constant, an increase in adherence to timely antibiotic administration (SCIP INF-1) from 0% to 76% among the control groups across the 5 studies13,14 would markedly increase the number of patients needed to treat to prevent 1 additional SSI from the range of 5.3 to 37 to the range of 21.9 to 154. This dilution of apparent effectiveness is likely to have been compounded by high compliance with other SCIP INF recommendations, specifically the >90% adherence to appropriate antibiotic selection (SCIP INF-2).
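The dilution described above can be sketched arithmetically: when a fraction of the control group already receives timely prophylaxis, only the untreated remainder can still benefit, so the achievable absolute risk reduction shrinks and the NNT grows by the inverse of that remainder. A minimal Python illustration of this reasoning (our own sketch, not a formula published by SCIP):

```python
def diluted_nnt(nnt_vs_no_prophylaxis, control_adherence):
    """When control patients already receive timely prophylaxis at rate
    `control_adherence`, only the remaining (1 - control_adherence)
    fraction can still benefit, so the achievable absolute risk
    reduction shrinks by that factor and the NNT grows by its inverse."""
    return nnt_vs_no_prophylaxis / (1.0 - control_adherence)

# Applying the 76% baseline adherence to the Table 2 NNT range of 5.3-37
# approximately reproduces the diluted range of roughly 22 to 154:
for nnt in (5.3, 37.0):
    print(round(diluted_nnt(nnt, 0.76), 1))  # ~22.1, then ~154.2
```

The small discrepancy between the computed ~22.1 and the article's 21.9 presumably reflects rounding in the underlying adherence estimates.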

As might be expected, given the good baseline clinical practice that already existed, intensive efforts to increase adherence to the SCIP recommendations succeeded in improving reported adherence rates, but the marginal benefit experienced by patients proved vanishingly small. For example, in one prospective study, Dellinger et al.14 found that the absolute reduction in the incidence of SSIs after increasing the already substantial adherence to the SCIP INF recommendations was 0.6% (P = 0.10). The paradoxical disconnect between the well-documented efficacy of appropriate prophylactic perioperative antibiotic administration and the repeated reports of a lack of effectiveness of the SCIP INF recommendations may not be so paradoxical after all, despite continuing to confound many in the medical and health policy community.15–17

To strengthen the argument that the SCIP antibiotic guidelines were well founded but enacted too slowly to make a large difference in observed outcomes, it is helpful to look at a study of an infection reduction bundle that stretched from 2000 to 2006, and therefore spanned an extended adoption period of the CDC guidelines before the establishment of SCIP.18 Hedrick et al.18 showed the effectiveness of increased adherence to an infection reduction bundle with an absolute reduction in rates of SSI after colorectal surgery of nearly 10% (number needed to treat 10.3). This represents an effect size >16 times that seen in the study by Dellinger et al. that began just 2 years later in 2002.14 Even if relative reductions in SSIs had remained steady, they would have corresponded to progressively smaller and harder-to-detect absolute reductions. The association between marginal improvements in care and better outcomes becomes increasingly difficult to detect as baseline care improves.
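The >16-fold comparison above can be checked directly: an NNT of 10.3 corresponds to an absolute risk reduction of roughly 1/10.3, or about 9.7%, versus the 0.6% absolute reduction reported by Dellinger et al. A quick arithmetic check in Python:

```python
# ARR implied by the Hedrick et al. NNT of 10.3, versus the 0.6%
# absolute reduction reported by Dellinger et al.
arr_hedrick = 1.0 / 10.3      # ~0.097, i.e., ~9.7% absolute reduction
arr_dellinger = 0.006         # 0.6% absolute reduction
ratio = arr_hedrick / arr_dellinger
print(round(arr_hedrick * 100, 1))  # ~9.7 (% absolute reduction)
print(round(ratio, 1))              # ~16.2, consistent with ">16 times"
```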


Inability of SCIP to Quantify Quality of Care with Dichotomous (i.e., All-or-None) Process Measures

In addition to the relative decline in effectiveness over time, as illustrated above, the effectiveness of the evidence-based SCIP INF recommendations was likely further diluted by all-or-none process measures that cannot grade adherence along a continuum.

Regarding SCIP INF-1, the process measure treats the continuous variable of antibiotic time-before-incision as a dichotomous variable, sharply demarcated at 60 minutes (or 120 minutes in the case of vancomycin and fluoroquinolones). This discontinuous treatment of time by the SCIP INF-1 process measure naturally means that the difference between adherence and nonadherence to timely antibiotic administration may have a tenuous relation to improved outcomes or to the original evidentiary foundations for the measure.2 One of the few interventional studies that discussed the limitations of a dichotomous measure of time noted that procedures with a long setup (e.g., cardiac surgery cases or prone-position cases) sometimes demonstrated nonadherence because the correct antibiotics had been given shortly before the 60-minute window.19 In 1992, Classen et al.6 demonstrated the importance of timely antibiotic administration for rates of SSIs, but their principal finding was a significant difference in the efficacy of prophylactic antibiotics given in reasonable proximity to the time of incision versus long before or long after incision. The difference in efficacy between adjacent time periods was extremely modest. Interestingly, antibiotics given within the hour before incision versus the hour after incision demonstrated a nonsignificant absolute reduction in SSI rates of 0.6% (P > 0.05). On a similar note, in our post hoc analysis comparing the 1-hour preoperative window versus the 2-hour preoperative window, the observed absolute reduction in the rate of SSIs was 0.2% (P > 0.05).6 In 2013, Hawn et al.2 correctly noted, “The empirical superiority of the 60-minute timing metric has yet to be substantiated.” Unfortunately, a dichotomous process measure such as SCIP INF-1 is insensitive to the difference between antibiotics given 61 minutes before incision and antibiotics never given at all.
Subtle changes along the continuum of care that have no effect on SSIs may receive inappropriate weight in effectiveness trials using SCIP measures. Invalid judgments about quality have been shown to result when dichotomous care metrics are used to evaluate a continuum of care in other medical environments.20 By failing to differentiate meaningful adherence from strict adherence, the SCIP care metrics further diluted their apparent effectiveness.


Pay-for-Performance—Regulation, Accreditation, and Reimbursement Incentives Have Corrupted the Reporting of Process Measures

Starting with the Medicare Prescription Drug, Improvement, and Modernization Act of 2003, CMS began to incentivize the reporting of quality metrics. With the Deficit Reduction Act of 2005, CMS increased its penalty for institutions that did not report quality metrics and made the metrics available to the public.21 Hospitals responded predictably, with the number of hospitals reporting data increasing by a factor of 50 between 2002 and 2005. The number of reporting hospitals nearly doubled again in 2006, coincident with the increasing financial incentives for reporting that accompanied passage of the Deficit Reduction Act.22 The move toward pay-for-performance may have been well intended, and a number of investigators have demonstrated this technique as a valid means to induce behavior change in medical providers;23,24 however, despite these good intentions, subsequent studies have demonstrated that improvements in outcomes associated with pay-for-performance are no greater than improvements seen in the absence of such incentives.25 Indeed, the unintended consequences of the financial incentives imposed by CMS have likely impaired the utility of SCIP more than they have helped it.

To see the harm in pay-for-performance, it is important to differentiate between incentives for increasing reported adherence and incentives that require actual adherence. This difference has been underemphasized in the public square, whereas the drumbeat to increase reported adherence has grown steadily louder.

Incentives that reward reported adherence to the SCIP INF process measures have led to the creation of several decision-support tools within multiple anesthesia-specific electronic health record (EHR) environments that succeeded in improving apparent adherence to SCIP INF-1 and INF-2.19,26 But as improved compliance reporting has become a selling point in a crowded EHR marketplace, some EHRs use creative user interfaces that make it almost impossible not to check the correct boxes. For example, an EHR designed so that the electronic checkbox indicating a surgical incision appears behind the electronic checkbox indicating antibiotics administered might increase provider awareness, but it would certainly assure an increase in reported adherence. When facing such a hard stop in EHR workflow, a provider might conceivably administer antibiotics within seconds of the incision, click the correct boxes in the correct order, satisfy the SCIP INF-1 metric, and fail to achieve optimal antibiotic prophylaxis. In another example, an EHR with increasing national penetrance includes a SCIP compliance section to be filled out by the anesthesia provider after the surgery has been completed but before the bill can be submitted; it reads SCIP—Antibiotics Administered? (Fig. 1). The EHR in this case, although epic in its ambitions, remains unable to adjust patient outcomes depending on whether the provider clicks the yes box or the no box, but it can certainly affect local performance incentives and public opinion. The 2 EHR examples above, although anecdotal, illustrate a predictable discordance between documented and meaningful adherence that is consistent with what has been reported in the peer-reviewed literature. For example, in a study of actual-versus-apparent adherence to a pediatric perioperative antibiotic protocol, Hawkins et al.27 found 100% nurse-documented adherence despite 48% actual adherence to all aspects of the protocol.
Many of the discrepancies in the Hawkins study were innocent errors, but our point is not to define reasons for discordance between the documentation of care measures and actual care. Rather, in our opinion, what matters is that the discordance between documentation and reality, no matter the cause, will undoubtedly serve to mask deficiencies and impair the effectiveness of quality improvement interventions. In this regard, well-designed decision-support tools should play an important role in contributing to antibiotic guideline adherence, especially when paired with effective, if still somewhat cumbersome, medication-scanning technologies.28,29 However, the SCIP national pay-for-performance incentives, as formulated presently, pay for correctly checking boxes without a concerted effort to validate actual care delivery. As a recent editorial in JAMA Internal Medicine argued, “…public reporting and pay-for-performance systems shift the focus of quality improvement to documentation.” Hospitals must invest in “…actual performance improvement rather than in merely creating its appearance.”30

Figure 1

In parallel with regulators and the deficit reduction efforts of Congress, accrediting organizations have been party to further incentivizing apparent adherence by encouraging the public reporting of poorly validated data. In 1997, The Joint Commission (then known as The Joint Commission on Accreditation of Healthcare Organizations) began its ORYX initiative, an effort to require health care organizations to measure and submit a list of specific performance metrics as part of the accreditation process.31 By compelling hospitals to submit performance metrics and publicly reporting the results through ORYX, The Joint Commission further incentivized gamesmanship surrounding the SCIP measures. Gaming performance metrics is by no means unique to SCIP, and it has recently received extensive publicity in other health care contexts, such as in the Veterans Health Administration, in which methods of validation regarding patient wait times were found lacking.32,33

As expected with so many incentives in place, public reporting of SCIP INF-1 to INF-3 adherence has become almost universally high across the nation, with very little variation. Using hospital referral regions (HRRs) from the Dartmouth Atlas of Health Care,34 a recent study found median reported adherence rates to SCIP INF-1 across the majority of HRRs to be >95% (Fig. 2), with most HRRs containing either 0 or 1 negative outlier hospital.35 Because our system continues to incentivize the appearance of adherence, it should come as no surprise that nearly all hospitals have started to look pretty good. But most positive changes in perioperative antibiotic practice likely already occurred in response to CDC guidelines and in the early years of SIP, before SCIP established its goal to reduce surgical complications by 25% and CMS began its pay-for-reporting efforts. The heartening truth may be that care improved not as a result of public reporting and the Deficit Reduction Act, but because anesthesiologists were intent upon providing good care to their patients.

Figure 2



So far, we have argued that 3 factors have contributed substantially to the apparent failure of the SCIP antibiotic guidelines: (1) the delayed launch of SCIP in relation to the initial adoption of the underlying care measures by clinicians; (2) the inability of SCIP to quantify the spectrum of quality of care using dichotomous process measures; and (3) the reliance of SCIP on unvalidated performance data that have been corrupted by pay-for-performance incentives. In an effort to improve the value of hospital accreditation, The Joint Commission has been engaged in trying to reform and expand the scope of its performance metrics. Recently, the former head of The Joint Commission, Dr. Mark Chassin, attempted to create a new framework for considering performance metrics by advocating for a focus on so-called accountability measures. Accountability measures were promoted as the subset of performance measures deemed to convey the most direct and significant health benefits to patients.36 Four criteria were offered to determine whether process-of-care metrics should be considered as accountability measures:

  1. Strong evidence shows that the care process leads to improved outcomes.
  2. The measure accurately captures whether the care process was, in fact, provided.
  3. The measure has few intervening care processes that must occur before the improved outcome is realized.
  4. Implementing the measure has minimal chance of inducing unintended adverse consequences.

In 2010, shortly after Chassin et al.36 published their framework, The Joint Commission added the SCIP measures to the newly rarefied list of accountability measures. However, comparing the 4 criteria required of accountability measures to the attributes of SCIP INF-1 to INF-3, the decision to incorporate the SCIP INF process measures seems unjustified (Table 3).

Table 3

Regarding SCIP INF-1 and SCIP INF-2, there is strong evidence that giving an appropriate prophylactic antibiotic within a limited window of time before the surgical incision represents good clinical practice, that the steps themselves are likely to improve outcomes, and that adverse consequences are unlikely. However, as shown, it is simply false to assert that the measures accurately capture whether the care actually happened.

On this point, it is worth noting that smoking cessation counseling, a longstanding ORYX quality metric, did not make the cut as an accountability measure. This occurred even though smoking cessation efforts received increasing attention as a component of critical public health efforts to reduce cardiovascular morbidity, including their being prominently featured in the widely publicized Million Hearts Initiative.37 Chassin et al. argued that smoking cessation counseling should not be an accountability measure because such a metric could appear to be satisfied by simply checking a box that did not necessarily reflect whether the counseling had, in fact, occurred.37 Clearly, in our world of electronic checkboxes, SCIP INF-1 and SCIP INF-2 fail as accountability measures by the standards of ORYX for the very same reason that smoking cessation counseling did not make the list.

Regarding SCIP INF-3, its failure to meet the standards of an accountability measure is even easier to see because there remains little evidence that stopping antibiotics within 24 hours postoperatively in a general patient population directly improves outcomes to any appreciable extent. It is not bad care to do so, but it certainly pales compared with smoking cessation counseling in its potential to improve an individual patient’s life and the broader public health. Indeed, The Joint Commission recently announced that as of January 2015, both SCIP INF-2 and SCIP INF-3 are retired from the ORYX initiative along with several other performance measures that have fallen out of favor.38

Rather than discarding the concept of accountability measures altogether, questionable inclusion by ORYX of SCIP INF-1 to INF-3 could serve as an inspiration for anesthesiologists to adopt a similar framework, but one based solely on the magnitude of the potential health benefit that the suggested care could bring to patients’ lives. For example, imagine if in support of the Million Hearts Initiative,37 anesthesiologists resolved to counsel their patients regarding smoking cessation,39–42 or, as perioperative physicians, anesthesiologists pursued education and follow-up for patients with suspected poorly controlled hypertension.43–45 Although the move toward accountability measures is unlikely to redeem SCIP, a similar framework could lead to improved patient care if it focused on inspiring clinicians to engage in a broader, more comprehensive concern for surgical patients’ long-term well-being.



What Can Be Done to Redeem SCIP?

Should we expect more than good intentions from SCIP antibiotic measures? Perhaps, on the contrary, SCIP INF-1 to INF-3 have fallen short to some degree because they have been adopted by CMS and The Joint Commission in a way that ignores the necessary role of good intentions among individual caregivers. Pay-for-performance incentives and accreditation mandates may seem attractive, but in the era of the modern EHR, they can become a costly and alienating distraction for institutions and individuals. Although good processes should positively associate with good outcomes, it does not follow that imperfect metrics of good processes will share the same relationship. The amount of care, skill, and attention to detail required for truly excellent perioperative care can be daunting. In contrast, the amount of effort that goes into the appearance of fulfilling an imperfect metric of good care may be reduced to a triviality.

This is not to say that accountable care and countable care are inherently at odds; rather, they never succeed without the final common denominator of the clinician practicing within an institutional culture of safety. A manifestation of this need for individual and institutional sincerity is that quality partnerships tend to do best as local or regional collaboratives that include tracking of risk-adjusted outcomes.46 When an institution is interested in quality, it will highlight its deficiencies and address them. When an institution is motivated by pay-for-performance and public reporting, it will tend to highlight its successes. The consequence is that top-down national reporting efforts focused on poorly validated metrics are unlikely to lead to improvement,47 whereas regional collaboratives with the flexibility to respond to local lapses in quality allow institutions to focus on deficiencies and eventually to surpass their peers in achieving desired outcomes.48

The effort of SCIP to reduce SSIs can be redeemed, but a new orientation is required. Although critical and repeated evaluation of the evidence underlying quality initiatives is of course necessary,49 good evidence is not enough. Quality improvement must be seen as a behavioral intervention. To achieve lasting success, it requires that individual providers be motivated to care, and this most naturally happens at the local or regional level. National, financially punitive process measures, written into a deficit reduction act and subsequently manipulated by creative EHR user interfaces, have led physicians to the very opposite of quality care. Across every effort to improve health care, success will ultimately hinge on whether individual clinicians commit themselves to do the difficult work of identifying their failures and iteratively working toward excellence. Avedis Donabedian,50 a founder of quality improvement science, recounted shortly before his death, “It is the ethical dimension of individuals that is essential to a system’s success.” With that in mind, clinicians must be motivated to approach each workday with the understanding that quality improvement is difficult and happens 1 patient at a time. Although we may not finish the job, neither are we free to abandon the effort.51



Name: Robert B. Schonberger, MD, MA.

Contribution: This author helped design the analysis and prepare the manuscript.

Attestation: Robert B. Schonberger approved the final manuscript and is the archival author.

Name: Paul G. Barash, MD.

Contribution: This author helped design the analysis and prepare the manuscript.

Attestation: Paul G. Barash approved the final manuscript.

Name: Robert S. Lagasse, MD.

Contribution: This author helped design the analysis and prepare the manuscript.

Attestation: Robert S. Lagasse approved the final manuscript.

This manuscript was handled by: Sorin J. Brull, MD, FCARCSI (Hon).

REFERENCES


1. Bratzler DW, Hunt DR. The surgical infection prevention and surgical care improvement projects: national initiatives to improve outcomes for patients having surgery. Clin Infect Dis. 2006;43:322–30
2. Hawn MT, Richman JS, Vick CC, Deierhoi RJ, Graham LA, Henderson WG, Itani KM. Timing of surgical antibiotic prophylaxis and the risk of surgical site infection. JAMA Surg. 2013;148:649–57
3. Mangram AJ, Horan TC, Pearson ML, Silver LC, Jarvis WR. Guideline for Prevention of Surgical Site Infection, 1999. Centers for Disease Control and Prevention (CDC) Hospital Infection Control Practices Advisory Committee. Am J Infect Control. 1999;27:97–132
4. Bernard HR, Cole WR. The prophylaxis of surgical infection: the effect of prophylactic antimicrobial drugs on the incidence of infection following potentially contaminated operations. Surgery. 1964;56:151–7
5. Boxma H, Broekhuizen T, Patka P, Oosting H. Randomised controlled trial of single-dose antibiotic prophylaxis in surgical treatment of closed fractures: the Dutch Trauma Trial. Lancet. 1996;347:1133–7
6. Classen DC, Evans RS, Pestotnik SL, Horn SD, Menlove RL, Burke JP. The timing of prophylactic administration of antibiotics and the risk of surgical-wound infection. N Engl J Med. 1992;326:281–6
7. Da Costa A, Kirkorian G, Cucherat M, Delahaye F, Chevalier P, Cerisier A, Isaaz K, Touboul P. Antibiotic prophylaxis for permanent pacemaker implantation: a meta-analysis. Circulation. 1998;97:1796–801
8. Tanos V, Rojansky N. Prophylactic antibiotics in abdominal hysterectomy. J Am Coll Surg. 1994;179:593–600
9. Burke JF. The effective period of preventive antibiotic action in experimental incisions and dermal lesions. Surgery. 1961;50:161–8
10. DiPiro JT, Vallner JJ, Bowden TA Jr, Clark BA, Sisley JF. Intraoperative serum and tissue activity of cefazolin and cefoxitin. Arch Surg. 1985;120:829–32
11. Nichols RL. Preventing surgical site infections: a surgeon’s perspective. Emerg Infect Dis. 2001;7:220–4
12. Kumana CR, Cheung BM, Lauder IJ. Gauging the impact of statins using number needed to treat. JAMA. 1999;282:1899–901
13. Bratzler DW, Houck PM, Richards C, Steele L, Dellinger EP, Fry DE, Wright C, Ma A, Carr K, Red L. Use of antimicrobial prophylaxis for major surgery: baseline results from the National Surgical Infection Prevention Project. Arch Surg. 2005;140:174–82
14. Dellinger EP, Hausmann SM, Bratzler DW, Johnson RM, Daniel DM, Bunt KM, Baumgardner GA, Sugarman JR. Hospitals collaborate to decrease surgical site infections. Am J Surg. 2005;190:9–15
15. Ingraham AM, Cohen ME, Bilimoria KY, Dimick JB, Richards KE, Raval MV, Fleisher LA, Hall BL, Ko CY. Association of surgical care improvement project infection-related process measure compliance with risk-adjusted outcomes: implications for quality measurement. J Am Coll Surg. 2010;211:705–14
16. Pastor C, Artinyan A, Varma MG, Kim E, Gibbs L, Garcia-Aguilar J. An increase in compliance with the Surgical Care Improvement Project measures does not prevent surgical site infection in colorectal surgery. Dis Colon Rectum. 2010;53:24–30
17. Stulberg JJ, Delaney CP, Neuhauser DV, Aron DC, Fu P, Koroukian SM. Adherence to surgical care improvement project measures and the association with postoperative infections. JAMA. 2010;303:2479–85
18. Hedrick TL, Heckman JA, Smith RL, Sawyer RG, Friel CM, Foley EF. Efficacy of protocol implementation on incidence of wound infection in colorectal operations. J Am Coll Surg. 2007;205:432–8
19. O’Reilly M, Talsma A, VanRiper S, Kheterpal S, Burney R. An anesthesia information system designed to provide physician-specific feedback improves timely administration of prophylactic antibiotics. Anesth Analg. 2006;103:908–12
20. Pogach LM, Rajan M, Aron DC. Comparison of weighted performance measurement and dichotomous thresholds for glycemic control in the Veterans Health Administration. Diabetes Care. 2006;29:241–6
22. New Initiatives, Practices Make Strides in Fight Against SSIs. 2007. Available at: Accessed July 14, 2015
23. Mukherjee D, Eagle KA. Improving quality of care in the real world: efficacy versus effectiveness? Am Heart J. 2003;146:946–7
24. Oxman AD, Thomson MA, Davis DA, Haynes RB. No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. CMAJ. 1995;153:1423–31
25. Kristensen SR, Meacock R, Turner AJ, Boaden R, McDonald R, Roland M, Sutton M. Long-term effect of hospital pay for performance on mortality in England. N Engl J Med. 2014;371:540–8
26. Wax DB, Beilin Y, Levin M, Chadha N, Krol M, Reich DL. The effect of an interactive visual reminder in an anesthesia information management system on timeliness of prophylactic antibiotic administration. Anesth Analg. 2007;104:1462–6
27. Hawkins RB, Levy SM, Senter CE, Zhao JY, Doody K, Kao LS, Lally KP, Tsao K. Beyond surgical care improvement program compliance: antibiotic prophylaxis implementation gaps. Am J Surg. 2013;206:451–6
28. Jelacic S, Bowdle A, Nair BG, Kusulos D, Bower L, Togashi K. A system for anesthesia drug administration using barcode technology: the Codonics Safe Label System and Smart Anesthesia Manager™. Anesth Analg. 2014 [epub ahead of print]
29. Nair BG, Newman SF, Peterson GN, Wu WY, Schwid HA. Feedback mechanisms including real-time electronic alerts to achieve near 100% timely prophylactic antibiotic administration in surgical cases. Anesth Analg. 2010;111:1293–300
30. Goitein L. Virtual quality: the failure of public reporting and pay-for-performance programs. JAMA Intern Med. 2014;174:1912–3
31. Lee KY, Loeb JM, Nadzam DM, Hanold LS. An overview of the Joint Commission’s ORYX initiative and proposed statistical methods. Health Serv Outcomes Res Methodol. 2000;1:63–73
32. Kizer KW, Kirsh SR. The double edged sword of performance measurement. J Gen Intern Med. 2012;27:395–7
33. Kizer KW, Jha AK. Restoring trust in VA health care. N Engl J Med. 2014;371:295–7
34. The Dartmouth Institute for Health Policy and Clinical Practice. The Dartmouth Atlas of Health Care. Available at: Accessed November 18, 2012
35. Safavi KC, Dai F, Gilbertsen TA, Schonberger RB. Variation in surgical quality measure adherence within hospital referral regions: do publicly reported surgical quality measures distinguish among hospitals that patients are likely to compare? Health Serv Res. 2014;49:1108–20
36. Chassin MR, Loeb JM, Schmaltz SP, Wachter RM. Accountability measures—using measurement to promote quality improvement. N Engl J Med. 2010;363:683–8
37. Frieden TR, Berwick DM. The “Million Hearts” initiative—preventing heart attacks and strokes. N Engl J Med. 2011;365:e27
38. The Joint Commission. Surgical Care Improvement Project. Available at: Accessed December 9, 2014
39. Warner DO; American Society of Anesthesiologists Smoking Cessation Initiative Task Force. Feasibility of tobacco interventions in anesthesiology practices: a pilot study. Anesthesiology. 2009;110:1223–8
40. Warner DO, Klesges RC, Dale LC, Offord KP, Schroeder DR, Shi Y, Vickers KS, Danielson DR. Clinician-delivered intervention to facilitate tobacco quitline use by surgical patients. Anesthesiology. 2011;114:847–55
41. Warner DO, Klesges RC, Dale LC, Offord KP, Schroeder DR, Vickers KS, Hathaway JC. Telephone quitlines to help surgical patients quit smoking: patient and provider attitudes. Am J Prev Med. 2008;35:S486–93
42. Wong J, Abrishami A, Yang Y, Zaki A, Friedman Z, Selby P, Chapman KR, Chung F. A perioperative smoking cessation intervention with varenicline: a double-blind, randomized, placebo-controlled trial. Anesthesiology. 2012;117:755–64
43. Allen N, Berry JD, Ning H, Van Horn L, Dyer A, Lloyd-Jones DM. Impact of blood pressure and blood pressure change during middle age on the remaining lifetime risk for cardiovascular disease: the cardiovascular lifetime risk pooling project. Circulation. 2012;125:37–44
44. Schonberger RB. Ideal blood pressure management and our specialty. J Neurosurg Anesthesiol. 2014;26:270–1
45. Schonberger RB, Burg MM, Holt NF, Lukens CL, Dai F, Brandt C. The relationship between day-of-surgery and primary care blood pressure among Veterans presenting from home for surgery. Is there evidence for anesthesiologist-initiated blood pressure referral? Anesth Analg. 2012;114:205–14
46. Kim EK, Sheetz KH, Bonn J, DeRoo S, Lee C, Stein I, Zarinsefat A, Cai S, Campbell DA Jr, Englesbe MJ. A statewide colectomy experience: the role of full bowel preparation in preventing surgical site infection. Ann Surg. 2014;259:310–4
47. Ryan A, Sutton M, Doran T. Does winning a pay-for-performance bonus improve subsequent quality performance? Evidence from the Hospital Quality Incentive Demonstration. Health Serv Res. 2014;49:568–87
48. Campbell DA Jr, Englesbe MJ, Kubus JJ, Phillips LR, Shanley CJ, Velanovich V, Lloyd LR, Hutton MC, Arneson WA, Share DA. Accelerating the pace of surgical quality improvement: the power of hospital collaboration. Arch Surg. 2010;145:985–91
49. Brinkman W, Herbert MA, O’Brien S, Filardo G, Prince S, Dewey T, Magee M, Ryan W, Mack M. Preoperative β-blocker use in coronary artery bypass grafting surgery: national database analysis. JAMA Intern Med. 2014;174:1320–7
50. Donabedian A. A founder of quality assessment encounters a troubled system firsthand. Interview by Fitzhugh Mullan. Health Aff (Millwood). 2001;20:137–41
51. Tarfon R. Mishna, Pirkei Avot ("The Chapters of the Fundamentals"; alt. "The Ethics of the Fathers"), Chapter 2, Verse 21
© 2015 International Anesthesia Research Society