The president of the United States can implement policy by executive order, creating or amending government regulations that do not require the approval of Congress but are subject to judicial review. “Midnight regulations” are the flurry of new regulations implemented at the end of a presidential term, especially during a transition to an administration of the opposite political party.1 Because of the mandatory waiting period, these regulations take effect during the initial months of the new presidency, when the incoming administration may choose to amend them.
One such example occurred on January 18, 2001, when the outgoing Clinton administration completely eliminated the federal requirement for hospital-based physician supervision of a certified registered nurse anesthetist (CRNA). However, on November 13, 2001, just before this midnight regulation was to go into effect, the Bush administration revised it.2
The resulting final Centers for Medicare and Medicaid Services (CMS) rule of November 2001 maintained “the current physician supervision requirement for certified registered nurse anesthetists, unless the Governor of a State, in consultation with the State’s Boards of Medicine and Nursing, exercises the option of exemption from this requirement consistent with State law.”3 Exercising the “option of exemption” has become known as “opt-out.”
In the ensuing 14 years, the governors of 17 states have elected to opt out of this federal physician supervision requirement for CRNAs practicing in hospitals or ambulatory surgery centers.4
The pro/con debate over the CMS opt-out rule has largely focused on the 2 issues of whether its implementation has affected the quality of care and access to care.
A handful of studies have explored the relationship between the model of anesthesia services and the quality of care. However, no study has reached definitive conclusions. This is an expected result: perioperative care is complex, anesthetic complications are rare, and it is nearly impossible to assess confounding variables and determine causal relationships without performing randomized trials. For example, a 2014 Cochrane Database Systematic Review was unable to reach any conclusion about whether 1 type of anesthesia care model was superior to another.5
To date, no studies have examined whether the opt-out rule has improved access to care. Thus, the study by Sun et al.6 in this month’s issue of Anesthesia & Analgesia is especially noteworthy, because it examined the extent to which the 2001 federal opt-out rule increased access to anesthesia care for urgent surgical cases.
Sun et al. focused on patients admitted to an acute care hospital for appendicitis, bowel obstruction, choledocholithiasis, or hip fracture. Using 1998 to 2010 data from the National Inpatient Sample, they found that, across all 4 surgical diagnoses, opt-out was not associated with a statistically significant change in the percentage of patients who received a procedure (0.0315 percentage point increase; 95% confidence interval, −0.843 to +0.906 percentage points). Opt-out was associated with a small but statistically significant increase in the percentage of appendicitis patients receiving an appendectomy (0.876 percentage point increase; 95% confidence interval, +0.194 to +1.56). Subanalyses revealed that the effects of opt-out did not differ between rural and urban areas. Sun et al. concluded that patients in opt-out states did not have increased access to urgent anesthesia care.
Study Design and Analysis
The impact of a new health policy cannot be assessed with randomized trials. Health care policy is not amenable to random assignment. If states (or hospitals, practices, or other units of health care) randomly chose to implement the policy, then the impact could be compared between states that implement the policy and states that do not. However, states do not randomly decide to implement a new policy. Some states adopt new regulations quickly. Others may delay implementation or not implement the policy at all. The reasons for such differences may lie in regional demographics, financial resources, or political perspective. Regardless, the reason for differences in implementation will confound any observed differences in policy impact. This makes unbiased and nonconfounded assessment of the impact of the new policy difficult. An alternative method to assess the effects of a policy change is the “difference-in-differences” (DID) approach. Here, change in the outcome of interest before and after a new policy has been implemented is compared with the change over a similar time period for comparable states (or other health care units) in which the policy has not yet been implemented.
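The arithmetic of the DID comparison can be illustrated with a minimal numerical sketch. All figures below are hypothetical and are not drawn from Sun et al.; they serve only to show how the estimate nets out the shared secular trend:

```python
# Minimal difference-in-differences (DID) sketch with hypothetical numbers.
# Outcome: percentage of admitted patients who received a procedure.

# Mean outcome before and after the policy date, by group (hypothetical)
opt_out_before, opt_out_after = 82.0, 84.0   # states that opted out
control_before, control_after = 81.5, 83.4   # comparable non-opt-out states

# Each group's own change over time
change_opt_out = opt_out_after - opt_out_before    # 2.0 percentage points
change_control = control_after - control_before    # ~1.9 percentage points

# The DID estimate: how much MORE the opt-out states changed than the
# controls. Subtracting the control change removes the trend common to both.
did_estimate = change_opt_out - change_control
print(f"DID estimate: {did_estimate:.1f} percentage points")  # prints 0.1
```

A positive estimate would mean that opt-out states gained access relative to the counterfactual trend supplied by the control states; an estimate near zero, as in Sun et al., means no detectable gain.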
Sun et al. used the DID method to assess access to 4 urgent surgical procedures across opt-out states and non-opt-out states. They estimated a DID model for each of the 4 surgical procedures using data both before and after the opt-out date for states that had already opted out and all available data for non-opt-out states. We note that only the 10 states that submitted data to the National Inpatient Sample before they opted out would have contributed to the exposure data (Sun et al., Table 2). Most of the information in the analyses is from non-opt-out states. In a nutshell, Sun et al. compared opt-out status versus non-opt-out status on the change over time in access to care. In this analysis, a positive difference means that opting out increased access to these procedures.
As with any observational study, it was crucial for Sun et al. to remove confounding variables to the extent possible. They adjusted for any differences between the opt-out and non-opt-out states, including those related to changes over time. They adjusted for patient characteristics of age, gender, and Charlson comorbidity score, as well as state-specific and national trends of the outcome over time. As they explain, including state-specific trends removed considerable observed and unobserved confounding influences because of differing trends across states. However, this approach will not completely remove the many patient- and practice-level differences across states or other differences such as why some states opted out sooner than others. In addition, in the National Inpatient Sample, different hospitals are sampled in different years, thus introducing the possibility of confounding by time-varying factors. For these reasons, further adjustments would have strengthened the analysis had the data been available.
Sun et al. are to be commended for further assessing whether the opt-out effect was consistent for rural and urban areas within a state. Interestingly, they found no rural–urban differences. In addition, an important feature of a DID analysis is that data may be correlated within a unit when measured over time. Sun et al. used clustered SEs to avoid the bias inherent in ignoring the within-state correlations.7
Given the complexity of the research question and design, conclusions reached by Sun et al. might have been strengthened by sensitivity analyses. For example, a key assumption for any DID analysis is that temporal trends in outcome for both groups would be the same in the absence of treatment, assessed by comparing groups on preintervention trends.8 It is also instructive to substitute an artificial treatment group and verify that its trends parallel those in the control group, to vary the controls, and to assess whether changes in practice occurred close to the time of policy implementation. Modeling choices may have a large impact on results.9
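The artificial-treatment-group (placebo) check described above can be sketched as follows. The simulated data, the choice of 2004 as the artificial policy date, and the state labels are all hypothetical; the point is only that a credible design should return a placebo estimate near zero when no real policy change occurred:

```python
# Placebo-test sketch for a DID design (all data simulated, not from Sun et al.).
# We simulate 10 never-treated states sharing a common trend, arbitrarily
# label half of them "treated" at an artificial 2004 cutoff, and re-run the
# simple DID comparison. A sound design should yield an estimate near zero.
import random

random.seed(0)

years = list(range(1998, 2011))
# Yearly outcome for each state: common upward trend plus noise.
states = {s: [80.0 + 0.3 * (y - 1998) + random.gauss(0, 0.5) for y in years]
          for s in range(10)}

placebo_treated = [0, 1, 2, 3, 4]   # arbitrarily labeled as "opt-out"
controls = [5, 6, 7, 8, 9]
cutoff = years.index(2004)          # artificial policy date

def mean(xs):
    return sum(xs) / len(xs)

def group_change(ids):
    """Pre-to-post change in the pooled group mean."""
    pre = mean([v for s in ids for v in states[s][:cutoff]])
    post = mean([v for s in ids for v in states[s][cutoff:]])
    return post - pre

placebo_did = group_change(placebo_treated) - group_change(controls)
print(f"Placebo DID estimate: {placebo_did:+.2f} (should be near zero)")
```

Because both groups share the same underlying trend, the placebo estimate reflects only sampling noise; a large placebo estimate would signal a violated parallel-trends assumption.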
Health Policy Implications
If the findings presented reflect a reasonable approximation of the effect of opt-out decisions on access to anesthesia services, then there are noteworthy implications for health policy. The authors highlight the most important implication in their discussion: the evidence suggests that opting out of the requirement for physician supervision of nurse anesthetists does not change procedural availability. Perhaps the fundamental obstacle to changing clinical practice lies in institutional inertia.10 If so, then there may still be physician supervision of CRNAs in opt-out states. Complex organizational structures such as hospitals tend to resist change. Another possibility is that a lack of access to anesthesia services is not the primary obstacle to the receipt of time-sensitive surgical services. Additional quantitative research is required both to verify the findings of Sun et al. and to determine whether they are generalizable to other surgical services. If the findings of Sun et al. are replicable and generalizable, then research is also needed to find out why opt-out did not accomplish the intended objective.
Health Care Economic Implications
Even if opt-out policies do not have significant effects on the access to surgical services, CRNA practice models have important economic effects. These effects are not well understood. Different cost-effectiveness analyses have come to diametrically opposed conclusions.11,12 The degree of uncertainty is large because the relative quality of anesthesia services provided by anesthesiologists compared with supervised or independent CRNAs is indeterminate, as noted above. Incremental cost-effectiveness ratios are calculated as the ratio of the difference in costs between 2 options and the difference in their effectiveness (Δcost/Δeffectiveness). When incremental differences in effectiveness are nonexistent or small, as is likely the case here, then even very small differences in costs translate into very large cost-effectiveness ratios.13 Further, the cost differences between CRNA and physician-delivered anesthesia are highly dependent on the perspective taken in the analysis. Explicitly, insurers covering services may see relative costs differently than do service providers. More definitive studies will be required to establish the value of physician supervision of CRNAs and under what circumstances the value justifies the higher costs of care.
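The sensitivity of the incremental cost-effectiveness ratio to a near-zero effectiveness difference can be seen with a toy calculation. All figures here are hypothetical, chosen only to illustrate the Δcost/Δeffectiveness arithmetic:

```python
# Incremental cost-effectiveness ratio (ICER) = delta_cost / delta_effectiveness.
# Hypothetical per-case figures comparing two anesthesia delivery models.

cost_a, cost_b = 1000.0, 980.0   # dollars per case (hypothetical)
eff_a, eff_b = 0.9990, 0.9989    # e.g., probability of complication-free care

delta_cost = cost_a - cost_b     # $20 per case
delta_eff = eff_a - eff_b        # 0.0001: nearly identical effectiveness

# A tiny effectiveness denominator inflates the ratio enormously:
# a $20 cost difference becomes ~ $200,000 per additional
# complication-free case.
icer = delta_cost / delta_eff
print(f"ICER: ${icer:,.0f} per additional complication-free case")
```

This is why, when effectiveness differences between delivery models are indeterminate, even modest cost differences produce cost-effectiveness ratios too unstable to guide policy.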
The Role of the Anesthesia Care Team
Given that nonphysician providers often play a role in the administration of anesthesia services, CMS has structured its anesthesia payment system into 4 categories: personally performed, teaching, medically directed, and medical supervision.14 This corresponds to a series of Medicare billing modifiers (Table 1).a The QZ modifier ostensibly designates those cases in which a CRNA has administered anesthesia with no physician supervision (per CMS, “CRNA service: without medical direction by a physician”).14 This would equate to the opt-out state-level status and anesthesia practice examined by Sun et al.
However, an analysis of Medicare beneficiaries revealed that among 538 hospitals that exclusively filed anesthesia claims using the modifier QZ in 2013, 48% had affiliated physician–anesthesiologists.4 As noted by the authors, “it seems likely that the physician anesthesiologists were involved in patient care and had some relationship with nurse anesthetists practicing at the hospitals.”4 Thus, the modifier QZ may not be a valid surrogate for the scenario of no anesthesiologist being involved in the anesthesia care provided.
Two further key comparisons can be made in Medicare anesthesia claims in 2009 versus in 2014.b,14 In 2009, 34.6% of Medicare anesthesia claims involved an “anesthesia care team” (physician supervision of a CRNA). In 2014, 34.5% of Medicare anesthesia claims involved an anesthesia care team. However, in 2009, 23.8% of Medicare anesthesia claims included a QZ modifier, whereas in 2014, 29.2% of Medicare anesthesia claims included a QZ modifier. In 2014, a CRNA was involved in 63.7% of Medicare claims (Fig. 1). Thus, although the anesthesia care team is well established in the United States, the use of the QZ modifier is increasing. However, the increasing use of the QZ modifier may not equate to an increasing absence of an anesthesiologist in attendance.
The data presented by Sun et al. indicate that the 2001 CMS opt-out rule is not associated with increased access to anesthesia care for a subset of 4 common urgent surgical procedures during the 1998 to 2010 epoch. Additional health services studies and data are needed to validate these initial findings. Furthermore, the effect of the 2001 CMS opt-out rule on the quality, cost, and hence the value of anesthesia care remains largely unknown.
Name: Thomas R. Vetter, MD, MPH.
Contribution: This author helped write the manuscript.
Attestation: Thomas R. Vetter approved the final manuscript.
Name: Edward J. Mascha, PhD.
Contribution: This author helped write the manuscript.
Attestation: Edward J. Mascha approved the final manuscript.
Name: Meredith L. Kilgore, PhD.
Contribution: This author helped write the manuscript.
Attestation: Meredith L. Kilgore approved the final manuscript.
This manuscript was handled by: Steven L. Shafer, MD.
a Centers for Medicare and Medicaid Services: Medicare Claims Processing Manual Pub 100–04; Chapter 12: Physicians/Nonphysician Practitioners. Available at: https://www.cms.gov/Regulations-and-Guidance/Guidance/Transmittals/downloads/R1859CP.pdf. Accessed January 8, 2016.
b “CMSAnesData for the UNITED STATES” © 2016 Stead Health Group, Inc. All rights reserved. Source: 2014 CMS PSPS Masterfile via Stead Health Group, Inc.
1. Brito J, de Rugy V. For Whom the Bell Tolls: The Midnight Regulations Phenomenon (Policy Primer No. 9). Mercatus Policy Series. Arlington, VA: George Mason University; 2008:1–26.
2. Inglis T. Nurse anesthetists: one step forward, one step back. Am J Nurs 2003;103:91–4.
3. Centers for Medicare and Medicaid Services. Medicare and Medicaid programs; hospital conditions of participation: anesthesia services. Final rule. Fed Regist 2001;66:56762–9.
4. Miller TR, Abouleish A, Halzack NM. Anesthesiologists are affiliated with many hospitals only reporting anesthesia claims using modifier QZ for Medicare claims in 2013. A A Case Rep 2016;6:217–9.
5. Lewis SR, Nicholson A, Smith AF, Alderson P. Physician anaesthetists versus non-physician providers of anaesthesia for surgical patients. Cochrane Database Syst Rev 2014;7:CD010357.
6. Sun E, Dexter F, Miller TR. The effect of “opt-out” regulation on access to surgical care for urgent cases in the United States: evidence from the National Inpatient Sample. Anesth Analg 2016;122:1983–91.
7. Bertrand M, Duflo E, Mullainathan S. How much should we trust differences-in-differences estimates? Q J Econ 2004;119:249–75.
8. Dimick JB, Ryan AM. Methods for evaluating changes in health care policy: the difference-in-differences approach. JAMA 2014;312:2401–2.
9. Ryan AM, Burgess JF Jr, Dimick JB. Why we should not be indifferent to specification choices for difference-in-differences. Health Serv Res 2015;50:1211–35.
10. Schwarze S. Juxtaposition in environmental health rhetoric: exposing asbestos contamination in Libby, Montana. Rhetor Publ Aff 2003;6:313–35.
11. Abenstein JP, Long KH, McGlinch BP, Dietz NM. Is physician anesthesia cost-effective? Anesth Analg 2004;98:750–7.
12. Hogan PF, Seifert RF, Moore CS, Simonson BE. Cost effectiveness analysis of anesthesia providers. Nurs Econ 2010;28:159–69.
13. Detsky AS, Naglie IG. A clinician’s guide to cost-effectiveness analysis. Ann Intern Med 1990;113:147–54.
14. Byrd JR, Merrick SK, Stead SW. Billing for anesthesia services and the QZ modifier: a lurking problem. ASA Newsl 2011;75:36–8.