The Patient Protection and Affordable Care Act (ACA) was signed into law in March 2010 and undoubtedly represents one of the most important healthcare policy changes in recent history. The primary goals of the ACA were to address the three major health policy challenges facing American healthcare: access, quality, and cost. To achieve these goals, the ACA implemented a number of policies over the course of approximately 4 years. For example, the ACA extended dependents’ coverage under a parent’s insurance policy to the age of 26, established protections for those with preexisting conditions, created an individual mandate to enroll in a healthcare plan, and expanded Medicaid to those with incomes up to 133% of the federal poverty level (1). These policy efforts to increase health insurance coverage were intended to improve healthcare access and, ultimately, health outcomes. Crucial to policy implementation are in vivo policy analysis and evaluation; neither is easy. Threats to validity and unknown confounders can distort outcomes associated with policy implementation. Isolating and addressing these issues, while maintaining methodologic rigor, is necessary so that the effects of the policy are neither under- nor overstated.
In this issue of Critical Care Medicine, Chinai et al (2) used the National Inpatient Sample (NIS) to determine whether outcomes differ for patients with sepsis and septic shock between the ages of 18 and 64 before and after implementation of the ACA. Their premise is that timely access to healthcare services, anticipated from the insurance coverage changes implemented under the ACA, could play a major role in improving patient outcomes for sepsis and septic shock. The authors did find improved outcomes in both mortality and length of stay (LOS) across all insurance categories in the post-ACA cohort. These findings represent a rare opportunity to evaluate the broad-scale policy changes associated with the ACA in a subset of ICU patients. In their article, the authors have acknowledged the limitations of the dataset, the definitions, and some challenges to the study’s design. In addition, there are a few other important considerations that are relevant to readers as they interpret this work.
The authors assessed NIS discharges in pre-ACA (2011–2013) and post-ACA (2014–2016) cohorts. Technically, both of the described cohorts occur in the post-ACA time frame since the ACA was instituted in March 2010. The authors have made the case that it takes time to truly realize the impact of policy change, which may be supported by the results of their analysis. Nonetheless, there are four examples of systematic bias to consider while interpreting the results of this analysis.
The first is the washout effect, implying that time needs to pass before all of the effects of a prior treatment, in this case, policies, are extinguished (3). For example, prior to 2010, most young adults “aged out” of their parents’ health insurance plan at age 19, or at age 22 if they were full-time students (4). The ACA brought changes that extended coverage for young adults up to the age of 26 under their parents’ plan, regardless of whether they were in school or living at home (1). Although these policies were put in place in 2010, the American public may have been operating under old policies, and some may not have been aware of these changes. It is possible that the pre-ACA group could have been capturing the effects of old policies, in this case, adults ages 19–26 who were uninsured. This effect could cause the outcomes to be overstated. Yet, given the nature of the objectives in the study by Chinai et al (2), there would be no way to ensure the effects from old policies were completely eliminated from this study population prior to analysis.
The second bias relates to lead time. Just as it takes time to resolve the effects of prior treatments, it also takes time before new treatments, or policies, start to have their intended impact across the nation, especially in national healthcare policy (5). For example, the post-ACA group was intended to have outcomes influenced by ACA policy actions, but in actuality, it may have been too early to detect these effects. Medicaid expansion and the individual mandate were launched in 2014 but were variably adopted throughout the United States. In fact, by April 2015, 21 states had not expanded Medicaid eligibility because the Supreme Court’s decision allowed states to opt out (6,7). Increased healthcare coverage in 2014 does not immediately equate to increased access to healthcare services, especially when considering the time it takes for recipients to schedule annual checkups or other medical appointments. The timeframes used in the authors’ two cohorts may not account for either the washout period or the lead-time issues that may be at play in this national sample.
Third, as the use of large administrative datasets like the NIS becomes increasingly popular in clinical and policy research, it is important to consider their design, construction, and influence on the outcomes under study (8). From 1988 to 2011, NIS data consisted of 100% of discharges from a 20% sample of U.S. hospitals participating in the Healthcare Cost and Utilization Project (HCUP) (8). In 2012, the NIS sampling methodology was redesigned in an attempt to create a more representative sample: a smaller proportion of discharges from 100% of HCUP hospitals were included in the NIS data from 2012 onward (9). This redesign overlaps with the pre-ACA cohort, and pre-2012 data are not directly comparable to post-2012 data unless investigators apply the newly established weighting methodology (10). The use of unweighted data may contribute to higher rates of mortality and LOS in the pre-ACA group because of characteristics specific to the concentrated sample of hospitals selected in those years. Hence, the comparison of pre-ACA to post-ACA outcomes may be confounded by underlying data inconsistencies resulting from the redesign. This design implication would serve to overstate differences in mortality and LOS between the two cohorts and falsely attribute improvements in outcomes to impacts of the ACA.
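The weighting issue can be illustrated with a toy example. The numbers below are synthetic, not NIS data: each sampled discharge carries a weight (analogous to the HCUP discharge weight, DISCWT) indicating how many national discharges it represents, and ignoring those weights shifts the estimated mortality rate.

```python
# Illustrative sketch (synthetic data): why unweighted comparisons across the
# 2012 NIS redesign can mislead. Each record is (mortality_flag, weight),
# where the weight plays the role of a discharge weight such as DISCWT.
records = [
    (1, 5.0),   # death, from an over-represented hospital stratum
    (0, 5.0),
    (1, 20.0),  # death, from an under-represented hospital stratum
    (0, 20.0),
    (0, 20.0),
    (0, 20.0),
]

# Unweighted rate treats every sampled discharge equally
unweighted_rate = sum(m for m, _ in records) / len(records)

# Weighted rate scales each discharge to the population it represents
total_weight = sum(w for _, w in records)
weighted_rate = sum(m * w for m, w in records) / total_weight

print(f"unweighted mortality: {unweighted_rate:.3f}")  # 0.333
print(f"weighted mortality:   {weighted_rate:.3f}")    # 0.278
```

In this sketch the unweighted estimate overstates mortality because deaths cluster in the over-sampled stratum, which is the same mechanism by which an unweighted pre-2012 NIS sample could inflate pre-ACA mortality and LOS.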
Finally, it is important to remember that the two cohorts under study are not the same group of individuals and the data are not longitudinal. Readers should not assume that the population from the pre-ACA group was followed into the post-ACA group. The serial cross-sectional study design does not allow us to make causal inferences, making it difficult to know, with confidence, that the post-ACA group was truly benefiting from policy actions that were not present prior to the enactment of the ACA.
Chinai et al (2) have provided us with valuable work in understanding the effects of the ACA on a subset of ICU patients. Research is never perfect, and evaluating health policy at a national level has inherent design challenges that must be considered when forming an appropriate interpretation of the results. The authors have addressed the changes in ICU practice that may be operative over time, including advances in sepsis management and transfusion practices and changes to the definitions of sepsis and septic shock. In conclusion, Chinai et al (2) made a legitimate case by showing overall improved outcomes for this subset of patients with sepsis and septic shock, who likely benefited from legislation that incrementally improved access to healthcare for Americans.
1. French MT, Homer J, Gumus G, et al. Key Provisions of the Patient Protection and Affordable Care Act (ACA): A systematic review and presentation of early research findings. Health Serv Res 2016; 51:1735–1771
2. Chinai B, Gaughan J, Schorr C. Implementation of the Affordable Care Act: A Comparison of Outcomes in Patients With Severe Sepsis and Septic Shock Using the National Inpatient Sample. Crit Care Med 2020; 48:783–789
3. Brookhart MA, Stürmer T, Glynn RJ, et al. Confounding control in healthcare database research: Challenges and potential approaches. Med Care 2010; 48:S114–S120
5. Cucchetti A, Trevisani F, Pecorelli A, et al.; Italian Liver Cancer Group: Estimation of lead-time bias and its impact on the outcome of surveillance for the early diagnosis of hepatocellular carcinoma. J Hepatol 2014; 61:333–341
6. Kaiser Family Foundation: Medicaid Expansion, Health Coverage, and Spending: An Update for the 21 States That Have Not Expanded Eligibility. 2015. Available at: https://www.kff.org/medicaid/issue-brief/medicaid-expansion-health-coverage-and-spending-an-update-for-the-21-states-that-have-not-expanded-eligibility/. Accessed March 3, 2020
7. Kaiser Family Foundation: Status of State Medicaid Expansion Decisions: Interactive Map. 2020. Available at: https://www.kff.org/medicaid/issue-brief/status-of-state-medicaid-expansion-decisions-interactive-map/. Accessed March 3, 2020
8. Khera R, Krumholz HM. With great power comes great responsibility: “Big Data” research from the National Inpatient Sample. Circ Cardiovasc Qual Outcomes 2017; 10:e003846
9. Houchens RL, Ross DN, Elixhauser A, et al. Nationwide Inpatient Sample Redesign Final Report. 2014. Rockville, MD, U.S. Agency for Healthcare Research and Quality. Available at: https://www.hcup-us.ahrq.gov/db/nation/nis/reports/NISRedesignFinalReport040914.pdf. Accessed March 3, 2020
10. Healthcare Cost and Utilization Project: Trend Weights for HCUP NIS Data. 2015. Rockville, MD, Agency for Healthcare Research and Quality and US Department of Health and Human Services. Available at: https://www.hcup-us.ahrq.gov/db/nation/nis/trendwghts.jsp. Accessed February 28, 2020