In July 2016, the Centers for Medicare and Medicaid Services (CMS) unveiled overall hospital quality ratings on the CMS Hospital Compare website for the first time.1 Hospital Compare is a consumer-facing website that was established by the CMS in 2002 and contains numerous hospital-based quality measures with the goal of increasing transparency and empowering consumers.2,3 Through the overall hospital quality rating system, eligible hospitals receive a 1–5-star overall rating that summarizes ~60 Hospital Compare measures across 7 domains: mortality, readmissions, hospital-acquired conditions, patient experience, compliance with evidence-based guidelines, timeliness, and appropriate imaging.4 This single metric allows for easy comparison between hospitals and increases the visibility of hospital quality performance.
Consequently, hospitals are seeking out different approaches to improve their performance in the CMS overall ratings and other rating programs. Care coordination has frequently been identified as an approach for ensuring that patients receive high-quality care and ultimately achieve good clinical outcomes, especially as the health care system grows increasingly complex.5 Care coordination occurs when there is deliberate coordination of a patient’s care and sharing of information across multiple providers and settings.6 When care is well coordinated, patients are at reduced risk for adverse events. Many hospitals are working to implement strategies aimed at promoting care coordination, such as medication reconciliation and postdischarge care management.7,8
The effectiveness of these strategies at improving care coordination, and thereby patient outcomes, has been studied primarily in the context of clinical trials. Many of these strategies have been shown to influence the performance measures included in the overall rating program. However, few studies have aimed to determine whether these strategies effectively contribute to high-quality care in routine clinical use.9–11 Our objective was to determine at a national scale whether hospitals that widely implement care coordination strategies perform better on CMS overall hospital quality ratings. As 22% of the overall rating is based on 30-day readmissions, and care coordination strategies have been found to reduce readmissions, we also explored the impact of strategy implementation on readmission rates.
It is worth noting that the CMS overall ratings have been quite controversial. There have been a number of concerns related to the methodology used to calculate the overall ratings, which has resulted in multiple changes to the methodology and delayed releases in recent years.12–14 The overall ratings have been thought to unfairly penalize safety-net and teaching hospitals.12,15 Hospitals eligible for more measures and domains, because they provide a larger variety of services and have larger patient volumes, have been found to perform worse in the program.16 In addition, many of the measures included in the program have been called into question. For instance, the readmission measures do not account for socioeconomic factors and, in some cases, have been found to be unreliable for medical conditions due to low patient volumes.17–19 Nevertheless, although these ratings are not ideal, they provide some insight into a hospital’s overall quality of care.12
Data and Sample
We merged hospital data from the 2015 American Hospital Association (AHA) Annual Survey, the 2015 AHA Care Systems and Payment Survey, the 2015 Health Resources and Services Administration Area Health Resource File (AHRF), the December 2016 CMS Hospital Compare Hospital General Information file containing the overall hospital rating, and the fiscal year 2018 CMS Hospital Readmission Reduction Program (HRRP) file containing readmission measures from July 2013 to June 2016 discharges. The AHA Annual Survey contains information about the ownership, governance, size, and payer mix of most hospitals.20 The AHA Care Systems and Payment Survey is a voluntary hospital survey that includes questions about care coordination, system integration, and participation in alternative payment models.21 In 2015, 1808 hospitals participated in the survey. From the AHRF, we used county-level demographic data for the county in which the hospital was located.22
Our baseline population was the 3584 hospitals that received star ratings in December 2016. We excluded 25 hospitals that did not participate in the AHA Annual Survey. We found that specialty hospitals and critical access hospitals (CAHs) received overall ratings based on significantly fewer performance measures (specialty: 28; CAH: 23; general non-CAH: 45) and domains (specialty: 3.7; CAH: 5.3; general: 6.6) and received significantly higher overall ratings (specialty: 4.0; CAH: 3.3; general: 3.0) than general acute hospitals. In addition, CAHs are not included in the HRRP.23 Hence, we excluded specialty hospitals (46 hospitals) and CAHs (588 hospitals) to improve the precision of our findings.
After the linkage with the 2015 AHA Care Systems and Payment Survey, our final sample yielded 710 hospitals that answered ≥1 care coordination survey question. The final sample included ~24% of general non-CAHs with ratings. A total of 671 hospitals answered all 12 care coordination strategy questions. We discuss the generalizability of our findings in the limitations section.
Our primary dependent variable was the overall hospital quality rating. These ratings range from 1 to 5 stars, with top performers earning 5 stars, and summarize hospital performance across 7 domains using a latent variable model. Each domain has a different weight in the overall score: readmissions (22%), patient experience (22%), mortality (22%), safety of care (22%), effectiveness of care (4%), timeliness of care (4%), and efficient use of medical imaging (4%).12 There is variation in the performance periods for the different measures (eg, the patient experience performance period was April 2015 to March 2016, whereas the mortality performance period was July 2012 to June 2015). The overall rating does not include any care coordination process measures.
Hospitals that do not have enough patients eligible for a measure will not have that measure included in their overall rating and CMS will redistribute the weight of domains with missing measures.24 In December 2016, ratings were based on 41 measures on average. Hospitals need to be eligible for 3 measures in 3 domains to receive an overall rating.24 Each year, ~20% of hospitals are not eligible for overall ratings due to insufficient measurement.25 Maryland was exempted from the overall rating program in 2016 and is still exempt from the HRRP.26
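The weight redistribution described above can be illustrated with a simplified sketch: the baseline weight of each missing domain is reallocated proportionally across the domains for which the hospital is eligible. The actual CMS reallocation procedure is more detailed, so this is an illustration of the idea rather than the official formula; domain names and variable names here are illustrative.

```python
# Baseline domain weights in the December 2016 overall star rating methodology.
BASE_WEIGHTS = {
    "mortality": 0.22,
    "safety_of_care": 0.22,
    "readmissions": 0.22,
    "patient_experience": 0.22,
    "effectiveness_of_care": 0.04,
    "timeliness_of_care": 0.04,
    "efficient_imaging": 0.04,
}

def redistribute_weights(eligible_domains):
    """Rescale the baseline weights over the eligible domains so they sum
    to 1, spreading the weight of missing domains proportionally
    (a simplified sketch of the redistribution)."""
    total = sum(BASE_WEIGHTS[d] for d in eligible_domains)
    return {d: BASE_WEIGHTS[d] / total for d in eligible_domains}

# Example: a hospital with no efficient-imaging measures keeps the other
# 6 domains, whose weights are scaled up to absorb the missing 4%.
weights = redistribute_weights(
    [d for d in BASE_WEIGHTS if d != "efficient_imaging"]
)
```

Under this proportional scheme, mortality would rise from 22% to roughly 22.9% (0.22/0.96) for a hospital missing only the imaging domain.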
The distribution of scores in the overall rating program is approximately normal with most hospitals earning 3 stars (December 2016 distribution: 1 star=3.26%, 2 stars=18.86%, 3 stars=48.33%, 4 stars=26.49%, and 5 stars=3.06%). We operationalized overall ratings as a binary variable and compared hospitals that received 4 or 5 stars (ie, top performing hospitals) to hospitals that received 1–3 stars.12
Our second set of dependent variables was the 6 30-day excess readmission ratios in the HRRP: acute myocardial infarction (AMI), heart failure (HF), chronic obstructive pulmonary disease (COPD), pneumonia (PN), coronary artery bypass graft (CABG), and total hip or knee replacement (THA/TKA). The HRRP is a hospital-based pay-for-performance program in which hospitals with higher than expected readmission ratios receive penalties of up to 3% of their annual Medicare inpatient reimbursement.27 Under this methodology, hospitals with an excess readmission ratio >1 had more readmissions than expected, hospitals with a ratio of 1 had as many as expected, and hospitals with a ratio <1 had fewer than expected.27 We operationalized the readmission ratios as binary measures, comparing hospitals with ratios ≤1 (ie, expected or fewer than expected readmissions) to hospitals with ratios >1 (ie, more than expected).
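As a concrete sketch, the two binary dependent variables described above can be constructed as follows (function and variable names are illustrative, not taken from the study's analysis code):

```python
def high_rating(stars: int) -> int:
    """1 if the hospital is a top performer (4 or 5 stars), else 0."""
    return int(stars >= 4)

def readmissions_at_or_below_expected(excess_ratio: float) -> int:
    """1 if the HRRP excess readmission ratio is <=1 (expected or fewer
    readmissions than expected), else 0."""
    return int(excess_ratio <= 1.0)
```

A hospital with 3 stars and an HF excess readmission ratio of 0.97 would be coded 0 on the rating outcome and 1 on the HF readmission outcome.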
Key Independent Variables
Our main independent variable was the number of care coordination strategies that the hospital reported implementing on the AHA Care Systems and Payment Survey. Hospitals were asked about the degree (not used at all, used minimally, used moderately, used widely, or used hospital-wide) to which they implemented the following 12 strategies: predictive analytics (computer algorithms that identify patients at-risk for adverse outcomes and prompt clinical teams to develop collaborative risk reduction plans and ensure that patients receive necessary resources), medication reconciliation, hospitalists, visit summaries (encounter summaries that are given to patients and include scheduled follow-up appointments), outreach after discharge (follow-up phone calls within 72 h), home visits for patients unable to make office visits, prospective patient management for high-risk patients, discharge care plans/continuity of care program, outpatient follow-up with a case manager for patients at risk for readmission, chronic care management processes/program, disease management programs, and nurse case managers for outpatient management of chronic conditions (see Appendix 1, Supplemental Digital Content 1, http://links.lww.com/MLR/B892, for survey questions).
Strategies were determined to be implemented if the hospital responded that the strategy was used widely or hospital-wide. We categorized the number of strategies into 4 categories based on approximate quantiles from all AHA Care Systems and Payment Survey respondents: 0–2 (22.3% of hospitals), 3–4 (30.6%), 5–7 (21.6%), and 8–12 (25.5%) strategies. We also analyzed each of these strategies individually.
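The quantile-based grouping of strategy counts described above amounts to a simple lookup; the sketch below (with illustrative naming) maps a hospital's count of widely implemented strategies to the four categories used in the analysis:

```python
def strategy_category(n_strategies: int) -> str:
    """Map a count of widely implemented strategies (0-12) to the four
    approximate-quantile categories used in the analysis."""
    if not 0 <= n_strategies <= 12:
        raise ValueError("expected a count between 0 and 12")
    if n_strategies <= 2:
        return "0-2"
    if n_strategies <= 4:
        return "3-4"
    if n_strategies <= 7:
        return "5-7"
    return "8-12"
```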
We included hospital and county characteristics that could impact the hospital’s overall rating and excess readmission ratios in the estimation. For hospital characteristics, we examined rural location, medical school affiliation, system affiliation, ownership status, size, safety-net status (defined as a Medicaid discharge rate >1 SD above the mean private hospital Medicaid discharge rate in the hospital’s state),28 the number of full-time equivalent registered nurses per 1000 patient-days,29 and the percentage of discharges reimbursed by Medicare. Overall rating high performers have previously been identified as smaller, nonprofit, system-affiliated, located in the Midwest and West, and caring for lower rates of Medicaid patients.12 For county characteristics, we examined the percentage of residents living below the poverty line and the percentage of residents who are Black.
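For clarity, the safety-net definition can be sketched as follows (the rates below are hypothetical; the study applied this rule using AHA discharge data within each state):

```python
import statistics

def is_safety_net(hospital_medicaid_rate, state_private_medicaid_rates):
    """Flag a hospital as safety-net when its Medicaid discharge rate is
    more than 1 SD above the mean rate of its state's private hospitals."""
    mean = statistics.mean(state_private_medicaid_rates)
    sd = statistics.stdev(state_private_medicaid_rates)  # sample SD
    return hospital_medicaid_rate > mean + sd

# Hypothetical state with 4 private hospitals (Medicaid discharge rates):
private_rates = [0.10, 0.15, 0.20, 0.25]
```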
Measure and domain eligibility have been found to negatively impact overall ratings.30 We found this trend to be roughly present in the December 2016 ratings for all hospitals (1-star hospitals: 48 measures and 6.8 domains; 2-star hospitals: 46 measures and 6.7 domains; 3-star hospitals: 39 measures and 6.3 domains; 4-star hospitals: 41 measures and 6.3 domains; and 5-star hospitals: 43 measures and 6.1 domains). To account for this feature of the program design, we controlled for the number of domains.
We first used χ2 tests and t tests to compare the characteristics of hospitals that responded to the care coordination questions on the AHA survey to hospitals that did not respond to the questions. We then compared strategy implementation, hospital characteristics, and county characteristics of hospitals that received a 4-star or 5-star rating to hospitals that received a 1–3-star rating. In our primary analysis, we used multiple logistic regression to regress the binary overall rating variable (4–5-star hospitals compared with 1–3-star hospitals) on the number of strategies and controlled for county and hospital characteristics.
Hospital performance in the overall rating program varied significantly by state, with >60% of hospitals receiving a 4-star or 5-star rating in New Hampshire, South Dakota, Vermont, and Wisconsin and <10% of hospitals receiving a 4-star or 5-star rating in Alaska, Connecticut, District of Columbia, Guam, Nevada, New York, Puerto Rico, Virgin Islands, and West Virginia. In addition, participation in the survey varied significantly by state, with ≥50% participating in Alaska, Delaware, North Dakota, and Vermont and <10% in Alabama, New Mexico, and Tennessee. No hospitals participated in Guam, Puerto Rico, or the Virgin Islands. To account for across-state variation in overall rating performance and survey participation (see descriptive statistics of state variation in Appendix 2, Supplemental Digital Content 1, http://links.lww.com/MLR/B892), we ran these models with and without state-fixed effects.31
In our secondary analyses, we used 12 separate multiple logistic regression models to regress the binary overall rating variable on the 12 individual care coordination strategies. We also regressed the 6 disease-specific readmission variables on the number of strategies.
We also conducted sensitivity analyses using different reference groups for our independent variable and comparing hospitals with more than the median number of strategies to hospitals with fewer. Similar results were obtained. All analyses were performed using Stata 15.1. We considered results to be statistically significant at P<0.05.
In Table 1, we compare the characteristics of hospitals that responded to the AHA care coordination questions with nonresponding hospitals. Hospitals participating in the survey were more likely to be large, academic, nonprofit, not system affiliated, and located in urban or suburban areas and in counties with lower rates of poverty. Participating hospitals received ratings based on significantly more measures and domains, yet there was no significant difference in the percentage of hospitals that received a 4-star or 5-star rating between the 2 groups. The distribution of high star ratings within states was similar for survey participants and nonparticipants, aside from Delaware and Washington (see Appendix 2, Supplemental Digital Content 1, http://links.lww.com/MLR/B892, for state-level survey participation and ratings).
Table 2 displays the characteristics and care coordination strategy adoption rates of our sample and compares the 505 hospitals that received 1–3 stars (low ratings) with the 205 hospitals that received 4–5 stars (high ratings). We found that hospitals that received high star ratings were located in communities with lower rates of poverty and fewer minority residents. High-rated hospitals had more nurses per patient-day and a higher percentage of Medicare discharges. They were more likely to be nonprofit and less likely to be safety-net hospitals. There were no differences in the mean number of measures and domains contributing to the overall rating.
On average, hospitals in our sample reported having widely implemented nearly 6 strategies. More than 50% of hospitals reported having widely implemented medication reconciliation, hospitalists, visit summaries, and outreach after discharge. Hospitals reported lower rates of wide-scale implementation of predictive analytics and care management programs. When comparing rates of implementation between hospitals with higher and lower overall ratings, hospitals with higher ratings reported having implemented significantly more strategies on average (6.22 vs. 5.48; P<0.01) and had higher rates of implementation for 7 strategies: medication reconciliation, provision of discharge summaries, outreach after discharge, discharge care plans, outpatient follow-up, chronic care management, and disease management programs. Eight hospitals reported implementing no strategies and 7 of those hospitals received a 1–3-star rating (see distribution of strategies in Appendix 3, Supplemental Digital Content 1, http://links.lww.com/MLR/B892).
Table 3 displays the results of the overall rating regressed on the number of strategies. In Model 1, which controlled for county and hospital characteristics, we found that, compared with hospitals with 0–2 strategies, hospitals with 3–12 strategies had ~2.5 times the odds of receiving a high rating. When we included state-fixed effects, the magnitude of the coefficients was somewhat lower, but the findings for the 3–4 strategies and 8–12 strategies categories remained significant. Of note, the state-fixed effects model does not contain observations from 11 states in which all hospitals that participated in the survey were either low-performing or high-performing.
We also found that for-profit, safety-net, and large hospitals had lower odds of receiving a high rating. With each additional full-time equivalent registered nurse per 1000 patient-days, a hospital’s odds of receiving a high rating increased by 27%.
Table 4 displays the results of the 12 individual care coordination strategy models. We found that hospitals that implemented medication reconciliation, visit summaries, outreach after discharge, discharge care plans, and disease management programs had higher odds of receiving a high rating; however, only outreach after discharge remained significant after applying state-fixed effects.
Table 5 displays the results of the 6 readmission models. We found a positive association between number of strategies and successful performance on the HF and AMI readmission measures (ie, expected or less than expected readmissions). We also found a positive association between number of strategies and performance on the COPD readmission measure, although this finding was not significant in the state-fixed effects model.
Care coordination theoretically has the potential to promote hospital performance in each of the 7 domains included in the overall rating program. If all members of the inpatient team (hospitalists, consultants, nurses, ancillary providers, and others) coordinated care better among themselves and with outpatient providers, one would expect reductions in mortality, readmissions, hospital-acquired conditions, and unnecessary medical imaging, as well as improvements in patient experience, timeliness of service delivery, and compliance with clinical practice guidelines. However, there is a need for evidence regarding which care coordination strategies are actually effective at promoting coordinated hospital-based care and thereby improving outcomes.
This study finds that the most commonly used care coordination strategies or at least a combination of these strategies may be effective at coordinating care and promoting quality at general non-CAHs, as seen by the association between the number of strategies implemented and performance in the overall rating program. These findings were significant after controlling for the structural differences between the hospitals in our sample and were largely robust to state-level fixed effects.
Although this study cannot determine whether the care coordination strategies themselves contribute to the high quality of care at these institutions, it does seem that institutions that have made a commitment to promoting care coordination, as evidenced by their adoption of ≥3 strategies, are receiving higher overall ratings. It may also be that institutions that promote care coordination have robust quality improvement infrastructure; high-functioning and committed medical staff, nursing staff, and management; and other unobservable features that would promote strong performance in the overall rating program. The care coordination strategies included in the AHA survey are those most commonly used by hospitals to provide high-value care to patients and payers. They are represented in major hospital-based care coordination programs: the Naylor Transitional Care Model,7 the Coleman Care Transitions Intervention,32 and Project RED.33 Our findings suggest that these common strategies may be more successful at promoting coordinated care for medical patients, particularly those with exacerbations of chronic conditions, than for surgical patients. Additional evidence is needed to identify strategies that are most effective for surgical populations. There may be valuable lessons to be learned from the Comprehensive Care for Joint Replacement model and other surgical bundled payment programs.34
We also found significant variation in hospital performance across states. When we included state-fixed effects in our models, the magnitude of the association between care coordination strategies and overall rating diminished. This suggests that a hospital’s success in the overall rating program is somewhat dependent on its location. This may be attributed to state-level differences in patient complexity, demographics, collaboration between hospitals and local health departments, public health and social service spending, services provided by Medicaid programs, the robustness of the outpatient delivery system, quality improvement initiatives, or payment reform.35,36 We also cannot rule out the contribution of differences between hospitals that did and did not participate in the survey. Additional research is needed to understand how state-level factors contribute to hospital care coordination and quality.
Lastly, the hospital overall ratings have received considerable negative publicity over concerns that they unfairly penalize hospitals that care for the socially disadvantaged.12 Our analysis confirms these concerns, as we found that safety-net hospitals had lower odds of receiving top ratings. In 2016, the 21st Century Cures Act led to the establishment of a new, peer group-based payment adjustment method for the HRRP, in which hospitals are divided into 5 peer groups based on the percentage of their Medicare discharges that are dual eligible and compared against their peers instead of all participating hospitals. This approach led to a 14 percentage-point reduction in HRRP penalties among low socioeconomic status hospitals.37 We encourage policymakers to consider similar peer-grouping approaches for the overall rating program.
The study only included data from hospitals participating in the 2015 AHA Care Systems and Payment Survey. To improve the precision of our findings, we excluded specialty hospitals and CAHs from our analysis. However, the survey participants differed somewhat from hospitals that did not participate in the survey. The hospitals in our sample were larger and were rated based on more measures and domains. As coordinating care is likely more difficult at large institutions, it is encouraging that we found an association between care coordination and overall ratings in this sample. However, potential differences in strategy implementation between participating and nonparticipating hospitals remain a limitation of this study, and results may have limited generalizability beyond survey participants. The survey also relies on hospital self-report, and the actual implementation of the strategies has not been validated by an external body. The way that hospitals operationalize these strategies can vary markedly, and this study does not account for this variation. We also do not know how these strategies were implemented in relation to specific conditions; however, we believe that hospitals would prioritize implementation for conditions included in public reporting programs. Finally, this study used cross-sectional data, so causal inferences cannot be made; it is unclear whether high-performing hospitals are more likely to implement care coordination strategies or whether hospitals that implement strategies are more likely to perform well in the overall rating program.
Despite the controversy, CMS overall ratings are likely here to stay. In the first quarter of 2019, CMS sought comments on how to enhance the methodology. One major proposal is to place hospitals into similar groups, such as small hospitals or academic medical centers, and conduct “like-to-like” comparisons.3 Regardless of changes to the rating methodology, we believe that our key finding will continue to hold: hospitals that implement more care coordination practices are more likely to receive high overall ratings.
1. CMS. CMS updates website to compare hospital quality. Centers for Medicare and Medicaid Services. 2017. Available at: www.cms.gov/newsroom/press-releases/cms-updates-website-compare-hospital-quality. Accessed April 8, 2019.
2. CMS. Find and compare information about hospitals. Hospital Compare. Available at: www.medicare.gov/hospitalcompare/search.html?. Accessed April 8, 2019.
3. CMS. CMS updates consumer resources for comparing hospital quality. Centers for Medicare and Medicaid Services. 2019. Available at: www.cms.gov/newsroom/press-releases/cms-updates-consumer-resources-comparing-hospital-quality. Accessed April 9, 2019.
4. CMS. What are the hospital overall ratings? Centers for Medicare and Medicaid Services. Available at: www.medicare.gov/hospitalcompare/about/hospital-overall-ratings.html. Accessed March 3, 2019.
5. AHRQ. Care coordination. Agency for Healthcare Research and Quality. 2014. Available at: www.ahrq.gov/professionals/prevention-chronic-care/improve/coordination/index.html. Accessed June 27, 2019.
6. Bodenheimer T. Coordinating care—a perilous journey through the health care system. N Engl J Med. 2008;358:1064–1071.
7. Naylor MD, Brooten D, Campbell R, et al. Comprehensive discharge planning and home follow-up of hospitalized elders: a randomized clinical trial. JAMA. 1999;281:613–620.
8. Gillespie U, Alassaad A, Henrohn D, et al. A comprehensive pharmacist intervention to reduce morbidity in patients 80 years or older: a randomized controlled trial. Arch Intern Med. 2009;169:894–900.
9. Jungerwirth R, Wheeler SB, Paul JE, et al. Association of hospitalist presence and hospital‐level outcome measures among medicare patients. J Hosp Med. 2014;9:1–6.
10. Chen LM, Birkmeyer JD, Saint S, et al. Hospitalist staffing and patient satisfaction in the national Medicare population. J Hosp Med. 2013;8:123–131.
11. Figueroa JF, Feyman Y, Zhou X, et al. Hospital-level care coordination strategies associated with better patient experience. BMJ Qual Saf. 2018;27:844–851.
12. Chatterjee P, Joynt Maddox K. Patterns of performance and improvement in US Medicare’s hospital star ratings, 2016–2017. BMJ Qual Saf. 2018;28:486–494.
13. Bean M. CMS’ overall star ratings updates, delays: a timeline. Becker’s Hospital Review. 2018. Available at: www.beckershospitalreview.com/rankings-and-ratings/cms-overall-star-ratings-updates-delays-a-timeline.html. Accessed April 8, 2019.
14. Castellucci M. CMS considers tossing hospital star-rating methodology. Modern Healthcare. 2019. Available at: www.modernhealthcare.com/safety-quality/cms-considers-tossing-hospital-star-rating-methodology. Accessed April 8, 2019.
15. DeAngelis CD. How helpful are hospital rankings and ratings for the public’s health? Milbank Q. 2016;94:729–732.
16. Castellucci M. CMS star ratings disproportionately benefit specialty hospitals, data show. Modern Healthcare. 2018. Available at: www.modernhealthcare.com/article/20180314/NEWS/180319952/cms-star-ratings-disproportionately-benefit-specialty-hospitals-data-show. Accessed June 25, 2019.
17. Thompson MP, Kaplan CM, Cao Y, et al. Reliability of 30-day readmission measures used in the hospital readmission reduction program. Health Serv Res. 2016;51:2095–2114.
18. Gu Q, Koenig L, Faerberg J, et al. The medicare hospital readmissions reduction program: potential unintended consequences for hospitals serving vulnerable populations. Health Serv Res. 2014;49:818–837.
19. Glance LG, Kellermann AL, Osler TM, et al. Impact of risk adjustment for socioeconomic status on risk-adjusted surgical readmission rates. Ann Surg. 2016;263:698–704.
20. AHA. AHA Annual Survey Database™. American Hospital Association. 2019. Available at: www.ahadata.com/aha-annual-survey-database-asdb/. Accessed April 11, 2019.
21. AHA. AHA Annual Systems Survey. American Hospital Association. 2019. Available at: www.ahadata.com/aha-survey-care-systems-payment/. Accessed April 11, 2019.
22. HRSA. Area Health Resources Files. Health Resources and Services Administration. Available at: https://data.hrsa.gov/topics/health-workforce/ahrf. Accessed April 11, 2019.
23. McIlvennan CK, Eapen ZJ, Allen LA. Hospital readmissions reduction program. Circulation. 2015;131:1796–1803.
24. Kreke J. Overall Hospital Quality Star Ratings: answers to your frequently asked questions. Advisory Board. 2016. Available at: www.advisory.com/research/revenue-cycle-advancement-center/at-the-margins/2016/12/demystifying-overall-hospital-quality-star-ratings. Accessed April 17, 2019.
25. CMS. How are hospital overall ratings calculated? Centers for Medicare and Medicaid Services. Available at: www.medicare.gov/hospitalcompare/Data/Hospital-overall-ratings-calculation.html. Accessed March 3, 2019.
26. Advisory Board. New CMS overall star ratings are out. See how hospitals fared on our map. Advisory Board. 2016. Available at: www.advisory.com/daily-briefing/2016/12/22/cms-star-ratings. Accessed June 30, 2019.
27. CMS. Hospital Readmissions Reduction Program (HRRP). Centers for Medicare and Medicaid Services. 2019. Available at: www.cms.gov/medicare/medicare-fee-for-service-payment/acuteinpatientpps/readmissions-reduction-program.html. Accessed March 3, 2019.
28. Ross JS, Bernheim SM, Lin Z, et al. Mortality and readmission at safety net and non-safety net hospitals for three common medical conditions. Health Aff (Millwood). 2012;31:1739–1748.
29. Spetz J, Donaldson N, Aydin C, et al. How many nurses per patient? Measurements of nurse staffing in health services research. Health Serv Res. 2008;43(5 pt 1):1674–1692.
30. Bilimoria KY, Barnard C. The new CMS hospital quality star ratings: the stars are not aligned. JAMA. 2016;316:1761–1762.
31. Jencks SF, Cuerdon T, Burwen DR, et al. Quality of medical care delivered to Medicare beneficiaries: a profile at state and national levels. JAMA. 2000;284:1670–1676.
32. Coleman EA, Parry C, Chalmers S, et al. The care transitions intervention: results of a randomized controlled trial. Arch Intern Med. 2006;166:1822–1828.
33. Jack BW, Chetty VK, Anthony D, et al. A reengineered hospital discharge program to decrease rehospitalization. Ann Intern Med. 2009;150:178–187.
34. Iorio R. Strategies and tactics for successful implementation of bundled payments: bundled payment for care improvement at a large, urban, academic medical center. J Arthroplasty. 2015;30:349–350.
35. Chen J, Novak P, Barath D, et al. Local health departments’ promotion of mental health care and reductions in 30-day all-cause readmission rates in Maryland. Med Care. 2018;56:153–161.
36. Cole MB, Wilson IB, Trivedi AN. State variation in quality outcomes and disparities in outcomes in community health centers. Med Care. 2017;55:1001–1007.
37. McCarthy CP, Vaduganathan M, Patel KV, et al. Association of the new peer group-stratified method with the reclassification of penalty status in the Hospital Readmission Reduction Program. JAMA Netw Open. 2019;2:e192987.