Escalating health care costs impose significant burdens on patients, medical institutions, and society. In a recent survey, voters described cost control and affordability as their top health care priorities.1 Further, research has shown that geographical regions with higher costs do not experience better outcomes, suggesting that much of this additional spending may be unnecessary.2 Because nearly 60% of health care spending goes to hospitals, physicians, clinical services, and medications, doctors play a critical role in this issue3; however, most physicians have poor knowledge of the costs of care.4–6
The role of cost-effectiveness in medical education still needs to be defined.
Medical educators know little about how much interest residents have in health care costs or whether training in cost-effective—or “value-based”7—medicine is necessary. In Crossing the Quality Chasm, the Institute of Medicine recommended a fundamental realignment of graduate medical education to support the transition from a fragmented, often-wasteful health care system to one focused on both quality and cost concerns.8 In addition, the Accreditation Council for Graduate Medical Education now mandates that residency curricula include “practice-based learning and improvement” and that residents receive training in both “cost-effective health care and resource allocation that does not compromise quality of care.”9 Nonetheless, despite these recommendations and mandates, to our knowledge, few residency programs have introduced formal instruction on providing effective, cost-efficient health care.
To address this gap, we designed an intervention with two primary objectives: (1) to increase awareness among residents about how their decisions influence medical costs and (2) to improve the cost-effectiveness of care provided by residents. We hypothesized that a brief educational intervention for residents involving a review of actual hospital bills and a discussion of approaches to reducing unnecessary costs would decrease expenses without adversely affecting patient outcomes.
Design, setting, and participants
We conducted this study at two hospitals within a single internal medicine residency program in Boston, Massachusetts. Residents in our program receive a brief orientation about care delivery, including efficiency of care, at the beginning of their internship, but no other modules in the residency curriculum focus on this domain.
Because decisions regarding patient care in our hospitals are primarily made by teams of residents (as opposed to individuals), we performed a cluster randomized trial. The inpatient team served as the unit of randomization. Figure 1 presents the CONSORT flow diagram.10 Each team consisted of one to three interns (postgraduate year 1) and one or two supervising residents (postgraduate years 2 and 3). Two teams had four subinterns (fourth-year medical students) in place of two interns. A computer randomly assigned 33 teams of internal medicine residents, comprising a total of 96 individuals, to either the intervention or control group. We conducted two rounds of interventions and data collection four weeks apart, in October and November 2009. Teams received the intervention training only once. In the second round, we excluded teams with residents who had already participated in the study (in either the intervention or control arm). We stratified randomization by department (community hospital general medicine, tertiary hospital general medicine, oncology, and cardiology) to ensure even distribution of departments across intervention and control groups. Assuming, a priori, an intracluster correlation of 0.05 and a cluster size of 40 admissions per team, we calculated that our final sample would contain 1,300 patients and yield 70% power to detect a 15% decrease in total costs.
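The power assumptions above can be reproduced from the standard cluster-trial design effect, 1 + (m − 1) × ICC. The sketch below is illustrative only, using the intracluster correlation (0.05), cluster size (40 admissions), and planned sample (1,300 patients) stated in the text:

```python
# Illustrative check of the stated cluster-trial power assumptions.

def design_effect(icc: float, cluster_size: int) -> float:
    """Variance inflation from cluster randomization: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

de = design_effect(0.05, 40)   # 1 + 39 * 0.05 = 2.95
effective_n = 1300 / de        # planned sample of 1,300 patients

print(f"Design effect: {de:.2f}")
print(f"Effective sample size: {effective_n:.0f} patients")
```

An ICC of 0.05 with 40 patients per team inflates variance roughly threefold, which is why 1,300 enrolled patients behave, for power purposes, like a far smaller independent sample.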
Residents in the intervention group provided verbal informed consent; we received a waiver of consent for control group residents who received no intervention. The decision to participate in the study did not have any effect on the residency program’s evaluation of individual residents, and we removed all resident identifiers before data analysis to prevent linking specific residents with any patient outcomes. The Partners HealthCare institutional review board approved this protocol.
The intervention was a team-based, 45-minute teaching session on health care costs, led by a faculty member (J.T.K. or M.T.) and senior resident or fellow (B.D.S. or N.D.). We offered residents a $5 gift card as an incentive to attend. The session began with an overview of medical costs and a presentation of research demonstrating physicians’ lack of knowledge about costs. Then, each resident received the discharge summary and the itemized bill of a patient for whom he or she had recently cared (the bill and discharge summary had been randomly selected before the session from a database provided by the hospital admitting office). After the residents had a chance to review the bills, we facilitated an open-ended discussion which covered the following: residents’ reactions to the bills, opportunities for reducing unnecessary costs, and barriers to doing so. The session ended with a review of key conclusions and the distribution of a printed pocket card. The card included recommendations on how to provide more cost-effective care (List 1) and a list of charges for common lab tests, radiology studies, and expensive medications. We developed the card in cooperation with other senior members of the hospital faculty. We offered a voluntary make-up session within two days to residents in the intervention group who had missed their originally scheduled session.
In our hospitals, residents work together for two weeks before changing teams. This required us to limit the study period to the two-week rotation. The educational intervention occurred on the first or second day of each rotation. We collected data on all patients cared for during the 13-day period in which the teams’ personnel did not change.
The primary outcomes, which we obtained from the hospitals’ accounting offices, were total hospital costs per patient, and costs per patient in three subcategories over which residents may have particular influence: laboratory, radiology, and pharmacy. These figures were measures of the direct costs of care to the hospital, as estimated by the hospital’s finance department, in 2009 dollars.
Secondary outcomes were the following measures, obtained from the hospital’s data repository and the Social Security Death Index: length of stay (LOS), transfer to an intensive care unit (ICU) after initial admission, 30-day readmission rate, 30-day mortality rate, and a composite adverse-events measure (readmission, ICU transfer, or death). We limited the relevant dates of care for each admission to the two-week rotation in which the teams remained intact; this means that we truncated longer admissions by the beginning or end of the study period. We measured 30-day mortality and readmission from either the discharge date or from the end of the study period for patients whose admissions lasted past that point.
Subgroup and post hoc analyses
For our exploratory subgroup analyses, we limited the sample to admissions that started during the study period (as opposed to an already-admitted patient for whom the study team assumed care midhospitalization); we hypothesized that the intervention would have its strongest effect on new admissions.
Because of unexpected results, we conducted a post hoc analysis of readmissions, in which we classified each readmission by cause. We defined “preventable readmissions” as any unplanned readmission for the same diagnosis treated during the original admission, any unplanned readmission for a related diagnosis, or any readmission for complications related to treatment for the original admission. We defined “unpreventable readmissions” as any planned hospitalizations (e.g., scheduled chemotherapy cycles) or any readmissions due to diagnoses unrelated to the original admission. One of us (A.L.), who was blinded to each patient’s study group assignment, derived these classifications through a chart review, and one of us (N.D.) reviewed a random 10% subsample of readmissions to ensure reliability.
Three months after the completion of the intervention, we sent a one-time e-mail invitation to residents from both the intervention and control groups to complete an anonymous online survey. Residents had two weeks to respond to the survey. Once they completed the survey, which took approximately five minutes, they received a $5 gift card for participating. The survey had been pilot-tested on five residents not in the study sample. The survey assessed the following measures using Likert-type scales: the priority each resident places on cost-effectiveness when providing care; the effect they believe their decisions have on the costs of care; their exposure to and training in cost-effectiveness during residency; and how often they consider cost-effectiveness when making decisions regarding labs, radiology, medications, discharge planning, and overall care. The survey included a section for open-ended comments. We designed the survey de novo for the purposes of this study.
We conducted all statistical analysis with Stata 11.0 (StataCorp, College Station, Texas), on an intention-to-treat basis.
We addressed the cluster trial design through analysis of all patient-level outcomes clustered at the level of the team. We compared outcomes for continuous variables using adjusted Wald tests, and we used Pearson chi-square tests for categorical variables. We employed multivariate regression analyses, adjusting for covariates known to explain variation in costs and outcomes, to improve our precision and, in turn, our power to detect an intervention effect.11 Multivariate analysis also allowed us to adjust for potential confounders that may not have been evenly distributed across groups. The patient- or hospital-stay-related covariates we obtained from the patient data repository were as follows:
- health insurance,
- language spoken,
- Charlson comorbidity index,12
- diagnosis-related group payment weight,
- department of care,
- whether the patient’s admission was truncated by the beginning and/or end of the study period,
- month of admission, and
- whether a patient was initially admitted to the ICU before being transferred to a study team.
For our regression analyses, we used Huber–White robust standard errors clustered at the level of the team. Previous research demonstrates that this is a valid approach for examining cluster trial data.13,14 We analyzed costs through ordinary least squares regression, and we conducted sensitivity analyses using the logarithm of charges to account for positive skewness. We analyzed binary outcomes using multivariate logistic regression.
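The Huber–White cluster-robust variance estimator described above can be sketched as follows. This is a minimal illustration on synthetic data (no small-sample correction), not the authors' actual Stata analysis; the variable names and simulated team structure are assumptions for the example:

```python
import numpy as np

def cluster_robust_ols(X, y, clusters):
    """OLS point estimates with Huber-White standard errors clustered
    on `clusters` -- the sandwich estimator, without finite-sample
    corrections that production software (e.g., Stata) applies."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # "Meat" of the sandwich: sum of per-cluster score outer products.
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(clusters):
        Xg, eg = X[clusters == g], resid[clusters == g]
        score = Xg.T @ eg
        meat += np.outer(score, score)
    cov = XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(cov))

# Synthetic example: 20 teams (clusters) of 30 patients each, with a
# shared team-level shock that ordinary OLS standard errors would ignore.
rng = np.random.default_rng(0)
teams = np.repeat(np.arange(20), 30)
x = rng.normal(size=600)
X = np.column_stack([np.ones(600), x])
y = 2 + 0.5 * x + rng.normal(size=600) + rng.normal(size=20)[teams]
beta, se = cluster_robust_ols(X, y, teams)
print(beta, se)
```

Clustering the errors at the team level keeps inference honest when outcomes within a team are correlated, which is exactly the situation a cluster randomized trial creates.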
We analyzed survey responses at the resident level using Wilcoxon rank-sum tests to identify differences between intervention and control groups. We grouped the open-ended responses by study arm and, by post hoc consensus, sorted the topics that recurred across multiple surveys into themes. We then selected representative quotations for each theme.
Of the 47 residents invited, 43 (91%) attended the scheduled intervention; 3 residents (6%) missed the initial session but completed it two days later, and 1 resident (2%) did not attend the intervention (Figure 1).
The study covered 1,195 admissions, but we excluded one because no cost data were available, leaving a sample of 1,194. Between the intervention and control groups, we detected no statistically significant differences in residents’ level of training (Table 1) or in patients’ demographic features or health measures (Table 2). Across both groups, the mean patient age was 63 years, and 380 patients (32%) belonged to racial or ethnic minorities.
The results of our univariate analyses for the total sample revealed that total costs were $69 less per admission in the intervention than the control group ($6,422 versus $6,491), which was not statistically significant (P = .92, Table 3); the intracluster correlation was r = 0.048. Radiology, laboratory, and pharmacy costs also did not differ significantly. Univariate results were similar in the subgroup of admissions that began during the study period. Health outcomes did not differ significantly between the intervention and control groups in the full sample in univariate analyses. In the subgroup analysis, 30-day mortality was significantly lower in the intervention than the control group (1.5% versus 5.8%, P = .008), though we detected no difference in the composite adverse-events outcome.
In multivariate analyses, the intervention was not associated with any statistically significant differences in costs for the total sample (Table 4). In the subgroup of admissions that began during the study, the intervention was associated with a $163 reduction in laboratory costs per admission (95% confidence interval [CI]: −$323, −$3; P = .046). The intervention was associated with a significantly higher risk of 30-day readmission both in the full sample (adjusted odds ratio [OR] = 1.51; 95% CI: 1.11, 2.07; P = .010) and in the subgroup (adjusted OR = 1.40; 95% CI: 1.02, 1.91; P = .036). In the subgroup analysis, the intervention was associated with significantly lower 30-day mortality (adjusted OR = 0.24; 95% CI: 0.07, 0.82; P = .023). In the composite adverse-events outcome, we detected no significant differences in any of the analyses.
Sensitivity analysis using logarithmically transformed costs produced similar results. The intervention was associated with a significant reduction in lab costs in the subgroup analysis (adjusted β = −0.17; 95% CI: −0.32, −0.02; P = .033; not shown) but no significant differences in other cost outcomes or in any patient outcomes.
Analysis of readmissions
The post hoc analysis of readmissions showed that the intervention was associated with more readmissions even after excluding unpreventable readmissions (adjusted OR = 2.01; 95% CI: 1.25, 3.23; P = .004; not shown). Interrater reliability for classification of readmissions was substantial, with 85% agreement (kappa = 0.70, P < .001).15
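For readers unfamiliar with the kappa statistic, the reported figures relate through Cohen's formula, kappa = (observed − expected) / (1 − expected). The 50% expected (chance) agreement used below is an assumed value for illustration, not one reported in the study:

```python
def cohen_kappa(observed_agreement: float, expected_agreement: float) -> float:
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    return (observed_agreement - expected_agreement) / (1 - expected_agreement)

# With the reported 85% observed agreement, an expected chance agreement
# of 50% (an assumed value for illustration) reproduces kappa = 0.70,
# which falls in the "substantial" band of Landis and Koch.15
print(cohen_kappa(0.85, 0.50))
```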
Table 5 presents results from the follow-up survey. Participation was over 70% in both groups (35/47 in the intervention, 35/49 in the control). Of 35 residents who received the intervention, 18 (51%) strongly agreed that cost-effectiveness was an “important factor in the care I deliver” compared with only 10 (29%) in the control group, though this effect was not statistically significant (P = .063). Residents in the intervention group were significantly more likely to agree that their residency program had given them “sufficient exposure and training” on cost-effective care compared with controls (P = .041). Overall levels of cost-conscious ordering by residents were low; less than half of the residents (n = 29/69; 42%) reported that they consider costs at least “most of the time.” The remaining survey items did not show any significant differences.
List 2 presents selected comments from the open-ended portion of the survey. The most common theme from the residents in the control group was that the lack of education about costs was a major limitation in their training. For instance, one resident wrote, “[We] would benefit from more education about the costs of different orders so as to make decisions more wisely.” The most common theme from the intervention group was the identification of barriers to residents’ practicing cost-effective care, particularly the belief that attending physicians would overrule residents’ decisions. One resident explained, “Although I think about cost-effective care, I often don’t act based on it because it seems that most attendings don’t find that type of thinking appropriate when making decisions about individual patients.”
A brief educational intervention featuring a review and discussion of hospital bills led to increased awareness of medical costs among residents but was not associated with cost reductions for the overall study sample.
Our intervention produced a statistically significant decrease in lab costs for a subset of patients—those admitted during the study. This result has face validity: Our qualitative survey results indicated residents were less likely to apply cost-conscious care when they thought attending physicians would overrule them, and prior research demonstrates that residents exert significant autonomy when ordering labs, with less input from attendings than for procedures, radiology, or discharge timing.16 Further, it is plausible that decreased lab costs would most likely occur for patients who were admitted by the residents who received the intervention, especially because many recurring lab orders are entered at the time of initial hospitalization. Cost savings in this subgroup were substantial: an average of $163 per admission for the 403 intervention patients admitted during the two rounds of the study, which produced a total of $65,700 (a 17% reduction in lab costs) across four weeks.
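The subgroup savings arithmetic can be checked directly from the two figures reported in the text:

```python
# Reproducing the reported lab-cost savings for the subgroup of
# admissions that began during the study period.
per_admission_saving = 163   # dollars, adjusted estimate per admission
admissions = 403             # intervention patients admitted during the study

total_saving = per_admission_saving * admissions
print(f"Total lab-cost saving: ${total_saving:,}")  # $65,689, ~ $65,700
```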
An unexpected finding was that the intervention was associated with an increased 30-day readmission rate. Whether this effect was real or due either to chance or confounding is unclear. Possibly, the intervention could have led to early discharges and therefore more readmissions, especially because one item on the intervention card emphasized not ordering labs on the day of discharge for patients who were definitely leaving (List 1, item 3). However, given the overall findings that the intervention did not affect LOS, it seems unlikely that the intervention led to a high number of premature discharges. Meanwhile, in the subgroup analysis, mortality decreased in the intervention group by a commensurate and statistically significant amount, leaving no change in the overall composite measure of ICU transfer, readmission, and mortality. These are probably Type I errors, as the magnitude of the changes in each individual measure—particularly for mortality—does not seem plausible. Most likely, because of chance, the intervention group’s sicker patients were discharged and readmitted, whereas the control group’s sicker patients died at higher rates—with no significant change in the composite outcome. However, it is extremely important that future educational interventions emphasize the adverse health and economic effects of hospital readmissions17—a topic that our intervention may not have adequately addressed.
Comparison with previous research
Previously studied interventions designed to increase physician cost awareness have had mixed results; most of these were not randomized studies, which makes inferring causality difficult.18–20 One recent pre–post observational study found that showing lab charges to surgeons was associated with reduced charges of roughly $40 per patient-day.21 Our study takes a similar approach, providing actual bills to one set of residents and then comparing the hospital and lab costs associated with their patients with those of a control group’s patients; however, our study is a randomized controlled trial and uses actual costs as the outcome, as opposed to charges.
A randomized controlled trial in the 1990s found that listing the charge of laboratory tests within the electronic order system reduced lab ordering by 4.5% and reduced lab charges by 4.2%, though these differences were not statistically significant.22 A second randomized controlled trial in the 1980s targeted housestaff. The intervention group experienced an intensive, seven-session, 14-hour curriculum, which led to significant reductions in both LOS (by 0.72 days) and charges per admission (of $703 [in 1986 dollars]).23 Notably, the investigators did not detect any impact on either mortality or readmission rates. Our results suggest that a much less time-intensive intervention may also be effective, and thus potentially more practical, given the multitude of time constraints that currently exist in residents’ educational and clinical schedules. Further, our study provides current evidence to support an educational intervention approach, which is necessary considering the dramatic changes in health care and graduate medical education that have occurred over the past 20 years.
Outside of reduced lab costs, our educational intervention did not result in the savings one might expect. The open-ended comments residents wrote on the follow-up survey suggest that resistance from attending physicians was a major factor—that is, residents were reluctant to change their behavior for fear of being overruled by an attending physician. We cannot know for sure how much of this pushback was genuine as opposed to perceived. In terms of genuine pushback, larger, system-wide changes, such as payment reform and an evolving culture of medicine, may be necessary to produce greater cost awareness among attending physicians. Meanwhile, addressing the perception that attendings may not want residents to practice cost-effective medicine (even if that is not in fact the case) may require including attendings in future educational interventions or otherwise fostering open dialogue between faculty and housestaff.
The survey revealed that our relatively simple educational intervention made a significant difference in whether residents felt they had received adequate training regarding health care costs. Many residents reiterated in their open-ended comments their desire for more education on this topic. As costs continue to rise, equipping physicians-in-training with the skills and knowledge they need to navigate the challenges of providing value-based care is critical.24 Interventions such as ours may represent a straightforward, efficient method of providing that training in graduate medical education.
At our own institution, we have incorporated a modified form of this intervention into the standard intern curriculum to ensure that all residents are exposed to some training in value-based care. We have also revised, as a result of our unexpected 30-day readmission findings, the teaching session and pocket card to place greater emphasis on the importance of ensuring smooth transitions in care and avoiding premature discharges to reduce the risk of readmission.
Two areas that we feel would be most fruitful for future research are (1) exploring the long-term effects of this sort of intervention (and what the ideal “booster shot” might look like), and (2) searching for a better way to engage attending physicians in value-based care so as to help remove barriers to residents’ adoption of more cost-effective practices.
Strengths and limitations
Our study has several strengths: its randomized controlled design, the use of actual costs instead of charges, our combination of objective measures and self-reported educational outcomes including qualitative data, and a feasible, brief intervention that residency directors can easily apply in other settings.
Our study also has several limitations. First, our intervention’s effect was limited to lab costs in an exploratory subgroup analysis; we failed to detect an overall effect on our primary sample. Our initial projection of 40 admissions per team turned out to be slightly higher than actual patient volume (36 per team) during the study period, and the standard deviation of total costs was greater in the actual sample than in the projections (from total charges) used for our power calculations. Thus, the intervention possibly had significant effects on costs in areas other than labs, but the study was underpowered, and we were unable to detect such effects. We were limited in our study to the number of teams in our residency program, which prevented us from continuing recruitment to compensate for the fewer-than-expected admissions.
Second, we were unable to determine whether any provider-behavior changes resulting from our intervention would have been long lasting or if they would have waned over time, as occurred in one prior intervention study.18 This prior research finding raises the possibility that a “booster shot” or follow-up session could improve the intervention’s long-term efficacy. Because of the rapidly shifting team structure of our residency program, cross-contamination limited our ability to measure long-term outcomes after the initial two weeks. However, the follow-up survey suggested that the intervention was still affecting residents’ attitudes three months later. On the basis of these findings, we suggest a repeat of the intervention once or twice a year. Further research in this area would provide valuable insights.
Third, we did not have outpatient data and cannot rule out the possibility that the intervention led teams to shift costs to outpatient providers. Similarly, we did not have access to data from outside our system, which means that readmissions to other hospitals would not appear in our results.
The major limitations of the survey are twofold: First, cross-contamination could have occurred since a substantial interval had passed. By three months, many residents had worked on teams with residents from the other arm of the study. However, the qualitative comments from the control group suggest that most were unaware that other residents had experienced an educational intervention. The second concern is social desirability bias; that is, residents may have answered questions on the basis of how they thought the investigators would want them to. The evidence from the lab cost analyses, however, suggests that the residents in the intervention group were not merely paying lip service and did indeed change their behavior.
Residents identify their training as inadequate in the area of cost-effectiveness. A brief teaching intervention featuring a review and discussion of actual hospital bills significantly increased their sense of exposure to value-based care. The intervention also resulted in reductions in inpatient laboratory costs for the subgroup of new admissions. However, overall costs for the full sample were not significantly affected. More intensive educational efforts may be needed to produce reductions in total costs. Furthermore, this kind of session may also increase the risk of hospital readmission, which future educational efforts should address directly in order to prevent unintended negative consequences.
Acknowledgments: The authors are grateful to Maryalice Kenney (Faulkner Hospital), Keiron Tumbleton (Faulkner Hospital), and Ryan Fuller (Brigham & Women’s Hospital) for their assistance with hospital data, and to the Brigham & Women’s Hospital medicine residents for their participation in this study.
Funding/Support: This work was supported by funding from the Partners Center of Expertise in Quality & Safety. The sponsor reviewed and approved the original study design, but had no role in the conduct of the study or approval of the manuscript.
Other disclosures: Dr. Sommers had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. The authors have no conflicts of interest to report.
Ethical approval: The study protocol was approved by the hospital institutional review board. This trial is registered with ClinicalTrials.gov; its registration number is NCT01303263.
Previous presentations: A preliminary version of this paper was presented at the national meeting of the Society of General Internal Medicine in 2010.
1. Blendon RJ, Altman DE, Benson JM, et al. Voters and health reform in the 2008 presidential election. N Engl J Med. 2008;359:2050–2061
2. Sutherland JM, Fisher ES, Skinner JS. Getting past denial—The high cost of health care in the United States. N Engl J Med. 2009;361:1227–1230
3. Bodenheimer T. High and rising healthcare costs. Part 1: Seeking an explanation. Ann Intern Med. 2005;142:847–854
4. Allan GM, Lexchin J, Wiebe N. Physician awareness of drug cost: A systematic review. PLoS Med. 2007;4:e283
5. Thomas DR, Davis KM. Physician awareness of cost under prospective reimbursement systems. Med Care. 1987;25:181–184
6. Kuiken T, Prather H, Bloom S. Physician awareness of rehabilitation cost. Am J Phys Med Rehabil. 1996;75:416–421
7. Porter ME. What is value in health care? N Engl J Med. 2010;363:2477–2481
8. Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press; 2001
9. Joyce B. Introduction to competency-based residency education. http://ortho.ucsd.edu/educational/documents/CompetenciesforACGME.pdf. Accessed February 15, 2012
10. Campbell MK, Elbourne DR, Altman DG. CONSORT statement: Extension to cluster randomised trials. BMJ. 2004;328:702–708
11. Murray DM. Design and Analysis of Group-Randomized Trials. New York, NY: Oxford University Press
12. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: Development and validation. J Chronic Dis. 1987;40:373–383
13. Campbell MJ, Donner A, Klar N. Developments in cluster randomized trials and statistics in medicine. Stat Med. 2007;26:2–19
14. Ma J, Thabane L, Kaczorowski J, et al. Comparison of Bayesian and classical methods in the analysis of cluster randomized controlled trials with a binary outcome: The Community Hypertension Assessment Trial (CHAT). BMC Med Res Methodol. 2009;9
15. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–174
16. Iwashyna TJ, Fuld A, Asch DA, Bellini LM. The impact of residents, interns, and attendings on inpatient laboratory ordering patterns: A report from one university’s hospitalist service. Acad Med. 2011;86:139–145
17. Chen LM, Jha AK, Guterman S, Ridgway AB, Orav EJ, Epstein AM. Hospital cost of care, quality of care, and readmission rates: Penny wise and pound foolish? Arch Intern Med. 2010;170:340–346
18. Hampers LC, Cha S, Gutglass DJ, Krug SE, Binns HJ. The effect of price information on test-ordering behavior and patient outcomes in a pediatric emergency department. Pediatrics. 1999;103:877–882
19. Guterman JJ, Chernof BA, Mares B, Gross-Schulman SG, Gan PG, Thomas D. Modifying provider behavior: A low-tech approach to pharmaceutical ordering. J Gen Intern Med. 2002;17:792–796
20. Roth EJ, Plastaras CT, Mullin MS, Fillmore J, Moses ML. A simple institutional educational intervention to decrease use of selected expensive medications. Arch Phys Med Rehabil. 2001;82:633–636
21. Stuebing EA, Miner TJ. Surgical vampires and rising health care expenditure: Reducing the cost of daily phlebotomy. Arch Surg. 2011;146:524–527
22. Bates DW, Kuperman GJ, Jha A, et al. Does the computerized display of charges affect inpatient ancillary test utilization? Arch Intern Med. 1997;157:2501–2508
23. Manheim LM, Feinglass J, Hughes R, Martin GJ, Conrad K, Hughes EF. Training house officers to be cost conscious: Effects of an educational intervention on charges and length of stay. Med Care. 1990;28:29–42
24. Cooke M. Cost consciousness in patient care—What is medical education’s responsibility? N Engl J Med. 2010;362:1253–1255
25. Thavendiranathan P, Bagai A, Ebidia A, Detsky AS, Choudhry NK. Do blood tests cause anemia in hospitalized patients? The effect of diagnostic phlebotomy on hemoglobin and hematocrit levels. J Gen Intern Med. 2005;20:520–524
26. Hall EJ, Brenner DJ. Cancer risks for diagnostic radiology. Br J Radiol. 2008;81:362–378