The goals of value-based payment models for joint replacement surgery are to incentivize care coordination, reward centers that meet certain performance benchmarks, and discourage use of low-value treatments. Owing to the success of the Bundled Payments for Care Improvement pilot, the Centers for Medicare & Medicaid Services implemented Comprehensive Care for Joint Replacement (CJR), a mandatory bundled-payment program, in 67 metropolitan areas. The goal of CJR is to have institutions coordinate care and reduce unnecessary variance in outcomes and overall costs across hospitals and geographic regions. The program seeks to reward hospitals for providing high-value care and penalize them for postoperative complications, readmissions, and higher-than-expected costs.
Institutions need ways to translate value-based care models into everyday orthopaedic practice, and orthopaedic surgeons should have information available to them about patient- and system-level results, as well as treatment costs, to inform clinical decision making. Institutional leadership that relies on proscriptive guidelines and restrictions tends to disenfranchise surgeons and may not account for the unique needs of different patients and communities. Increasing surgeons’ awareness of their own performance to reduce variation in clinical outcomes such as length of stay, discharge disposition, readmission, patient experience, and episode costs may more effectively increase value, professional engagement, and collaboration.
In response to the finding that orthopaedic surgeons are often unaware of the costs associated with total joint replacement procedures, Seton Medical Center Austin in Austin, TX, USA worked with Dell Medical School to create an unblinded monthly surgeon value scorecard (SVS) for hip and knee arthroplasty. Each SVS summarized a rolling 6-month view of clinical and financial results, by surgeon, for patients undergoing primary THA or TKA. Participating surgeons worked together to develop definitions and measurement techniques for the desired metrics, which comprised five categories: patient demographics; clinical measures (length of stay, discharge disposition, readmissions, complications, and adherence to standardized care pathways); patient experience; financial costs; and operational metrics (including duration of surgery, total time spent in the operating room, and time spent in the post-anesthesia care unit). The results were associated with each surgeon by name in an unblinded fashion and reviewed at monthly interdisciplinary meetings where surgeons and clinical and administrative staff discussed their observations and opportunities for improvement.
One important characteristic of the SVS is its primary focus on outcomes and costs rather than process measures. When pursuing performance improvement efforts, it is easy to become mired in the detailed analysis of everyday workflows and decisions of unknown importance. A better strategy is to identify the metrics that matter most to patients and their providers. By including outcomes data (such as complication rates) in addition to process measures (such as adherence to care pathways and duration of surgery), the SVS design allowed surgeons to experiment with changes to their processes without a burdensome degree of measurement, while keeping their focus on the ultimate priorities: their patients’ health and the resources available to care for them.
Another central feature of the SVS is that it combined clinical and cost data. Assessing one in the absence of the other would severely limit the utility of both. Surgeons could be assured that their efforts to contain costs were not adversely affecting their patients’ clinical results, and that their efforts to improve clinical results were not inflating their care-episode costs. We also monitored demographics and Charlson Comorbidity Indices for every patient, and confirmed retrospectively that they were similar across surgeons at the start and in the tenth cycle of the SVS, to minimize concerns about invalid comparisons due to a lack of risk adjustment. We recommend including this type of monitoring to ensure that surgeons are not rewarded for cherry-picking only the healthiest patients rather than objectively assessing the risks and benefits of surgery for each patient as an individual.
Potentially the most important aspect of the SVS implementation process is that it was unblinded. Prior research has shown that exposing physicians to cost data alone is not sufficient to change practice. In our model, surgeons also saw data from the other surgeons in the practice, providing a high degree of accountability to their peers. The in-person monthly reviews also provided an outlet for brainstorming and sharing best practices. By comparing surgeons’ outcomes with their own prior results and those of their peers, rather than with a national standard, the SVS also allowed for flexibility and customization of practice in response to the unique needs of the local population.
Preliminary results of the SVS have been promising. Comparing the patient cohort from the first cycle of the SVS with one 10 months later, costs were considerably lower with no detrimental impact on clinical outcomes. Prior research in general surgery has shown that a surgeon scorecard, provided along with a financial incentive, was effective at reducing operating room supply costs at the service-line level. Our early experience with the SVS in total joint arthroplasty suggests that unblinded sharing of cost and clinical outcomes data can achieve similar results in orthopaedic surgery, even in the absence of a direct financial incentive (unpublished institutional data).
Arthroplasty is a good starting point for piloting the SVS because primary total joint replacement is a high-volume procedure with a relatively homogeneous patient population that lends itself to standardization, with implant choice, length of stay, and postacute care services accounting for a substantial share of the variation in total episode cost. These characteristics make it possible for small changes and reductions in variation to have a large and measurable effect in a relatively short period of time. Surgeon comparisons in other areas of orthopaedics also are possible, but they are more complex than in total joint replacement because of heterogeneity in patient populations and procedures.
At our institution, engaging surgeons in creating the SVS and agreeing to its parameters ahead of time was critical to our success. All stakeholders were prepared for the process and invested in the results. Exporting similar initiatives to other institutions will require paying close attention to the local culture and ensuring buy-in from all stakeholders ahead of time. By doing this, surgeons can actively integrate their knowledge and expertise into performance improvement opportunities, rather than these realms existing in opposition to one another.
Asking surgeons to participate in an unblinded assessment of their costs and clinical outcomes may not be feasible at every institution. In a recently published multisite study of a similar quality improvement collaboration, only five of the 13 participating institutions were compliant with the protocol, and only a fraction of eligible patients was initially submitted for inclusion in the analysis. In some practice environments, this level of disclosure may be viewed as intrusive. Physicians may initially balk, fearing that the data could be used against them. The adoption of an SVS requires a preexisting culture of mutual support, respect, and continuous learning to ensure that the process builds interprofessional bonds rather than eroding them. This culture must extend to trust between surgeons and hospital leadership in order to avoid the possibility, or fear, that results will affect the distribution of institutional backing. It also requires administrative support, as resources are needed to collect, analyze, and maintain these data. Institutions large enough that not all surgeons have close working relationships might do well to subdivide surgeons into smaller groups (eg, by practice location), and smaller institutions may choose to collect and compare fewer metrics to minimize administrative burden.
We developed the SVS with the notion that “what gets measured, gets managed.” Sharing cost and clinical data through a value scorecard minimizes the role of imposed constraints and financial incentives in performance improvement, and instead maximizes the role of professional responsibility. Measuring the most important aspects of practice—clinical outcomes and overall costs of care—focuses surgeons’ efforts on the big picture and the ultimate goal of improving the value of care we deliver to our patients. Associating the unblinded results with our surgeons led to communal accountability, but also may pose challenges when expanding the model to other environments. Overall, the SVS may represent a helpful tool for improving value by engaging surgeons directly in the creation of a learning health system. We aim to gather additional data, rigorously validate our preliminary findings, and continuously refine our care delivery model based on our results.
1. Cauchy F, Farges O, Vibert E, Boleslawski E, Pruvot FR, Regimbeau JM, Mabrut JY, Scatton O, Adham M, Laurent C, Grégoire E, Delpero JR, Bachellier P, Soubrane O. Sensitizing surgeons to their outcome has no measurable short-term benefit. Ann Surg. 2017;266:884–889.
2. Okike K, O'Toole RV, Pollak AN, Bishop JA, McAndrew CM, Mehta S, Cross WW 3rd, Garrigues GE, Harris MB, Lebrun CT. Survey finds few orthopedic surgeons know the costs of the devices they implant. Health Aff (Millwood). 2014;33:103–109.
3. Sedrak MS, Myers JS, Small DS, Nachamkin I, Ziemba JB, Murray D, Kurtzman GW, Zhu J, Wang W, Mincarelli D, Danoski D, Wells BP, Berns JS, Brennan PJ, Hanson CW, Dine CJ, Patel MS. Effect of a price transparency intervention in the electronic health record on clinician ordering of inpatient laboratory tests: The PRICE randomized clinical trial. JAMA Intern Med. 2017;177:939–945.
4. Timmermans S. From autonomy to accountability: The role of clinical practice guidelines in professional power. Perspect Biol Med. 2005;48:490–501.
5. Zygourakis CC, Valencia V, Moriates C, Boscardin CK, Catschegn S, Rajkomar A, Bozic KJ, Soo Hoo K, Goldberg AN, Pitts L, Lawton MT, Dudley RA, Gonzales R. Association between surgeon scorecard use and operating room costs. JAMA Surg. 2017;152:284.