DEPARTMENTS: PRACTICE POINTS
The key to successful Merit-based Incentive Payment System (MIPS) reporting is your engagement in the process. As discussed in previous columns, 4 categories of MIPS-eligible clinician performance contribute to a composite performance score of up to 100 points: Quality (weighted at 60% for 2017), Advancing Care Information (weighted at 25% for 2017), Improvement Activities (weighted at 15% for 2017), and Cost (weighted at 0% for 2017, but it will carry weight in 2018 and future years).
In this column, we are digging deeper into the "Quality" category and taking a closer look at the Quality Measure Benchmarks. The following information is drawn directly from the Quality Measure Benchmarks Overview.1 To obtain this document for your files, go to https://qpp.cms.gov/about/resource-library and, under the "documents and downloads" section, download the 2017 Quality Measure zip file, which contains the Quality Measure Benchmarks Overview; it is the fourth URL in the list.
When a clinician submits measures for the MIPS Quality Performance Category, each measure is assessed against its benchmark to determine how many points the measure earns. A clinician can receive anywhere from 3 to 10 points for each measure (not including any bonus points). Benchmarks are specific to the submission mechanism: electronic health records (EHRs), Qualified Clinical Data Registry (QCDR)/registries, Consumer Assessment of Healthcare Providers and Systems (CAHPS), and claims. These historic benchmarks are based on actual performance data submitted to the Physician Quality Reporting System (PQRS) in 2015, except for CAHPS. For CAHPS, the benchmarks are based on 2 sets of surveys: the 2015 CAHPS for PQRS and the CAHPS for Accountable Care Organizations. Submissions via the Centers for Medicare & Medicaid Services Web Interface will use benchmarks from the Shared Savings Program.
Each benchmark is presented in terms of deciles, and points are awarded within each decile (see Table, http://links.lww.com/NSW/A2). Clinicians whose performance falls in the first or second decile receive 3 points. Clinicians in the third decile receive between 3 and 3.9 points, depending on their exact position within the decile; clinicians in higher deciles receive a correspondingly higher number of points. For example, if a clinician submits data showing 83% on a measure, and the fifth decile begins at 72% and the sixth decile begins at 85%, then the clinician will receive between 5 and 5.9 points because 83% falls in the fifth decile. For inverse measures, where a lower score reflects better performance, the benchmark deciles are reversed (higher scores fall in lower deciles), but the lowest deciles still earn the fewest points.
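For readers who want to see the decile-to-points logic spelled out, the following is a minimal sketch of the scoring described above. The decile boundary values and the linear interpolation within a decile are illustrative assumptions for this example; they are not the official CMS formula or any actual measure's benchmarks.

```python
def mips_quality_points(rate, decile_starts):
    """Illustrative estimate of MIPS Quality points for one measure.

    rate: the clinician's performance rate (percentage) on the measure.
    decile_starts: 10 ascending values; decile_starts[i] is the lowest
    performance rate falling in decile i + 1.

    This is a sketch of the decile scoring described in the column,
    not the official CMS calculation.
    """
    # Find the highest decile whose starting value the rate meets or exceeds.
    decile = 1
    for i, start in enumerate(decile_starts, start=1):
        if rate >= start:
            decile = i
    if decile <= 2:
        return 3.0   # first and second deciles floor at 3 points
    if decile == 10:
        return 10.0  # top decile earns the maximum
    # Position within the decile maps linearly to decile..decile + 0.9.
    lo = decile_starts[decile - 1]
    hi = decile_starts[decile]
    return decile + 0.9 * (rate - lo) / (hi - lo)
```

Using hypothetical decile starts in which the fifth decile begins at 72% and the sixth at 85%, a rate of 83% lands in the fifth decile and earns between 5 and 5.9 points, matching the worked example above.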
For measures with no historic benchmark, MIPS will attempt to calculate benchmarks based on 2017 performance data. Benchmarks are created if at least 20 reporting clinicians or groups meet the criteria for contributing to the benchmark: meeting the minimum case size (generally 20 patients), meeting the data completeness criteria, and having performance greater than 0% (less than 100% for inverse measures). If no historic benchmark exists and no benchmark can be calculated, the measure will receive 3 points. Measures without historic benchmarks are listed at the bottom of the Table found online (http://links.lww.com/NSW/A2). The benchmark calculations for the 2017 performance year used data submitted for PQRS in 2015 by clinicians who were a provider type eligible for MIPS and were not newly enrolled in 2015, or by groups with at least 1 such clinician. Comparable Alternative Payment Model data are included when possible.
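The eligibility criteria for building a new benchmark can also be expressed as a simple check. In this sketch, the reporter field names and data layout are hypothetical; only the thresholds (20 reporters, 20-patient minimum case size, data completeness, and nonzero performance) come from the description above.

```python
def can_build_benchmark(reporters, min_reporters=20, min_cases=20,
                        inverse=False):
    """Illustrative check of whether performance-year data could
    support a new benchmark.

    reporters: list of dicts, each with hypothetical keys 'cases'
    (case count), 'data_complete' (bool), and 'rate' (performance %).
    A benchmark can be built only if enough reporters qualify.
    """
    def qualifies(r):
        # Must meet the minimum case size and data completeness criteria.
        if r["cases"] < min_cases or not r["data_complete"]:
            return False
        # Performance must exceed 0% (be below 100% for inverse measures).
        return r["rate"] < 100 if inverse else r["rate"] > 0

    return sum(qualifies(r) for r in reporters) >= min_reporters
```

If the check fails and no historic benchmark exists, the measure simply receives 3 points, as noted above.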
Each benchmark includes the following information: measure name and identification; submission type (EHR, QCDR/registries, claims); measure type (eg, outcome, process); whether a benchmark could be calculated for that measure/submission mechanism; the range of performance rates for each decile, to help identify how many points the clinician earns for that measure; and whether the benchmark is topped out (ie, the measure shows little variability and may be scored differently in future years).
To review the specific benchmarks for the following categories, see page 3 of the Quality Benchmarks Overview1: Historical Benchmark for Web Interface Reporters, Historical Benchmarks for CAHPS Reporters, Historical Benchmarks for the All-Cause Hospital Readmission Measure, and Historical Benchmarks for Topped-Out Process Measures.
As the program year moves forward, it is important to know how your patient population influences your measure selection. Equally important is understanding how your measure selection directly affects the documentation you capture for reporting. Next, check your reporting often to gauge progress on all aspects of MIPS. And finally, reach out for assistance if necessary.