The quality of medical care is notoriously difficult to measure, due in part to the many influential variables that differ from patient to patient, such as pre-existing medical conditions, the extent and duration of disease, and the degree of coordination of care among specialists. Modern care is distributed among so many subspecialists that a serious error or oversight may never be traced or even identified.
If measuring the quality of care is so difficult for us professionals, how hard must it be for patients? How are they to know whether a physician or hospital provides high-quality care? Most choose both by word-of-mouth recommendations from family members, friends, or a primary care doctor, and by geographic convenience.
Despite their lack of knowledge, patients are routinely asked about the “quality” of their care or, more precisely, their satisfaction with an episode of care. Short waiting times, easy parking, good hospital food, and a pleasant staff often top the lists in such polls. These are important factors that can ease the stress of medical care. But they say little or nothing about the medical quality of that care.
There are many forces that may lead us to equate patient satisfaction with the quality of the patient's medical care. The former is easy to capture in a phone call, and responding requires no medical knowledge. I have been in many hospitals in my career and have yet to hear of one that scored poorly on these questionnaires. Maybe 95 percent of all patients are quite happy with their care, but I doubt it. And even if they were, it still tells us next to nothing about the medical quality of care. Also, patients polled by phone may be reluctant to complain, fearing they might anger the very people responsible for their care.
Medicare is one objective resource for measuring quality. It collects information on certain quality measures that are easily tracked in its reimbursement database. The database is a goldmine for clinical investigators because of the national scope and reliability of the data. Process measures, such as appropriate and timely treatment and follow-up, are the easiest to identify and collect.
Although identifying a direct relationship between a process measure and a particular outcome is often difficult or impossible, using process measures is better than nothing, and at the very least it creates an opportunity for comparative studies. Combined with the few widely used outcome measures, such as postoperative infections, readmission rates, and 30-day surgical mortality, they can give a general sense of the quality of care.
More to my point, some studies have shown an inverse relationship between patient satisfaction and the medical quality of care: the higher the accolades, the lower the quality of care. The most recent such study, by Dr. Robert Lieberthal and Dominique Comer of Thomas Jefferson University in Philadelphia, is “What Are the Characteristics That Explain Hospital Quality? A Longitudinal Pridit Approach,” available online ahead of print in Risk Management and Insurance Review (DOI: 10.1111/rmir.12017). The study does not deal with oncology, but that does not matter for the point at hand.
The authors performed a retrospective analysis of Medicare data from 4,217 acute care and critical access hospitals that report to CMS's Hospital Compare database, the source of the quality measures. Twenty quality measures (with more than 70 variations in all) are reported in four categories: heart attack care, heart failure care, pneumonia care, and surgical infection prevention; five structural measures describe hospital type.
They applied a mathematical methodology, PRIDIT, to derive an aggregate relative measure of hospital quality from the individual process measures. The scoring recodes the variables in a data set so that each is measured not by its absolute value but by its position in the distribution of observed values. The method was originally developed to detect insurance fraud in institutional data.
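The PRIDIT approach named in the study's title builds on RIDIT scoring, which performs exactly this recoding. The following is a minimal sketch of the RIDIT step only; the function name, the example data, and the [-1, 1] scale are illustrative assumptions, not details taken from the study:

```python
from collections import Counter

def ridit_scores(values):
    """RIDIT-score a list of responses: each distinct value is mapped to
    (proportion of responses below it) minus (proportion above it),
    giving a score in [-1, 1] that reflects rank position in the
    observed distribution rather than absolute magnitude."""
    n = len(values)
    counts = Counter(values)
    props = {v: counts[v] / n for v in sorted(counts)}
    below = 0.0
    score = {}
    for v, p in props.items():
        above = 1.0 - below - p      # mass strictly above this value
        score[v] = below - above
        below += p                   # accumulate mass strictly below next value
    return [score[v] for v in values]

# Hypothetical compliance rates of five hospitals on one process measure
rates = [0.90, 0.95, 0.80, 0.95, 0.99]
print([round(s, 2) for s in ridit_scores(rates)])  # → [-0.4, 0.2, -0.8, 0.2, 0.8]
```

A hospital at the bottom of the distribution scores near -1 and one at the top near +1, regardless of how close the raw percentages are; PRIDIT then weights and aggregates such scores across many measures.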
This method made it possible to rank hospitals on quality of care using process measures and the demographic attributes of the hospitals. Hospital quality measures should take into account the differential value of different quality indicators, including hospital “demographic” variables. The results showed a large number of slightly below-average hospitals and small numbers of extremely high- and extremely low-scoring hospitals. Longitudinal scores were consistent with the initial cross-sectional sampling.
To me, the most interesting findings were that the highest scores of all were patient satisfaction scores, and that these scores were inversely related to the objective quality scores. On average, poorer-quality hospitals ranked highest in patient satisfaction, and the highest-quality hospitals ranked lowest. The authors suggest that most teaching hospitals score well on medical quality but are often crowded, with inefficient patient flow and poor parking, and can be intimidating in their size and geographic complexity.
One can draw several possible conclusions from these findings:
1. Patients are judging something other than medical quality of care when they are questioned about their satisfaction. Such factors include the courtesy and kindness of the staff, the ease of parking and of finding their way around the institution, and the speed and efficiency of service.
2. The measures of medical quality in this study are faulty.
3. Patients may have been asked the wrong questions, or none at all, about the quality of medical care.
My own belief is that 1 and 3 together are the most likely.
One might simply argue that patients are in no position to judge the quality of medical care. Nonetheless, it could be enlightening if a model were developed to obtain such information. Current methods of questioning are rarely helpful; perhaps the wrong questions are being asked. Some patients may feel unqualified to answer medically related questions, but others might weigh in given the opportunity.
In any case, institutions that point to patient satisfaction surveys as a surrogate for the quality of care are misleading their audiences. And I am certain that any institution that received a poor rating for patient satisfaction would not advertise the result.