Study: Pathology Errors Can Have Serious Effect on Cancer Diagnosis & Treatment

Fromer, Margot J.

doi: 10.1097/01.COT.0000291164.08133.9a

It almost goes without saying that errors in the pathology laboratory can have serious consequences for cancer patients. But although everyone knows that pathologists, being human, make mistakes, few physicians discuss it, and almost no research has been done to assess the type and extent of errors and the consequent effect on cancer care.

Until now. A team led by Stephen S. Raab, MD, Professor of Pathology at the University of Pittsburgh School of Medicine, catalogued and assessed such mistakes at four institutions. The results were published in the November 15 issue of Cancer.

Dr. Raab noted in an interview that the majority of cancer diagnoses are made on the basis of either histologic or cytologic evaluation, and if the diagnosis is wrong, treatment can be delayed and/or imprecise.

The reported frequency of anatomic pathologic errors ranges from 1% to 43% of all specimens, regardless of origin and disease, he said. The error rate for oncology is 1% to 5%.

But error frequency and effect are poorly characterized because there is no uniform measurement process, and many physicians have trouble understanding when an error has occurred.

Error Detection

The most common way to detect error is secondary review, i.e., another pathologist looks at a slide. The method was established by the Clinical Laboratory Improvement Amendments of 1988 (CLIA '88), under which cytologic and surgical specimens from the same site (for example, sputum and lung tissue) are compared, the so-called cytologic-histologic correlation.

If the two diagnoses differ (e.g., the sputum is suspicious for cancer but the biopsy is benign), an error has been made in one of them.

“One of the problems of secondary review is that it is not standardized,” Dr. Raab noted. “Not everyone does it the same way.”

It's not always possible to know if the review pathologist is correct, he acknowledged. “The only way you can be sure is if what you said the patient has turns out to be what he or she really has.”

This sounds like a Catch-22. Dr. Raab agreed: “Accuracy of cancer diagnosis depends on competence all around: from the pathologist as well as the clinician.”

Bruce A. Jones, MD, Senior Staff Pathologist at Henry Ford Hospital in Detroit, said there is no consensus about secondary review: “It is highly variable, and every hospital does it differently.”

Neither does CLIA mandate how the process is performed. Therefore, laboratories are free to use their own methods, which can lead to bias.

Dr. Raab described bias in interpreting pathology slides as something like what exists among jurors: “You have preexisting ideas about which disease is going to show up on the slide,” he said. “The process is very arbitrary.”

Yes, it is—somewhat, Dr. Jones said. “But pathologists as a rule are extremely conscientious about their work, and although second opinions are almost never mandated, we do them all the time anyway.”

Dr. Raab said that a detailed study of the effect of pathology errors detected by cytologic-histologic correlation has not been done, but others have estimated that 2.3% of cytologic specimens and 0.44% of surgical specimens were wrong, and that 23% of those errors had a significant effect on patient care.

4 Institutions

The Cancer study examines the frequency and cause of anatomic pathology errors at four institutions. The purpose was to share data, determine frequency, collect patient outcome information to determine the clinical effect, analyze the causes, develop strategies to reduce frequency, and assess the success of those strategies.

The authors standardized cytologic-histologic correlation in the four institutions. Then, each month a cytotechnologist identified all patients who had had both cytology and surgical specimens from the same site.

A review pathologist selected cases in which there was a discrepancy between the two specimens and then examined the slides and determined the cause of error.

For example, a diagnostic error occurred if a patient's bronchial brush specimen was diagnosed as benign and the lung biopsy showed carcinoma. The review pathologist examined all slides and determined which (or both or neither) was wrong, and then assigned a cause of error: interpretation or sampling or both.
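
To make the correlation step concrete, here is a minimal sketch of the pairing-and-flagging logic the study describes. It is illustrative only; the record fields and function name are hypothetical, not code from the study.

    # Illustrative sketch only; record fields and names are hypothetical.
    def find_discrepancies(cytology_cases, surgical_cases):
        """Pair cytology and surgical specimens from the same patient and
        site, and flag pairs whose diagnoses disagree (cytologic-histologic
        correlation)."""
        surgical_by_key = {(s["patient_id"], s["site"]): s for s in surgical_cases}
        discrepancies = []
        for cyto in cytology_cases:
            match = surgical_by_key.get((cyto["patient_id"], cyto["site"]))
            if match is not None and cyto["diagnosis"] != match["diagnosis"]:
                # e.g., a bronchial brush read as benign paired with a
                # lung biopsy showing carcinoma
                discrepancies.append((cyto, match))
        return discrepancies

Each flagged pair would then go to the review pathologist, who examines the slides, decides which reading was wrong, and assigns the cause of error.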

Categories

Dr. Raab and his colleagues reviewed the clinical records of all gynecologic (Pap tests) and nongynecologic errors. Then an independent clinical outcomes data collector reviewed the records. Finally, a pathologist assessed the clinical severity of the errors, rating them as follows:

  • No harm: Medical intervention proceeded appropriately despite the erroneous diagnosis.
  • Near miss: Intervention occurred before harm was done, or no intervention resulted from the erroneous diagnosis.
  • Minimal harm: Further unnecessary noninvasive diagnostic testing, a delay in diagnosis or therapy, or minor morbidity due to further diagnostic tests or treatment predicated on the error.
  • Moderate harm: Further unnecessary invasive diagnostic tests, a delay of more than six months in diagnosis or therapy, or major morbidity lasting less than six months.
  • Severe harm: Loss of life or limb, or morbidity lasting more than six months.

For all four institutions, 46% of gynecologic errors were no-harm events, 8% were near misses, and 45% carried harm. For nongynecologic errors, 55% were no-harm events, 5% were near misses, and 39% carried harm.
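
The reported distribution can be tabulated directly. This is a reader's aid only, restating the percentages quoted above (the totals fall just short of 100 because of rounding):

    # Severity distribution of errors across the four institutions,
    # restating the percentages quoted in the article (totals reflect rounding).
    severity_distribution = {
        "gynecologic":    {"no harm": 46, "near miss": 8, "harm": 45},
        "nongynecologic": {"no harm": 55, "near miss": 5, "harm": 39},
    }
    for specimen_type, shares in severity_distribution.items():
        print(specimen_type, shares, "total:", sum(shares.values()))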

What happens when pathologists disagree about how much harm their errors caused? “Assessment of harm is very difficult,” Dr. Raab replied. “It's true that there are differences of opinion about the meaning of harm and its severity, but it's also true that pathologists don't like to talk about it, so it's hard to get the discussions out in the open and evaluate the extent of harm.”

What the Results Showed

There were more errors for nongynecologic specimens than for gynecologic ones.

This is not surprising, Dr. Jones said. “A Pap test is a screening mechanism, and the error focus—if there is one—is on the side of sensitivity rather than specificity. A few false-positive Pap tests are not important because you're not diagnosing cancer. What you're saying is that there is something not quite right, something that should raise the level of evaluation to another Pap or a colposcopy or another way to look at the cells.”

Error frequency in the study, regardless of the type of specimen, depended on the institution, he added; for some, the error rate was more than 10%.

All institutions showed a relatively high number of errors in specimens from the urinary tract and lung. Most were attributed to cytology rather than surgical sampling or interpretation. “These are difficult areas from which to obtain samples,” Dr. Raab explained.

Dr. Jones agreed, but he said that differences do not necessarily represent a greater number of errors. “When you're dealing with two types of pathologic examinations—cytology and histology—you will always have more difference of opinion.”

In fact, this is true across the board, he added. “A difference of opinion is not the same as an error, and a single secondary review is not going to solve the problem of wrong diagnoses in cancer pathology.

“Instead of one other set of eyes, it would be better to ask a panel of five acknowledged experts to look at blinded slides. You'd still have conflict, but more minds would be at work, and that would reduce the error rate.”

He also said that “error” needs to be precisely defined, and there ought to be a more rigorous way to determine if an error has occurred.

Implications

Dr. Raab and his colleagues acknowledge that it is exceedingly difficult to measure the true frequency of errors in cancer diagnosis because of the variety of detection methods, bias, and the inability of institutions to secondarily review large case volumes.

Dr. Raab said that in future studies, analysis should be standardized, cases and data should be shared among institutions in order to reduce bias, and more and varied methods should be used to increase accuracy.

That may not be enough, though, Dr. Jones said. “There are a number of weaknesses in this study. It compares apples and oranges—mixed fruit piled into one basket.

“Cytology and histology slides were compared to one another when they should have been separated. Gynecologic and nongynecologic cases were lumped together. Too many types of tissue, from too many different organs, were included in one study. And probably most important, slides of premalignant tissue were compared with slides of cancer. They represent vastly different issues—in and of themselves, as well as the action taken as a result of their interpretation.”

No Standardization

The study authors noted that currently, there is no standardization of laboratory practices. Pathologists' experience, subspecialty practices, training programs, methods of preparing specimens, and quality assurance systems vary from institution to institution.

“We believe that differences in test ordering contribute to clinical sampling errors. For example, clinicians who bypass noninvasive cytologic diagnostic techniques for more invasive surgical techniques may have a higher rate of more accurate cancer diagnoses, but with higher costs, morbidity, and mortality,” Dr. Raab said.

In addition, the effect of diagnostic errors on patient outcomes is largely unknown, but think about the havoc wreaked by, for example, 150,000 Pap test mistakes each year (assuming 50 million annual tests).
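
The arithmetic behind that figure is worth making explicit. As a back-of-the-envelope check, the error rate implied by those two numbers (implied, not stated in the article) is 0.3%:

    # Back-of-the-envelope check of the Pap test figure quoted above.
    annual_pap_tests = 50_000_000      # assumption given in the text
    pap_errors_per_year = 150_000      # figure quoted in the text
    implied_error_rate = pap_errors_per_year / annual_pap_tests
    print(f"implied error rate: {implied_error_rate:.1%}")  # -> 0.3%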

Clinicians often don't know that a pathology error has occurred, and pathologists rarely learn the clinical outcome of their errors. If they do, they disagree about the extent of harm.

Even if pathology standards existed, how could institutions be made to adhere to them? “I don't know,” said Dr. Raab. “It would be a major challenge.

“I think government regulations would have to be instituted, with penalties for noncompliance, or maybe financial incentives for quality care as is happening in other areas of medical practice. But whatever you do, pathology practice is still a matter of individual competence and responsibility.”

Dr. Jones said that standardization isn't really the issue—at least not yet. “Before you develop a standard, you have to validate its value, and you need lots of documentation about performance outcomes. We have begun thinking about improving the way we do pathology, but we're a long way from perfection—even from development of imperfect standards.”

© 2005 Lippincott Williams & Wilkins, Inc.