Nakhleh, Raouf E. MD
Successful implementation of a quality assurance (QA) and improvement program largely depends on 3 core principles.1,2 The first is a commitment to continuous quality improvement. To appreciate the significance of this principle, one need only reflect on the changes that have occurred in motor vehicles, computers, and cell phones over the past 20 years. Whole industries have made the commitment to continuous quality improvement because anything less would have led to their demise, and in fact did lead to the demise of many competitors. Along the way, incredible innovation occurred, with dramatic improvement in the quality of products. The second is the adoption of a systems approach, which at its core admits that humans will make mistakes. Systems and processes must therefore be designed to prevent errors, while at the same time actively looking for errors and fixing them at the earliest possible point in the process. The systems approach mirrors patient safety ideals in its commitment to preventing errors and actively finding ways to detect them. The final piece needed for success is persistent effort. A high level of quality cannot be achieved by adhering to principles some of the time. Because it takes only a second for a disaster to happen, quality demands attention all the time.
To achieve quality, it must be defined. The term quality is vexing. At one extreme, it can describe an individual attribute in a nuanced manner, such as the feel of a material, the taste of food, or even the stirring of emotion. At the other extreme, it can describe a product that has been produced to precise specification. This is why quality must be defined in every situation where it is expected. In anatomic pathology, the examination and reporting of specimens is the main product of the laboratory; therefore, the quality of anatomic pathology depends on the quality of reports. Report quality can be measured with 3 attributes in mind: accuracy, completeness, and timeliness.3
The focus of this article is to outline the core components of a comprehensive QA plan. There are many details that cannot be included in an article of this size, but my hope is that this article provides the scaffolding on which a sound program can be built.
QUALITY ASSURANCE AND IMPROVEMENT PLAN
The existence of a QA program is mandated by the Laboratory Accreditation Program's standards ANP.10000 (“Is the quality management program defined and documented for surgical pathology?”) and GEN.13806 (“Does the laboratory have a documented quality management (QM) program?”).4 A written quality management plan should document all activities related to QA and improvement.1,2 This document is best reviewed and amended annually, with notation of progress and accomplishment of goals. Inherent in quality management is “keeping score.” This applies to the program as a whole as well as to individual monitors and tasks. A quality management program is best defined in a quality management plan that includes the following items:
1. A QA team with a clear charge: This is most often constituted as a committee with representation from all parts of the laboratory. One of the keys to improvement is the dissemination of information about performance. From that perspective, it is important to involve as many people as possible in quality assurance. This provides feedback to those involved in processes and helps them understand the consequences of actions as they relate to patient care. Moreover, the individuals most intimate with a process frequently have ideas for improving it. If people understand the QA measures and their implications, they are more likely to make an effort to improve the process.
2. An assessment of the risks to the laboratory: The literature provides ample information regarding the danger to patients from errors that occur in surgical pathology. An assessment of risk can be approached in many ways. Some choose to perform a formal evaluation such as a failure mode and effects analysis.5 Others choose to focus on institutional concerns and known problems within the laboratory. Evaluation of the literature concerning legal claims and settlements provides information on legal risk; it demonstrates that legal cases are largely (>90%) composed of errors in the examination and interpretation of specimen findings (ie, the analytic phase of the test cycle).6,7 Evaluation of the QA literature, however, demonstrates that analytic problems account for only approximately one quarter of the total errors that occur in pathology,8 with the remaining problems attributable equally to the preanalytic and postanalytic phases of the test cycle. An analysis of risk helps to focus resources to maximize the benefits of one's efforts.
3. A list of study monitors that address laboratory risk and regulatory requirements, with a timetable for evaluation and discussion of the results: Most departments will have regularly scheduled meetings with deadlines for reporting specific monitors assigned at each meeting. Typical monitors cover the entire test cycle (preanalytic, analytic, and postanalytic), turnaround times (TATs), and customer satisfaction. Although an error may manifest itself as an erroneous report, the source of the error may lie anywhere in the test cycle, from collection of the specimen to delivery of the report. This is the main reason to continuously monitor all aspects of the laboratory, including the preanalytic, analytic, and postanalytic phases of the test cycle. In addition, the time it takes to complete and report a case plays an important role in people's perceptions of quality. The inclusion of customer or clinician satisfaction is important because it gives the laboratory an opportunity to open a dialogue with physicians regarding their expectations of the laboratory, with the prospect of being fully recognized as a consultative service.9
4. Individual responsibility for the various monitors of the plan and/or a strategy for collection of the data: A serious effort to improve laboratory performance requires a commitment to finding the manpower to collect and analyze the data necessary to objectively evaluate the laboratory. Some institutions have a separately designated Quality Officer who is charged with the collection and maintenance of the QA data. Most laboratories divide this work among a number of individuals, including the histology laboratory supervisor, the cytology supervisor, secretarial staff, and pathologists. Various other individuals, including histotechnologists and cytotechnologists, could also be involved. Depending on the availability and capability of a computer system, much of the data collection can be automated.
5. Defined working relationships with other departmental and institutional QA and other committees: Frequently, anatomic pathology QA and improvement is part of a broader laboratory medicine and pathology program. Most institutions also require departments to report various quality measures to institutional QA and/or patient safety committees. Rather than viewing this as a mandate, it should be viewed as an interactive relationship between the laboratory and the institutions it serves. The laboratory has defined capabilities, and the institution has expectations of the laboratory's services. Within the formal relationship of reporting quality measures lies the opportunity to demonstrate that institutional concerns are met and to constantly explore new ways in which the laboratory can be viewed as vital to the institution.10
6. Defined procedures and reporting mechanisms for incident reports and near miss events: Most institutions have well-defined mechanisms for incident reports and near miss events. The laboratory leadership, particularly the QA committee, should be familiar with institutional policies and procedures for both.
7. Annual review of the plan: Ideally, this should be formalized in an annual meeting where progress and results are examined regarding goals of the previous year with delineation of goals for the forthcoming year. Within the annual review, performance or quality improvement initiatives should be included and their progress noted.
QUALITY ASSURANCE MONITORS
QA monitors should be organized within the framework of each laboratory from a divisional perspective (surgical pathology, cytology, and autopsy). The monitors, as much as possible, should also be organized to reflect a precise location and/or function (accessioning, histology, gross room, clerical, etc). Keeping these organizational categories provides an opportunity for each working unit to review data specific for that area. At the same time, the monitors should be organized into preanalytic, analytic, postanalytic, TAT, and customer satisfaction domains.1,11 Keep in mind that errors in one area (eg, accessioning; preanalytic) can result in errors in other areas (eg, erroneous report; postanalytic). Some monitors tend to be global (eg, amended reports) encompassing the errors from the entire test cycle.12 TAT is a global measure as is customer satisfaction; of course, these may be broken down to examine subsegments of the test cycle. For each monitor, a specification sheet should be produced and include the components listed in Table 1.
Most would agree that the preanalytic phase of the test cycle represents everything that occurs to the specimen until the point of accessioning. Some would also include slide and block labeling as preanalytic components. A number of monitors can be implemented within the preanalytic phase of the test cycle (Table 2). Preanalytic processes that could be targeted for monitoring include specimen identification, specimen adequacy, specimen transport, specimen integrity, appropriate fixation, appropriate containers, and adequate information provided on the requisition slip.
Specimen identification is the most important part of this phase of the test cycle. Unfortunately, it is largely a function beyond the direct control of the laboratory.13–15 Anatomic pathology specimens are typically labeled in procedure rooms or operating rooms, a function dependent on a great number of individuals who are not under the control of the laboratory. Studies have shown that up to 0.7% of specimens have identification problems.16 The Joint Commission's patient safety goals include a focus on patient identification, which incorporates specimen identification; under The Joint Commission's laboratory patient safety goals, specimen identification is specifically targeted.17
Pathologists and histotechnologists are unlikely to successfully improve specimen identification without the assistance of clinical departments. Fortunately, some clinical departments are recognizing this situation as part of their responsibility and are taking action to remedy it.18 Ideally, specimen identification becomes an institutional goal. Specimen identification problems should always be shared with the clinical department from which they came. It is also ideal for institutions to recognize specimen identification measures as performance measures of the various clinical areas rather than of the laboratory alone.
The analytic phase of the test cycle represents all of the steps in processing and evaluating the specimen in the laboratory until a diagnosis is rendered. This includes gross examination and dissection, sectioning, block processing, cutting, slide preparation, routine staining, special staining, immunohistochemistry, other special studies, microscopic examination, rendering of a diagnosis, and providing all necessary descriptive features (size, stage, etc). All of these steps can be monitored (Table 2). The single best measure of this system is diagnostic accuracy. It is, however, not easily obtained: to measure a diagnostic error rate reliably, fairly, and accurately, all cases would have to be reviewed. Instead, some programs review a limited sample of cases or use amended reports as a surrogate for diagnostic error. Many reports are amended for reasons other than diagnostic error, and some diagnostic errors arise from problems that do not reflect the pathologist's handling of the specimen (eg, if a specimen is interpreted with the wrong clinical history, an improper diagnosis may be rendered).8,12 Traditional error rates that are usually measured include frozen section-permanent section correlation, histology-cytology correlation, and amended reports, as discussed above.
In an effort to reduce errors, many surgical pathology and cytology departments mandate that selected cases be reviewed by a second pathologist before the case is signed out. A QA monitor could test adherence to this policy and could represent an alternative measure of the analytic phase of the test cycle.19 Tracking the percentage of cases reviewed by a second pathologist before sign-out also serves as reinforcement of the policy. A recent Q-Probes study showed an aggregate review rate of 8.2% of cases. An appropriate target could be review of 5% to 15% of signed-out cases, depending on the case mix seen at the institution and the extent of the list of cases mandated for review.
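As a minimal sketch of how such a monitor might be tracked, the short script below computes the pre-sign-out review rate and compares it with the suggested 5% to 15% target band. The case structure and field names are hypothetical placeholders, not drawn from any particular laboratory information system.

```python
from dataclasses import dataclass

@dataclass
class Case:
    # Hypothetical fields; a real laboratory information system
    # would supply these from its own schema.
    accession: str
    reviewed_before_signout: bool

def review_rate(cases):
    """Percentage of signed-out cases reviewed by a second pathologist."""
    if not cases:
        return 0.0
    reviewed = sum(1 for c in cases if c.reviewed_before_signout)
    return 100.0 * reviewed / len(cases)

def within_target(rate, low=5.0, high=15.0):
    """Compare the observed rate with the suggested 5% to 15% target band."""
    return low <= rate <= high
```

Run monthly against a sign-out log, a result outside the band would prompt review of the mandated-review list rather than automatic corrective action, since the appropriate rate depends on case mix.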
There are many measures that can and should be implemented in the gross room and histology.1 The most important is intralaboratory specimen identification: monitoring the rate of misidentified or mislabeled cases, specimens, blocks, and slides.20 This is as important as patient identification. Mislabeling and misidentification errors can also occur at the microscope and during transcription. Needless to say, all identification errors should be monitored regardless of where they occur. Some consider all identification errors preanalytic, but they can clearly occur at any point in the test cycle, with equally deleterious consequences for the patient.
External quality assessment (EQA) or proficiency testing is an element of monitoring of the analytic phase of the test cycle. EQA did not exist in the United States until the introduction of cytology proficiency testing and human epidermal growth factor receptor 2 (HER2) proficiency testing.21,22 Documentation of EQA should be part of QA monitors. The United Kingdom has a long history of robust EQA programs that are used for education and improvement.23 The extent to which EQA will be used in the United States is unknown, but it is safe to say that its use is likely to increase over time, particularly if other prognostic and therapeutic markers similar to HER2 are found.
Two aspects of the postanalytic phase of the test cycle have been repeatedly emphasized in the past few years and deserve attention:17,24,25 (1) report completeness for cancer resections and (2) critical diagnoses. Other possible monitors are listed in Table 2.
Evidence-based medicine is the current state of the art, particularly in oncology. It depends greatly on staging information provided in surgical pathology reports. Over the past 10 to 20 years, consensus agreement has been achieved regarding scientifically valid parameters that should be included in reports of cancer resections for most cancers. The Commission on Cancer of the American College of Surgeons mandates that designated cancer centers have all the required elements in at least 90% of reports. The Laboratory Accreditation Program recently added a standard that cancer reports be audited for completeness as part of a laboratory's QA program. The initial measures of the Physician Quality Reporting Initiative also involve adequacy of reporting in breast and colon cancer.26
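A completeness audit of this kind reduces to a checklist comparison, which could be sketched as follows. The required elements named below are illustrative placeholders only; the actual checklist comes from the applicable cancer protocol for the organ site being audited.

```python
# Illustrative required elements; the real checklist is defined by the
# applicable cancer protocol, not by this sketch.
REQUIRED_ELEMENTS = {"tumor_size", "histologic_type", "grade",
                     "margin_status", "lymph_node_status", "stage"}

def is_complete(report_elements):
    """A report is complete if every required element is present."""
    return REQUIRED_ELEMENTS <= set(report_elements)

def completeness_rate(reports):
    """Percentage of audited reports containing all required elements."""
    if not reports:
        return 0.0
    return 100.0 * sum(is_complete(r) for r in reports) / len(reports)
```

The resulting rate can then be compared with the 90% threshold described above; reports failing the checklist can be listed individually so the missing elements can be identified and amended.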
Critical or unexpected findings are the other prominent issue in the postanalytic phase of the test cycle. There have been many recent calls to establish policies addressing critical diagnoses in anatomic pathology.27 There have also been recommendations that when a clinician is notified of a critical diagnosis, the notification should be documented in the body of the diagnostic report. One recommended audit is to examine the rate at which notification of a critical diagnosis is documented correctly. Others have advocated auditing the time it takes to reach the primary caregiver with critical results.
Many measures of TAT are mandated by the Laboratory Accreditation Program and should be checked for a period of time at least annually.1 Among these are frozen section TAT (90% of cases within 20 min), surgical specimen TAT (95% of cases within 2 working days), nongynecologic cytology specimen TAT (90% of cases within 2 working days), autopsy preliminary diagnosis (2 working days), and autopsy final diagnosis (30 working days). A list of TAT monitors is shown in Table 3.
The TAT of most monitors typically covers the time from when the specimen is delivered to pathology to the moment the report is released. In evaluating TAT, keep in mind that physicians and others instinctively define TAT from the point the specimen is obtained until the point at which they read the report. These 2 TATs can differ dramatically, and the difference may need to be addressed in some situations.
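The mandated thresholds above can be checked mechanically once per-case TATs are available. The sketch below assumes TATs have already been computed in minutes from whichever start point the laboratory has chosen (delivery to pathology versus specimen collection), and uses the frozen section target (90% of cases within 20 min) as the example.

```python
def pct_within(tats, threshold):
    """Percent of cases with a turnaround time at or under the threshold.

    tats: list of per-case TATs, already expressed in minutes.
    """
    if not tats:
        return 0.0
    return 100.0 * sum(t <= threshold for t in tats) / len(tats)

def meets_target(tats, threshold, required_pct=90.0):
    """True if the required percentage of cases meets the threshold,
    eg, 90% of frozen sections within 20 minutes."""
    return pct_within(tats, threshold) >= required_pct
```

Because the laboratory-defined and clinician-perceived TATs use different start times, the same functions can be run twice on the same cases, once per start point, to quantify how far the two views diverge.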
Many industries define quality exclusively on the basis of customer satisfaction. Customer or clinician satisfaction surveys are relatively new concepts in pathology but are rapidly gaining acceptance and importance.9,10 Issues that could be addressed in a clinician survey for pathology are listed in Table 3. Although many shy away from surveys for fear of what they may bring out, in the long run, surveys followed by genuine efforts to remedy problems usually engender loyalty. Surveys can sometimes be the only opportunity for communication with some clinicians. This should be exploited to find out physicians' expectations of the laboratory. Some expectations may be unreasonable and should be followed with educational efforts to explain appropriate expectations. By opening this avenue of communication, with follow-up, clinicians are more likely to view the laboratory as a source of solutions rather than a hindrance.
USE OF QUALITY ASSURANCE DATA FOR ASSESSMENT OF INDIVIDUALS
The use of QA data to evaluate individual performance seems to be a common practice, although very little is written on this issue. The Joint Commission requires that hospitals evaluate all physicians every 2 years, using evidence-based validation of a physician's knowledge, skills, ability, and behavior, to maintain hospital privileges.28 QA data, if properly collected, tend to be objective and include peer review. It is advisable to use multiple measures to assess an individual's performance. Typical QA data that may be used include TAT for various services; measures of diagnostic accuracy, such as amended report rates and case review error rates; measures of productivity and involvement in departmental activities; customer satisfaction or complaints; and adherence to policy and professionalism.
The convenience of using QA data is its availability on a continuous basis, with the ability for peer comparison. When evaluating individuals, there is a need to evaluate performance relative to peers. The advantage of using multiple measures is to temper the judgment of individuals who may be a statistical outlier on any particular measure or who may be struggling with a small aspect of their work that would be overrepresented if only 1 or 2 measures were used.
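One way to implement this tempering is to flag an individual only when several measures are extreme relative to the peer group. The z-score convention and cutoffs below are one hypothetical approach, not an established standard; note that with a small peer group of n individuals, population z-scores are bounded by the square root of n - 1, so the cutoff must be modest.

```python
import statistics

def z_scores(values):
    """z-score of each individual's value relative to the peer group.

    values: dict mapping individual name -> measured value.
    """
    mean = statistics.mean(values.values())
    sd = statistics.pstdev(values.values())
    if sd == 0:
        # All peers identical: no one is an outlier.
        return {name: 0.0 for name in values}
    return {name: (v - mean) / sd for name, v in values.items()}

def flag_outliers(measures, cutoff=1.5, min_measures=2):
    """Flag only individuals who are outliers on at least min_measures
    measures, so a single extreme value does not dominate the assessment.

    measures: dict mapping measure name -> {individual: value}.
    """
    counts = {}
    for values in measures.values():
        for name, z in z_scores(values).items():
            if abs(z) >= cutoff:
                counts[name] = counts.get(name, 0) + 1
    return sorted(name for name, n in counts.items() if n >= min_measures)
```

In this sketch, a pathologist with one anomalous amended report rate but unremarkable TAT and review measures would not be flagged, which mirrors the multi-measure principle described above.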
One important practice measure that is unrelated to error is adherence to policy, particularly in departments that mandate secondary review on selected types of specimens. This type of measure encourages behavior that hopefully leads to error prevention.
1. Nakhleh RE, Fitzgibbons PL. Quality Management in Anatomic Pathology: Promoting Patient Safety Through Systems Improvement and Error Reduction. Northfield, IL: The College of American Pathologists; 2005:5–8.
2. Valenstein P. Quality Management in Clinical Laboratories: Promoting Patient Safety Through Risk Reduction and Continuous Improvement. Northfield, IL: The College of American Pathologists; 2005:167–198.
3. Nakhleh RE. What is quality in surgical pathology? J Clin Pathol. 2006;59:669–672.
5. Krouwer JS. An improved failure mode effects analysis for hospitals. Arch Pathol Lab Med. 2004;128:663–667.
6. Kornstein MJ, Byrne SP. The medicolegal aspect of error in pathology. A search of jury verdicts and settlements. Arch Pathol Lab Med. 2007;131:615–618.
7. Troxel DB. Medicolegal aspects of error in pathology. Arch Pathol Lab Med. 2006;130:617–619.
8. Meier FA, Zarbo RJ, Varney RC, et al. Amended reports: development and validation of a taxonomy of defects. Am J Clin Pathol. 2008;130:238–246.
9. Nakhleh RE, Sours R, Ruby SG. Physician satisfaction with surgical pathology reports: a 2-year College of American Pathologists Q-Tracks study. Arch Pathol Lab Med. 2008;132:1719–1722.
10. Zarbo RJ. Determining customer satisfaction in anatomic pathology. Arch Pathol Lab Med. 2006;130:645–649.
11. Association of Directors of Anatomic and Surgical Pathology. Recommendations for quality assurance and improvement in surgical and autopsy pathology. Hum Pathol. 2006;37:985–988; Am J Clin Pathol. 2006;126:337–340; Am J Surg Pathol. 2006;30:1469–1471.
12. Nakhleh RE, Zarbo RJ. Amended reports in surgical pathology and implications for diagnostic error detection and avoidance: a College of American Pathologists Q-Probes study of 1,667,547 accessioned cases in 359 laboratories. Arch Pathol Lab Med. 1998;122:303–309.
13. Nakhleh RE. Lost, mislabeled and unsuitable surgical pathology specimens. Pathology Case Reviews. 2003;8:98–102.
14. Slavin L, Best MA, Aron DC. Gone but not forgotten: the search for the lost surgical specimens: application of quality improvement techniques in reducing medical errors. Qual Manag Health Care. 2001;10:45–53.
15. Simpson JB. A unique approach for reducing specimen labeling errors: combining marketing techniques with performance improvement. Clin Leadersh Manag Rev. 2001;15:401–405.
16. Nakhleh RE, Zarbo RJ. Surgical pathology specimen identification and accessioning: a College of American Pathologists Q-Probes study of 1,004,115 cases from 417 institutions. Arch Pathol Lab Med. 1996;120:227–233.
18. Makary MA, Epstein J, Pronovost PJ, et al. Surgical specimen identification errors: a new measure of quality in surgical care. Surgery. 2007;141:450–455.
19. Nakhleh RE, Bekeris L, Souers R, et al. Surgical pathology case reviews before sign-out: a College of American Pathologists Q-Probes study of 45 laboratories. Arch Pathol Lab Med. (in press).
20. Zarbo RJ, Tuthill JM, D'Angelo R, et al. The Henry Ford Production System: reduction of surgical pathology in-process misidentification defects by bar code-specified work process standardization. Am J Clin Pathol. 2009;131:468–477.
22. Wolff AC, Hammond EH, Schwartz JN, et al. American Society of Clinical Oncology/College of American Pathologists guideline recommendations for human epidermal growth factor receptor 2 testing in breast cancer. Arch Pathol Lab Med. 2007;131:18–43.
24. Srigley JR, McGowan T, Maclean A, et al. Standardized synoptic cancer pathology reporting: a population-based approach. J Surg Oncol. 2009;99:517–524.
25. Association of Directors of Anatomic and Surgical Pathology. Critical diagnosis (critical values) in anatomic pathology. Am J Surg Pathol. 2006;30:897–899.
26. Elston DM. The physician quality reporting initiative. Clin Lab Med. 2008;28:351–357.
27. Nakhleh RE, Souers R, Brown RW. Significant and unexpected and critical diagnoses in anatomic pathology: a College of American Pathologists survey of 1130 laboratories. Arch Pathol Lab Med. (in press).
Definition of Terms
Quality Control
Measuring attributes of a small step in a process to assure proper functioning. In anatomic pathology, typical quality control steps include recording the temperature of a water bath or the pH of a buffer.
Quality Assurance
Measuring an attribute that represents several steps in a process. Examples include mislabeled specimen rates, amended report rates, turnaround time, and customer satisfaction scores.
Quality Improvement
The use of QA data to target improvement in measures. This typically takes the form of a “Plan-Do-Study-Act” cycle that is repeated with various interventions until the desired outcome is achieved.
Continuous Quality Improvement
A principle that is inherent in successful management styles. This begins with dissatisfaction with the status quo, regardless of the level of performance. Organizations such as Toyota and Intel build into their culture the expectation to continuously improve their products and processes. These principles with adequate effort and tools drive quality and innovation. As a result, these organizations become leaders in their fields.
Total Quality Management
A system of management with the goal of embedding quality principles into organizational processes and philosophies.
Patient Safety
Patient safety may be defined as “freedom from accidental injury,” but Joint Commission requirements have expanded this definition to “ensuring patient safety involves the establishment of operational systems and processes that minimize the likelihood of errors and maximize the likelihood of intercepting them when they occur.”
Benchmarking
The process of comparing a performance level with the performance levels of an established comparison group. The performance value derived for the comparison group is called a benchmark.
Adverse Event
An error that results in patient harm.
Near Miss Event
A serious error that could have caused patient harm but was discovered before any consequence to the patient had occurred.
© 2009 Lippincott Williams & Wilkins, Inc.