Advances in Anatomic Pathology: September 2012 - Volume 19 - Issue 5
doi: 10.1097/PAP.0b013e31826661b7
Review Articles

Directed Peer Review in Surgical Pathology

Smith, Maxwell L. MD*; Raab, Stephen S. MD†,‡


Author Information

*Department of Laboratory Medicine and Pathology, Mayo Clinic, Scottsdale, AZ

†Department of Pathology, University of Washington, Seattle, WA

‡Department of Laboratory Medicine, Memorial University of Newfoundland/Eastern Health Authority, St John’s, NF, Canada

The authors have no funding or conflicts of interest to disclose.

Reprints: Maxwell L. Smith, MD, Department of Laboratory Medicine and Pathology, Mayo Clinic Arizona, Scottsdale, AZ 85259 (e-mail:



Second pathologist peer review is used in many surgical pathology laboratory quality-assurance programs to detect error. Directed peer review is 1 method of second review and involves the selection of specific case types, such as cases from a particular site of anatomic origin. Directed peer review detects errors in both diagnostic accuracy and precision, and this detection may be used to improve practice. We utilize the Lean quality improvement A3 method of problem solving to investigate these issues. The A3 method defines surgical pathology diagnostic error, describes the current state in surgical pathology, performs root cause analysis, hypothesizes an ideal state, and provides opportunities for improvement in error reduction. Published data indicate that directed peer review practices may be used to prevent active cognitive errors that lead to patient harm. Pathologists also may use directed peer review data to target latent factors that contribute to error and improve diagnostic precision.

Directed peer review in surgical pathology is a method by which a second pathologist reviews a specimen (already signed-out or examined by a first pathologist) based on preidentified parameters, such as anatomic site of origin, level of experience of first pathologist, or first diagnosis of a specific disease, such as breast cancer. Directed peer review is a method of quality improvement and/or quality assurance. As the process of diagnostic decision-making following morphologic interpretation of a slide is so complex,1 a form of peer review is included in most anatomic pathology practices as a mechanism to detect interpretive/diagnostic error. Through implementation of quality-improvement initiatives, some surgical pathology practices may use directed peer review data to improve patient safety.

Other forms of peer review include random review examining a specified percentage of all cases, clinical-pathologic conference review, and consensus conference review. Practices may use different types of peer review practice to detect different types of error and/or to improve patient safety by altering different practice work components.

The Institute of Medicine (IOM) defined 6 domains of quality: timeliness, safety, efficiency, effectiveness, patient centeredness, and equity.2 Although the domains are interrelated, the topic of peer review in surgical pathology focuses primarily on the domains of safety and effectiveness while potentially affecting domains of efficiency and timeliness. Peer review is performed to detect and/or prevent inherent failures in the almost universally practiced system of single pathologist diagnostic sign-out.



Findings from various forms of peer review have been documented in numerous publications.3

The focus of our investigation is the determination of the benefits of surgical pathology-directed peer review practice. Directed peer review simply is a method of error detection, and the benefits of this practice depend on the specific methods and goals of implementation and the needs of the specific practice. We will start by examining surgical pathology practice patient safety issues, which may be addressed through the implementation of directed peer review methods. As the implementation of any quality assurance/improvement initiative depends on the overall environment and culture of patient safety, we will compare and contrast directed peer review methods with other methods of secondary review and other quality improvement/assurance practices.

Our assessment does not require a detailed knowledge of Lean methods, although we recommend The Toyota Way by Jeffrey Liker as an introductory text for those interested. As directed peer review primarily involves ≥2 pathologists, our focus is on the work that these pathologists perform.4 In Lean terms, pathologists have activities (eg, slide examination, making a diagnosis, constructing a report), pathways (the sequential flow of the specimen), and connections (the individuals with whom the pathologist communicates, such as histotechnologists and clinicians). In the Lean system, failures occur as a result of flawed activities, pathways, or connections, and quality improvement initiatives are focused on fixing these flaws.

We will use the Lean-based A3 problem-solving method (Fig. 1) to define the problem, perform a root cause analysis, hypothesize an ideal surgical pathology practice, and provide an improvement plan to reach this ideal state.5,6 This approach evaluates a single surgical pathology practice or a group of surgical pathology practices from the system point of view and recognizes that failures arise from individuals working in systems that are prone to a level of failure we may only partially know.

Figure 1

In defining the problem, we will narrow our focus on diagnostic errors in surgical pathology practice. In the root cause analysis, we will examine the underlying system factors that contribute to diagnostic error. We will propose an ideal state from the safety perspective of a patient who is undergoing a procedure that produces a surgical pathology specimen. Directed peer review practice will be considered a part of the improvement plan, and we will discuss the components (pathologist activities, pathways, and connections) that are involved and affected in the implementation process.



Defining the Problem

The problem is that the diagnostic/interpretive process is flawed, resulting in medical errors that affect patients.

Definition of Interpretive/Diagnostic Error

The pathology community has long separated error into the categories of an error in accuracy and an error in precision. Diagnostic accuracy refers to how well the diagnosis represents the true pathologic process within the patient. Diagnostic precision refers to what extent pathologists agree on the interpretation of a case.

The IOM defined a medical error as the failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim.2 A surgical pathology example of an IOM failure of a planned action is a pathologist misinterpretation of the pattern of injury on a liver biopsy and the subsequent rendering of a diagnosis that does not represent the true pathologic process within the patient.

Any major diagnostic disagreement (eg, a benign/malignant discrepancy) between 2 pathologists is generally perceived as an error in accuracy based on the belief that there is an underlying truth and 1 pathologist is correct and the other pathologist is incorrect in assessing this truth. It is human nature to categorize these discrepant diagnoses as right and wrong, which unfortunately has ego implications, resulting in feelings of shame if a pathologist views his or her diagnosis as wrong. The disagreement by itself is also an example of an error in precision, as it shows the lack of reproducibility between 2 pathologists.

In daily practice, it is difficult to compare our diagnostic interpretation to a gold standard truth, as the long-term follow-up data and outcomes are not readily available and there are many confounding issues such as subsequent biopsies, comorbid conditions, and slow progression of many pathologic disease states.

In the pathology community, if the opinions of the 2 pathologists differ, the more senior, experienced, or presumed expert pathologist opinion is typically offered as correct, or by extension, more accurately representing the pathologic process within the patient.7 The result is a flawed gold standard that cannot always represent the true pathologic process.

Examples of errors of precision and errors in accuracy are shown in Table 1.

Table 1
Measures of Precision in Surgical Pathology

One of the best statistical measures of interpathologist and intrapathologist agreement is the Cohen κ coefficient, which measures agreement beyond that of chance alone.8 Published studies evaluating reproducibility often examine a single specimen type and involve a number of pathologists (eg, 6 pathologists interpreting 50 colonic polyps). Landis and Koch9 characterized κ values of <0 as no agreement, 0 to 0.20 as slight agreement, 0.21 to 0.40 as fair agreement, 0.41 to 0.60 as moderate agreement, 0.61 to 0.80 as substantial agreement, and 0.81 to 1 as almost perfect agreement. In surgical pathology practice, the levels of agreement are generally fair or moderate, depending on case selection methods and other study parameters such as level of pathologist training or organ system.

Frequency of Error in the Interpretive Process

Understandably, based on variations in the type of review, error detection method, and definition of error, the published rates of diagnostic/interpretive error vary significantly.10–12 In 1 study in which all 5397 cases from a year were reviewed by 2 pathologists, the error frequency was found to be 0.26%.13 Raab et al,14 using self-reporting data in an international Q-probes study of 74 laboratories found an average error frequency of 6.8% in surgical pathology specimens.

Selection of Cases

The studies that have compared random versus directed or focused review have been fairly conclusive that directed review methodology for case selection increases the error detection rate.10 One study of a large subspecialty academic practice showed that random and directed peer review methods detected an error frequency of 2.6% and 13.2%, respectively. Intuitively, it is not surprising that preselecting secondary review cases that are from known challenging diagnostic specimen types would increase the frequency of error detection, assuming more diagnostically complex cases will generate more diversity of opinions. Some practices develop their own algorithms for directed case selection on the basis of areas of subspecialty diagnostic expertise and case mix. Other practices have targeted case review based on data from malpractice claims for specimen types that are associated with a higher frequency and/or severity of error.15

The timing of the review is also a component of case selection. In general, random review practices tend to be retrospective in nature, although some practices perform prospective random review. A benefit of retrospective review is the ability to review a higher number of cases without significantly disrupting daily practice workflow and specimen sign-out turn-around-time (TAT). Reviewing more cases has the potential to uncover more errors and thus improvement opportunities for individual pathologists. Unfortunately, this process, unless acted upon, does little for patient safety as treatment decisions may have already been made before the review.

Prospective review is performed before the case is verified and the diagnosis is made available to clinicians. The obvious benefit is the potential to catch errors before they reach the patient with the potential to cause harm. Errors caught at this stage may be better classified as near-miss events. Because the goal of secondary review is to improve quality and patient safety, prospective review would be the goal if feasible. The challenge in prospective review is incorporating the process into the workflow to avoid delays and an increased TAT. Owens et al16 implemented a prospective information technology tool for random selection of difficult cases for secondary review and found a slight increase in median TAT.

Costs Associated With Review

Financial costs for peer review of cases have been estimated at approximately $11.50 (adjusted to present-day dollars, originally $7) per case reviewed.13

Consequences of Diagnostic Error

The consequences of errors detected by secondary review processes have been classified using different scales. Some pathologists have classified errors detected by peer review by the potential level of patient harm associated with a change in diagnosis (eg, a change in diagnosis from benign to malignant is classified as major and a change in diagnosis from benign to atypical as minor). In a review of 41 manuscripts reporting discrepancies based on peer review of cases sent to a second institution, Raab and Grzybicki reported that the overall discrepant case frequency was 11.4% with a major discrepant case frequency of 4.7%. In a single institutional study examining the directed peer review process of more challenging case types, Raab and colleagues reported an overall discrepancy frequency of 13.2% with a major discrepancy frequency of 3.2%.10 Medical chart review showed that 58.3% of major disagreements were associated with a degree of harm, most often classified as minimal. Directed case review processes were more likely to detect diagnostic errors that were associated with more severe harm.

Summary of the Problem

Diagnostic errors occur in the practice of surgical pathology, and errors detected by secondary peer review processes occur at a relatively high frequency. Errors in diagnostic accuracy occasionally cause severe patient harm. Studies specifically focused on diagnostic precision generally show only fair to moderate agreement, indicating that for some specimen types, pathologist diagnoses are not highly reproducible.

Error Root Cause Analysis

Many steps are involved in producing a surgical pathology diagnosis and a diagnostic error may be caused by failures in test choice, test procurement, processing, interpreting, and reporting. A diagnostic error is more likely caused by a failure in one of the noninterpretation steps than a failure in the interpretation process. However, as we are examining the effectiveness of focused review processes, we will narrow our root cause analysis to errors in interpretation, recognizing that factors in other testing phases (eg, poor sampling of a lesion) may contribute to an interpretation error.17

Active Versus Latent Errors

Reason18 coined the terms active errors and latent factors to describe individual failures in an error-prone system. Active errors are individual mistakes, slips, fumbles, and lapses. Active errors may be more prone to occur in a system with a high level of latent factors inherent in the workflow process. Latent factors include overwork, inexperience, unrealistic expectations, poor training/education, and suboptimal space design.18,19 Patient safety experts use the Swiss cheese model to describe sentinel events, which occur when multiple active errors and latent factors align at the right time to produce the “perfect storm” for patient harm. In errors detected by directed peer review, we still tend to focus on the individual and active components of error rather than on the latent factors. This may relate to the cultural tendency to blame a pathologist and his or her failed cognitive process instead of investigating the workflow, space design, absence of expert availability, and suboptimal apprenticeship-style training programs.20

Complexity Inherent in the Cognitive Process of Diagnosis

The active component of an interpretation error is often viewed as a failure in cognition. Kahneman described 2 types of cognitive thinking, fast and slow. Fast, or system 1 thinking, happens quickly, automatically, and with minimal to no effort. Conversely, slow, or system 2 thinking, involves considerable effort and requires activation and concentration. When pathologists see certain patterns (polypoid architecture and hyperchromasia on low power magnification), in certain circumstances (colonic polyp biopsy), fast thinking immediately, seemingly involuntarily, produces a strong diagnostic impression (tubular adenoma). This experience is in stark contrast to the first-year resident who evaluates a colonic biopsy for the first time. In this scenario, the resident likely concentrates very intently on the histologic features of every aspect of the biopsy, including lesional and nonlesional components. An inordinate amount of time often is spent at high magnification generating detailed descriptions of the pathology. The resident, using slow thinking, cognitively asks questions such as: what is normal versus abnormal, what is the most likely diagnosis for a colonic polyp, is this polyp similar to ones seen before, what additional steps must be performed to arrive at a diagnosis, and does this case match any of the pictures or descriptions in a textbook? This process is laborious, tiresome, and time intensive, and not unlike the process more experienced pathologists use when they come across a case where fast thinking does not quickly produce a diagnosis. Fast thinking, in general, operates automatically, does not require a lot of attention, and is relatively free from distraction. Conversely, slow thinking requires conscious activation and is easily derailed by distractions.

The automatic nature of fast thinking is more prone to specific types of bias than slow thinking (Fig. 2). While glancing at the 2 images, fast thinking quickly uses adjacent contextual clues (letters or numbers) to interpret the mildly ambiguous character in the middle, which is identical in both series.21 Considering the significant influence the clearer letters and numbers had on your interpretation of the “B” or “13,” imagine the degree of influence (or bias) that is present in aspects of the interpretive process in surgical pathology. Factors that produce bias include clinical history, submitting clinician characteristics, influence of the submitting department, and prior specimens seen that day or in the recent past. The cognitive diagnostic process is complex and prone to other types of bias, even when slow thinking is induced.

Figure 2

Trainees learn to make surgical pathology diagnoses through slow thinking processes by first examining for the presence of morphologic criteria and then combining these criteria to recognize patterns of disease. Trainees then develop heuristics, or cognitive shortcuts, which allow these individuals to quickly match morphologic patterns with diagnoses. The process of immediate pattern recognition is representative of fast thinking. An example of a cognitive heuristic is seeing a psammoma body in a neck lymph node and immediately thinking that the patient may have a metastatic papillary carcinoma of thyroid gland origin. Often this assessment is the correct one; occasionally, however (as a result of anchoring bias), it is the incorrect assessment.

Diagnostic error has several causes. One cause is cognitive bias, which is a pattern of deviation in judgment.22,23 Some cognitive biases result from a failed heuristic. Other examples of surgical pathology cognitive bias, which may result in an error in accuracy or precision, are shown in Table 2.

Table 2

Errors in precision also may be secondary to the failure of pathologists to reach consensus on morphologic criteria. A common example in surgical pathology is the grading of squamous dysplasia and other diseases in which categories of progressive risk have overlapping morphologic criteria. As mentioned earlier, we often use patterns of criteria when we create heuristics to rapidly reach a diagnosis, an example of system 1 thinking. Biases may result in our misapplying agreed-upon criteria to specific cases. Or, more importantly, pathologists may anchor on different criteria because they fundamentally do not agree on the criteria definitive for that disease. Thus, even though the pathologists concur that specific criteria are present, they weigh these criteria differently in disease categorization (Table 1). In practice, the classification of these diseases often lacks precision, and reproducibility studies show lower interpathologist κ values for specific lesion types.

Although cognitive bias may result in an active error, the cognitive failure is usually associated with latent factors contributing to the event. These latent factors include defective educational and proficiency assessment systems, cultural elements (eg, a physician culture of needing to be always right in judgment), and system features or noise that lead to less than optimal cognitive performance (eg, fatigue, hectic work environments, stress, etc.).

Summary of Root Cause Analysis

Pathologist cognitive failures are only 1 cause of errors in diagnostic interpretation, and other causes occur in testing phases outside of the pathologist interpretative steps. Two causes of cognitive error are bias and the lack of interpathologist consensus on diagnostic criteria.

Ideal Surgical Pathology Practice

From the patient perspective, in the truly ideal surgical pathology practice, error does not exist: pathologists agree on the diagnosis, and the diagnosis represents exactly the disease process in the patient. In a practice more consistent with the real world, latent factors contributing to error are eliminated, and active failures are mitigated by initiatives that reduce contributing factors, such as work-related noise, that lead to bias. Interpretive errors generated during the diagnostic process are caught before they reach the patient and result in harm. In addition, error events are used as improvement opportunities with real-time root cause analysis, including an assessment of both active and latent factors and directed learning opportunities if appropriate.

Quality Improvement Implementation Possibilities
Secondary Pathologist Review Practices

As mentioned earlier, some pathologist groups use secondary review practice as a method to detect error (post hoc). Pathologists have published results from these practices to establish baseline error frequencies. Surgical pathology groups could implement directed peer review practices to determine their comparative level of diagnostic precision and accuracy. Some authors have suggested that peer review is not practical for determining the error frequency of any individual practice because the number of cases required for adequate appraisal is too high.24
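The sample-size objection can be made concrete with the standard binomial estimate (a back-of-the-envelope sketch, not a formula from the cited study): the number of reviewed cases needed to estimate an error frequency p to within a margin d at 95% confidence is roughly n = 1.96² · p(1 − p)/d².

```python
import math

def cases_needed(p, margin, z=1.96):
    """Reviewed cases needed to estimate an error frequency p to
    within +/- margin at 95% confidence (normal approximation)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Pinning down a ~5% error frequency to +/- 1 percentage point
print(cases_needed(0.05, 0.01))  # 1825
```

Reviewing on the order of 1800 cases of a single specimen type before a per-practice error frequency becomes reliable illustrates why such an appraisal is rarely feasible.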

Many pathology groups use secondary peer review practices as a method of prospective error detection, although less data are published in this realm. Of course, prospective review decreases the frequency of errors that cause harm.

Most pathology groups implement methods of peer review in already well-established quality control systems (eg, clinical-pathologic conference review for cases discussed at a tumor board). As peer review practices entail different levels of financial and personnel resources, individual practices have to weigh the costs and benefits of directed peer review practices.

Cultural Changes in Surgical Pathology Practice

Directed peer review may improve interpathologist reproducibility. As previously discussed, one of the major challenges to error reduction is the lack of precision in the diagnostic process. Stated differently, the lack of standardization in diagnostic findings and the lack of standardization in the interpretation of these findings seriously limit the potential for improvement. One of the difficulties in achieving standardization is that, following residency or fellowship, the interpretation of slides is a relatively lonely activity performed in isolation in an office. Because of the nature of the work, following training, there is little opportunity for comparison between pathologists in how they approach cases or use diagnostic criteria to arrive at a diagnosis. One of the features of Lean is the use of teamwork when solving problems as opposed to isolated problem solving. The culture of anatomic pathology has a significant component of case ownership and a willingness to allow individual pathologists to evaluate and work up their “own” cases as they see fit. One of the main benefits of prospective-directed peer review is that it provides a framework for a team-based approach to case interpretation and a focus on precision and standardization of diagnostic criteria.25

Despite increased recognition of the need for patient safety initiatives over the past few decades, many practice cultures continue to create a negative environment with respect to errors. Instead of using errors as improvement opportunities, errors are blamed on people as opposed to processes and may be used for leverage and coercion. Furthermore, disruptive physician behavior may be tolerated or even condoned, creating a hostile environment where errors are not easily discussed or improved upon.26 These cultural issues must be addressed before any significant progress will be made in error rate reduction. The peer review process, performed in a blame-free environment with a focus on both active and latent factors contributing to error, may assist in moving the specialty to a safer culture.

Standardization of Diagnoses and Terminology

Many pathology groups that employ a general sign-out practice with individual subspecialty expertise already utilize directed peer review methods. In these practices, the expert pathologists often are provided a variety of case types to review and the other pathologists use the expert’s diagnostic criteria, final diagnoses, and terminology for standardization.

Practices may also utilize directed peer review in a manner that more explicitly detects interpathologist diagnostic variability to reach consensus on diagnoses and terminology. Raab and colleagues27,28 explored the use of directed peer review to improve the precision of the pathology diagnosis in a thyroid gland fine needle aspiration service, and similar efforts could be utilized in surgical pathology practices.

Directed peer review practices may be utilized to determine baseline levels of diagnostic precision and standardization by anatomic site subspecialty. In retrospective processes, relatively few cases need to be reviewed to determine baseline levels. Discussion of methods to reach consensus is beyond the scope of this manuscript, although various articles have shown improvement in surgical pathology consensus through teamwork-based strategies.29,30

Recognition of Bias in the Sign-out Process

The surgical pathology field is beginning to understand the biases in diagnostic interpretation. Researchers recommend a method known as reference class forecasting as a means to reduce bias. In this method, cases of specific subsets are assembled to determine features definitive for the diagnosis (which happens already in examining large series of diseases of specific types) and sources of bias.

A challenge lies in the educational system of teaching not just criteria and patterns, but also the biases that lead to cognitive failure. Gaba31 describes simulation as a technique to replace or amplify real experiences with guided experiences that evoke or replicate aspects of the real world in a fully interactive manner. In the medical education setting, simulation-based medical education is an educational/training method that allows computer-based or hands-on practice and evaluation of clinical, behavioral, or cognitive skill performance without exposing patients to the associated risks of clinical interactions. The lack of patient exposure to risk is the most important advantage of simulation-based medical education. Other medical specialties, such as surgery and anesthesiology, have begun using simulation-based educational practices to improve physician performance.32,33 Many of the simulation-based studies have focused more on initial education rather than retraining or reeducation of physicians recognized as deficient. This is likely due to the cultural issues discussed above. The use of simulation in surgical pathology is only in its infancy.34–37 One of the opportunities with directed peer review is the identification of areas of deficiency, either from a practice-wide perspective or an individual pathologist’s perspective. If simulation-based training modules were available, these areas of deficiency could be addressed off-line in a safe environment and at the pace of the trainee.

Summary of Quality Improvement Implementation Possibilities

The implementation of directed peer review in practices not currently doing so has the potential to identify baseline error rates and provide opportunities for individual pathologist improvement. Furthermore, the process of peer review provides an opportunity for pathologists to address their lack of standardization and recognition of bias in the process. These teamwork-based activities may foster improved practice culture.


CONCLUSIONS

Directed peer review methods detect diagnostic errors of accuracy and precision. The root causes of these errors are multifactorial and involve active and latent components. One active cause of an interpretation error is bias, which is secondary to failed heuristics learned in traditional apprenticeship programs. Other latent factors are present in the surgical pathology work environment involving activities, pathways, and connections. Prospective-directed peer review methods may detect errors before they reach the patient, and prospective and retrospective review data may be used to design quality improvement initiatives that result in error reduction. Directed peer review processes, if implemented correctly, have the potential to reduce interpretive error, aid in standardization, reduce practice cultural barriers to patient safety, and improve recognition of bias in the interpretive process.



REFERENCES
1. Foucar E. Diagnostic decision-making in anatomic pathology. Am J Clin Pathol. 2001;116(suppl):S21–S33

2. Kohn LT, Corrigan J, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000

3. Raab SS, Grzybicki DM, Janosky JE, et al. Clinical impact and frequency of anatomic pathology errors in cancer diagnoses. Cancer. 2005;104:2205–2213

4. Liker JK. The Toyota Way: 14 Management Principles from the World’s Greatest Manufacturer. New York: McGraw-Hill; 2004

5. Condel JL, Jukic DM, Sharbaugh DT. Histology errors: use of real-time root cause analysis to improve practice. Pathol Case Rev. 2005;10:82–87

6. Raab SS, Grzybicki DM, Condel JL, et al. Effect of Lean method implementation in the histopathology section of an anatomical pathology laboratory. J Clin Pathol. 2008;61:1193–1199

7. Raab SS, Meier FA, Zarbo RJ, et al. The “Big Dog” effect: variability assessing the causes of error in diagnoses of patients with lung cancer. J Clin Oncol. 2006;24:2808–2814

8. Carletta J. Assessing agreement on classification tasks: the kappa statistic. Comput Linguist. 1996;22:249–254

9. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–174

10. Raab SS, Grzybicki DM, Mahood LK, et al. Effectiveness of random and focused review in detecting surgical pathology error. Am J Clin Pathol. 2008;130:905–912

11. Zarbo RJ, Meier FA, Raab SS. Error detection in anatomic pathology. Arch Pathol Lab Med. 2005;129:1237–1245

12. Ramsay AD, Gallagher PJ. Local audit of surgical pathology. 18 month’s experience of peer review-based quality assessment in an English teaching hospital. Am J Surg Pathol. 1992;16:476–482

13. Safrin RE, Bark CJ. Surgical pathology sign-out. Routine review of every case by a second pathologist. Am J Surg Pathol. 1993;17:1190–1192

14. Raab SS, Nakhleh RE, Ruby SG. Patient safety in anatomic pathology: measuring discrepancy frequencies and causes. Arch Pathol Lab Med. 2005;129:459–466

15. Troxel DB. Diagnostic pitfalls in surgical pathology—discovered by a review of malpractice claims. Int J Surg Pathol. 2001;9:133–136

16. Owens SR, Dhir R, Yousem SA, et al. The development and testing of a laboratory information system-driven tool for pre-sign-out quality assurance of random surgical pathology reports. Am J Clin Pathol. 2010;133:836–841

17. Sams SB, Currens HS, Raab SS. Liquid-based Papanicolaou tests in endometrial carcinoma diagnosis. Performance, error root cause analysis, and quality improvement. Am J Clin Pathol. 2012;137:248–254

18. Reason J. Human error: models and management. BMJ. 2000;320:768–770

19. Smith ML, Raab SS. Assessment of latent factors contributing to error: addressing surgical pathology error wisely. Arch Pathol Lab Med. 2011;135:1436–1440

20. Raab SS, Grzybicki DM. Secondary case review methods and anatomic pathology culture. Am J Clin Pathol. 2010;133:829–831

21. Kahneman D. Thinking, Fast and Slow. 1st ed. New York: Farrar, Straus and Giroux; 2011

22. Baron J. Thinking and Deciding. New York: Cambridge University Press; 2007

23. Kahneman D, Tversky A. Subjective probability: a judgment of representativeness. Cognit Psychol. 1972;3:430–454

24. Wakely SL, Baxendine-Jones JA, Gallagher PJ, et al. Aberrant diagnoses by individual surgical pathologists. Am J Surg Pathol. 1998;22:77–82

25. Raab SS, Stone CH, Jensen CS, et al. Double slide viewing as a cytology quality improvement initiative. Am J Clin Pathol. 2006;125:526–533

26. Leape LL. When good doctors go bad: a systems problem. Ann Surg. 2006;244:649–652

27. Raab SS, Grzybicki DM, Sudilovsky D, et al. Effectiveness of Toyota process redesign in reducing thyroid gland fine-needle aspiration error. Am J Clin Pathol. 2006;126:585–592

28. Raab SS, Vrbin CM, Grzybicki DM, et al. Errors in thyroid gland fine-needle aspiration. Am J Clin Pathol. 2006;125:873–882

29. Grzybicki DM, Jensen C, Geisinger KR. Interobserver reproducibility in the diagnosis of ductal proliferative breast lesions using standardized criteria. Mod Pathol. 2007;20(suppl 2):337A

30. Schnitt SJ, Connolly JL, Tavassoli FA, et al. Interobserver reproducibility in the diagnosis of ductal proliferative breast lesions using standardized criteria. Am J Surg Pathol. 1992;16:1133–1143

31. Gaba DM. Adapting space science methods for describing and planning research in simulation in healthcare: science traceability and Decadal Surveys. Simul Healthc. 2012;7:27–31

32. Cramer SF, Roth LM, Mills SE, et al. Sources of variability in classifying common ovarian cancers using the World Health Organization classification. Application of the pathtracking method. Pathol Annu. 1993;28(pt 2):243–286

33. Fernandez GL, Page DW, Coe NP, et al. Boot cAMP: educational outcomes after 4 successive years of preparatory simulation-based training at onset of internship. J Surg Educ. 2012;69:242–248

34. Dintzis S, Mehri S, Luff DF. Pathology resident performance in simulated clinician communication hand-offs. Mod Pathol. 2012;25(suppl 1):138A

35. Mack HM, Dintzis S, Mehri S. Inter-rater variability in checklist assessment of resident performance. Mod Pathol. 2012;25(suppl 1):139A

36. Mack HM, Smith ML, Vielh P. Simulation-based medical education (SBME) in breast fine needle aspiration (FNA) cytopathology as a means of quality improvement. Mod Pathol. 2011;24(suppl 1):131A

37. Sams SB, Smith ML, Vielh P. Assessment tools of baseline and expert levels of pathologist and trainee competence in diagnostic breast cytology. Mod Pathol. 2011;24(suppl 1):132A–133A

Cited By:

This article has been cited 1 time:

Allen TC. Second opinions: pathologists’ preventive medicine. Arch Pathol Lab Med. 137(3):310–311

Key Words: medical error; surgical pathology; quality improvement

© 2012 Lippincott Williams & Wilkins, Inc.

