JTI Blog
Current events in cardiopulmonary radiology, updates about the journal’s web site features, and links to other web sites of interest to cardiopulmonary radiologists.

Wednesday, June 8, 2011

Jeff Kanne--"The QA Guy"--on Peer Review in Radiology

Peer review continues to be stressed as an important part of quality management in radiology.  However, what constitutes peer review varies across departments, often varies within a department, and frequently means something different to each individual radiologist.  Building a successful peer review program requires defining specific goals for the program and obtaining buy-in from colleagues.

The primary goal of peer review is to improve patient care by reducing future errors in interpretation and management.  Peer review in radiology is most commonly performed through retrospective review of a case, with the reviewer deciding whether he or she agrees with the final radiologic report.  A variety of scoring systems can be used, but general themes focus on the severity of an error and whether or not it has clinical significance.  Cases with significant discrepancies are generally discussed with the original interpreting radiologist.  Finally, these cases are best reviewed and discussed in a "missed case" conference in an anonymous and non-punitive setting, with the goal that all participants learn from the mistakes of others before making the same mistake oneself.

In establishing a peer review program, defining the type and number of cases to be reviewed is essential.  For example, an individual or group needs to decide what percentage of exams will be reviewed, which imaging modalities will be included, and which individuals will participate.  Realistically, reviewing more than a few percent of exams daily becomes too burdensome, especially in very busy practices, so targeting reviews to high-risk areas, such as studies originating in the emergency department, may be useful.

Random peer review provides the best chance to limit biases and can be performed in a number of ways.  One example is to have the technologist randomly flag a case or two for review each day.  Another possibility is to review the comparison study of the first case encountered during a shift that has a prior radiograph.  The advantage of the former is that the review is near real-time, allowing referring physicians to alter patient care when a significant error is detected.  The advantage of the latter is that no human selection is required.  Peer review can be performed with pre-printed index cards or small sheets of paper and a locked repository for completed reviews.  For practices with extensive IT resources, a sophisticated system can be developed providing randomization, an electronic data entry interface, data mining, and reporting.  Alternatively, several commercial products are available, some of which can be fully integrated with PACS.
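For practices building their own tools, the randomization step described above is straightforward to automate.  The sketch below is a minimal, hypothetical illustration (the function name, exam identifiers, and the 3% review rate are all assumptions, not part of any particular PACS or commercial product) of randomly flagging a small fraction of the day's exams for review:

```python
import random

def flag_for_review(exam_ids, fraction=0.03, seed=None):
    """Randomly flag a fraction of the day's exams for peer review.

    exam_ids -- list of exam identifiers (hypothetical format)
    fraction -- target review rate, e.g. 0.03 for roughly 3% of exams
    seed     -- optional seed for reproducible selection
    """
    rng = random.Random(seed)
    # Always flag at least one exam, but never more than are available.
    n = max(1, round(len(exam_ids) * fraction))
    return rng.sample(exam_ids, min(n, len(exam_ids)))

# Example: flag roughly 3% of a day's 100 exams.
exams = [f"EX{i:04d}" for i in range(1, 101)]
flagged = flag_for_review(exams, fraction=0.03)
```

Because the selection is random rather than reviewer-chosen, this mirrors the "no human selection" advantage noted above; a scheduled job could run it each morning and route the flagged cases to reviewers.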

Once a peer review program has been proposed, all radiologists involved should have the opportunity to provide feedback before the program is enacted.  A new program should focus primarily on participation rather than on specific review results so that, over time, radiologists come to see the educational value of peer review.  Specific participation metrics should be defined in advance, and any incentives for meeting them clearly stated up front.  A financial incentive, even if small, for meeting a participation goal will generally result in high participation rates.  The key to obtaining meaningful results from a peer review program is active participation by everyone, so that results are a true reflection of a practice and everyone has the opportunity to improve its overall quality.

Editor's Note: If you like this blog entry, you may also be interested in a recent perspective on peer review by David Larson and John Nance published in Radiology.