Journal of Thoracic Imaging: May 2012 - Volume 27 - Issue 3
doi: 10.1097/RTI.0b013e3182455f36
Original Articles

Repeat Rates in Digital Chest Radiography and Strategies for Improvement

Fintelmann, Florian MD; Pulli, Benjamin MD; Abedi-Tari, Faezeh BS; Trombley, Maureen MBA; Shore, Mary-Theresa MSc; Shepard, Jo-Anne MD; Rosenthal, Daniel I. MD

Author Information

Department of Radiology, Massachusetts General Hospital, Boston, MA

The authors Florian Fintelmann and Benjamin Pulli contributed equally.

The authors declare no conflicts of interest.

Reprints: Florian Fintelmann, MD, Department of Radiology, FND II 216, 55 Fruit Street, Boston, MA 02114 (e-mail: ffintelmann@partners.org).

Abstract

Purpose: To determine the repeat rate (RR) of chest radiographs acquired with portable computed radiography (CR) and installed direct radiography (DR) and to develop and assess strategies designed to decrease the RR.

Materials and Methods: The RR and reasons for repeated digital chest radiographs were documented over the course of 16 months while a task force of thoracic radiologists, technologist supervisors, technologists, and information technology specialists continued to examine the workflow for underlying causes. Interventions decreasing the RR were designed and implemented.

Results: The initial RR of digital chest radiographs was 3.6% (138/3818) for portable CR and 13.3% (476/3575) for installed DR systems. By combining RR measurement with workflow analysis, targets for technical and teaching interventions were identified. The interventions decreased the RR to 1.8% (81/4476) for portable CR and to 8.2% (306/3748) for installed DR.

Conclusions: We found the RR of direct digital chest radiography to be significantly higher than that of computed chest radiography. We believe this is due to the ease with which repeat images can be obtained and discarded, and it suggests the need for ongoing surveillance of RR. We were able to demonstrate that strategies to lower the RR, which had been developed in the era of film-based imaging, can be adapted to the digital environment. On the basis of our findings, we encourage radiologists to assess their own departmental RRs for direct digital chest radiography and to consider similar interventions if necessary to achieve acceptable RRs for this modality.

Image repeat rate (RR) is an established quality assurance measure that contributes to both technologist education and radiation protection.1 In the era of analog radiography, rejected films could simply be collected from the trash bin and analyzed.1 With the advent of digital radiography, determining RR has become more difficult because images deemed unsatisfactory by the technologist are easily deleted at the acquisition station and most likely never transferred to the Picture Archiving and Communication System (PACS) for storage.1,2 In addition, the introduction of filmless radiography has eliminated one of the incentives for counting repeats: silver can no longer be reclaimed from discarded films, and additional exposures require only time and storage space. Furthermore, it was thought that RR analysis was no longer necessary because the transition from film screen to computed radiography (CR) was accompanied by a well-documented decrease in RR for all body parts,1,3–5 largely due to the elimination of repeats secondary to incorrect exposure.6 Consequently, the effect on RR of the more recent introduction of direct radiography (DR) has not yet been thoroughly investigated, apart from small studies that reported an increase in RR within months after the introduction of DR.7,8

To our knowledge, no studies have been published that compare the RR of CR and DR after years of experience. Stimulated by a perceived decline in the quality of chest radiographs in our hospital, we decided to investigate the RR of chest radiographs taken by the technologists staffing our emergency department (ED) who have been working with portable CR and installed DR since 1997 and 2001, respectively. We also report on the design and effect of a quality assurance program implemented to improve upon the high RRs discovered in this process.

MATERIALS AND METHODS

Clinical Setting and Equipment

We included all radiographs of the chest obtained by the technologists providing 24-hour coverage in the ED. This group of technologists acquires about 155,000 images of the chest per year in the ED and in the inpatient setting during off-hours, which represents roughly 45% of all chest examinations obtained at our teaching hospital. Apart from acute chest pain and dyspnea, common indications for chest radiographs included different forms of multiple trauma, as well as urgent care issues such as “rule out pneumonia.” The same group of technologists is also responsible for off-hours bedside radiographs for all inpatients, including the routine early morning bedside chest radiographs for patients in the intensive care units. These examinations are usually ordered to assess tube placement and related complications, pneumothorax, pneumonia, and pulmonary edema. The experience of the 93 technologists involved, derived from their initial licensing date, ranged from 6 months to 21 years; mean experience was 9.47 years (SD, 7.23 y).

Bedside chest radiographs of acutely ill patients who were unable to stand up were acquired using a mobile unit (AMX, GE Healthcare, Buckinghamshire, UK) with CR digitizer workstations (ADC_5148, AGFA Health Care Corporation, Greenville, SC), whereas 3 installed DR systems (Revolution XR/d, GE Healthcare) were used for nonportable examinations.

Measurement of Repeat Rates

To track image RR, the ability of the technologist to permanently delete images was disabled at all acquisition stations, similar to what has been described before.1 The technologist was thus forced to flag unsatisfactory images for rejection. Flagged images were stored locally until reviewed by a supervisor. The supervisors were required to assign each rejected image to one of the following categories: positioning, motion, artifact, improper technique, or equipment failure. Improper technique was defined as incorrect exposure resulting from suboptimal kVp, mAs, collimation, or exposure time. Equipment failure was defined as workstation errors compromising image quality. The date and time of examination, technologist initials, body part, and acquisition station were also recorded. These data were collected daily from October 2007 through December 2007 and again during May 2008 and January 2009.
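
For illustration only, a reject-log record along the following lines captures the fields described above; the class, field, and category names are our own sketch and do not correspond to any particular vendor's acquisition-station software:

    # Illustrative sketch of a reject-log record with the fields described above.
    # Names are hypothetical and not tied to the acquisition-station software used.
    from dataclasses import dataclass
    from datetime import datetime
    from enum import Enum

    class RejectReason(Enum):
        POSITIONING = "positioning"
        MOTION = "motion"
        ARTIFACT = "artifact"
        IMPROPER_TECHNIQUE = "improper technique"   # suboptimal kVp, mAs, collimation, or exposure time
        EQUIPMENT_FAILURE = "equipment failure"     # workstation errors compromising image quality

    @dataclass
    class RejectRecord:
        exam_datetime: datetime        # date and time of examination
        technologist_initials: str
        body_part: str
        acquisition_station: str
        reason: RejectReason           # assigned by the reviewing supervisor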

The data were separated into 5 observation periods, each corresponding to 1 month of data. Image RR was calculated as the number of rejected images divided by the total number of images, that is, the number of rejected images plus the number of images sent to PACS.
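
Expressed as a formula (our notation, added for clarity), with R the number of rejected images and A the number of images sent to PACS during a given period:

    RR = R / (R + A)

For example, the October 2007 counts reported below (614 rejected images out of 7393 total, that is, 614 rejected and 6779 sent to PACS) yield an RR of approximately 8.3%.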

Statistical Analysis

Statistical analysis was performed with SPSS 16.0 for Windows (IBM, Armonk, NY). The Fisher exact test was used for all comparisons. P values of <0.05 were considered statistically significant.
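
To illustrate, the headline comparison can be reproduced with any standard statistics package; the following minimal sketch uses Python with scipy (rather than SPSS, which was used for the actual analysis) and the combined repeat counts quoted in the Results:

    # Minimal illustrative sketch: Fisher exact test comparing the combined
    # repeat counts before (October 2007) and after (January 2009) the
    # interventions, as reported in the Results. scipy stands in for SPSS here.
    from scipy.stats import fisher_exact

    rejected_before, total_before = 614, 7393   # combined RR 8.3%
    rejected_after, total_after = 387, 8224     # combined RR 4.7%

    table = [
        [rejected_before, total_before - rejected_before],   # rejects vs. accepted, before
        [rejected_after, total_after - rejected_after],      # rejects vs. accepted, after
    ]
    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.3g}")   # P well below 0.05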

Quality Assurance Program

A task force charged with elucidating and remedying the underlying causes of repeated radiographs was assembled by the division chief of thoracic radiology. The team consisted of thoracic radiologists, technologist supervisors, technologists, and information technology specialists. At monthly meetings, the technologist supervisors presented detailed workflow diagrams showing every step of image acquisition and image processing, as well as bar diagrams of the frequency of repeat reasons. Applying the principles of root-cause analysis, the task force collectively reviewed the data to identify sources of error. All team members were encouraged to offer their opinions on possible causes and remedies in a “brainstorming” session. Both technical and teaching interventions were thus defined and approved by the task force during these meetings and successively implemented beginning in November 2007. Technologist supervisors reviewed all rejected images on a daily basis throughout the duration of the program and were thus able to provide feedback to the team on the effect of each intervention, thereby ensuring effectiveness and guarding against unforeseen untoward effects.

Teaching interventions were designed by a thoracic radiologist in collaboration with the technologist supervisors and representatives from our vendors. The technologist supervisors were invited by the radiologists to spend time in the reading room. Having the technologists witness the difficulties radiologists encounter during image interpretation facilitated the communication of complex notions, such as the advantage of collimating during image acquisition over cropping the digital image at the workstation, and the limitations introduced when a technologist windows an image at the workstation to highlight a particular finding, such as the tip of a peripherally inserted central catheter. The technologist supervisors presented examples from these sessions to the technologists. The division chief of thoracic radiology lectured the technologists on proper positioning and breathing techniques, and the vendor representatives provided education on the capabilities and limitations of the CR equipment.

Technical improvements included standardizing the postprocessing algorithms between CR digitizer workstations because differences in the protocols had introduced considerable variation in image quality. To minimize exposure errors, exposure indicators were added to the DICOM overlay of all chest radiographs, and technologists were instructed to keep the log of median exposure (logM) between 1.9 and 2.3. Finally, CR cassettes were checked daily for artifacts, and the portable radiography device was serviced at regular intervals, a process that was discovered not to have been sufficiently enforced previously. A visual reminder regarding best practice was placed on all acquisition units (Fig. 1).
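
As a simple illustration of the exposure target (the helper below is hypothetical and not part of the actual workflow, in which the indicator was read from the DICOM overlay by the technologist):

    # Illustrative sketch only: checking the logM exposure indicator against the
    # 1.9-2.3 target band described above.
    # Note (vendor-documented approximation, not from this article): a change of
    # about 0.3 in logM corresponds to roughly a factor of 2 in detector exposure.
    LOGM_MIN, LOGM_MAX = 1.9, 2.3

    def logm_within_target(logm: float) -> bool:
        """Return True if the logM exposure indicator falls inside the target band."""
        return LOGM_MIN <= logm <= LOGM_MAX

    for logm in (1.7, 2.0, 2.5):
        status = "OK" if logm_within_target(logm) else "review exposure factors"
        print(f"logM = {logm:.1f}: {status}")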

Figure 1. Visual reminder regarding best practice placed on all acquisition units.

RESULTS

Quality Assurance Program

Successive task force meetings identified 3 technical and 5 human problems during the image acquisition process and 1 technical and 3 human issues associated with image processing.

Image acquisition was affected by the following human problems: (1) technologists imaging patients in a “semi-upright” position as opposed to either supine or upright; (2) tubes, wires, or snaps overlying the field and obscuring vital structures; (3) absence of collimation; (4) the x-ray beam aiming too high or too low; and (5) incorrect source-to-image distance. The technical problems were: (1) malfunction of the radiography unit; (2) artifacts due to faulty CR cassettes; and (3) incorrect exposure.

Problems associated with image processing included: (1) technologists not providing sufficient annotation, including the absence of initials; (2) technologists adjusting the window level to demonstrate a particular finding, thereby limiting the radiologist’s ability to interpret the remainder of the structures included in the field of view; and (3) cropping the image at the workstation to compensate for not collimating at the time of image acquisition. The technical issue was differences in processing algorithms between CR digitizer workstations.

Repeat Rate

An average of 4010 CR images of the chest were acquired during each of the 5 month-long observation periods (range, 3818 to 4476). The average number of DR images of the chest was 3539 (range, 3210 to 3748) (Table 1). The initial combined RR for digital chest radiographs in October 2007 was 8.3% (614/7393), or 3.6% for CR and 13.3% for DR. With the onset of interventions in November 2007, the RR decreased slightly to 3.1% for CR and 13.1% for DR and, once more interventions had taken effect, continued to decrease to 2.6% for CR and 11.1% for DR in December 2007. Repeat analyses 7 and 15 months after the initiation of interventions demonstrated further decreases in RR, to 2.5% for CR and 10.8% for DR in May 2008 and to 1.8% for CR and 8.2% for DR by January 2009. In summary, a statistically significant reduction of the combined RR from 8.3% (614/7393) to 4.7% (387/8224) was achieved within 15 months (Table 1).

Table 1. Repeat rates for portable CR and installed DR chest radiographs by observation period.

No significant difference was seen between CR and DR in the rate of positioning errors (84.7% vs. 84.8%; P=0.97) or artifacts (8.0% vs. 8.9%; P=0.68). However, equipment failure was more common with CR than with DR (1.8% vs. 0.2%; P=0.001), as was improper technique (3.2% vs. 1.5%; P=0.038), whereas patient motion was more common with DR than with CR (4.6% vs. 2.3%; P=0.045) (Table 2).

Table 2. Distribution of reasons for repeated radiographs for portable CR and installed DR.

DISCUSSION

Our results show a disturbingly high RR for DR of the chest before intervention. The RR of 13.3% for DR is significantly higher than the RR of 3.6% for CR, and it is also well above most historical reports for film-screen imaging.3,9–12 A similarly elevated RR for DR (12%) has been reported by others.8 However, that evaluation was performed shortly after the introduction of DR, and the high RR was attributed to inexperience. This reasoning does not explain our results, as we have been using DR since 2001. We also doubt that inadequate training or supervision of individual technologists is the explanation, because our RR for CR before intervention (3.6%) compares favorably with previously reported data, which suggest an average RR for CR of 4.8%3,13,14 (5271/105,631 reported cases; P<0.01). Furthermore, a mean work experience of more than 9 years suggests that the average technologist had considerable expertise.

It is very likely that the more than threefold difference between DR and CR (13.3% vs. 3.6%) is in part due to higher expectations for a chest radiograph acquired with a stationary device compared with one acquired with a portable unit. The patient population imaged with portable CR differs from that imaged with stationary DR devices. Portable chest radiographs are obtained in acutely ill patients who cannot stand up for a radiographic study of the chest, and these patients often have multiple support devices and rapidly evolving findings. In this setting, even technically limited radiographs can provide valuable information, and a repeat study would not be indicated. This explanation is supported by the greater number of repeats due to motion artifacts in DR. Comparison of the RRs of portable CR with portable DR, or of installed CR with installed DR, would therefore more accurately isolate the effect of the modality itself on RR. Nevertheless, the previously reported RR for installed (nonportable) chest CR examinations was 8.8%,13 which is significantly lower than our DR finding of 13.3% (P<0.0001).

We therefore suspect that the ease of acquiring, reviewing, and repeating a digital radiograph lends itself to a higher RR as hypothesized by Waaler and colleagues.7,13 This is supported by the fact that the distribution of causes for repeat examination remained unchanged after the introduction of educational and technical interventions, except for a decrease in equipment failure. At a time when radiation dose is on the minds of both the public and the radiology community, recognizing ease of acquisition as a drawback of DR is important in order to keep total patient dose as low as reasonably achievable, even though the actual dose of radiation from a chest radiograph is very low compared with CT.

Inherent incentives or disincentives built into the work process can have profound effects. For example, at the time of data collection, no vendor supported the automated collection of RR data. Manual data collection is laborious, and many hours were spent on manual tasks in the course of this project. This may account for the fact that very few evaluations of RR have been performed for DR. Only recently have vendors begun to offer repeat-tracking software as an optional tool for their most recent systems,15 a welcome addition to any quality improvement program.

Our data also show that vigilant application of a model joining the forces of radiologists, technologist supervisors, technologists, and information technology specialists can have very beneficial effects on RR. Combining traditional RR measurement with workflow and root-cause analysis allowed us to design targeted interventions that resulted in a sustainable reduction of RR for both DR and CR, similar to what has been described in the era of film-based imaging.12 We emphasize the role of supervisors who continuously monitor image quality on dedicated PACS-viewing stations set up in close proximity to the technologists and are thus able to provide immediate feedback. This concept resembles the “QA subspecialists” championed by Reiner et al.16 It enabled us to quickly identify ineffective interventions and to replace them with more appropriate measures. Although the supervisors will continue their work indefinitely, we plan to collect 1 month’s worth of RR data every year throughout the entire department using the described data collection method.

We chose to study chest radiographs obtained by the technologists covering both the ED and off-hours bedside examinations of inpatients, because the large volume of examinations performed by this group allowed collection of large data samples of both portable bedside and routine upright examinations. The fact that the ED is staffed with a technologist supervisor around the clock ensured uninterrupted data collection during weekends and holidays.

In conclusion, we found the RR of direct digital chest radiography to be significantly higher than that of computed chest radiography. We believe that this is because of the ease with which repeat images can be obtained and discarded, and it suggests the need for ongoing surveillance of RR. We were able to demonstrate that strategies to lower RR, which had been developed in the era of film-based imaging, can be adapted to the digital environment. On the basis of our findings, we encourage radiologists to assess their own departmental RRs for direct digital chest radiography and consider similar interventions if necessary to achieve acceptable RRs for this modality.

ACKNOWLEDGMENTS

The authors would like to thank all the technologists and technologist supervisors involved for their help with collecting data.

REFERENCES

1. Nol J, Isouard G, Mirecki J. Digital repeat analysis: setup and operation. J Digit Imaging. 2006;19:159–166

2. Tucker DM, McEachern M. Quality assurance and quality control of an intensive care unit picture archiving and communication system. J Digit Imaging. 1995;8:162–167

3. Weatherburn GC, Bryan S, West M. A comparison of image reject rates when using film, hard copy computed radiography and soft copy images on picture archiving and communication systems (PACS) workstations. Br J Radiol. 1999;72:653–660

4. Peer S, Peer R, Giacomuzzi SM, et al. Comparative reject analysis in conventional film-screen and digital storage phosphor radiography. Radiat Prot Dosimetry. 2001;94:69–71

5. Lau S-l, Mak AS-h, Lam W-t, et al. Reject analysis: a comparison of conventional film-screen radiography and computed radiography with PACS. Radiography. 2004;10:183–187

6. Honea R, Blado ME, Ma Y. Is reject analysis necessary after converting to computed radiography? J Digit Imaging. 2002;15(suppl 1):41–52

7. Waaler D, Hofmann B. Image rejects/retakes—radiographic challenges. Radiat Prot Dosimetry. 2010;139:375–379

8. Lee B, Junewick J, Luttenton C. Effect of digital radiography on emergency department radiographic examinations. Emerg Radiol. 2006;12:158–159

9. Dunn MA, Rogers AT. X-ray film reject analysis as a quality indicator. Radiography. 1998;4:29–31

10. Al-Malki MA, Abulfaraj WH, Bhuiyan SI, et al. A study on radiographic repeat rate data of several hospitals in Jeddah. Radiat Prot Dosimetry. 2003;103:323–330

11. Arvanitis TN, Parizel PM, Degryse HR, et al. Reject analysis: a pilot programme for image quality management. Eur J Radiol. 1991;12:171–176

12. Gadeholt G, Geitung JT, Gothlin JH, et al. Continuing reject-repeat film analysis program. Eur J Radiol. 1989;9:137–141

13. Foos DH, Sehnert WJ, Reiner B, et al. Digital radiography reject analysis: data collection methodology, results, and recommendations from an in-depth investigation at two hospitals. J Digit Imaging. 2009;22:89–98

14. Prieto C, Vano E, Ten JI, et al. Image retake analysis in digital radiography using DICOM header information. J Digit Imaging. 2009;22:393–399

15. Minnigh TR, Gallet J. Maintaining quality control using a radiological digital X-ray dashboard. J Digit Imaging. 2009;22:84–88

16. Reiner BI, Siegel EL, Siddiqui KM, et al. Quality assurance: the missing link. Radiology. 2006;238:13–15

Keywords:

chest; direct radiography; repeat rate; computed radiography; digital radiography

© 2012 Lippincott Williams & Wilkins, Inc.
