Abstract: Tracking the radiation dose to medically-exposed populations can promote adoption of best practices among medical facilities that use ionizing radiation. Dose index registries provide an important tool for practices to benchmark their radiation doses for medical imaging and highlight areas where improvements may be made. However, individual patient dose tracking has many confounding variables to consider. It is not clear which dose measures should be tracked, and the variation among these dose measures must be understood relative to the variations in body habitus that are encountered in clinical practice. In addition, there are many uncertainties associated with risk estimation from low-dose radiation that relate to the age, gender, and life expectancy of the exposed individual. Other sources of variation in the use of ionizing radiation for medical imaging are of concern. Specifically, deviation from best practice in the use of medical imaging should be reduced, if not eliminated. Several tools exist to help reduce variation among practices when it comes to rational examination selection. Computerized order entry with decision support offers the promise to introduce these tools at the point of care, which should increase their use and adoption in the medical community at large.
Introduction of Dose Tracking and Rational Exam Selection (Video 1:59, http://links.lww.com/HP/A33)
*Juan M. Taveras Professor of Radiology, Harvard Medical School, Radiologist-in-Chief, Department of Radiology, Massachusetts General Hospital, 175 Cambridge Street, 2nd Floor, Boston, MA 02114.
The author declares no conflicts of interest.
For correspondence contact the author at the above address, or email at email@example.com.
Supplemental Digital Content is available in the HTML and PDF versions of this article on the journal’s Web site (www.health-physics.com).
(Manuscript accepted 27 August 2013)
INCREASING CONCERNS about the magnitude of ionizing radiation imparted to the U.S. population for medical imaging have led to a rising mandate for radiation dose tracking among the medically exposed population. Tracking the radiation dose to medically exposed populations can promote adoption of best practices among medical facilities that use ionizing radiation. Participation in a dose index registry allows medical imaging facilities to send pooled dose data to the registry so that individual facilities can assess their performance relative to peer institutions regarding the radiation dose used for common imaging examinations.
In May 2011, the American College of Radiology (ACR) launched a dose index registry that allows imaging facilities to submit patient-specific dose data for comparison of average dose indices among similar practices across the country. The ACR dose index registry provides an important tool for practices to benchmark their radiation doses for medical imaging and to highlight areas where improvements may be made.
While some U.S. states have passed legislation that requires the reporting of individual patient dose estimates and potentially tracking those dose estimates over time, individual patient-dose tracking has many confounding variables to consider. First, it is not clear which dose measures should be tracked. Second, the variation among these dose measures must be understood relative to the variations in body habitus that are encountered in clinical practice, both for constant levels of image noise and for various levels of image noise that can be accommodated with different clinical indications. Estimation of stochastic risk is the primary driver behind cumulative radiation exposure tracking on an individual basis. However, there are many uncertainties associated with risk estimation from low-dose radiation that relate to the age, gender, and life expectancy of the affected individual (Hendee and O’Connor 2012).
In computed tomography, manufacturers have sought to increase the gantry rotation speed in order to improve temporal resolution. This has required manufacturers to produce x-ray tubes with higher power limits so that the tube current–time product can be maintained at an adequate level for good image quality. However, higher tube power has the potential for excessively high dose if not used appropriately. Fortunately, manufacturers have also developed methods to tailor the examination to the size of the individual. Automatic tube-current modulation, much like cruise control on an automobile, adjusts the milliampere setting as the thickness and shape of the irradiated cross section change through the anatomic areas of a given patient. This was intended, in part, to avoid over-exposing individuals with a small body habitus and potentially to boost the tube current in patients with a larger body habitus to produce constant levels of image noise. In practice, many facilities put a cap on the maximum tube current that is allowed, accepting higher levels of image noise in patients with large amounts of body fat.
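The capping behavior described above can be sketched as follows. This is an illustrative assumption, not a vendor algorithm: real scanners modulate tube current from measured attenuation, and the thickness-to-milliampere scaling and numeric values below are placeholders chosen only to show how a facility-set cap interacts with modulation.

```python
def modulated_ma(reference_ma: float, thickness_cm: float,
                 reference_thickness_cm: float = 25.0,
                 ma_cap: float = 600.0) -> float:
    """Illustrative tube-current modulation with a facility-set cap.

    Scales a reference tube current (mA) by patient thickness at a
    given slice, then clips at ma_cap. The exponential scaling is a
    stand-in for a vendor's proprietary attenuation-based algorithm.
    """
    # Illustrative assumption: halve/double the tube current for every
    # ~4 cm of tissue below/above the reference thickness.
    scale = 2.0 ** ((thickness_cm - reference_thickness_cm) / 4.0)
    return min(reference_ma * scale, ma_cap)

# A thin cross section gets less current; a thick one would call for
# more current than the cap allows and is clipped, accepting higher
# image noise in larger patients.
thin = modulated_ma(200.0, 21.0)    # 100.0 mA
thick = modulated_ma(200.0, 33.0)   # 800 mA requested, clipped to 600.0
```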
While automatic tube-current modulation reduces the variation in image noise from patient to patient, it increases the variation in the radiation dose associated with each examination. In a recent study by Israel et al. (2010), 91 patients undergoing body computed tomography (CT) examinations were imaged with automatic tube-current modulation, and the radiation exposure needed to produce images with a constant level of image noise was evaluated. These investigators found that the computed tomography dose index (CTDIvol)† varied by a factor of three between patients weighing 60 and 100 kg, and the mean organ dose to the liver varied by a factor of two. Note that the CTDIvol was adjusted for patient size in a manner analogous to the size-specific dose estimate described by the American Association of Physicists in Medicine (AAPM 2011). Thus, substantial variation in individual patient doses is expected based purely on patient size differences.
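The size adjustment referenced above can be sketched as follows. The effective-diameter formula is taken from AAPM Report 204; the conversion-factor lookup, however, is a hypothetical placeholder for illustration only — actual factors must be read from the report's tables for the relevant (16- or 32-cm) reference phantom.

```python
def effective_diameter_cm(ap_cm: float, lat_cm: float) -> float:
    """Effective diameter per AAPM Report 204: the diameter of a circle
    with the same area as the elliptical cross section defined by the
    anteroposterior (AP) and lateral (LAT) patient dimensions."""
    return (ap_cm * lat_cm) ** 0.5

# Hypothetical conversion factors keyed by effective diameter (cm).
# Real values are tabulated in AAPM Report 204; these placeholders only
# show the trend (small patients scaled up, large patients scaled down).
ILLUSTRATIVE_FACTORS = {16: 1.6, 24: 1.2, 32: 0.9}

def ssde_mgy(ctdivol_mgy: float, ap_cm: float, lat_cm: float) -> float:
    """Size-specific dose estimate: the scanner-reported CTDIvol scaled
    by the conversion factor for the nearest tabulated diameter."""
    d = effective_diameter_cm(ap_cm, lat_cm)
    nearest = min(ILLUSTRATIVE_FACTORS, key=lambda k: abs(k - d))
    return ctdivol_mgy * ILLUSTRATIVE_FACTORS[nearest]
```

For example, an 18 cm × 32 cm cross section has an effective diameter of 24 cm, so a reported CTDIvol of 10 mGy would be scaled by the (illustrative) factor of 1.2.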
Presuming that individual patient dose tracking can be used to estimate individual radiation risks, patient age has a large bearing on the risks associated with patient doses. Classically, solid tumor cancer risk has been thought to decrease exponentially with rising age up to ∼30 y of age. The risk flattens through middle age and then progressively declines into old age (NA/NRC 2005). Practitioners have taken some comfort in the understanding that most imaging examinations are performed in patients of advancing age when cancer incidence from individual exposures is declining. However, more detailed analyses of the atomic bomb survivor data suggest significant variability in the age dependency of various solid organ cancers. For example, Shuryak et al. (2010) found significant differences among Japanese atomic bomb survivors in the age dependency of breast cancer versus lung cancer. The authors hypothesize that these differences are based on differences in cancer initiation versus cancer promotion. Previously, premalignant cellular initiation has been thought to occur as the predominant tumorigenic event following radiation exposure. In people irradiated at younger ages, initiated cells have longer to exploit their growth advantage over normal cells, and thus cancer incidence declines with increasing age at exposure. Conversely, people irradiated at older ages, when there are more premalignant cells for promotion to act upon, are expected to have larger promotion-driven risks. Thus, the authors speculate that the relative contribution of initiation versus promotion is 10-fold larger for breast cancer than for lung cancer among atomic bomb survivors. Reflecting this difference, radiation-induced breast cancer risk decreases with age at exposure at all ages, whereas radiation-induced lung cancer risks do not. However, the authors point out that other interpretations are possible.
For example, the data may be consistent with an abrupt age-dependent increase in smoking and/or drinking patterns among survivors, reminding us of the uncertainty in the relationship between age at exposure and relative risk.
In addition to the age of the exposed individual, his or her life expectancy also has a profound effect on the lifetime risk imparted by a radiation exposure related to medical imaging. Brenner et al. (2011) documented this phenomenon using colon cancer as an example. Here, the authors illustrate that for a 70-y-old patient with colon cancer, the estimated reduction in lifetime radiation-associated lung cancer risk is ∼92% for stage four disease versus 8% for stage zero or stage one disease. That is, radiation-induced cancer risk declines as life expectancy declines, because radiation-induced cancers have long latency periods and a shortened lifespan leaves less time for them to develop. Clearly, individuals who undergo medical imaging may have morbid or co-morbid conditions that reduce their overall life expectancy a priori. This reduction in life expectancy must be taken into account when estimating the relative risk of ionizing radiation and its impact on the overall life expectancy of the individual.
In addition to the expected variation imparted by body habitus and the uncertainty of associated risk based on age and life expectancy, the actual radiation exposure metric of interest should be considered. Many investigators and electronic medical record manufacturers have focused on a single numeric indicator of the overall radiation exposure that has been received by a patient who is undergoing a medical imaging procedure. Although it was intended for population-based exposures, the effective dose has made its way into the medical literature in spite of efforts to clarify its intended use (Bankier and Kressel 2012). The effective dose can be a useful measure to compare the relative magnitude of radiation doses imparted among various imaging procedures for a clinical condition. In addition, it can be used as a quality control tool that allows practitioners to quickly identify examinations that may have been performed with an excessively high radiation exposure. In CT, this is commonly calculated by multiplying the dose-length product by a conversion factor specific to the body part being examined (recognizing that separate conversion factors are available for children and adults). For example, a practitioner who is monitoring a CT scan of the chest and abdomen for a potential aortic dissection may identify an excessively high dose by multiplying the dose-length product (e.g., 3,400 mGy cm) by the standard conversion factor for chest and abdominal CT scans [0.017 mSv (mGy cm)−1]. In this manner, the practitioner can quickly estimate an effective dose (58 mSv) for this examination. Recognizing that the effective dose is too high for this study, the practitioner can query the technical staff as to proper use of examination technique and tube-current modulation to lower the dose to as low as reasonably achievable for a gated study of the aorta.
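The arithmetic above amounts to a single multiplication and is easy to automate as a quality-control check. The sketch below uses only the chest/abdomen conversion factor quoted in the text; the function name and the flagging threshold are illustrative choices, not part of any standard.

```python
def effective_dose_msv(dlp_mgy_cm: float, k_msv_per_mgy_cm: float) -> float:
    """Effective-dose estimate from the dose-length product (DLP):
    E = DLP x k, where k is a conversion factor specific to the body
    part examined (separate factors apply for children and adults)."""
    return dlp_mgy_cm * k_msv_per_mgy_cm

# Example from the text: chest/abdomen CT with DLP = 3,400 mGy cm and
# k = 0.017 mSv per mGy cm gives ~58 mSv.
dose = effective_dose_msv(3_400, 0.017)

# Illustrative quality-control flag; the 50-mSv threshold is an
# arbitrary placeholder a facility might tune for its own protocols.
needs_review = dose > 50.0
```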
However, electronic medical record manufacturers too often treat this single metric as the actual dose received by the individual. This has significant ramifications when doses are tracked over time, as effective doses may not reflect the doses imparted to critical organs. Moreover, effective doses do not take into account the size, age, and life expectancy of the individual when these procedures are translated to relative risk.
Organ doses provide a more meaningful measure of absolute dose and risk to the patient than effective dose. In a recent study of the relative dose imparted with CT versus nuclear medicine imaging for parathyroid adenomas, Mahajan et al. (2012) found organ doses to be more useful than effective dose. Here, the authors compared the radiation dose associated with contrast-enhanced imaging of the neck for suspected parathyroid adenoma with the dose associated with Sestamibi imaging in nuclear medicine. While the effective doses were found to be similar (10.4 mSv for CT as compared to 7.8 mSv for Sestamibi), mean organ doses were quite variable. First, the authors identified the critical organs affected by each imaging examination. For the Sestamibi examination, the colon was the critical organ, receiving 33 mGy, whereas the thyroid gland was the critical organ for the CT scan, receiving 92 mGy. Translating those doses to risks, the authors found that for patients under 30 y of age, particularly women, the risk to the thyroid increased exponentially with declining age. However, over 30 y of age, the risk to the thyroid was minimal, approaching zero by 60 y of age. Conversely, the risk to the colon showed less age dependency and exceeded the risk to the thyroid over 30 y of age. The authors used this information to guide surgeons in the appropriate use of medical imaging, suggesting use of the Sestamibi examination in patients under 30 y of age and allaying their concerns about the radiation dose and risk to the thyroid gland for patients over 30 y of age. Overall, many surgeons prefer the CT scan to the Sestamibi examination owing to its increased resolution and improved spatial localization of small parathyroid adenomas that may be found preoperatively.
While substantial variation is expected in medical radiation exposures and their estimated risks, other sources of variation in the use of ionizing radiation for medical imaging are of concern and prompt regulators to require dose tracking. Specifically, deviation from best practices in the use of medical imaging should be reduced, if not eliminated. The overuse of two-phase chest CT examinations, which are performed both prior to and following administration of intravenous contrast material, is a good example of unnecessary radiation exposure. A recent report in the lay press noted that more than 200 hospitals administered “double scans” more than 30% of the time when best practice is <5%. Moreover, failure to adopt best practice tends to cluster geographically, suggesting that local influences may drive resistance to adoption (NY Times 2011).
Several tools exist to help reduce variation among practices when it comes to rational examination selection. The ACR Appropriateness Criteria® (ACR 2013) guide practitioners to the appropriate imaging examination by returning a numeric score for any given combination of medical topic, variant, and imaging examination. However, a numeric score alone may not be sufficient in complex medical circumstances. When navigating complex roadways, one prefers turn-by-turn directions over a set of relative weightings for the roads ahead that leaves the driver to choose the route at each intersection. Thus, multidisciplinary diagnostic algorithms are needed that go beyond the appropriateness criteria to guide practitioners to the appropriate diagnostic pathway for a given clinical scenario. Work is underway in this regard; however, the pressure for rapid throughput, particularly in the emergency room, confounds the wide-scale adoption of such tools in clinical practice. The benefit is maximized when algorithms are not overly complicated and are jointly endorsed by multiple specialties. Some of the best examples exist in Western Australia, where the Royal Australian and New Zealand College of Radiologists and the Royal Australian College of General Practitioners joined forces to develop diagnostic algorithms for common clinical conditions (WA 2013). Here, practitioners are guided to the appropriate test, some that involve imaging and some that do not, for common clinical scenarios. In the United States, ACR has published detailed algorithms to guide practitioners in the evaluation of incidental findings in the liver, kidney, pancreas, and adrenal gland (Berland et al. 2010).
Work is underway to develop additional algorithms to guide practitioners in the evaluation of similar findings in a host of other organs and physiologic systems.
Computerized order entry with decision support offers the opportunity to introduce these tools at the point of care, which should increase their use and adoption in the medical community at large (Hendee et al. 2010). Decision support coupled with computerized order entry has been shown to curb the growth rate of imaging use at the Massachusetts General Hospital (Sistrom et al. 2009). At the Virginia Mason Clinic, decision support tools were shown to decrease inappropriate use of advanced imaging tests for certain clinical conditions, including lumbar magnetic resonance imaging (MRI) for low back pain, head MRI for headache, and sinus CT for sinusitis (Blackmore et al. 2011). These results are encouraging and inspire further development of diagnostic algorithms and their incorporation into decision support tools, with the hope of reducing the variation that accompanies the practice of imaging on a wide scale.
In summary, while dose tracking among populations of patients can be used to promote best practices through the creation and adoption of radiation dose benchmarks, the variation in many patient-specific variables makes individual dose tracking problematic. However, variation in the use of medical imaging should be minimized through promotion of appropriate use with referral guidelines (appropriateness criteria), diagnostic algorithms, and decision support for referring physicians.
American Association of Physicists in Medicine. Size-specific dose estimates (SSDE) in pediatric and adult body CT examinations. College Park, MD: AAPM; Report of AAPM Task Group 204; 2011.
Bankier AA, Kressel HY. Through the looking glass revisited: the need for more meaning and less drama in the reporting of dose and dose reduction in CT. Radiol 265: 4–8; 2012.
Berland LL, Silverman SG, Gore RM, Mayo-Smith WW, Megibow AJ, Yee J, Brink JA, Baker ME, Federle MP, Foley WD, Francis IR, Herts BR, Israel GM, Krinsky G, Platt JF, Shuman WP. Managing incidental findings on abdominal CT: white paper of the ACR incidental findings committee. JACR 7: 754–773; 2010.
Blackmore CC, Mecklenburg RS, Kaplan GS. Effectiveness of clinical decision support in controlling inappropriate imaging. JACR 8: 19–25; 2011.
Brenner DJ, Shuryak I, Einstein AJ. Impact of reduced patient life expectancy on potential cancer risks from radiologic imaging. Radiol 261: 193–198; 2011.
Hendee WR, O’Connor MK. Radiation risks of medical imaging: separating fact from fantasy. Radiol 264: 312–321; 2012.
Hendee WR, Becker GJ, Borgstede JP, Bosma J, Casarella WJ, Erickson BA, Maynard CD, Thrall JH, Wallner PE. Addressing overutilization in medical imaging. Radiol 257: 240–245; 2010.
Israel GM, Cicchiello L, Brink JA, Huda W. Patient size and radiation exposures in chest/abdomen/pelvis CT examinations performed with automatic exposure control. AJR 195: 1342–1346; 2010.
Mahajan A, Starker LF, Ghita M, Udelsman R, Brink JA, Carling T. Parathyroid 4DCT: Evaluation of radiation dose exposure during preoperative localization of parathyroid tumors in primary hyperparathyroidism. World J Surg 36: 1335–1339; 2012.
NA/NRC. Health risks from exposure to low levels of ionizing radiation: BEIR VII. Washington, DC: National Academies Press; National Research Council; 2005.
NY Times. Hospitals perform needless double CT scans, records show. 18 June 2011; 1.
Shuryak I, Sachs RK, Brenner DJ. Cancer risks after radiation exposure in middle age. J Natl Cancer Inst 102 (21): 1628–1636; 2010.
Sistrom CL, Dang PA, Weilburg JB, Dreyer KJ, Rosenthal DI, Thrall JH. Effect of computerized order entry with decision support on the growth of outpatient procedure volumes: seven year time series analysis. Radiol 251: 147–155; 2009.
WA. A clinical decision support tool and educational resource for diagnostic imaging. Government of Western Australia, Department of Health. Available at www.imagingpathways.health.wa.gov.au. Accessed 13 April 2013.
†CTDIvol is defined as the weighted CT dose index normalized by the pitch of helical scan acquisitions.