Colorectal cancer (CRC) screening plays an important role in reducing the incidence of advanced CRC, the third most common cancer diagnosis and the fourth leading cause of cancer death in the world.1 Despite advances in screening methodology, only 65.1% of U.S. adults were up-to-date with their CRC screening.2 Annual fecal-based CRC screening tests are often offered as a first-line test, given their low cost, sensitivity (range, 73–88%) and specificity (range, 90–96%), and convenience to patients.3 A fecal immunochemical test (FIT) is one such test and is frequently preferred over a guaiac fecal occult blood test because FIT sample collection is simpler, which translates to higher collection rates.4,5 Despite these advantages, return rates as low as 18.5% have been documented.6 Patient surveys indicate that refusal to complete FIT screening tests is often due to discomfort, disgust, and embarrassment in collecting stool samples.7 Accordingly, investigators have implemented various interventions to improve FIT completion rates, such as live reminder phone calls, with modest success.6,8,9
Although many studies have sought to improve the number of patients who complete FIT CRC screening tests, few have focused on one important determinant: ensuring that a valid specimen has been submitted for processing. Limited data exist from one health center, which developed visual interventions, including highlighted collection-date instructions and an instructional graphical insert, to reduce rejected specimens caused by the absence of a recorded collection date.6 Laboratory rejection of FIT specimens in this study ranged from 17% to 42%. The consequences of a rejected specimen include specimen recollection, delay in result availability, and a high rate of specimen/test abandonment by the patient.10 Therefore, it is critical that institutions have low rates of rejected specimens. Although principles may be extrapolated from other fecal-based CRC screening studies, the dearth of published data on FIT rejections means there is little evidence on effective interventions to decrease FIT specimen rejections and no clear benchmark for FIT rejection rates.
In our healthcare system, the rate of rejected FIT specimens in June 2017 received by the laboratory was 28.6%, which was felt to be unacceptably high by our facility leadership. We therefore embarked on an initiative to reduce the number of rejected FIT specimens received by the laboratory from 28.6% in June 2017 to less than 10% by December 2017, with the broader goal of improving our healthcare system's CRC screening rate. Although there is no clear benchmark for FIT rejection rates, hospital leadership believed that a less than 10% rejection rate would be an achievable goal in the specified time frame.
Setting and Context
Our healthcare system consists of a tertiary care medical center, two ambulatory care centers, and eight community-based outpatient clinics serving Veterans in five counties. The healthcare system is an integrated system that provides care solely for eligible Veterans and is funded by the federal government. We adhere to the United States Preventive Services Task Force guidelines for CRC screening and offer all recommended screening modalities, including colonoscopy, flexible sigmoidoscopy, fecal tests, and computed tomography (CT) colonography, to patients starting at the age of 50 years. The FIT is used as an initial screen for CRC in low-risk patients, and each year the laboratory processes approximately 10,000 FIT specimens.
When a patient visits primary care, a clinical reminder in the electronic health record prompts primary care providers (PCPs) and nurses to distribute the one-sample FIT kit if the patient is due for annual screening. The patient obtains the sample at home and has the option of returning it to the clinic or satellite laboratory or mailing it directly to the main laboratory using the provided envelope. If the sample is received at the clinic or the satellite laboratory, the sample is mailed to the main hospital laboratory. Once the specimen arrives at the main laboratory, the laboratory technologist accessions and loads the specimen onto the analytical instrument on the same or the following day. Due to the instability of the specimen, the specimen is required to be tested within 14 days of collection. Once analyzed, the specimen result is documented in the medical record by the technologist. If the results are positive or invalid, there are two processes in place to communicate this information back to the provider. First, providers are notified of a positive or canceled FIT result through the electronic medical record system. Second, a staff member in laboratory services collates a list of positive/cancelled FITs and relays it to an individual in primary care who then communicates the information to the appropriate individuals.
The primary outcome measure was the monthly rate of rejected FIT specimens received by the laboratory, which was determined by dividing the number of rejections in each month by the total number of specimens received that month (i.e., specimens tested plus specimens rejected) by the laboratory. The monthly number of rejected specimens was retrieved from the rejection log, where each rejected specimen was manually recorded and classified by the laboratory technologist. We used quality improvement (QI) Macros Version 2017.09 to create a statistical process control p-chart for rejection rates and R Studio Version 1.1.423 to run statistical analyses. We used the Wilcoxon–Mann–Whitney test to determine significance between Phase 1 and Phase 2 rejection rates and CRC screening rates.
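As an illustration of the p-chart arithmetic described above, the sketch below computes monthly rejection rates, the overall center line, and the 3-sigma control limits (which vary with each month's specimen volume). The monthly counts are hypothetical stand-ins, and the project itself used QI Macros and R rather than Python.

```python
import math

# Hypothetical monthly counts: month -> (specimens received, specimens rejected).
# Illustrative numbers only, not the project's actual data.
months = {
    "Jun": (928, 265),
    "Jul": (850, 170),
    "Aug": (900, 120),
}

# Monthly rejection rate = rejections / total specimens received that month
rates = {m: rej / n for m, (n, rej) in months.items()}

# Center line of a p-chart: the overall proportion across all months
total_n = sum(n for n, _ in months.values())
total_rej = sum(rej for _, rej in months.values())
p_bar = total_rej / total_n

# 3-sigma limits depend on each month's sample size n, clipped to [0, 1]
limits = {
    m: (max(0.0, p_bar - 3 * math.sqrt(p_bar * (1 - p_bar) / n)),
        min(1.0, p_bar + 3 * math.sqrt(p_bar * (1 - p_bar) / n)))
    for m, (n, _) in months.items()
}
```

Because the limits are recomputed from each month's denominator, months with fewer specimens get wider limits, which is why p-charts suit proportion data with unequal subgroup sizes.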
The CRC screening rate was obtained through an External Peer Review Program that collects monthly institutional data based on Healthcare Effectiveness Data and Information Set (HEDIS) measures.11 This project was determined to be nonresearch by the institutional review board.
Phase 1 (May 2017–November 2017)—Interventions
A hospital steering committee consisting of individuals from primary care, pathology and laboratory services, palliative care, nursing, social work, pharmacy, and the deputy chief of staff found that FIT specimens were primarily rejected because the samples were received by the laboratory later than the 14-day time window necessary to be reliably analyzed. As a result, in May 2017, the executive leadership team developed an automated telephone reminder system, a technology already in use for other aspects of patient care. Phone outreach has been used successfully elsewhere to remind patients to return FITs9 and was expected to reduce the number of specimens received beyond the 14-day window. The automated phone reminder process began with a data analyst who generated a weekly list of patients who had received a FIT kit, along with their phone numbers. This list was delivered to the automated call center, where a second individual activated the automated phone calls to patients.
To identify additional targets for improvement, the project champion, a board-certified pathologist trained in quality improvement (CC), performed a deeper analysis of specimen rejection causes by interviewing laboratory staff and observing laboratory and primary care clinic processes. The data before June 2017 were found to be unreliable, as they depended on intermittent communication between two individuals in separate laboratory areas, which precluded the ability to obtain consistent baseline data. In June 2017, rejection data were migrated to a SharePoint site, and the documentation of rejected specimens was performed within the microbiology laboratory to more reliably capture the data; however, limitations of the software and the lack of standardization in categorizing rejection causes by the laboratory technologists made analysis of the data inefficient. Therefore, the data collection process migrated to an Excel spreadsheet in August 2017 to resolve these limitations and to allow for convenient visualization of rejection causes. A Pareto chart (Figure 1) revealed that most specimen rejections were due to receipt of an expired specimen; in other words, the laboratory did not receive the specimen within 14 days of specimen collection as required by the manufacturer. Other causes for rejection included lack of a specified collection date, lack of patient information, lack of physician orders, and illegible handwriting. This problem analysis led to the creation of five interventions requiring collaboration with multiple disciplines.
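The Pareto analysis above amounts to tallying rejection causes, sorting them from most to least frequent, and tracking the cumulative percentage. A minimal sketch, using hypothetical tallies rather than the project's actual log:

```python
from collections import Counter
from itertools import accumulate

# Hypothetical rejection-log entries; the categories mirror those in the text.
log = (["expired"] * 45 + ["no collection date"] * 21 +
       ["no physician order"] * 17 + ["no patient information"] * 5 +
       ["illegible handwriting"] * 5 + ["miscellaneous"] * 1)

counts = Counter(log)
# Pareto ordering: causes sorted from most to least frequent (the chart's bars)
ordered = counts.most_common()

total = sum(counts.values())
# Cumulative-percentage line that accompanies the bars on a Pareto chart
cum_pct = [100 * c / total for c in accumulate(n for _, n in ordered)]
```

The ordered list makes the "vital few" causes obvious: here the top category alone accounts for nearly half of all rejections, which is what justified targeting expired specimens first.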
First, in July 2017, at clinic sites, nurses placed bright red stickers on FIT envelopes that read, “ATTENTION! MAIL SPECIMEN WITHIN 48 HOURS OF COLLECTION,” to remind patients of the time sensitivity of returning the specimen. A time limit of “48 hours” was used to increase the sense of urgency and ensure prompt return of the specimen.
Second, Primary Care and Pathology & Laboratory Medicine created a new laboratory standard operating procedure (SOP) that was implemented in August 2017. The goal of the SOP was to provide laboratory technologists with a standardized way to triage FIT specimens received and to build understanding as to why specimens were being rejected. The SOP also allowed laboratory technologists to create an order under the patient's PCP name when FIT specimens received in the laboratory did not have a FIT order. If there was no PCP listed for the patient, the laboratory technologist could create an order under the Chief of Primary Care or his/her designee, ensuring that every FIT kit lacking a PCP order would still be analyzed. In addition, if there was no collection date written on the FIT kit, but the date of the FIT kit order was less than 14 days before receipt, the laboratory technologist could proceed with testing the specimen.
Third, we initiated a focus group of front-line nurses at an ambulatory care center in September of 2017 to better understand the issues relating to successful completion of FIT tests. Based on recommendations from this focus group and discussions with nursing leadership, the clinical informatics group revised the FIT clinical reminder. Whenever a user selected a FIT clinical reminder to complete it, a standardized script appeared, prompting the user to remind patients to (1) return FIT kits within 48 hours of specimen collection, (2) record collection date on the FIT envelope, and (3) return FIT kits either by mail or to the nearest clinic/laboratory. The scripts served as reminders for staff to share patient instructions at “point of care” to facilitate timely specimen submissions.
Fourth, in October 2017, we created a process to allow satellite laboratories to accession specimens if the patient returned the FIT kit directly to a satellite laboratory. Previously, satellite laboratories had mailed the received specimen directly to the laboratory without opening the specimen envelopes, resulting in rejection of the specimen by the main laboratory if the specimen was missing critical information (e.g., collection date). By allowing satellite laboratories to accession specimens, the laboratory staff could request any missing information from the patient at the time that the patient submitted the specimen, thereby reducing the number of rejected specimens from a lack of patient information or recorded collection date.
Finally, multiple meetings with clinic sites and analysis of illegible handwritten specimens revealed that there were three clinic sites that did not have working FIT-compatible printers and were therefore handwriting patient information onto the FIT kits. Our institution's information technology group subsequently procured and deployed compatible printers to these sites, allowing them to print FIT labels and decreasing the need to handwrite patient information.
Phase 2 (November 2017–March 2018)—Sustainability
Once interventions were in place, we monitored the data for sustainment of the rejection rate; however, several deficiencies were revealed.
Automated phone calls were not made on a weekly basis as planned. The causes of this deficiency were likely multiple, including the absence of a backup process, lack of standardization for when the automated phone calls would be made, and an inefficient process. The first two issues were resolved by developing backup processes when the primary individuals were unavailable and by designating calls to be made every Monday. The process was made more efficient by transferring responsibilities to an individual skilled in writing Structured Query Language scripts, completing the data extraction process in minutes as opposed to hours.
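The data-extraction step described above, pulling the weekly list of patients with outstanding FIT kits, can be sketched as a single query. The schema, table, and numbers below are entirely hypothetical (the real extraction ran as a SQL script against the institution's systems, whose structure is not described):

```python
import sqlite3

# Toy in-memory database with a hypothetical schema standing in for the
# institution's data warehouse.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE fit_kits (
    patient_id INTEGER, phone TEXT, issued_date TEXT, returned INTEGER)""")
con.executemany(
    "INSERT INTO fit_kits VALUES (?, ?, ?, ?)",
    [(1, "555-0100", "2017-11-01", 0),
     (2, "555-0101", "2017-11-02", 1),
     (3, "555-0102", "2017-11-03", 0)])

# One query replaces the slow manual process: list patients who received a
# FIT kit but have not yet returned it, with their phone numbers.
rows = con.execute(
    "SELECT patient_id, phone FROM fit_kits "
    "WHERE returned = 0 ORDER BY patient_id").fetchall()
```

Turning the extraction into a stored, repeatable query is what reduced the weekly task from hours to minutes and made it easy to hand off to a backup staff member.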
In addition, due to competing priorities, several of the interventions were delayed. The revised clinical reminder was not deployed until November 2017 and FIT-compatible printers were not delivered to clinic sites until April 2018. Throughout these processes, constant communication was necessary to ensure progression of these initiatives.
During Phase 1, excluding May (because reliable data were not available during this month), the rejection rate averaged 17.7% (716 rejections/4,046 total specimens; range, 9.0–28.6%/month). In June 2017, the first month in which data were reliably collected, the laboratory received 928 FIT specimens and rejected 265 of these specimens, for a 28.6% rejection rate (Figure 2). The breakdown of rejections as a percent of total specimens received was as follows (Figure 1): 13.7% (expired), 6.4% (no collection date/time), 5.2% (no physician orders), 1.5% (no patient information), 1.5% (illegible handwriting), and 0.3% (miscellaneous).
The rejection rate for Phase 2 averaged 7.7% (443 rejections/5,768 total specimens; range, 5.6–11.1%/month). In May 2018, the breakdown of rejections as a percent of total specimens received (n = 794) was as follows: 3.5% (expired), 2.6% (no collection date/time), 0% (no physician orders), 1.3% (no patient information), 0.1% (illegible handwriting), and 0.3% (miscellaneous) (Figure 3). The Wilcoxon–Mann–Whitney test demonstrates a statistically significant difference between Phase 1 and Phase 2 (p-value = .015).
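The Wilcoxon–Mann–Whitney comparison reduces to counting, across all Phase 1/Phase 2 month pairs, how often the Phase 1 rate exceeds the Phase 2 rate (the U statistic). A pure-Python sketch with illustrative monthly rates, not the study's actual data (the authors ran the test in R):

```python
# Illustrative monthly rejection rates for the two phases (hypothetical).
phase1 = [0.286, 0.25, 0.20, 0.15, 0.12, 0.09]
phase2 = [0.111, 0.09, 0.08, 0.07, 0.06, 0.056]

def mann_whitney_u(x, y):
    """U = number of (x_i, y_j) pairs with x_i > y_j, counting ties as 0.5."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

u = mann_whitney_u(phase1, phase2)
# A U close to len(phase1) * len(phase2) indicates that Phase 1 rates are
# almost uniformly higher than Phase 2 rates; a p-value would then be read
# from the U distribution (e.g., via scipy.stats.mannwhitneyu in practice).
```

Because the test compares ranks rather than means, it is appropriate for the small number of monthly observations here, where normality cannot be assumed.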
The statistical process control (SPC) p-chart shows a decrease in the average rejection rate and tighter control limits in Phase 2 compared with Phase 1 (Figure 4). In Phase 2, six of seven data points are within the control limits. The CRC screening rate between June 2017 and February 2018, the latest month for which data were available at the time of writing, ranged from 78% to 91% but did not demonstrate a statistically significant change (p-value = .69).
Limitations of this project include the inability to obtain a preintervention baseline due to an unreliable data collection process. We were also unable to compare the effectiveness of each individual intervention that was implemented, and it is not clear if the interventions were more effective for specific subsets of patients. Specifically, with the automated phone call reminder systems, there was no method to ensure that phone numbers were correct and there was no recourse if our system did not have the correct contact information. Finally, because successful completion of a FIT kit satisfies the CRC screening requirement for 1 year, the full impact of the rejection rate on CRC screening cannot be determined until additional CRC screening data points are collected.
Although most interventions to improve CRC screening rates focus on completion rates, we focused on reducing the rate of rejected specimens. We successfully reduced the rate of rejected specimens from 28.6% to <10% by December 2017, accomplishing our immediate goal for this initiative. We attribute this success to three factors. First, we implemented a simple but robust data collection system that allowed us to categorize rejected specimens; as a result, the data provided a better understanding of the root causes of specimen rejection. Second, interventions were specifically targeted at the root causes, beginning with the most common cause identified. This strategy follows the Pareto principle,12 which holds that roughly 80% of issues can be explained by 20% of the causes. Third, individuals from multiple healthcare disciplines worked together to allow for successful execution of the interventions. The FIT screening process is a multidisciplinary effort, and it was therefore necessary to create interventions that affected all parts of this process.
We sought to reduce FIT specimen rejections and thereby support CRC screening completion, and we succeeded by understanding root causes and implementing multidisciplinary, targeted interventions. In addition, we have documented a rejection rate that is nearly in statistical process control, with a range of 5.6–11.1%, at our institution. In the absence of a clear externally established benchmark, this range of rejection rates may serve as a reasonable new baseline for our institution.
This initiative tackles one aspect of ensuring high CRC screening rates: reducing the number of rejected specimens. The success of this project demonstrates that structured project management, with a focus on understanding root causes and multidisciplinary collaboration, is important in creating change in healthcare. Future plan-do-study-act cycles will work toward reducing the rejection level even further and ensuring a sustainable mechanism to maintain low specimen rejection levels. In addition, future initiatives will examine other aspects of the FIT CRC screening process, such as patient FIT kit return rates. Overall, these initiatives will increase CRC screening, with the aim of reducing the incidence of advanced CRC.
Linda Kim, PhD, RN, and Maryanne Chumpia, MD, assisted in various parts of this initiative. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States Government.
1. Arnold M, Sierra MS, Laversanne M, Soerjomataram I, Jemal A, Bray F. Global patterns and trends in colorectal cancer incidence and mortality. Gut. 2017;66:683–691.
2. Centers for Disease Control and Prevention. Vital signs: Colorectal cancer screening test use—United States, 2012. MMWR Morb Mortal Wkly Rep. 2013;62:881–888.
3. Lin JS, Piper MA, Perdue LA, et al. Screening for colorectal cancer: Updated evidence report and systematic review for the US Preventive Services Task Force. JAMA. 2016;315:2576–2594.
4. Rabeneck L, Rumble RB, Thompson F, et al. Fecal immunochemical tests compared with guaiac fecal occult blood tests for population-based colorectal cancer screening. Can J Gastroenterol. 2012;26:131–147.
5. Vart G, Banzi R, Minozzi S. Comparing participation rates between immunochemical and guaiac faecal occult blood tests: A systematic review and meta-analysis. Prev Med. 2012;55:87–92.
6. Coury J, Schneider JL, Rivelli JS, et al. Applying the Plan-Do-Study-Act (PDSA) approach to a large pragmatic study involving safety net clinics. BMC Health Serv Res. 2017;17:411.
7. Gordon NP, Green BB. Factors associated with use and non-use of the fecal immunochemical test (FIT) kit for colorectal cancer screening in response to a 2012 outreach screening program: A survey study. BMC Public Health. 2015;15:546.
8. Green BB, Fuller S, Anderson ML, et al. A quality improvement initiative to increase colorectal cancer (CRC) screening: Collaboration between a primary care clinic and research team. J Fam Med. 2017;4:1115.
9. Coronado GD, Rivelli JS, Fuoco MJ, et al. Effect of reminding patients to complete fecal immunochemical testing: A comparative effectiveness study of automated and live approaches. J Gen Intern Med. 2018;33:72–78.
10. Karcher DS, Lehman CM. Clinical consequences of specimen rejection: A College of American Pathologists Q-Probes analysis of 78 clinical laboratories. Arch Pathol Lab Med. 2014;138:1003–1008.
11. Trivedi AN, Wilson IB, Charlton ME, et al. Agreement between HEDIS performance assessments in the VA and Medicare Advantage: Is quality in the eye of the beholder? Inquiry. 2016;53:1–3.
12. Harolds J. Quality and safety in health care, part I: Five pioneers in quality. Clin Nucl Med. 2015;40:660–662.
Caleb Cheng, MD, is a board-certified pathologist and transfusion medicine subspecialist who is currently a VA Quality Scholar at the VA Greater Los Angeles Healthcare System in Los Angeles, CA. He is a practicing pathologist and is currently involved predominantly in quality initiatives relating to SAIL measures.
David A. Ganz, MD, PhD, is a practicing internist and geriatrician, and serves as associate director of the VA Greater Los Angeles HSR&D Center for the Study of Healthcare Innovation, Implementation and Policy (CSHIIP) in Los Angeles, CA. He is also an associate professor of Medicine at UCLA, and an adjunct natural scientist at RAND.
Evelyn T. Chang, MD, MSHS, is a physician investigator at the VA Greater Los Angeles Healthcare System and serves as the associate director of the VA Quality Scholars Fellowship at the VA Greater Los Angeles Healthcare System. Her research interests are in primary care-mental health integration and high-needs, high-cost patient populations.
Alexis Huynh, PhD, MPH, is a healthcare policy analyst and health services researcher with interests in quality improvement and implementation science. She has specific training and expertise in mixed-methods research designs including multilevel modeling in quantitative methods and qualitative methods and analyses, as well as evaluations of evidence-based quality improvement interventions. She is an investigator at the VA HSR&D Center for the Study of Healthcare Innovation, Implementation, & Policy (CSHIIP) in Los Angeles, CA, where her current work focuses on enhancing patient engagement and retention of women Veterans, patient experiences in patient-centered medical homes (PCMH), and intensive management of high-risk patients in PCMH.
Shelly de Peralta, DNP, ACNP-BC, is an acute care nurse practitioner with 11 years of experience in quality improvement at the VA Greater Los Angeles in Los Angeles, CA. She has been an active member of the SAIL steering committee for the past 8 years, focusing on measures related to length of stay, standardized mortality ratio, inpatient complications, infection control, HEDIS measures, and national hospital quality (ORYX) measures. She is a member of the Quality Executive Council and reports to both the Chief of Staff (through Deputy Chief of Staff for Quality, Safety and Value) and Nurse Executive at GLA. Dr. de Peralta is the responsible program mentor for fellows in the clinician-executive track.
Journal for Healthcare Quality is pleased to offer the opportunity to earn continuing education (CE) credit to those who read this article and take the online posttest at www.nahq.org/journal/ce. This continuing education offering, JHQ 277 (41.2 March/April 2019), will provide 1 hour to those who complete it appropriately.
Core CPHQ Examination Content Area
IV. [Domain - Performance Measurement & Improvements.]
Journal for Healthcare Quality CE
- Understand an institution's process and experience in conducting an improvement project
- Describe attributes of a successful quality improvement initiative
- Understand how various quality improvement (QI) tools were used in a QI project
1. Which colorectal cancer screening test requires annual screening?
- Fecal immunochemical test

2. Where is the fecal immunochemical test sample typically obtained?
- Urgent Care

3. Which type of statistical process control chart did the authors use to evaluate their binary ("yes/no") data about whether specimens were rejected?

4. What chart did the authors use to understand the relative importance of the root causes of the specimen rejections?
- Run Chart
- Pareto Chart
- Statistical Process Control Chart

5. Why did the automated phone call intervention not work as planned?
- Software downtime did not allow for calls to go out as planned
- Lack of a backup process when the primary responsible staff member was absent
- The system was too costly to run on a weekly basis
- The patient contact phone numbers were wrong

6. To which factor did the authors attribute the success of their project?
- Multidisciplinary targeted interventions
- Exemplary leadership
- Adequate funding
- Fortunate circumstances

7. How did the authors seek to understand the process?
- The authors reviewed past manuscripts
- The authors recalled the process from prior experience
- The authors spoke with front-line staff
- The authors discussed the process with executive leadership

8. Which cause for rejection decreased to zero as a result of creation of the standard operating procedure?
- Rejected specimens due to expiration of specimen sample
- Rejected specimens due to lack of ordering provider
- Rejected specimens due to illegible handwriting
- Rejected specimens due to missing patient information

9. Which of the authors' interventions would theoretically decrease rejected specimens due to illegible handwriting down to zero?
- Provision of label printers
- Clinical reminder prompts
- Automated call reminder system
- Focus groups

10. What was a stated limitation of the authors' quality improvement project?
- The authors could not compare the effectiveness of the various individual interventions they undertook on the rejection rate.
- The authors could not show a significant reduction in specimen rejection rate.
- The authors could not demonstrate the effectiveness of their interventions.
- The authors could not sustain the improvements they made over a period of 6 months.