Methods Employed in Monitoring and Evaluating Field and Laboratory Systems in the ANISA Study: Ensuring Quality

Connor, Nicholas E. MSc; Islam, Mohammad Shahidul MSc; Arvay, Melissa L. MPH; Baqui, Abdullah H. DrPH; Zaidi, Anita K. SM; Soofi, Sajid B. FCPS; Panigrahi, Pinaki PhD; Bose, Anuradha MD; Islam, Maksuda BA; Arifeen, Shams El DrPH; Saha, Samir K. PhD; Qazi, Shamim A. PhD; for the ANISA Methods Group

The Pediatric Infectious Disease Journal: May 2016 - Volume 35 - Issue 5 - p S39–S44
doi: 10.1097/INF.0000000000001105
ANISA Supplement

Background: The Aetiology of Neonatal Infection in South Asia (ANISA) study maintains operations in Bangladesh, India and Pakistan. We developed and deployed a multilayered monitoring system to measure performance indicators of field sites and laboratory operations. This system allows for real-time provision of feedback to study site teams and project stakeholders. The goal of this monitoring and evaluation system is to promote optimal performance and consistency in protocol application at all sites over the course of the study, thereby safeguarding the validity of project findings. This article describes each of the interdependent monitoring layers that were conceptualized, developed and employed by the ANISA coordination team.

Methods: Layers of monitoring include site-level, central and database-related activities along with periodic site visitation. We provide real-world examples of how feedback from the ANISA monitoring system directly informed crucial decisions and course corrections during the project.

Conclusion: The ANISA monitoring system represents a transparent, understandable and practical resource for development of project monitoring systems in complex multisite health research projects.

From the *Centre for Child and Adolescent Health, International Centre for Diarrhoeal Disease Research, Dhaka, Bangladesh; Child Health Research Foundation, Dhaka, Bangladesh; Centers for Disease Control and Prevention, Atlanta, Georgia; §Department of International Health, International Center for Maternal and Newborn Health, Johns Hopkins Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland; The Aga Khan University, Karachi, Pakistan; Center for Global Health and Development, College of Public Health, University of Nebraska Medical Center, Omaha, Nebraska; **Christian Medical College, Vellore, India; and ††Department of Maternal, Newborn, Child and Adolescent Health, World Health Organization, Geneva, Switzerland.

Accepted for publication January 10, 2016.

The members of the ANISA Methods Group are listed in the Acknowledgments.

The ANISA study is funded by the Bill & Melinda Gates Foundation (Grant No. OppGH5307). The authors have no other funding or conflicts of interest to disclose.

Address for correspondence: Nicholas E. Connor, MSc, Child Health Research Foundation, 10-Ga, Road-2, Shyamoli, Dhaka-1207, Bangladesh. E-mail: nick@chrfbd.org.

This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial License 4.0 (CCBY-NC), where it is permissible to download, share, remix, transform, and build upon the work provided it is properly cited. The work cannot be used commercially.

The Aetiology of Neonatal Infection in South Asia (ANISA) study is designed to collect high quality demographic, epidemiological, clinical and microbiological data to describe the risk factors and etiology of young infant infections in South Asian communities. This project captures infection cases in the community from the first day of life up to 2 months of age (0–59 days) in a mix of periurban and rural settings.1 The project team is composed of local field and laboratory staff at study sites in Bangladesh, India and Pakistan led by pediatric and global health experts, whereas overall activities are coordinated by the Child Health Research Foundation (CHRF), located in Dhaka, Bangladesh. The monitoring activities operate under the close oversight and guidance of the project principal investigator (PI) and the project coordination team, which includes personnel from the International Centre for Diarrhoeal Disease Research, Bangladesh (icddr,b), the World Health Organization and the U.S. Centers for Disease Control and Prevention (CDC). An external Technical Advisory Group provides advice to the PI and coordination team. The study’s findings will be used to inform meaningful strategies to improve child survival.2

ANISA’s combination of intensive pregnancy and neonatal infection surveillance, referral, clinical assessment, specimen collection from both cases and controls and diagnostic laboratory activities is complex.3 Furthermore, the project is conducted under different conditions and preexisting management structures at each site. Managing this complexity and achieving the project goals necessitate harmonization of key project activities through a comprehensive monitoring and evaluation system.

STUDY OPERATIONS

The ANISA study is located at 5 sites: Karachi and Matiari in Pakistan; Odisha and Vellore in India; and Sylhet in Bangladesh. In total, the study employs nearly 400 community health workers (CHWs), myriad laboratory staff, clinicians, nurses, phlebotomists, researchers, adjunct staff and site managers. Community-based possible serious bacterial infection (pSBI) surveillance takes place at sites that have a total population of over 2 million and represent a broad cross-section of the South Asian population.3 CHWs and clinicians fill out a set of data collection forms for each study participant. A centrally designed specimen labeling and tracking system records details of specimens drawn from cases and healthy controls, from collection to diagnostic testing.4 Standardized laboratory forms are also used to record each logical step of the diagnostic procedures performed on each specimen type.

A standard data capture system was developed to record data collection forms from both the field and the laboratory; all of these data are uploaded weekly to the central study database in Dhaka. The ANISA data system also includes a customized text message system that operates in real time to select healthy controls in the field.5

The strength of the ANISA study rests on the expertise of its partner sites that were chosen based on their experience in conducting population-based maternal and child health studies in their communities.2,3,6–13 Although the coordination team in Dhaka has to ensure that the protocol is properly implemented, the team relies on the experience and innovation of the site teams to apply the protocol and fulfill study objectives while taking local conditions into account. Thus, the monitoring and evaluation system needs to be vigilant enough to provide useful feedback to sites while taking their different contexts into account.

DESIGN OF THE MONITORING STRUCTURE

The ANISA project employs a 4-level monitoring structure, which can be conceptualized as a pyramid, with the base supporting and informing the subsequent higher layers. Internal quality assurance/quality control (QA/QC) activities are conducted on a weekly basis by site personnel. These activities underpin the figures in the monthly reports that are returned to the coordination team. An additional layer of monitoring, a series of queries run against the central database, is repeated on a bimonthly basis to check adherence to protocols. Lastly, periodic site visits ensure that the aforementioned monitoring activities are working properly (Fig. 1). This system provides a common platform for collecting monitoring information and informs questions and feedback between the Coordination Center in Dhaka, its affiliates (icddr,b and CDC) and the field and laboratory teams (Fig. 2). This feedback allows for detailed, confident reporting of study progress to the coordination center via web conferences, and from Dhaka to other study stakeholders.

FIGURE 1

FIGURE 2

Level 1: Internal Routine Site QA/QC Checks

The base of the monitoring pyramid relies on direct site-level QA/QC of all field and laboratory activities. Although management structures vary by site, the basic field hierarchy involves site managers overseeing field supervisors, who then oversee the CHW supervisors, who in turn oversee the CHWs.

Among CHWs, supervision and auditing with feedback are effective in achieving and maintaining high-quality performance, and having several parallel support and monitoring techniques improves and maintains performance levels.14 Therefore, ANISA project sites employ various mutually supportive methods to ensure continued optimal performance of CHWs. These methods include routine rechecking of data collection forms and random observation of CHW home visits by supervisors, who subsequently provide feedback or retraining to CHWs to improve and maintain performance. Supervisors also conduct random independent visits at a subset of households to re-check the CHW’s assessment accuracy and data collection form entries. The coordination team provides guidelines, including minimum frequencies of random and systematic checks, although sites are free to increase their oversight in this regard (Table 1). Additionally, refresher training is provided via both regular and targeted retraining sessions to maintain the quality of young infant assessment and record-keeping skills over time. Common inconsistencies found in the on-site data centers are relayed to field management so that they can identify the source of these errors and correct problems in filling out the data collection forms. Together, these internal site QA/QC techniques allow for rapid detection and correction of issues and serve to safeguard the quality of study data.

TABLE 1
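
The random re-check procedure described above can be illustrated with a short sketch. The function below is a hypothetical example, not ANISA code, of how a supervisor's weekly re-check list might be drawn at random from a CHW's completed visits; the minimum re-check fraction, household identifiers and function name are illustrative assumptions rather than the frequencies actually listed in Table 1.

import random

def select_recheck_households(completed_visits, min_fraction, seed=None):
    """Select a random subset of a CHW's completed household visits for
    independent re-assessment by a supervisor. min_fraction stands in for the
    site-specific minimum re-check frequency; actual ANISA values are not
    reproduced here."""
    if not completed_visits:
        return []
    rng = random.Random(seed)
    n_required = max(1, round(min_fraction * len(completed_visits)))
    return rng.sample(completed_visits, n_required)

# Example: draw this week's re-check list for one CHW (identifiers are made up).
visits = ["HH-%03d" % i for i in range(1, 41)]   # 40 household visits completed
print(select_recheck_households(visits, min_fraction=0.10, seed=7))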

All sites face different contextual challenges, including different levels of in-migration and out-migration, internal movement of pregnant women, traffic conditions, weather, law-and-order situations as well as unique geographical features. Study staff address these challenges by employing locally appropriate strategies. Strong internal site QA, setting realistic site-specific targets for activities and ongoing community engagement create a system of accountability and project ownership at the site level, forming the strong base of the monitoring and evaluation pyramid.

Level 2: Performance Monitoring via Monthly Field and Laboratory Report Forms

The primary goal of the study is the collection and analysis of a representative sample of quality biological specimens from both sick young infants showing signs of pSBI and healthy control infants, in the first 59 days of life in community settings. Achieving this goal involves coordination of research activities in the field and laboratories via continuous monitoring of a number of performance indicators. These indicators are connected through a chain of dependent activities at each site, from recruitment of participants and scheduled visitation through to the successful collection and processing of specimens. Each stage of this workflow can act as a bottleneck and limit overall field success and thus the representativeness of the study, threatening the validity of findings. The coordination team monitors activities using quantitative report forms submitted by the sites on a monthly basis. These forms contain 64 numerical elements covering the field (40) and laboratory (24) components with specific significance to project performance (Table 2). Laboratory forms capture key counts, including samples tested, bacterial isolates from blood culture and molecular test results.

TABLE 2

Monthly report evaluation is performed across all sites using simple, transparent formulae, allowing project performance data to be pooled and cross-compared. When evaluating these forms, readily interpretable estimates are used to show performance, initiate dialogue and encourage site teams to continue real progress or undertake appropriate corrective actions where indicated. In our multisite study, this economy of field and laboratory elements, combined with straightforward evaluation and feedback, is preferable to more complex calculations, which could shift discussion toward disagreements about the calculations rather than the underlying performance.

The monthly monitoring reports are collected from all sites within 10 days of the end of the preceding month. The figures from these reports are recorded in Microsoft Excel, which is used to produce routine performance charts each month. These charts compare the same metrics against both the study targets and the site’s performance in previous months. Charts and figures are reviewed and shared; monthly declines of more than 10% in any of the critical areas, or other notable trends, are routinely discussed with each respective site during conference calls.
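
As a minimal sketch of this evaluation logic (ANISA used Microsoft Excel rather than code), the example below compares a monthly indicator against its study target and flags any month-on-month decline of more than 10%; the indicator names, figures and targets are hypothetical.

DECLINE_THRESHOLD = 0.10  # the 10% month-on-month decline flag described above

def evaluate_indicator(name, current, previous, target):
    """Compare one monthly indicator against its target and its previous value."""
    flags = []
    if target is not None and current < target:
        flags.append("below target (%s < %s)" % (current, target))
    if previous:  # skip the change calculation when no previous month exists
        change = (current - previous) / previous
        if change <= -DECLINE_THRESHOLD:
            flags.append("declined %.0f%% vs. previous month" % (abs(change) * 100))
    return {"indicator": name, "value": current, "flags": flags}

# Hypothetical figures for one site and month; element names and targets are illustrative.
for row in (
    evaluate_indicator("live births identified", current=180, previous=210, target=200),
    evaluate_indicator("blood specimens collected from cases", current=48, previous=47, target=45),
):
    print(row)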

Level 3: Database-based Monitoring

All completed field data collection forms and laboratory results are uploaded to a central database and are available for analysis by the coordination team to support monitoring and evaluation activities. Querying incoming data allows for more in-depth monitoring of field, clinical and laboratory performance than is possible via the monthly reports. The database query also helps determine if a site’s data upload quantity and quality are being maintained and are consistent with expectations.

Database frequency tables of entered forms are routinely cross-compared with the expected number of forms collected and studied alongside the monthly reports. This cross-comparison is key to evaluating a site’s data entry performance and to avoiding any large backlog of form entry.
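
A minimal sketch of this cross-comparison is shown below; the form codes, counts and backlog tolerance are made up for illustration, and the actual ANISA checks ran as queries against the central database rather than over in-memory dictionaries.

# expected: forms reported as collected in the monthly report (hypothetical codes/counts);
# entered:  forms present in the database frequency table for the same period.
expected = {"pregnancy_surveillance": 1250, "birth_notification": 1180, "case_report": 96}
entered = {"pregnancy_surveillance": 1248, "birth_notification": 1050, "case_report": 96}

BACKLOG_TOLERANCE = 0.05  # illustrative cut-off, not an ANISA-defined value

for form, n_expected in expected.items():
    gap = n_expected - entered.get(form, 0)
    if n_expected and gap / n_expected > BACKLOG_TOLERANCE:
        print("%s: %d forms awaiting entry (%.0f%% of expected)"
              % (form, gap, 100 * gap / n_expected))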

The quality of the incoming data is scrutinized using (i) the proportion of data collection forms that undergo the required double entry and (ii) automated internal consistency and logical checking algorithms. Feedback tables are regularly shared with data personnel and site teams to ensure early resolution of problems. We also utilize historical data on field activities, detected clinical signs and the real-time data in the text message system, which synchronizes with the data server daily to detect births, case diagnoses and control enrollment. Together, these data sets are reviewed, and feedback is provided to sites. With the help of the site teams, appropriate inquiries and solutions are devised by cross-comparing the monthly monitoring forms, the database and the text message data. Table 3 illustrates how we used the database to identify and solve problems related to healthy control selection.

TABLE 3
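
The following sketch illustrates the kind of internal consistency and logical checks described above; the field names, rules (for example, that an assessment cannot predate birth or fall outside the 0–59 day window, and that a referral implies at least one detected pSBI sign) and the example record are hypothetical, and this is not the actual ANISA checking code.

from datetime import date

def check_record(record):
    """Return any logical inconsistencies found in one field record."""
    problems = []
    age_days = (record["date_of_assessment"] - record["date_of_birth"]).days
    if age_days < 0:
        problems.append("assessment dated before birth")
    elif age_days > 59:
        problems.append("infant older than 59 days at assessment")
    if record["referred"] and not record["psbi_signs"]:
        problems.append("referral recorded without any pSBI sign")
    return problems

# A deliberately inconsistent example record (field names are hypothetical).
record = {
    "date_of_birth": date(2013, 6, 1),
    "date_of_assessment": date(2013, 8, 15),
    "psbi_signs": 0,
    "referred": 1,
}
print(check_record(record))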

Project laboratory data are entered into the site database directly by the laboratory staff. These data are securely transmitted to the coordination center and subsequently to CDC for in-depth analysis, evaluation, detailed feedback and suggestions for resolution of problems found in processing and analysis of specimens. The project database allows the coordination team to confirm and detail issues detected by the monthly monitoring reports, streamline data capture, ensure data quality and inform CDC of all relevant laboratory details for evaluation and feedback.

Level 4: Site Visits

Although local, monthly and database monitoring and evaluation activities provide the coordination team with copious monitoring data, there is no substitute for meeting with and observing project personnel in action. Thus, periodic visits are made to the study sites by internal and external monitors to observe implementation of the protocol and the daily activities of staff in the community, health care facilities, data centers and study laboratories. Internal monitors are selected from the ANISA coordination team and project administrators, whereas external monitors are subject matter experts from the World Health Organization, the Hospital for Sick Children in Toronto, CDC and other affiliated organizations with expertise in critical appraisal of similar projects in complex field and laboratory environments.

These visits are informed by monthly database monitoring activities and provide further information on site functionality, workflow, staff skills and other practical issues.

The monitors write comprehensive visit reports, which are reviewed and shared with the coordination team and the site leaders for discussion and prompt resolution of detected issues. External monitors also provide expert advice and ideas that are beneficial to project implementation at the site. A standard list of project elements is checked during these visits (Table 4), including logistical aspects. Site visits are scheduled once or twice a year and are indispensable for detecting and resolving issues.

TABLE 4

Both internal and external monitors are briefed in detail before these visits and provided with site-specific recommendations generated by previous visits and by current database and monthly monitoring reports. Site visits present a unique opportunity to build rapport, provide feedback to sites and resolve in person any pending issues flagged by other monitoring activities. Feedback from these visits and recommendations for improvement of protocol application are shared with the sites and stakeholders so that appropriate remedial actions can be agreed upon with the site leaders.

APPLICATION AND UTILITY OF THE MONITORING STRUCTURE—THE ANISA EXPERIENCE

Pilot Study

The pilot phase of the study was implemented at each ANISA site. All study physicians, support and laboratory staff and data entry personnel used this time period to become familiar with the protocol and to gain proficiency working together in implementing it, identifying issues, and providing feedback to the coordination team. Critically, the monitoring system was in place, and monitoring data were collected from the beginning of the pilot phase. The pilot phase also allowed for refinement and improvement of the elements within the monitoring tools and analysis techniques used.

The pilot phase involved monitoring key performance elements such as the timing of the first assessment of the newborn after delivery, the proportion of successful referrals, levels of consent to specimen collection, the time taken for specimens to reach the laboratory and blood contamination rates. For the sites in Bangladesh and Pakistan, basic thresholds were agreed upon by the respective site PIs and the coordination team to indicate whether a site was operating satisfactorily (Table 5). Based on evaluation of these site performance data, the site leaders and the coordination team retrospectively agreed upon the end date of the pilot phase and the commencement of the main study at that site (see Fig. 3 for more details).

TABLE 5

FIGURE 3
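
As an illustration of how such threshold checks might be expressed, the sketch below compares a set of pilot indicators against agreed cut-offs; the indicator names, observed values and thresholds are placeholders and do not reproduce the figures actually agreed for the ANISA sites or shown in Table 5.

def meets(observed, required, higher_is_better):
    """True when an indicator satisfies its agreed threshold."""
    return observed >= required if higher_is_better else observed <= required

# indicator: (observed, required, higher_is_better); all values are placeholders.
pilot_indicators = {
    "timely first postnatal assessment (%)": (88.0, 85.0, True),
    "successful referral (%)": (72.0, 80.0, True),
    "consent to specimen collection (%)": (91.0, 90.0, True),
    "blood culture contamination (%)": (6.5, 5.0, False),
}

failures = [name for name, (obs, req, better) in pilot_indicators.items()
            if not meets(obs, req, better)]
print("ready to close pilot" if not failures else "continue pilot: %s" % failures)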

For the sites in Bangladesh and Pakistan, the earliest sites to commence, the pilot phase was retroactively defined (as explained above); the Indian sites, which started much later and had the benefit of joining the study after extensive streamlining of procedures, undertook a predefined pilot period of 1 month. Conducting a pilot phase at each ANISA site until the desired performance levels were reached, and maintaining that performance afterwards, required the commitment of the site leaders and regular feedback from the coordination team and stakeholders.

SUMMARY AND CONCLUSION

Together, ANISA’s 4 overlapping monitoring layers and straightforward evaluation and communications allow the study investigators to detect issues and track numerous developments in the field, clinical and laboratory components of the project to make informed decisions.

A comprehensive system of harmonized site monitoring elements increases the likelihood that each site adheres strictly and evenly to all elements of the study protocol. It allows both the coordination team and the donor to be confident in the site investigators’ understanding and control of issues within the bounds of the study. The ability to examine key activities across project sites in a simple, inclusive and transparent way also allows for a shift from an implementation focus to a management focus. Finally, it enables a shift to a results-based monitoring focus once the sites are functioning satisfactorily and the project is on a path towards fulfilling its objectives.

Continuous monitoring of project activities has led to the adoption of various new strategies, staffing arrangements and oversight measures that strengthen both the active and passive surveillance structures at the study sites. These strategies have already improved coverage and maximized the engagement of sites with their at-risk communities.

A blood contamination workshop in February 2012 (Fig. 4) was the direct result of regular monitoring of contamination rates at the study sites from the pilot phase. If the results of laboratory investigations had not been monitored and scrutinized so regularly and closely, it would not have been possible to justify devising new strategies and developing more intensive guidelines for limiting contamination of neonatal blood specimens; it certainly would not have allowed these measures to be implemented mid-stream to improve study outcomes.

FIGURE 4

In conclusion, the straightforward monitoring structure of ANISA is indispensable for identifying, sharing and resolving underlying issues with the help of partners and continues to facilitate the success of the project and safeguard the validity of the study findings.

ACKNOWLEDGMENTS

The authors gratefully acknowledge the technical assistance of Mr. Mahmudur Rahman and Ms. Mahfuza Marzan, as well as our numerous talented and supportive colleagues at CHRF, icddr,b and CDC.

The ANISA Methods Group: A. S. M. Nawshad Uddin Ahmed and Belal Hossain (Child Health Research Foundation, Dhaka, Bangladesh); Qazi Sadeq-ur Rahman and Tanvir Hossain (Centre for Child and Adolescent Health, International Centre for Diarrhoeal Disease Research, Dhaka, Bangladesh); Jonas M. Winchell, Maureen H. Diaz, Nong Shang, Yoonjoung Choi, and Stephanie J. Schrag, DPhil (Centers for Disease Control and Prevention, Atlanta, GA); Aarti Kumar and Vishwajeet Kumar (Community Empowerment Lab, Lucknow, India); Arif Billah, Luke Mullany, Mathuram Santosham and Nazma Begum (Johns Hopkins Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD); Daniel E. Roth (Department of Paediatrics, Hospital for Sick Children and University of Toronto, Canada); Derrick Crook (John Radcliffe Hospital, University of Oxford, Oxford, United Kingdom); Stephen P. Luby (Stanford Woods Institute for the Environment, Stanford University, Stanford, CA) and Abdul Momin Kazi, Imran Ahmed, Shahida M. Qureshi, Sheraz Ahmed and Zulfiqar A. Bhutta (The Aga Khan University, Karachi, Pakistan).

REFERENCES

1. Waters D, Jawad I, Ahmad A, et al. Aetiology of community-acquired neonatal sepsis in low and middle income countries. J Glob Health. 2011;1:154–170
2. Saha SK, Arifeen SE, Schrag SJ. Aetiology of Neonatal Infection in South Asia (ANISA): an initiative to identify appropriate program priorities to save newborns. Pediatr Infect Dis J. 2016;35(Suppl 1):S6–S8
3. Islam MS, Baqui AH, Zaidi AK, et al. Infection surveillance protocol for a multicountry population-based study in South Asia to determine the incidence, etiology, and risk factors for infections among young infants 0 to 59 days old. Pediatr Infect Dis J. 2016;35(Suppl 1):S9–S15
4. Connor NE, Hossain T, Rahman QS, et al. Development and implementation of the ANISA labeling and tracking system for biological specimens. Pediatr Infect Dis J. 2016;35(Suppl 1):S29–S34
5. Rahman QS, Islam MS, Hossain B, et al. Centralized data management in a multicountry, multisite population-based study. Pediatr Infect Dis J. 2016;35(Suppl 1):S23–S28
6. Zaidi AK, Tikmani SS, Sultana S, et al. Simplified antibiotic regimens for the management of clinically diagnosed severe infections in newborns and young infants in first-level facilities in Karachi, Pakistan: study design for an outpatient randomized controlled equivalence trial. Pediatr Infect Dis J. 2013;32(Suppl 1):S19–S25
7. Bhutta ZA, Memon ZA, Soofi S, et al. Implementing community-based perinatal care: results from a pilot study in rural Pakistan. Bull World Health Organ. 2008;86:452–459
8. Bhutta ZA, Soofi S, Cousens S, et al. Improvement of perinatal and newborn care in rural Pakistan through community-based strategies: a cluster-randomised effectiveness trial. Lancet. 2011;377:403–412
9. Baqui AH, El-Arifeen S, Darmstadt GL, et al; Projahnmo Study Group. Effect of community-based newborn-care intervention package implemented through two service-delivery strategies in Sylhet district, Bangladesh: a cluster-randomised controlled trial. Lancet. 2008;371:1936–1944
10. Arifeen SE, Mullany LC, Shah R, et al. The effect of cord cleansing with chlorhexidine on neonatal mortality in rural Bangladesh: a community-based, cluster-randomised trial. Lancet. 2012;379:1022–1028
11. Singh JC, Kekre NS. CMC Vellore—in the service of our nation for more than a century. Indian J Surg. 2009;71:284–287
12. John SM, Thomas RJ, Kaki S, et al. Establishment of the MAL-ED birth cohort study site in Vellore, Southern India. Clin Infect Dis. 2014;59(Suppl 4):S295–S299
13. Chandel DS, Johnson JA, Chaudhry R, et al. Extended-spectrum beta-lactamase-producing Gram-negative bacteria causing neonatal sepsis in India in rural and urban settings. J Med Microbiol. 2011;60(Pt 4):500–507
14. Rowe AK, de Savigny D, Lanata CF, et al. How can we achieve and maintain high-quality performance of health workers in low-resource settings? Lancet. 2005;366:1026–1035
Keywords:

monitoring and evaluation; multi-site; ANISA; neonatal; surveillance; project design

Copyright © 2016 Wolters Kluwer Health, Inc. All rights reserved.