The Clinical and Translational Science Awards (CTSAs) of the National Institutes of Health (NIH) have provided a substantial investment in clinical and translational research infrastructure. Expertise in biostatistics, epidemiology, and research design (BERD) is an essential component of this infrastructure. The CTSAs’ BERD units are typically charged with supporting methodological aspects of research, developing needed analytic techniques, and building collaboration with other investigators. However, there is no blueprint from which to design an optimal BERD unit, and many different models exist across the CTSA Consortium, which, at the end of our study in 2013, consisted of 61 U.S. academic health centers (AHCs). There is a need to understand the different models so that future development of BERD units can leverage common practices and benchmark productivity. A comprehensive characterization of the BERD units’ structure and productivity is essential for this purpose.
Since the beginning of the CTSA program in 2006, evaluation has been a required component and has been emphasized in each NIH funding opportunity announcement. In response to the call for systematic evaluation, the Evaluation Working Group of the CTSA BERD Key Function Committee developed a matrix approach for measuring BERD activities.1 The three domains for evaluation were defined as
- development and maintenance of collaborations with clinical and translational science (CTS) investigators,
- application of BERD-related methods to clinical and translational research, and
- discovery of novel BERD-related methodologies.1–3
These three broad domains were linked to six categories of measures:
- consultations with CTS investigators,
- grant applications submitted and funded,
- protocols developed and reviewed,
- abstracts and manuscripts submitted and accepted,
- new methodologies developed, applied, and distributed, and
- educational activities, courses, and students.2
On the basis of all these criteria, the Evaluation Working Group, of which some of us were members, developed survey instruments and administered them to collect information on these metrics across the consortium. Below, we describe our study and its findings concerning the characteristics of BERD units across the United States.
In 2010, 2011, 2012, and 2013, members of the Evaluation Working Group administered the survey to U.S. AHCs that were members of the CTSA Consortium.
In 2010, 46 institutions were funded as members of the CTSA Consortium. In 2013, the CTSA Consortium had expanded to 61 members. All institutions that were in receipt of CTSA funding between January and April of each year of this survey were invited to participate. Although the precise wording we used in the surveys to define BERD practitioners evolved over the years of the study, we generally defined them as methodologists—such as biostatisticians, epidemiologists, or other quantitative research specialists—who are involved in the CTSAs. The institutional review board (IRB) at the University of Cincinnati determined that the survey did not meet the definition of human subjects research and therefore did not require IRB review and approval.
We sent the BERD survey electronically to each CTSA BERD director or codirector using Research Electronic Data Capture (REDCap), a metadata-driven electronic data capture tool and workflow methodology designed to provide translational research informatics support.4 Information collected included the percentages of BERD personnel effort dedicated to consultation and collaboration, educational activities, development and review of IRB protocols, and development and review of intramural and extramural grants. Additionally, we sought information regarding the total numbers and full-time equivalents (FTEs) for master’s- and doctoral-level epidemiologists and biostatisticians as well as for personnel in other related specialties. The 2010 survey was the first assessment and included questions about methodological research, funding, and career development for BERD personnel. Although the survey instrument evolved over the years, the core information was consistently captured and mapped from year to year. A description of the additions and deletions to the survey instruments over the years is provided in Chart 1. A copy of the 2013 survey is provided as Supplemental Digital Appendix 1, which may be found at http://links.lww.com/ACADMED/A384.
To analyze the survey data, several steps were involved in harmonizing and merging those data across the years of administration. First, we defined a unique identification number for each CTSA institution. Next, we checked for duplicate records within a single CTSA and, when necessary, created a unique record based on the available information in the duplicate records. When response categories were revised over the years, we created a derived variable that mapped across years.
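The harmonization steps described above (unique identifiers, de-duplication within a CTSA, and derived variables that map revised response categories across years) can be sketched in code. This is an illustrative sketch only; the column names (`ctsa_id`, `year`, `consults`) and the category mapping are hypothetical stand-ins for the actual survey variables, and the analyses themselves were performed in SAS.

```python
# Illustrative sketch of the data-harmonization steps; column names
# and category labels are hypothetical, not the actual survey fields.
import pandas as pd

# Example records for two survey years, with a duplicate record in 2012
records = pd.DataFrame(
    {
        "ctsa_id": [101, 101, 101, 102],   # step 1: unique CTSA identifier
        "year": [2011, 2012, 2012, 2012],
        "consults": ["101-200", "100-200", None, "1-100"],
    }
)

# Step 2: collapse duplicate records within a CTSA-year into one record,
# keeping the first non-missing value for each field
deduped = records.groupby(["ctsa_id", "year"], as_index=False).first()

# Step 3: map response categories that were revised between years onto a
# single derived variable that is comparable across years (mapping invented)
category_map = {"1-100": "low", "100-200": "mid", "101-200": "mid"}
deduped["consults_derived"] = deduped["consults"].map(category_map)

print(deduped)
```

Keeping the original category labels alongside the derived variable preserves an audit trail back to each year’s questionnaire wording.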
We conducted a descriptive analysis of the characteristics of BERD units that participated in the survey each year, including the number of years the institution responded to the survey and the number of academic units making up the BERD group. We computed summary statistics to characterize the distributions of reported productivity by year, including annual number of BERD-supported consulting projects, BERD-submitted proposals, published manuscripts with a BERD member listed as first or corresponding author, and whether a BERD unit provides mentoring to junior faculty. Similar analyses were conducted for non-BERD-related project activities; productivity was measured by the number of non-BERD proposals in which BERD assistance was provided, proposals with funding for BERD members, non-BERD published manuscripts acknowledging CTSA funding, and the percentage of consulting provided for junior investigators. All statistical analyses were performed using SAS 9.4 (SAS Institute Inc., Cary, North Carolina).
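The descriptive summaries described above rest on simple order statistics (medians and quartiles), which are robust to the skewed distributions typical of such survey data. As a minimal sketch, assuming invented FTE values (the actual analyses were run in SAS 9.4):

```python
# Minimal sketch of the descriptive summary statistics used in the study.
# The FTE values below are invented for illustration only.
import pandas as pd

fte = pd.Series([1.5, 2.1, 3.0, 3.5, 4.2, 5.3, 8.0])

# Median and interquartile range characterize a skewed distribution
# more robustly than the mean and standard deviation
q1, median, q3 = fte.quantile([0.25, 0.5, 0.75])

print(f"median = {median}, IQR = ({q1}, {q3})")
```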
Response rates to the 2010, 2011, 2012, and 2013 surveys were 93.5% (43/46), 98.2% (54/55), 98.3% (59/60), and 86.9% (53/61), respectively. Because CTSA institutions were funded in a staggered manner, the number of surveys completed by the participating institutions varied from year to year: 53 institutions participated in at least one survey; 36 of the participating institutions contributed data to all four years. Additional details about these results are presented in Table 1.
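The response rates above follow directly from the counts of completed surveys over eligible institutions; a quick arithmetic check using the counts reported in the text:

```python
# Response-rate arithmetic check, using the counts reported in the text
completed = {2010: 43, 2011: 54, 2012: 59, 2013: 53}
eligible = {2010: 46, 2011: 55, 2012: 60, 2013: 61}

rates = {year: round(100 * completed[year] / eligible[year], 1)
         for year in completed}
print(rates)  # {2010: 93.5, 2011: 98.2, 2012: 98.3, 2013: 86.9}
```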
Sizes of BERD units
The sizes of BERD units varied markedly, ranging from 3 to 86 individuals over the length of our study. This wide range, driven in part by the extreme value of 86 individuals, may reflect the many different ways that institutions defined and structured their BERD units and how survey respondents interpreted the definition of BERD practitioners over the survey years. For example, the total number of BERD personnel decreased over time from a median of 14 individuals in 2010 to a median of 10 individuals in 2013, yet the median total BERD FTEs in 2010, 2012, and 2013 remained nearly the same (3.0–3.5 FTEs). Note: These data were not collected in survey year 2011.
Compositions of BERD units
BERD units consisted primarily of biostatisticians, mostly at the doctoral level. In each year, the median FTE for doctoral-level biostatisticians was between 1.1 and 1.4, and the median FTE for master’s-level biostatisticians ranged from 0.9 to 1.2. The median FTE for doctoral-level epidemiologists was at most 0.1. Master’s-level epidemiologists were the least represented group in BERD units in every year (range of 0–9 individuals with a median FTE of 0). The number of FTEs paralleled these trends over survey years 2010, 2012, and 2013. Averaged over those three survey years (2010, 2012, and 2013), the mean first quartile (Q1) and third quartile (Q3) for total FTE personnel were 2.1 and 5.3, respectively, indicating that the middle half of BERD units were resourced to provide between 2.1 and 5.3 FTEs. Additional descriptive data regarding BERD resources are displayed in Table 2.
Although data related to indicators of scholarly output were not collected in survey year 2010, in each of the three subsequent survey years (2011, 2012, and 2013), more than a third of BERD units provided consulting support on 101 to 200 projects per year; more than 22% consulted on 1 to 100 projects. A majority of BERD units reported that between 25% and 75% (in 2011) and 31% to 70% (in 2012) of their consulting was provided to junior investigators. More than two-thirds of BERD units reported contributing to the submission of 20 or more non-BERD grant or contract applications annually; most BERD units—53.7% in 2011, 48.3% in 2012, and 65.9% in 2013—reported assisting in the preparation of up to 50 non-BERD-related grant applications per year. Over the three years, the number of CTSA institutions whose BERD units assisted with the submission of more than 50 non-BERD grants ranged from 11 to 22, of which only 5 to 12 (about half) requested funding for BERD. In 2012 and 2013, more than two-thirds of BERD units—67.8% in 2012, and 69.4% in 2013—reported submitting 10 or fewer proposals annually for which BERD personnel would be listed as principal investigator.
Nearly half of BERD units—49.0% in 2011, 44.8% in 2012, and 53.1% in 2013—reported 1 to 10 manuscripts submitted annually with a BERD practitioner as the first or corresponding author. Whereas 5.6% to 37.3% of BERD units over the years reported that they did not track data regarding the creation of publicly available software packages, more than one-third of BERD units created at least one software package and made it publicly available in each survey year. About one-third of BERD units—34.0% in 2011, 36.2% in 2012, and 34.8% in 2013—submitted 5 to 25 manuscripts that acknowledged CTSA funding and included a BERD practitioner as an author. However, 29.3% to 41.5% of BERD units reported that they did not track whether these manuscripts acknowledged CTSA funding. Regarding development of new methodologies, the majority of BERD units (56.9%–68.8%) reported submitting fewer than 5 proposals that included funding for methods development. For education and mentoring activities, the mean (SD) percentages of personnel time spent during 2010, 2012, and 2013 were 9.6% (6.9%), 15.6% (13.3%), and 11.2% (6.9%), respectively. Other descriptive results are displayed in Table 3.
The purpose of the CTSA Consortium is to improve and transform the efficiency and quality of clinical and translational research by promoting best practices and a team science approach among researchers. Just as one would expect individual institutions to vary according to their capacities and records of accomplishment with respect to this aim, BERD units would also be expected to vary. Systematically assessing BERD units was viewed as an essential step towards understanding common practices and developing national productivity benchmarks for BERD units. The longitudinal data about BERD units obtained through our evaluation provide important insights into BERD functions and national trends within and across years. At a very high level, these data on BERD composition and function illustrate how BERD units were engaging in the mission of the National Center for Advancing Translational Sciences. Also, the data facilitate review of the BERD units at the national level using the evaluation domains proposed by Rubio et al.1,2
Sizes and compositions of BERD units
BERD units were generally resourced to apply a median FTE of just 3.0 to 3.5 over the years. Although the range of FTEs applied was substantial, half of BERD units were resourced to provide between 2.1 and 5.3 FTEs. In contrast to the amount of effort applied, the number of persons engaged in BERD activities was much higher, with a median of 14 individuals in 2010, and 10 individuals in both 2012 and 2013. Overall, units ranged in size from 3 to 86 individuals. The variability in FTEs and number of personnel reflects differences in organizational structure. Whether the ratio of FTEs to number of personnel predicts institutional scholarly output cannot be addressed with the current data.
BERD units were composed mostly of biostatisticians. The median number of doctoral-level biostatisticians in a BERD unit was double the number of master’s-level biostatisticians. BERD units provided far fewer FTEs of epidemiological expertise. The limited involvement of epidemiologists may reflect institutional resources and the research topics aligned with CTSAs, but even so, the data support further exploration into how to integrate epidemiological principles into translational research.
Consultations and collaborations
CTS investigators often approach BERD practitioners for short consultations on a wide range of topics.5 Our surveys revealed that more than one-third of BERD units provided consulting support on 101 to 200 projects annually. It is unknown how many of these consultations were for students, residents, or fellows; our experience is that access to CTSA-supported consulting by these groups varies across the CTSA Consortium. We did find that while the number of CTSA institutions that were assisted with submission of more than 50 non-BERD grants ranged from 11 to 22 over three of the survey years (2011, 2012, 2013), only about half (i.e., 5–12 institutions) requested funding for BERD support in more than 50 grants, which seems at odds with the stated importance of developing collaborative relationships. However, some of this apparent contradiction can be explained by the larger organizational structures in which BERD units are housed. The survey demonstrated that at some institutions, BERD-related expertise was the primary resource available (in terms of FTEs and number of people). At other institutions, BERD-related resources may be only a small fraction of available expertise. Thus, one must be cautious not to assume that all BERD-related productivity and capacity were assessed by our surveys.
Development and dissemination of innovative methodologies
In addition to providing support for clinical and translational studies, BERD practitioners are often expected to develop their own projects focused on development of innovative methodologies. This requirement, along with other academic and professional responsibilities—such as grant review, manuscript review, involvement in oversight and advisory committees, IRBs, ethics committees, and mentoring of junior investigators and students—can reduce the amount of time BERD practitioners can dedicate to the projects of others. These issues should be considered in developing and funding BERD units; it is important to balance the expectations for providing consultation and collaboration with the expectations of leading and teaching in the BERD practitioner’s own discipline. Unfortunately, we do not have data describing how individual BERD practitioners balanced these priorities. We have discussed the challenge of competing priorities for the individual collaborative methodologist elsewhere.6
Opportunities to develop innovative statistical and epidemiological methods naturally arise from BERD practitioners’ collaborations with clinical and translational researchers. Although the surveys did not elicit information about the number of statistical methodology articles published by the BERD units, nearly half of the institutions reported that 1 to 10 manuscripts were submitted annually on which a BERD practitioner was the first or corresponding author. Also, some BERD practitioners created publicly available software, analysis tools, and educational materials for conducting clinical and translational research.7
Educational activities and training
Training translational researchers in quantitative research methods has been a cornerstone of the CTSA program.8 BERD units reported engagement in both traditional didactic training and informal mentoring through their consultative resources. For example, a majority of BERD units reported that between 25% and 75% (in 2011) and 31% to 70% (in 2012) of their consulting was provided to junior investigators. The data from the surveys are too limited to discern the full nature of the educational impact on the institution.
Insights and recommendations
Our study’s data characterizing the structure and productivity of BERD units across the CTSA Consortium suggest extensive heterogeneity, both across sites within a given year and within sites across years. Although this finding precludes simple comparative analyses to identify best practices for structuring BERD units, it reflects the evolving environment of team science, collaborative research, and the CTSA program. Nonetheless, we have gained several key insights from our data.
We found very little engagement of epidemiologists with BERD units. Epidemiologists are valuable for effectively developing and refining research questions and designing studies.9,10 They are also often able to report statistical results in a way that can be more directly applied to clinical and public health implementation, acting as “translators” between the statisticians and the medical or community audience. Pending a detailed comparison of the contributions of biostatisticians and epidemiologists, we recommend that institutions consider the potential advantages and costs of more fully integrating epidemiologists into BERD activities.
It is important to help consultations develop into collaborations. Our data suggest that within the CTSA Consortium this does occur, but there is room for improvement. Reviewing research protocols and providing consultation are central responsibilities of BERD practitioners at many CTSA institutions. These encounters can draw the BERD practitioner into further developing and refining the research, which in turn can lead to collaborative relationships with CTS investigators. Although our data do not address how many of these encounters ultimately evolve into long-term collaborative relationships between BERD practitioners and CTS investigators, it is critical that BERD practitioners be active collaborators with vested interests in the success of research projects, and that they use their unique skill sets as part of the conduct of research studies.11 Developing ongoing collaborations may be beyond the scope of some BERD units; instead, these units may catalyze collaborations that later transition to other institutional resources for ongoing support. In this way, BERD units could serve as an important “front door” to those resources. Future surveys are needed to assess the long-term outcomes when individuals seek consultations through the BERD units.
Although the high response rate across the four years is a major strength, we acknowledge several limitations of our approach. First, the definition of BERD practitioners likely varied with respondents’ interpretations, particularly with respect to the scope of the “BERD unit.” Our results suggest that some sites included the totality of their institutional resources, such as an entire academic department of biostatistics, rather than a limited resource associated directly with the CTSA Consortium. However, this does reflect the varied nature of BERD units: at some institutions the BERD unit has a global scope covering all methodological consultations and collaborations across the entire organization, while at others, the BERD unit has a narrower scope aligned only with the CTSA.
Any survey is limited by the accuracy of reported data, and we did not attempt to verify our study’s reported data against an independent source. Categories for certain questions were changed between years on the basis of feedback provided on earlier surveys; thus, we had to group certain categories that were similar but not identical. Some questionnaire items were also deleted in certain survey years, limiting our ability to provide data for all four years of the survey. However, because we used the median, rather than the mean, to describe distributions that were skewed, we believe our results are robust to potential outliers arising from misinterpretation of questionnaire items over the survey period. Finally, although such measures would be highly desirable, the existing data are insufficient to construct derived variables indicative of the impact, quality, and efficiency of BERD units.
Our findings can be used to inform the creation or restructuring of BERD units in terms of size and composition as well as expected scholarly output. The “B” (biostatistics) in BERD is well represented, and there appears to be an opportunity to expand the units to include more “E” (epidemiology) and other forms of research design (“RD”) expertise. With the evolution of the CTSA program, we believe it is important to continue emphasizing that BERD members should collaborate with clinical and translational research teams. The level of scholarly output reported here could serve as a comparator for ongoing evaluation of the BERD units throughout the country.
Acknowledgments: The authors thank Deborah del Junco, PhD, Rafia Bhore, PhD, Madhu Mazumdar, PhD, Felicity T. Enders, PhD, Yi-Ju Li, PhD, and Maurizio Macaluso, MD, DPH, for their substantive input into the development of the survey questionnaires, and also thank the biostatistics, epidemiology, and research design unit directors and codirectors at other institutions who responded to our surveys over the years.
1. Rubio DM, Del Junco DJ, Bhore R, et al.; Biostatistics, Epidemiology, and Research Design (BERD) Key Function Committee of the Clinical and Translational Science Awards (CTSA) Consortium. Evaluation metrics for biostatistical and epidemiological collaborations. Stat Med. 2011;30:2767–2777.
2. Rubio DM, Del Junco DJ, Bhore R, et al.; Biostatistics, Epidemiology, and Research Design (BERD) Key Function Committee of the Clinical and Translational Science Awards (CTSA) Consortium. Evaluation metrics for biostatistical and epidemiological collaborations. Stat Med. 2011;30:2767–2777.
3. Kane C, Trochim WM. The end of the beginning: A commentary on “Evaluation metrics for biostatistical and epidemiological collaborations.” Stat Med. 2011;30:2778–2882.
4. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—A metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42:377–381.
5. Deutsch R, Hurwitz S, Janosky J, Oster R. The role of education in biostatistical consulting. Stat Med. 2007;26:709–720.
6. Mazumdar M, Messinger S, Finkelstein DM, et al.; Biostatistics, Epidemiology, and Research Design (BERD) Key Function Committee of the Clinical and Translational Science Awards (CTSA) Consortium. Evaluating academic scientists collaborating in team-based research: A proposed framework. Acad Med. 2015;90:1302–1308.
7. Clinical and Translational Science Awards institutes. CTSPedia. 2015. https://www.ctspedia.org/do/view/CTSpedia. Accessed July 2, 2016.
8. National Institutes of Health. Clinical and Translational Science Award (CTSA) U54 funding opportunity announcements. 2015. http://grants.nih.gov/grants/guide/pa-files/PAR-15-304.html. Accessed June 13, 2016.
9. Dowdy DW, Pai M. Bridging the gap between knowledge and health: The epidemiologist as accountable health advocate (“AHA!”). Epidemiology. 2012;23:914–918.
10. Dowdy DW, Pai M. The epidemiologist as accountable health advocate (“AHA!”): A useful model for promoting health. Epidemiology. 2012;23:927–928.
11. Welty LJ, Carter RE, Finkelstein DM, et al.; Biostatistics, Epidemiology, and Research Design Key Function Committee of the Clinical and Translational Science Award Consortium. Strategies for developing biostatistics resources in an academic health center. Acad Med. 2013;88:454–460.