Variability in the Costs of Institutional Review Board Oversight

Byrne, Margaret M. PhD; Speckman, Jeanne MSc; Getz, Ken MS, MBA; Sugarman, Jeremy MD, MPH, MA

Financial Issues at AHCs

Background Previous studies have shown both wide differences between institutions and economies of scale in the costs of institutional review board (IRB) oversight of research. In this study, the authors explored variability among IRB costs, taking into account organizational size, components of the costs of oversight, and protocol type.

Method The authors surveyed academic medical centers to collect information on resource utilization associated with IRB oversight in 2002. They used national cost weights to assign a cost to each type of resource used and summed the weighted resource utilization to obtain total IRB costs. Descriptive statistics were generated for costs overall and by tertile of protocol volume, cost component, and type of review. The authors also determined where the greatest cost variability is found.

Results IRB costs per protocol reviewed are highly variable, both overall and within tertiles of volume. Higher-volume institutions have lower costs, which is indicative of economies of scale. However, not all components of IRB costs (e.g., board time) are subject to economies of scale. At low-volume institutions, expedited reviews of protocols are not less expensive than full reviews.

Conclusions IRB costs for oversight are highly variable, and only some of the variation may be attributable to economies of scale. Given such wide variation in costs, the authors conclude that some institutions are conducting reviews in a manner that is inefficient or of low quality. Future work is needed to determine specific practices in reviews, and what leads to the best quality and most efficient oversight and review system.

Dr. Byrne is research assistant professor, Miller School of Medicine, University of Miami, Miami, Florida.

Ms. Speckman is research epidemiologist and data analyst, Boston Medical Center, Boston, Massachusetts.

Mr. Getz is senior research fellow, Tufts Center for the Study of Drug Development, and chair, Center for Information and Study on Clinical Research Participation, Boston, Massachusetts.

Dr. Sugarman is Harvey M. Meyerhoff Professor of Bioethics and Medicine, The Johns Hopkins University, Baltimore, Maryland. This article was written for the Consortium to Examine Clinical Research Ethics (CERCE).

Correspondence should be addressed to Dr. Byrne, Department of Epidemiology and Public Health, PO Box 016069 (R-669), University of Miami, Miami, FL 33101; telephone: (305) 243-3482; fax: (305) 243-5544; e-mail: mbyrne2@med.miami.edu.

An essential part of conducting research with human participants is to ensure that the health, safety, and rights of the participants are protected. In the United States, institutional review boards (IRBs) bear this responsibility. Formal requirements for the establishment of IRBs were outlined in regulations arising from the National Research Act of 1974, specifically the federal regulations 45 CFR 46, Protection of Human Subjects, as well as subsequent additions regarding vulnerable populations and parallel regulations for research subject to oversight by the U.S. Food and Drug Administration.1

The central mission of IRBs is to protect human research participants as outlined in the Common Federal Policy of 1991.2 This includes ensuring minimization of risk to participants, voluntary informed consent, and continual oversight of ongoing trials, including the evaluation of adverse events. Federal regulations require that IRBs have at least five members, including one whose interests are primarily nonscientific and one who is otherwise unaffiliated with the research institution.

Whereas all IRBs in the United States must follow federal guidelines, each IRB operates differently. Several studies have identified some of the implications of such differences for the cost of review. In one study based on information documented in an earlier report, Wagner et al.3 found that the total estimated costs for operating high-volume and low-volume IRBs were $770,674 and $76,626, respectively. High-volume IRBs handled an average of 2,782 “actions,” or protocols, per year, compared with 95 actions for low-volume IRBs. Thus, they found very large economies of scale in IRB function, with high-volume IRBs having much lower average costs per action than their low-volume counterparts ($277 versus $799). Such economies are found in many businesses and university activities and may not in themselves be problematic; indeed, in many settings, economies of scale are desirable. However, based on a study of the Department of Veterans Affairs (VA) system, Wagner et al.4 found that the average high-volume VA IRB had a cost of $187 per action whereas the average low-volume VA IRB had a cost of $2,781 per action, and the variation within both large and small VAs was also very substantial. We have studied economies of scale previously as well5 and found that low-, medium-, and high-volume IRBs (<350, 350–699, and ≥700 protocols per year) have median costs per protocol of $644, $612, and $431, respectively. Again, we found substantial variation within each volume category.

There is growing evidence of both substantial variation and economies of scale in the costs of IRB oversight. However, despite the supporting data from the studies cited above, several important questions remain unanswered: whether costs vary for the review of different types of protocols, whether costs vary across the different components of the review process, and whether previous results hold when standardized cost metrics are used. To address these gaps, we undertook this analysis.

We start with the premise that observed variability between IRBs has implications for the efficiency, and perhaps the quality, of review. Wide variability in the specific resources used by an IRB (e.g., the amount of board and staff time devoted to protocol review) and in overall costs suggests that institutions are managing their IRB operations in very different ways. For instance, board composition and operating procedures, administrative functions, and infrastructure may be organized and managed dissimilarly across institutions. Some IRBs may be conducting their operations efficiently; others may be doing so inefficiently. It may well be that no IRB is operating at optimal efficiency, but given such wide variability, some IRBs must be operating more efficiently than others.

In this paper, we describe additional analyses of the data we previously reported5 to look more closely at the variability in IRB operating costs. Our data were derived using a cost methodology that measures activities at each institution and then assigns a standard, national-average cost to each activity. In this way, a comparison of costs across IRBs is not biased by the higher prices of inputs in some parts of the country; only with this methodology can we obtain an unbiased measure of the variation in costs nationwide. In the current study, we examined variability, nationwide and by tertile of volume, in IRB costs per protocol review. In addition, we explored whether various components of the costs (e.g., staff time, board time) also exhibit high variability, and which components vary most. Finally, we explored whether institutions also vary in their costs and component usage for the various types of protocol review for which they are responsible (i.e., full, expedited, exempt, continuing).

Method

As described previously,5 to determine the costs of activities undertaken by IRBs, we collected survey information on the number of units of various resources used at a given IRB during 2002. The study protocol was approved by the Duke University Medical Center IRB; the data analysis phase was deemed exempt by the IRB at the Johns Hopkins Medical Institutions. The study was funded by the Doris Duke Charitable Foundation and the Burroughs Wellcome Fund. We asked participants about the time that board and staff members spent on IRB review, travel, supply and equipment purchases, and space used by the IRB. Board members were defined as those individuals who contributed to the scientific review of protocols, and staff as individuals employed to carry out administrative and assistant functions for the IRB. We then applied standard “prices” to the units of resources so that a monetary cost of IRB activities could be calculated; see Sugarman et al.5 for elaboration of the methods used. Ultimately, 69 academic institutions in the United States responded to the questionnaire; of these, 59 provided data sufficiently complete to be used in the analyses reported here.
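
To make the costing approach concrete, the following is a minimal sketch in Python (the study itself used Stata); the resource categories, cost weights, and quantities shown are hypothetical illustrations, not the study's actual figures.

```python
# Minimal sketch of the costing logic: multiply each reported resource
# quantity by a standard national-average price, then sum.
# All names and numbers below are hypothetical illustrations.

COST_WEIGHTS = {            # assumed national-average prices per unit
    "board_hours": 150.0,   # $/hour of board member time (assumption)
    "staff_hours": 40.0,    # $/hour of staff time (assumption)
    "space_sqft": 25.0,     # $/square foot per year (assumption)
    "supplies": 1.0,        # $/unit of supplies (assumption)
}

def total_irb_cost(resource_units):
    """Weight each reported resource quantity by its national-average
    price and sum to obtain a total annual IRB cost."""
    return sum(COST_WEIGHTS[r] * q for r, q in resource_units.items())

# One hypothetical institution's reported 2002 utilization.
reported = {"board_hours": 2000, "staff_hours": 15000,
            "space_sqft": 1200, "supplies": 5000}
total = total_irb_cost(reported)
print(f"Total cost: ${total:,.0f}")
print(f"Cost per protocol: ${total / 900:,.0f}")  # assuming 900 protocols
```

Because every institution's utilization is priced with the same national weights, differences in the resulting cost totals reflect differences in resource use rather than regional price levels.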

Data analysis was performed using Stata 8.0 SE (StataCorp LP, College Station, TX), a program used extensively for economic analyses. We completed both descriptive and regression analyses. We first present descriptive statistics on the cost per protocol at all institutions, both in total and for each component of that cost. We then divided the institutions into three groups based on the number of new protocols submitted to each institution in 2001: low (<350), middle (350–699), and high (≥700). For each tertile, we analyzed the cost per protocol review overall and for each component of cost. Finally, we divided the protocols into four review categories (full, expedited, exempt, and continuing) and analyzed the cost per protocol for all institutions and for institutions in each volume category.
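
As a sketch of the tertile categorization and grouped descriptive statistics (in Python/pandas rather than Stata, with hypothetical data and column names), one might write:

```python
import pandas as pd

# Hypothetical institution-level data; columns are illustrative.
df = pd.DataFrame({
    "new_protocols_2001": [120, 480, 950, 300, 760, 510],
    "cost_per_protocol": [820, 610, 420, 790, 450, 580],
})

# Fixed cut-points from the text: low (<350), middle (350-699), high (>=700).
df["volume_group"] = pd.cut(
    df["new_protocols_2001"],
    bins=[0, 349, 699, float("inf")],
    labels=["low", "middle", "high"],
)

# Median, mean, and standard deviation of cost per protocol by group.
print(df.groupby("volume_group", observed=True)["cost_per_protocol"]
        .agg(["median", "mean", "std"]))
```

The sensitivity analysis described below amounts to swapping in total protocol volume as the grouping variable with cut-points of 1,200 and 2,000.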

To determine which components and which types of review had the greatest variability across IRBs, we calculated the coefficient of variation for all variables. The coefficient of variation is the ratio of the standard deviation to the mean. Because the coefficient of variation is a dimensionless number, it allows us to compare the variation of resources used in protocol reviews that have very different absolute magnitudes. For example, for a given IRB, the costs of supplies are much less than the cost of board member time. Thus, a direct comparison of the variation would not allow us to determine which component had greater relative variability.
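
In symbols, for a cost component x measured across n institutions:

```latex
% Coefficient of variation of a cost component x across n institutions:
% dimensionless, so components of very different magnitude are comparable.
\[
  \mathrm{CV}(x) \;=\; \frac{\sigma_x}{\mu_x},
  \qquad
  \mu_x = \frac{1}{n}\sum_{i=1}^{n} x_i,
  \qquad
  \sigma_x = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\bigl(x_i-\mu_x\bigr)^2}.
\]
```

For instance (with hypothetical numbers), supply costs with mean $20 and standard deviation $30 (CV = 1.5) are relatively more variable than board costs with mean $120 and standard deviation $60 (CV = 0.5), even though the latter vary more in absolute dollars.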

We were concerned that our results might depend on the chosen cut-points for the tertile categorization of institutions. Thus, as a sensitivity analysis, we recategorized the institutions into tertiles based on the total number of protocols handled in the year of the survey, rather than the number of new protocols. For this categorization, the cutoffs we chose were <1,200, 1,200–2,000, and >2,000. Using these new categories, we repeated all descriptive analyses.

Finally, we ran regression analyses with overall cost per protocol as the dependent variable and the components of cost as the independent variables to determine which components, if any, were significantly associated with cost per protocol across all of the institutions.
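
A minimal sketch of such a regression (in Python with statsmodels, on simulated stand-in data; the study used Stata and its own variables) follows:

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-in data: 59 institutions, cost components per protocol.
rng = np.random.default_rng(0)
n = 59
staff = rng.gamma(4.0, 90.0, n)   # hypothetical staff cost per protocol
board = rng.gamma(3.0, 40.0, n)   # hypothetical board cost per protocol
other = rng.gamma(2.0, 30.0, n)   # hypothetical remaining components
# Noise stands in for unmodeled components and measurement error.
total = staff + board + other + rng.normal(0.0, 80.0, n)

# Regress total cost per protocol on its components.
X = sm.add_constant(np.column_stack([staff, board, other]))
fit = sm.OLS(total, X).fit()
print(fit.summary())  # inspect coefficients, p-values, and R-squared
```

Because the dependent variable is built largely from the component costs themselves, a high R-squared is expected by construction, a caveat noted in the Results below.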

Results

Table 1 shows our findings for the costs per protocol overall and per tertile. The overall median cost of reviewing a protocol is $560. Low-volume institutions had higher median costs than high-volume institutions, with the mean cost within each category varying even more substantially. Staff time was the largest component of cost for protocol review over all institutions (65% of the total cost [$361/$560]) and within each tertile of volume, followed by board time (22% of total cost [$121/$560]). For almost all cost components, the cost per protocol decreased as the volume of protocols increased. High-volume institutions had the lowest costs per protocol in all categories except for outside services.

When we recategorized institutions into tertiles based on total protocol volume, 12 institutions were categorized differently. Two institutions moved from the low to middle volume category, two from the middle to low category, four from the middle to high category, three from the high to middle category, and one from the low to high volume category. Those institutions that changed volume category generally were close to the original tertile cut-off points. There were no changes in our findings when we analyzed the data using these new categorizations, demonstrating the robustness of our results.

Table 2 shows costs per protocol for each type of review that IRBs perform, both overall and by tertile of volume. Surprisingly, across all institutions, protocols undergoing expedited review were slightly more costly than those undergoing full IRB review ($1,060 versus $1,020), whereas protocols exempt from review ($694) and protocols receiving continuing review ($271) were less costly. However, when looking at institutions by tertile of volume, we found that expedited review protocols were the most costly for low-volume institutions. Protocols exempt from IRB review were most costly at high-volume institutions, with a very large standard deviation. For all other types of review, low-volume institutions had the highest costs. Again, our findings did not vary substantially when we analyzed the data with institutions divided into tertiles based on the total number of protocols.

We examined coefficients of variation to explore which components of cost and which types of review had the most variability (Table 3). For all institutions, exempt protocols showed the greatest variation and protocols requiring full review the least. Results were strikingly different, however, for low- versus mid- and high-volume institutions. Low-volume institutions had low variation in the costs of exempt protocols, whereas mid- to high-volume institutions had low variation in the costs of continuing review of protocols. Among the components of cost, overall, staff salary and space showed the least variation; travel, outside services, and equipment showed the most, largely because many institutions reported zero costs for these categories. These patterns hold for the most part within each tertile, although mid-volume institutions have lower variation than the others in staff and board salary costs, and consequently lower variation in overall cost per protocol.

To display the patterns of variation in costs per protocol and volume of protocols in a more fine-grained manner, we generated scatterplots of cost per protocol against protocol volume for all protocols and for protocols in each review category. Figure 1 presents the results for all protocols, with cost per protocol on the y axis and protocol volume on the x axis. The plots for each of the four review types are very similar to that for all protocols and thus are not shown. These plots show a downward trend in cost per protocol with increasing volume, with a few outliers accounting for a substantial amount of the variation. Most of the outliers were institutions with a low or middle volume of protocols; the institutions with the very highest volumes of all types of review had among the lowest costs per protocol reviewed.

We performed multivariate regression using cost per protocol as the dependent variable to determine which factors, if any, significantly affected costs per protocol. As would be expected, staff time was highly associated (p < .001) with cost per protocol, and it was the only component with a significant coefficient. By construction, the model explained a large amount (R² = 0.61) of the variation in cost per protocol.

Conclusions

Our research goals were twofold: first, to document the variation in the costs of IRB review in a nationally representative sample in which costs are measured in a way that permits comparison among IRBs, and second, to understand the factors behind the wide variability in IRB operating costs. We found substantial variation in both the overall costs of operating IRBs and the individual costs per type of protocol review. Recognizing that economies of scale can appropriately occur in activities such as IRB review, we also considered the variability after dividing institutions into three categories based on protocol volume. Even within these more homogeneously sized classes, the variability in costs is quite substantial.

Staff and board time account for the great majority of the costs of IRB activities. This is not surprising, given that IRB operations are very labor intensive: the majority of the work is administrative and intellectual. That these are the largest components of cost for all four types of review helps explain why exempt and expedited reviews still cost a substantial amount to conduct. Even though a protocol may be judged exempt from IRB approval, staff must review the application and file all necessary paperwork, and a board member must make a decision about the exempt status. A similar situation, of course, applies to protocols that undergo expedited review.

One interesting finding is that the staff cost for each type of protocol review is almost three times greater than board costs overall. It is unknown whether this is the right balance of IRB member and staff time. It seems plausible that the ideal balance would be for staff to prepare protocols for review, ensuring compliance with applicable regulations and completeness of materials, so that IRB members could focus their time on comprehensive review. Thus, if the ratio of staff to board time is larger than ideal, this might suggest the presence of cumbersome and inefficient administrative processes and procedures, or alternatively, cursory examinations by board members. Neither is an acceptable situation.

In addition, the variability of the ratio of staff to board costs across institutions is substantial in our sample (range 0.33–11.30). As we discussed in the introduction, this large variation may indicate that IRBs are conducting reviews in quite different ways. Although some may be following procedures appropriate for efficiency and/or quality, it is evident that institutions differ in their review procedures.

Consistent with previously published work, we found that, overall, low-volume institutions had the highest costs per protocol review. This can be, and has been, attributed to economies of scale in the review process. However, it is important to note that the components of cost (e.g., board time, supplies, equipment) differ in kind, and we should not expect economies of scale in all of them. From Table 1 we see that space, equipment, and supplies costs are highest at the low-volume institutions and lower at high-volume institutions; these are components where we would indeed expect economies of scale. However, the cost of board time per protocol is also substantially lower at high-volume institutions. Because our estimate of board costs is the amount of time board members spend on review multiplied by a wage price, this indicates that board members at high-volume institutions spend approximately one third less time reviewing each protocol than their counterparts at low-volume institutions. Variations in board member time per protocol reviewed are not as easily or intuitively attributable to economies of scale. Board members at low-volume institutions may be less “efficient” in their reviews, perhaps because they review only a few protocols a year and must repeatedly look up the rules; alternatively, board members at high-volume institutions may spend too little time on reviews because of a lack of board member resources or other factors. Similarly, differences in staff time between low- to mid-volume and high-volume institutions are unlikely to be entirely attributable to economies of scale, whereas differences in travel costs may be. Although this study cannot tease out why such variation occurs, the finding of apparently strong “economies of scale” where little or none should exist points to areas for future research.

Our analyses of which components of the review process had the highest variation, and how this differed among institutions with different protocol volumes, highlight other interesting findings that we hope will lead to additional research. We found that high- and low-volume institutions have higher variability within their size classes for almost every component and type of protocol than do mid-volume institutions. Even more interesting, however, is that across tertile categories, the cost components and types of protocols with the highest and lowest variation are not the same. For example, high-volume institutions have low variation in equipment and supply costs, whereas these costs are highly variable among low-volume institutions. For review type, low-volume institutions have the highest variability in cost per protocol for protocols undergoing continuing review, and mid- to high-volume institutions have the highest variability in review of exempt protocols. This finding again indicates that institutions differ in how they review protocols. The differences may be due to dissimilar processes and procedures, differing ability and/or fastidiousness of review, or a multitude of other reasons.

Our study has several limitations. First, our questionnaire was completed in a form usable for this analysis by only 59 academic medical centers, which is 49% of all institutions listed by the Association of American Medical Colleges (AAMC). Responders to our questionnaire were less likely to be in the South and more likely to be private institutions than AAMC institutions overall; they were very similar in numbers of faculty and students, however [currently unpublished data]. Second, the questionnaire, and thus the data, relied on self-report of staff and board time, space usage, numbers of protocols reviewed, and so forth; any component of IRB review may have been over- or underestimated by the respondent. Finally, for a number of items, respondents did not know the answer or did not respond; in these instances, we either imputed data or removed the observation. All of these factors might limit the generalizability of our results.

In summary, we found, as in previous research, both wide variability and economies of scale in the costs of IRB oversight. In addition, we identified specifically where the largest variation occurs among the components of costs and types of protocol review. We believe that these findings provide a starting point for future research examining with more granularity the efficiency and quality of IRB review. For example, institutions that are inefficient in the review process will have higher than optimal costs, and institutions that conduct poor-quality reviews may have lower than optimal costs. Thus, it is possible that an institution that is inefficient but conducts low-quality reviews will have costs per review similar to those of an institution that is efficient and conducts high-quality reviews. It is impossible with our data to separate the effect of quality of review from that of efficiency. Future studies that attempt to separate these effects will not be easy, as standards for both the quality and the efficiency of review are undefined. The first step will be to develop standards and methodologies for evaluating the actual effectiveness and quality of protocol reviews in protecting human participants; research can then proceed to evaluate the IRB review process. This work will be neither easy nor free of controversy. However, it is necessary in the face of increasing scrutiny of the IRB review process and of the institutions conducting these reviews.

Acknowledgments

This work was conducted for the Consortium to Examine Clinical Research Ethics (CERCE). The members of CERCE are Angela Bowen, David Cocchetto, Ezekiel Emanuel, Ruth Faden, Alan Fleischman, Kenneth Getz, Dale Hammerschmidt, Carol Levine, and Jeremy Sugarman.

Work on this project was made possible by financial support provided by the Doris Duke Charitable Foundation and the Burroughs Wellcome Fund.

References

1 Sugarman J, Mastroianni A, Kahn JP, eds. Ethics of Research with Human Subjects: Selected Policies and Resources. Frederick, MD: University Publishing Group, 1998.
2 Common Federal Policy for the Protection of Human Subjects. Laws Related to the Protection of Human Subjects. 10 CFR 745. 1991.
3 Wagner TH, Cruz AME, Chadwick GL. Economies of scale in institutional review boards. Med Care. 2004;42:817–23.
4 Wagner TH, Bhandari A, Chadwick GL, Nelson DK. The cost of operating institutional review boards (IRBs). Acad Med. 2003;78:638–44.
5 Sugarman J, Getz K, Speckman JL, Byrne MM, Gerson J, Emanuel EJ, for the Consortium to Evaluate Clinical Research Ethics. The cost of institutional review boards in academic medical centers. N Engl J Med. 2005;352:1825–27.
© 2006 Association of American Medical Colleges