The roles of nonaffiliated and nonscientific institutional review board (IRB) members at academic medical centers have received some attention in the literature, but many questions about them remain. For instance, whom, if anyone, do they represent? How are they selected, and what do they do? This report describes how IRB chairs, directors, administrators, and members view a range of issues concerning nonaffiliated and nonscientific IRB members, and it examines a series of questions and decisions that IRBs confront regarding these individuals.
Section 46.107 (a) of the federal regulations governing the protection of human subjects requires each IRB to “be sufficiently qualified through the experience and expertise of its members, and the diversity of the members, including consideration of race, gender, and cultural backgrounds and sensitivity to such issues as community attitudes.” Sections 46.107 (c) and (d) of the same regulations state that each IRB will include “at least one member whose primary concerns are in nonscientific areas” and “at least one member who is not otherwise affiliated with the institution and who is not part of the immediate family of a person who is affiliated with the institution.”1 For review of research concerning prisoners, 45 CFR §46 specifically requires that this population be represented.
Important principles underlie these mandates. The views of nonscientists and of those outside an academic medical center are valuable and, presumably, reflect the concerns of potential research subjects and the broader community. They also are less affected by the possibility of financial and nonfinancial conflicts of interest (COIs), which may arise with someone who works at an institution or is a close family member of an employee. Yet, the regulations do not address who exactly these nonscientific and nonaffiliated members should be, how they should be chosen, and what roles and functions they should fill. The National Institutes of Health (NIH) Office of Human Subjects Research (OHSR) determined that a single person could fit both these roles.2 Yet, such a determination may, arguably, counter the intent of the regulations.
Surprisingly, prior studies have usually combined nonscientific and nonaffiliated IRB members into a single group. Anderson3 found that 16 “nonaffiliated/nonscientific” IRB members from 11 institutions were unclear about whom they represented, and these members felt that they should receive more respect from the rest of the IRB. Porter,4,5 also not differentiating between the two groups, found that these individuals saw themselves as playing a variety of roles and valued several traits as ideals (e.g., assertiveness and communication). Allison and colleagues6 likewise combined the two groups, as “nonscientists,” in examining how they viewed their specific functions and found that they were more likely than scientists to think that their major role was to review informed consent documents. In one question, however, Allison and colleagues compared affiliated and nonaffiliated nonscientific members and found that nonaffiliated members were more likely, in general, to see themselves as laypersons. Sengupta and Lo7 surveyed 32 nonaffiliated and nonscientific members from 11 IRBs. These participants had served on IRBs for a mean of 8.4 years, 71.9% held advanced degrees, 88% had occasionally felt intimidated and disrespected by the scientists on the IRB, and 78% wanted more education. Yet, it is not clear whether these members included clinicians (e.g., nurses and social workers) who are not scientists per se. Affiliated clinicians may also differ markedly from nonaffiliated nonclinicians because of their training. Additionally, nurses and social workers may have experience conducting research, raising questions about the definition of nonscientists.
Rothstein and Phuong8 analyzed three groups (physicians, nurses, and nonaffiliated members) and found certain differences in what was very important to them personally, with nonaffiliated members being less likely to list protecting confidentiality and privacy and ensuring IRB oversight postapproval as concerns. It is unclear whether these nurses were scientists or not (i.e., they had clinical training but may not have been researchers). Still, this study suggests that nonaffiliated and other IRB members differ. Critical questions thus remain about how IRB chairs and other members view the roles and functions of these nonscientific and nonaffiliated members.
I conducted an in-depth, semistructured interview study of the views and approaches of IRB chairs, directors, administrators, and members toward research integrity (RI), which these participants defined very broadly.9 Interviewees revealed how they viewed and addressed RI, how they responded to RI violations in ways related to how they saw and approached their roles and responsibilities, and how they interpreted and applied federal regulations. IRBs were affected by their members’ personal views and psychological and personality issues and also by institutional factors. Issues frequently arose concerning differences between nonscientific, nonaffiliated, and other IRB members’ roles, functions, contributions, work, and identities. Because my study used qualitative methods, I was able to explore these domains further and shed light on these issues. I have reported elsewhere on other, distinct sets of issues regarding interviewees’ views and approaches concerning COIs,10 central IRBs,11 perceptions of variations between IRBs,12 and research conducted in the developing world.13 But other distinct and important sets of issues and decisions arose as well concerning nonscientific and nonaffiliated members, which I have analyzed and explored in this report.
Method
As described elsewhere,9–13 I conducted in-depth telephone interviews of approximately two hours each with 46 IRB chairs, directors, administrators, and members in 2007–2009. I contacted the leadership of 60 IRBs around the United States, representing every fourth institution on the list of the top 240 institutions by NIH funding, and interviewed IRB leaders from 34 of these 60 institutions (response rate of 56.7%). In some cases, I interviewed both a chair or director and an administrator from the same institution (e.g., when the chair thought that the administrator might be better able to answer certain questions). From these 34 institutions, then, I interviewed a total of 39 chairs/directors and administrators. I also asked half of these leaders (every other one interviewed, ordered on the list by amount of NIH funding [n = 17]) to distribute information about my study to members of their IRBs, so as to recruit 1 member from each of these IRBs to interview as well. I interviewed 7 additional members (1 nonscientific and 6 regular members), for a response rate of 41.2% (7 of 17 potential additional members).
My interview guide (see List 1) included questions to elicit detailed descriptions of participants’ views of RI, COIs, and related realms, probing how IRBs viewed and made decisions about these areas, and the factors involved in those decisions. My method was informed by grounded theory.14
List 1 Sample Questions From Semistructured Interviews of Institutional Review Board (IRB) Chairs, Directors, Administrators, and Members, 2007–2009
I drafted the questionnaire, drawing on prior research that I conducted and on published literature. I transcribed and performed my initial analyses of the interviews during the same period in which I conducted the interviews, and these analyses helped shape subsequent interviews. The Columbia University Department of Psychiatry IRB approved the study, and all participants gave informed consent.
Once I completed the full set of interviews, a trained research assistant and I conducted subsequent analyses in two phases.
In phase I, we independently examined a subset of interviews to assess factors that shaped the participants’ experiences, identifying recurrent themes and issues that were then given codes. We read each interview, systematically coding blocks of text to assign core codes or categories (e.g., comments about so-called community or other members). We inserted a topic name (or code) beside each excerpt of the interview. We then worked together to reconcile these independently developed coding schemes into a single scheme and prepared a coding manual, defining each code and examining areas of disagreement until reaching consensus. We discussed new themes that did not fit into the original coding framework and modified the manual when appropriate.
In phase II of the analysis, we independently performed content analysis of the data to identify the principal subcategories and ranges of variation within each of the core codes. We reconciled the subthemes identified by each coder into a single set of secondary codes and an elaborated set of core codes. These codes assessed subcategories and other situational and social factors (e.g., the differing backgrounds and views of so-called community and other members).
We then used these codes and subcodes in our analysis of all of the interviews. To ensure coding reliability, two coders analyzed all interviews. Where necessary, we used multiple codes. We examined areas of disagreement through closer analysis until we reached consensus through discussion. We checked regularly for consistency and accuracy in ratings by comparing earlier- and later-coded excerpts. In these ways, we systematically developed the coding schemes for the core codes and subcodes and documented them carefully to ensure that they were valid (i.e., well grounded in the data and supportable) and reliable (i.e., consistent in meaning).
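The study resolved coding disagreements through discussion until consensus rather than by computing a statistical agreement index. Purely as an illustration of the kind of consistency check described above, the following is a minimal Python sketch, using hypothetical code labels and excerpt assignments not drawn from the study, of how two coders’ assignments of core codes to the same excerpts could be compared using percent agreement and Cohen’s kappa.

```python
# Hypothetical illustration only: the study reconciled codes through discussion
# and consensus rather than through a statistical index. The code labels and
# assignments below are invented for this sketch.
from collections import Counter

# Core code assigned by each of two independent coders to the same four excerpts.
coder_a = ["community_member_role", "consent_review", "representation", "consent_review"]
coder_b = ["community_member_role", "consent_review", "recruitment", "consent_review"]

def percent_agreement(a, b):
    """Proportion of excerpts assigned the same core code by both coders."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for the chance overlap implied by each coder's code frequencies."""
    n = len(a)
    p_observed = percent_agreement(a, b)
    freq_a, freq_b = Counter(a), Counter(b)
    p_chance = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(a) | set(b))
    return (p_observed - p_chance) / (1 - p_chance)

print(f"Percent agreement: {percent_agreement(coder_a, coder_b):.2f}")  # 0.75
print(f"Cohen's kappa: {cohens_kappa(coder_a, coder_b):.2f}")           # 0.64
```

In practice, a low value on such an index would simply flag excerpts for the closer joint review and discussion that the study already employed.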
Results
As summarized in Table 1, the 46 interviewees included 28 chairs/cochairs, 1 director, 10 administrators (including 2 directors of compliance offices), and 7 members. In all, 27 of the 46 interviewees (58.7%) were male, and 43 (93.5%) were Caucasian. Interviewees were distributed across geographic regions and across institutions ranked by NIH funding.
Table 1: Characteristics of the 46 Interviewed Institutional Review Board (IRB) Chairs, Directors, Administrators, and Members, 2007–2009*
As outlined in List 2, several critical themes emerged. Interviewees often expressed confusion as to who the nonscientific or nonaffiliated members were, or should be, and whether these members did, or should, represent anyone and, if so, whom. IRBs encountered challenges in finding, training, and retaining these members. Tensions emerged because nonscientific members, by definition, had no scientific training, so they had difficulty understanding key aspects of protocols and felt unempowered to contribute much to IRB deliberations. IRBs varied widely in how much they encouraged these members to participate, in what ways, and with what success. As indicated in List 3, IRBs thus faced a series of decisions concerning these members.
List 2 Themes Concerning Nonscientific and Nonaffiliated Members From In-depth Interviews With Institutional Review Board (IRB) Chairs, Directors, Administrators, and Members, 2007–2009
List 3 Decisions Faced by Institutional Review Boards (IRBs) Concerning Nonscientific and Nonaffiliated Members
Who are they?
Interviewees were often unclear about who these members should be and generally referred to them as community members rather than as nonaffiliated or nonscientific members. These members may be professionals, often retired, who are not, strictly speaking, officially affiliated with the institution but who are not necessarily representative of the community either.
They tend to be well-educated people, generally retired—lawyers, social workers, some in health. They generally have an interest in the welfare of human research subjects. They often come from a background where they were involved in providing a service, or involved in care.
IRB12
Yet these roles can be murky. One community member said that she
… was the IRB administrator, then left, was unemployed for a period of time, then asked to chair. After my tenure I stayed on as a community member while working as the office manager for a smaller research company, whose protocols are reviewed by that IRB.
IRB35
Definitional questions thus emerge because she is not affiliated with an institution now but was in the past and because she is not a scientist by training but works at a research company whose protocols are reviewed by her IRB. Hence, gray areas exist regarding how nonaffiliated or how much of a nonscientist she or others should be. Will someone who is not a scientist by training but who works at a research organization be able to contribute the perspective of a nonscientist, which was presumably the intent of the regulations?
Many interviewees struggled with whether these members should in fact be more representative of the communities from which subjects are recruited.
There might be some better criteria about who our community members are. We have not had people like leaders of local churches, [nongovernmental organizations], or other social service agencies—something with a strong minority membership…. One long-term community member is a medical malpractice attorney. He’s been a great member and contributed important things. But I don’t think that’s the idea of what a community member really is or brings. Another community member is a retired director of research at a few local institutions. During retirement, he’s been on our IRBs and another institution’s IRBs. He contributes a lot and brings a nice cross-fertilization from the other institution. But he’s not what you think of as a community member. On the other hand, we’ve also had attorneys from the juvenile public defenders’ office. They are very good and genuine advocates for people in their community.
IRB7
These professionals, in part because of their training (some are even scientists), may understand protocols and the ethical complexities involved but may also be far from vulnerable communities, again raising the question of whom, if anyone, these members indeed should, or do, represent.
Not all interviewees were certain that such direct or broad representation is necessary, as opposed to an understanding of the presumed viewpoints of study subjects. Several interviewees felt that their IRB had good community members who were not necessarily representative of the community (e.g., lawyers or nurses who help clarify consent forms). One chair described clergy who had helped with recruitment and with promoting the science.
They may not be representative of the community as a whole, but we’ve had a law professor for a number of years … he’s interested in people’s ability to understand consent forms, and he’s been a very strong advocate of making things clear. That’s very nice. We have a woman with a master’s in counseling. She’s also a very good layperson. We’ve had some Reverends, which has been very good because they’ve been able to talk about these studies at their churches on Sunday: “They’re doing good stuff if you’ve got diabetes or hypertension.” So they’ve been able to help promote the science, too, and actually help recruit subjects.
IRB3
Yet, such assistance with recruitment is not the intent of the regulations and can potentially raise problems by creating COIs. IRB members thus vary in whether they think these community members should always represent a particular community in some way. Interviewees tended to see these members’ roles, ideally, as representing subjects, not as being simply nonaffiliated per se. These members also often seem to be found not through any systematic process but indirectly, through word of mouth.
What do they do?
IRBs vary widely in the roles and functions that they ask nonaffiliated or nonscientific members to fill—how much they have these members contribute. One chair compared several IRBs at his institution: “Committees differ in how they use their community members…. Some other committees don’t empower their community members as much. I think our committee members are happier” (IRB3). Community members range from reviewing only informed consent forms to being primary or secondary protocol reviewers: “With some rare exceptions, we don’t favor making a primary reviewer a physician or a scientific member. So sometimes the community members are the primary reviewers” (IRB12).
Other IRBs do not have these members serve as main reviewers but give extra weight to their opinions. For example, some chairs try to call on community members regularly or periodically for input. Yet, even within institutions, IRBs vary in how they employ these members: “We tend to give extra weight to the community members’ opinion in terms of do we table this protocol or is this a minor revision. If the consent form isn’t clear, it’ll get tabled” (IRB3). This particular IRB committee was very interested in “community” representation because a few members were physicians who themselves in their clinical work provided treatment to vulnerable populations.
On many IRBs, nonscientific and nonaffiliated members play particularly important, unique roles in reviewing consent forms. Physicians on the committee may recognize that they have difficulty reading a consent form as a layperson would; hence, they appreciate laypersons’ input.
We have docs interested in consumer perspectives, which helps. But a doc reads a consent form, and it’s straightforward. He knows what all the words mean. It’s pretty easy. It’s very hard for us to put ourselves in the shoes of someone with a sixth grade education, trying to make sense of what they’re being asked to do. We say: “OK, we can’t read that as a layperson, so let’s hear what our laypeople have to say.”
IRB3
Many IRBs have found that school teachers also fit these roles and functions well: “They tend to want to be involved in something like this. We have a lot of students from college ed, which is a good pool to draw from” (IRB28).
Whom do they represent?
Problems arise in assessing what community or communities these nonaffiliated or nonscientific members represent. They may appear to represent a particular community but in fact not do so. Membership in a community does not necessarily signify an ability to represent, or an interest in representing, that group’s perspectives or best interests. Similarly, an individual may know about a particular community but not be interested in facilitating relationships or interactions. One IRB trusted an American Indian representative who turned out not to know much about the culture.
We learned that just because someone says they’re an Indian … doesn’t mean they know anything about doing research in Indian country. We made some assumptions in the expertise we called in to help us review it … some individuals just have clunky ways of being in the world and rub other people wrong.
IRB26
Communities may also be divided, and finding an appropriate single representative may be difficult. Often, the IRB has to trust that the principal investigator (PI) has engaged the right community gatekeepers. But in conducting and reviewing research, PIs and IRBs may end up relying on the wrong community representative. One chair described this problem as it applied to IRBs’ efforts to obtain community members:
You want to do a program, go in a community, and find a leader or a gatekeeper, and they invite you into the community, and then [you] find out that you’re talking to the wrong person. You’ve gotten a bad reputation because you’ve aligned yourself with the wrong person. So investigators need to have and understand that level of smarts when they go into a community—from not only a human subject’s perspective but a successful enterprise perspective.
IRB26
Yet these goals can be hard to achieve, and, at times, IRBs have to assess the representativeness of such an individual. How to do so, however, and how to know whether an individual is representative enough, remain difficult questions.
If you’re not dealing with an Indian tribe that is organized and has a government or structure, the IRB has to trust the investigator that the PIs are aligning themselves with the right people, so that they’re going ahead in the appropriate manner in the community. That’s where you get into local knowledge of the research context.
IRB26
Relationships with other IRB members
Interviewees generally said that they appreciated and valued the community members on their committees and were often aware of wide social, economic, and educational gaps between themselves and these nonscientific and nonaffiliated members. IRBs appear to vary in how much they encourage the contributions of these members, who may feel frustrated with their role yet uncomfortable articulating their predicament. One interviewee, a longtime IRB administrator, said, “I’m not a scientist or physician, so sometimes I think my questions are stupid” (IRB31). She did not feel looked down on but, rather, saw herself as set apart in a room of white men in white coats, not purposefully disrespected but at times intimidated. Several interviewees recognized that community members may feel similarly.
Some IRBs reported very active and effective community members but difficulty finding and retaining appropriate ones. IRBs may make efforts to empower these members, for example, by compensating community members, but not other members, for their time, effort, or parking.
IRB membership orientations
Whereas several IRBs held a detailed orientation to support community members, other institutions did not. A few very well-resourced institutions, with hundreds of IRB members, had a full-time staff member to orient and train members, particularly community members.
We’ve hired a full-time person to identify and retain, train, and orient board members—especially community members who have not had as much orientation as they probably need…. [You] need a full-time person … if you have 220 board members.
IRB5
Yet most community members received little, if any, orientation—often only what other, noncommunity members received, which frequently was little (e.g., “I was told, ‘Just do what everyone else is doing’”).
Finding and retaining community members
The terms of these members vary in length, and many IRBs find it hard to keep qualified members and to avoid fairly rapid turnover: “Not every community member has been great. We’ve had a few disasters, though they don’t last very long” (IRB3). A self-selection process then may occur.
Discussion
In viewing, choosing, and engaging nonscientific and nonaffiliated members, academic medical center IRBs often struggle, and their practices vary widely. IRBs range from having these members review entire protocols, to having them perform only limited roles (particularly reading consent forms), to treating their involvement as more pro forma. Nonetheless, a few interviewees described situations in which these members’ input was very important.
Quandaries emerge as to whether these individuals do, or should, represent anyone specifically and, if so, whom they should represent, how well they do so, and how IRBs assess that role. Revealingly, confusion exists concerning even their name (whether they are nonaffiliated, lay, or community members), underscoring the ambiguities of who these members are. IRBs seem to vary even in how much they are troubled by these members’ apparent lack of representativeness of their communities and in how they interpret the intent of the regulations.
Confusion about who community members really are, or should be, may arise partly because the spirit of Section 46.107 (a), with its reference to sensitivity to community attitudes, appears to call for members attuned to those attitudes. Whereas Section 46.107 (d) itself mandates only that a nonaffiliated member participate, interviewees often thought that these members ideally should somehow represent research populations. Yet, ambiguity persists.
Current guidelines essentially define these members by what they are not, that is, nonscientific and nonaffiliated. But questions then emerge as to what they are. They may have friends who work in the institution and thus be indirectly, even if not directly, affiliated, or they may be former employees, which could cause direct or indirect financial or nonfinancial COIs. Current guidelines also do not address how representative these members should be, or of whom. The fact that these members are often found in unsystematic ways, through happenstance, may be acceptable and perhaps inevitable. But the lack of guidance concerning these issues is disturbing. These issues thus need further attention; clearer, more operationalized guidelines may be helpful.
Getting community members to join and stay on IRBs poses challenges, too. Sengupta and Lo’s7 participants had served for an average of 8.4 years, which may not be representative of all those who join IRBs, in part because that sample may have included nonscientific affiliated members (e.g., hospital social workers). By contrast, many nonaffiliated members may leave earlier, feeling ill prepared or overwhelmed.
Whereas prior studies have tended to analyze nonaffiliated and nonscientific members together3–5,7 (except in one question by Allison et al6), my findings suggest that these two types of members may dramatically differ. Combining them does not appear to reflect the lived realities of these individuals’ experiences. Instead, these individuals may constitute two very different groups, with different functions, backgrounds, training, roles, and, potentially, views and approaches toward reviews. Affiliated and nonaffiliated nonscientists also can differ because the former may have implicit financial and nonfinancial COIs, working for the institution whose research they are reviewing, whereas the latter would not have these COIs.
Future research should thus separate these groups, given their very different characteristics and roles, and should investigate more fully, with larger samples, who these members in fact are. My study, the first to use qualitative methods to interview chairs and regular members about their views of nonaffiliated and nonscientific members, also highlights difficulties in finding and retaining such members. Future studies should examine more fully how different IRBs locate these members, how long these individuals stay on their IRBs, and how often one individual jointly fills both roles.
These findings have important policy implications, for instance, concerning the decision by the NIH OHSR that a single individual can simultaneously fill both the nonaffiliated and the nonscientific member roles on an IRB. Given recent scandals involving financial and nonfinancial COIs,15–17 and tensions with communities (e.g., in the case of the Havasupai Indians in the American Southwest),18 academic medical center IRBs should work as much as possible to avoid both types of problems and, arguably, include both nonaffiliated and nonscientific members. The astonishing lack of data regarding these different types of members, and the fact that prior studies generally do not distinguish between them, are concerning. Reconsideration of OHSR’s interpretation of these regulations thus appears warranted.
In addition, broader and deeper discussion of federal guidelines is needed regarding whether these members do, or should, represent others and, if so, whom, to what degree, and how well; whether such representativeness should be assessed or evaluated in some way and, if so, how; and whether these members should be called community members at all. Questions emerge, too, about whether IRBs should include more than one such member in each category and, if so, how many, and how to make such decisions.
Effective strategies and best practices in recruiting, retaining, and facilitating participation by these members (i.e., the approaches and experiences of various IRBs that have worked more or less well, such as materials developed by better-resourced institutions) could also be compiled and shared online through organizations such as Public Responsibility in Medicine and Research. Chat rooms could also be developed to spare these members from having to travel to conferences elsewhere in the country.
Limitations
This study has several potential limitations. First, the data are based on in-depth interviews with IRB chairs, directors, administrators, and members and did not include direct observation of IRBs making decisions or review of IRB records. Future research should collect such data, although doing so may be difficult because, anecdotally, some IRBs have required that researchers obtain consent for such investigations from all IRB members, PIs, and funders of the protocols involved. These interviews also explored respondents’ current and past perspectives but did not follow respondents prospectively to see whether those perspectives changed. Future studies should use prospective approaches to determine whether and how perspectives change over time.
In addition, this study focused on interviewing chairs, members, and administrators to learn how they viewed the roles of nonaffiliated and nonscientific members, because prior research has suggested that such members frequently feel unempowered and disrespected. Yet no qualitative data have previously been reported on how the rest of the IRB perceives these concerns, which is critical to understanding the range of views, interactions, and dynamics among those involved and how problems that arise might best be addressed. I did not interview a large group of nonscientific and nonaffiliated members themselves; future research should do so. The higher response rate among chairs than among other members may reflect that not all chairs disseminated recruitment information about the study to the other members.
Finally, this study is qualitative and thus is designed to elucidate, in ways that quantitative data cannot, the variety of beliefs, attitudes, and perspectives that emerge and the relationships between them. It thereby generates research questions and hypotheses that further studies can probe in greater detail among larger samples, using both qualitative and quantitative methods. Qualitative research is not designed to measure responses quantitatively, but future studies should do so, quantifying the overall frequency of each of these views among IRB chairs, directors, administrators, and members.
Conclusions
These data highlight a series of challenges and questions that academic medical center IRBs confront in finding, training, and retaining nonaffiliated and nonscientific members: who they are, whom, if anyone, they represent, what they should do, and what they in fact do. These issues require additional attention to ensure that IRBs function as effectively as possible to protect human research subjects.
Acknowledgments: The author wishes to thank Patricia Contino, Meghan Sweeney, Jason Keehn, Renée Fox, and Paul Appelbaum for their assistance with this report.
Funding/Support: The NIH (R01-NG04214) and the National Library of Medicine (5-G13-LM009996-02) funded this research.
Other disclosures: None.
Ethical approval: The New York State Psychiatric Institute/Columbia University Department of Psychiatry institutional review board granted ethical approval for this research.