DESIGN AND EVALUATION of effective prevention interventions for HIV and other sexually transmitted infections (STIs) require accurate assessment of the behaviors associated with their transmission. Yet because gathering biologic data is difficult and expensive, most such evaluations must rely on self-reported risk behaviors as their primary outcome measure; these behaviors, which may be embarrassing, sensitive, or socially undesirable to discuss, are consistently misreported in surveys.1
In general, systematic misreporting of sensitive issues takes 2 forms: underreporting of socially undesirable behaviors and overreporting of socially desirable ones.2 Studies have shown that respondents consistently underreport socially undesirable behaviors such as drug use, alcohol consumption, smoking, abortion, crime victimization, and criminal behavior, whereas they overreport socially desirable behaviors such as voting, seat belt use, energy conservation, church attendance, and exercise behavior.2 HIV/STI-related sexual behaviors can include both socially desirable activities (e.g., consistent condom use) and socially stigmatized activities (e.g., same-sex or nonvaginal sex).
Overall, survey methodologists concur that there is a net negative bias in reporting sensitive, socially undesirable behaviors; researchers therefore tend to interpret higher reporting of sensitive behaviors under more private survey conditions as more accurate, and the reverse for highly normative behaviors.3,4 This has led researchers to develop alternatives to the face-to-face interview, or interviewer-administered questionnaire, to enhance the accuracy of self-reported behaviors. In particular, self-administered questionnaires (SAQs) have been shown to increase reporting of undesirable behaviors such as abortion, alcohol consumption, and illicit drug use when compared with interviewer-administered questionnaires.1 However, basic paper-and-pencil SAQs require participants to be able to read and to have “forms literacy,” or the ability to follow specific survey instructions, select consistent responses, and correctly follow branch or skip patterns.5
More recently developed SAQ methods attempt to alleviate these challenges via computerization, including computer-assisted personal interviewing, computer-assisted self-administered interviewing (CASI), audio-CASI (ACASI), and video-CASI. Studies have shown that computerized systems increase the likelihood of reporting socially undesirable behaviors such as illicit drug use, sharing needles, sexual contact with injection drug users, male–male sex, premarital sex, a large number of sex partners, visiting sex workers, and interpersonal violence.6–8 ACASI is arguably the most commonly used computerized method, as the program reads questions aloud for respondents, rather than requiring them to read each question themselves. It has proven feasible in various international settings, including China, India, Peru, Russia, and Zimbabwe,9,10 and has also been shown to increase the reporting of socially undesirable behaviors when compared with face-to-face interviews11–20 and SAQs,6–8 although results are not consistent across all behaviors or populations. However, ACASI too may pose challenges for participants with low literacy, less education,9,10 or less experience with technology.
In resource-poor settings, several noncomputerized methods for enhancing the privacy of survey administration have been developed for low-literacy populations. For instance, a secret voting technique called “pocket-chart voting” elicits anonymous responses from a small group of participants, which are then discussed collectively.21,22 In Zimbabwe, the pocket-chart voting technique was adapted for a sexual health survey by using small portable voting boxes, divided into 3 compartments with each slot labeled a separate color. Colored strips of cardboard, matching the 3 colors on the box, were used as voting tokens, each divided into 6 sections and prelabeled with a number. Before reading each question, the interviewer wrote the respondent’s ID on the back of each token and handed it to the participant, who marked his/her response in the appropriate sections, using the lid of the box to shield the answer. After completing their responses, the participants put the token into the voting box through the slot of the same color. In Karnataka, India, a slightly different approach was used in which “polling booth” structures for each participant were separated from each other and the interviewer by a sari or sheet. Ten or 12 yes/no questions were asked of a group of respondents simultaneously, and if the response was “yes,” the participant put a card inside the box in their polling booth. The only mark on the card was the question number. At the conclusion, the aggregate number of “yes” responses was tabulated for each question (J. Blanchard, personal correspondence, 2006).
The Zimbabwe approach required a high degree of literacy among participants but allowed for more complex questions than the Karnataka method, which was limited to yes/no responses. In addition, the Zimbabwe method permitted polling booth responses to be matched to individuals. In both cases, privacy was enhanced by allowing respondents to provide nonverbal responses that neither the interviewer nor anyone else could observe.
We conducted our study, Project Parivartan, among female sex workers in Andhra Pradesh, India, where 45% of women have had no formal education23 and only 30.4% of female sex workers were literate in a comparable survey.24,25 We sought to design an approach to survey administration, which we call the “polling box,” that combines the strengths of both the Zimbabwe and Karnataka methods. Specifically, it allows each response to be individually attributable and permits response categories beyond simple “yes” and “no,” without requiring literacy. In this article, we compare responses to questions on socially desirable and undesirable HIV/STI-related sexual behavior generated by this polling box method and by face-to-face interviews.
Materials and Methods
The polling box was administered as part of a cross-sectional survey of 812 female sex workers recruited from the greater Rajahmundry area in Andhra Pradesh, India, between April 1 and June 29, 2006. To be included in the study, participants had to report being 18 years of age or older and exchanging sex for money at least once in the year before the interview. Participants were recruited using respondent-driven sampling, a modified chain-referral technique used to recruit hidden populations.25,26 Specifically, 5 initial participants, or “seeds,” were recruited from the target population and asked to distribute up to 3 coupons to members of their social networks who met the study inclusion criteria. Subsequent participants, who could only screen into the study if they had a coupon, were similarly provided 3 coupons to recruit social network members meeting the study criteria. Recent reviews have suggested that respondent-driven sampling may be the best available option for producing samples of hidden populations from which to make statistical inferences.25,27 After ensuring participants’ free and informed consent, local Telugu speakers conducted structured interviews with participants in the Parivartan field office. Interviews lasted an average of 90 to 120 minutes. The research protocol was approved by the Yale University Human Investigation Committee and the VHS-YRG Care Medical Centre Institutional Review Board in Chennai.
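The coupon-based chain referral described above can be sketched as a simple simulation. This is an illustrative sketch only: the toy network, seed choice, and coupon behavior below are assumptions, not Project Parivartan's actual recruitment data.

```python
import random
from collections import deque

def respondent_driven_sample(network, seeds, coupons_per_person=3, target_n=812):
    """Simulate coupon-based chain referral: each enrolled participant
    hands up to `coupons_per_person` coupons to eligible peers in her
    social network; only coupon holders can screen into the study."""
    random.seed(0)  # reproducible illustration
    enrolled, queue = [], deque(seeds)
    seen = set(seeds)  # each person can receive at most one coupon
    while queue and len(enrolled) < target_n:
        person = queue.popleft()
        enrolled.append(person)
        peers = [p for p in network.get(person, []) if p not in seen]
        for peer in random.sample(peers, min(coupons_per_person, len(peers))):
            seen.add(peer)
            queue.append(peer)
    return enrolled

# Toy network of 50 people: each knows the next 5 people in a ring.
network = {i: [(i + k) % 50 for k in range(1, 6)] for i in range(50)}
sample = respondent_driven_sample(network, seeds=[0, 10, 20, 30, 40], target_n=20)
```

Because seeds are enrolled before any of their recruits, the first entries of `sample` are the seeds themselves, mirroring how recruitment waves radiate outward from the initial participants.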
We selected sensitive questions for administration via the polling box method, focusing on 2 kinds of sexual behaviors: those where we suspected the social desirability of the behavior would lead toward overreporting, such as condom use, and those where we suspected the social undesirability of the behavior would lead toward underreporting, such as anal or oral sex. We also selected a series of questions unrelated to sexual behavior for administration via the polling box method, to compare responses to more and less sensitive questions. Questions were interspersed throughout the survey instrument, although the majority were located midinterview in the sexual behavior section.
Given the high degree of illiteracy in the region, we created response cards with graphic depictions, in addition to words. Based on a review of the literature28 and pilot testing of multiple pictorial scale options for appropriateness and comprehension, we created 2 types of response cards. For yes/no questions, response cards depicted a simple tick mark and cross, whereas for 5-point frequency questions (never, rarely, sometimes, usually, and always), response cards depicted 5 glasses filled with progressively more water (see Fig. 1).
To directly compare the polling box method with face-to-face interviews, every third participant was assigned to be asked a subset of questions via polling box and all other questions via face-to-face interview. Polling box participants were informed that they had been chosen by chance to mark some of their answers onto a card and deposit the card into a locked box. Participants were told that the interviewer would not know the answers and that upon completion, someone else from the study team would remove the cards from the box and enter the answers into a computer. All other participants were asked every question via face-to-face interview. The questions and their ordering were the same for both groups, but for polling box participants questions were highlighted on the questionnaire so that interviewers would recognize when they were to be administered using the polling box method. To reduce the novelty of the polling box experience, small sets of polling box questions were interspersed throughout the survey.
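The every-third assignment amounts to a simple deterministic allocation rule. The sketch below is illustrative; in particular, the phase of the count (whether the 1st or the 3rd participant is selected first) is an assumption, which is why it selects 270 rather than the study's realized 269.

```python
def assign_method(participant_index):
    """Systematic allocation: every third participant (1-based index)
    completes the sensitive subset via the polling box; all others
    answer every question face-to-face."""
    return "polling_box" if participant_index % 3 == 0 else "face_to_face"

# Over 812 participants, this phase of the rule selects 270 for the
# polling box (the realized study count was 269).
n_polling_box = sum(assign_method(i) == "polling_box" for i in range(1, 813))
```

A fixed systematic sequence like this keeps the two groups comparable in size while remaining trivial for interviewers to apply in the field.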
Polling boxes were constructed of wood, approximately 2 feet wide, 1 foot high, and 1 foot deep, with a locked removable lid large enough to shield participants’ responses (see Fig. 2). Beneath the lid, the box contained a single slot into which participants placed their response cards.
Preordered response cards matching the relevant question numbers were given to each interviewer for every selected polling box participant. Before each polling box question, interviewers read a script explaining each response category and how to complete the response card. Participants were instructed to indicate their answer by marking the appropriate response and placing the card in the box without informing the survey administrator of their selection. After each polling box interview, an office assistant discreetly collected the response cards and wrote the participant’s unique ID on the back of each for later data entry along with their face-to-face survey responses.
We designated 3 questions for which we expected affirmative responses would be socially undesirable. These included having anal sex with a client without a condom in the last 30 days, having oral sex with a client without a condom in the last 30 days, and being forced to have sex the last time they were in jail or prison. For another set of 4 questions, we expected affirmative responses to be socially desirable: condom use at last sexual encounter with paying clients (both regular and occasional) and consistent condom use with both types of paying clients (as defined by an “always” response to the question “Overall in the past 7 days, about how often did you use condoms with …”). Affirmative responses to a final series of 4 questions relating to views on community funds and positive police interactions were not expected to be either socially desirable or undesirable.
We used χ2 tests to compare both the baseline characteristics and the proportion of participants who reported specific behaviors in the polling box versus face-to-face interview groups. We calculated odds ratios with 95% confidence intervals to assess the degree of difference between behaviors reported by the 2 interview methods. We then used multiple logistic regression to adjust the odds ratios for sociodemographic characteristics, including age, caste, literacy, and marital status. All data were analyzed using SPSS (SPSS for Windows, Rel. 14.0.1, 2005; SPSS Inc., Chicago, IL).
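As a sketch of the unadjusted comparisons described above, the odds ratio with a Wald 95% confidence interval and the Pearson χ2 statistic for a 2 × 2 table can be computed directly. The cell counts in the usage example are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a/b = behavior reported / not reported in one interview group,
    c/d = behavior reported / not reported in the comparison group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table (1 df)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 20/80 reporting a behavior in one group vs. 10/90
# in the other gives OR = (20*90)/(80*10) = 2.25.
or_, lo, hi = odds_ratio_ci(20, 80, 10, 90)
chi2 = chi2_2x2(20, 80, 10, 90)
```

The adjusted odds ratios in the study come from multiple logistic regression rather than this closed-form 2 × 2 calculation; the sketch covers only the unadjusted comparisons.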
Of the 812 total participants, 269 were selected, using a systematic assignment allocation sequence (every third respondent), to complete the subset of questions via polling box. We compared sociodemographic characteristics of participants by interview method and found no statistically significant differences (all P > 0.05). Overall, the average age of the sample was 32, the majority of participants had received no formal education (54.4%), and most women were separated or divorced (67.6%). There was little variation across interview methods in women’s sex work-related characteristics, including type of sex work locale, number of clients in the last 7 days, amount paid by the last client, and having income supplemental to sex work.
For questions where we expected affirmative responses to be socially undesirable, we found that polling box respondents were consistently more likely to report undesirable behaviors than respondents in the face-to-face interviews. Unadjusted odds ratios varied from 1.2 to 1.6 (see Table 1). Although the differences between the interview methods were statistically significant for only 1 of the 3 questions, one of the statistically nonsignificant questions, relating to forced sex, had a relatively large effect size [OR 1.6 (0.3–7.6)]; the lack of statistical significance for this question may be due to the small sample size (n = 34) created by skip patterns. For all 3 questions, odds ratios adjusted for sociodemographic characteristics (age, caste, literacy, and marital status) were almost identical to the unadjusted odds ratios.
For questions where we expected an affirmative response to be socially desirable, we found that polling box participants were consistently less likely to report the desirable behavior than participants in face-to-face interviews. Unadjusted odds ratios varied from 0.7 to 0.9. Odds ratios adjusted for sociodemographic characteristics (age, caste, literacy, and marital status) were similar to the unadjusted ratios and were statistically significant or marginally significant in all but 1 instance.
We found no differences between the responses of polling box and face-to-face interview participants for all but 1 of the 4 questions whose responses we considered neither socially desirable nor undesirable.
During survey implementation, interviewers reported that some participants tended to breach the polling box protocol. To investigate whether some participants were unable, or chose not, to follow the polling box instructions, a question was added for the last 112 polling box participants. Specifically, interviewers were asked to indicate whether, for any polling box question in the session, the respondent either (a) said her answer out loud or (b) pointed to her answer on the response card. In total, 65 of the last 112 polling box participants (58%) breached the polling box protocol by informing the administrator of at least one of their responses. Those who breached protocol were more likely to be illiterate and to have no formal education; however, we found no significant differences between the responses given by those who breached protocol and those who did not (results not shown).
To test whether the differences between the polling box and face-to-face methods might be magnified among those who more closely followed the polling box protocol (i.e., nonbreachers; n = 47), we conducted a post hoc analysis (see Table 2). For questions where we expected an affirmative response to be socially undesirable, the adjusted odds ratios increased to between 1.8 and 3.8 and were significant in all but one case. For what is likely the most sensitive of these questions (relating to anal sex), the difference was significant; for one of the others it was marginally significant, and for the third the sample size was too small to test for significance. For questions where we expected an affirmative response to be socially desirable, adjusted odds ratios decreased to between 0.3 and 0.5. There were no differences for any of the 4 questions whose responses we considered neither socially desirable nor undesirable, including the 1 question that differed in Table 1 (“Went to the police to speak for the rights of sex workers in the past 6 months”).
Our study sought to extend earlier efforts to improve the accurate reporting of sensitive behaviors among a low-literacy population in a resource-poor setting. Previously, in contexts where self-administered or computer-assisted methods have not been possible, alternative methods have either been limited to aggregate data or constrained to simple dichotomous responses. To enhance privacy without literacy requirements, the “polling box” method was designed to permit response categories beyond simple “yes” and “no” while allowing each response to be individually attributable.
The consistently higher reporting of risky sexual behaviors among polling box participants suggests that this method increased respondents’ willingness to report socially undesirable behaviors. Likewise, the consistently lower reporting of condom use among polling box participants points to a reduced likelihood of reporting socially desirable behaviors. These findings are further supported by the absence of conclusive response patterns among the sensitivity-neutral items also embedded in the survey. These trends occurred despite a substantial amount of breaching of the intended confidentiality of the polling box by participants (e.g., saying the response out loud). Furthermore, post hoc analyses showed that the effect of the polling box was strongest among those who followed its intent by not breaching the confidentiality of their responses. This suggests that the purpose and procedures of the polling box need to be stressed and made clear to participants; if these additional efforts are made, the method could yield more accurate reporting of sensitive and socially desirable behaviors.
Like any study, ours must contend with limitations. To systematically compare the polling box and face-to-face methods, we sought to reduce the novelty of the polling box experience by embedding the polling box questions within a larger survey. However, because the polling box questions were administered within a larger face-to-face interview, we do not know whether we successfully created a sense of increased anonymity from the interviewer. Another limitation is that the high proportion of participants who breached the polling box protocol suggests that, despite our best efforts, we may not have adequately conveyed the polling box’s purpose or procedures, limiting the validity of the overall data. However, our data showed trends in the expected direction despite these breaches of protocol. Finally, sample size may have limited the power to show differences between the groups (especially for the subanalyses of those who did not breach the protocol). Most of the differences had odds ratios showing small-to-medium effect sizes; future studies with larger sample sizes are therefore needed to more definitively test the utility of the polling box approach.
In sum, the trends in our findings suggest that the polling box approach is a promising technique that warrants further development. Additional research is needed to test the method as we have described it, use a broader range of sensitive questions, examine the causes and consequences of breaching, and test logistical adaptations of the polling box approach.
1. Tourangeau R, Smith TW. Asking sensitive questions: The impact of data collection mode, question format, and question context. Public Opin Q 1996; 60:275–304.
2. Tourangeau R, Rips L, Rasinski K. The Psychology of Survey Response. Cambridge: Cambridge University Press, 2000.
3. Gribble JN. Interview mode and measurement of sexual behaviors: Methodological issues. J Sex Res 1999; 36:16–24.
4. Turner C, Miller H, Rogers SM. Survey measurement of sexual behavior: Problems and progress. In: Bancroft J, ed. Researching Sexual Behavior. Bloomington, Indianapolis: Indiana University Press, 1997:37–60.
5. Al-Tayyib AA, Rogers SM, Gribble JN, et al. Effect of low medical literacy on health survey measurements. Am J Public Health 2002; 92:1478–1480.
6. Le LC, Blum RW, Magnani R, et al. A pilot of audio computer-assisted self-interview for youth reproductive health research in Vietnam. J Adolesc Health 2006; 38:740–747.
7. O'Reilly JM, Hubbard ML, Lessler JT, et al. Audio and video computer assisted self-interviewing: Preliminary tests of new technologies for data collection. J Off Stat 1994; 10:197–214.
8. Turner CF, Ku L, Rogers SM, et al. Adolescent sexual behavior, drug use, and violence: Increased reporting with computer survey technology. Science 1998; 280:867–873.
9. NCHSPT Group. The feasibility of audio computer-assisted self-interviewing in international settings. AIDS 2007; 21(suppl 2):49–58.
10. van de Wijgert J, Padian N, Shiboski S, et al. Is audio computer-assisted self-interviewing a feasible method of surveying in Zimbabwe? Int J Epidemiol 2000; 29:885–890.
11. Des Jarlais DC, Paone D, Milliken J, et al. Audio-computer interviewing to measure risk behaviour for HIV among injecting drug users: A quasi-randomised trial. Lancet 1999; 353:1657–1661.
12. Ghanem KG, Hutton HE, Zenilman JM, et al. Audio computer assisted self interview and face to face interview modes in assessing response bias among STD clinic patients. Sex Transm Infect 2005; 81:421–425.
13. Hewett PC, Mensch BS, Erulkar AS. Consistency in the reporting of sexual behaviour by adolescent girls in Kenya: A comparison of interviewing methods. Sex Transm Infect 2004; 80(suppl 2):43–48.
14. Kurth AE, Martin DP, Golden MR, et al. A comparison between audio computer-assisted self-interviews and clinician interviews for obtaining the sexual history. Sex Transm Dis 2004; 31:719–726.
15. Macalino GE, Celentano DD, Latkin C, et al. Risk behaviors by audio computer-assisted self-interviews among HIV-seropositive and HIV-seronegative injection drug users. AIDS Educ Prev 2002; 14:367–378.
16. Mensch BS, Hewett PC, Erulkar AS. The reporting of sensitive behavior by adolescents: A methodological experiment in Kenya. Demography 2003; 40:247–268.
17. Metzger DS, Koblin B, Turner C, et al. Randomized controlled trial of audio computer-assisted self-interviewing: Utility and acceptability in longitudinal studies. Am J Epidemiol 2000; 152:99–106.
18. Minnis AM, Muchini A, Shiboski S, et al. Audio computer-assisted self-interviewing in reproductive health research: Reliability assessment among women in Harare, Zimbabwe. Contraception 2007; 75:59–65.
19. Simoes AA, Bastos FI, Moreira RI, et al. A randomized trial of audio computer and in-person interview to assess HIV risk among drug and alcohol users in Rio De Janeiro, Brazil. J Subst Abuse Treat 2006; 30:237–243.
20. Waruru AK, Nduati R, Tylleskar T. Audio computer-assisted self-interviewing (ACASI) may avert socially desirable responses about infant feeding in the context of HIV. BMC Med Inform Decis Mak 2005; 5:24 [serial online].
21. Gregson S, Zhuwau T, Anderson RM, et al. Is there evidence for behaviour change in response to AIDS in rural Zimbabwe? Soc Sci Med 1998; 46:321–330.
22. Srinivasan L. Tools for Community Participation: A Manual for Training Trainers in Participatory Techniques. New York: PROWWESS/UNDP, 1990.
23. International Institute for Population Sciences. 2005–2006 National Family Health Survey Fact Sheet: Andhra Pradesh. Available at: www.nfhsindia.org/pdf/AP.pdf. Accessed November 7, 2007.
24. Dandona R, Dandona L, Kumar A, et al. Demography and sex work characteristics of female sex workers in India. BMC Int Health Hum Rights 2006; 6:5.
25. Magnani R, Sabin K, Saidel T, et al. Review of sampling hard-to-reach and hidden populations for HIV surveillance. AIDS 2005; 19(2 suppl):67–72.
26. Semaan S, Lauby J, Liebman J. Street and network sampling in evaluation studies of HIV risk-reduction interventions. AIDS Rev 2002; 4:213–223.
27. Passmore C, Dobbie AE, Parchman M, et al. Guidelines for constructing a survey. Fam Med 2002; 34:281–286.