Psychometric Development of the Research and Knowledge Scale

Powell, Lauren R. BS; Ojukwu, Elizabeth BS; Person, Sharina D. PhD; Allison, Jeroan MD, MSc; Rosal, Milagros C. PhD; Lemon, Stephenie C. PhD

doi: 10.1097/MLR.0000000000000629
Original Articles

Background: Many research participants are misinformed about research terms, procedures, and goals; however, no validated instruments exist to assess individuals’ comprehension of health-related research information. We propose research literacy as a concept that incorporates understanding about the purpose and nature of research.

Objectives: We developed the Research and Knowledge Scale (RaKS) to measure research literacy in a culturally and literacy-sensitive manner. We describe its development and psychometric properties.

Research Design: Qualitative methods were used to assess the perspectives of research participants and researchers, and reviews of the literature and of informed consent documents informed initial item development. These data were used to develop the initial domains and items of the RaKS, which were refined through expert panel reviews and cognitive pretesting. We then conducted psychometric analyses to evaluate the scale.

Subjects: The cross-sectional survey was administered to a purposive community-based sample (n=430) using a Web-based data collection system and paper surveys.

Measures: We performed classical test theory analyses of individual items, assessed test-retest reliability, and used the Kuder-Richardson-20 to estimate internal consistency. We conducted exploratory factor analysis and analysis of variance to assess differences in mean research literacy scores across sociodemographic subgroups.

Results: The RaKS comprises 16 items, with a Kuder-Richardson-20 estimate of 0.81 and a test-retest reliability of 0.84. There were differences in mean scale scores by race/ethnicity, age, education, income, and health literacy (all P<0.01).

Conclusions: This study provides preliminary evidence for the reliability and validity of the RaKS. This scale can be used to measure research participants’ understanding about health-related research processes and identify areas to improve informed decision-making about research participation.

Departments of Quantitative Health Sciences and of Medicine, Division of Preventive and Behavioral Medicine, University of Massachusetts Medical School, Worcester, MA

Supported by grant funding from the following sources: National Institutes of Health, National Institute on Minority Health and Health Disparities, Grant #: 5 P60 MD006912; and Centers for Disease Control and Prevention, UMass Worcester Prevention Research Center, Grant #: 5 U48 DP005031 and 5 U48 DP001933.

The authors declare no conflict of interest.

Reprints: Stephenie C. Lemon, PhD, Department of Medicine, Division of Preventive and Behavioral Medicine, University of Massachusetts Medical School, 55 Lake Avenue North, Worcester, MA 01655. E-mail: stephenie.lemon@umassmed.edu.

This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially. http://creativecommons.org/licenses/by-nc-nd/4.0/

Medical researchers have an ethical and legal obligation to thoroughly inform research participants about studies for which they volunteer.1 The informed consent process was developed to protect participants from harm and to promote informed decision-making.2,3 Despite advances in research ethics and standardization of the informed consent process, many research participants remain misinformed about research terms, procedures, and goals.4–9

A meta-analysis of recent clinical trials measuring participant understanding of informed consent10 showed that 25%–50% of research participants did not understand specific components of informed consent; estimates remained consistent over the last 3 decades.10 Poor comprehension of informed consent is coupled with misunderstanding of therapeutic aspects of clinical trials. Some research participants believe that research is done for their personal advantage, rather than for generalized knowledge or future patients’ benefits.7

Several tools have been developed to assess comprehension of informed consent and the research process.11 However, few have been validated,11–13 and their effectiveness remains unexplored.11 Among existing scales, none addressed the concepts of interest here or was developed for diverse groups; existing scales typically measure only selected aspects of health-related research, such as understanding of informed consent or therapeutic misconception. To our knowledge, no instrument assesses comprehensive understanding of health-related research, a significant gap when conducting studies among vulnerable and diverse populations. Given the need to elucidate knowledge gaps among diverse research participants, validated surveys that assess comprehension in a literacy-sensitive and culturally sensitive manner are essential.

We propose research literacy as a comprehensive concept incorporating individuals’ understanding of the goals and nature of health-related research with informed decision-making about research participation.14 We define research literacy as “the capacity to obtain, process, understand, and act on basic information needed to make informed decisions about research participation.” Our definition, adapted from the US Surgeon General’s definition of health literacy,15 was developed using mixed-method approaches with lay and expert participants. We sought to develop a novel scale, the Research and Knowledge Scale (RaKS), to assess general understanding of research by prospective research participants and the public, in a manner sensitive to diverse cultural backgrounds and literacy levels. This manuscript describes the development and psychometric properties of the RaKS.


METHODS

Developing the RaKS

We took a multistep approach to developing the RaKS, depicted in Figure 1. Health-related research was defined as any health-related study with human participants. We first conducted a literature review and synthesized best practices of the informed consent process by reviewing basic informed consent forms. We then conducted qualitative research capturing the perspectives of research participants and researchers. Initial domains and items were developed, reviewed through expert panels, and refined through cognitive pretesting. A community-based survey was administered to conduct psychometric analysis and finalize the scale. All procedures were approved by the University of Massachusetts Medical School Institutional Review Board.

FIGURE 1


Literature and Informed Consent Reviews

We performed a comprehensive literature search pertaining to patients’ understanding of research using the PubMed, Google Scholar, and PsycINFO databases and the search terms “patient AND understanding AND research,” “understanding AND research,” and “patient AND confusion AND research.” After title and abstract review, 22 articles were identified and reviewed for common themes and relevance. We coded findings into themes representing unique areas of confusion for participants while concurrently reviewing generic informed consent templates.


Focus Groups

To inform development of the domains of research knowledge and understanding, and of the resulting scale, we conducted 8 focus groups with 80 former research participants (22 African American, 32 Latino, and 26 non-Latino white). During Summer 2013, we held focus groups at 3 Massachusetts locations: 2 groups in Worcester (UMass Medical School), 2 in Lawrence (Lawrence Senior Center), and 4 in Roxbury (Reggie Lewis Center), all facilitated by L.R.P. using a scripted guide of open-ended questions. Participants were asked to share perspectives on their research experience, including: (1) learning about the study; (2) deciding whether to be in the study; (3) the informed consent process; and (4) advice for others about research. The focus group guide, based on concepts covered in an informed consent form, was developed by L.R.P. and refined by study team members. Questions included: “Can you tell me the details about the research study you were a part of?”, “Can you explain how you signed up for the study?”, and “How well do you think the study was explained to you?” Focus groups were audiorecorded, and responses were coded by L.R.P. using thematic analysis to group common subjects and identify recurring themes. Focus groups revealed important areas of misunderstanding for research participants. Transcripts and thematic analysis coding were reviewed by L.R.P. and another research team member.


Initial Survey Item Format

Combining results from the literature and informed consent reviews and the focus groups, we identified 8 potential domains of research literacy: understanding of the goals of research, human subjects protections, ethical research conduct, randomization and experimentation, the relationship between research and treatment, confidentiality, research as a choice, and researcher responsibility.14 Each reflects an important factor inherent in all types of health-related research studies. An initial bank of 22 survey items based on these domains was drafted. Participants were asked to indicate whether each statement was True or False. Statements were worded both positively (eg, “Health-related research studies are done to provide data for medical decision-making”) and negatively (eg, “People who take part in health-related research do not have legal rights”) to add variety and limit respondent reporting bias.


Refining the RaKS

Cognitive Pretesting

We conducted 15 cognitive pretesting interviews on the initial 22 survey items. Participants were community members identified through postings on Craigslist, emailed invitations, and word of mouth. L.R.P. conducted individual 60-minute interviews following a scripted guide. Participants (1) decided whether each statement was true or false, (2) paraphrased each item in their own words, identifying words or phrases that were confusing, and (3) described how they decided on the answer to each question. Interviews were conducted in person, by phone, and by video chat using FaceTime and Google Hangouts. Participants received a $25 Target gift card for their time.


Expert Panel Review

A panel of research experts (researchers, scientific thought leaders, and former/current research participants) was assembled to review the 22 initial survey items and assess content validity. L.R.P. conducted individual interviews with 10 individuals (6 researchers and 4 research participants). Each expert was asked to assess the relevance, clarity, and conciseness of the items. We calculated a content validity index score for the scale, which indicates consensus among field experts on the appropriateness of the topics included.16 We calculated an individual rating score for each expert panelist by dividing the number of items that panelist rated as having high relevance and clarity by the total number of items in the scale. The average of these values was taken as the scale content validity index score.
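For illustration, the following is a minimal sketch of this computation; the ratings matrix and variable names are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of the scale content validity index (S-CVI) computation
# described above. The ratings are hypothetical: rows are the 10 expert
# panelists, columns are the 22 initial items, and a 1 marks an item that
# the panelist rated as having high relevance and clarity.
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.integers(0, 2, size=(10, 22))  # placeholder 0/1 ratings

per_expert = ratings.sum(axis=1) / ratings.shape[1]  # proportion of items endorsed per panelist
scale_cvi = per_expert.mean()                        # average across panelists
print(f"Scale content validity index: {scale_cvi:.2f}")
```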


Testing the RaKS

Sample

We conducted a cross-sectional administration of the RaKS using purposive sampling (n=430). We aimed to recruit a sample that was diverse with respect to age, race/ethnicity (mostly African Americans, Latinos, and whites), socioeconomic status (low, middle, and high), and sex. Participants were US residents at least 18 years of age, English speaking, and cognitively able to provide informed consent to complete the survey. A multitiered recruitment strategy included engagement of community partners, attendance at community-based events, email blasts, and Web-based posts on social media (Twitter, Craigslist).


Administration

The Research Electronic Data Capture (REDCap) Web-based system, accessed through the University of Massachusetts, was used to administer, store, and manage survey data. The survey was self-administered. Participants recruited in person could complete the survey on paper or online through wireless tablets. Individuals recruited through social media, email, and Craigslist were sent a link to their email address to complete the survey on their own Web-enabled device. This embedded link was specific to the participant’s email address and could not be forwarded for completion by anyone else. Data from surveys completed in person at community events were entered into REDCap by study staff.

In addition to the RaKS items, we collected data on age, race/ethnicity, sex, level of education, health literacy, and perceived income. To assess health literacy, we used the question, “How comfortable are you filling out medical forms by yourself?” (extremely, quite a bit, somewhat, a little bit, not at all).17,18 We used a perceived income variable developed by community-engaged researchers at UMass Medical School: “In general, would you say you (and your family living in the same household) have more money than you need, just enough money for your needs, or not enough money to meet your needs?”

Participants who indicated willingness to complete the RaKS a second time 2 weeks after their initial survey completion were asked to provide their email address for follow-up. They were sent an automated email 14 days later inviting them to complete the RaKS again.


Psychometric Analyses

Figure 2 outlines the psychometric analyses conducted to evaluate the RaKS. We initially incorporated an “I don’t know” answer option in response to feedback from cognitive pretesting of the scale and to discourage guessing, but ultimately collapsed “I don’t know” responses into the incorrect response category for each item for analysis. For all analyses, we recoded respondents’ answers as 1=correct, 0=incorrect.

FIGURE 2
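A minimal sketch of this scoring rule follows; the item names, responses, and answer key are hypothetical placeholders, not study data.

```python
# Minimal sketch of the scoring rule described above: any response other
# than the keyed answer, including "I don't know", is scored as incorrect,
# and each item is recoded as 1=correct, 0=incorrect.
import pandas as pd

raw = pd.DataFrame({                       # hypothetical raw responses
    "item01": ["True", "False", "I don't know"],
    "item02": ["False", "I don't know", "True"],
})
answer_key = {"item01": "True", "item02": "False"}  # hypothetical key

scored = pd.DataFrame({
    item: (raw[item] == key).astype(int)   # "I don't know" collapses to 0
    for item, key in answer_key.items()
})
total_score = scored.sum(axis=1)           # RaKS score = number correct
print(scored, total_score, sep="\n")
```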

First, we assessed individual item characteristics and the item-test correlation for each RaKS item. We checked items for missingness and summarized the mean, SD, and item-test correlation. Items were eliminated at this step for low item-test correlation (r<0.40). Second, we conducted exploratory factor analysis. Using a polychoric correlation matrix to account for our binary survey response options, we built exploratory models including all remaining items with high item-test correlations. Models were rotated using varimax rotation to simplify interpretation of loadings. We evaluated the exploratory factor loadings for each individual item, retaining items with loadings of r>0.40 on a factor.19,20 Items with low (r<0.40) or negative loadings were dropped. We evaluated whether items cross-loaded on multiple factors and whether the grouping of individual items loading onto each factor made conceptual sense. Third, to assess internal consistency reliability, we calculated the Kuder-Richardson-20 (KR-20) statistic21 for the overall scale and by administration method (online vs. paper). A canonical correlation estimate was calculated to evaluate test-retest reliability of the scale.22,23
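As a rough illustration of the first and third steps, here is a minimal sketch computing item-test correlations and the KR-20 statistic from a binary response matrix; the data are simulated placeholders, not study responses.

```python
# Minimal sketch of two classical test theory quantities used above,
# computed from a binary (1=correct, 0=incorrect) matrix X with rows as
# respondents and columns as items. X here is simulated placeholder data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(430, 16)).astype(float)

total = X.sum(axis=1)

# Item-test correlation: each item against the total score (item included),
# mirroring the r < 0.40 elimination rule described above.
item_test_r = np.array([np.corrcoef(X[:, j], total)[0, 1]
                        for j in range(X.shape[1])])

# Kuder-Richardson-20 for dichotomous items:
# KR-20 = k/(k-1) * (1 - sum(p*q) / var(total)), p = proportion correct.
k = X.shape[1]
p = X.mean(axis=0)
kr20 = (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total.var(ddof=1))

print(item_test_r.round(2), round(kr20, 2))
```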

To assess convergent validity, we also conducted analyses of variance (ANOVA) to test differences in mean research knowledge scores across sociodemographic subgroups, and we examined the KR-20 reliability of the RaKS within subgroups. We hypothesized that mean research knowledge scores would be significantly higher among non-Latino whites, women, and those with higher education, perceived income, and health literacy compared with their counterparts. All statistical analyses were performed using STATA, version 14.
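Although the study's analyses were run in Stata, the subgroup comparison can be sketched as follows; the DataFrame, column names, and values are hypothetical.

```python
# Minimal sketch of a one-way ANOVA comparing mean RaKS scores across a
# sociodemographic grouping; the scores and groups are hypothetical.
import pandas as pd
from scipy.stats import f_oneway

df = pd.DataFrame({
    "score": [12, 14, 9, 11, 15, 8, 13, 10],
    "education": ["college", "college", "no degree", "no degree",
                  "college", "no degree", "college", "no degree"],
})
groups = [g["score"].to_numpy() for _, g in df.groupby("education")]
F, p = f_oneway(*groups)  # tests equality of subgroup means
print(f"F = {F:.2f}, p = {p:.3f}")
```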


RESULTS

Face/Content Validity

Expert panel reviews and cognitive pretesting interviews confirmed the overall face/content validity of the scale. All 22 items initially created were retained at this stage, and no new items were developed. Minor wording changes to improve comprehension and conciseness were identified. Cognitive pretesting participants indicated the importance of adding an “I don’t know” response option to the True/False format, and we refined the RaKS to reflect this feedback. The content validity index score for the initial 22-item scale was 0.85.


Classic Item Testing of the RaKS

Table 1 shows the mean, SD, and item-test correlation for each item. Most items demonstrated variability in response. The items with the highest mean (0.83) were #1 (Health-related research studies are done to provide data for medical decision-making) and #2 (People who take part in health-related research do not have legal rights). Items were eliminated based on low item-test correlations (<0.40); thus #7 (All health-related research is experimental), #10 (Randomization means researchers choose which treatment is received by participants in a health-related research study), and #21 (Agreeing to take part in the study always involves signing a document), with correlations of 0.14, 0.11, and 0.08, respectively, were eliminated at this stage.

TABLE 1


Construct Validity

Exploratory factor analysis was performed. Factor structures were explored using eigenvalues >1 and evaluation of the scree plot. Two-factor, 3-factor, and 4-factor solutions were explored but fit the data poorly because of multiple cross-loadings, low factor loadings, or poor conceptual fit. A single-factor structure fit the data best and explained 76% of the variance in research literacy.
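For illustration, a minimal sketch of the factor-retention step follows; it uses a plain Pearson correlation matrix on simulated placeholder data in place of the polychoric matrix used in the study.

```python
# Minimal sketch of the retention criterion described above: eigenvalues of
# the item correlation matrix are inspected, and factors with eigenvalues
# > 1 are candidates for retention (Kaiser criterion, used alongside the
# scree plot and conceptual fit).
import numpy as np

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(430, 16)).astype(float)  # simulated responses

R = np.corrcoef(X, rowvar=False)           # 16 x 16 item correlation matrix
eigenvalues = np.linalg.eigvalsh(R)[::-1]  # sorted largest first
n_factors = int((eigenvalues > 1).sum())   # candidate factor count
print(eigenvalues.round(2), n_factors)
```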


Test-Retest Reliability

We assessed stability of answers in a subsample of respondents (n=84) over 14 days. The canonical correlation for test-retest reliability of the scale was 0.84.
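As a simplified illustration (not the study's exact computation), when each administration is summarized by a single total score, the canonical correlation between the two occasions reduces to the absolute Pearson correlation; the scores below are hypothetical.

```python
# Minimal sketch of a test-retest check on two score vectors; with one
# variable per occasion the canonical correlation equals |Pearson r|.
import numpy as np

test = np.array([12, 9, 15, 11, 13, 8])    # hypothetical time-1 totals
retest = np.array([13, 9, 14, 10, 13, 9])  # hypothetical time-2 totals
r = np.corrcoef(test, retest)[0, 1]
print(f"Test-retest correlation: {r:.2f}")
```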


Internal Consistency Reliability

We assessed the KR-20 reliability of the scale by administration method. Reliability did not differ greatly between online (KR-20=0.82) and paper (KR-20=0.79) administration, so further evaluation of the scale stratified by method was unnecessary. The internal consistency reliability for the full RaKS using the KR-20 was 0.81.


Convergent Validity: Demographic Differences in Mean Research Literacy Scale Score

Mean RaKS scores and KR-20 reliability estimates by sociodemographic subgroup are detailed in Table 2. There were statistically significant differences in mean scores by race/ethnicity, age, education, perceived income, and health literacy. Persons over age 50 (50–64 and 65+ vs. 18–34 and 35–49), with a college degree (vs. without), who perceived their income as enough to meet their needs (vs. not enough), and with high health literacy (vs. low health literacy) had higher mean RaKS scores (all P<0.01). No sex differences were observed.

TABLE 2


DISCUSSION

Mandates from the National Institutes of Health, the Food and Drug Administration, and the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research articulate the necessity of processes and methods to ensure that participants in health-related research studies have a clear understanding of the studies they participate in and are able to make informed, uncoerced decisions about participation.24–28 The RaKS is responsive to such mandates and was developed to measure individuals’ “capacity to obtain, process, and understand basic information needed to make informed decisions about research participation.”

Our findings support the preliminary internal consistency reliability and validity of the RaKS as a tool to assess individual understanding of health-related research procedures and expectations. The good internal consistency estimate (KR-20=0.81) and test-retest reliability (r=0.84) for the RaKS suggest that the 16 items comprising the scale collectively form a consistent and preliminarily reliable measure of research knowledge and understanding. Our exploratory factor analysis results suggest that the RaKS is unidimensional, and that all 16 items assess an aspect of one’s understanding of health-related research.

We examined preliminary construct validity by evaluating mean research literacy scale scores across sociodemographic subgroups. The scale demonstrated high reliability within demographic subgroups (Table 2). Although no sex differences were observed, our hypothesis of differences in mean scores by demographic subgroup was otherwise supported. Scores varied across race/ethnicity (mean research knowledge score: 12.3 vs. 11.3 vs. 9.9 for non-Latino whites, blacks, and Latinos, respectively). These differences may reflect broader drivers of race-related treatment and racial discrimination, both within greater society29–31 and specifically within the health care sector.32–34 Such experiences may affect the way minorities perceive and interact with the health care system,32,34,35 and thus their underlying knowledge as health care consumers; in this way, experiences of race-based treatment and racial discrimination may shape research knowledge and understanding.

Research understanding and knowledge were also lower among those with lower education and health literacy levels, consistent with literature indicating that level of education and health literacy proficiency are associated with generally better-informed health care consumers.36–40 Research knowledge was also higher among older participants, particularly those over age 50. Plausible explanations include prolonged exposure to the health care system over the lifespan, or increased e-health literacy (use of the Internet and social media to locate and evaluate health information) and health consumerism in this age group.41

The variation in scores across sociodemographic subgroups demonstrates the potential of the RaKS to discriminate differences across substrata of the general population. The observed differences in levels of understanding by demographics coincide with literature identifying characteristics such as race/ethnicity as associated with a lower likelihood of participating in health-related research.6,42–44

Misperceptions about health-related research may deter racial/ethnic minorities and individuals of low socioeconomic status from participating in research.44 With growing racial/ethnic and socioeconomic diversity,45 researchers need to engage broad groups of potential participants by ensuring that communication is clear and effective. The scale holds promise as a potential screener to verify participants’ understanding of research expectations and procedures before study enrollment. We envision the RaKS being administered by research assistants to prospective research participants before obtaining informed consent, or within community settings as a baseline assessment of how well individuals understand research, informing future interventions. Such interventions could increase engagement of diverse populations.

Our findings should be viewed within the context of certain limitations. First, the RaKS was administered as a cross-sectional survey at 1 timepoint (except for test-retest reliability participants). We cannot draw definitive longitudinal conclusions about research knowledge or research literacy as either a trait or a state. Plausibly, research literacy is similar to health literacy, a trait for which proficiency is hypothesized to be context specific and situation specific.46 But the overarching concept of research literacy should be viewed as separate from health, scientific, and general literacy. The world of health-related research has very specific goals, jargon, and outcomes, so understanding the multiple facets of health-related research requires knowledge specific to these nuances, distinguishing research literacy as a separate yet necessary concept. We observed a ceiling effect, indicating that the RaKS has limited ability to further distinguish among relatively well-informed respondents with very high scores. Conversely, this ceiling effect emphasizes the tool’s potentially strong ability to identify individuals who score lower and therefore struggle with understanding health-related research information, which is its purpose.

Second, it is possible that there are context-specific facets of participation in health-related research that we were not able to assess through the RaKS. The focus group participants in the formative phase of survey development had been involved in survey studies, behavioral intervention studies, and community-based studies, and it is possible that their answers about research were colored by the nature of the research in which they participated. We recognize that we could not accommodate the unique aspects of the full range of health-related research studies, so we chose to focus on the core understanding essential to being an informed research participant, regardless of the type of study in which one may choose to participate. Further work on this topic might include subscales specific to research literacy for different types of research studies.

Third, we did not assess the sensitivity of the RaKS to change in the context of an intervention; this warrants future investigation. Fourth, we recognize that the vocabulary and reading level of the items included in our scale may be rather sophisticated. We completed comprehensive cognitive pretesting and pilot-phase testing with diverse lay community members to address this. Yet some of the challenges with the reading and vocabulary level of the scale relate directly to the very jargon that researchers use to communicate about research. This adds further credence to the need for a concept such as research literacy, which may prompt researchers to recognize the bidirectional communication skills needed to work and communicate effectively with research participants. Finally, the potential for bias from self-report and guessing is a threat to any self-administered psychometric assessment. Participants were asked to respond to RaKS items covering information to which they may have been exposed in the past or may never have known, and we cannot guarantee that guessing did not occur.

The RaKS attempts to evaluate how well individuals process and understand health-related research. To our knowledge, the scale is the first of its kind to: (1) evaluate the concept of research literacy in a diverse sample, (2) rely on both qualitative and literature findings and conceptual grounding as the basis for defining and measuring research literacy, and (3) incorporate the perspectives of both former/current research participants and researchers in its development. Research literacy is a new and dynamic concept that considers how individuals process and understand the written and verbal information necessary for making an informed decision about initial and ongoing health research participation. As the processing of written and verbal information is an underlying tenet of literacy, the term research literacy is an appropriate concept to capture this topic.

The RaKS is a tool that could be used for screening to better facilitate research participants’ understanding before they consent to a study. Both the domain of research knowledge and understanding and its accompanying scale are foundational elements of research literacy, created and defined through our study. Our study should prompt continued investigation to uncover other domains and components of the broader concept of research literacy. Future research should explore Rasch modeling to further refine the scale, examine whether levels of research literacy are associated with willingness to participate in research, and expand the operationalization and application of the concept of research literacy. The RaKS has the potential to foster transparency toward long-term improvements in engaging and communicating with research participants.


ACKNOWLEDGMENTS

The authors would like to acknowledge the following individuals for their contributions to the qualitative research and data collection portions of this project (Dr Heather Lyn-Haley, Dr Laura Hayman, Dr Suzanne Cashman, and Robert Gakawaya) and for scholastic contributions to the execution of this manuscript (Dr Carol Bova, UMMS CPHR classmates, Hilary Powell, Chris Powe, Dr Tamara Butler, and Dr Sarah Ann-Anderson). The authors would also like to thank the community partners and members who welcomed them into their communities to conduct focus groups and administer the survey, including: Mosaic Cultural Complex (Worcester, MA); Lawrence Senior Center (Lawrence, MA); Reggie Lewis Center (Roxbury, MA); Ashmont Neighborhood Association (Ashmont, MA); Urban League of Eastern Massachusetts (Roxbury, MA); YMCA of Central Massachusetts (Worcester, MA); and Greendale YMCA (Worcester, MA).


REFERENCES

1. Annas GJ. Doctors, patients, and lawyers—two centuries of health law. N Engl J Med. 2012;367:445–450.
2. Jefford M, Moore R. Improvement of informed consent and the quality of consent documents. Lancet Oncol. 2008;9:485–493.
3. Will JF. A brief historical and theoretical perspective on patient autonomy and medical decision making: part I: the beneficence model. Chest. 2011;139:669–673.
4. Knifed E, Lipsman N, Mason W, et al.. Patients’ perception of the informed consent process for neurooncology clinical trials. Neuro Oncol. 2008;10:348–354.
5. Corbie-Smith G, Williams IC, Blumenthal C, et al.. Relationships and communication in minority participation in research: multidimensional and multidirectional. J Natl Med Assoc. 2007;99:489.
6. Barata PC, Gucciardi E, Ahmad F, et al.. Cross-cultural perspectives on research participation and informed consent. Soc Sci Med. 2006;62:479–490.
7. United States. Advisory Committee on Human Radiation Experiments. The Human Radiation Experiments. New York, NY: Oxford University Press; 1996.
8. Burke W, Evans BJ, Jarvik GP. Return of results: ethical and legal distinctions between research and clinical care. Am J Med Genet C Semin Med Genet. 2014;166C:105–111.
9. Freimuth VS, Quinn SC, Thomas SB, et al.. African Americans’ views on research and the Tuskegee Syphilis Study. Soc Sci Med. 2001;52:797–808.
10. Tam NT, Huy NT, Thoa LTB, et al.. Participants’ understanding of informed consent in clinical trials over three decades: systematic review and meta-analysis. Bull World Health Organ. 2015;93:186–198H.
11. Montalvo W, Larson E. Participant comprehension of research for which they volunteer: a systematic review. J Nurs Scholarsh. 2014;46:423–431.
12. Joffe S, Cook EF, Cleary PD, et al.. Quality of informed consent: a new measure of understanding among research subjects. J Natl Cancer Inst. 2001;93:139–147.
13. Sugarman J, Lavori PW, Boeger M, et al.. Evaluating the quality of informed consent. Clin Trials. 2005;2:34–41.
14. Powell L, Person S, Allison J, et al.. Research literacy: a conceptual framework to inform individual understanding of health-related research. Patient Educ Couns. Under review.
15. Institute of Medicine. Health Literacy: A Prescription to End Confusion. Bethesda, MD: National Institute of Health; 2004. Available at: https://www.iom.edu/~/media/Files/Report%20Files/2004/Health-Literacy-A-Prescription-to-End-Confusion/healthliteracyfinal.pdf.
16. Waltz CF. Measurement in Nursing and Health Research. New York: Springer Publishing Company; 2005.
17. Wallace LS, Rogers ES, Roskos SE, et al.. Brief report: screening items to identify patients with limited health literacy skills. J Gen Intern Med. 2006;21:874–877.
18. Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Health. 2004;11:12.
19. Nunnally J, Bernstein I. Psychometric Theory, 3rd ed. New York: McGraw-Hill; 1994.
20. Stevens J. Applied Multivariate Statistics for the Social Sciences. Mahwah, NJ: Lawrence Erlbaum Associates; 1992.
21. Kuder GF, Richardson MW. The theory of the estimation of test reliability. Psychometrika. 1937;2:151–160.
22. Hair JF, Black WC, Babin BJ, et al.. Multivariate Data Analysis. Upper Saddle River, NJ: Pearson Prentice Hall; 2006.
23. Koch HJ, Gurtler K, Fischer-Barnicol D, et al.. Determination of reliability of psychometric tests in psychiatry using canonical correlation. Psychiatr Prax. 2003;30(suppl 2):S157–S160.
24. Hawkins JS, Emanuel EJ. Clarifying confusions about coercion. Hastings Cent Rep. 2005;35:16–19.
25. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Bethesda, MD: ERIC Clearinghouse; 1978.
26. Wendler D, Miller FG. Deception in the pursuit of science. Arch Intern Med. 2004;164:597–600.
27. Emanuel E, Abdoler E, Stunkel L. (NIH) Research Ethics: How to Treat People Who Participate in Research. Bethesda, MD: National Institutes of Health Clinical Center, Department of Bioethics; 2014. Available at: http://www.bioethics.nih.gov/education/pdf/FNIH_BioethicsBrochure_WEB.PDF.
28. US Department of Health and Human Services Food and Drug Administration Office of Good Clinical Practice. Informed Consent Information Sheet Guidance for IRBs, Clinical Investigators, and Sponsors. Bethesda, MD: US Department of Health and Human Services; 2014. Available at: http://www.fda.gov.dowloads.RegulatoryInformation/Guidances/UCM405006.pdf.
29. Feagin JR, Sikes MP. Living With Racism: The Black Middle-Class Experience. Boston: Beacon Press; 1994.
30. Henry PJ, Sears DO. Symbolic and Modern Racism. Encyclopedia of Race and Racism. Farmington Hills, MI: Macmillan; 2008.
31. Clark R, Anderson NB, Clark VR, et al.. Racism as a stressor for African Americans. A biopsychosocial model. Am Psychol. 1999;54:805–816.
32. Doescher MP, Saver BG, Franks P, et al.. Racial and ethnic disparities in perceptions of physician style and trust. Arch Fam Med. 2000;9:1156–1163.
33. Harrell CJ, Burford TI, Cage BN, et al.. Multiple pathways linking racism to health outcomes. Du Bois Rev. 2011;8:143–157.
34. Halbert CH, Armstrong K, Gandy OH Jr, et al.. Racial differences in trust in health care providers. Arch Intern Med. 2006;166:896–901.
35. Charatz-Litt C. A chronicle of racism: the effects of the white medical community on black health. J Natl Med Assoc. 1992;84:717–725.
36. Williams MV, Parker RM, Baker DW, et al.. Inadequate functional health literacy among patients at two public hospitals. JAMA. 1995;274:1677–1682.
37. Parker RM, Williams MV, Weiss BD, et al.. Health literacy—report of the council on scientific affairs. JAMA. 1999;281:552–557.
38. Simonds S. Health education as social policy. Health Educ Monogr. 1974;2:1–10.
39. Yen IH, Moss N. Unbundling education: a critical discussion of what education confers and how it lowers risk for disease and death. Ann N Y Acad Sci. 1999;896:350–351.
40. Berkman ND, Sheridan SL, Donahue KE, et al.. Low health literacy and health outcomes: an updated systematic review. Ann Intern Med. 2011;155:97–107.
41. Tennant B, Stellefson M, Dodd V, et al.. eHealth literacy and Web 2.0 health information seeking behaviors among baby boomers and older adults. J Med Internet Res. 2015;17:e70.
42. Baquet CR, Commiskey P, Daniel Mullins C, et al.. Recruitment and participation in clinical trials: socio-demographic, rural/urban, and health care access predictors. Cancer Detect Prev. 2006;30:24–33.
43. Aristizabal P, Singer J, Cooper R, et al.. Participation in pediatric oncology research protocols: racial/ethnic, language and age-based disparities. Pediatr Blood Cancer. 2015;62:1337–1344.
44. Calderon JL, Baker RS, Fabrega H, et al.. An ethno-medical perspective on research participation: a qualitative pilot study. MedGenMed. 2006;8:23.
45. US Census Bureau. Statistical Abstract of the United States. The National Data Book. 2012. Available at: http://www.census.gov/library/publication/2011/compendia/statab/131ed.html. Accessed August 17, 2016.
46. Baker DW. The meaning and the measure of health literacy. J Gen Intern Med. 2006;21:878–883.
Keywords:

patient and health communication; research ethics; measurement development

Copyright © 2017 Wolters Kluwer Health, Inc. All rights reserved.