When doing research, there are many ways to get into trouble.1 Some involve intentionally engaging in unethical behavior—for example, fabricating data for studies2 or conducting experiments of unknown safety on humans without informed consent.3,4 Thankfully, such behavior appears to be rare, and researchers who engage in it are frequently fired and debarred from funding.3 But there is another, much larger subset of less egregious behaviors that can nevertheless cause serious problems for investigators, institutions, human participants, and animal subjects, and can compromise the integrity of experimental data. These include failing to obtain signatures to document informed consent, deviating from anesthesia protocols in animal research, and neglecting to oversee raw data analyzed by trainees (thereby increasing the risk of data falsification). Such lapses may reflect inattention rather than an intention to commit wrongdoing; yet they can lead to serious disciplinary actions from the Food and Drug Administration, the Office of Laboratory Animal Welfare, or the U.S. Office of Research Integrity. Researchers may then find their research privileges suspended, while institutions struggle to identify appropriate actions that will ensure such behaviors do not recur.
In this article we describe the first remediation program for researchers working in the United States who have violated such rules or regulations in science. We present the rationale behind the program and outcomes from our first nine workshops involving 39 researchers from 24 different institutions throughout the United States.
A recently conducted needs assessment survey of research administrators showed that institutions confront research violations on a regular basis, often without an effective response.5 While effective remediation programs exist for physicians who misprescribe, commit boundary violations, or are disruptive,6–11 no remediation program for researchers existed prior to the creation of the program we describe here. This gap was particularly problematic given that standard training programs in responsible conduct of research (RCR) and human subjects protections often fail to achieve their goals.12–14 Many of the most effective programs focus on knowledge rather than professional decision making or behavior,15,16 when decision making and behavior clearly need to be targeted following disciplinary action.
The Restoring Professionalism and Integrity in Research Program—now called the Professionalism and Integrity in Research Program (or PI Program)—was created in 2013 to meet the specific needs of investigators who violated rules or regulations in research. In a recent article, we described the kinds of violations that led to program referrals (most commonly failures to provide lab oversight, informed consent and recruitment violations, plagiarism, and animal care violations), and why these violations occurred (most commonly due to investigators being overextended, not prioritizing compliance, being unsure of the rules, or failing to communicate effectively).17
Because the path to research wrongdoing is clearly multifactorial, it is naïve to think that all instances of serious noncompliance or all lapses in research integrity can be prevented through proactive, one-size-fits-all education. Researchers are often overextended as they attempt to balance multiple responsibilities such as conducting research, seeking new funding, teaching, seeing patients, and tending to administrative responsibilities. Moreover, projects may be understaffed, and staff members may not be adequately prepared for their roles. Further, principal investigators are frequently high achievers and creative learners but are not always highly disciplined and detail oriented regarding matters of paperwork and documentation.17,18
Accordingly, we designed a program that would identify the root causes of individual researcher lapses and that would coach researchers on a range of compensatory and management strategies:
- Reducing bias by managing emotions, testing assumptions, and seeking help from others;
- Anticipating consequences of actions, including long-term and short-term consequences to others and themselves;
- Holding regular meetings to provide leadership and oversight of research teams; and
- Developing standard operating procedures for matters of research integrity and compliance.
Program Development Process
The PI Program was developed through an administrative supplement to the Washington University in St. Louis Clinical and Translational Science Award from the National Institutes of Health (NIH) and a partnership with faculty at Saint Louis University. The award enabled us to establish an advisory committee and a development team that comprised research ethicists, researchers, research administrators, and experts in industrial-organizational, clinical, educational, and moral psychology. (A full list of our advisory committee and development team members is available from the program’s Web site.19)
The advisory committee and development team met for a face-to-face meeting in February 2012 to discuss program goals and strategies. A team of applied psychologists at the University of Oklahoma, led by Michael Mumford, compiled materials informed by the lab’s work on sensemaking strategies, mental models, and compensatory strategies.20–22 Elizabeth Heitman at Vanderbilt University adapted Collaborative Institutional Training Initiative (CITI) training courses and knowledge questions to create modules to address specific areas of wrongdoing. John Gibbs at Ohio State University compiled materials on moral development and addressing self-serving biases.23–27 William Swiggart, codirector of the Center for Professional Health at Vanderbilt, permitted the PI Program Director (J.M.D.) to participate in a three-day workshop for disruptive physicians,7 which provided a template for intensive, small-group remediation training with professionals.
We four coauthors—all PI Program faculty—then developed the PI Program Manual, with the principal investigator (J.M.D.) producing an initial draft. While content was informed by the work of the development team members described above, all materials were developed de novo to ensure appropriateness for adult professional learning in a small-group, short-term setting using the principles of career coaching,28,29 which we deemed most likely to facilitate behavior change.
The PI Program is offered three times per year. Here, we share outcomes from the first nine workshops (the first three years), offered from January 2013 through December 2015. Following each workshop, faculty met to evaluate the curriculum and revise the manual.
The PI Program consists of preworkshop activities, a three-day on-site workshop, and postworkshop activities. The program shares information about its services and upcoming workshops through an electronic newsletter delivered to more than 3,000 research administrators identified through publicly available information. Contact and registration information are provided in each newsletter and on the PI Program Web site.19
When an individual or institution contacts the PI Program Coordinator, a brief call is arranged to determine whether the candidate is a good fit for the program. We consider the workshop to be appropriate for individuals who do empirical research at graduate or postgraduate levels. We do not train undergraduates, humanities scholars, or individuals whose difficulties arise from unmet treatment needs for substance use or mental disorders. Thus far, the only individuals who were denied enrollment were reporters and RCR instructors who wanted to observe; institutions have made only appropriate referrals. Upon registration, participants are required to complete an assessment battery that examines knowledge of RCR, professional decision-making skills, levels of compliance disengagement, personal stress, and workplace stress. Baseline data for all measures have been reported in the supplemental materials of a separate paper.17 Following the assessment battery, we conduct an enrollment interview with the prospective participant and/or an institutional official (depending on participant preferences, which may be influenced by institutional demands). During the interview we assess the nature and scope of noncompliance or other violations, learn about the kind of research being done, and determine whether the institution requires any other actions as part of a remediation plan.
The heart of the PI Program is a three-day, face-to-face workshop held in St. Louis. Workshops are facilitated by two faculty members. All PI Program faculty members hold doctoral degrees in psychology, have conducted federally funded research, and have served on institutional review boards (IRBs). Workshops are attended by three to eight participants to ensure adequate opportunities for small-group engagement. Prior to attending the workshop, all participants sign a confidentiality agreement, and, at the beginning of the workshop, faculty and participants reiterate the promise to maintain the confidentiality of workshop discussions.
Day 1 of the workshop explores the values that attracted participants to research, examines the norms we expect others to follow, investigates bias in research, surveys how stress can negatively affect decision making, and teaches a concrete stress management strategy.
Day 2 is focused on discussion of each individual’s situation. Each participant shares the circumstances precipitating his or her enrollment in the workshop, including the nature and history of the research violations. During this time, faculty and other participants collaborate in identifying ways that similar problems could be avoided in the future and also provide emotional support. During the afternoon, participants explore their professional strengths based on results from the StrengthsFinder test, a measure that identifies an individual’s top 5 professional talents from a list of 34 talents, such as achieving, learning, responsibility, discipline, communication, and relating well to others.30 Subsequently, participants consider how they might partner with individuals who have complementary strengths to meet their professional and compliance goals.
Day 3 examines how to address institutional and environmental barriers to research compliance and integrity, explores the management and leadership needs of participants, and culminates in the development of a written professional development plan. Such plans focus on a small number of feasible and well-defined actions, usually with specific target dates for completion.31
Aside from the daily workshop activities, participants are assigned homework each evening. Assignments include practicing a stress management technique, drafting a personal story (for Workshop Day 2), and identifying resources for a professional development plan (for Workshop Day 3).
The workshop approach adopted in the PI Program has proven capable of meeting the unique needs of participants despite the fact that they are referred for different reasons. The specific knowledge that participants require is often quite distinct (e.g., informed consent best practices, effective data management strategies, or proper citation practices). Faculty share this knowledge during workshop discussion as appropriate, and we recommend specific CITI training program online training modules as part of participants’ professional development. However, most of the program addresses other root causes of problems—poor time management, communication, or data management practices; inadequate leadership on matters of compliance; and failure to use good professional decision-making strategies—and relies heavily on interaction, discussion, and strategizing. Throughout the three-day workshop, participants complete a series of eight worksheets that enable them to identify needs and opportunities to develop new habits, knowledge, skills, and relationships, which become the focus of their professional development plans and subsequent coaching activities. Because the program is tailored to individual needs, we have found little need to change the fundamental design of the program, though we have modified our didactic approach, moving toward greater reliance on interaction (e.g., discussion and role-play) and worksheets.
In the week following the workshop, participants complete two assessments and finalize their professional development plans with input from program faculty. Participants then complete two to four follow-up coaching calls over the next two to three months. During coaching calls, program faculty provide assistance to participants as they execute their professional development plans. The number of calls is individualized, based on the needs of the participant.
Ongoing program support
NIH funding for the PI Program ended in May 2013. From May 2013 through the period reported in this article, the PI Program was supported through workshop fees and a sponsorship from the CITI training program, then housed at the University of Miami, which offered online training on diverse topics related to research ethics. Participants pay a workshop fee, which covers a biofeedback device, workshop meals, assessments, and coaching calls. The CITI training program collected all fees and ensured a minimum operating budget, which was essential during the initial years of program development when revenues fell short of program costs.
PI Program participants granted permission for the use of deidentified assessment data for research purposes. The assessment data analysis and survey activities were approved by the Human Research Protections Office at Washington University School of Medicine.
As of January 2016, 39 individuals from 24 institutions had completed the PI Program. Table 1 presents basic demographic information for participants. Program participants represent diverse disciplines and career stages with a mixture of government and industry funding.
Nearly twice as many participants were born outside of the United States as would be expected based on the percentage of faculty-level researchers in U.S. institutions.32 This finding has led to increased workshop discussion of the role of culture and cultural assumptions in research. Such discussions have proven relevant to all participants, regardless of country of origin, because each discipline and lab has its own culture with its own attendant assumptions and biases.33–35
Participants complete an evaluation at the end of each workshop day. Table 2 presents mean evaluation scores. On a scale from 1 to 5 with 5 indicating the strongest endorsement, items related to program faculty quality yielded mean scores of 4.7 to 4.8, and items related to the quality of course content yielded scores of 4.5 to 4.6.
After participants complete all program requirements and receive their Certificate of Completion (approximately two months after attending the workshop), we request a final overall program evaluation, which includes several open-ended items pertaining to the value of the program and areas for improvement. Because response rates are understandably lower for this follow-up evaluation (N = 16), we focus on qualitative data. The following are representative responses from several participants to the follow-up evaluation:
At the time I came to the course, I was demoralized and convinced that I would be stuck in my situation indefinitely. It was such a relief to find the course was a place to dispassionately examine the factors that led up to problems, realize the roles played by myself and others, and to plan out how I could change constructively to accommodate the institution.
This is a great program for anyone interested in learning new organizational and leadership skills for the high-paced, usually very stressful work that is academic research.
The facilitators are often profound about your specific situation that often leads to positive outcomes for you.
When asked how we might improve the course, most participants made no suggestions. The most common suggestions we received, however, focused on the desire for more diverse case studies, especially cases relevant to a given individual’s field of research, as several participants’ comments illustrate:
Add more diverse case studies.
Would be helpful to have some dedicated material for the physician–scientist. There are areas that are unique to this group of researchers and could be helpful to address some of them specifically.
Broaden the scope of the program to include different types of research concerns other than IRB and medical ethics.
Accordingly, recent iterations of the PI Program have incorporated more diversified cases. It is worth noting that case studies are the only field-specific material in the course; all other modules derive content from the participants’ own experiences and work.
Pre- and postworkshop assessment data
Aside from the program evaluation data described above (and with funding from the U.S. Office of Research Integrity), we developed and validated two new measures to assess PI Program outcomes. The How I Think about Research (HIT-Res) test assesses the degree to which participants use self-serving cognitive distortions such as blaming others or assuming the worst to justify deviations from research compliance or integrity. The Professional Decision-making in Research (PDR) measure assesses the degree to which participants use evidence-based professional strategies in their research decision making: seeking help, managing emotions, anticipating consequences, recognizing rules and regulations, and testing assumptions. We administered these two new measures, along with measures of moral disengagement, narcissism, cynicism, and knowledge of RCR, to 700 NIH-funded researchers at different career stages.
Results of the research supported the psychometric properties of the two scales. The HIT-Res demonstrated excellent internal-consistency reliability (alpha = .92), positive correlations with cynicism and moral disengagement, and negative correlations with PDR scores as predicted.36 The PDR demonstrated good parallel form reliability (r = 0.70); positive correlations with RCR knowledge; and negative correlations with moral disengagement, narcissism, and cynicism as predicted. The PDR was not correlated with socially desirable responding, as measured by the Marlowe-Crowne Social Desirability Scale.14,37,38 The HIT-Res, on the other hand, correlated moderately with socially desirable responding; however, it contains a built-in measure of “anomalous responding” (the AR scale) that allows the cognitive distortion score to be adjusted for social desirability effects.35
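For readers less familiar with these psychometric statistics, a short sketch may help. This is not the authors' code, and the item scores are hypothetical; it simply shows how an internal-consistency coefficient (Cronbach's alpha) of the kind reported for the HIT-Res (alpha = .92) is computed from item-level scores.

```python
# Hedged sketch (not the authors' code): computing Cronbach's alpha,
# the internal-consistency coefficient reported for the HIT-Res.
# The item scores below are hypothetical.

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: list of equal-length lists, one per scale item, each holding
    every participant's score on that item.
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(totals))
    """
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(variance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Hypothetical 1-5 Likert-type responses from 5 participants on 4 items.
items = [
    [2, 4, 3, 5, 1],
    [2, 5, 3, 4, 1],
    [3, 4, 2, 5, 2],
    [2, 4, 3, 5, 1],
]
print(round(cronbach_alpha(items), 2))  # → 0.96
```

Items that track one another closely, as in this toy data, drive alpha toward 1; an alpha of .92 thus indicates that the HIT-Res items behave as a coherent scale.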
Once the HIT-Res and PDR were validated (in Year 2 and thereafter), we administered both measures to 24 PI Program participants. Participants completed the measures prior to the workshop and again one week after the workshop. As Table 3 indicates, HIT-Res scores showed that the use of cognitive distortions to justify noncompliance decreased significantly following training, while PDR scores increased significantly. Interestingly, AR scores on the HIT-Res also decreased significantly, suggesting that participants were more forthright following the workshop.
To examine the longer-term impact of the PI Program on participants' attitudes and behaviors, all participants received a follow-up survey. The mean length of time from workshop completion to completion of the survey was 13 months. At follow-up, all participants were still (or once again) actively engaged in research. Table 4 reports the survey questions and findings. Although many of the effects are large and statistically significant, the results should be considered preliminary given the small sample size and the use of multiple t tests. The largest observed effects in self-reported behavior change (all P values < .001) pertained to recognizing rules and regulations in research; choosing to view compliance demands as part of the research process; providing training to research staff members to foster compliance and research quality; anticipating the consequences of decisions for oneself and others; using standard operating procedures to support compliance and research integrity; performing self-audits of research operations; reducing job stressors; actively overseeing the work of the research team; testing assumptions or motives when making research-related decisions; and seeking help from colleagues, institutional officials, or others when experiencing uncertainty. Only three targeted behaviors did not change following the workshop: communicating with others in a constructive manner, managing emotional responses to research-related challenges, and consulting with a research mentor.
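A brief sketch may clarify the statistics involved. This is not the authors' analysis code; the scores and p values are hypothetical. It shows a paired pre/post t statistic of the kind underlying the Table 3 and Table 4 comparisons, plus a Holm step-down adjustment, one standard guard against the multiple-t-test concern noted above.

```python
# Hedged sketch (not the authors' analysis code): a paired pre/post
# t statistic and a Holm step-down correction for running many t tests.
# All scores and p values below are hypothetical.
import math

def paired_t(pre, post):
    """t statistic for paired pre/post scores (df = n - 1)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

def holm_adjust(p_values):
    """Holm-adjusted p values, returned in the original order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, min(1.0, (m - rank) * p_values[i]))
        adjusted[i] = running_max
    return adjusted

# Hypothetical PDR-style scores for 6 participants, pre- and postworkshop.
pre = [3.1, 2.8, 3.5, 2.9, 3.0, 3.3]
post = [3.6, 3.4, 3.7, 3.5, 3.4, 3.8]
t = paired_t(pre, post)

# Hypothetical raw p values from three outcome tests.
adj = [round(p, 4) for p in holm_adjust([0.001, 0.04, 0.02])]
print(round(t, 2), adj)  # → 7.59 [0.003, 0.04, 0.04]
```

Holm's procedure multiplies each p value by a shrinking factor in order of significance, so family-wise error stays controlled without the full conservatism of a Bonferroni correction; with 20 or more outcome tests, adjustments like this determine which effects survive.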
In 2013, the PI Program won the Health Improvement Institute's Annual Award for Innovation in Human Research Protections. Not only is it the first remediation program specifically designed for U.S. researchers, but program evaluations to date also indicate that it has been successful in achieving its intended goals. Although our data derive from a small sample, many of the observed effects are large, and we have demonstrated statistically significant improvements in targeted attitudes, problem-solving skills, and self-reported behaviors. The few behaviors that remained unchanged (such as communicating more constructively) are now emphasized to a greater extent during the workshop.
Overall, as evaluations indicate, the program has been well received by participants. As we have stated elsewhere, PI Program participants were generally successful and productive researchers who did not engage in wrongdoing intentionally, even when the wrongdoing was sometimes serious or persistent.17 The PI Program thus helps institutions to retain talented researchers while fostering compliance and integrity within their research programs, and also provides a valuable opportunity for struggling researchers to share their experiences. Being investigated for research wrongdoing is highly stressful, and many participants report that Day 2—when they share their stories with each other—has been immensely helpful to them.
Implications for institutions
We believe the PI Program offers an important service to universities. Universities often have the tools to address violations at two extremes. In the most severe cases (e.g., serial data fabrication), institutions may terminate employment; in the mildest cases (e.g., failure to update a conflict of interest form in a timely manner), they may send a written reminder of expectations and require that researchers repeat a training module. However, universities may struggle with moderately severe or repeated violations of RCR—for example, the publication of false data where intention to fabricate or falsify data appears absent; persistent failures to obtain signatures on consent forms or to report serious adverse events in a timely manner; or plagiarism that arises from improper citation practices rather than intentional theft of words or ideas. These behaviors must change to protect data, human participants, and animal subjects.
Developing a remediation program that identifies and addresses the root causes of such diverse difficulties, however, requires a significant investment of time; a curriculum informed by the best available evidence on research integrity and behavior change; and independent, highly trained faculty. Additionally, because the most effective remediation programs use a small-group, face-to-face format,8–10 an institution would need regular cohorts of researchers requiring remediation to effect optimal change. Few institutions can provide such remediation in-house. In such cases, the PI Program offers a reasonable training option.
At the same time, lessons from the PI Program might be used to inform RCR instruction and mentoring at research institutions. Lack of knowledge is only one reason why researchers deviate from appropriate conduct. Other reasons include poor oversight and management of teams and a failure to prioritize matters of compliance and integrity. A new emphasis is needed on good practices such as holding regularly scheduled research team meetings, developing standard operating procedures for matters of research compliance, explicitly discussing with teams the importance of compliance and research integrity, and backing up all data to a shared server that principal investigators can access. These may seem like commonsense activities, yet not all investigators engage in them.
Limitations and next steps
The PI Program has been effective in meeting many goals such as improving attitudes toward compliance, fostering the use of good decision-making strategies, and increasing adoption of best practices for running a lab or research program. Nevertheless, it is costly for participants in terms of time and expense. Over the next two years, the PI Program plans to roll out new training options for behaviors that may be simpler to address than persistent noncompliance (such as proper citation practices and strategies for avoiding plagiarism). The program is also planning new recruitment activities: We believe that far more researchers could benefit from the three-day workshop than the relatively modest number who have enrolled to date.
Until now, the PI Program’s one-year follow-up survey has been conducted anonymously to encourage participation by reducing the risk of participant identification. However, this limits our ability to examine potential links between demographic factors and behavior changes. Similarly, although our sample sizes have been large enough to detect many statistically significant changes (e.g., in attitudes, decision-making strategies, and research practices), they are too small to enable analysis at the level of subgroups—whether by training cohorts or by demographic variables. Finally, our one-year follow-up survey is limited to self-reported behavior and would be more robust if augmented with institutional feedback. Unfortunately, past efforts to obtain such data from institutions have been unsuccessful, possibly because research wrongdoing and other employee behaviors are considered to be confidential human resources matters. Alternatively, the lack of institutional feedback may derive from the fact that institutional officials rarely work closely with researchers on a day-to-day basis. We respect the confidentiality of these matters and the need for voluntariness in disclosing information to third parties, while acknowledging that this limits the quality of long-term data we can obtain about participants. Nevertheless, the PI Program will continue to gather assessment data to identify factors that might increase the risk of violating rules and regulations in science and to establish the short- and long-term outcomes of the program.
The authors wish to thank Tessa Gauzy, who served as PI Program Coordinator during the project period, for her support with evaluations and assessment data. The authors thank the participants who permitted use of their evaluation and assessment data in this project.
1. DuBois JM, Kraus E, Vasher M. The development of a taxonomy of wrongdoing in medical practice and research. Am J Prev Med. 2012;42:89–98.
2. Kornfeld DS. Perspective: Research misconduct: The search for a remedy. Acad Med. 2012;87:877–882.
3. DuBois JM, Anderson EE, Chibnall J, et al. Understanding research misconduct: A comparative analysis of 120 cases of professional wrongdoing. Account Res. 2013;20:320–338.
4. Annas GJ, Grodin MA. The Nazi Doctors and the Nuremberg Code: Human Rights in Human Experimentation. New York, NY: Oxford University Press; 1992.
5. DuBois JM, Anderson EE, Chibnall J. Assessing the need for a research ethics remediation program. Clin Transl Sci. 2013;6:209–213.
6. Spickard A Jr, Swiggart W, Pichert JW, et al. Changes made by physicians who misprescribed controlled substances. J Med Licensure Discipline. 2002;88:110–115.
7. Samenow CP, Swiggart W, Spickard A Jr. A CME course aimed at addressing disruptive physician behavior. Physician Exec. 2008;34:32–40.
8. Spickard WA Jr, Swiggart WH, Manley GT, Samenow CP, Dodd DT. A continuing medical education approach to improve sexual boundaries of physicians. Bull Menninger Clin. 2008;72:38–53.
9. Brown ME, Swiggart WH, Dewey CM, Ghulyan MV. Searching for answers: Proper prescribing of controlled prescription drugs. J Psychoactive Drugs. 2012;44:79–85.
10. Swiggart WH, Dewey CM, Ghulyan MV, Spickard A Jr. Spanning a decade of physician boundary violations: Are we improving? HEC Forum. 2016;28:129–140.
11. Norcross WA, Henzel TR, Freeman K, Milner-Mares J, Hawkins RE. Toward meeting the challenge of physician competence assessment: The University of California, San Diego Physician Assessment and Clinical Education (PACE) program. Acad Med. 2009;84:1008–1014.
12. Antes AL, Murphy ST, Waples EP, et al. A meta-analysis of ethics instruction effectiveness in the sciences. Ethics Behav. 2009;19:379–402.
13. Antes AL, Wang X, Mumford MD, Brown RP, Connelly S, Devenport LD. Evaluating the effects that existing instruction on responsible conduct of research has on ethical decision making. Acad Med. 2010;85:519–526.
14. Antes AL, Chibnall JT, Baldwin KA, Tait RC, Vander Wal JS, DuBois JM. Making professional decisions in research: Measurement and key predictors. Account Res. 2016;23:288–308.
15. Mulhearn TJ, Steele LM, Watts LL, Medeiros KE, Mumford MD, Connelly S. Review of instructional approaches in ethics education. Sci Eng Ethics. 2016. doi:10.1007/s11948-016-9803-0.
16. Watts LL, Medeiros KE, Mulhearn TJ, Steele LM, Connelly S, Mumford MD. Are ethics training programs improving? A meta-analytic review of past and present ethics instruction in the sciences [published online ahead of print April 27, 2016]. Ethics Behav. 2016. doi:10.1080/10508422.2016.1182025.
17. DuBois JM, Chibnall JT, Tait RC, Vander Wal JS. Lessons from researcher rehab. Nature. 2016;534:173–175.
18. Feist GJ. A meta-analysis of personality in scientific and artistic creativity. Pers Soc Psychol Rev. 1998;2:290–309.
19. P.I. Program: Professionalism & integrity in research. www.integrityprogram.org. Accessed April 8, 2017.
20. Stenmark CK, Antes AL, Thiel CE, Caughron JJ, Wang X, Mumford MD. Consequences identification in forecasting and ethical decision-making. J Empir Res Hum Res Ethics. 2011;6:25–32.
21. Mecca JT, Medeiros KE, Giorgini V, et al. The influence of compensatory strategies on ethical decision making. Ethics Behav. 2014;24:73–89.
22. Mumford MD, Connelly S, Brown RP, et al. A sensemaking approach to ethics training for scientists: Preliminary evidence of training effectiveness. Ethics Behav. 2008;18:315–339.
23. Gibbs JC, Basinger KS, Fuller D. Moral Maturity: Measuring the Development of Sociomoral Reflection. Hillsdale, NJ: Lawrence Erlbaum Associates; 1992.
24. Gibbs JC, Potter G, Goldstein A. The EQUIP Program: Teaching Youth to Think and Act Responsibly. Champaign, IL: Research Press; 1995.
25. Gibbs JC. Moral Development and Reality: Beyond the Theories of Kohlberg, Hoffman, and Haidt. 3rd ed. Thousand Oaks, CA: Sage; 2003.
26. Stams GJ, Brugman D, Deković M, van Rosmalen L, van der Laan P, Gibbs JC. The moral judgment of juvenile delinquents: A meta-analysis. J Abnorm Child Psychol. 2006;34:697–713.
27. Barriga AQ, Gibbs JC. Measuring cognitive distortion in antisocial youth: Development and preliminary validation of the "How I Think" questionnaire. Aggress Behav. 1996;22:333–343.
28. MacKie D. The effectiveness of strength-based executive coaching in enhancing full range leadership development: A controlled study. Consult Psychol J Pract Res. 2014;66:118–137.
29. Chase SM, Crabtree BF, Stewart EE, et al. Coaching strategies for enhancing practice transformation. Fam Pract. 2015;32:75–81.
30. Asplund J, Lopez SJ, Hodges T, Harter J. The Clifton StrengthsFinder 2.0 Technical Report: Development and Validation. Princeton, NJ: Gallup Organization; 2009.
31. Johnson SK, Garrison LL, Hernez-Broome G, Fleenor JW, Steed JL. Go for the goals: Relationship between goal setting and transfer of training following leadership development. Acad Manag Learn Educ. 2012;11:555–569.
32. National Science Foundation. Chapter 5: Academic research and development. In: Science and Engineering Indicators 2014. http://www.nsf.gov/statistics/seind14/index.cfm/chapter-5/c5h.htm. Accessed April 8, 2017.
33. Vitell SJ, Nwachukwu SL, Barnes JH. The effects of culture on ethical decision-making: An application of Hofstede's typology. J Bus Ethics. 1993;12:753–760.
34. Tan J, Chow IH-S. Isolating cultural and national influence on value and ethics: A test of competing hypotheses. J Bus Ethics. 2008;88:197–210.
35. Kalichman MW. Rescuing responsible conduct of research (RCR) education. Account Res. 2014;21:68–83.
36. DuBois JM, Chibnall JT, Gibbs J. Compliance disengagement in research: Development and validation of a new measure. Sci Eng Ethics. 2016;22:965–988.
37. Reynolds WM. Development of reliable and valid short forms of the Marlowe–Crowne Social Desirability Scale. J Clin Psychol. 1982;38:119–125.
38. DuBois JM, Chibnall JT, Tait RC, et al. Professional decision-making in research (PDR): The validity of a new measure. Sci Eng Ethics. 2016;22:391–416.