“Questionable research practice” (QRP) is the terminology used to distinguish serious research misconduct from researcher actions that may be justifiable but often are not (John et al., 2012). Misconduct generally refers to serious research wrongdoing, including data falsification or fabrication, plagiarism, and other fraudulent behaviors. QRP, which likely occurs at least as frequently and which can be harmful, generally refers to the selective use of data and divergence from accepted research practices (e.g., unjustified exclusion of outliers). Unfortunately, QRP is often not detected by reviewers, or if it is, the QRP may be ignored because it appears unimportant to outcomes. The reality, however, is that QRP leads to incorrect research results that, when published, contribute to scientific misinformation. QRP is, in fact, untruthful, and this untruthfulness hurts science and the public, who are the intended beneficiaries of science.
One frequently encountered QRP is “p-hacking,” or selective data collection, usage, or analysis to obtain “statistically significant” results (Chin et al., 2021). When scientists p-hack, they misreport (i.e., lie) about their results. By selecting specific data to include in analyses in order to report “significant” results, researchers contribute to publication bias (Head et al., 2015). Publication bias creates an artificial “truth” about the relationships among variables, the effects of interventions, and, more broadly, the nature of human health and well-being. Scientists p-hack, in part, because it is hard to get nonsignificant research results published (Gerrits et al., 2020). An unpublished scientist may soon be out of funding and perhaps a job.
The pressure to publish—and to do so in “high-impact journals”—strongly favors publishing significant results (Chuard et al., 2019). However, publishing only significant results has consequences for the usefulness of research and the conduct of science. Manipulated results are difficult to replicate, contributing to a “replication crisis” that inhibits scientific development (Munafò et al., 2022). Moreover, it is important for scientific progress, if not patient safety, to report all results, particularly those of interventions that show no effect. Unless scientists publish their nonsignificant findings, we are doomed to follow research paths that lead nowhere, wasting precious time and resources and perhaps endangering human life.
There are other common QRPs that need mentioning. First among these is post hoc “hypothesizing” (Gray, 2019), which occurs when hypotheses are created after extensive data manipulation. Occasionally referred to as a fishing expedition, repeatedly examining interactions or associations among variables is generally an unacceptable means of discovering relationships or testing intervention effects in health-related scientific fields. Unfortunately, this sort of QRP happens frequently, particularly in secondary analyses where data already exist. On the other hand, rigorously gathering descriptive data about an area of science where little is known or understood and where no theories exist can be valuable for generating hypotheses and thus advancing science.
QRP can also take the form of “salami slicing,” the production of very thinly developed analyses. These thin slices of results often come from data that were not part of the original analytic plan, or they differ very little from the primary study results and could, in fact, have been reported in one manuscript (Gray, 2019). Nonrandom allocation of participants in a randomized trial is another QRP. This selective allocation allows researchers to arrange the trial in such a way as to ensure that a pet hypothesis will be supported. However, perhaps the most common QRP is measuring many things using many research instruments, surveys, or tests but reporting the results of only certain measures—the ones that produce the results desired by the researchers.
It is beyond the scope of this editorial to provide instruction about the many legitimate ways to conduct science. What I can say is that I know that by using QRPs, scientists can and do get the research results they want, whether it is support for an intervention or support for a favored theory. I also know that the publication of studies involving QRPs changes our scientific sense of what is “true.” Thus, I will be clear: at Nursing Research, if we or our reviewers uncover QRPs, we will, at the very least, request an explanation; we are just as likely to reject the paper. Conversely, we will not reject papers simply because the researchers report nonsignificant results. In fact, the journal welcomes the submission of papers reporting nonsignificant results, provided the methods were robust and the authors address the questions of broader significance. Although we are certainly interested in reports of successful trials showing positive intervention effects, and in descriptive study results detailing important relationships among variables, at Nursing Research we will continue to be most concerned with meticulously conducted and carefully reported research. Publication of findings from rigorously conducted studies that are honestly reported should be the goal of science. Only by following this path can nursing science hope to counteract publication bias and continue scientific advances.
Rita H. Pickler https://orcid.org/0000-0001-9299-5583
Chin J. M., Pickett J. T., Vazire S., Holcombe A. O. (2021). Questionable research practices and open science in quantitative criminology. Journal of Quantitative Criminology.
Chuard P. J. C., Vrtílek M., Head M. L., Jennions M. D. (2019). Evidence that nonsignificant results are sometimes preferred: Reverse p-hacking or selective reporting? PLoS Biology, 17, e3000127. 10.1371/journal.pbio.3000127
Gerrits R. G., Mulyanto J., Wammes J. D., van den Berg M. J., Klazinga N. S., Kringos D. S. (2020). Individual, institutional, and scientific environment factors associated with questionable research practices in the reporting of messages and conclusions in scientific health services research publications. BMC Health Services Research, 20, 828. 10.1186/s12913-020-05624-5
Gray R. (2019). Questionable research practices and nursing science. Nurse Author & Editor, 29, 1–7. 10.1111/j.1750-4910.2019.tb00041.x
Head M. L., Holman L., Lanfear R., Kahn A. T., Jennions M. D. (2015). The extent and consequences of p-hacking in science. PLoS Biology, 13, e1002106. 10.1371/journal.pbio.1002106
John L. K., Loewenstein G., Prelec D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23, 524–532. 10.1177/0956797611430953
Munafò M. R., Chambers C., Collins A., Fortunato L., Macleod M. (2022). The reproducibility debate is an opportunity, not a crisis. BMC Research Notes, 15, 43. 10.1186/s13104-022-05942-3