

The Assessment of Potential Impact of Applications by Grant Review Panels

Doubeni, Chyke A.

doi: 10.1097/EDE.0000000000000452
Commentary

From the Department of Family Medicine and Community Health, and Epidemiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA.

Editors’ Note: A related article appears on page 311.

Dr. Doubeni is supported by funding from the National Cancer Institute at the National Institutes of Health (U54CA163262, U01CA151736).

The author reports no conflicts of interest.

Correspondence: Chyke A. Doubeni, Department of Family Medicine and Community Health, and Epidemiology, University of Pennsylvania Perelman School of Medicine, 3400 Spruce Street Gates 2, Philadelphia, PA 19104. E-mail: Chyke.Doubeni@uphs.upenn.edu.

Scientific discoveries are the foundation for medical and public health advances, but they depend heavily on support from diverse stakeholders through competitive grant applications. Ideally, disinterested experts vet competing ideas on their potential for impact. While it is critical to support only the highest quality research,1 it is not always clear how high-quality research is defined; what may be considered a silly idea today may be the catalyst for ground-breaking discoveries tomorrow. Novel ideas may be unfamiliar to the reviewers who vet them, or shortcomings in the review process may get in the way of advancing ideas that in hindsight were transformative. Observing the grant review process as both an applicant and a standing member of an interdisciplinary review panel provides insights into how the process works and the requisites of a strong application. The critique of John Snow’s grant in this issue2 is a must read—it highlights important issues for funders, reviewers, and applicants.

Many consider John Snow the father of modern epidemiology for his work tracing the source of the 1854 cholera outbreak in Soho, London. Was his idea to identify the source of the outbreak worth funding? Did he provide a convincing rationale for why the outbreak was likely from a contaminated water supply and not miasma? Did he have the right training to do that study? Why is the answer to any of these questions necessary for a funder to support such potentially high impact research? Others have similarly asked—“would Fred Sanger [a Nobel Laureate] get funded today,” given his limited productivity?3

Peer review is a nondeterministic but highly influential process and, some argue, not an objective one. Success requires elements of luck and timing, craftsmanship, and scientific rigor. John Snow sought to test the “hypothesis that cholera travels through humans and especially through contaminated water,” an enormously important public health question, in a natural experiment of two different sources of water supply to the city. He was criticized for not conforming to prevailing beliefs. Challenging prevailing beliefs underlies innovation, but doing so is difficult. Our understanding of disease mechanisms evolves constantly; there are many “miasmas” in today’s scientific community, and some reviewers may be invested in existing theories. In the same vein, researchers can be so passionate about their own framework as to completely ignore everything else—successful scientists respect and acknowledge opposing paradigms.

Past performance is a good indicator of future success, and is generally the basis for gauging the likelihood that a proposed research study will be executed successfully. Having the requisite training is important, but not sufficient. Experience is assessed on the basis of a prior track record of grant funding and publications, career trajectory, and the adequacy of resources at the applicant’s institution to support the research. Not surprisingly, this can be a hurdle for new investigators or for experienced investigators venturing into new areas—it is like getting credit: you need a credit history to get credit. The reviewers of Snow’s grant concluded that he lacked “experience in epidemiologic research,” but Snow’s prior work in this area should have mitigated this concern. Other issues identified by the reviewers amplified the perception of Snow’s lack of experience or training, including concerns about a lack of scientific rigor: a weak conceptual framework, inadequately characterized measures and study population, and the lack of a clear timeline and of a plan to address potential hurdles. New investigators can deflect attention from their limited experience by pairing up with experienced investigators. Having a great team is important, but it is critical to demonstrate that each team member played a role in shaping the research plan.

Grant review is a crucial process in the advancement of knowledge, and with increasing interest in transdisciplinary science to tackle complex health problems, review panels also need to evolve and become more diverse. Such evolution requires careful training to cultivate panel members and chairs who can integrate and translate information across disciplines and give a balanced voice to all viewpoints. In that vein, applicants should keep in mind that some reviewers may not be content experts in the proposed research and should thus present their ideas clearly, defining terms that may be unfamiliar to other scientists. If needed, seeking help with grant preparation could aid in developing a thoughtful rationale and approach and remove signs of carelessness that could get in the way of garnering needed support for innovative ideas.

Snow was encouraged to add another objective to his proposal, but doing so is not always necessary. Reviewers’ comments can provide insight to improve research design, but reviewers should not rewrite an application, nor are they obligated to comment on every flaw. When revising an application, it is important to be responsive without losing focus, but also not to ignore potential concerns the reviewers failed to mention. There is little guarantee that the same reviewer will be assigned to a revised application.

Funders have a clear interest in gaining a return on their investment to ensure sustainability. In fact, the key question for review panels is: what is the likely return on investment from this application? Will it create new generalizable knowledge, enhance the adaptation and implementation of proven evidence, and address pressing public health challenges? Peer review aims to improve the odds in this gamble, but review panels should serve only advisory roles and not decide which applications to fund. Careful programmatic review of all applications could help detect and mitigate concerns about reviewer bias and assure equity. Objective parameters for rating grants may decrease variability, but there is currently little evidence to guide this approach.

As of October 2015, 148 NIH-supported scientists had received Nobel Prizes, and, undoubtedly, the peer-review process played a role in this history.4 It is important to train and engage a diverse pool of reviewers and panel chairs and provide them with opportunities to learn from each other to help reduce variability in scoring. Snow had an understanding that the reviewers lacked, but he may not have conveyed it convincingly. Innovation alone is not enough for an application to be successful. Applicants should assemble the right team and package their ideas so that peers with varying levels of familiarity with the science can grasp and embrace them. It is also critical to understand funders’ priorities.5


ABOUT THE AUTHOR

CHYKE A. DOUBENI, MD, MPH, is the Chair of the Department of Family Medicine and Community Health, and Senior Scholar in the Center for Clinical Epidemiology and Biostatistics, in the Perelman School of Medicine at the University of Pennsylvania. Dr. Doubeni is a clinical epidemiologist whose research focuses on the effectiveness and efficient delivery of cancer screening with a particular interest in disparities in mortality from colorectal cancer.


REFERENCES

1. Costello LC. Perspective: is NIH funding the “best science by the best scientists”? A critique of the NIH R01 research grant review policies. Acad Med. 2010;85:775–779.
2. Rothman K. John Snow’s grant application. Epidemiology. 2016;27:311–313.
3. Fields S. Would Fred Sanger get funded today? Genetics. 2014;197:435–439.
4. The NIH Almanac. Available at: http://www.nih.gov/about/almanac/nobel. Accessed February 19, 2016.
5. Wescott L, Laskofski M. Grant writing tips for translational research. Methods Mol Biol. 2012;823:379–389.
Copyright © 2016 Wolters Kluwer Health, Inc. All rights reserved.