The Editors' Notepad
The goal of this blog is to help EPIDEMIOLOGY authors produce papers that clearly and effectively communicate their science.
Tuesday, November 1, 2016
Let’s say right up front that, under our hybrid publishing model, space is limited in the print version of EPIDEMIOLOGY. You already know this. We have a strict budget for print pages each issue, and competition for space is fierce. We have been working with Production to make the best use of this limited space, by thinking about the efficiency of the page layout and by keeping an eye on proofs to avoid mostly blank pages. A great way to advance the goal of space efficiency is to put content online, in supplementary digital content (SDC). Your editors may ask you to do this, for example with sensitivity or subgroup analyses, or you can opt to do so voluntarily. Shorter papers are often more engaging to read, authors save page charges, and the journal can publish more papers within its page budget. Everybody wins when papers are short, as long as they are also complete.
In contrast to printed content, online content is essentially unlimited, a service provided by the publisher for the free use of authors. What can go online? Pretty much anything you have produced that supports what you have written in the main text. SDC is a good place to park large tables and figure panels, descriptions of study populations, details of methodology, and statistical computing code (which we encourage all authors to submit as SDC). You can also use color freely; color figures come with a fee in the printed journal, but are free in SDC. You, the author, are fully responsible for SDC. Although peer reviewers and editors look at it, we don’t copy-edit it; SDC goes up exactly as you have prepared it (which means it’s probably not a bad idea to save it as a PDF, rather than editable or readily copied text). We create a link to the SDC and place it appropriately in the printed text. If it needs to be revised or corrected, you can email us a new version and we’ll just swap them out.
Our only restrictions: because of server limitations, each file has to be no more than 100 MB in size. Larger total amounts of content can be broken down into smaller files. In addition, labels of sections need to correspond to the way you refer to them in the text, and for that the journal has a convention:
eTable 1, eAppendix 2, eFigure 4, etc.
Most types of content will fit into these categories, with ‘eAppendix’ referring to any text that is not a table or a figure. Numbering them helps guide readers to the relevant content, especially when all the content is saved in a single file. As with tables and figures appearing in the main (printed) text, make sure they are cited in order.
We have been told that combining all the SDC as a readily downloadable file is helpful to readers, so we will usually ask you to combine them. Most file types, except for spreadsheets with formulas and PowerPoint files with animation, can be saved as PDFs and combined. Statistical computing code is usually in text format, and so can be exported or at least copied and pasted into a word-processing file and, from there, exported to PDF. If you have more than a handful of sections of SDC you may also want to consider including a table of contents at the top.
Because SDC is a separate document, it must - if you are citing other work - have its own bibliography. Number citations in SDC separately from those in the main text (citations can appear in both the published bibliography and the SDC), but you only need one bibliography across all the SDC content. Our copyeditors will go look at your main-text bibliography to make sure there is a corresponding citation for each reference, and will flag any that have none, or that are only in the SDC (which, again, they don’t edit). Please also note, however, that citations in SDC do NOT count towards the Science Citation Index or other indexing services.
Naturally, the less space each paper takes up, the more papers we can publish, and you, as authors, can help with this, too; it’s one small contribution you can make toward being a good member of the research community.
Tuesday, August 30, 2016
The typical outcomes paper in epidemiology usually involves a lot of numbers – multiple exposures and measures of exposures, subgroup analyses, and alternative modeling strategies. The standard of practice when making statistical comparisons is to place an effect estimate within a confidence interval, rather than using a p-value (Epidemiology generally only allows p-values for tests of trend or heterogeneity, and even then strongly discourages comparison with a Type 1 error rate). Outcomes papers thus tend to have three or four tables of data, often with more online, each with up to a dozen columns, but organized in intuitive, digestible, easy-to-follow chunks. If figures are possible, so much the better.
Writing the text of the Results section to summarize the tables and figures may feel like an afterthought. But it is still important, in part because you, as a researcher, know your data better than anyone else, and also because not all readers absorb information the same way. So it’s worth your time to think about what you want to highlight (hint: go beyond the obvious statements along the lines of x was associated with y, z was not associated with y).
I hope you’ll agree it’s also important to make the Results section appealing and useful to read. Many Results sections fail to mention the descriptive findings. These, however, help to put the study into context. How many people were eligible, how many participated, how many cases were observed, and what were the patterns of missingness? These and similar questions immediately help the reader to understand who was studied and the quality of the evidence.
When transitioning to internal comparisons, one element to keep in mind is context. Even if you’ve done so in the Methods section, precede each result you give with a hint of what you were looking for in that step of your analysis. Just as important is the flow of language. Of course we don’t expect an epi Results section to read like Walt Whitman, but you’d be surprised how a strategy regarding the presentation of data can improve how well the reader engages with it.
I’ll start with an example of a sentence that, while not particularly long, is seriously hard work to get through:
Similar results were found for lung cancer, colorectal cancer, and breast cancer: lower consumption of jelly beans was associated with an estimated 4%-8% lower hazard ratio (95%CI 0.67 to 1.22, 0.76 to 1.34, and 0.92 to 1.13, respectively), although these estimates were imprecise.
Do you see how you have to go back and forth from the outcomes in the first line to the confidence intervals in the third to match them up, because of the “respectively” device? In addition, it’s hard to parse that range of percentages of lower risk - if there are only three outcomes, why not just give all three? (More about the imprecise estimates below.) To simplify, keep each outcome in the same phrase as its data:
Consumption of jelly beans was associated with a 4% lower hazard ratio (95% CI 0.67, 1.22) of lung cancer, 7% lower hazard ratio (95% CI 0.76, 1.34) of colorectal cancer, and 8% lower hazard ratio (95% CI 0.92, 1.13) of breast cancer, although the estimates were imprecise.
A second concern is the use of the percentage hazard ratio. It is too easily confused with a difference estimate of association, when in fact the associations are estimated on the ratio scale. Furthermore, it has different units than the CI, so you can’t automatically place it within the interval. An even better revision would be:
The hazard ratio associating consumption of jelly beans with lung cancer was 0.96 (95% CI 0.67, 1.22), with colorectal cancer was 0.93 (95% CI 0.76, 1.34), and with breast cancer was 0.92 (95% CI 0.92, 1.13), although the estimates were imprecise.
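The mismatch of units is easy to see with a little arithmetic. Here is a minimal Python sketch; the log-hazard coefficient and standard error are illustrative values I have made up, not numbers from any paper:

```python
import math

# Hypothetical Cox-model output: log-hazard coefficient and its standard error.
beta, se = math.log(0.96), 0.12   # illustrative values only

hr = math.exp(beta)               # hazard ratio, on the ratio scale
lo = math.exp(beta - 1.96 * se)   # 95% CI lower bound (ratio scale)
hi = math.exp(beta + 1.96 * se)   # 95% CI upper bound (ratio scale)

# A "4% lower hazard" is (1 - hr) * 100, a number on the percent scale.
# The CI bounds stay on the ratio scale, so the percentage cannot be
# placed within the interval without converting one or the other.
pct_lower = (1 - hr) * 100
```

Reporting the hazard ratio itself (0.96), rather than “4% lower,” keeps the estimate and its interval in the same units.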
Next, I hope this idea is not too radical, but consider not putting data in a sentence at all: leave the numbers in the table, if possible, and describe the results in words. That way, a reader can first read your simple summary, and then turn to the tables to pick out the details for him or herself. This strategy works best for secondary findings; results pertaining to the primary aim should always be reported with data. Revising the report of these secondary findings, the edit of the sentence would be:
Consumption of jelly beans was associated with imprecisely measured decreased hazards of lung, colorectal, and breast cancer (Table 3).
Finally, what exactly do the authors mean when they say that the estimates were imprecisely measured? The intervals were actually fairly narrow. We suspect they mean that the intervals include the null, which has nothing to do with the precision. The final, zen edit of the troublesome sentence would be:
The hazard ratios associating jelly beans with the incidence of lung, colorectal, and breast cancer were all near null (Table 3).
We invite you to look at a few outcomes papers and think about the above. Do you even read Results sections? If not, why not? What would you do differently? We’d be happy to discuss.
Take-home messages that will take you a long way toward a readable Results section:
Be sure to open the Results section with the descriptive findings.
As the topic sentence in each paragraph, provide a bit of context for each section of the analysis.
Keep the outcome with its data (avoid the dreaded “respectively”).
Break up long sentences containing a lot of data.
Be sure to use the measure of disease occurrence that you are estimating (“risk”, “rate”, “hazard”, etc).
For secondary findings, consider leaving effect estimates and confidence intervals out of the text altogether.
While the above recommendations are stylistic, here’s a reminder of a couple of additional requirements relevant to reporting of results in Epidemiology: Avoid causal language – verbs such as impact, affect, increase/decrease – in favor of the language of association. And avoid significance testing as follows:
Leave out p-values (except for tests of trend and heterogeneity, but even then do not compare with an acceptable Type 1 error rate)
Instead of “x was not significantly associated with y,” just say “x was not associated with y” or “x was associated with an imprecisely measured increase/decrease in y” or “the association of x with y was near null”
Avoid the word “significant” in non-statistical senses of the word, and instead choose from the less-loaded words “considerable,” “important,” “material,” “appreciable,” or “substantial.”
Null results are good! We have recently published an editorial seeking persuasively null results. You might even edit the result in the example one step further:
Consumption of jelly beans was not associated with decreased hazard of lung, colorectal, or breast cancer (Table 3).
Sorry, jelly beans.
Monday, June 6, 2016
My inaugural post to this blog discusses abbreviations and how we treat them at EPIDEMIOLOGY: mostly, I’m afraid, we avoid them, as you’ll know if you have worked with me. But today, I am happy to explain why. Epidemiologists, we are in this together.
In my role as Deputy Editor, also known as Science Wordsmith-in-Chief, I spend more time considering and (usually) spelling out abbreviations than on any other class of edits. That’s because, in addition to scientific accuracy, a top goal is to deliver papers that are clearly written and as effortless for our target audience to read as possible.
And as someone with an epidemiology PhD whose training may have gotten a little rusty, I may be a useful test case. I’m sure, for some of you, reading a methods paper is like falling off a log. You do this stuff all the time. You can glance briefly at a formula consisting of stacks of Greek letters meaningfully embellished with bold and italics, and the concept behind a method for correcting for selection bias crystallizes in your mind in three dimensions. Similarly, a new regression model with a 10-syllable name attached to a 10-letter abbreviation sticks firmly in your mind. I know, because I trained with many of you and now I read and am impressed by your papers…which I have to read slowly. I envy you a bit, but never mind: mainly, I want to learn what you have to offer.
But because I don’t get to spend most of my days immersed in methods and biostatistics, it’s helpful to have an unfamiliar abbreviation spelled out each time it’s used. Our readers and I sometimes have to work to decipher and internalize the concept behind the method. Our work is easier when we can avoid thinking ‘Wait, what does that stand for?’ and having to scroll up, find, and re-read the definition…and usually lose the train of thought.
Overall, spelling out abbreviations helps forward our goal of publishing epidemiology papers that read like English, not like jargon. Therefore, please think of your wider community of colleagues and spell it out—our rule of thumb is whether it would be understandable to someone outside your subspecialty. If you don’t, I will, and rather than use search-and-replace I will do it each time individually and look for ways to avoid wordiness and awkward phrasings that sometimes arise. However, it does take time, and really, I suspect you can do it more smoothly and accurately than I can, if you do so as you write.
We understand there are other reasons you might want to use abbreviations. For example:
* To popularize a new method. We sympathize. But if the name of a method is really unwieldy when spelled out, an acronym will naturally evolve, and there may be workarounds (see below). Meanwhile, as above, giving broadly trained epidemiologists access through conceptual transparency, sparing them the hard work of repeatedly scrolling up to a definition, can also accomplish the goal of popularizing it.
* It’s the shorthand you use within your research team.
* To meet the word limit. Sorry, but you’re busted, and my colleagues who write a lot assure me there is always a way to shorten a paper that does not compromise clarity.
* To avoid typing. Really? OK, never mind, I can’t believe you would do this.
Meanwhile, there are additional reasons to spell out:
* To avoid ambiguity. As an example, MSM abbreviates “men who have sex with men” to one community of epidemiologists and “marginal structural modeling” to a second community. For a reader who is not an enshrined member of either community, the abbreviation is ambiguous without context to help.
* To make sentences flow better. Many abbreviations are more awkward to read and pronounce than their spelled-out forms.
* To avoid bureaucracy-speak, which is not a recognized dialect of English. Those who work for large government agencies should be particularly able to relate to this.
So, when will we allow an abbreviation?
* When it is likely to be familiar and unambiguous to most epidemiologists - I understand this is a judgment call, and in some cases my thinking has evolved.
* When it is impossibly unwieldy to read when spelled out.
* When it is used as a variable name in an adjacent equation (in which case it will also be italicized).
* In tables and figures, to help save space, but it must also be defined in a legend or caption.
* For study names and similar proper nouns.
If spelling out is moderately wordy or unwieldy, I will try to find a workaround, such as a partial spelling out, a shortened form introduced with ‘hereafter referred to as…’, or pronouns. And finally, I often don’t make these decisions unilaterally, and will check with other editors.
Sunday, November 27, 2011
The recent publication in EPIDEMIOLOGY of a graph about semen quality over time - data that were somehow buried in a governmental report in Denmark - again raises the much-debated point of public access to data [2, 3, 4].
The mere fact of questioning a policy of public access to data seems like being ‘against motherhood and world peace’. Isn’t it true that “Science is about debates on findings,” “Science serves people, and people (taxpayers) paid for it,” and “Expensive research data should become available to others”? Yet, the issues are more complex than the simple idea that ultimately we will all benefit from open access to data.
Firstly, what is meant by ‘data’? The original unprocessed MRI scans, blood, tissue, questionnaires? Or the processed data – determinations on blood, coded questionnaires? The cleaned data - with the possibility that the authors already have ‘massaged’ inconveniences? The analysis files – in which the authors have extensively repartitioned and recoded the data (another round of subjective choices)? Data should be without personal identifiers – of course – but in our digital age people can be identified by combinations of seemingly innocent bits of information. And, finally, should all discarded analyses, or discarded data, also become publicly available – to check what the authors ‘threw away’ and whether their action was ‘legitimate’?
Secondly, to what extent is the public as the taxpayer, or any organization that pays for the research, really the full owner of the data? Data exist because of ideas about how to collect and organize them. There is intellectual content, not just by the researchers, but also by their research surroundings, their departments, universities, and governmental organizations that make research intellectually possible. Data in themselves are not science. Giving your data to someone else is not an act of scientific communication. Science exists in reducing data according to a vision - some of which may develop during data analysis. Should researchers not have a grace period for the data they collected, or perhaps two: first a period in which they are the sole analysts, and then a period in which they share data only on conditions?
Thirdly, how protective can a researcher remain about her data? Should a researcher have the right to deny access to her data to particular other parties? Richard Smith, the former editor of the BMJ, stated in his blog that denying access is a wrong strategy – why fear open debate, it will only lead to better analyses? In his opinion, one should not deny data access even to the Tobacco Industry.
Reality is different: researchers know that when a party with huge financial interests wants access to data, there are three scenarios.
Scenario 1: they search and find some error somewhere in the data. This is always possible – no data are error-proof. The financially interested party will start a huge spin-doctoring campaign, proclaiming loudly in the media that the data are terrible. Remember the discussions on the climate reports?
Scenario 2: another analyst is hired by the interested party, and comes to the opposite conclusion. This is published with a lot of brouhaha. The original researcher writes a polite letter to the editor, explaining why the reanalysis was wrong. The hired analyst retorts by stating that it is the original analysis which was in error. Soon, only the handful of people who really know the data can still follow the argument. That is the signal for a new wave of spin-doctoring, in which medical doctors give industry-paid lectures stating that “even the experts do not know any more; we poor consumers should use common sense; most likely, nothing is the matter”. I witnessed this scenario in a controversy on adverse effects of oral contraceptives. A class action suit was deemed unacceptable by a UK court because, in a meta-analysis in which two competing analyses of the same data were entered (!!), the relative risk was 1.7. This number fell short of the magical 2.0, which is wrongly held by many courts as proof that there is ‘more than 50% chance’ that the product caused the adverse effect. Without studies and reanalyses directly sponsored by the industry, the overall relative risk was well over 2.0. This was money well spent by the companies!
Scenarios 1 and 2 have a name: “Doubt is our product,” as it was originally coined by the tobacco industry: it is not necessary to prove that the research incriminating your product is wrong – nor that the company is right – it suffices to sow doubt.
Scenario 3 is that the financially interested party subpoenas the researcher to testify in court about every allegedly questionable aspect of the data. Detail upon detail is demanded. The researchers lose months (if not years) of research and of their personal life. That scenario was played out against epidemiologists who did not find particular adverse effects of silicone breast implants. It has recently been feared again as the next strategy of the tobacco industry in the UK.
Advocates of making data publicly available seem to live in an ideal dream world, in which for every Professor A whose PhD students always publish A, there exists a Professor B whose PhD students publish B. Such schools of thought combat each other scientifically with more or less equal weapons. Other scientists watch this contest and make up their mind as to who has the strongest arguments and data. This type of ‘normal science’ disappears when strong financial incentives exist. Then the weapons are no longer scientific publications, but public relations agents and lawyers. Of course, also in ‘normal science’, there are rivalries that can be strong. It happens that researchers do not want to share their complete data, or only part of the data under conditions. Often this is for the very simple reason that some sources of data, like blood samples, are finite.
Calls for making data publicly available need to take into account these scenarios. Some people hope that open information in the long run provides the ‘real’ truth. But in a shorter timescale, open information may also allow mischief by special interests, with plentiful resources, that are ruthless in their attempts to shape public policy. It seems difficult to ‘experiment’, i.e. to try open access to data for some time and then turn it back when the drawbacks seem too great.
An intermediary solution might be much easier to implement. Tim Lash and I, following ideas of others, have proposed to make public registries of existing data. This would make it possible to start negotiating with the owners of the data about possible re-use. Such a registry might also facilitate the use of data in ways that were not originally planned. If controversy and distrust complicate the picture, trusted third parties can be sought to organize a reanalysis, with public input possible – a strategy recently proposed by a medical device maker.
In short, public access to data is much more complex than the proclamation of some principles that look so wonderfully scientific that nobody can argue against them.
Commentaries on this topic are very welcome. They can be published as a full guest blog of about 450 words maximum. Please email Epidemiologyblog@gmail.com.
Note: an earlier version of this blog was published as an opinion piece in the Dutch-language newspaper NRC-Handelsblad in the Netherlands on 12 October 2011.
© Jan P Vandenbroucke, 2011
Wednesday, September 21, 2011
Scientists often portray themselves as the noble but hapless victims of sensationalism and exaggeration in the popular media. But are scientists in fact sometimes complicit in these abuses, hyping their work in media interviews, making claims that would not survive peer review in the published articles? If so, this constitutes an important ethical violation that deserves further scrutiny, since communication with the public is at least as socially consequential as communication between scientists. Public opinion plays a long-term role in funding levels for competing research programs, for example, which makes exaggeration in news stories a serious abuse of the power granted to the scientist by a credulous and trusting media and public.
Here’s one example I came across recently which may fit this description. Nature Genetics published a meta-analysis by Dara Torgerson and colleagues in their September issue. The authors pooled North American genome-wide association studies of asthma comprising over five thousand cases, including individuals of European, African, and Latino ancestry. They reported a number of susceptibility loci, most of which showed similar associations across ethnic populations and had been previously described. But one variant was novel, and the association was described as being specific to individuals of African descent. Table 2 of the paper reported a SNP near the gene PYHIN1 on chromosome 1 with an odds ratio (OR) among African Americans and Afro-Caribbeans of 1.34 (95% CI: 1.19-1.49). In a replication data set, this association remained substantial (OR=1.23), although at a slightly different locus. For European Americans, the corresponding association for this SNP was reported as “NA”, which a footnote defined as “not available (the SNP was not polymorphic).” As noted by the authors, this finding is potentially interesting and important because of the substantial racial/ethnic disparity in asthma prevalence in the US (7.7% in European Americans versus 12.5% in African Americans).
Although the main text of the paper reports only the odds ratios and their confidence intervals, Table 1 on page 18 of the electronic supplement details the allele frequencies by group. Surprisingly, it is the minor allele, which was not observed in European Americans, that is associated with lower risk. The major allele had reported prevalences of 77.0% and 71.9% in African-origin cases and controls, respectively. There is no association in European Americans because 100% have the major allele. If this SNP is taken to be causal, therefore, the pattern for this variant would be the opposite of the observed disease phenotype prevalences, with 100% of European Americans having the high risk variant. Under the more likely interpretation that the SNP is a marker in linkage disequilibrium with a causal variant in the gene PYHIN1, however, the data have nothing at all to say about PYHIN1 and asthma in European Americans. The authors would have a basis to consider the unknown variation in PYHIN1 as explaining some cases of asthma within the African-origin population, but no claim to this being relevant in any way to racial/ethnic disparities. European Americans might have more or less of the high risk version of this gene; the data are completely silent on this issue.
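The direction of the association can be checked directly from those reported prevalences. A back-of-envelope sketch in Python, using a crude allelic odds ratio (which will not exactly reproduce the pooled, covariate-adjusted estimate in the paper):

```python
# Major-allele prevalences reported in the paper's supplement for the
# African-origin group: 77.0% in cases, 71.9% in controls.
p_case, p_ctrl = 0.770, 0.719

odds_case = p_case / (1 - p_case)   # odds of carrying the major allele, cases
odds_ctrl = p_ctrl / (1 - p_ctrl)   # odds of carrying the major allele, controls

# Crude allelic odds ratio for the MAJOR allele: above 1 and near the
# reported OR of 1.34, so the major allele is the risk allele.
or_major = odds_case / odds_ctrl

# The minor allele's odds ratio is the reciprocal, below 1: the minor
# allele (absent in European Americans) tracks LOWER risk.
or_minor = 1 / or_major
```

Since the major (risk) allele is fixed at 100% in European Americans, the SNP itself cannot account for a higher asthma burden in African Americans, which is the point made above.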
It came as a surprise, therefore, to see the news reporting on this publication. For example, the Reuters story published on July 31st began "U.S. researchers have discovered a genetic mutation unique to African Americans that could help explain why blacks are so susceptible to asthma." The story seemed to portray the SNP as the causal variant itself:
"But because the study was so large and ethnically diverse...it enabled the researchers to find this new gene variant that exists only in African Americans and African Caribbeans. This new variant, located in a gene called PYHIN1, is part of a family of genes linked with the body's response to viral infections, Ober said. "We were very excited when we realized it doesn't exist in Europe," she said."
How can one make sense of this text in relation to the published paper? If the reported SNP is by some great stroke of luck the causal variant itself, then it cannot explain the observed racial/ethnic disparity since it would lower risk in some blacks in relation to whites. If, on the other hand, the SNP is merely a marker for a causal variant somewhere nearby, presumably in PYHIN1, then it is nonsense to say of this unknown variant that it “doesn’t exist in Europe.” The data reveal nothing at all about the distribution of this variant in European Americans since no marker for this gene was found in that population. Either way, therefore, the news story did not seem to reflect the data that were reported in the article.
Thinking that this was an example of the press being irresponsibly sensationalistic, and misrepresenting the peer-reviewed article, I sent a letter on August 2nd to the Reuters science reporter and editor, signed by myself and about a dozen colleagues. We also sent a copy of the letter to the corresponding author of the article, the University of Chicago statistical geneticist Dan Nicolae.
The Reuters editor sent a detailed response without delay. She reviewed the statistical significance of the association measure and the proposed biological mechanism for how the PYHIN1 gene might affect asthma risk, and noted that the science reporter’s text was supported by interviews with two of the researchers as well as from a contact at the National Institutes of Health. To document this, she attached an e-mail from two of the authors, Dan Nicolae and Carole Ober, in which they affirmed their approval of the coverage their work had received. “First let us say that we think that the article is very well written and we have no major issues with it. We do not understand the issues raised in Dr. Kaufman's letter,” they wrote. They went on to note that perhaps the Reuters title might “slightly overstate the conclusion of our study”, but that it was a “subtle distinction” at best. “We thank you for helping us promote our science,” they concluded.
I then wrote to Dan Nicolae directly, asking him how the Reuters text could be construed to be consistent with the information in the paper. “I understand that race is a sensitive issue, subject to many debates,” he responded. “My research is on understanding molecular mechanisms of complex diseases, with the hope that this will lead to better treatments. It has nothing to do with this debate. On the Reuters news item, let me state that there are several scenarios where our data would fit with that headline. I will not discuss these scenarios here because I am convinced they will produce other discussion, and I prefer to use my time on my research projects.”
Apparently, Dr. Nicolae was comfortable that the Reuters reporting did not reflect the content of the paper because he believed that there were theories, not explored in the published article, which could make the news story valid. On the basis of his reply, I came to believe that this incident was not the result of a science reporter misunderstanding the published paper. Rather, it seemed to be the case of the scientist providing a speculative interpretation that was not vetted by the reviewers or the editors of the journal. Dr. Nicolae offered that my confusion may have arisen from ignorance, and recommended that I read up on tag SNPs, differences in linkage disequilibrium patterns between Europeans and Africans, and association signals produced by interactions. “These will lead you to these scenarios I am referring to,” he concluded.
While it is possible for a risk factor to operate in different directions across two populations, this entirely sidesteps my concern, which is that the reporting strayed from what could be said based on the content of the published article. There could be no evidence of effect measure modification presented for this variant, since there was no exposure variation in the European Americans, and therefore no association measure could be estimated in that group. Dr. Nicolae did not appear to disagree with me on this point, but seemed to view the media interview as an opportunity for presenting his research program as relevant to racial disparities in a way that could not be directly derived from the published data. This is surely a fine line, because journalists often want scientists to give their expert opinions on the broader interpretation of the published work. But how far should authors go in describing what they might speculate to be true, rather than what they actually found? The impetus for the news story was the publication of an article in a respected scientific journal. Are there really no constraints on how far authors can extend their interpretation while claiming to be referring to the article? Should they clearly indicate that they are speculating - and should they also present at the same time the potential contrary or skeptical view? With so much attention and funding riding on efforts to understand and reduce minority excess burden of disease, the authors’ speculation risks the appearance of being self-serving. If scientists sometimes disparage science reporters as the source of popular misinformation, the fair reply might therefore be “Cura te ipsum!”
If you would like to comment, email me - or in this case Dr Kaufman - directly at email@example.com, or submit your comment via the journal, which requires a password-protected login. Unfortunately, published comments are limited to 1000 characters.
 Torgerson DG, et al. Meta-analysis of genome-wide association studies of asthma in ethnically diverse North American populations. Nat Genet. 2011 Jul 31;43(9):887-92. doi: 10.1038/ng.888.