Editing a scientific journal is not without problems and challenges. Having to deal with more than 1000 papers a year (as happens to the editor of the Journal of Hypertension) would be a frustrating burden, were it not that these experiences have been a source of reflection, which in turn has provided stimuli for continuously readjusting the parameters of judgement and for becoming aware of the difficult balance of pros and cons in each decision. At the beginning of a new year for the journal, I would like to share some of these reflections with the readers and stimulate their reactions.
Role of a scientific journal
The main purpose of science is new discovery, but new discoveries would be meaningless and ineffective without communication; this is particularly true for an applied science like medicine. If there is no science without communication, then scientific journals are an integral part of research. This enhances the editor's responsibility in accepting or rejecting manuscripts, accelerating or delaying their publication, requiring multiple revisions, and so on. The greater the difference between the number of submitted papers and the number actually published, the greater the editor's role and, in the worst case, the editor's arbitrary power. The main task of a journal editor is to choose among the submitted manuscripts, selecting those that are scientifically the best and most suited to the journal's character. This is an important task, since it initiates a virtuous cycle by which publication of many excellent papers heightens the status of the journal and attracts the submission of other excellent manuscripts. Selection, however, is very difficult. There is no doubt that peer review is the gold standard, but even peer review has not only lights but also shadows, which editors, authors and readers should not ignore.
Role of reviewers
Conflict of interest
Potentially, the most suitable reviewers are those working in the same area of research: they are likely to be the most knowledgeable about the problems being explored and the techniques being employed. They are also likely to be the most interested in reviewing, and therefore the most likely to accept and to provide a prompt and well-argued commentary. At times, however, reviewers may have a conflict of interest (which they are asked to declare), and this may lead them to decline the invitation to review. Beyond commercial interests and the competition represented by submission or preparation of another paper or grant proposal in the same specific area of research (conflicts for which declining is only too obvious), the span of potential conflicts of interest is very wide and sometimes difficult to perceive; it also includes biases related to previous experiences, personal ideas, prejudices, preferred techniques, even the public image reviewers have made for themselves. The editor must be aware of this when choosing reviewers and when weighing their comments.
Conflict of opinion
Reviewers not infrequently give differing opinions. Sometimes these result from a reviewer's bias, or from a precipitate and careless judgement. Such biases can often be recognized by the editor, but in many cases divergent opinions are well argued and carefully considered, reflecting the different viewpoints from which an investigation can legitimately be assessed. In most of these cases, asking for a further opinion is necessary, although the final decision rests with the editor. Obviously, seeking additional opinions inevitably prolongs the process of reaching a first decision about a manuscript.
Conflict between careful revision and rapidity of publication
Careful and often laborious revision and rewriting of a manuscript as a result of peer review often delays publication considerably, whereas the mission of a journal is to publish good papers as promptly as possible. A reasonable objection is that carefully prepared papers reporting well-performed experiments are the most likely to enjoy fast-track publication, but it is also true that reviewers who require revisions are not always prompt in returning their subsequent reviews, and at that stage they can hardly be replaced by other referees. Nonetheless, in the potential conflict between quality and rapidity of publication, quality should undoubtedly prevail.
Conflict between authors' and reviewers' opinions
There is no doubt that scientific discussion between authors and reviewers is often rewarding for the authors themselves, for the journal and therefore for the scientific community. The editor should be aware, however, of the temptation of certain reviewers to substitute themselves for the authors, obliging them to rewrite the manuscript in the way the reviewers would have prepared it, so that the paper ends up becoming the reviewer's work rather than the authors'.
Conflict between anonymity and openness
Should reviewers be anonymous or, for openness' sake, should they sign their reviews? There is ongoing debate about this, and different journals follow different policies. The Journal of Hypertension has chosen anonymous reviewing, since it is the editor's opinion that freedom of judgement, especially within a relatively small community such as that of hypertension research, has a much greater value than openness of criticism. When reviewers do not wish to remain anonymous, however, their names are provided to the authors. Furthermore, some reviewers are invited to write editorial comments, and in this way (though post hoc) they lose their anonymity. Finally, when founded on scientific arguments, an author's right of appeal is guaranteed, and renewed review of an article by the previous or by new referees is possible.
The dilemma for the editor: worshipping and manipulation of the impact factor
The quality of the journal is the obvious goal for its editor, but what often matters is how this quality is measured and perceived, and how it compares with the standard of other journals. The Impact Factor has now become the most popular index of quality. With crude approximation, it is even applied to assess the value of individual investigators, although a much better assessment would consist in measuring the number of citations received by their own papers, rather than the average number of citations received by the journals in which they have published.
Admittedly, the Impact Factor is a better measure of quality for journals than for individual papers. Yet even for journals the Impact Factor has problems and limitations. A first limitation is the relatively short time span (the 2 years following publication) over which the Impact Factor is calculated: a longer time span, say 3 to 5 years, would probably be a more realistic basis for judgement, but this conflicts with the requirement, or the convenience, of rapidly updating the ranking of journals.
The major problem, however, is that worshipping or targeting the Impact Factor may sometimes distort the editor's decisions. Indeed, the Impact Factor can be manipulated. Disproportionately increasing the space devoted to reviews, meta-analyses and guidelines (papers likely to receive more citations than original articles), for example, greatly helps the Impact Factor to rise. However, science advances not through reviews (useful as these may be for teaching and updating) but through original papers, without which there would be neither reviews nor meta-analyses. Editors often bear some responsibility for the trend towards citing reviews or meta-analyses rather than original contributions when, in order to save journal space, they ask authors to substitute citations of reviews and meta-analyses for citations of original papers.
Aiming to raise the Impact Factor may further distort the editor's judgement by inducing him to favour publication of mediocre manuscripts on topics of widespread interest, which are more likely (sometimes for commercial reasons) to receive many prompt citations, over very innovative work in restricted areas of more specific interest, which is likely to receive fewer citations or to be cited over a longer time span than that covered by Impact Factor calculations. Every year the Journal of Hypertension publishes an analysis of the citations of articles in the journal during the two preceding years [1–3]: the most widely cited papers are evidently reviews and guidelines, and among original articles, clinical papers (often on drug therapy) frequently receive more citations than basic research papers. It is the appeal of this evidence that the editor must resist in order to keep the Journal of Hypertension devoted to original contributions and well balanced between clinical and basic research.
Access to scientific journals
So-called open access journals have recently gained increasing appeal, on the basis of the politically popular concept that the results of science should be easily distributed and freely accessible to everybody. This concept is certainly attractive, but it should not be ignored that open access publication does not really abolish costs; it shifts them from readers to authors. In this way an author's ability to cover costs adds to, and may sometimes override, the assessment of research quality, which remains the major criterion for peer-reviewed journals. It seems wiser for institutions to allocate money to purchasing journals of known high standard than to spend it subsidizing publications whose quality the institution is not equipped to evaluate. On the whole, experience shows that there may be room for both kinds of journal: the traditional proprietary ones, to which readers pay for access, and the new wave of open access journals, whose costs are borne by authors. In the era of electronic communication all barriers to publication may finally fall, with every author or institution able to open a website on which to publish contributions even without any form of review: ease of communication may end up turning into a flood of information, and the difficult burden of discriminating good from bad, true from false, important from trivial will rest on the readers.
Ownership of scientific journals
In most cases scientific journals are owned by professional publishing companies; sometimes they are owned by scientific societies; in other cases they are owned by publishing companies but sponsored by scientific societies through special agreements between the two. The last is the case of the Journal of Hypertension. All types of ownership have pros and cons, but I must say I have found the arrangement of the Journal of Hypertension particularly positive. A professional publisher takes care of all the technicalities of publishing, printing and distribution, while the sponsorship of scientific societies (for the Journal of Hypertension, the International and the European Societies of Hypertension) enhances the scientific image of the journal and strengthens the editor in his duties of guaranteeing scientific excellence and resisting commercial pressures in the selection of papers. In my long experience as editor of the Journal of Hypertension, I am particularly grateful to both the publisher and the sponsoring societies for the fact that I have never received any extraneous pressure to publish or not to publish any article, review or supplement.