08 February 2010
Head of Stem Cell Biology and Developmental Genetics at the MRC National Institute for Medical Research in London
Appeared in BioNews 544
These are bold claims to make, and the response from at least one journal editor has been a robust denial. Why did we make them and what was our evidence?
I should stress that many of the problems we raise are not confined to stem cell research. We chose to highlight this field, in part because we know it well, but also because it is prone to hype with some strong characters involved. In addition, our previous 'open letter', and the discussion surrounding it, had largely been ignored by journals (see: website).
To place our complaints in context, I should briefly outline the process that operates when a manuscript is submitted to a journal. First, the editor will often make a subjective decision about whether the work is sufficiently novel and interesting. If they don't have sufficient expertise to make this call, they will sometimes ask a 'trusted reviewer' to have a quick look and decide for them. This is the first step that can be prone to bias. If the manuscript gets over this hurdle, it will be sent to two or three scientists who are experts in the area or areas of research it describes. One role of the editor is to choose the reviewers. If they do not know the field well enough, they might make honest mistakes and choose reviewers who are inappropriate, perhaps because of a known history of extreme competitiveness with the author of the manuscript. Or they may deliberately choose rivals, knowing that the review will be a tough one.
But the editor may also have a small set of reviewers who they use more than others. These reviewers may be 'trusted' simply because they are thorough and efficient. However, they may also have a certain power in the field, want to retain their dominance, and/or have a bias. Such biases don't have to be strong, although they can be, and they can be against other scientists, or even countries, or for their friends and colleagues. These reviewers may also have a slightly unfair relationship with the journal. As Austin Smith said: 'Certain individuals can more or less force editors to accept their papers. And those same (people), if they want to kill a paper, the editor will go along with it'.
The paper can be 'killed' by a flat rejection criticising the science (fairly or unfairly), by simple statements such as 'the work is not of sufficient novelty or interest' (with or without any justification), or by a request for many additional experiments to be carried out, which may be pointless or even impossible.
The journal editor usually accepts a paper for publication when all - or most of - the reviewers are satisfied. But even a single negative review can lead to rejection if the editor deems the reviewer's opinion to be sufficiently important. Sometimes the reviewer has noticed a perfectly valid problem that others have not, but it might also be simply a subjective view rather than an objective criticism.
We feel that some reviewers are increasingly sending back negative comments, or asking for unnecessary experiments to be carried out, for spurious reasons. This may be done simply to delay or stop the publication of the research so that the reviewers or their friends can be the first to have their research published.
By relying on a few 'trusted' reviewers, there is a danger of having a clique where only papers that satisfy this group are published. The problem lies with weak editors, who go along with these reviewers when they are being unfair. The author of a paper can complain that a set of comments is unfair, but the editor does not want to go against the reviewer (even though the author may be just as good a scientist).
Why would editors be persuaded unfairly by one reviewer over another? Journals are in competition with each other, and they all strive to have a high impact factor. Editors have become dependent on their favoured experts to review other people's research and to submit their own. If the editor offended the reviewer, by rejecting their opinion, they may lose future papers to a rival.
Even if research is not being deliberately stifled, high-quality work may be overlooked as an 'accidental consequence if journal editors are relying too much on the word of a small number of individuals'. Relying on the view of one individual can distort what gets published so that it doesn't reflect where the field should be going, especially if a paper challenges dogma arising from the work of the reviewer in question.
So why do we bother to submit papers to these high-profile journals? It is simply because research grants and career progression are now determined almost entirely by whether a researcher gets published in these journals. We have little time to read papers, so we take a shortcut when judging proposals or CVs by asking where the work has been published. We are all to blame for this, and it is difficult to see how to solve the problem.
What evidence do we have that this is happening? This is a difficult question to answer. Several colleagues and I have had papers delayed and/or rejected in a way that we felt was unfair. On one occasion (where there was a significant delay) the identity of the 'difficult' reviewer became known to me - and it was only then that the editor had to admit there was unfair bias. In another case, we had three out of four reviewers strongly endorsing acceptance of a paper in a 'top' journal, but the editor sided with the one negative reviewer, whose comments were not justified. (This paper was resubmitted to another 'top' journal and went straight in, with glowing comments from all three reviewers.) I have also had conversations with editors who have openly admitted that they are basing their decision on a trusted reviewer, even when this reviewer's comments were of poor quality.
Austin Smith has his own stories - hence his refusal to publish in, or review for, Nature journals (see his website). Moreover, we received supportive comments from other stem cell experts before the story was broadcast by the BBC. Subsequent to the broadcast (and website coverage), I also received several emails from scientists who supported our position, some of whom gave examples of unfair treatment.
But all these examples are simply 'anecdotes', and each case could be explained away by journals - for example, by saying the editor received additional, unsupportive private comments from reviewers (should such private comments be allowed?), or that it was a simple case of bad luck. It is unlikely they will admit to choosing a reviewer who had a bias or an unstated conflict of interest. They are even less likely to admit to having weak editors.
The phenomenon of the 'trusted reviewer' is iniquitous, because the editor is delegating responsibility to a few individuals who then become powerful determinants of what does and does not get published. This effectively transfers editorial decisions to a few 'important' reviewers rather than the editor, when it is the latter who should take a balanced view. Such individuals are likely to have biases - as we all do. These biases may be against individuals, countries or types of study, or for their friends, or those who already have a track record in the field. The latter will discriminate against young scientists and those changing fields - even though it is often such people who will produce the dogma-changing stories - in part because they are not weighed down by the established view.
Moreover, the problem is not just that papers are rejected unfairly; it also creates the perception that, to publish in the 'top' journals, you have to be a member of the 'club'. I am often asked to look over a manuscript and suggest journals to which it should be submitted. I have seen some very nice stories but, when I suggest a 'top' journal, the author says they are reluctant to send it there because these journals are only for high-profile people. This is a common perception in non-western countries.
As authors, we do not have access to the data that would prove or disprove our contentions. But the journals could search their own records and come up with some data. For example, there is a strong perception in the stem cell field that certain authors in the Boston area are unreasonably dominant. The journals could simply ask how many stem cell papers are submitted from Boston institutions, compared with elsewhere (perhaps by country), and let us know the relative rates of acceptance. This will not be perfect, of course, but it would be interesting to see if the answers support or refute this accusation.
Other measures may help. The proposal put forward in our open letter was that, if a paper was published, the accompanying reviews should be provided as supplementary material online. This might help ensure reviewers do their job properly. Only one journal, the EMBO Journal, has taken up the idea, and it will be interesting to see how this works. Importantly, however, this will not tell us anything about manuscripts that were rejected. Nevertheless, I hope that simply by raising these issues, especially those of the 'trusted reviewer' and weak editorial decisions, senior editors will think about them and come up with procedures that will reduce the chance of bias creeping into their own journals.