
longfellow95

08/03/19 2:27 PM

#238411 RE: JerryCampbell #238407

Why assume that?

That paper could easily be reviewed in a couple of days.

Most peer reviewers fail to spot errors anyway, as experiments have shown.
Here is JTM's policy on peer review:

Peer-review policy

Peer-review is the system used to assess the quality of a manuscript before it is published. Independent researchers in the relevant research area assess submitted manuscripts for originality, validity and significance to help editors determine whether the manuscript should be published in their journal. You can read more about the peer-review process here.

Journal of Translational Medicine operates a single-blind peer-review system, where the reviewers are aware of the names and affiliations of the authors, but the reviewer reports provided to authors are anonymous.

The benefit of single-blind peer review is that it is the traditional model of peer review that many reviewers are comfortable with, and it facilitates a dispassionate critique of a manuscript.

Submitted manuscripts will generally be reviewed by two or more experts who will be asked to evaluate whether the manuscript is scientifically sound and coherent, whether it duplicates already published work, and whether or not the manuscript is sufficiently clear for publication. The Editors will reach a decision based on these reports and, where necessary, they will consult with members of the Editorial Board.



The permanent Open Access policy is more important anyway, imo.

Open access

All articles published by Journal of Translational Medicine are made freely and permanently accessible online immediately upon publication, without subscription charges or registration barriers.



The peer review process in general is a bit of a standing joke.

WHAT IS PEER REVIEW?

My point is that peer review is impossible to define in operational terms (an operational definition is one whereby if 50 of us looked at the same process we could all agree most of the time whether or not it was peer review). Peer review is thus like poetry, love, or justice. But it is something to do with a grant application or a paper being scrutinized by a third party—who is neither the author nor the person making a judgement on whether a grant should be given or a paper published. But who is a peer? Somebody doing exactly the same kind of research (in which case he or she is probably a direct competitor)? Somebody in the same discipline? Somebody who is an expert on methodology? And what is review? Somebody saying "The paper looks all right to me," which is sadly what peer review sometimes seems to be. Or somebody poring over the paper, asking for raw data, repeating analyses, checking all the references, and making detailed suggestions for improvement? Such a review is vanishingly rare.

What is clear is that the forms of peer review are protean. Probably the systems of every journal and every grant giving body are different in at least some detail; and some systems are very different. There may even be some journals using the following classic system. The editor looks at the title of the paper and sends it to two friends whom the editor thinks know something about the subject. If both advise publication the editor sends it to the printers. If both advise against publication the editor rejects the paper. If the reviewers disagree the editor sends it to a third reviewer and does whatever he or she advises. This pastiche—which is not far from systems I have seen used—is little better than tossing a coin, because the level of agreement between reviewers on whether a paper should be published is little better than you'd expect by chance [1].

That is why Robbie Fox, the great 20th century editor of the Lancet, who was no admirer of peer review, wondered whether anybody would notice if he were to swap the piles marked "publish" and "reject." He also joked that the Lancet had a system of throwing a pile of papers down the stairs and publishing those that reached the bottom. When I was editor of the BMJ I was challenged by two of the cleverest researchers in Britain to publish an issue of the journal comprised only of papers that had failed peer review and see if anybody noticed. I wrote back, "How do you know I haven't already done it?"



And on bias in peer review:

Bias

The evidence on whether there is bias in peer review against certain sorts of authors is conflicting, but there is strong evidence of bias against women in the process of awarding grants [5]. The most famous piece of evidence on bias against authors comes from a study by DP Peters and SJ Ceci [6]. They took 12 studies that came from prestigious institutions that had already been published in psychology journals. They retyped the papers, made minor changes to the titles, abstracts, and introductions but changed the authors' names and institutions. They invented institutions with names like the Tri-Valley Center for Human Potential. The papers were then resubmitted to the journals that had first published them. In only three cases did the journals realize that they had already published the paper, and eight of the remaining nine were rejected—not because of lack of originality but because of poor quality. Peters and Ceci concluded that this was evidence of bias against authors from less prestigious institutions.



https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1420798/