Re: Self-Selected Vetting vs. Peer Review: Supplement or Substitute?

From: Stevan Harnad <>
Date: Thu, 16 Dec 2004 03:25:32 +0000

On Wed, 15 Dec 2004, [identity deleted] wrote:

> I read your article "Free at Last: The Future of Peer-Reviewed Journals".
> I think you are too conservative and not radical enough. Why not just
> eliminate peer review altogether?

I have written on that question too:

    Harnad, Stevan (1998/2000/2004) The invisible
    hand of peer review. Nature [online] (5 Nov. 1998).
    Longer version in Exploit Interactive 5 (2000), and
    in Shatz, D. (ed.) (2004) Peer Review: A Critical
    Inquiry. Rowman & Littlefield. Pp. 235-242.

In brief: The objective is to free the peer-reviewed literature from
access-tolls, not from peer review (until/unless an alternative
to peer review is found, tested, and demonstrated to yield a literature
of at least equal quality to what we have now).

> What is the need for it? It is supposedly a form of quality control.

Not supposedly. It just *is* qualified experts, vetting the work of
fellow-experts, in order to save the research community as a whole the trouble of
having to wade through unvetted work for themselves.

> However quality control and filtering are much easier to do on the internet.

Peer review *is* being done on the Internet now, by virtually all major journals.
It is medium-independent, being just qualified experts vetting specialized work.

> If you want to ensure an article is
> of good quality then why not just post it on the internet, along with all
> the simulation data and programs and allow people to directly reproduce
> everything you have done.

Because people don't have the time to peer review everything for themselves. It's
hard enough to get busy referees to do that when invited by the editor.

> If nobody finds any mistakes and many people
> succeed in reproducing your results then obviously your research is correct.

And if they *do,* then you have wasted an awful lot of people's work and time.

> On the internet people can post criticisms of your paper, point out errors
> and easily cite your research.

They can and should do that at the preprint stage, while it's being
refereed. But for those many (most) busy researchers who haven't the time
to wade through all that unfiltered work, it is the refereed postprint
they will want to read and use, and try to build upon.

> An article is obviously good if it has no
> remaining errors, its results have been successfully reproduced numerous
> times and it is heavily cited. Determining which articles are good and
> which are bad can be done in the same way as google determines which
> websites are good.

That would work only if scientific and scholarly soundness were a popularity
contest -- and if scientists and scholars had nothing better to do than to wade
through all the raw drafts that pass over the poor editor's submission desk,
for which the editor must otherwise find qualified referees, to whose
recommendations the author is answerable for revision. This has been proposed
many times, never tested (though some small tests are currently underway), and
never yet shown to be sustainable, scalable, and capable of yielding a
literature of at least the quality and reliability we have now. Till the
experiment is done, and shown to be successful, I suggest that we stick with
what we know works:

    Peer Review Reform Hypothesis-Testing (started 1999)

    A Note of Caution About "Reforming the System" (2001)

    Self-Selected Vetting vs. Peer Review: Supplement or Substitute? (2002)

> An article is good if it is heavily linked to. Also an
> author is worth looking at if he has published good papers in the past.

This all works fine if you mean peer-reviewed postprints, and citations
rather than links -- a good a-posteriori *supplement* to peer review, but
not a *substitute* for it if applied only to unrefereed preprints. There is
some correlation (between preprint downloads and citations and later postprint
citations), but not nearly enough to imply that the one can replace the other.

> A scientific search engine can determine both of these things automatically.

I know, because Tim Brody has created one (Citebase). But it covers both
postprints and preprints, and the preprints are almost all headed for
becoming postprints, via peer review.
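The Google analogy can be made concrete. A minimal illustrative sketch (not
Citebase's actual algorithm -- the citation graph and paper names below are
invented) is a PageRank-style score over a citation graph, so that a paper
cited by well-cited papers ranks higher:

```python
# Illustrative only: link-style ranking applied to a toy citation graph,
# analogous to how Google ranks web pages by incoming links.

# cites[p] = list of papers that paper p cites
cites = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": [],
    "D": ["C", "B"],
}

def citation_rank(cites, damping=0.85, iterations=50):
    """Power-iteration PageRank over a citation graph."""
    papers = list(cites)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in papers}
        for p, refs in cites.items():
            if refs:
                # a paper's weight flows equally to each paper it cites
                share = damping * rank[p] / len(refs)
                for r in refs:
                    new[r] += share
            else:
                # papers citing nothing spread their weight evenly (dangling nodes)
                for r in papers:
                    new[r] += damping * rank[p] / n
        rank = new
    return rank

ranks = citation_rank(cites)
# "C" (cited by all the others, one of them itself well cited) ranks first
print(sorted(ranks, key=ranks.get, reverse=True))
```

Note that this score measures only a-posteriori popularity within the graph;
it says nothing a-priori about whether an unrefereed paper was sound to begin
with, which is the point at issue.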

> You could even have meta journals set up which would be websites that would
> have links to only the articles of highest quality and importance in a
> particular field. If you don't want to waste time searching through trash
> just use the appropriate metajournal.

All fine, and desirable -- for postprints. But no substitute for peer
review. Peer review is pre-filtering, a-priori, to meet established
quality standards (at various levels in the journal quality hierarchy),
so users know what is safe to use and try to build upon. Citation is
post-filtering, a-posteriori, on the basis of the outcome of that peer
review, showing what has not only met standards at a certain level,
but proved useful.

> In fact in my field we already have a metajournal, namely [link deleted].
>
> Peer review is a poor form of quality control in the current state of
> science.

Compared to what? and on what evidence?

> This is because science is
> very specialized and rapidly changes so that many of the peer-reviewers
> don't understand the articles they review.

This is too vague and general. Peer review (which is merely qualified expert
evaluation) is not perfect, but if the referees are competently chosen by a
competent editor, and the author is answerable for those of their
recommendations that the editor judges valid, it generates (on average) the
peer-reviewed literature we have now, at the various levels of the journal
quality hierarchy (each based on a known track-record). A systematic drop in
peer-reviewing standards results in a drop in journal quality and reputation
(usually reflected also in a drop in citation impact). Nuovo Cimento, for
example, was once among the top physics journals; now it is among the bottom
ones.

> Also peer-review is often used
> to suppress research that powerful people in the scientific community don't
> like and it is also used to plagiarize research results (by a peer-reviewer
> rejecting the article at peer-review but then reproducing the results
> elsewhere).

Often? How often? Do you have objective data on relative frequency? And compared
to what?

> Peer-review is also used to convey quick and easy legitimacy on
> papers that in many cases have not earned it.

What is the evidence for that? and compared to what? And for which journals, at
what level in the quality hierarchy? and for what proportion of their articles?

> I would say a paper is only
> legitimate if its results have been reproduced numerous times and it has
> successfully withstood scientific criticisms over a long period of time.

First, peer review applies to non-science scholarship as well as science,
so replication is not a universal criterion. But even before investing the
time in trying to replicate or build on something, a busy researcher
surely prefers to have an idea of whether it is a safe investment.

Stevan Harnad

A complete Hypermail archive of the ongoing discussion of providing
open access to the peer-reviewed research literature online (1998-2004)
is available at:

UNIVERSITIES: If you have adopted or plan to adopt an institutional
policy of providing Open Access to your own research article output,
please describe your policy at:

    BOAI-2 ("gold"): Publish your article in a suitable open-access
            journal whenever one exists.
    BOAI-1 ("green"): Otherwise, publish your article in a suitable
            toll-access journal and also self-archive it.
Received on Thu Dec 16 2004 - 03:25:32 GMT

This archive was generated by hypermail 2.3.0 : Fri Dec 10 2010 - 19:47:44 GMT