Re: A Note of Caution About "Reforming the System"

From: Stevan Harnad <>
Date: Fri, 18 Oct 2002 02:52:51 +0100

On Wed, 16 Oct 2002, [Identity removed] wrote:

> Dear Dr. Harnad--I have been doing a web search on peer review in
> connection with an article I am doing on the journal [deleted].
> [Deleted] published the paper but later on said it erred and while not
> retracting the paper said it never should have published it. It arrived at
> that decision after essentially having sent the authors back to the lab to
> verify their findings. What I am exploring is whether an improved
> refereeing procedure might have avoided the flap...

Although it sounds serious and important, in reality scientific fraud
is not a real problem for the simple reason that science is cumulative,
hence self-corrective. Consider these two alternatives:

(1) If a finding is not important enough for other researchers to bother
to try to use it and build upon it, then it hardly matters if it is
erroneous, because it has no effect on further work.

(2) If a finding is important enough for other researchers to try to use and
build upon, and it is erroneous, then it will collapse under the weight
of further attempts to build on it (because it is erroneous).

The only thing that might go undetected is a false NEGATIVE finding,
but, first, there is not much motivation to publish negative findings
(let alone false ones); and second, if the potential positive finding
was promising and important, others will want to test whether it really
was a nonstarter (and will find that it is not). Self-correction again.

Nor is peer review in general to be blamed for the occasional failure to
detect error or fraud: Peer review is simply qualified experts
evaluating the work of other qualified experts. Occasionally they will
err; occasionally they will be biased. But there is no "system" to
protect against such human foibles as long as the judgments depend on
human minds. At best, journals can use more referees -- but refereeing
is done for free, using precious time that referees poach from their
research, so it is unlikely that this overharvested resource can be
harvested even more intensely. Besides, more referees for everything
would simply mean doing in parallel -- for everything -- what the
self-corrective process I mentioned already accomplishes serially for
the minority of errors and fraud that escape detection.

People often speak about the need to reform the peer review system, but
they usually don't realize that there is no peer review "system": it is
merely human experts evaluating one another's work. It's hard to imagine
who could be better qualified to do so than qualified experts (referees),
selected by qualified experts (editors), but if someone has a hunch about
a better system, it would be best if it were tested first, to see whether
it really worked at least as well as the classical system, before being
advocated or adopted.

On the other hand, there are many ways that the new online medium can
be used to make peer review more efficient and equitable, to select
qualified experts more broadly and evenly, to distribute the load more
widely, and to speed the process:

Finally, about "retractions": A journal is not a law court. There is no
way, after a work has appeared, to make it disappear or unappear.
Journals stay on shelves. The only thing that can be done if there has
been an error is to make that known: I would say that [Deleted]'s saying
that the paper in question had been in error and that it should never
have been published more or less accomplishes this public corrective
function. What do you think a "retraction" would add (or mean)?

Here too, the online medium is preferable, because it allows corrigenda
and updates to be attached to papers directly, so all users see that
there have been corrections or additions. One cannot do this with paper
articles on shelves.

Below is a relevant recent posting on this from Peter Suber:

    FOSN for 5/23/02, linking the retraction problem to FOS issues,
    though loosely:

    * In the May 20 _Tech Central Station_, Howard Feinberg reports
    on the survival of bad scientific ideas after their retraction or
    invalidation. A 1998 study by John Budd showed that 235 scientific
    articles "retracted due to error, misconduct, failure to replicate
    results or other reasons" had been cited 2,034 times after their
    retraction, and that most of the citing papers did not mention the
    retraction. Feinberg uses the Budd study to set up a discussion
    of the recent fiasco at _Nature_, in which a paper was withdrawn
    after publication by the editors, who had faced intensive lobbying,
    both scientific and non-scientific. (PS: Will FOS aggravate the problem
    of overlooking retractions, by keeping old studies circulating
    forever in the Google cache and Wayback Machine? Or will it
    mitigate the problem, by allowing more intelligent searching and [...])

Harnad, S. (1985) Rational disagreement in peer review. Science,
Technology and Human Values 10: 55-62.

Harnad, S. (1996) Implementing Peer Review on the Net: Scientific
Quality Control in Scholarly Electronic Journals. In: Peek, R. & Newby,
G. (Eds.) Scholarly Publishing: The Electronic Frontier. Cambridge MA:
MIT Press. Pp 103-108.

Harnad, S. (1997) Learned Inquiry and the Net: The Role of Peer Review,
Peer Commentary and Copyright. Learned Publishing 11(4): 283-292.

Harnad, S. (1998/2000) The invisible hand of peer review. Nature
[online] (5 Nov. 1998)

Best wishes,

Stevan Harnad

NOTE: A complete archive of the ongoing discussion of providing open
access to the peer-reviewed research literature online is available at
the American Scientist September Forum (98 & 99 & 00 & 01 & 02):

Discussion can be posted to:

See also the Budapest Open Access Initiative:

the Free Online Scholarship Movement:

the OAI site:

and the free OAI institutional archiving software site: