Re: Independent scientific publication - Why have journals at all?

From: Stevan Harnad <>
Date: Wed, 3 Mar 1999 19:59:28 +0000

Bruce Edmonds <b.edmonds_at_MMU.AC.UK> wrote:

 sh> That is all a journal is. The "journal" part is simply the quality
 sh> controller and the provider of the quality-control tag.

be> No, a journal usually implies a holding of the published papers, including:

be> * copyright

Should be held only by the author, with limited rights assigned to the
publisher, in the refereed journal literature, where authors give their
papers away to both publisher and readers.

be> * responsibility for its final mark-up and presentation

Covered by quality control (for form: copy-editing/markup).

be> * storage on site of the paper

No paper, no need for publisher to store. Public archives will do it,
and do it better, and for free.

be> I am suggesting separating these roles from the review/quality control
be> process.


 sh> But why should there be only one "relevant subject review board" for
 sh> each subject
be> I did not intend for there to be only one board. I would have thought
be> that there would be as many such boards as there are presently
be> journals.

Then why call them boards instead of journals (journals already have
editorial boards)?

(And why conflate public archiving of successive drafts, even accompanied
by their referee reports, with quality-controlled publication?)

be> Furthermore, if journals do not hold papers then there is
be> no reason why different boards may validate the same paper in different
be> ways for different audiences. Thus the reader can still make use of
be> the hierarchy of control-tags etc.

My earlier posting suggested that this scheme was unrealistically
profligate with referee time and good will; this sounds more profligate
still.

And I said why the tags in such a system could not signify or certify
what they do in real peer review.

 sh> But besides keeping the reports confidential, if the author doesn't want
 sh> them publicised, what provision is there for seeing that they are heeded
 sh> by the author?

be> If the author wants a higher grade then they will have to heed them,
be> just as with re-submission now.

But what would make referees want to heed all these submissions, as
they heed them when invited by a known journal editor, who will see to
it that their recommendations are heeded if the paper is to be accepted
("tagged") at all?

 sh> Refereeing is not like school grading of essays. Peer review does not
 sh> consist of giving stars for content and presentation.

be> At the moment there is, in effect, a one-star system - a paper is either
be> published or it is not.

That is only the tip of the iceberg: The CONTENT of the referee reports,
and the assurance by the editor that only when they are heeded does the
article get published, is the quality-control-feedback to the author. It
is not JUST a tagging system, and the tags derive their validity from
that very fact.

be> I am proposing that more information be made
be> available to the readers, so they may pick and choose in a more
be> informed way suiting their purposes at the time. For example: I may be
be> just looking for new ideas in using a particular technique, I could
be> then search on low quality papers but with tight constraints on subject
be> matter; alternatively I may want to be broadly informed of important
be> developments across a wide range of topics, in which case I would look
be> for only the best papers over any topic.

Almost all of that can be done with an unrefereed public archive. Why should
real quality control be sacrificed for this lesser constraint? And why
should referees lend their time and expertise to it?

 sh> What
 sh> referees would devote their limited time and expertise to such a free
 sh> for all, where every draft is "published" with "stars," and there is no
 sh> answerability to referees' recommendations

be> For much the same reasons as they devote their time now: control and
be> influence over what is read, influence over the future development of
be> the paper, prestige from being a board member (depending on the
be> prestige of the board, of course).

But that prestige derives from the rigour of the peer review and the
resulting quality of the contents of the journals. In the ad lib,
open-ended free-for-all system described here, with its profligate use
of referee time, it is not at all clear that any incentive remains for
serving as a referee at all (the incentive is frail as it is!).

 sh> except endless rounds of
 sh> further star-seeking: What referees have the time to contribute to an
 sh> open-ended free-for-all like that?
be> I would expect that each board would have its own policy on repetitious
be> star-seeking, as fits its work-load, prestige, purposes etc.

A high-quality journal is usually a high rejection-rate journal. The
purpose of the exercise is not the 90% that are rejected, but the 10%
that are accepted. And within that 10%, the purpose is to guide revision,
by means of the substantive comments (not ratings) on the contents of
the submission, which the editor makes sure are heeded if the paper is to
be accepted. The successful results of that prepublication process are
what refereeing is for. I do not believe that referees will contribute
their time to a star-grading system where heeding the reports is optional.

Almost all papers eventually get published somewhere. Rejection rate
varies with fields; in some fields researchers are more realistic in
submitting to their proper level in the first place; in other fields,
authors shoot for the top, and then work their way down, with
considerable waste of referee time -- which is already a problem, and one
that your proposed system would aggravate markedly.

At bottom, refereeing is not about tagging but about revision, and your
star system is just about tagging. It's as if refereeing were like
assigning grades to eggs, and you are replacing a Grade A-E system with
a grade A1-A5, B1-B5...E1-E5 system -- finer-grained on the face of it,
but who's got the time to quality-control all those eggs? And how does
one figure out what, if anything, this finer-grained tagging system
really means, assuming there are unpaid quality-controllers who are
willing to go along for the ride at all?

 sh> The real cost (though small) is in administering the peer review

be> I agree. I would think that established boards might charge per paper
be> submission and academics keen on opening up new areas and catering for
be> new audiences would do it for free. Boards with a low number of
be> submissions could manage the work-load for free; ones with high demand
be> can presumably tap this for funds.

Charging for peer review is fine (I advocate it myself), but the cost
is for implementing it, not for the refereeing itself, which is
done for free. What empirical evidence is there, however, that referees will
continue to volunteer their services where all they are doing is public
star-tagging rather than guiding revision towards an enforced
all-or-none quality threshold and seal of approval? And what is the
point of diluting quality control and risking the loss of the
controllers, when the palpable gains (public archiving of everything,
whether or not accepted by a journal) can already be had without
tampering with peer review before one has a tested alternative that is
at least as good at ensuring quality?

be> I am suggesting a quite different system to the current one - one where
be> the whole process of paper-development is more open to the readership.

Nothing stops authors from publicly archiving every one of their
successive drafts on the Web within the present system! They could even
couple them with the referee reports (anonymous where referees are
anonymous, or do not allow their names to be used) and the refereeing
journal's name. No stars, but otherwise identical to the system
proposed, and available already without compromising peer review in any
way.

But none of this public orgy of unrefereed successive drafts, even
coupled with their referee reports, amounts to a refereed literature.

The refereed literature is the one tagged as having met a journal's
acceptance threshold in the hierarchy. That, we know, is vouchsafed by
classical peer review, which need be neither modified nor abandoned in
order to have just about every one of the rest of the benefits of which
you speak in your proposal.

be> (It is a sort of evolutionary system of knowledge development as
be> opposed to a foundationalist one). There would be NO process of
be> author-reviewer/editor discussion and revision, no adjudicating
be> revisions. You would submit your paper to your chosen board and then
be> the result would be available to the public. Any process of
be> improvement would be public. There would be no (general) distinction
be> between submitting a new paper and a revised version of an old one. The
be> pressure would be on authors to submit good papers.

Again, if the author publicly archives all successive drafts, with their
referee reports and the name of the journal, virtually all your
desiderata are met and classical peer review remains intact, indeed untouched
by any of it.

be> It would require a different attitude and way of working, much of which
be> would merely be a public admitting of what already happens, e.g.:

No; if the enforcement and answerability and revisions of peer review
were dropped (yielding little more than what you could have already
anyway without dropping them), this would not be a public admission of
what happens, but the discarding of the substance of the quality
control mechanism. No one can guess what sort of literature would
result from discarding it, but this would be anything but a "record"
of what happens already!

be> * papers and thought do progress incrementally a lot of the time, the
be> papers would reflect this, rather than pretend that each paper
be> represents new thought

Public archiving of drafts (with or without referee reports, naming the
journal) would do it at the same time as leaving well enough alone till
something at least as good or better has been successfully tested.

be> * the same papers do have different attractions, strengths and
be> weaknesses for different audiences, so they could be judged differently
be> by different boards (`journals' if you prefer), rather than doing
be> different versions for each.

Apart from the profligacy already mentioned -- multiple submission is
rightly outlawed almost everywhere except in Law Reviews -- which are
refereed not by peers but by students, who have endless time on their
hands -- how many papers do you really think merit this much attention?
And how many man-hours of attention do you think there really are to go
around? Don't think in terms of the voluntarily surfing user's side, but the
poor overloaded reviewers' side that is supposedly going to be doing
the quality control on all this for the sake of us all.

be> * you would have to accept that the public perception of your work
be> would be less polished and fait accompli than now because people
be> would be able to see (and contribute to!) the process of the thought
be> development rather than only have access to the finished product.

All achievable with public archiving without sacrificing classical peer
review.

be> Thus this would indeed be a huge free-for-all, lots of papers, lots of
be> competing boards, free access to the information, but it would quickly
be> evolve. If a board did not provide the quality control or the
be> classificatory system readers want they would use another. If a board
be> did not get the right volume of work they would adjust their criteria
be> and system or give up. Authors would tend to send to successful
be> boards.

Or quality control (and quality) could vanish in the free-for-all, till
someone re-discovered classical peer review...

Perhaps you will agree that this is all a trifle speculative, and ought
to be given some empirical testing before being launched into in
earnest.

be> The main advantages:
be> * the whole process of knowledge development would be open, there would
be> be no closed author-journal discussion

Already available in principle with public archiving.

be> * it would provide for far greater access for readers in ways which
be> would be customisable to the readers' needs

The online medium is what provides that. If the public archiving of
drafts and reports were done as a supplement to classical peer review,
there would be no loss, and only gains.

be> * readers would still be able to access the quality they want, but be
be> able to set their own criteria without having to trawl through the
be> complete archive itself

This is not at all clear: If the free-for-all did destroy quality
control, as I would predict, then it would certainly destroy quality
too, along with the means of finding what's left of it.

be> * the system would be far more adaptive and responsive than the present one

Comparing it to the present system in paper is comparing it to a
straw man -- or a lame duck. Paper's demise is a foregone conclusion.
So is public archiving of preprints. So what "present system" is this
an alternative to?

be> * it can be set-up to reference papers in current public paper archives

So can countless other systems, without the downside.

be> * it extends the ideas of such services as Web site reviews/awards
be> and Encyclopaedia Britannica's commercial site selection service

Irrelevant to the peer-reviewed literature, which is neither commercial
nor generic web-fare. And inasmuch as there are relevant access issues
here, they are already covered by a service such as Los Alamos's XXX.

be> * no special subscriptions have to be taken up by institutions, since
be> costs are subsumed by academics themselves (in terms of their time)

This is already covered by online-only journals, publicly archived for
free, their remaining quality-control expenses covered out of
author-end page charges funded out of just a small portion of library
cancellation savings from jettisoning the pricy paper flotilla.

Stevan Harnad
Received on Wed Feb 10 1999 - 19:17:43 GMT
