Re: The True Cost of the Essentials (Implementing Peer Review)

From: Stevan Harnad <>
Date: Wed, 2 Feb 2000 13:59:55 +0000

> From: [Anonymous]
> Subject: Re: The True Cost of the Essentials (Implementing Peer Review)
> I still have a few disagreements. First, I agree that $YYY costs
> less than $XXX, and that the difference is the publishers' operating costs and
> profits. I do not defend the publishers' interests here, as they are in
> a business that is undergoing a disruptive technology change, and will
> eventually become less important or even extinct. But before we bury
> them, I would like to make sure that a few of the useful things they do
> are preserved.

I agree completely. What is needed is a reliable, stable and fair
transition strategy. Subversive self-archiving is all well and good, but
we also have to think ahead. Even in Physics, where the evolution
towards the optimal and inevitable is the most advanced, there is not
yet a transition strategy -- nor, for that matter, have S/L/P
cancellations begun (but I think they're around the corner).

Without a rational transition strategy, getting "there" (QC/C service
only, with costs recovered from S/L/P savings) from "here" (S/L/P
toll-based product) looks like an impossible Escher drawing or a Moebius
strip. (See the thread "The Urgent Need to Plan a Stable Transition" in
the American Scientist Forum (1998) Archive.)

> One of these is peer review. I gather that we agree on this. Although
> it is fine for a journal that publishes mainly reviews with occasional
> line drawings to do its business on-line, for journals that publish
> original images, this is a real problem with the current generation of
> technology. I am sure this will be taken care of in 5-10 years, but 300
> Mb of images (15 pages at full photographic resolution) still takes
> about 50 minutes to download at 100 Kb/s (which is the average rate that
> our internet connection works during the workday because of
> all the traffic). It then takes about 2-3 minutes to open each image
> and display it on a computer screen, and several minutes more to
> examine details of the image by scrolling around (because most computer
> screens are limited to about 1,000 pixels resolution, and photographic
> quality images are 2,000 x 3,000). I am not a pornographer, so I do not
> know the quality of images that they transmit, but I doubt that many of
> their clients would have the patience to wait for images at full
> photographic resolution that neuroanatomy requires. I can assure you
> that if it took 3-4 hours to receive and review the images for one
> paper, our referees and editors would not stand for it either.
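The figures in the quoted passage are internally consistent if the "Mb"
and "Kb" are both read as bits. A quick back-of-envelope sketch (the
per-image numbers are taken from the quote; nothing else is assumed):

```python
# Rough check of the transfer-time figures quoted above, reading the
# units consistently as bits: 300 Mb of images moved at 100 Kb/s.
total_megabits = 300.0        # 15 pages at full photographic resolution
link_kilobits_per_s = 100.0   # average daytime throughput in the quote

seconds = (total_megabits * 1000.0) / link_kilobits_per_s
minutes = seconds / 60.0
print(f"download time: {minutes:.0f} minutes")  # -> 50 minutes

# The display constraint is independent of bandwidth: a 2,000 x 3,000
# photographic image cannot fit on a ~1,000-pixel screen, so each image
# must be panned in several screenfuls.
screenfuls = (2000 / 1000) * (3000 / 1000)
print(f"screenfuls per image: {screenfuls:.0f}")  # -> 6
```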

I am not an expert on image compression and transmission, and I do not
deny that there may be some technical difficulties with very large
images for the time being. But my reply is as before: For those cases,
there is still the hybrid (paper/online) solution, both for refereeing
and for online publication. The large image literature, however, is a
small minority of the literature, not representative of it, and
certainly no reason for holding back self-archiving: For if online
images are too big to make online refereeing and online publication
practical yet, then online self-archiving of those images at this time
is certainly no threat to anything! (And recall that our exchange began
with the question of the justification for copyright and embargo
policies that forbid online self-archiving.)

> A second issue is where the costs of review are generated. While
> referees are not paid, it requires substantial expenses in faxing,
> postage (for paper review, which I argue above is still necessary, at
> least in some quarters), and office worker time to make this happen.
> This will not go away any time soon.

Even without disputing how much really still needs to be mail/faxed in
the big-image literature, my reply is: The $YYY refers to ALL essential
QC/C costs. If it's still essential to fax some images for refereeing,
so be it; let that cost be part of the $YYY. There is still nothing
here to justify copyright or embargo policies that forbid online
self-archiving.

> A third issue is who maintains the archive. Ideally, to survive any
> foreseeable disaster, the electronic archiving should be permanent, in
> one site and format, and be stored in widely distributed sites.

Correct. And that is precisely what the Open Archives initiative was
launched for: <>, adopting the Dienst
protocol and the Santa Fe convention.

The Santa Fe convention was designed to ensure that all Open Archives
are interoperable. This means that they can all be searched by anyone,
anywhere, all at once, as if they were all just one global, seamless
archive. Each Santa Fe-compliant Open Archive resides at a University
or Research Institution; for reliability, redundancy and permanence they
will be mirrored around the world (the Physics Archive has 15 mirrors
worldwide). And with all their precious research eggs in this one
global virtual archive, we can be sure that permanence and continuous,
cumulative upgrading of their communal record will be under the
relentless watchful eyes of countless daily users among the continuing
generations of scientists and scholars:
<>. Software agents, too, will
patrol the entire corpus daily to watch for any sign of trouble!

The CogPrints Archive software <> is
currently being modified to make it Santa Fe-compliant, generic Open
Archive software for Eprints in all disciplines. Universities will
be able to install it, and it will be interoperable with all other
Santa Fe-compliant Open Archives: <>
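The interoperability idea is worth making concrete. A toy sketch (this
is an illustration of the principle, not the actual Dienst protocol or
Santa Fe metadata set; all archive names and records are hypothetical):
because every archive exposes its records in one shared format, a
single search can span all of them at once.

```python
# Hypothetical archives, each exposing records in one uniform format.
ARCHIVES = {
    "cogprints":     [{"id": "cog:1", "title": "Symbol grounding revisited"}],
    "physics":       [{"id": "phy:7", "title": "Quark models"}],
    "soton-eprints": [{"id": "sot:3", "title": "Grounding and cognition"}],
}

def harvest(archives):
    """Pull every record from every archive into one virtual archive."""
    return [rec for records in archives.values() for rec in records]

def search(term, archives):
    """Search all interoperable archives as if they were one seamless archive."""
    return [r["id"] for r in harvest(archives)
            if term.lower() in r["title"].lower()]

print(search("grounding", ARCHIVES))  # -> ['cog:1', 'sot:3']
```

The point of the uniform format is that `search` never needs to know
which archive a record came from; adding a new compliant archive means
adding an entry, not new code.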

> Publishers can do this, and many are now organizing to do so.

Publishers' proprietary online archives, firewalled with
access-blocking S/L/P toll-gates, are not the solution (they are part
of the problem). Publishers are not archivists. That is not their area
of strength and expertise. The QC/C costs, $YYY, are for the
ESSENTIALS, and not for new add-on functions that are designed to
continue holding this giveaway literature hostage to S/L/P access tolls.

> Governments, in my opinion, would be a better solution, since they are
> probably more stable and long-lived.

I agree. That is why the NIH/Ebiomed/PubMed Central project is so
welcome. But with interoperability, the distributed network of
universities worldwide is also a natural partner in archiving.

> However, self-archiving is
> problematic. First, different authors may have different capabilities
> for storing and offering information (how many can access 500 Mb of
> server storage, permanently, for one publication?). Second, it would be
> difficult if not impossible to come up with a standard format. If
> viewers needed to have a virtually infinite range of software and
> plug-ins to view papers, this presentation would be far from "free".
> And finally, individual authors and their careers come and go. Do you
> seriously believe that the sole site of availability of the information
> (in the post-Gutenberg era, where paper publication finally ceases)
> should disappear every generation? This would be like a civilization
> with permanent Alzheimer's disease.

Nothing of the sort. Both the global, centralized archives and the
distributed, local, university-based archives are to be permanent,
interoperable, interconnected. We are not talking about something
sitting on an individual researcher's server on the Internet, but about
Santa Fe-compliant University and Centralized Eprint Archives for all
the disciplines.

And again, the big-image problem is a special, circumscribed one, and
will be solved. Meanwhile, the formats now include HTML, XML, ASCII,
TeX, PDF and PS.

When an individual author self-archives, all this means is that he
deposits his published papers in his institutional Open Archive, and
perhaps also the Centralized Open Archive for his discipline (although
this could be an automatic procedure, done by software agents). Each
Open Archive is maintained by the university (or the Central Archive).
The author may change institutions (and deposit his papers there too),
or change fields, retire, or die, but his published research stays up,
just as it did in the Gutenberg archives: the distributed libraries of
the world.
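The deposit step just described can be sketched in a few lines (all
names are hypothetical, and the agent-driven mirroring is an assumption
about how such automation might look): the author deposits once, in the
institutional archive, and a software agent propagates the record to
the central disciplinary archive.

```python
# Hypothetical archives: the author's institution's and the discipline's.
institutional_archive = []
central_archive = []

def deposit(paper, local, central):
    """Author deposits locally; an agent mirrors it to the central archive."""
    local.append(paper)
    central.append(paper)  # automatic, agent-driven mirroring (assumed)
    return paper["id"]

deposit({"id": "sot:42", "title": "Freeing the refereed literature"},
        institutional_archive, central_archive)
print(len(institutional_archive), len(central_archive))  # -> 1 1
```

Once deposited, the record persists in both archives regardless of what
later happens to the author's own machines or career.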

> So, who would do the long-term archiving? I did not see this in your
> plan. Do you really think libraries will continue to buy print versions
> of journals that are never looked at, when all the information is
> available on-line?

I certainly hope not; on the contrary, I had proposed that part of the
S/L/P money libraries save by cancelling all those paper journals can be
used to pay the remaining essential QC/C costs. Apart from that, they
can use the S/L/P savings to buy more books (paper or online). The
archiving will be done by the University and Central Open Archives (and
at negligible marginal cost).

> Or do you think that the paper versions will still
> be viable because the offering of information in self-archiving is so
> fragmented, difficult, and ephemeral that no one will be able to trust
> it?

Not at all; I think the paper versions will no longer be needed because
the Open Archives will be infinitely better integrated, accessible,
navigable, and free.

> It has to go one way or the other, and if any archiving scheme is
> robust enough to work for the long term, then I have no illusion that
> libraries will spend millions of dollars archiving paper that no one
> looks at.

And that is the objective; and that is why we need to plan a transition
scenario from reader-institution-end S/L/P to author-institution-end
QC/C.

> This brings us to my original question: who pays? It is fine to
> suggest that libraries will be happy to pay a reduced cost for journals
> on behalf of their users, but it has been my experience that people
> will not pay anything for something that is given away next door, no
> matter how much it cost last week. I can see no way, in a system that
> provides free access to all scientific literature, to maintain any way
> of extracting the costs of the system from any of the users. I do not
> see how your plan addresses this fundamental problem in any way.
> Libraries will NOT pay $YYY gladly. They will pay $000.

The mysterious management of university budget line items is not my
specialty, but the logic seems clear: Universities and their libraries
are suffering from the serials (S/L/P) crisis. There is a way they can
save a lot of money: (1) provide their authors with the Open Archives
to self-archive their journal articles online, (2) this will drive down
demand for the paper version, allowing S/L/P cancellations, but (3) if
the universities want their authors to continue publishing-or-perishing
certified peer-reviewed research at all, they must give them the
resources to pay QC/C costs; (4) the natural source for those QC/C
funds would be a part of the S/L/P savings. But, compared to the price
of conducting the research itself, your own estimate of $500 per
article would be low enough to become a standard "overhead" in research
grants too.
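The arithmetic behind steps (1)-(4) can be sketched with illustrative
numbers (the annual S/L/P budget and article output below are
hypothetical; only the $500-per-article QC/C figure comes from your own
estimate):

```python
# Illustrative budget arithmetic for the transition, steps (1)-(4).
slp_spend_per_year = 2_000_000   # hypothetical annual S/L/P outlay ($)
articles_per_year = 1_000        # hypothetical annual research output
qcc_cost_per_article = 500       # correspondent's own estimate ($)

qcc_total = articles_per_year * qcc_cost_per_article
net_savings = slp_spend_per_year - qcc_total

print(f"QC/C bill:   ${qcc_total:,}")      # -> $500,000
print(f"net savings: ${net_savings:,}")    # -> $1,500,000
print(f"QC/C as share of old S/L/P spend: "
      f"{qcc_total / slp_spend_per_year:.0%}")  # -> 25%
```

On any such figures the QC/C bill is a fraction of the old S/L/P
outlay, which is why the savings are the natural source of the funds.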

In any case, the advantage of freeing the literature online is so great,
I'm sure necessity will be the mother of invention when it comes to
finding the resources to cover the QC/C costs.

> So, for a plan to be viable, I have to know: where will the money come
> from to maintain peer review and long term archiving? I cannot see
> burning down the house until we have got the valuables out.

I agree that the transition needs to be rationally planned in advance,
but it does not look like a very mysterious or unmanageable task...

Stevan Harnad
Professor of Cognitive Science
Department of Electronics and Computer Science
University of Southampton
Highfield, Southampton
phone: +44 23-80 592-582
fax:   +44 23-80 592-865

NOTE: A complete archive of this ongoing discussion of "Freeing the
Refereed Journal Literature Through Online Self-Archiving" is available
at the American Scientist September Forum (98 & 99):
Received on Mon Jan 24 2000 - 19:17:43 GMT
