Scholar's Forum: A New Model For Scholarly Communication

From: Stevan Harnad <harnad_at_COGLIT.ECS.SOTON.AC.UK>
Date: Fri, 23 Apr 1999 20:18:09 +0100

Here are some comments on the CalTech Proposal:

    Scholar's Forum: A New Model For Scholarly Communication
    Anne M. Buck, Richard C. Flagan, and Betsy Coles, California Institute
    of Technology, Pasadena, CA, March 23, 1999

All the objectives are right. Most of the pieces are there. But
unfortunately they are put together into an incoherent pattern. Only a
few pieces need moving, but moving them substantially changes the path we
should take in order to reach the objective we all agree on and share.

First, a quick reminder of the objective:

    It is easy to say what would be the ideal online resource for
    scholars and scientists: all papers in all fields, systematically
    interconnected, effortlessly accessible and rationally navigable
    from any researcher's desk worldwide, for free.

That is the optimal outcome, and what proposals like this one are meant
to do is to help get us there.

I believe this one would fail as it stands, but with a little
rearrangement, it could succeed.

As it stands, this proposal is trying to create an ALTERNATIVE to the
current peer-reviewed journal literature, because it is held hostage by
tolls despite having been freely contributed by the authors, us.

The alternative is based on the correct step of decoupling the
quality-control component (peer review) from the rest of scholarly
journal publication and attempting to provide that in the form of
an alternative service (in place of the toll-based existing journals)
while providing access and archiving for free for all.

This is all very commendable, but it has almost no chance of
succeeding, for the simple reason that it is attempting to compete with
the existing journal corpus for authors, and there is no reason
whatsoever for authors to prefer submitting their papers to a new,
untested quality-control "board" when the existing labels are the ones
that carry the confidence and prestige. The proposal asks authors to
switch, but there is no good reason for authors to switch: The refereed
journals are doing the job of quality control well. It is not their
quality-control function that is amiss. It is the fact that they must
fund themselves by erecting toll-based barriers against those who wish to
access those papers.

The way to change this is not to try to lure authors away from their
trusted journals. That is like starting not one, but countless new
journals, all unknown commodities, with the usual handicap of new
startup journals that must find their own niche -- except that in this
case they are taking on the entire existing corpus (at least 14,000
refereed journals)!

It is unrealistic in the extreme to imagine that authors can be enticed
away from their known and effective brand-names in favour of a generic
"board" of some sort. With the endorsement of a Consortium of
university associations and learned societies (if those can be persuaded
to give it), the chances are a little better, but still tiny. The
authors risk too much in moving en masse to a brand new, untested,
quality-control authority, even if they are assured that as a reward,
they will get a lot more readers for it. And a mere trickle of authors
would quickly make this whole approach fail, with some residual
disrepute for the whole undertaking, thereby putting us even further
away from the optimal outcome we are all seeking.

Yet with just a few parametric changes, it could work.

First, although journals depend for their pages on authors, they depend
for their wages on readers, and on the
Subscription/Site-License/Pay-Per-View (S/L/P) access fees that readers
pay, or that their institutions pay for them. There is little hope in
competing for the authors, if this means asking them not to submit
their work to known, prestigious, high-impact journals, and instead
to submit it to an unknown new entity, be it ever so heartily endorsed.

What we CAN compete for, however, is the journals' READERS, and we can
count on the authors' support in this, as long as we do not ask them to
give up submitting their papers to the traditional journals of their
choice.

Here is the LOGICAL (and pragmatic) role that can now be played by the
very feature that makes this literature -- the refereed learned serial
literature -- so anomalous among literatures: Its authors give it away for
free, to both their publishers (in the form of their submitted
manuscripts) and to their readers (in the form of preprints and
reprints).

Let them continue to give their papers away to publishers to sell, but
let them also archive them online, for free. That is all it will take!
Readers will vote with their eyes. They will of course prefer to access
the literature for free online -- Los Alamos has already proven that.

Once this happens across enough fields and at a sufficient scale, the
library serials budget crunch will be the ally in the next logical step:
With hard-pressed budgets, and readers all accessing online for free,
S/L/P terminations are absolutely inevitable. The journal publishers,
feeling the pressure from this, will have to find an alternative, and the
only alternative will be to scale down to online only, with their
providing the only remaining service that is needed of them: quality
control (peer review).

The result will be precisely the outcome the CalTech Proposal seeks,
namely, a decoupling of peer review from archiving and access, with the
publishers continuing to provide the peer review, with the traditional,
prestigious journals, and their known and reliable editorial boards and
referees -- but without ever needing to compete with them
using new, unknown, generic boards.

See: <>

There is only one issue, however, that the CalTech Proposal did not
consider directly, and that is the cost of quality control. It is true
that referees referee for free; and that many editors also devote their
time for free or only a modest honorarium. But implementing peer review
is nevertheless not entirely cost-free (nor is the minimal copy editing
that still needs to be done by way of quality control for the form of
papers, just as peer review quality controls their content). These
residual costs of quality control (per published "page," say) are
minimal compared to the costs of S/L/P, but they are non-zero: Andrew
Odlyzko has estimated them as being as low as $10 per published page.
Let us be conservative and say they might be, at most, 30% of the
per-page cost currently recovered via S/L/P.

The obvious way to pay that small residual cost is up-front, so that
everyone can then access the paper for free. The natural source for
this up-front page-cost is of course not authors' own pockets, but
the 30% portion of the annual 100% that their institutions save from the
termination of S/L/P.
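
To make the arithmetic concrete (the dollar figure below is invented;
only the 30% ceiling comes from the conservative estimate above), here
is a minimal sketch:

```python
# Sketch of the up-front cost-recovery arithmetic described above.
# The $1M figure is hypothetical; the 30% ceiling is the conservative
# estimate from the text (Odlyzko's estimate, ~$10/page, is far lower).

def upfront_savings(annual_slp_spend, qc_percent=30):
    """Quality-control cost and net saving if an institution cancels
    S/L/P and instead pays up-front page charges capped at qc_percent
    of its former spend (integer arithmetic, so the split is exact)."""
    qc_cost = annual_slp_spend * qc_percent // 100
    return qc_cost, annual_slp_spend - qc_cost

qc, net = upfront_savings(1_000_000)  # a library spending $1M/yr on S/L/P
print(qc, net)  # 300000 700000
```

Even on this deliberately pessimistic 30% figure, the institution keeps
70% of its former serials spend while the literature becomes free to all.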

So now we know both what the optimal solution is, and the natural way to
pay for it. The only thing that remains is to find a way to get there
from here. The CalTech Proposal as it stands will not get us there,
because it tries to go off in an untested direction which depends on
authors making risky decisions that they do not really have the
incentive to make, abandoning their known-impact journals for brand new
generic ones of uncertain provenance and destiny. (Besides, it has
not explained how the "Boards" will be financed: if by S/L/P then
that's self-defeating!)

Make the following parametric changes, however, and it will fly: Don't
put an AAU Consortium's weight behind rival generic editorial boards,
put it behind author online self-archiving (in both local institutional
archives and global disciplinary or multidisciplinary ones, like Los
Alamos -- indeed why LIKE Los Alamos, why not Los Alamos, which is
already well funded and could easily scale up for the full load, with
mirror sites worldwide?). If this step were taken at a sufficient scale,
the optimal outcome would also become the inevitable one, and very soon.

The only other concern is to make sure there is a stable transition
strategy to prevent chaotic points from materializing as publishers
experience the S/L/P cancellation crunch. So the second thing a
Consortium could do, besides endorsing and encouraging author
self-archiving, is to provide transitional support for publishers who
explicitly commit themselves to scaling down and moving from S/L/P-toll
based cost recovery to up-front page charges. If this is not done,
quality control could break down, as known, experienced publishers pull
out and nothing is in place to take over their function.

Well, that's it; it should be familiar to some of you as my "subversive
proposal" of a few years ago, updated to take into account some of the
further evidence and experience that has accumulated since then.

I now proceed to quote/comment mode for some of the specifics:

> In the meantime, pressure to enact regressive copyright legislation has
> added another important element. The ease with which electronic files
> may be copied and forwarded has encouraged publishers and other owners
> of copyrighted material to seek means for denying access to anything
> they own in digital form to all but active subscribers or licensees.

Precisely. And this is why the main function (as Steve Koonin correctly
perceived) of "endorsing and encouraging self-archiving" on the part of
the Consortium will be to make sure that authors are not intimidated
into signing copyright agreements that deprive them of the right to
self-archive online. That's all they need to retain. Publishers can
have full and exclusive rights to SELL it, in either medium, paper or
online. The author need only retain the right to give it away for free
online. THAT is what needs the weight of an AAU and Learned Society
Consortium, NOT an alternative quality-control board!

> 1. Support peer review and authentication
> 2. Support new models of presentation incorporating network technology
> 3. Permit "threaded" online discourse
> 4. Adapt to varying criteria among disciplines
> 5. Assure the security of data
> 6. Reduce production time and expense
> 7. Include automated indexing
> 8. Provide multiple search options

This is all unrevolutionary and uncontroversial. I would add only the
importance of CITATION LINKING of the entire refereed journals corpus
(which can be readily done in a global Archive like Los Alamos, as well
as an interoperable integration of the local Archives). Citations are
the seamless pathway that links the entire literature. Publishers are
planning to provide them as an "add-on" to the online version, in order
to hold it hostage to S/L/P (mainly L/P), with a kind of "click-through
monopoly" uniting their respective proprietary data bases through a
network of toll-booths.
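
To illustrate what citation linking amounts to in an open archive (the
paper IDs and titles below are invented for the example), each paper's
reference list is simply resolved against the archive's own index,
yielding toll-free links rather than click-through toll-booths:

```python
# Toy illustration of citation linking over an open archive.
# Paper IDs, titles, and reference lists are invented for the example.

archive = {
    "physics/9901001": {"title": "Paper A", "refs": ["physics/9812042"]},
    "physics/9812042": {"title": "Paper B", "refs": []},
}

def citation_links(archive):
    """Map each paper to the archive IDs it cites that are also held
    in the archive -- the 'seamless pathway' linking the literature."""
    links = {}
    for paper_id, record in archive.items():
        links[paper_id] = [r for r in record["refs"] if r in archive]
    return links

print(citation_links(archive))
# {'physics/9901001': ['physics/9812042'], 'physics/9812042': []}
```

No firewall stands between a citation and the cited text; the link is
just a lookup in the freely accessible archive.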

The self-archived literature can provide this for free, without the
firewalls, and this may prove to be a critical incentive for readers
to use it.

> Within the various disciplines, professional societies,
> committees, and working groups continue to establish journals
> with editorial boards that are commissioned to review and
> validate work submitted by authors for final publication.
> Societies retain the power to publish and sell their journals
> in print or non-networked electronic formats such as CD-ROM
> or DVD-ROM; for the foreseeable future, many readers are
> likely to prefer receiving subscriptions as they do now.

As long as online (networked) access is free of S/L/P, this is fine!

> Supported by easy-to-use inputting protocols and standards,
> authors perform their own technical writing, copy editing,
> document formatting, etc., or else contract for these
> services from technical writing consultants (see Section V,
> Document Preparation Services). They may submit preliminary
> findings or preprints to the preprint database, or finished
> work directly to an editorial board for formal review.

There is a fallacy here: Copy-editing occurs after a paper has been
refereed, revised and accepted. Whatever stylistic help an author gets
before that is very important and welcome, but not the real thing.
Quality control for FORM begins after quality control for CONTENT,
and it will continue to be the responsibility of the publisher
(quality-controller); that is part of what the journal "label"
attests to; the author cannot be his own quality-controller.

> The centerpiece of this proposal is a document database that
> incorporates and builds on important features derived from Paul
> Ginsparg's highly successful physics preprint server. Begun in 1991 and
> today comprising nearly 100,000 records in physics and related
> disciplines, the archive demonstrates the viability of a large
> electronic archive that supports alerting services, automated hyperlink
> referencing, indexing, searching, and archiving. The proposed model
> also incorporates Ginsparg's recently developed plan to create an
> "intermediate buffer layer" overlaid on the raw preprint database and
> containing papers that have been subjected to a formal peer review.

> Such refereed papers may be aggregated into one or more journals that
> may exist at the buffer level.

The possibility of authenticated journal overlays for a Global Archive
is NOT captured by this rather naive and unrealistic last sentence.
Archives can be sectored, and sectors can have "certification" tags
that are officially controlled by journals. But there is a confusion here
between self-archiving and refereed publication again. An author can
self-archive both unrefereed preprints and refereed reprints, but he
cannot CERTIFY that the latter have been published by Journal X; only
Journal X can do that. THAT is what the journal overlay on the archive
can provide.
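
As a sketch of this division of labour (the tag scheme and keys below
are invented for illustration only): the archive accepts any author
deposit, but only the journal, holding its own key, can attach its
certification tag:

```python
# Sketch of a journal "overlay" on a self-archiving archive.
# The registry/key scheme is invented; the point is only that the
# archive holds the papers while Journal X alone controls the tag
# asserting "published in Journal X".

class Archive:
    def __init__(self):
        self.papers = {}     # paper_id -> deposited text
        self.certified = {}  # paper_id -> certifying journal name

    def deposit(self, paper_id, text):
        """Authors may self-archive preprints and reprints freely."""
        self.papers[paper_id] = text

    def certify(self, paper_id, journal, journal_key, registry):
        """Only the journal (holding its registered key) may certify."""
        if registry.get(journal) != journal_key:
            raise PermissionError("only Journal X can certify for Journal X")
        self.certified[paper_id] = journal

registry = {"Journal X": "secret-key"}
arch = Archive()
arch.deposit("paper-1", "preprint text")          # author's free deposit
arch.certify("paper-1", "Journal X", "secret-key", registry)
print(arch.certified["paper-1"])  # Journal X
```

Self-archiving and certification stay decoupled: the author controls the
deposit, the journal controls only the label attesting to peer review.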

This notion of "aggregating" archived papers into one or more
"journals" is nonsense: We don't need aggregations. Even online
journals will stop aggregating issues and will instead publish single
articles at a time. The rest will be done by intelligent search engines
(especially guided by certification tags authenticated by the Journals)
as well as citation searching. Gather readings together for a course,
if you like, but there's no need for the notion of recombining them
into different "journals." That's obsolete and useless.

> This heterodoxical approach opens the
> possibility for authors to establish their reputations simultaneously
> in a variety of related fields.

Nonsense, again, I'm afraid. The way to establish reputations in a
variety of fields in the online medium is not by doing "virtual
multiple publication" with spurious collation "journals," but via
links, index words, and interdisciplinary contents and mailing lists.
This sort of thinking is still papyrocentric.

The only residual function of journals is providing quality control.
Referees are a scarce, over-used resource. Multiple submission is
already an abusive drain on the system (rightly outlawed by most
journals -- except Law journals, where student review rather than peer
review prevails, and student labour comes cheap). Once a paper has been
refereed and accepted, it need not appear in further journals. It
is already there on the Net! It can be linked to; it can be reviewed by
review journals; but there is no point whatsoever in having it re-appear
in still further "journals."


> Further value is added by shortening
> the reader's path to the certified version of a paper and by using
> links to point the reader back to the database of preprints.

One (suitably backed up, mirrored, distributed and protected) certified
version is enough. The rest is just about tags and links.

> Editorial boards obtain permission from the Consortium to create
> and support a journal on Consortium servers. Following the
> tradition of confidentiality, a board determines whether a paper
> merits inclusion; it recommends revisions to authors; it considers
> authors' responses and rebuttals to referees' critiques; and
> ultimately accepts or rejects the work. An editorial board may
> also establish standards for document preparation. Revised
> versions that are placed in the preprint server receive a "version
> stamp". Eventually a "watermark", indicating final acceptance, is
> applied to the certified version that will be retained in all
> permanent archives maintained by the Consortium.

What has been described here is precisely what will be left of the
established refereed journals once they become online-only. It is not a
"new alternative" in any respect except that it pertains to journals that
are NOT established. Hardly an advantage in itself...

> Consortium editorial boards are not granted exclusivity, i.e., any
> paper may be accepted for inclusion in multiple "journals".

Nonsense again, alas, and extremely naive about what a scarce resource
peers' finite refereeing time is. One (successful) peer-review per
article is enough; the rest is just linking.

> In addition, the editorial boards may not exclude a paper based on
> "prior publication" in the preprint server or elsewhere.

This, in contrast, is an important and substantive point, for the
Consortium must encourage authors to self-archive preprints in defiance
of arbitrary and counterproductive strictures like this. (They are
probably also unenforceable strictures: How many changes do I have to
make in a self-archived preprint before it is no longer the same draft I
submit to a journal that endeavours to exclude papers that have already
been archived as preprints? And how are journals to enforce this? By
constantly trawling the Net for lookalikes for every paper submitted?)


This regressive policy must be attacked head-on.

> Authors may require considerable assistance in preparing
> manuscripts that meet editorial boards' submission standards. In
> this model, the Consortium supports a directory of independent
> technical writers and editors with expertise in a variety of
> fields. These consultants may apply for inclusion or be
> recommended by an editorial board. The Consortium may also devise
> a procedure for certifying those who offer to provide document
> preparation services on a contract basis to authors.

Fine, but don't confuse presubmission stylistic help with
post-acceptance editing and copy editing. The former can come from
colleagues and institutional writing assistants under the author's
solicitation and control, but the latter comes from the
quality-controller: the journal itself.

> Authors or universities retain copyright according to institution
> policies. A mechanism at the input level requires authors to
> grant a limited, non-exclusive license to the Consortium. This
> agreement grants the right to provide unlimited access to all work
> in either preprint or archival servers for non-commercial purposes
> for the term of the copyright. Authors may grant limited-use
> licenses for their work to other not-for-profits or commercial
> entities, for which they may receive compensation, as long as such
> agreements do not infringe upon any rights previously assigned to
> the Consortium.

This is critical: Authors must be protected, and feel protected, from
any need to give up self-archiving rights. THAT'S ALL!


> The model supports threaded discourse based on the work of
> researchers from Rand and Caltech to create a HyperForum.
> Colleagues may participate in dialogue on findings, however,
> anonymous comments will not be accepted.

Open Peer Commentary is my specialty, and the above component is
well-intentioned but again naive. Nothing critical hinges on it,
however, so I will pass over it.

> The preprint server with its threaded discourse permits editorial
> boards not only to follow comments from the field, but also to
> identify important work and invite submission for review leading
> to inclusion in a journal.

One thing to consider is sorting commentary into (1) comments on
unrefereed preprints and comments on refereed reprints and (2) refereed
vs unrefereed comments.

> Of particular value is the opportunity
> for an editorial board to incorporate into their journal work
> usually associated with another field but of special interest to
> theirs.

Nonsense, if thought of as further collation-journals. All that is
needed is citations and links!

> Concomitantly, this feature overcomes the need to require
> authors to prepare a new version of existing work.

Updates can be archived and linked too, both refereed and unrefereed.

> Subjects and names as well as other metadata and full text will be
> searchable using the best available technology, including keyword
> and phrase searching, Boolean operators, proximity, truncation,
> and relevance ranking. It will also be possible to browse the
> archive by subject term, author name, or chronologically.

And one of the best ways of all: via citation links.

> The success of this model depends critically on winning the support of
> "champions" from the research community and attracting participants in
> initial experiments who are likely to come from emerging areas of
> research that have not yet had their journals published either
> commercially or by professional societies. Partnering benefits such
> groups by allowing them to leverage Consortium resources to announce
> their findings economically and to a broad audience.

The only thing that needs championing is self-archiving. Once that is
practised, everything else will follow suit. To champion forfeiting the
established journals and turning to an untested new generic journal is,
in my opinion, Quixotic; nor is it motivated, if the new journals are
still supported by S/L/P.

> Before this is accomplished, research universities must assemble a
> Consortium to support the development and implementation of this model.

There is no model yet. Why should universities back the abandonment of
the established journals for generic newcomers? And how are the newcomers
to be funded? Through S/L/P again? But that just defeats the purpose.

> The Consortium must assign lead participants from university IT
> departments, libraries, and faculty; identify and define elements of
> cost and develop a budget; establish a production schedule; develop
> underlying systems, standards, and protocols to enable champions,
> editors to create new journals; and attract funding from within the
> Consortium and from external sources.

This sounds like getting busy planning new online journals. But we don't
need new journals, online or otherwise. We need to free the existing
journal literature. That requires a realistic plan, and a careful
transitional strategy. So far, this "Model" can be misinterpreted as
just a lot of hoopla about establishing new online-only journals. But
that's not the point! Most of the established journals are or will soon
be available online too. What is needed is a way to free them from all
access barriers.

> A growing number of researchers and information professionals recognize
> that scholarly communication is at a crossroads; many are seeking
> innovative solutions on their own to the wide variety of technical
> challenges that networked alternatives present. While much visionary
> work has emerged, the absence of any significantly new prototype for
> exchanging and preserving research results beyond the preprint archive
> suggests
> the advantages that may accrue from a more broadly-based, collaborative
> approach.

But local and global (xxx.lanl) self-archiving IS the new prototype;
you need only put the pieces together slightly differently to see that.

> A Consortium of universities, committed to developing and maintaining
> an integrated platform supporting all aspects of the scholarly
> communications process, also provides a basis for conducting meaningful
> experiments. Universities have the necessary critical mass of
> participants from varied disciplines. University faculty are already
> well represented on present editorial boards and include many editors;
> strong representation of university faculty on the new editorial boards
> established under the auspices of the Scholar's Forum continues this
> tradition. Universities have close ties to professional societies, have
> expertise in information technology, and have a large pool of creative
> student programmers who can contribute to the infrastructure
> developments that will be needed. Since universities are responsible
> for most of the work that appears in the scholarly literature,
> well-defined, committed administrative support can take advantage of
> major economies of scale to curtail costs as access to the scholarly
> literature is enhanced.

A Consortium will certainly provide the clout, but it won't do any good
until the game-plan is made into a coherent one.

Stevan Harnad
Professor of Cognitive Science
Department of Electronics and Computer Science
University of Southampton
Highfield, Southampton
phone: +44 1703 592-582
fax: +44 1703 592-865
Received on Wed Feb 10 1999 - 19:17:43 GMT
