Re: Alternative publishing models - was: Scholar's Forum: A New Model...

From: Stevan Harnad <harnad_at_COGLIT.ECS.SOTON.AC.UK>
Date: Sun, 2 May 1999 23:17:16 +0100

On Sun, 2 May 1999, J.W.T.Smith <> has made
some comments and pointed to a proposal, but has not posted the
proposal, so I have retrieved it and appended a critique below.
But first some replies to Smith's comments:

> have your work recognised and validated by your peers - currently
> you still need the journal... The LANL pre-print server still has this
> problem - it can distribute but it can't validate (which is one reason
> I think it is not the way to go). What is needed is a system that
> cleanly separates the validating role from the distribution role. I
> have proposed such a model in the Deconstructed Journal:
> <>

A separation of the quality control function from the distribution
function is what almost EVERY one of the proposals under discussion
here recommends. We will get to the specifics of the "Deconstructed
Journal" Proposal in a moment. I note here only the apparent
inconsistency in the passage above: If we need a system that separates
quality control from distribution, why is it a PROBLEM for LANL that it
does precisely that: It provides the distribution function
independently of the quality control function?

> The LANL model leaves it dependent on journals to validate its content.
> The very journals you condemn for limiting access to knowledge.

Indeed LANL does so, and that is what SEPARATION entails. And I do not
condemn journals, I condemn Subscription/Site-License/Pay-Per-View
(S/L/P) as the mechanism of cost-recovery -- and of course paper as the
means of distribution -- because both restrict access. If LANL provides
the access online for free, journals need only provide the quality
control, and that can be paid for from page-charges, with no access
barriers. Journals continue to exist, but their only function reduces
to quality control (and its certification).

> The LANL model is centralised... Any centralised model is vulnerable to
> control - who shall say you can deposit your work here? Just like the
> journal. The net is distributed and any publishing model based on it
> should take advantage of this.

The LANL model is based on public SELF-ARCHIVING. No physicists are
complaining that their work is being blocked from LANL. Moreover, this
is a VIRTUAL world. Central archives can be integrated with local,
institution-based ones, via gateways like NCSTRL, and both local and
global self-archiving are strongly encouraged, for safety and
redundancy. So what is this problem about "centralisation"? What is
needed is an Archive that LOOKS like a single, integrated, central
resource on the desk of the reader. How it is actually patched together
to be safe, redundant, robust, interoperable and upgradeable for
posterity is a technical matter that will be handled by the relevant
technical experts:

    Davis, J. R. and Lagoze, C. (1999) "NCSTRL: Design and Deployment
    of a Globally Distributed Digital Library," to appear in Journal of
    the American Society for Information Science (JASIS)

So central vs. distributed archives is a pseudo-issue: Nothing of
substance hangs on it.

As to deciding who may self-archive: Anyone can. But getting the
journal quality-control tag is another story: That has to be earned by
successfully passing through peer review.

> If everything is safe in a "safely distributed, redundant and
> mirrored storage architecture" why do we need a "LANL-style Archive"?

This is a false opposition. It doesn't matter whether authors
self-archive locally, and this is then drawn together by a Gateway like
NCSTRL, or they self-archive in a global archive. Preferably they
should do both (or, better, the intelligent software that draws
together the Virtual Archive should do it for them, like making
automatic backups and taking them home for safe-keeping).

> All we need are the 'overlays' (I don't know if this term was
> originally used by you, or Ginsparg,
> or someone else) but not an 'overlay' on a centralised archive but an
> overlay on the whole of the net. Which is what my proposed Subject Focal
> Points are intended to be.

The word came from Ginsparg, as far as I know. The overlay requires its
own proprietary sector of the Virtual Archive, for here it is the
Journal that officially certifies papers as refereed. An author can
self-archive a paper and self-tag it as "refereed" ("by Journal X"),
and that's good enough for most purposes, but in an official overlay,
the reader can be sure. (In your own paper, at the URL above, you warn
the reader about possible minor discrepancies between that version,
your own, self-archived one, and the one that appeared in the Journal:
The overlay would contain a version certified as the one that appeared
in the journal.)

We'll get to your "Subject Focal Points" Proposal in a moment.

> I propose a publishing model in the Deconstructed Journal which replaces
> all the roles of the current model. The implicit model within LANL
> the LANL service only replaces the distribution half of the main roles.

We'll get to your Deconstructed Journal in a moment. For now note that
the separation of functions is intentional, and LANL is only meant to
provide the distribution half, deliberately.

> When I think about the operation of the LANL pre-print server over time it
> seems it must become like the Deconstructed Journal since the 'journals'
> that currently do the validating will not have a distribution role at all
> so what will they be selling? All they will have is their validation role
> but how can they charge the reader for it since it is the article that is
> validated and once it is validated it is validated for all readers (since
> it is freely available from the server). They will therefore have to sell
> their services to the author or author's institution because it is these
> that benefit most directly from their validation. At this point they will
> cease to be 'journals' and become 'independent evaluators' - a concept
> central to the DJ model

Your inferences about the logical path to page charges from the
separation of function are correct, but they have already been made
explicitly earlier in this discussion. See, for example, in
the AmSci Forum, the thread on "The Logic of Page Charges to Free the
Journal Literature".

As to this logic leading inexorably to your DJ Model: It would have
been useful if you had said in your posting just exactly what your DJ
Model was. But as you provided the URL, I read it, and my comments
follow below. Unfortunately, the Model seems to be incoherent, making
the same incorrect assumptions that have been discussed in the AmSci
Forum in the thread on "Independent scientific publication - Why have
journals at all?".

In a nutshell, you assume that peer-review can be implemented
willy-nilly, you do not appear to realize that competent referees are a
scarce resource, and (forgive me if this inference is incorrect) you
sound as if you have no prior experience at all with implementing peer
review. But the greatest liability of your proposal is that, like the
components of the Scholars Forum Proposal that drew the criticism that
initiated this discussion, your recommendations are hypothetical and
untested variants on peer review to which there is no reason whatsoever
to yoke the fate of a free refereed literature: Let us free the current
refereed literature and then worry about reforming refereeing.

In <> you write:

> my new DJ model is a true 'paradigm shift' as described by Thomas Kuhn
> the... insight... [is] that it is possible to have a model
> for STM publishing that can satisfy the needs of the STM community
> without a central publisher/co-ordinator. This can be achieved by
> involving a collection of co-operating actors or agencies.
> The Subject Focal Point[s] [consist of]
> links to relevant items of interest to [each SFP's] readers
> * The major role of the SFP is to act as a 'filter' between the
> contents of the net and the user or subscriber
> * A group (who will probably form the editorial board of the SFP) would
> search the net for relevant items

So the first part of this new "paradigm shift" is that
"groups/editorial-boards" will be compiling links. Let me, as reader,
reserve the right to pass on this one, and search the literature for
myself. But what literature? Before the revolution, I'd like the
refereed literature I already know and trust: Who is doing that
refereeing in the DJ, and how?

> * Quality control (Content) - This is an area of radical departure from
> the current model. In the DJ model the refereeing role organised by the
> publisher in the current model would be played by independent
> organisations who would evaluate or give their 'stamp of approval' to
> items (articles, sites, services, etc). These could be submitted by the
> author/producer [11], the editorial board of an SFP, or an independent
> agent (literary agent, university, company, etc). These 'evaluator
> organisations' could use paid or unpaid referees (as with the current
> model), or some other mechanism, for deciding whether to give their
> stamp of approval. The evaluator organisations would be paid for all or
> part of their effort by the author, the employer of the author, or
> others.

First, refereeing is not, and never has been, just a matter of giving a
"stamp of approval." Referees evaluate drafts and make recommendations,
the editor "filters" those recommendations and indicates to the author
what needs to be done -- if the material is potentially acceptable --
to make them acceptable, including possible further rounds of
refereeing of the revised drafts: this complex, adjudicated interaction
is in no way equivalent to giving or withholding a "stamp of approval."

Forget about paying referees to do this for you: There isn't enough
money in the world to make it worth their while. They will do it, as
they do it now, for free, if the material is pertinent to their
interests and expertise, and they are asked to do so by the reputable
editor of a reputable journal, knowing that the author will be
answerable to the editor and the journal.

Now if your SFPs are reputable journals, then we have here simply a
re-description of classical peer review, with some new labels (SFP, DJ).
If SFPs are something else, then we have a pig in a poke -- and no reason
to entrust the all-important task of freeing the literature to anything
that depends on them at this time.

> Harnad & Hemus (1997) argue strongly for a model where the
> author or institution pay for publication and their arguments are
> relevant here but they do not clearly separate evaluation from 'making
> available'.

Our proposal was that peer review be paid out of page charges (provided
for the author by his institution out of only a small portion of the
annual windfall S/L/P savings that such up-front payments make possible,
thereby freeing the online literature for all). The underlying logic of
this is 100% dependent on precisely the separation of quality control
from online archiving.

> there is nothing in the DJ model to prevent an author having his or her
> work evaluated by more than one evaluator...

You are very generous with referees' services. Multiple submission
(whether parallel or serial) is already the bane of the current
overloaded referee system. You propose to overload it still further.

Articles rejected by one journal are certainly submitted to another
(and just about everything is eventually published somewhere), but
surely once a paper is accepted ONCE by a journal, no further
refereeing is called for. (Open Peer Commentary is another story, and
one of the Online Medium's enormous strengths, but that is neither here
nor there insofar as the goal of freeing the literature is concerned;
and Peer Commentary is not to be confused with Peer Review.)

    Harnad, S. (1997) Learned Inquiry and the Net: The Role of Peer
    Review, Peer Commentary and Copyright.
    Learned Publishing 11(4) 283-292.

> * Quality control (Form) - This could be carried out by the author,
> intelligent software... local experts... external commercial
> organisations

A lot of writing improvements (including, soon, with the help of
software, markup) can and should be offloaded onto the author, but
copy-editing is still part of the quality control implemented by the
journal, to ITS standards, not by the author, to his.

> The scatter problem is when information pertinent to a specific area of
> research is spread across a number of journals.

Free Online Archiving of the whole corpus is the solution, along with
powerful new search tools; no need for SFPs or DJs...

> The DJ model, because it allows for many independent evaluators, and
> the possibility of grading rather than the simple pass/fail
> (publish/don't publish) approach of the current model, could allow
> unknown authors with a radical new idea to get published ... more
> easily.

Again, see the AmSci thread on "Independent scientific publication -
Why have journals at all?" Peer review is no more a 5-point grading
system than it is a pass/fail system. It is an arbitrated, expert-based
feedback system for upgrading drafts. Naive notions like this
invariably come from the armchair, based on, at most, some individual
experience as author and referee; no one who has any experience with
what it really takes to implement peer review for a nontrivial
population of manuscripts could take proposals like this seriously.


> There are three main problem areas preventing easy adoption of this
> model. These are: community acceptance, funding, and technical.

I am afraid there are even more reasons than that, chief among them
being that the "model" is not based on any empirical data.

Stevan Harnad
Professor of Cognitive Science
Department of Electronics and Computer Science
University of Southampton
Highfield, Southampton
phone: +44 1703 592-582
fax: +44 1703 592-865
Received on Wed Feb 10 1999 - 19:17:43 GMT

This archive was generated by hypermail 2.3.0 : Fri Dec 10 2010 - 19:45:30 GMT