Re: A Note of Caution About "Reforming the System"

From: Stevan Harnad <harnad_at_ecs.soton.ac.uk>
Date: Sat, 18 Jun 2005 13:28:19 +0100

Prior AmSci Topic Thread:
    Peer Review Reform Hypothesis-Testing (started 1999)
    http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#480

    A Note of Caution About "Reforming the System" (2001)
    http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#1170

    Self-Selected Vetting vs. Peer Review: Supplement or Substitute? (2002)
    http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#2341


This (anonymized) exchange is forwarded from a computer science
discussion list. By way of context, computer scientists, the inventors
of the Internet itself, were the first to start self-archiving (via
UUCP and anonymous ftp), in the 1980s. But they got into the habit of
doing this in a non-optimal way: first, on their local ftp sites, then,
with the invention of the web, on their local websites. Computer science
also produced Citeseer (in 1997-98), which trawled the Web and harvested
(and citation-linked) full-text papers in computer science from local
websites:

    http://citeseer.ist.psu.edu/citeseer.html

But in 1999 the OAI protocol for metadata harvesting was born, making all
OAI-compliant archives interoperable and harvestable without having to
first trawl the entire web to find them among "naked" non-OAI-compliant
local websites.

    http://www.openarchives.org/
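
For the technically curious, harvesting over the OAI protocol (OAI-PMH)
amounts to little more than an HTTP GET request carrying a "verb"
parameter, answered in XML. The sketch below, in present-day Python and
pointed at a purely hypothetical repository address, shows roughly what a
harvester does when it asks a data-provider for its Dublin Core records;
it is an illustration only, and a real harvest would also follow the
protocol's resumptionToken to page through large result sets.

    # Minimal OAI-PMH harvesting sketch. The base URL is hypothetical;
    # substitute any OAI-compliant archive's data-provider address.
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE_URL = "http://eprints.example.ac.uk/cgi/oai2"  # hypothetical

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    def harvest(base_url):
        # One ListRecords request, asking for unqualified Dublin Core metadata.
        url = base_url + "?verb=ListRecords&metadataPrefix=oai_dc"
        with urllib.request.urlopen(url) as response:
            tree = ET.parse(response)
        for record in tree.iter(OAI + "record"):
            header = record.find(OAI + "header")
            metadata = record.find(OAI + "metadata")
            if metadata is None:  # deleted records carry no metadata
                continue
            identifier = header.findtext(OAI + "identifier")
            title = metadata.find(".//" + DC + "title")
            print(identifier, "-", title.text if title is not None else "(no title)")

    if __name__ == "__main__":
        harvest(BASE_URL)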

This exchange concerns the advantages of self-archiving in OAI-compliant
archives rather than naked local websites, but the discussion also crosses
lines with prior discussions about the role of journals and of peer review,
as well as the merits and methods of the UK Research Assessment Exercise (RAE).

    http://www.ariadne.ac.uk/issue35/harnad/

The prior 2nd-order quotes tagged "sh" are from me, as is the reply text.
The first-order quotes are from a computer scientist:

>sh> Self-archiving on a naked web-page is incalculably better than not
>sh> self-archiving at all; but the few further keystrokes it takes to
>sh> self-archive in one of [University name deleted] OAI-compliant
>sh> archives is substantially better still!

On Sat, 18 Jun 2005, [identity deleted] wrote:
>
> I have a colleague who's trying to bully us into doing this. I don't
> like being bullied. I also think that a personal web page adds value
> because I can present publications in order, in context, with links
> to other people's work, conferences and so on

Colleague-bullying should be distinguished from institutional "pressure,"
as in "publish or perish." We may not like that either, but it's for our own
good, and the good of research. Moreover, as the JISC studies I have repeatedly
cited have reported, of the international sample of 1000+ authors across all
disciplines, 81% said they would self-archive *willingly* if required ("bullied")
to do so by their employer or funder (14% would do so reluctantly and
only 5% would not comply).

        Swan, Alma and Brown, Sheridan (2005) Open access self-archiving:
        An author study. Technical Report, Joint Information
        Systems Committee (JISC), UK FE and HE funding councils.
        http://cogprints.org/4385/

As to self-archiving in the institution's OAI-compliant archive:
Given that you already self-archive on your own website, this further
step would be even more trivial in your own case than for someone who
does not self-archive at all; your website contents could be batched
over on your behalf with the mediation of a human proxy. See St. Andrews'
"Let us Archive it for you!" service for its researchers.

    http://eprints.st-andrews.ac.uk/proxy_archive.html

The natural place to order, link, and contextualise eprints is in the OAI
harvester, rather than the OAI data-provider itself. Users rarely have
the need or interest to search within individual websites; they search
across them. They are rarely interested specifically in University X's
or Researcher Y's output alone (though they might be, so that "view"
should be available too). They are far more likely to be searching across
the field or research specialty as a whole, without even knowing about
Researcher Y's articles, just searching on keywords. Now Google and
Google Scholar are miracles, to be sure, but they still have far too
much noise to be reliable generators of an exhaustive boolean full-text
search of a research specialty alone, yielding all and only the full-text
research on those terms, the way a dedicated indexing or abstracting
service, which *contains* only research (and not commerce, pornography,
infotainment, chatter and uninformed opinion) could do.

So there is a lot to be said for OAI-compliance and for selective,
dedicated OAI harvesters/search-services over mere Google- or even
Google-Scholar-searchability at this time.
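
To make the "all and only" point concrete: once records have been
harvested from OAI-compliant archives into a corpus that contains nothing
but research, an exhaustive boolean filter over that corpus is trivial to
implement and to trust. The toy Python sketch below, over made-up records,
illustrates only the principle, not the internals of any actual service.

    # Toy boolean (AND/OR/NOT) filter over already-harvested records.
    # The records themselves are made up for illustration.
    def matches(record, all_terms=(), any_terms=(), none_terms=()):
        """True if the record's text satisfies a simple boolean query."""
        text = " ".join(record.get(field, "")
                        for field in ("title", "abstract", "fulltext")).lower()
        return (all(t.lower() in text for t in all_terms)
                and (not any_terms or any(t.lower() in text for t in any_terms))
                and not any(t.lower() in text for t in none_terms))

    corpus = [
        {"title": "Citation impact of open-access articles", "abstract": "..."},
        {"title": "Peer review and quality control", "abstract": "..."},
    ]

    hits = [r for r in corpus if matches(r, all_terms=("open-access", "citation"))]
    print(len(hits), "matching record(s)")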

Of course our field (computer science) does happen to have a dedicated
harvester of full-texts from naked websites such as yours: Citeseer.

    http://citeseer.ist.psu.edu/

That makes computer science a prominent exception. But computer science
was *already* an exception, in that it has been self-archiving more, and
longer, than any other discipline (including physics), and doing so roughly
along the home-grown lines you yourself are following (and presumably
recommending). Other disciplines, in contrast, are far behind in
self-archiving *at all*. And some of them, instead of being addicted
to non-optimal *naked local institutional website* self-archiving (as
computer science, and to a certain degree economics, are), are instead
addicted (e.g., physics, mathematics) to non-optimal *central* archiving
(which, though OAI-compliant, is likewise growing too slowly and failing
to generalise across disciplines and institutions, as distributed
institutional archiving is far more naturally doing, and could immediately
do for 100% of the target corpus, across all disciplines, if institutionally
mandated!).

So it is rather early (and suboptimal) days to be declaring ourselves already
too set in our ways to do things right!

And there is yet another anomaly about computer science that makes
it a nonrepresentative case, not generalisable to other disciplines:
Its research output is more conference-based than journal-based. And the
conferences are not all peer-reviewed. And the peer review is not all of a
reliable standard. (How can it be, if it does not allow for an open-ended
series of revisions and re-refereeing, as a journal does, because the
conference has a deadline? And it does not establish a coherent and hence
reliable track-record, because the conference organizers change every
year, unlike journal editors, editorial boards and referees, whose quality
track record is known and answerable and ascertainable across the years?)

But the relative merits of fixed-deadline/roving-editor peer-review
(i.e., conferences) versus open-deadline/fixed-editor peer-review
(journals) have been debated in this forum before and are not
really intrinsic to the question of whether or not to self-archive
(computer science would say yes) nor whether or not to self-archive in
an OAI-compliant institutional/departmental archive rather than just a
naked website. "Conferences versus journals" is an irrelevant side-issue,
unique to computer science and orthogonal to either the OAI-compliance
question or the RAE/mandating question.

>sh> It's not about what you choose to call your peer-review entities. This
>sh> is not about jettisoning full-fledged, traditional peer-review. It is
>sh> about maximising the impact of its outcome (by self-archiving it).
>
> I disagree. In the most advanced communities, journals are dropping
> away, because they are so far behind the front line.

Is that so? And what are these "most advanced communities" and the
evidence for this assertion? As far as I know, the data are that (1)
computer science, *one discipline* (on its degree of "advancement"
relative to others, nolo contendere) happens to be more conference-based
and less journal-based than the others, and that (2) physics, and to a
lesser degree mathematics and economics, have *added* the practice
of self-archiving unrefereed preprints -- before going on to submit,
revise and publish them in journals, exactly as they always did.

In addition to this, there has been some speculation about the demise
of journals and/or peer review, to be replaced by open self-archiving of
everything, vetted only by usage and commentary polls: But is there any
empirical evidence to support these speculations? I have been following
developments in all disciplines for 10 years, and I do not detect even
the hint of evidence. Merely a tiny number of ongoing experiments whose
outcomes (and their generality and scalability) are still indeterminate
-- plus the widespread misinterpretation of (1) and (2) above *as if*
they were evidence for such speculations -- which they most decidedly are not.

    Peer Review Reform Hypothesis-Testing (started 1999)
    http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#480

    A Note of Caution About "Reforming the System" (2001)
    http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#1170

    Self-Selected Vetting vs. Peer Review: Supplement or Substitute? (2002)
    http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#2341

The only "front line" that actually exists is the 15% of authors (across
disciplines) who are self-archiving their articles (be they journal
articles or conference articles) versus the 85% who are not. Meanwhile,
their parallel submission to journals (or conferences) proceeds apace,
exactly as it always did (although the pace is of course accelerated
and optimised through online implementation of submission, peer-review
and publication by the journals and conferences).

       http://citebase.eprints.org/isi_study/
       http://www.crsc.uqam.ca/lab/chawki/ch.htm

> In most of the
> fields to which I contribute, conferences matter most, preprints
> matter some, external publicity (from New Scientist to New York Times)
> matters a bit, and journals matter almost not at all (or rather, only
> for RAE and for junior coauthors' CV).

What matters is that the "invisible hand" of peer review remains in place,
unchanged, *exactly as before*, providing and then certifying quality
assurance, as it always did, unalterably. You are referring merely to
the newly optimised *mode of access* (which is, of course, online,
and preferably free). It is the outcome of the peer review, and its
certified level of quality (which is correlated with rejection rate and
impact factor) that the assessment/evaluation process is measuring and
rewarding, just as it always did (and should do).

    Harnad, Stevan (1998/2000/2004) The invisible
    hand of peer review. Nature [online] (5 Nov. 1998)
    http://helix.nature.com/webmatters/invisible/invisible.html
    http://www.exploit-lib.org/issue5/peer-review/ and in Shatz,
    B. (2004) (ed.) Peer Review: A Critical Inquiry. Rowman &
    Littlefield. Pp. 235-242. http://cogprints.org/1646/

    Harnad, Stevan (1997) Learned Inquiry and the Net: The Role of
    Peer Review, Peer Commentary and Copyright. Learned Publishing
    11(4) 283-292. Short version appeared in 1997 in Antiquity 71:
    1042-1048. Excerpts also appeared in the University of Toronto
    Bulletin: 51(6) P. 12. http://cogprints.org/1694/

    Harnad, Stevan (1996) Implementing Peer Review on the Net:
    Scientific Quality Control in Scholarly Electronic Journals. In:
    Peek, R. & Newby, G. (Eds.) Scholarly Publishing: The Electronic
    Frontier. Cambridge MA: MIT Press. Pp 103-108.
    http://cogprints.org/1692/

    Harnad, Stevan (1985) Rational disagreement in peer
    review. Science, Technology and Human Values, 10 p.55-62.
    http://cogprints.org/2128/

There will be new, online measures of research impact and usage, but
these will be supplements to -- not substitutes for -- peer-reviewed
journal (and conference) publication- and citation-counts:

    Brody, T. and Harnad, S. (2005) Earlier Web Usage Statistics as
    Predictors of Later Citation Impact. JASIST (in press)
    http://eprints.ecs.soton.ac.uk/10713/
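
The kind of supplementary measure meant here is, for example, the
correlation between an article's early download counts and its later
citation counts. The short Python sketch below, with made-up figures,
merely illustrates the arithmetic; see the Brody & Harnad paper above for
the actual data and methodology.

    # Pearson correlation between early downloads and later citations.
    # All figures are invented for illustration.
    import math

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    downloads_first_6_months = [120, 45, 300, 15, 80]   # per-article, hypothetical
    citations_after_2_years  = [14, 3, 40, 1, 9]        # per-article, hypothetical

    print("download/citation correlation:",
          round(pearson(downloads_first_6_months, citations_after_2_years), 2))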

> For example, my recent [journal deleted] paper on [deleted] was a total
> pain. I was asked for 20 pages, we wrote 15, it went through peer
> review, and at the last minute the editor demanded a cut to 11 pages.
> Were it not that one of my three coauthors is still a research student
> we'd have told the [journal] to stuff it. Needless to say the online
> version is the full 15 pages. Who cares about the dead tree version,
> except folks reading my student's CV?

What matters is the quality-control and improvements (if any) and
subsequent quality-certification provided by the peer review (by,
presumably, qualified experts, rather than, say, fellow-students, or
self-appointed web commentators, or no one at all).

And if the journal or conference procrusteanly truncates the length:
remedying *that* sort of thing is what self-archiving the revised and
improved post-postprint is for in the web age! That does not in the least
imply that our work no longer needs systematic and answerable vetting
by the qualified experts (as systematically implemented and certified by
the established journals). Moreover, the null hypothesis has *never even
been tested*, because the invisible hand of peer review remains in place,
exactly as before, regardless of the fact that our usage practices have
evolved with the new medium.

So please don't interpret the fact that I no longer *use* the journal's
official version (whether on-paper or online) as evidence that the
journal's function is now merely a decorative one, for appeasing
assessment committees. The journal (or conference) is still the
peer-review service-provider (implementer, really, since the peers
review for free) that is both maintaining and marking the quality of the
research corpus.

*No one* can say what this corpus would all look like and be worth
*without* that invisible hand of peer review. And (human nature being what
it is) I wouldn't want to wrest this corpus from that helping hand without
substantial prior evidence that it would not compromise its quality
(such as it is) and hence its navigability, usability, and evaluability.

Twenty-five years as a journal editor exposed to the unfiltered sludge
that is first submitted to a journal editor's desk ("preprints") -- 90%
of them destined to be rejected and resubmitted to a lower-quality journal
(where the process iterates, with only the rejection rate changing),
and most of the 10% that is destined eventually to be accepted destined
first to undergo substantive change in response to the refereeing, plus
one or more rounds of re-refereeing -- makes me all the more inclined to
protect the quality of the refereed literature (such as it is) and its
users from well-meaning but wildly uninformed calls to abandon peer review
on the grounds of current online usage practices which do not even test the
proposition that peer review (= journals/conferences) is not performing
today the exact same quality-control functions it has always performed.

    http://www.ecs.soton.ac.uk/~harnad/Temp/Kata/bbs.editorial.html
    http://www.ecs.soton.ac.uk/~harnad/Temp/bbs.valedict.html

Stevan Harnad

AMERICAN SCIENTIST OPEN ACCESS FORUM:
A complete Hypermail archive of the ongoing discussion of providing
open access to the peer-reviewed research literature online (1998-2005)
is available at:
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/
        To join or leave the Forum or change your subscription address:
http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html
        Post discussion to:
        american-scientist-open-access-forum_at_amsci.org

UNIVERSITIES: If you have adopted or plan to adopt an institutional
policy of providing Open Access to your own research article output,
please describe your policy at:
        http://www.eprints.org/signup/sign.php

UNIFIED DUAL OPEN-ACCESS-PROVISION POLICY:
    BOAI-1 ("green"): Publish your article in a suitable toll-access journal
            http://romeo.eprints.org/
OR
    BOAI-2 ("gold"): Publish your article in a open-access journal if/when
            a suitable one exists.
            http://www.doaj.org/
AND
    in BOTH cases self-archive a supplementary version of your article
            in your institutional repository.
            http://www.eprints.org/self-faq/
            http://archives.eprints.org/