Re: UK Research Assessment Exercise (RAE) review

From: Jan Velterop <velteropvonleyden_at_btinternet.com>
Date: Thu, 28 Nov 2002 14:14:06 +0000

On Wednesday, November 27, 2002, at 01:06 PM, Stevan Harnad wrote:

> On Wed, 27 Nov 2002, Jan Velterop wrote:
>
>> I meant to give an example of a complement to quantification.
>
> Signed open secondary reviews are certainly a complement to both
> scientometric measures and primary (peer) reviews. All direct human
> judgments are. But they are also countable, content-analyzable, comparable
> against other data, including the track-record of the reviewer's name,
> hence amenable to scientometrics.

I never claimed otherwise.

> By the way, primary peer reviews are not usually signed by the referees'
> names, but they are always signed by the journal-name. Hence the journal
> and its editor are openly accountable for the quality of the papers it
> accepts (and, indirectly, for those it rejects too!). That is why the
> journal-name and track-record are such important indicators, both for
> scientometric assessment and for navigation by the would-be user trying
> to decide what is worth reading and safe to try to build upon.

Close analysis of the track record of many journals shows enormous
variability in the citation rates of the articles published in them. If
my journal publishes mostly 'landfill' science, but I manage to
attract a few brilliant review articles (for instance by paying the
review authors generously), I can secure a reasonable impact factor, the
common measure of a journal's track record. This is not a hypothesis, but
widespread reality. Secondary evaluation brings this to the fore, and
that's why secondary evaluation, in the manner of, for instance, Faculty
of 1000, is so important.
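To make the arithmetic concrete, here is a minimal sketch in Python
(the citation counts are purely illustrative assumptions, not data from
any real journal). Because the impact factor is in essence a mean of
citation counts, a handful of heavily cited reviews can mask a large
body of rarely cited papers:

    # Hypothetical citation counts for one journal's articles over the
    # two-year impact-factor window: 95 'landfill' papers plus 5
    # commissioned review articles.
    landfill = [0, 1, 0, 2, 1] * 19       # 95 rarely cited articles
    reviews = [120, 150, 90, 200, 140]    # 5 heavily cited reviews
    citations = landfill + reviews

    # The impact factor is (roughly) total citations divided by the
    # number of citable items -- a mean, which the five reviews dominate.
    impact_factor = sum(citations) / len(citations)
    median = sorted(citations)[len(citations) // 2]

    print(f"mean (impact-factor-like): {impact_factor:.2f}")  # 7.76
    print(f"median article citations:  {median}")             # 1

The mean suggests a strong journal; the median reveals that the typical
article in it is cited once.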

>> Much of the trouble is not quantification per se, but the lack of
>> information to enable weighting the votes.
>
> To a great extent scientometrics is about finding the proper weightings
> for those votes!
>
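To make the weighting idea concrete, here is a minimal sketch in Python
(the reviewer names, scores, and track-record-based weighting rule are
all hypothetical assumptions of mine, not an established scientometric
method): each 'vote' on a paper is scaled by a weight derived from the
voter's own citation record.

    # Hypothetical ratings of one paper (1-10 scale) by three reviewers,
    # and each reviewer's own career citation counts.
    votes = {"reviewer_a": 8, "reviewer_b": 4, "reviewer_c": 9}
    track_record = {"reviewer_a": 900, "reviewer_b": 30, "reviewer_c": 250}

    # One possible weighting: normalise the track records so the
    # weights sum to 1, then take the weighted mean of the votes.
    total = sum(track_record.values())
    weights = {name: c / total for name, c in track_record.items()}

    weighted_score = sum(votes[n] * weights[n] for n in votes)
    unweighted_score = sum(votes.values()) / len(votes)

    print(f"unweighted mean vote: {unweighted_score:.2f}")  # 7.00
    print(f"weighted score:       {weighted_score:.2f}")    # 8.11

With weighting, the favourable vote of the heavily cited reviewer_a
dominates; without it, all three votes count equally.
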
>> The journals (well, at least some of them) lend a certain weight to
>> their peer-review, but this peer-review is almost always anonymous.
>
> Journal quality varies, both within journals (owing to human
> fallibility) and between journals (owing to systematic differences in
> peer-review standards and hence quality). The journal, however, is never
> anonymous. Its reputation is answerable to the degree to which it
> improves article quality through peer review, and the quality
> selectivity it exercises.

This sounds like an 'ideal market' argument. Ideal markets don't exist
either.

> I will not rehearse here the long, old list of arguments for and against
> referee anonymity. The primary argument against referee anonymity is
> answerability (to ensure qualifications, minimize bias, etc.). The
> primary argument for anonymity is freedom (to exercise judgment without
> risk of counter-bias, e.g., when a junior researcher is reviewing the
> work of a senior researcher). Referee anonymity is normally offered as
> an option which some referees choose to exercise and some do not,
> depending on the referee and the circumstances. But the real protection
> against bias is supposed to be the editor (to whom the referee certainly
> is not anonymous) and the reputation of the journal. A biased choice of
> referees will generate biased referee reports and biased journal
> contents. That is a matter of public record. The remedy is either to
> replace the editor or to switch to a rival journal.
>
> But this is all on the topic of peer review reform, which is not the
> focus of this Forum. This Forum is concerned with freeing the current
> peer-reviewed research literature (20,000 peer-reviewed journals) from
> access-tolls, not about freeing it from, or modifying, peer review.

The perception that I wanted to steer the discussion in the direction
of peer-review reform is perhaps the reason why Stevan, as moderator,
chose not to post my full contribution on the September98-list (fair
enough, that's his prerogative) but only the bits to which he reacts.
(I'll post the full contribution to the bmanifesto-list shortly, so
that my open access friends can have a complete record of the
discussion; the omissions are minor, but I just don't like censorship
of any kind on discussion lists.) But my topic is not peer-review
reform per se; the issue is, and was, the impediments that entrenched,
traditional scientometric qualifiers put up for new open access
journals. These impediments are presumably acceptable if one believes
that open access to the peer-reviewed literature is only ever
realistically achievable by mounting articles published in entrenched,
traditional journals in open institutional archives or self-archives.
But I don't believe that, and I happen to know quite a few people who
agree with me that there are other roads to the proverbial Rome, such
as journals published with open access from the outset.

> That second agenda will first require some empirical testing and comparison,
> which has not yet been done, to my knowledge. To put it another way:
> the alternative to toll-access, namely, open-access, has been tried,
> tested, shown to work, and shown to be far more beneficial to research
> and researchers. The alternatives to peer-review have not (yet) been.
>
> The present RAE assessment/impact thread is about ways to accelerate the
> transition to open access by SUPPLEMENTING classical peer review with
> rich new scientometric measures of impact that are co-evolving with an
> open-access database. It is not about substitutes or reforms for classical
> peer review. Those are another (worthy) matter, for another forum.

Quite. I only wish to talk about peer-review in relation to flawed,
entrenched quality indicators that hamper new open access journals.

> "Peer Review Reform Hypothesis-Testing"
> http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/0479.html
>
>> Reviewers may not even be proper 'peers' in some cases.
>
> Yes, occasionally some conscientious editors err in their choice of
> referees, or in their evaluation of their reports. Some human error is
> inevitable (even by the most peerless of peers), but one hopes that when
> the error is systematic (i.e., bias or incompetence) the open,
> answerable dimension of the system -- namely, the journal's and editor's
> names and reputations -- will help expose, control and correct such
> errors.
>
>> Stevan speculates that "Perhaps reviewer-names could accrue some
>> objective scientometric weight...". I would perhaps remove the
>> 'perhaps'.
>
> Note that I was speaking of secondary, open reviewers, in review journals
> or in open peer commentary or in ratings, all appearing after the article
> has been published. Those are all valuable supplements to the current
> system. But I was certainly not recommending abandoning the option
> of referee anonymity in primary peer review (until the logic and
> empirical consequences of such a change are analyzed and tested
> thoroughly) -- although untested recommendations along those lines have
> been made by others (including some bearing the same surname as myself!
> http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/0303.html ).
>
>> Maybe it has its own set of problems, but disclosing the peers' identity may
>> be a great help in assessing the weight or significance of the review.
>
> And perhaps a great hindrance in getting some peers to review at all,
> under a variety of conditions.

Perhaps. That's why open and 'onomastic' (i.e., signed) secondary
peer-review is so helpful.

> (There have been similar -- unresolved -- back-and-forths about
> author-anonymization. Characteristically, decisions were taken without
> prior empirical testing, on a-priori ideological or conceptual grounds. I
> have not followed the outcomes, but to my knowledge classical peer
> review has tended to be reverted to after these sorties, and pretty much
> proceeds apace, with optional referee anonymity to the author and author
> non-anonymity to the referee remaining the norm.)
>
>> Besides, it may disclose possible conflicts of interest. All BMC's medical
>> journals have open peer review which works most satisfactorily.
>
> That is interesting to know, and will need to be evaluated and compared
> with suitable control-alternatives after a few years (and once any
> "hawthorn effect" has dissipated). It is not ready to be recommended
> for wider adoption yet: BMC is very much a conscious experiment by the
> self-selected sample of authors and referees who have collaborated so
> far. Any generalizations will require more time, and a larger sample.

Open peer-review is also used by the British Medical Journal and other
journals in the medical field, if I'm correctly informed.

>> All journals also have a comments section enabling a public, open
>> discussion.
>
> This supplement (as opposed to substitute) already has a long
> tried-and-true history (including an open-peer-commentary journal
> I myself edited for 25 years, http://www.bbsonline.org/ , and,
> for over a decade now, an open-access online-only journal too,
> http://psycprints.ecs.soton.ac.uk/ , both modeled on a
> still longer-standing journal, now going on over four decades:
> http://www.journals.uchicago.edu/CA/home.html )
>
>> The point of Faculty of 1000 is that an open, secondary review of published
>> literature by acknowledged leaders in the field, signed by the reviewer, is
>> seen by increasing numbers of researchers (measured by the fast-growing
>> usage figures of F1000) as a very meaningful addition to quantitative data
>> and a way to sort and rank articles in order of importance.
>
> I agree completely. Open peer commentary is an extremely valuable
> supplement to classical peer review:
>
> Harnad, S. (1979) Creative disagreement. The Sciences 19: 18-20.
>
> Harnad, S. (ed.) (1982) Peer commentary on peer review: A case study
> in scientific quality control, New York: Cambridge University Press.
>
> Harnad, S. (1985) Rational disagreement in peer
> review. Science, Technology and Human Values 10: 55-62.
> http://cogprints.soton.ac.uk/documents/disk0/00/00/21/28/
>
> Harnad, S. (1997) Learned Inquiry and the Net: The Role
> of Peer Review, Peer Commentary and Copyright. Learned
> Publishing 11(4): 283-292.
> http://cogprints.soton.ac.uk/documents/disk0/00/00/16/94/
>
> Harnad, S. (1998/2000) The invisible hand of peer review. Nature
> [online] (5 Nov. 1998) & Exploit Interactive 5 (2000):
> http://cogprints.soton.ac.uk/documents/disk0/00/00/16/46/
>
>> Of course one can subsequently quantify such qualitative information. But
>> what a known and acknowledged authority thinks of an article is to many
>> more interesting than what anonymous peer-reviewers think. Any research
>> assessment exercise should seriously look at resources such as offered
>> by Faculty of 1000.
>
> Let 1000 flowers bloom. But it's rather mis-stating the options to
> describe them as open-review vs. anonymous-review! Classical peer review
> is one thing. Then there is post-hoc open-review thereafter.

Your misreadings, not my misstatements. Calling F1000 'secondary
review', as I did, clearly implies that it is complementary to, not an
alternative to, conventional journal peer-review. The reason why any
research assessment exercise should look at such secondary resources is
that they offer (a) a second opinion by a known reviewer, and (b) an
opinion on individual papers rather than on the average track records
of the journals in which those papers were published.

Jan Velterop