British Academy Report on Peer Review and Metrics

From: Stevan Harnad <harnad_at_ecs.soton.ac.uk>
Date: Tue, 4 Sep 2007 13:38:55 +0100

> A pall of gloom lies over the vital system of peer review. But the British
> Academy has some bright ideas.
> The Guardian, Jessica Shepherd reports, Tuesday September 4, 2007
> http://education.guardian.co.uk/higher/research/story/0,,2161680,00.html

Jessica Shepherd's report on peer review seems to be a good one. The
only thing it lacks is some conclusions (which journalists are often
reluctant to take the responsibility of drawing):

(1) Yes, peer review, like all human judgment, is fallible, and
susceptible to error and abuse.

(2) But, in point of fact, peer review just means the assessment of
research by qualified experts. (In the case of research proposals, it is
assessment for fundability, and in the case of research reports, it is
assessment for publishability.)

(3) Funding and publishing without any assessment is not a solution:

    (3a) Not everything can be funded (there aren't enough funds), and
    even funded projects first need some expert advice in their design.

    (3b) And everything *does* get published, eventually. But there is
    a hierarchy of journal peer-review quality standards, serving as an
    essential guide for users as to what they can take the risk of trying
    to read, use and build upon. (There is not enough time to read
    everything, and it is too risky to try to build on just anything that
    claims to have found something.) And even accepted papers first need
    some expert advice in their revision.

(4) So far, nothing as good as or better than peer review (i.e.,
qualified experts vetting the work of their fellow-experts) has been
found, tested and demonstrated. So peer review remains the only straw
afloat, unless the alternative is to toss a coin for funding and to
publish everything on a par.

(5) Peer review *can* be improved. The weak link is always the
editor (or board of editors), who chooses the reviewers and to whom
the reviewers and authors are answerable, and the funding officer(s)
or committee choosing the reviewers for proposals and deciding how
to act on the basis of the reviews. There are many possibilities for
experimenting with ways to make this meta-review component more accurate,
equitable, answerable and efficient, especially now that we are in the
online era:
http://users.ecs.soton.ac.uk/harnad/Temp/peerev.pdf

(6) Metrics are not a substitute for peer review; they are a
*supplement* to it.

    In the case of the UK, there is a dual system of research funding:
    (i) prospective funding of individual competitive proposals (RCUK)
    and (ii) retrospective top-sliced funding of entire university
    departments, based on their recent past research performance (RAE).
    In both cases, metrics can help inform and guide funding officers,
    committees, editors, boards and reviewers. And in the case of the
    RAE in particular, they can shoulder a lot of the former peer-review
    burden: the RAE, being a retrospective rather than a prospective
    exercise, can benefit from the prior publication peer review that
    the journals have already done for the submissions, rank the
    outcomes with metrics, and then add expert judgment only afterward,
    as a way of checking and fine-tuning the metric rankings (as in the
    sketch following this paragraph). It would also be a very good idea,
    both for the reviewers and for the researchers being reviewed, if
    funders and universities explicitly recognized peer-review
    performance itself as a metric.
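
    A minimal, purely illustrative sketch of that "checking and
    fine-tuning" step (in Python, with invented department names and
    scores; this is not any official RAE procedure):

        # Illustrative sketch only: invented data, not an official RAE method.
        # Idea: rank departments by a citation metric, compare those ranks
        # with the peer panel's ranks, and flag large divergences for
        # expert "fine-tuning".
        from scipy.stats import spearmanr

        # Hypothetical departments, each with a citation-based metric score
        # and a peer-panel score (higher = better on both scales).
        departments = {
            "Dept A": {"metric": 412.0, "panel": 3.8},
            "Dept B": {"metric": 355.0, "panel": 3.9},
            "Dept C": {"metric": 198.0, "panel": 2.6},
            "Dept D": {"metric": 467.0, "panel": 2.9},  # metric, panel disagree
            "Dept E": {"metric": 120.0, "panel": 2.1},
        }

        names = list(departments)
        metric = [departments[d]["metric"] for d in names]
        panel = [departments[d]["panel"] for d in names]

        # How well do the metric rankings track the panel rankings overall?
        rho, p = spearmanr(metric, panel)
        print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")

        # Rank each department under both measures (rank 1 = best).
        def ranks(values):
            order = sorted(range(len(values)), key=lambda i: -values[i])
            r = [0] * len(values)
            for rank, i in enumerate(order, start=1):
                r[i] = rank
            return r

        m_rank, p_rank = ranks(metric), ranks(panel)

        # Departments whose two ranks diverge by two or more places are
        # exactly the cases the expert panel would revisit by hand.
        for i, d in enumerate(names):
            if abs(m_rank[i] - p_rank[i]) >= 2:
                print(f"{d}: metric rank {m_rank[i]}, "
                      f"panel rank {p_rank[i]} -> revisit")

    A high rank correlation would validate the metric against the
    panel's judgment; the flagged outliers are where expert judgment
    still earns its keep.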

    Harnad, S. (2007) Open Access Scientometrics and the UK Research
    Assessment Exercise. In Proceedings of 11th Annual Meeting of the
    International Society for Scientometrics and Informetrics 11(1), pp.
    27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.
    http://eprints.ecs.soton.ac.uk/13804/

    Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and
    Swan, A. (2007) Incentivizing the Open Access Research Web:
    Publication-Archiving, Data-Archiving and Scientometrics. CTWatch
    Quarterly 3(3). http://eprints.ecs.soton.ac.uk/14418/

    Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open
    Research Web: A Preview of the Optimal and the Inevitable, in Jacobs,
    N., Eds. Open Access: Key Strategic, Technical and Economic Aspects,
    chapter 21. Chandos. http://eprints.ecs.soton.ac.uk/12453/

Some more generic references on peer review follow below.

Stevan Harnad
American Scientist Open Access Forum
http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html

Chaire de recherche du Canada, Institut des sciences cognitives,
Universite du Quebec a Montreal, Montreal, Quebec, Canada H3C 3P8

Professor of Cognitive Science, Electronics & Computer Science,
University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom
http://www.crsc.uqam.ca/
http://users.ecs.soton.ac.uk/harnad/

Harnad, S. (ed.) (1982) Peer commentary on peer review: A case study in
scientific quality control, New York: Cambridge University Press.

Harnad, S. (1985) Rational disagreement in peer review. Science,
Technology and Human Values 10: 55-62.
http://cogprints.org/2128/

Harnad, S. (1986) Policing the Paper Chase. (Review of S. Lock, A
difficult balance: Peer review in biomedical publication.) Nature 322:
24-25.

Harnad, S. (1996) Implementing Peer Review on the Net: Scientific
Quality Control in Scholarly Electronic Journals. In: Peek, R. & Newby,
G. (Eds.) Scholarly Publishing: The Electronic Frontier. Cambridge MA:
MIT Press. Pp 103-118. http://cogprints.org/1692/

Harnad, S. (1997) Learned Inquiry and the Net: The Role of Peer Review,
Peer Commentary and Copyright. Learned Publishing 11(4) 283-292.
http://cogprints.org/1694/

Harnad, S. (1998/2000/2004) The Invisible Hand of Peer Review. Nature
[online] (5 Nov. 1998); Exploit Interactive 5 (2000); and in Shatz, D.
(ed.) (2004) Peer Review: A Critical Inquiry. Rowman & Littlefield. Pp.
235-242. http://cogprints.org/1646/

Peer Review Reform Hypothesis-Testing (started 1999)
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#480

A Note of Caution About "Reforming the System" (2001)
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#1170

Self-Selected Vetting vs. Peer Review: Supplement or
Substitute? (2002)
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#2341

----------------------------------------

>
> No fewer than three academic journals dismissed the economist George
> Akerlof's paper The Market for Lemons as "trivial" and "too generic" when
> it was submitted in the late 1960s. Almost four decades later it was
> regarded as a seminal text and its author thought worthy of the Nobel
> prize for economics.
>
> Peer review, when an academic submits a scholarly work to the scrutiny of
> other experts in the field for publication in a journal or for a grant,
> for example, has always been an imperfect science. But lately it has had
> more, and fiercer, critics.
>
> They say peer review is biased against innovation and originality. They
> argue that it costs too much - more than £196m a year was the estimate by
> Research Councils UK last year. And they say it takes up too much time now
> that more academics than ever are submitting papers and fewer claim they
> can afford the time to "peer review" them.
>
> Today a report published by the British Academy - an academic club of 800
> scholars elected for distinction in the humanities and social sciences -
> speaks up for peer review. The professors quote Joan Sieber, a
> psychologist at California State University, who has said: "One suspects
> that peer review is a bit like democracy - a bad system, but the best one
> possible."
>
> Albert Weale, professor of government at the University of Essex and chair
> of the committee responsible for the report, describes peer review as "the
> essential backbone to knowledge and the crucial mechanism in maintaining
> its quality".
>
> Robert Bennett, professor of geography at Cambridge University, says it is
> "an essential, if imperfect, practice for the humanities and social
> sciences".
>
> The report's writers snap back at those who attack peer review. They back
> up their ripostes with the comments of journal editors, research councils,
> charities and funders, academics and postdoctoral students. To those who
> say peer review is biased against innovation and that journal editors
> "play safe" and are "friendly to their own work", the academy's response
> is that universities and research councils are awarding more grants for
> risky, avant-garde research projects than ever.
>
> The report admits that "there may be scope for the government to consider
> ways in which it can encourage endowments ... within universities to
> support small grants for innovative, high-risk research".
>
> But it warns: "It is important not to commit the fallacy of assuming that,
> because high quality will be innovative, the innovative is necessarily
> high quality ... other criteria include: accuracy, validity,
> replicability, reliability, substantively significant, authoritative and
> so on."
>
> Banality gets acceptance
>
> Marian Hobson, professor of French at Queen Mary, University of London,
> says: "If a journal editor gets everything right all the time, they are
> probably aiming for the middle, banally all-right work, which will be out
> of date in the blink of an eyelid. Really excellent work may sometimes
> take a while to be accepted."
>
> To those who lambast peer reviewing for being too time consuming and
> costly, the professors have the following suggestion: give far more
> recognition to the unpaid, altruistic labour of those who do it and the
> system will be under less strain.
>
> Hobson says: "If done properly, [peer review] entails bibliographical
> searches, checking of statements, repeated visits to the university
> library, not just to Google. Yet this kind of activity counts for nix,
> nothing, zilch in the research assessment exercise [in which every active
> researcher in every university in the UK is assessed by panels of other
> academics to receive grants for their research]."
>
> The academy stops short of demanding peer reviewers be paid. It realises
> this would be impossible for all but the most wealthy journal publishers.
> Instead, the report recommends that the importance of peer reviewing
> should be better reflected in the research assessment exercise. "Those
> responsible for the management of universities and research institutes
> need to ensure that they ... encourage and reward peer review activity,"
> it says.
>
> This might stop some high calibre academics, already overburdened with
> work, from being put off peer reviewing, the professors say. It might also
> attract junior lecturers and even postdoctoral students. More reviewers
> would mean the system was under less strain. The strain is partly
> triggered by an increase of up to 62% in the number of academic papers
> submitted in some fields in the past five years.
>
> Here lies another problem, says the report. "As we conducted our review,
> we were struck ... by the extent to which there is little attention to
> training in peer review," it says. "Training is important, not just in
> itself, but because of the privileged position that peer reviewers enjoy.
>
> "By virtue of reading a paper, reviewers can acquire access to original
> data sets, new empirical results or innovative conceptual work. In the
> business world, these would count as commercial secrets. In the academic
> world, the ethos is that reviewers are part of the gatekeeping system, the
> ultimate rationale of which is the fast and efficient dissemination of
> research findings.
>
> "The integrity of the peer review system is therefore of great importance.
> One of the ways in which that integrity is maintained is through its
> dependence upon professional and unselfish motivations, and this in turn
> suggests the importance of training in the professional and ethical
> conventions of the practice."
>
> The academy's report ends with a warning to the government: plans to
> overhaul the way research is assessed after next year will change peer
> review for the worse, especially in the humanities.
>
> Metrics-based approach
>
> After 2008, the quality of research - and hence the amount of funding that
> universities receive from the government - will be judged largely on the
> basis of statistics such as grant income and contracts. It is accepted in
> the sector that this "metrics-based" approach will work better for science
> and engineering than for arts and humanities research, which does not
> receive much income and where books take longer to have an impact.
>
> Hobson says: "Metrics is helpful in giving a kind of overview measured in
> terms of items. A bit like a waistline measurement. It doesn't give much
> of an idea of whether they are slim or fat, unless they are at the extreme
> ends of the spectrum.
>
> "Wittgenstein at his death had one book and one article published. Another
> book was on the way, but unfinished. "Heaven knows what would have
> happened to him in today's academia."
Received on Tue Sep 04 2007 - 14:26:03 BST
