Re: Future UK RAEs to be Metrics-Based

From: Stevan Harnad <>
Date: Thu, 6 Apr 2006 12:27:45 +0100

On Thu, 6 Apr 2006, Zuccala, Alesia wrote:

> I recently stumbled across an article online that was published in
> the Chronicle of Higher Education - October 14th, 2005 and I think
> it bears an interesting relationship to the discussion thread on
> metric-based RAEs. I wonder if some of you had seen this article:
> and if anyone has any
> thoughts about what the Princeton researcher has said: "The impact
> factor may be a pox upon the land because of the abuse of that number"
> Open access to research is definitely valuable. I am interested in
> all of its developments etc, but I wonder if we should be concerned
> about how citation impact factors are going to be used in the future -
> should we be wary of potential abuses? Will it become more important
> to introduce complementary qualitative evaluations?

Richard Monastersky's article in the CHE was responded to in this Forum on
Oct. 10, 2005. The response is reproduced below. (To minimize redundancy
and non-sequiturs, I urge AmSci posters to do a Google search on "amsci"
plus the keywords of their proposed posting, to check whether the topic
has already been treated.)

The answer to Alesia Zuccala's query is: No, the whole point of scrapping
RAE's wasteful "complementary qualitative [re]-evaluation" in favour of
metrics -- which are already so highly correlated with, hence predictive
of the RAE's outcome rankings -- is to stop wasting time and money
that could have been spent on research itself. Published articles have
already been "qualitatively evaluated," and that process is called *peer
review.* There is no sense in repeating, with a local, inexpert UK panel,
what has already been done by each individual journal by purpose-picked,
qualified experts. Journals vary in quality; so weight them by their known
track-record (of which their impact factor is but one partial correlate)
and "complement" them with other metrics, such as the article's and
author's own exact citation counts (and download counts, and
recursive CiteRank weights, and fan-in/fan-out co-citation weights, and
hub/authority counts, and endogamy/exogamy weights, and profile uniqueness
scores, and latency/longevity scores, and latent semantic co-text
weights, etc. etc. -- the full, rich panoply of present and future metrics
afforded by a digital, online, full-text, open-access research database),
always comparing like with like, and customizing the metric equation to
the field. Abuses will be ever more detectable and deterrable by digital
detection algorithms and naming-and-shaming in an open digital corpus.

Move forward, not backward. A-priori Peer Review remains the most
heavily weighted and important determinant of research quality,
but open-access-based metrics provide a rich and diverse array of
a-posteriori complements to Peer Review -- all part of the collective,
cumulative and self-corrective nature of learned inquiry.

    "Chronicle of Higher Education Impact Factor Article" (Oct 2005)
    Comment on:
    Richard Monastersky, The Number That's Devouring Science,
    Chronicle of Higher Education, October 14, 2005
    [text appended at the end of the comment]
        Impact Analysis in the PostGutenberg Era
Although Richard Monastersky describes a real problem -- the abuse of
journal impact factors -- its solution is so obvious that it hardly
requires so many words:
A journal's citation impact factor (CIF) is the average number of
citations received by articles in that journal (ISI -- somewhat
arbitrarily -- calculates CIFs on the basis of the preceding two
years, although other time-windows may also be informative).
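The two-year CIF calculation described above can be sketched as follows (a minimal illustration; the journal's article and citation figures are invented, not real ISI data):

```python
# Sketch of the two-year journal citation impact factor (CIF):
# citations received in year Y to articles the journal published in
# years Y-1 and Y-2, divided by the number of articles it published
# in Y-1 and Y-2. All figures below are invented for illustration.

def impact_factor(citations_to_prev_two_years, articles_in_prev_two_years):
    """Average citations per article over the preceding two years."""
    return citations_to_prev_two_years / articles_in_prev_two_years

# Hypothetical journal: 120 articles published in 2004-2005,
# cited 300 times during 2006.
cif = impact_factor(300, 120)
print(cif)  # 2.5
```

The same function applied over a different time-window (say, five years) would give a different, equally defensible figure -- which is the sense in which the two-year window is "somewhat arbitrary."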
There is an undeniable relationship between the usefulness of an
article and how many other articles use and hence cite it. Hence CIF
does measure the average usefulness of the articles in a journal. But
there are three problems with the way CIF itself is used, each of them
readily correctable:
    (1) A measure of the average usefulness of the articles in the
    journal in which a given article appears is no substitute for
    the actual usefulness of each article itself: In other words, the
    journal CIF is merely a crude and indirect measure of usefulness;
    each article's own citation count is the far more direct and accurate
    measure. (Using the CIF instead of an article's own citation count
    [or the average citation count for the author] for evaluation and
    comparison is like using the average marks for the school from which
    a candidate graduated, rather than the actual marks of the candidate.)
    (2) Whether comparing CIFs or direct article/author citation counts,
    one must always compare like with like. There is no point comparing
    either CIFs between journals in different fields, or citation counts
    for articles/authors in different fields. (No one has yet bothered
    to develop a normalised citation count, adjusting for different
    baseline citation levels and variability in different fields. It
    could easily be done, but it has not been -- or if it has been done,
    it was in an obscure scholarly article, but not applied by the actual
    daily users of CIFs or citation counts today.)
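The field-normalised citation count that point (2) calls for could be as simple as a z-score against the field's own citation baseline. The sketch below uses invented baseline figures; a real normalisation would also match publication year and document type:

```python
import statistics

def normalised_citations(count, field_counts):
    """z-score: how far an article's citation count sits above or
    below its own field's mean, in units of the field's spread."""
    mean = statistics.mean(field_counts)
    sd = statistics.stdev(field_counts)
    return (count - mean) / sd

# Invented baselines: 10 citations means very different things
# in a low-citing field than in a high-citing one.
maths_baseline = [0, 1, 1, 2, 3, 3, 4, 5]          # low-citing field
biomed_baseline = [5, 10, 15, 20, 25, 30, 35, 40]  # high-citing field

print(normalised_citations(10, maths_baseline))   # positive: well above field mean
print(normalised_citations(10, biomed_baseline))  # negative: below field mean
```

On this scale the same raw count of 10 scores far above one field's baseline and below the other's, which is exactly the like-with-like comparison the raw counts cannot give.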
    (3) Both CIFs and citation counts can be distorted and abused.
    Authors can self-cite, or cite their friends; some journal editors
    can and do encourage self-citation of their journal. These malpractices
    are deplorable, but most are also detectable, and then
    name-and-shame-able and correctable. ISI could do a better job policing
    them, but soon the playing field will widen, for as authors make their
    articles open access online, other harvesters -- such as Citebase and
    CiteSeer and even Google Scholar -- will be able to harvest and
    calculate citation counts, and average, compare, expose, enrich and
    correct them in powerful ways that were inconceivable in the
    Gutenberg era.
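One simple form such digital detection could take, once the citation graph is openly harvestable, is flagging anomalously high self-citation rates. This is a toy sketch -- the authors, papers and threshold are all invented:

```python
def self_citation_rate(author, cited_papers, authorship):
    """Fraction of an author's outgoing citations that point to
    papers on which that same author is listed."""
    own = sum(1 for p in cited_papers if author in authorship[p])
    return own / len(cited_papers)

# Invented toy data: who wrote which paper, and which papers each
# author cites in their own work.
authorship = {"p1": {"alice"}, "p2": {"alice"}, "p3": {"bob"}, "p4": {"carol"}}
citations_by = {
    "alice": ["p1", "p2", "p3"],  # cites her own papers 2 of 3 times
    "bob": ["p1", "p4"],          # never cites himself
}

THRESHOLD = 0.5  # arbitrary cut-off, for illustration only
for author, cited in citations_by.items():
    rate = self_citation_rate(author, cited, authorship)
    if rate > THRESHOLD:
        print(f"{author}: self-citation rate {rate:.0%} -- worth a look")
```

A real detector would of course compare against field norms rather than a fixed threshold, but the point stands: in an open corpus such patterns are trivially computable, hence name-and-shame-able.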
So, yes, CIFs are being misused and abused currently, but the cure is
already obvious -- and a wealth of powerful new resources is on the way
for measuring and analyzing research usage and impact online, including
(1) download counts, (2) co-citation counts (co-cited with, co-cited by),
(3) hub/authority ranks (authorities are highly cited papers cited by
many highly cited papers; hubs cite many authorities), (4)
download/citation correlations and other analyses, (5) download
growth-curve and peak-latency scores, (6) citation growth-curve and
peak-latency scores, (7) download/citation longevity scores, (8) co-text
analysis (comparing similar texts, extrapolating directional trends),
and much more. It will no longer be just CIFs and citation counts, but a
rich multiple regression equation, with many weighted predictor
variables based on these new measures. And they will be available both
for navigators and evaluators online, based not just on the current
database but on all of the peer-reviewed research literature.
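The "rich multiple regression equation" envisaged here amounts to a weighted combination of metrics, with the weights customised per field. A minimal sketch -- the field names, weights and article figures below are invented placeholders; in practice the weights would be estimated empirically (e.g. by regressing against panel rankings) for each field separately:

```python
# Composite impact score as a weighted sum of (field-normalised) metrics.
# The weights are invented placeholders standing in for empirically
# fitted regression coefficients, estimated separately per field.

FIELD_WEIGHTS = {
    "physics":   {"citations": 0.5, "downloads": 0.3, "hub_authority": 0.2},
    "sociology": {"citations": 0.3, "downloads": 0.5, "hub_authority": 0.2},
}

def composite_score(metrics, field):
    """Weighted sum of metric values, using the field's own weights."""
    weights = FIELD_WEIGHTS[field]
    return sum(weights[name] * metrics[name] for name in weights)

# One hypothetical article's normalised metric values.
article = {"citations": 1.2, "downloads": 0.8, "hub_authority": 0.4}
print(composite_score(article, "physics"))    # ~0.92
print(composite_score(article, "sociology"))  # ~0.84
```

The same article scores differently under different field equations, which is precisely the "customizing the metric equation to the field" argued for above.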
Meanwhile, use the direct citation counts, not the CIFs.
Some self-citations follow (and then the CHE article's text):
Brody, T. (2003) Citebase Search: Autonomous Citation Database for
Archives. sinn03 Conference on Worldwide Coherent Workforce, Satisfied
Users -- New Services For Scientific Information, Oldenburg, Germany,
September 2003.
Brody, T. (2004) Citation Analysis in the Open Access World. Interactive
Media International.
Brody, T., Harnad, S. and Carr, L. (2005) Earlier Web Usage Statistics as
Predictors of Later Citation Impact. Journal of the American Society for
Information Science and Technology (JASIST, in press).
Hajjem, C., Gingras, Y., Brody, T., Carr, L. & Harnad, S. (2005) Across
Disciplines, Open Access Increases Citation Impact. (manuscript in
preparation)
Hajjem, C. (2005) Analyse de la variation de pourcentages d'articles en
libre accès en fonction de taux de citations [Analysis of the variation
in percentages of open-access articles as a function of citation rates].
Harnad, S. and Brody, T. (2004a) Comparing the Impact of Open Access (OA)
vs. Non-OA Articles in the Same Journals. D-Lib Magazine 10(6).
Harnad, S. and Brody, T. (2004b) Prior evidence that downloads predict
citations. British Medical Journal online.
Harnad, S. and Carr, L. (2000) Integrating, Navigating and Analyzing
Eprint Archives Through Open Citation Linking (the OpCit Project).
Current Science 79(5): 629-638.
Harnad, S., Brody, T., Vallieres, F., Carr, L., Hitchcock, S., Gingras,
Y., Oppenheim, C., Stamerjohanns, H. and Hilf, E. (2004) The
Access/Impact Problem and the Green and Gold Roads to Open Access.
Serials Review 30(4): 310-314.
Hitchcock, S., Brody, T., Gutteridge, C., Carr, L., Hall, W., Harnad,
S., Bergmark, D. and Lagoze, C. (2002) Open Citation Linking: The Way
Forward. D-Lib Magazine 8(10).
Hitchcock, S., Carr, L., Jiao, Z., Bergmark, D., Hall, W., Lagoze, C.
and Harnad, S. (2000) Developing services for open eprint archives:
globalisation, integration and the impact of links. In Proceedings of
the 5th ACM Conference on Digital Libraries, San Antonio, Texas, June
2000, pp. 143-151.
Hitchcock, S., Woukeu, A., Brody, T., Carr, L., Hall, W. and Harnad, S.
(2003) Evaluating Citebase, an open access Web-based citation-ranked
search and impact discovery service. Technical Report ECSTR-IAM03-005,
School of Electronics and Computer Science, University of Southampton.
UNIVERSITIES: If you have adopted or plan to adopt an institutional
policy of providing Open Access to your own research article output,
please describe your policy at:
    BOAI-1 ("green"): Publish your article in a suitable toll-access journal.
    BOAI-2 ("gold"): Publish your article in an open-access journal if/when
            a suitable one exists.
    In BOTH cases, self-archive a supplementary version of your article
            in your institutional repository.
Received on Thu Apr 06 2006 - 12:48:58 BST
