Re: UK Research Assessment Exercise (RAE) review

From: David Goodman <dgoodman_at_phoenix.Princeton.EDU>
Date: Wed, 20 Nov 2002 20:58:17 +0000

I consider the impact factor (IF), properly used, a valid measure
for comparing journals; I also consider the IF, properly used, a
possibly valid measure of article quality. But either use has many
possible interfering factors to consider, and these measurements have
been used in highly inappropriate ways in the past, most notoriously in
previous UK RAEs.
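
For reference, the standard two-year IF is simply a ratio: the citations
received in a given year by a journal's items from the two preceding years,
divided by the number of citable items the journal published in those two
years. A minimal sketch of the arithmetic (the function name and the sample
numbers are mine, purely for illustration):

    def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
        """Two-year impact factor for year Y: citations in Y to items
        published in Y-1 and Y-2, divided by the number of citable
        items published in Y-1 and Y-2."""
        return citations_to_prev_two_years / citable_items_prev_two_years

    # e.g. 1200 citations in 2002 to articles from 2000-2001, which comprised
    # 400 citable items, give an IF of 3.0
    print(impact_factor(1200, 400))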

Stevan mentions one of the problems. Certainly measuring the impact
of an individual article is more rational for assessing the quality of
the article than measuring merely the impact of the journal in which
it appears. This can be sufficiently demonstrated by recalling that any
journal necessarily contains articles of a range of quality.

More attention is needed to the comparison of fields. The citation
patterns in different subject fields vary, not just between broad
subject fields but within them. In the past, UK RAEs used a single
criterion of journal impact factor in ALL academic fields; this was
patently absurd (just compare the impact factors of journals in math
with those in physics, or those in ecology with those in biochemistry).
To the best of my knowledge they long ago stopped doing this. (This
incorrect use did much to diminish the repute of the measure, even when
correctly used.)
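
In principle such cross-field differences could be taken into account by
normalising citation counts against the rate typical of the field, rather
than by comparing raw impact factors. A crude sketch of the principle only
(the field names and baseline figures are invented for illustration, and, as
noted below, such adjustments cannot be applied mechanically):

    # Hypothetical per-field baselines: mean citations per paper over some window.
    FIELD_BASELINE = {"mathematics": 1.5, "physics": 6.0,
                      "ecology": 3.0, "biochemistry": 9.0}

    def normalised_citation_score(citations, field):
        """Citations relative to the field average; 1.0 means typical for the field."""
        return citations / FIELD_BASELINE[field]

    # The same raw count of 6 citations looks very different once field is considered:
    print(normalised_citation_score(6, "mathematics"))   # 4.0  -> far above the field norm
    print(normalised_citation_score(6, "biochemistry"))  # 0.67 -> below the field norm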

In comparing different departments, small-scale variation between
subject specialisms can yield irrelevant comparisons, because few
departments have such a large number of individuals that they cover the
entire range of their subject field. I'll use ecology as an example:
essentially all the members of my university's department [Ecology and
Evolutionary Biology] work in mathematical ecology, and we think we are
the leading department in the world. Most ecologists work in more applied
areas. The leading journals of mathematical ecology have relatively lower
impact factors, as this is a very small field. This can be taken into
account, but in a relatively small geopolitical area like the UK, there
may be very few truly comparable departments in many fields. It certainly
cannot be taken into account in a mechanical fashion, and the available
scientometric techniques are not adequate to this level of analysis.

The importance of a paper is certainly reflected in its impact, but not
directly in its impact factor. It is not the number of publications that
cite it which is the measure, but the importance of the publications that
cite it. This is inherently not a process that can be analyzed on a
current basis.
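
To make the distinction concrete: instead of counting the publications that
cite a paper, one can weight each citation by some measure of the importance
of the citing source (in the fully recursive versions of this idea, that
importance is itself defined in the same way). A toy sketch, with invented
source names and weights:

    # Invented importance weights for citing sources; a recursive scheme would
    # compute these from the citation network rather than assign them by hand.
    SOURCE_WEIGHT = {"landmark_review": 5.0, "solid_paper": 1.0, "minor_note": 0.2}

    def weighted_citation_impact(citing_sources):
        """Sum the importance of the publications citing a paper,
        rather than merely counting them."""
        return sum(SOURCE_WEIGHT[s] for s in citing_sources)

    cites = ["landmark_review", "solid_paper", "minor_note", "minor_note"]
    print(len(cites))                        # 4 by bare count
    print(weighted_citation_impact(cites))   # 6.4 by weighted importance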

There is a purpose in looking at four papers only: in some fields of
the biomedical sciences in particular, it is intended to discourage
the deliberate splitting of work into many very small publications,
a practice which in some fields of biomedicine can give a single person
dozens of papers in a year, adding to the noise in the literature.
One could also argue that a researcher should be judged by
the researcher's best work, because the best work is what primarily
contributes to the progress of science.

In most other respects I agree with Stevan. I will emphasize that the
publication of scientific papers in the manner he has long advocated will
lead to the possibility of more sophisticated scientometrics. This will
provide data appropriate for analysis by those who know the techniques,
the subject, and the academic organization. The data obtainable from
the current publication system are of questionable usefulness for this.

Dr. David Goodman
Biological Sciences Bibliographer
Princeton University Library
dgoodman_at_princeton.edu