Re: UK Research Assessment Exercise (RAE) review

From: Stevan Harnad <>
Date: Mon, 25 Nov 2002 23:31:18 +0000

On Mon, 25 Nov 2002, Jan Velterop wrote:

> [Bahram of HEFCE's] concern is that the journal, or more
> particularly, the journal's perceived acceptance policy
> upon peer-review, is used as a proxy for quality.

The concern is admirable. Now we must wait to hear what the
alternative candidate for quality-assessment is, against which
journal peer review and journal quality levels are to be compared
as quality-indicators or quality-proxies.

(I can only repeat: it is surely not the re-refereeing of all RAE
submissions by the RAE panels that Bahram has in mind. So it would be
interesting to know what he does have in mind! That the scientometric
data and analyses can and should be strengthened -- e.g., by paper-
and author-based citation counts, usage statistics (hits), time-series
analyses, co-citation analyses, and even semantic analyses of the
full-texts -- is uncontested, indeed, that is what I was recommending
that the self-archived full-text corpus would make possible. But what
are the *other* (nonscientometric) ways to assess research quality for
the RAE?)

(By the way, as I can already sense it coming: counting grant income,
and numbers of graduate students, and plotting their respective
citation impacts, etc., is all just more scientometrics, and is exactly what
would go into the RAE-standardized online CVs I recommended, as well
as the multiple regression equation.)

> This acceptance policy can be as strict in on-line journals as in print
> ones, so there would be no reason for [RAE] to equate strict policies with
> those employed by print journals.

Of course not. We are in complete agreement about that. The only handicap
a journal may have is not yet having had the chance to demonstrate
and establish its quality level through its track record. But that is
a liability of all new journal start-ups, and again has nothing to do
with medium (on-paper-only, hybrid, or online-only) nor with economic
model (toll-access or open-access). It is purely a question of quality
(and time).

>sh> But I would be more skeptical about the implication that it is the RAE
>sh> assessors who review the quality of the submissions, rather than the
>sh> peer-reviewers of the journals in which they were published. Some
>sh> spot-checking there might occasionally be, but the lion's share of the
>sh> assessment burden is borne by the journals' quality levels and impact
>sh> factors, not the direct review of the papers by the RAE panel!
>sh> (So the *quality* of the journal still matters: it is the *medium* of
>sh> the journal -- on-paper or online -- that is rightly discounted by
>sh> the RAE as irrelevant.)
> The quality of journals matters, but quality is not the same as impact
> factor.

Agreed. But no scientometric measure is the same as quality: Such
measures are correlates or predictors of quality.

> Possibly, journals with the highest impact factors can be seen to
> be -- in general -- of higher quality than those with low impact factors,

Possibly indeed. But we are agreed that journal-impact (i.e., average
citation ratio) is only one of many (scientometric) ways to estimate
quality. Some other ways were listed above.

> but, as one often sees, rankings on the basis of differences that
> run into the single digit percentage points (e.g. IF 2.35 vs IF 2.27)
> are utterly meaningless.

Agreed. Which is another reason why a univariate measure such as journal
citation count needs to be just one among a whole battery of impact
predictors, in a multiple regression equation.
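As a toy illustration of the point (invented numbers, not real RAE data; the predictor names and weights are purely hypothetical), here is what combining a battery of scientometric predictors in a multiple regression might look like, as against relying on the journal impact factor alone:

```python
# Hypothetical sketch: a battery of quasi-independent scientometric
# predictors combined by multiple regression. All data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical sample of papers

# Quasi-independent predictors for each paper:
journal_if = rng.gamma(2.0, 1.5, n)              # journal impact factor
paper_cites = rng.poisson(10, n).astype(float)   # paper-specific citations
author_cites = rng.poisson(50, n).astype(float)  # author citation count
downloads = rng.poisson(300, n).astype(float)    # usage statistics (hits)

# A hypothetical "true quality" score that the predictors partly reflect:
quality = (0.5 * journal_if + 0.2 * paper_cites
           + 0.01 * author_cites + 0.005 * downloads
           + rng.normal(0, 1, n))

# Fit the multiple regression by ordinary least squares:
X = np.column_stack([np.ones(n), journal_if, paper_cites,
                     author_cites, downloads])
beta, *_ = np.linalg.lstsq(X, quality, rcond=None)

predicted = X @ beta
r = float(np.corrcoef(predicted, quality)[0, 1])
print(f"multivariate prediction correlates r = {r:.2f} with quality")
```

By construction, the fitted combination of predictors can correlate with the criterion at least as well as any single predictor (such as the journal impact factor) taken alone, which is the whole rationale for the battery.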

> It is a known phenomenon that impact factors are highly vulnerable
> to manipulation

True, but once they are just one in a battery of predictors, manipulation
will be more detectable; and a whole battery of quasi-independent
predictors is far harder to manipulate. (And online manipulation is also
more readily detectable.)

> and that in just about any given
> journal a minority of articles is commonly responsible for the bulk of
> the citations on which the impact factors are based.

This would immediately become apparent if the regression equation
included both the journal impact factor and the paper's (and author's)
specific citation counts. The (high or low) paper-specific count would
counterbalance the journal-based count, and they could be weighted as
the RAE assessors saw fit (from further scientometric analyses).

> An American medical
> journal will almost always have a very much higher impact factor than its
> European qualitative equivalent, simply because in the medical areas the
> culture 'dictates' that American authors publish in the main in American
> journals and do not cite their European colleagues, whereas European
> authors publish as much in American as in European journals and usually
> cite all relevant literature, be it American or European.

Where that is the case, it too can be scientometrically adjusted for.

> Quality in this example is not easily measurable in terms of impact
> factors.

Not univariate ones.

>sh> (Hence the suggestion that a "top-quality" work risks nothing in being
>sh> submitted to an "unorthodox medium" -- apart from reiterating that
>sh> the medium of the peer-reviewed journal, whether on-line or on-paper,
>sh> is immaterial -- should certainly not be interpreted by authors as RAE
>sh> license to bypass peer-review, and trust that the RAE panel will review
>sh> all (or most, or even more than the tiniest proportion of submissions
>sh> for spot-checking) directly! Not only would that be prohibitively expensive
>sh> and time-consuming, but it would be an utter waste, given that peer
>sh> review has already performed that chore once already!
> However, if 'unorthodox medium' means 'new journal with an unorthodox
> publishing model' (after all, since most journals have an on-line edition
> nowadays, being electronic by itself would hardly have been described by
> Bahram as 'unorthodox'), then authors of top-quality work are perceived to
> take a risk by publishing in them, for these unorthodox new journals will
> not have an impact factor yet.

I am getting confused again. I thought we had agreed that the journal's
medium (online, on-paper, both) and economic model (toll-access, open
access) are irrelevant to the quality of the research appearing in it.
That quality depends entirely on the quality of the submissions and the
peer review. That authors are leery of new journals lacking
track-records is surely not the fault of the RAE, nor of
quality-assessment methods.

Can we just keep these factors distinct? Whether the journal is orthodox
or unorthodox is not and should not be relevant to the RAE (and
if it is relevant to the prospective author, that's another issue). The
only thing relevant to the RAE is the quality of the papers, and how
to measure that quality. For that, I have not heard any alternatives to
scientometrics mentioned (though multivariate regression would certainly
increase the predictive power of the scientometrics). So I'm having
trouble discerning where the beef (in the sense of grievance rather
than substance) is here.

> This is not to say that articles in the new
> open access journals are not cited as often as in conventional journals
> -- on the contrary: we have strong indications at BioMed Central that they
> are actually cited a great deal more often than similar articles published
> conventionally.

Fine. We are in agreement then. If those higher journal impact factors
are still too recent to be detected by ISI, they would certainly be detectable
already if those articles were in their authors' institutional Eprint
Archives, as I was recommending, and the many scientometric measures of
impact were derived from that national database ("RAEprints"). Of course
it would be just a UK estimate in the first instance, but that might
not be so bad, and it could be supplemented by ISI data until the
practice spread worldwide (as I hope it would do, swiftly, with the UK
demonstrating how and why).

> The system of impact factors, however, is stacked against
> new journals and has a considerable bias toward entrenched journals
> and their toll-gate models.

It is time that is stacked against new journals, and I don't know what
you mean by bias toward toll-gate models: Do you mean ISI tends to index
toll-based journals? But of course! Almost all journals are still
toll-based today. ISI does require a track-record, but that's
understandable, as ISI itself is a toll-based system, and covering a new
journal has resource implications. Many new journal start-ups fail; and
lower-quality journals understandably have less claim to finite resources
than higher-quality ones. And only a track-record can tell which is
which.

(After several years of courtship, ISI now covers Psycoloquy, for
example -- an open-access online-only
journal that is at least as unorthodox as the BMC journals and has been
around for over a decade now. ISI has since extended coverage to several
other newer online-only and open-access journals. I'd say there will
soon be an ISI bias FOR rather than against online-journals, because
it's so much cheaper and easier to cover them than the old scan+OCR
process ISI uses for paper journals.)

On the other hand, new journals would not have this ISI coverage handicap
if their contents were covered by the OAI scientometric engines. All
that requires is that their contents be open-access and OAI-compliant
(whether because they are self-archived or because the journal itself
provides an open-access archive -- or it allo-archives them for its
authors in an open-access archive). And then much earlier measures of
impending impact (such as usage-measures, or even the first or second
derivative of the usage curve, and the shape of the nascent citation
curve, or even the co-citational authorities doing the early citing)
could provide sensitive early-days scientometric predictors of impact
and quality.
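As a small illustration of the usage-curve idea (the monthly hit counts below are invented): the first derivative of the cumulative download curve is simply the monthly hit rate, and the second derivative is its month-on-month acceleration, an "early-days" signal available long before any citations accrue.

```python
# Illustrative sketch of early usage scientometrics: first and second
# derivatives of a paper's cumulative usage (download) curve.
# The monthly_hits series is invented for illustration.
import numpy as np

monthly_hits = np.array([5, 12, 25, 48, 80, 120], dtype=float)  # hits/month
cumulative = np.cumsum(monthly_hits)

# First derivative of the cumulative curve ~= the monthly hit rate;
# second derivative ~= the acceleration of that rate.
first_deriv = np.diff(cumulative)        # equals monthly_hits[1:]
second_deriv = np.diff(cumulative, n=2)  # month-on-month acceleration

accelerating = bool(np.all(second_deriv > 0))
print("usage still accelerating:", accelerating)  # prints: True
```

A paper whose usage curve is still accelerating months after deposit is a plausible early candidate for later citation impact, which is all that is claimed for such measures here: early, sensitive predictors, not quality itself.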

> Fortunately, this is only an irritating,
> but temporary problem as the rates at which articles published in BMC
> open access journals are cited, will ensure high impact factors once
> the Impact Factory deems the time ripe to calculate them.

Of course they will. And you should separate the citation counts
themselves -- which will rise if the papers are being cited, regardless
of whether ISI covers them -- from the separate question of the
track-record needed for ISI coverage. (You should really be welcoming
citebase-style scientometric engines that don't insist on a trial period
before covering papers!)

> I agree that the preparation of large [RAE] portfolios is a waste of time
> and expense if all that happens is straightforwardly adding impact factors
> of the journals in which the papers have been published. So it is not
> so much that RAE rankings are 'predictable' from the impact factors,
> they are *based on* the impact factors.

What we call it matters less than whether there is anything else to base
them on. (RAE assessors are welcome to reveal whether they had any
other, nonscientometric tricks up their sleeves, when they did the

> We agree that open access could help, even with research impact
> assessment.

And that agreement is all that matters...
