Re: British Classification Soc post-RAE talk/discussion - 6 July (fwd)

From: Stevan Harnad <harnad_at_ecs.soton.ac.uk>
Date: Wed, 6 Jun 2007 18:07:23 +0100

On Tue, 5 Jun 2007, Loet Leydesdorff wrote:

>> SH:
>> "Publications, journal impact factors, citations, co-citations, citation
>> chronometrics (age, growth, latency to peak, decay rate), hub/authority
>> scores, h-index, prior funding, student counts, co-authorship scores,
>> endogamy/exogamy, textual proximity, download/co-downloads and their
>> chronometrics, etc. can all be tested and validated jointly, discipline by
>> discipline, against their RAE panel rankings in the forthcoming parallel
>> panel-based and metric RAE in 2008. The weights of each predictor can be
>> calibrated to maximize the joint correlation with the rankings."
>
> Dear Stevan,
>
> I took this from:
> Harnad, S. (2007) Open Access Scientometrics and the UK Research
> Assessment Exercise. In Proceedings of the 11th Annual Meeting of the
> International Society for Scientometrics and Informetrics (in press),
> Madrid, Spain; at
> http://eprints.ecs.soton.ac.uk/13804/
>
> It is very clear now: your aim is to explain the RAE ranking (as the
> dependent variable). I remain puzzled as to why one would wish to do
> so. One can expect Type I and Type II errors in these rankings; I
> would expect both to be on the order of 30% (given the literature).
> If you were able to reproduce ("calibrate") these rankings using
> multivariate regression, you would also reproduce the error terms.

Dear Loet,

You are quite right that the RAE panel rankings are themselves merely
predictive measures, not face-valid criteria, and are hence subject to
errors, noise and bias to varying degrees.

But the RAE panel rankings are the only thing the RAE outcome has been
based on for nearly two decades now! The objective is first to replace
the expensive and time-consuming panel reviews with metrics that give
roughly the same rankings. Then we can work on making the metrics even
more valid and predictive.
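
For concreteness, here is a minimal sketch of what such a calibration
could look like (Python, using synthetic data; the three metrics, the
weights, and the department count are purely illustrative assumptions,
not the actual RAE procedure):

    # Sketch: regress panel scores on a battery of candidate metrics,
    # then test whether the fitted weights reproduce the panel ranking.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_depts = 50  # hypothetical departments in one discipline

    # Candidate metrics as columns (e.g. citations, h-index, downloads).
    X = rng.normal(size=(n_depts, 3))

    # Simulated panel scores: driven by the metrics plus noise, standing
    # in for the parallel panel-based rankings of the 2008 exercise.
    true_w = np.array([0.5, 0.3, 0.2])
    panel_score = X @ true_w + rng.normal(scale=0.5, size=n_depts)
    panel_rank = panel_score.argsort().argsort()

    # Calibrate the metric weights by least squares against the panel...
    w, *_ = np.linalg.lstsq(X, panel_score, rcond=None)
    metric_rank = (X @ w).argsort().argsort()

    # ...and validate with a rank correlation, discipline by discipline.
    rho, _ = spearmanr(panel_rank, metric_rank)
    print(f"fitted weights: {w.round(2)}, Spearman rho: {rho:.2f}")

In the real exercise the fit would be done separately for each
discipline, with the rank correlation serving as the prima facie
validation criterion.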

First things first: if the panel rankings have been good enough for
the RAE, then metrics that give the same outcome should be at least
good enough too. Being far less costly and labor-intensive, and far
more transparent, they are vastly to be preferred (with a much reduced
role for the panels, confined to validity checking and calibration).

Then we can work on optimizing them.

Stevan

PS Of course there are additional ways of validating metrics, apart from
the RAE; moreover, only the UK has the RAE. But that also makes the UK
an ideal test-bed for prima facie validation of the metrics,
systematically, across fields and institutions.