Re: "Academics strike back at spurious rankings" (Nature, 31 May)

From: Stevan Harnad <harnad_at_ecs.soton.ac.uk>
Date: Sun, 3 Jun 2007 18:42:55 +0100 (BST)

On Sun, 3 Jun 2007, Loet Leydesdorff wrote:

> Yes, I agree that multiple regression is a classical technique. But one
> needs a dependent variable in that case which can be operationalized. Unlike
> the case of barometric pressure, we don't have an objective measure, but the
> standard has to be constructed.

Loet, we are beginning to repeat ourselves. I said that in the case
of weather forecasting, the barometric pressure is the independent
(predictor) variable and rain is the dependent (predicted) variable. We
first validate pressure as a predictor of rain, against rain itself,
and then once pressure is shown to correlate highly enough with rain,
we plan our picnics based on pressure, without having to wait for them
to be rained on.

The same is true with scientometrics. We take our battery of independent
variables -- the many candidate metrics -- and first validate them by multiple
regression against a criterion, the dependent variable. In the example I gave,
the criterion is the RAE panel rankings. Once we have validated our predictor
metrics (field by field), we can allocate top-sliced research funding (in the
UK dual-funding system) without having to waste the time and energies of the
panelists.
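[Editorial illustration: the validate-then-predict procedure described above can be sketched in pure Python. This is a minimal hypothetical example, not the actual RAE metrics, data, or weighting procedure: the two candidate metrics, the department figures, and the helper functions are all invented for illustration. The candidate metrics are regressed on the panel rankings (the criterion); if the fit is good enough, the fitted equation can then rank a new unit without convening the panel.]

```python
def fit_ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gauss-Jordan elimination with partial pivoting.
    Rows of X are observations; a column of 1s is prepended for the
    intercept, so the returned vector is [intercept, b1, b2, ...]."""
    X = [[1.0] + list(row) for row in X]
    n, p = len(X), len(X[0])
    # Augmented normal-equations matrix [X'X | X'y].
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)]
         + [sum(X[k][i] * y[k] for k in range(n))]
         for i in range(p)]
    for col in range(p):
        # Pivot on the largest entry in this column, then eliminate it
        # from every other row (Gauss-Jordan, so A ends up diagonal).
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(p):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][p] / A[i][i] for i in range(p)]

def predict(beta, row):
    return beta[0] + sum(b * x for b, x in zip(beta[1:], row))

def r_squared(beta, X, y):
    """Proportion of criterion variance explained by the metrics."""
    yhat = [predict(beta, row) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((a - h) ** 2 for a, h in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical data: two candidate metrics per department (say,
# citations per paper and downloads per paper) and the panel ranking
# used as the criterion (dependent variable) during validation.
metrics = [[10, 200], [15, 260], [7, 150], [20, 340], [12, 220], [18, 300]]
panel_rank = [3.1, 4.0, 2.4, 5.2, 3.5, 4.7]

# Validation phase: regress the criterion on the candidate metrics.
beta = fit_ols(metrics, panel_rank)
r2 = r_squared(beta, metrics, panel_rank)

# Prediction phase: if r2 is high enough, the validated metrics can
# stand in for the panel when ranking a new department.
new_department = [14, 250]
predicted = predict(beta, new_department)
```

In practice the validation would be done per field, on held-out data, and with many more candidate metrics; the point here is only the two-phase logic: calibrate the predictors against the criterion once, then drop the criterion.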

> All the validated measures seem predictors (independent variables) to me
> when one thinks within the model of multiple regression. What do you propose
> as the predicted variable?

See above.

Stevan Harnad
Received on Sun Jun 03 2007 - 18:44:24 BST
