Re: Future UK RAEs to be Metrics-Based

From: C.Oppenheim <C.Oppenheim_at_LBORO.AC.UK>
Date: Tue, 19 Sep 2006 15:40:59 +0100

The statement "I should like to point out that quantitative bibliometric
measures have usually not been found to be applicable in the humanities"
would be convincing if supported by references to studies in the
literature. I am not aware of any. Perhaps Mr Bensman could give us
chapter and verse on this?

Charles

Professor Charles Oppenheim
Head
Department of Information Science
Loughborough University
Loughborough
Leics LE11 3TU

Tel 01509-223065
Fax 01509-223053
e mail C.Oppenheim_at_lboro.ac.uk
----- Original Message -----
From: "Stevan Harnad" <harnad_at_ECS.SOTON.AC.UK>
To: <AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM_at_LISTSERVER.SIGMAXI.ORG>
Sent: Tuesday, September 19, 2006 3:33 PM
Subject: Re: Future UK RAEs to be Metrics-Based


> On Tue, 19 Sep 2006, Stephen J Bensman wrote:
>
>> I should like to point out that quantitative bibliometric measures have
>> usually not been found to be applicable in the humanities.
>
> The question at hand is a specific and purely empirical one: Is there a
> significant and sizeable correlation between RAE ranking, as currently
> determined by the RAE panels, and citation counts?
>
> So far, the answer, for every discipline tested to date (including
> several among the humanities) has been: yes.
>
>> First, despite plans to do so, ISI never developed a JCR for the A&HCI.
>
> Regrettable, but OA self-archiving mandates will remedy that.
>
>> Second, both the
>> 1981 assessment of US research-doctorate programs by the American Council
>> on Education, etc., and the 1993 assessment of these programs by the
>> National Research Council rejected using publication and citation counts
>> for the humanities.
>
> Then the question is: Did they reject it for good reasons? Did they try,
> and fail to get numbers that were proportional to what they were trying
> to evaluate? Or did they just not try?
>
>> The 1993 assessment substituted faculty awards for these
>> measures. In general, the humanities do not conform closely to typical
>> bibliometric distributions, being more random and scattered.
>
> The empirical question is: Are citation counts correlated with other
> measures of research quality/importance/impact in the humanities
> (including peer evaluation)?
>
>> This has generally been found to be the case in library use studies.
>
> This is not about library use, but about research use, by researchers,
> and, in particular these days: online use.
>
>> My own observation has been that, whereas variance in the sciences is
>> due to accepted "paradigms," such variance as there is in the humanities
>> is due to intellectual "fads."
>
> No matter: Is there a correlation between peer judgments of value and
> citation counts (and other metrics)?
>
>> I am afraid that one is reduced in the humanities to either subjective
>> evaluations or the acceptance of the subjective evaluations of others.
>> There are thankfully some things in this world not reducible to
>> quantitative laws.
>
> Deciding whether or not to use and cite a piece of work is also a
> "subjective judgment" (although the outcome of the use may not be!).
> The empirical question remains one about correlation: Is there a
> correlation between citation counts and subjective evaluations?
>
> Stevan Harnad
>
>
>> Stevan Harnad <harnad_at_ECS.SOTON.AC.UK>@LISTSERV.UTK.EDU> on 09/19/2006
>> 08:07:32 AM
>>
>> Please respond to ASIS&T Special Interest Group on Metrics
>> <SIGMETRICS_at_LISTSERV.UTK.EDU>
>>
>> Sent by: ASIS&T Special Interest Group on Metrics
>> <SIGMETRICS_at_LISTSERV.UTK.EDU>
>>
>>
>> To: SIGMETRICS_at_LISTSERV.UTK.EDU
>> cc: (bcc: Stephen J Bensman/notsjb/LSU)
>>
>> Subject: Re: Future UK RAEs to be Metrics-Based
>>
>> On Tue, 19 Sep 2006 l.hurtado_at_ED.AC.UK wrote:
>>
>> > --Humanities scholarly publishing is more diverse in venue/genre than
>> > in some other fields. Indeed, journals are not particularly regarded
>> > as quite so central, but only one among several respected and
>> > frequented genres, which include multi-author books, and (perhaps
>> > particularly) monographs.
>>
>> Citation counts can in principle -- and up to a point already do --
>> count citations to all these genres:
>>
>> (1) citations *from* articles *to* preprints, articles, chapters, and
>> books (already being partially indexed, e.g., by ISI)
>>
>> (2) citations *from* preprints, articles, chapters, books *to*
>> preprints, articles, chapters, and books (indexable in principle,
>> already partly indexed by citebase, citeseer, google scholar and
>> scopus, and will flourish dramatically once Open Access prevails)
>>
>> Hence whatever statistically significant RAE/citation correlations
>> and effect sizes Charles Oppenheim manages to find *despite* the weak and
>> partial citation coverage to date are actually evidence of the robustness
>> of the RAE/citation correlation in the fields that are less
>> article-based.
>>
>> > QUESTION: Are the studies that supposedly show such meaningful
>> > correlations actually drawing upon the full spread of publication
>> > genres appropriate to the fields in view?
>>
>> Not yet, and that's the point: Charles's findings are all the more
>> remarkable for being so robust, despite the weak signal!
>>
>> > (I'd be surprised but
>> > delighted were the answer yes, because I'm not aware of any mechanism
>> > in place, such as ISI in journal monitoring, for surveying and counting
>> > in such a vast body of material.)
>>
>> The point is that citation coverage right now is most definitely
>> incomplete and insufficient. But that can (and will) only improve
>> (especially under pressure from the RAE, and OA!). Meanwhile, though,
>> the successful demonstration of strong correlations even based on the
>> partial coverage is very promising evidence.
>>
>> > I'm not pushing at all for the labour-intensive RAE of the past.
>>
>> Bravo. That means 80% agreement already!
>>
>> > Indeed, if the question is not how do individual scholars stack up in
>> > comparison to others in their field (which the RAE actually wasn't
>> > designed to determine), but instead how can we identify depts into
>> > which a disproportionate amount of govt funding should be pumped, then
>> > I think in almost any field a group of informed scholars could readily
>> > determine the top 5-10 places within 30 minutes, and with time left
>> > over for coffee.
>>
>> For all the UK departments, and in fair proportion to their merits and
>> needs?
>> Or just for a familiar few?
>>
>> And would several such informed-scholar circles agree on their rankings
>> (with one another, or with the current RAE rankings)?
>>
>> > I'm just asking for more transparency and evidence behind the
>> > enthusiasm for replacing RAE with "metrics".
>>
>> A group of informed scholars over coffee does not strike me as the height
>> of transparency and evidence...
>>
>> However, in validating the new weighted metric equation, and adjusting
>> it for the needs of each discipline, one of the criteria against which
>> it will be validated is of course informed peer judgments: The metric
>> equation should not be at odds with informed peer judgment (nor should
>> there be marked discrepancies among metrics themselves, at least among
>> those we assume to be measuring the same sort of thing, such as downloads
>> and citations).
>>
>> In general, with multiple regression equations (which, by the way,
>> capture only linear effects, unless orthogonal polynomials guessing at
>> nonlinear relations are used), one wants the measures that are meant to
>> measure the same sort of thing to be correlated with one another, but one
>> does not want the correlation to be *too* high, otherwise the measures
>> are redundant: Optimally, they should be cross-checks on one another, but
>> also each should be making its own unique contribution to the prediction,
>> over and above corroborating the rest. And of course the weight of each
>> should be adjustable in accordance with the specific profile of the
>> discipline and its needs and values.
>>
>> For example, exogamy might be more of a virtue in some fields than
>> others. Some fields may be more authority-based or co-citation
>> authority-based than others. For some fields, steep early uptake may be
>> predictive, for others, longevity, etc. This will all be brought into
>> focus by the metric validation and calibration and customization
>> phase that will have to precede the use of the scientometric equation
>> for evaluation -- exactly as validation, standardization and the
>> creation of norms and benchmarks must be done in biometrics and
>> psychometrics before using the metrics for clinical or evaluative
>> purposes.
>>
>> We are talking about a rich new OA world of online performance
>> indicators and predictors sitting on top of an even richer primary
>> database: the research itself.
>>
>> Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open
>> Research Web: A Preview of the Optimal and the Inevitable. In: Jacobs,
>> N. (Ed.) Open Access: Key Strategic, Technical and Economic Aspects,
>> Chapter 20. Chandos. http://eprints.ecs.soton.ac.uk/12453/
>>
>> Stevan Harnad
>>
>> > Larry
>> >
>> > Quoting "C.Oppenheim" <C.Oppenheim_at_LBORO.AC.UK>:
>> >
>> > > The correlation is between number of citations in total (and average
>> > > number of citations per member of staff) received by a Department
>> > > over the RAE period (1996-2001) and the RAE score received by the
>> > > Department following expert peer review. Correlation analyses are done
>> > > using Pearson or Spearman correlation coefficients. The fact that so
>> > > few humanities scholars publish journal articles does not affect this
>> > > result.
>> > >
>> > > A paper on the topic is in preparation at the moment.
>> > >
>> > > What intrigues me is why there is so much scepticism about the notion.
>> > > RAE is done by peer review experts. Citations are also done by
>> > > (presumably) experts who choose to cite a particular work. So one
>> > > would expect a correlation between the two, wouldn't one? What it
>> > > tells us is that high quality research leads to both high RAE scores
>> > > AND high citation counts.
>> > >
>> > > I do these calculations (and I've covered many subject areas over the
>> > > years, but not biblical studies - something for the future!) in a
>> > > totally open-minded manner. If I get a non-significant or zero
>> > > correlation in such a study in the future, I will faithfully report
>> > > it. But so far, that hasn't happened.
>> > >
>> > > Charles
>> > >
>> > > Professor Charles Oppenheim
>> > > Head
>> > > Department of Information Science
>> > > Loughborough University
>> > > Loughborough
>> > > Leics LE11 3TU
>> > >
>> > > Tel 01509-223065
>> > > Fax 01509-223053
>> > > e mail C.Oppenheim_at_lboro.ac.uk
>> > > ----- Original Message -----
>> > > From: <l.hurtado_at_ED.AC.UK>
>> > > To: <AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM_at_LISTSERVER.SIGMAXI.ORG>
>> > > Sent: Monday, September 18, 2006 8:37 PM
>> > > Subject: Re: Future UK RAEs to be Metrics-Based
>> > >
>> > >
>> > School of Divinity, New College
>> > University of Edinburgh
>> > Mound Place
>> > Edinburgh, UK. EH1 2LX
>> > Office Phone: (0)131 650 8920. FAX: (0)131 650 7952
>> >
>>
>
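
For concreteness, the departmental correlation described above (total
citations over the RAE period, or citations per member of staff, against the
RAE rating, tested with Pearson or Spearman coefficients) amounts to a
calculation like the following minimal Python sketch. The departmental
figures are invented purely for illustration; they are not RAE data.

    # Minimal sketch of the departmental citation/RAE correlation described
    # above. All figures are invented; they are not RAE data.
    from scipy.stats import pearsonr, spearmanr

    # Hypothetical departments: total citations 1996-2001 and RAE rating
    citations = [1200, 850, 430, 300, 150, 90]    # total citations per department
    rae_score = [7, 6, 5, 5, 4, 3]                # RAE rating on an ordinal scale

    r, r_p = pearsonr(citations, rae_score)       # linear correlation
    rho, rho_p = spearmanr(citations, rae_score)  # rank-order correlation

    print(f"Pearson r = {r:.2f} (p = {r_p:.3f})")
    print(f"Spearman rho = {rho:.2f} (p = {rho_p:.3f})")

The same calculation works equally well with citations per member of staff
as the predictor.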
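The weighted multi-metric equation discussed further up the thread can be
sketched the same way: regress the peer-review outcome on several metrics,
check that metrics meant to measure the same sort of thing correlate with
one another without being redundant, and read off the fitted weights. The
figures and the two chosen metrics (citations and downloads) below are
hypothetical, and ordinary least squares merely stands in for whatever
estimator a real validation exercise would adopt.

    # Sketch of a weighted metric equation: peer-review rank regressed on
    # several metrics, with a check that the metrics cross-validate one
    # another without being redundant. All data are invented for illustration.
    import numpy as np

    citations = np.array([1200, 850, 430, 300, 150, 90], dtype=float)
    downloads = np.array([9000, 7000, 4100, 2500, 1600, 800], dtype=float)
    rae_rank = np.array([7, 6, 5, 5, 4, 3], dtype=float)  # peer-review outcome

    # Metrics should correlate (cross-checks on one another), but not be so
    # close to 1.0 that one of them adds nothing to the prediction.
    print("citations vs downloads r =",
          round(float(np.corrcoef(citations, downloads)[0, 1]), 3))

    # Ordinary least squares: rae_rank ~ b0 + b1*citations + b2*downloads
    X = np.column_stack([np.ones_like(citations), citations, downloads])
    coef, *_ = np.linalg.lstsq(X, rae_rank, rcond=None)
    print("fitted weights (intercept, citations, downloads):", coef.round(4))

    # These weights are what a discipline-by-discipline validation and
    # calibration phase would adjust before any evaluative use.
    predicted = X @ coef
    print("predicted vs actual RAE ranks:", list(zip(predicted.round(2), rae_rank)))
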
Received on Tue Sep 19 2006 - 15:51:32 BST
