Re: UK Research Evaluation Framework: Validate Metrics Against Panel Rankings

From: Stevan Harnad <harnad_at_ecs.soton.ac.uk>
Date: Fri, 23 Nov 2007 16:06:27 +0000

I think in their spirited responses to my posting, my computer-science
colleagues have been addressing a number of separate questions as if
they were all one question:

(1) Does the conversion from panel-based RAE to metric RAE cost more
money, time and effort than the current RAE system?

Answer: No. Definitely much less.

(2) Could RAE's needs be served by simply harvesting the content that
is already on the web (whether in IRs or on any arbitrary website)?

Answer: Definitely not. Most of the target content is not on the web at
all yet.

(3) Is the purpose of the RAE to facilitate web search today?

Answer: No. The purpose is to assess and rank UK research output.

(4) Is the purpose of IRs to facilitate web search today?

Answer: No. Their purpose is to generate web content and to display
and audit institutional research output.

(5) Is the purpose of metrics to facilitate web search today?

Answer: No. Their purpose is to make the RAE less costly and cumbersome,
and perhaps fairer and more accurate.

(6) Is the problem of unique person identification on the web an
RAE/IR/metric issue?

Answer: No, but IRs accommodating RAE metric requirements could help
solve it.

Now, on to specific answers. First, the excerpt that triggered the tumult:

   Stevan Harnad (Southampton): [excerpt from
   http://openaccess.eprints.org/index.php?/archives/333-guid.html]
   "...[I]t is important -- indeed imperative -- that all University
   Institutional Repositories (IRs) now get serious about
   systematically archiving all their research output assets
   (especially publications) so they can be counted and assessed
   (as well as accessed!), along with their IR metrics (downloads,
   links, growth/decay rates, harvested citation counts, etc.)."

> Nigel Smart (Bristol): Yeah, let's reinvent the wheel and spend
> loads of taxpayers' money building a system which already exists.
> Has anyone heard of Google Scholar? Perhaps it would be easier
> for UUK to license the software off Google?

Is the system that already exists the one that is going to do the UK's
Research Assessment Exercise in place of the present one? Is Google
Scholar that system? Are all the publications of all UK researchers --
and all the publications that cite them -- in Google Scholar today?

No? Then maybe it would be a good idea if the assessment requirements of
RAE metrics required universities to require their researchers to
deposit all their publications in their IRs. That might even encourage
everyone else to do it too. Then Google Scholar would have all it needs
to do the rest -- for citations. (The other metrics will require more
input data, and usage stats.)

> Yorick Wilks (Sheffield): Correct point, and please note the
> connection to my point on person-ambiguity: readers should ask
> themselves how many pages of Google Scholar they have to go
> down to find SOMEONE ELSE'S papers!

Computer scientists are more conscientious than most in self-archiving
their publications *somewhere* on the web, *somehow*. But not all
(perhaps not even most) computer scientists are self-archiving yet,
and most other disciplines are even further behind. So Google Scholar
(and the web) are the wrong place to go today if you want to find most
papers. The idea is to change that. (And person-ambiguity is a problem,
but certainly not the main problem: absence of the target content is
the main problem.)

Institutions have the advantage that they can mandate a systematic
self-archiving policy for all their researchers. And the RAE -- especially
the metric RAE -- has the advantage that it gives institutions a strong
motivation to do it. And OAI-compliant, RAE-metrics-compliant IRs will
help provide the disambiguating tags too.

Then you *will* be able to find everyone's papers via Google Scholar
(etc.).

> Hamish Cunningham (Sheffield): I missed the preceding posts so
> sorry if I'm out of context, but the person ambiguity thing
> that Yorick refers to is key, and Google Scholar doesn't solve
> it. In experiments we've run here on various ways to harvest
> accurate bibliographies by far the best performance is from
> institutional pages, and increasing the quality and quantity
> of these would be a great help. Note the huge amount of work
> that's been done collating RAE lists - if these were all in our
> databases already... No wheels need inventing, as the software
> for institutional repositories is available already.

Missing the preceding posts seems to have been an advantage. The
foregoing comment was spot-on!

(But please note that the objective is not just to get the reference
-- authors, title, date, journal, etc. -- online, but the full text!)

> Ralph Martin (Cardiff): It wouldn't be hard for us personally
> to identify papers in Google Scholar, and claim "this is me". Each
> person could then send in links pointing to GS for their 4 most
> cited papers (or whatever other number was desired), together
> with GS's citation counts on a certain date for said papers.
> A fairly trivial piece of software could then analyse these
> numbers however thought fit (together with spot checks on the
> claims if they don't trust us).

Yes, that would all work splendidly -- if all UK research output were
already on the web, hence harvested by Google Scholar. Alas, it is
not. And that's the problem.

> RM: Yes, more complex metrics might
> be more accurate, but they would cost an awful lot more.

Cost more than what? The current profligate, panel-based RAE?

> RM: Yes,
> adding more factors might improve the results, but pattern
> classifiers can also degrade if too many indicators are used.

The idea is not to overconstrain the metric equation a-priori but to
*validate* it (rather than simply cherry-picking a few metrics
a-priori). So a rich, diverse battery should first be tested,
discipline by discipline, by regressing it against the parallel panel
rankings for each discipline, to initialize the beta weights on each
metric. Some weights may well turn out to be zero or near zero in some
disciplines, in which case those metrics can be dropped there. But the
cure for overconstraining is not to make arbitrary a-priori choices
when it is unnecessary. Once initialised, the weights can be calibrated
and optimized.
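
For illustration, here is a minimal sketch of that validation step (in
Python; the metric names, data layout and use of ordinary least squares
are all assumptions made for the sketch, not part of the proposal
itself):

    # Initialise per-discipline beta weights by regressing a battery of
    # candidate metrics against the parallel panel rankings.
    # Metric names and data layout are hypothetical.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    METRICS = ["citations", "downloads", "links", "growth_rate", "h_index"]

    def initialise_weights(metric_rows, panel_ranks):
        # metric_rows: (n_departments, n_metrics) array for one discipline
        # panel_ranks: the panel's ranking score for each department
        X = np.asarray(metric_rows)
        y = np.asarray(panel_ranks)
        model = LinearRegression().fit(X, y)
        # Near-zero weights flag metrics that contribute little in this
        # discipline and could be dropped after validation.
        return dict(zip(METRICS, model.coef_))

    # One regression per discipline, since the optimal weights may differ:
    #   weights[d] = initialise_weights(X[d], y[d]) for each discipline d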

> RM: Yes, odd people might have anomalous citation counts - but we
> are not using these here to judge individuals, rather averaging
> over a whole department, or university even.

And that is what the multiple regression does.

> RM: Like Nigel, I am
> really disappointed that many people want to make the process
> much more complex and waste so much public money on this - far
> more than will ever be saved by redirecting marginal money more
> accurately.

Who is proposing to make the RAE more complicated, wasteful and
expensive than it is? The metric proposal is intended to achieve the
exact opposite!

> Nigel Smart (Bristol): A rather cool (read addictive) thing we
> did was download "Publish or Perish", which uses Google Scholar
> I think. Play the following game... Rank your colleagues in
> order of what you think they should be in terms of brilliance.
> Then determine their H-index (or whatever) from the tool.
> Compare the two rankings. To my amazement the two are amazingly
> close. On the other hand we are quite well served by Google
> in our dept. Try typing the keyword "nigel" into Google. You
> get me as the third most important nigel in the whole world.
> How sad is that?

The H-index is an a-priori weighted formula. It may or may not be
optimal. Intuitive personal ranking of a few colleagues is not the
way to test this metric, or any other: Multiple regression of a full
complement of candidate metrics against the peer panel rankings over a
pandisciplinary database of the scale of the UK RAE is.
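
(For reference: the H-index is the largest h such that an author has h
papers with at least h citations each. A minimal sketch of the
computation, in Python:)

    # h-index: largest h such that h papers have >= h citations each.
    def h_index(citation_counts):
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Example: papers cited [10, 8, 5, 4, 3] times give h = 4.
    assert h_index([10, 8, 5, 4, 3]) == 4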

> Geraint A. Wiggins (Goldsmiths): For anyone who cares, I've
> written a little ditty in php that queries Google Scholar for
> the first 100 hits it finds and then tots up the citations. If
> you have a cruel and unusual name like mine, it's accurate; if
> your name is "John Smith", then it'll over-count for obvious
> reasons. Fill in your first name (not initials) and surnames
> in the obvious places in the URL:
> http://www.doc.gold.ac.uk/~mas02gw/cite.php?First=****&Sur=**** It
> would be trivial to make this focus on CS if anyone wants it -
> let me know. It currently doesn't do that because I publish in
> music and psychology too. Interestingly, sometimes Google
> produces different numbers on successive queries - I've not had
> time to try to understand why. But the second shot (ie if you
> refresh after the first time you query) seems to be consistent.

Focusing it on CS, or any other discipline, would be in vain if the
target content is not yet in Google Scholar.

> Emanuele Trucco (Dundee): Optimising existing processes instead
> of just throwing money at starting from scratch is something
> desperately needed - and not only in this case, but as a mental
> framework to teach in schools. Any fool can start new things,
> but the really needed part is taking them to successful completion
> (forgive me for not remembering the paternity of this quote).

Who is starting from scratch or throwing money? The RAE is already there,
and has been incomparably more expensive and time-consuming in the form
of panel submissions and review. The metric RAE saves most of that time
and money. Moreover, IRs cost a pittance per university, and depositing
costs only a few keystrokes. So what is the financial fuss about?

> Awais Rashid (Lancaster): There is also the Publish or Perish
> tool that provides interesting data and statistics in the same
> vein:

But it suffers from the same underlying problem: The absence of the
target content.

> Dave Cliff (Bristol): One problem with google scholar (and hence
> also PublishOrPerish) is that it extracts author names from
> pdf/ps files but is not clever enough to understand the ligature
> symbols that latex substitutes in for certain pairs of letters.
> So there are a whole bunch of citations to my papers that appear
> to be due to some bloke called "Dave Cli" because of the ff
> ligature. Luckily PublishOrPerish lets you do conjunctive
> searches, but you have to know about this problem beforehand
> to be able to know what other names to add as OR terms in the
> search. Other than that, PublishOrPerish is a very cool interface
> to google scholar (www.harzing.com).

No matter how clever a harvester or search engine, it cannot harvest or
search on what is not there.
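
(As an aside on the ligature problem: where the extracted text does
contain the Unicode ligature characters themselves, plain Unicode NFKC
normalisation recovers the letter pairs. A minimal sketch, assuming the
harvester has the raw extracted string:)

    # Typographic ligatures (e.g. U+FB00, LATIN SMALL LIGATURE FF)
    # decompose into plain letter pairs under NFKC normalisation.
    import unicodedata

    def deligature(text):
        return unicodedata.normalize("NFKC", text)

    assert deligature("Cli\ufb00") == "Cliff"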

> YW: This interchange shows exactly why mere citation counts are not
> good guides to research quality: a survey-of-the-field paper of 25
> years ago that happens to have caught the moment but have little
> or no originality or intellectual content may have vast numbers
> of citations, etc. etc., All this is well known in the citation
> business. H-index was intended to remedy this, can be calculated from
> GS at the touch of a mouse on the H-index site, is more stable with
> respect to wrong identifications of individuals, and (key point)
> can be automated without the SUBJECT HAVING TO IDENTIFY ANYTHING
> AT ALL! Just think of the time and effort savings all round in the
> research community.

All true and welcome, but it remains true that the H-index is still just
one unvalidated candidate metric among many. Stick it into the battery
of metrics for joint cross-validation against the panel rankings (or
other face-valid or validated criteria, if any exist), and let's
see how it does! If it turns out to outperform the other metrics, so be
it: life turns out to be a lot simpler! More likely, though, a carefully
weighted combination of metrics (including the components of the h-index
-- number of publications, number of years publishing, etc. etc.,
unpacked and weighted longhand) will need to be customised, discipline
by discipline.
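
For illustration, "unpacking" the h-index here just means exposing its
ingredients as separate regressors, so the validation step can weight
each one instead of accepting the h-index's built-in weighting. A
minimal sketch (the field names are hypothetical, and h_index is the
function sketched above):

    # Expose the h-index's ingredients as separate candidate metrics.
    def unpacked_features(citation_counts, first_year, last_year):
        counts = sorted(citation_counts, reverse=True)
        return {
            "n_publications": len(counts),
            "n_years_publishing": last_year - first_year + 1,
            "total_citations": sum(counts),
            "h_index": h_index(counts),
        }

    # These columns then join the metric battery regressed, discipline
    # by discipline, against the panel rankings as above.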


Stevan Harnad
AMERICAN SCIENTIST OPEN ACCESS FORUM:
http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html
    http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/

UNIVERSITIES and RESEARCH FUNDERS:
If you have adopted or plan to adopt a policy of providing Open Access
to your own research article output, please describe your policy at:
    http://www.eprints.org/signup/sign.php
    http://openaccess.eprints.org/index.php?/archives/71-guid.html
    http://openaccess.eprints.org/index.php?/archives/136-guid.html

OPEN-ACCESS-PROVISION POLICY:
    BOAI-1 ("Green"): Publish your article in a suitable toll-access journal
    http://romeo.eprints.org/
OR
    BOAI-2 ("Gold"): Publish your article in an open-access journal if/when
    a suitable one exists.
    http://www.doaj.org/
AND
    in BOTH cases self-archive a supplementary version of your article
    in your own institutional repository.
    http://www.eprints.org/self-faq/
    http://archives.eprints.org/
    http://openaccess.eprints.org/
Received on Fri Nov 23 2007 - 16:29:07 GMT
