Re: Re-posting the omitted posting to AmSci

From: Jan Velterop <velteropvonleyden_at_btinternet.com>
Date: Mon, 2 Dec 2002 20:40:45 +0000

    [Moderator's Note: This is a re-posting of a message that
    I failed to post on this list, though I replied to it in:
    http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/2428.html ]

Stevan,

It's this one:

-----Original Message-----
From: Jan Velterop
Sent: 27 November 2002 10:08
To: 'September 1998 American Scientist Forum'
Subject: Re: UK Research Assessment Exercise (RAE) review

The semantic whip "what is scientometrics?" may lash, but doesn't
quite crack, in my opinion. If Stevan says "I don't think that in
reminding us [...], Jan is not giving us an alternative to scientometric
quantification", does that mean that he *does* think I *do*?

Good. I didn't even mean to.

I meant to give an example of a complement to quantification.

Much of the trouble is not quantification per se, but the lack of
information needed to weight the votes. The journals (well, at least
some of them) lend a certain weight to their peer review, but this
peer review is almost always anonymous, and the reviewers may not even
be proper 'peers' in some cases. Stevan speculates that "Perhaps
reviewer-names could accrue some objective scientometric weight...".
I would perhaps remove the 'perhaps'. Disclosing the reviewers'
identities may have its own set of problems, but it could be a great
help in assessing the weight or significance of a review. Besides, it
may reveal possible conflicts of interest. All of BMC's medical
journals have open peer review, which works most satisfactorily, and
all of them also have a comments section that enables public, open
discussion.

The point of Faculty of 1000 is that open, secondary review of the
published literature by acknowledged leaders in the field, signed by
the reviewer, is seen by a growing number of researchers (to judge by
F1000's fast-growing usage figures) as a very meaningful addition to
quantitative data, and as a way to sort and rank articles in order of
importance. Of course one can subsequently quantify such qualitative
information. But what a known and acknowledged authority thinks of an
article is, to many, more interesting than what anonymous peer
reviewers think. Any research assessment exercise should seriously
look at resources such as the one Faculty of 1000 offers.
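
To make the point about quantifying such qualitative information
concrete, here is a minimal sketch in Python (purely illustrative:
the reviewer names, authority weights, ratings, and citation counts
below are all invented, and this is not anything F1000 actually
computes) of how signed reviews, weighted by who signed them, could
be folded into a citation-based ranking:

    # Hypothetical sketch only: all names and numbers are invented.
    from dataclasses import dataclass

    @dataclass
    class Review:
        reviewer: str   # signed review, so the reviewer is known
        rating: float   # e.g. 1 = interesting ... 3 = exceptional

    # Assumed per-reviewer authority weights (how such weights would
    # be derived is exactly the open scientometric question).
    authority = {"A. Expert": 2.0, "B. Newcomer": 1.0}

    def score(citations: int, reviews: list, w_cite: float = 0.5,
              w_review: float = 5.0) -> float:
        """Weighted sum of citations and authority-weighted ratings."""
        review_part = sum(authority.get(r.reviewer, 1.0) * r.rating
                          for r in reviews)
        return w_cite * citations + w_review * review_part

    articles = {
        "hidden jewel in an obscure journal": (3, [Review("A. Expert", 3.0)]),
        "well-cited but unreviewed article":  (40, []),
    }
    for title, (cites, revs) in sorted(articles.items(),
                                       key=lambda kv: -score(*kv[1])):
        print(f"{score(cites, revs):6.1f}  {title}")

With these (arbitrary) weights, the little-cited article endorsed by
a heavyweight reviewer outranks the well-cited but unreviewed one:
the 'hidden jewels' effect in miniature.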

Jan Velterop

> -----Original Message-----
> From: Stevan Harnad [mailto:harnad_at_ecs.soton.ac.uk]
> Sent: 26 November 2002 19:05
> To: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM_at_LISTSERVER.SIGMAXI.ORG
> Subject: Re: UK Research Assessment Exercise (RAE) review
>
>
> On Tue, 26 Nov 2002, Jan Velterop wrote:
>
>> Scientometrics and other metrics are about counting what can be
>> counted... So 'quantity' is dealt with. What about 'quality'?
>> Quality is relative, and based on judgement... utterly subjective,
>> so what we count is 'votes'. Do more votes mean a higher 'quality'
>> than fewer votes? Does it matter who does the voting?
>
> All good scientometric questions, it seems to me (even the one about
> how to identify and weight voting "authorities"). How to answer, if
> not scientometrically? (Or do you think it should just be a matter of
> individual opinion or taste?)
>
>> I think it [matters who does the voting], at least in these matters,
>> and therefore a review process is needed that ranks things like
>> originality, fundamental new insights, and yes, contributions to
>> wider dissemination and understanding as well, in order to base
>> important decisions on more than just quasi-objective measurements.
>
> Is this not among the things peer review is supposed to do? These are
> almost literally the questions that appear in many referee evaluation
> forms. Are you proposing a second round of review, a few years after
> a paper appears? By all means, if you have the time and resources. And
> certainly the RAE should include such secondary review data in its
> scientometric equation too, if they are available in time.
>
> But in what way is any of this an alternative to the quantitative,
> scientometric assessment of research quality and impact? The only
> ones who are not doing it scientometrically are the reviewers
> themselves (whether in the primary peer review or in the second one
> Jan recommends). But their judgments are just votes (i.e.,
> scientometric data) too, just as the journal-names are, in 1st-round
> peer review. Perhaps reviewer-names could accrue some objective
> scientometric weight too, for the second round.
>
> But this is all speculation about what the future scientometric
> analyses will yield, once we have these (open access) data available
> to do all these analyses on.
>
> For the RAE, unless Jan is recommending that the assessors do a 3rd
> round of direct review of all their submissions themselves,
> scientometrics (yes, counting!) seems to be the only way they can do
> their ranking (which is likewise counting).
>
>> Fortunately, in biology such secondary review is beginning to take
>> shape: Faculty of 1000 (www.facultyof1000.com). It shows that the
>> subjective importance of articles is often unconnected, or only very
>> loosely connected, to established scientometrics. It constantly
>> brings up 'hidden jewels', articles in pretty obscure journals that
>> are nonetheless highly interesting or significant.
>
> I would certainly want to use Faculty of 1000 ratings and citations
> in my multiple regression equation for impact, perhaps even giving
> them a special weight (if analysis shows they earn it!). But what is
> the point? This is just a further source of scientometric data!
>
>> I am sure that automated, more inclusive, counting of votes made
>> possible by open and OAI-compliant online journals and repositories
>> will help the visibility of those currently outside the ISI Impact
>> Factory universe, such as the journals from Bhutan. But it can't
>> replace judgement.
>
> No, it can't replace judgment. Like all other analyses, it can merely
> quantify the outcomes of judgments, and weigh them, against one
> another and against other measures. What else is there? Even the
> decision to browse, read, and cite is just a set of human judgments
> we are counting and trying to use to predict with. Predict what?
> Later human performance, and findings, and judgments, i.e., research
> impact.
>
> I don't think that in reminding us that all of this is based on human
> judgment (and, of course, on empirical reality, in the case of
> science), Jan is not giving us an alternative to scientometric
> quantification. He is just reminding us of what it is that we are
> quantifying!
>
> Stevan Harnad