Re: Query about journal (not author) self-citation rates

From: Small, Henry <>
Date: Tue, 25 Mar 2003 19:04:24 +0000

I'm not aware of any systematic studies of these issues, although there are
tons of journal citation studies based on the JCR. My guess is that authors
alter their citation patterns for specific journals because they think
readers will be more familiar with the prior literature in that journal, and
they a) don't want to be seen as not citing something they should cite, or
b) want to cite what's relevant to readers. I.e., it's author-driven, not
editor-driven.

Henry Small
Institute for Scientific Information

---------- Forwarded message ----------
Date: Tue, 25 Mar 2003 15:01:27 +0000 (GMT)
From: Stevan Harnad <>
To: Lib Serials list <serialst_at_LIST.UVM.EDU>
Newsgroups: bionet.journals.note
Subject: Query about journal (not author) self-citation rates

Author self-citation rates are easily calculated and corrected for.
One can always subtract self-citations from an author's citation
count. But what about journal self-citations (by which I mean
articles in a journal citing other articles in that same journal)?
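The subtraction described above is simple to carry out given per-paper
citation data. As a minimal sketch (not from the original post -- the
function name and data layout here are hypothetical), an author's corrected
count might be computed like this:

```python
# Hypothetical helper: correct an author's citation count by excluding
# self-citations. `citations` is assumed to be a list of
# (citing_authors, cited_author) pairs -- one per citing paper.
def corrected_citation_count(author, citations):
    """Count citations to `author` from papers on which `author`
    is not a co-author (i.e. excluding self-citations)."""
    return sum(1 for citing_authors, cited in citations
               if cited == author and author not in citing_authors)

# Example: Jones is cited three times, once by a paper Jones co-wrote.
cites = [({"Smith"}, "Jones"), ({"Jones", "Kim"}, "Jones"), ({"Lee"}, "Jones")]
print(corrected_citation_count("Jones", cites))  # -> 2
```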

In both cases -- author self-citation and journal self-citation -- the
self-citations may be legitimate and necessary, or they may be
excessive and inflated. In the case of journals, it may well be that the
majority of the important and relevant work happens to be done in the
pages of that journal.

But because journals are often evaluated on the basis of their impact
factors (by libraries choosing which journals to purchase; by authors
choosing which journals to submit to; and by grant-funders and research
assessors choosing which research and researchers to hire, fund, and
promote), there is every temptation to get those journal impact factors
as high as possible. The legitimate way is to attract the best research
by maintaining the best peer-review standards, but a short-cut is to
encourage authors to cite the journal more often in their articles
(as a condition of, or inducement for, acceptance in that journal).

Which leads me to my question: Has anyone done a systematic analysis
to test for this? One could calculate average rates for (S) journals
citing themselves (articles in the same journal, not self-citations
by its authors), (T) journals citing *to* other journals, (B) journals
cited *by* other journals (this could be done across as well as within
fields or even subfields). This could perhaps also be fine-tuned by the
citation-rates of the authors in the journals (their personal t and b
rates, across all their papers). This would give a preliminary picture
of which journals have inflated S-rates, relative to others, perhaps
weighted by the other factors, including Google-like "authorities",
namely high-impact, uninflated journals that can be used as benchmarks.
Even the possibility that a journal's higher S-rate is because it is the
only one in its subfield (or the only one at its level in the subfield)
could be tested using triangulation with the above variables.
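The S, T, and B rates proposed above could be computed from a
journal-to-journal citation table. The following is a rough sketch (not
from the original post -- the function name, data layout, and the exact
normalisations are assumptions: S and T as fractions of a journal's
outgoing references, B as the fraction of its incoming citations that come
from other journals):

```python
from collections import defaultdict

def journal_citation_rates(edges):
    """Given (citing_journal, cited_journal, count) triples, return
    per-journal rates:
      S -- share of the journal's outgoing references that cite itself,
      T -- share of its outgoing references that cite other journals,
      B -- share of its incoming citations that come from other journals.
    """
    out_total = defaultdict(int)   # references given, per citing journal
    self_cites = defaultdict(int)  # journal citing itself
    in_total = defaultdict(int)    # citations received, per cited journal
    in_other = defaultdict(int)    # citations received from other journals
    for src, dst, n in edges:
        out_total[src] += n
        in_total[dst] += n
        if src == dst:
            self_cites[src] += n
        else:
            in_other[dst] += n
    rates = {}
    for j in set(out_total) | set(in_total):
        s = self_cites[j] / out_total[j] if out_total[j] else 0.0
        t = (1.0 - s) if out_total[j] else 0.0
        b = in_other[j] / in_total[j] if in_total[j] else 0.0
        rates[j] = {"S": s, "T": t, "B": b}
    return rates

# Toy example: journal A cites itself heavily (8 of 10 references).
edges = [("A", "A", 8), ("A", "B", 2), ("B", "A", 1), ("B", "B", 1)]
print(journal_citation_rates(edges))
```

Comparing a journal's S rate against field averages, or against a set of
benchmark journals as suggested above, would then be straightforward
aggregation over the resulting table.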

Does anyone know of such studies? (Or of evidence of encouraging
self-citation in any way?)

It goes without saying that once the journal literature is open-access,
potential journal-based biases like this will be far less consequential,
because there will be many direct measures of a paper's or author's
research impact, among which the citation impact factor of the journal
in which the paper appeared will be a relatively minor one.

Stevan Harnad
Received on Tue Mar 25 2003 - 19:04:24 GMT
