Academic performance indicators are relied upon to provide an objective measure of prestige. However, the data they draw upon can be distorted by practices such as self-citation and spurious co-authorship.

Mayur Amin and Michael Mabe have identified several issues with the Impact Factor (IF), probably the most widely used (and abused) of all indicators (1). More recently, its derivation and transparency have been criticized in several articles. These criticisms apply to many bibliometric indicators, but most have focused on the IF because of its prominence.

Self-citation poses a political and ethical dilemma for bibliometrics: although it plays a vital role in research for both journals and authors, it can also be used to artificially inflate the ranking of a journal or an individual. While most self-citation is justified and necessary, the potential for abuse has made it a political issue.

Dropping your own name

In 1974, Eugene Garfield wrote of self-citation rates: “It says something about your field – its newness, size, isolation; it tells us about the universe in which a journal operates.” (2) While this continues to be true, some researchers have identified a link between self-citation and the overall citation levels an author receives. James Fowler and Dag Aksnes claim that: “a self-cite may yield more citations to a particular author, without yielding more citations to the paper in question” (3).

When self-citation is overused or blatant, it can be detected through bibliometric analysis and tends to attract significant attention. Several key researchers have called for bibliometric indicators to be calculated both with and without self-citations, in order to isolate their effect or to understand the reasons for the self-citation (4,5,6,7).
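
To make the with/without comparison concrete, here is a minimal sketch of how such a dual calculation might look for an author-level indicator like the h-index. The input format (papers carrying their citing papers’ author lists) and the definition of a self-citation used here are illustrative assumptions, not a standard bibliometric API.

```python
# A minimal sketch: an author's h-index with and without self-citations.
# The data layout below (papers as dicts with "authors" and "citations",
# where each citation records the citing paper's author list) is an
# assumption made for illustration.

def h_index(citation_counts):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for i, c in enumerate(sorted(citation_counts, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def h_with_and_without_self_cites(author, papers):
    """Return (h_all, h_excluding_self) for one author.

    Here a citation counts as a self-citation if the author appears
    among the citing paper's authors -- one common definition; others exist.
    """
    all_counts, excl_counts = [], []
    for paper in papers:
        if author not in paper["authors"]:
            continue
        cites = paper["citations"]  # list of citing papers' author lists
        non_self = [c for c in cites if author not in c]
        all_counts.append(len(cites))
        excl_counts.append(len(non_self))
    return h_index(all_counts), h_index(excl_counts)

# Example: two papers, the second heavily self-cited.
papers = [
    {"authors": ["A. Smith", "B. Jones"],
     "citations": [["C. Lee"], ["D. Kim"], ["A. Smith"]]},
    {"authors": ["A. Smith"],
     "citations": [["A. Smith"], ["A. Smith", "B. Jones"]]},
]
print(h_with_and_without_self_cites("A. Smith", papers))  # (2, 1)
```

The gap between the two numbers is exactly the kind of signal the researchers cited above propose to report.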

Another political consideration concerns the replication of reference lists between articles. This occurs when a set of references is deemed important enough to be included in almost every article in the field, sometimes even when the link to the article in question is tenuous and the author may not have read the paper at all. This adds numerous “extra” citations to the pool each year. Indeed, after analyzing the references in five issues of different medical journals, Gerald de Lacey, Christopher Record and James Wade found that citation errors were proliferating through the medical literature at an alarming rate (8).

Do we need a watchdog?

The potential for abuse suggests that we may need to regulate citations. At present, we rely on authors to self-regulate. But this is a sensitive issue.

What John Maddox has described as “the widespread practice of spurious co-authorship” (9) is another political aspect of research. In some extreme cases, as Murrie Burgan notes, articles list more than 100 authors (10). How can each of those authors have actively contributed to the article? Moreover, John Ioannidis has shown that the average number of authors per paper is increasing, suggesting that the problem is growing (11). And, according to research carried out by Richard Slone into authors listed on papers published in the American Journal of Roentgenology, the proportion of so-called “undeserved authors” rises as the list gets longer: 9% of authors on a three-author paper were undeserved, rising to 30% on papers with more than six authors (12).

Part of the problem is that a researcher’s personal success is intimately intertwined with his or her publication record. And as long as measures such as the h-index fail to distinguish between the first and the 30th author on a paper, undeserved co-authorship will continue. Some believe that the peer-review process should act as the governing body for research, asking journal editors and referees to act as bibliometric police. However, it can be very difficult to spot overactive self-citation, unrelated or incorrect references and erroneous authors while also assessing whether the quality of the research warrants publication.
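
One frequently discussed remedy is to weight credit by position in the author list rather than giving every co-author full credit. The harmonic scheme sketched below is just one proposal among several, and the function is illustrative rather than any established standard.

```python
# A sketch of position-weighted author credit: instead of every
# co-author receiving full credit for a paper, credit decays with
# position in the author list. Harmonic weighting is one proposed
# scheme among several; nothing here is a standard implementation.

def harmonic_credit(position, n_authors):
    """Credit for the author at 1-based `position` on an n-author paper."""
    normaliser = sum(1.0 / k for k in range(1, n_authors + 1))
    return (1.0 / position) / normaliser

# On a 30-author paper, the first author receives far more credit than
# the thirtieth, whereas a plain author count treats them identically.
print(round(harmonic_credit(1, 30), 3))   # ~0.25
print(round(harmonic_credit(30, 30), 3))  # ~0.008
```

An indicator built on such weights would at least remove the incentive to appear 30th on a paper one barely contributed to.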

Another option is to introduce a regulatory body, but the question remains: who should this be? Publishers or associations are possible candidates, yet it is far from clear whether an independent organization is needed to regulate the system.

As explained above, some researchers have suggested that metrics should be developed to account for excessive self-citation, or that cleaner data be used. In the former case, self-citations can be taken out and weighted averages introduced, but this can make the metric extremely complex. Meanwhile, publishers are working towards providing increasingly clean data, which makes such processes easier.
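
As a sketch of what such a weighted average might look like at the journal level, the snippet below down-weights self-citations (here, citations from the journal to itself) instead of counting them in full. The 0.5 weight and the input layout are arbitrary assumptions chosen for illustration.

```python
# A sketch of a journal-level citation average in which self-citations
# are down-weighted rather than counted in full. The 0.5 default weight
# and the (external, self) tuple layout are illustrative assumptions.

def weighted_citation_average(articles, self_cite_weight=0.5):
    """articles: list of (external_cites, self_cites) tuples, one per article."""
    if not articles:
        return 0.0
    total = sum(ext + self_cite_weight * own for ext, own in articles)
    return total / len(articles)

articles = [(10, 4), (3, 0), (0, 6)]
print(weighted_citation_average(articles))                      # 6.0
print(weighted_citation_average(articles, self_cite_weight=0))  # ~4.33
```

Even this toy version shows the complexity cost: the metric now depends on a weighting parameter that must itself be justified and agreed upon.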

In the end, is it worth all the effort? As long as the community as a whole can bring thoughtful analysis and interpretation, as well as a healthy dose of common sense, to bear on citations, such political considerations should be mitigated. As Winston Churchill once said: “If you have ten thousand regulations, you destroy all respect for the law.”

References:

(1) Amin, M. and Mabe, M. (2000) “Impact Factors: use & abuse”, Perspectives in Publishing, No. 1.
(2) Garfield, E. (1974) “Journal self-citation rates – there’s a difference”, Current Contents, No. 52, pp. 5–7.
(3) Fowler, J.H. and Aksnes, D.W. (2007) “Does self-citation pay?”, Scientometrics, Vol. 72, No. 3, pp. 427–37.
(4) Schubert, A., Glänzel, W. and Thijs, B. (2006) “The weight of author self-citations. A fractional approach to self-citation counting”, Scientometrics, Vol. 67, No. 3, pp. 503–14.
(5) Hyland, K. (2003) “Self-citation and self-reference: credibility and promotion in academic publication”, JASIST, Vol. 54, No. 3, pp. 251–59.
(6) Aksnes, D.W. (2003) “A macro study of self-citation”, Scientometrics, Vol. 56, No. 2, pp. 235–46.
(7) Glänzel, W., Thijs, B. and Schlemmer, B. (2004) “A bibliometric approach to the role of author self-citations in scientific communication”, Scientometrics, Vol. 59, No. 1, pp. 63–77.
(8) de Lacey, G., Record, C. and Wade, J. (1985) “How accurate are quotations and references in medical journals?”, British Medical Journal, Vol. 291, September, pp. 884–86.
(9) Maddox, J. (1994) “Making publication more respectable”, Nature, Vol. 369, No. 6479, p. 353.
(10) Burgan, M. (1995) “Who is the author?”, STC Proceedings, pp. 419–20.
(11) Ioannidis, J.P.A. (2008) “Measuring co-authorship and networking adjusted scientific impact”, PLoS ONE, Vol. 3, No. 7, Art. No. e2778.
(12) Slone, R.M. (1996) “Coauthors’ contributions to major papers published in the AJR: frequency of undeserved coauthorship”, American Journal of Roentgenology, Vol. 167, No. 3, pp. 571–79.