When Eugene Garfield devised the Impact Factor (IF) in 1955 to help select journals for the Science Citation Index, he had no idea that ‘impact’ would become so controversial.

The IF ranks journals by how frequently their recently published papers are cited over a defined period. In recent years, however, certain misuses of the IF have come to light, most notably its adoption as a performance-measurement tool for researchers. Garfield himself has noted that the IF was never intended to assess individuals (1).
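The article does not spell out the calculation, but the widely used two-year definition is, roughly, the following (the symbols and the worked numbers below are illustrative, not taken from the article):

\[
\mathrm{IF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items the journal published in years } Y-1 \text{ and } Y-2}{\text{number of citable items the journal published in years } Y-1 \text{ and } Y-2}
\]

So, as a purely illustrative example, a journal whose papers from the two preceding years attracted 500 citations in year \(Y\), from 200 citable items, would have an IF of 2.5 for that year.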

Assessing individuals

In a letter to Nature, Professor David Colquhoun of the Department of Pharmacology, University College London, voiced his concerns about the way IFs are being misused to assess people (2). According to him, it is all part of a worrying trend to manage universities like businesses, measuring scientists against key performance indicators. “IFs are of interest only to journal editors. They are a real problem when used to assess people,” he says.

This becomes clear when one looks behind the figures. Bert Sakmann may have won a Nobel Prize in 1991, but under some current assessment criteria he would have been unemployed long before that happened. From 1976 to 1985 he published between zero and six papers per year (an average of 2.6). Yet despite this low output, the papers he did publish during those years were scientifically important.

Problem of perception

The real problem may be one of perception. Colquhoun says, “No one knows how far IFs are being used to assess people, but young scientists are obsessed with them. Whether departments look at IFs or not is irrelevant; the reality is that people perceive this to be the case and work towards getting papers into good journals rather than writing good papers. This distorts science itself: it is a recipe for short-termism and exaggeration.”

He continues, “Good departments don’t measure applicants or staff by arbitrary calculations at all. All universities should select by references and assessment of papers, and those that already do so should publicly declare this to ease the fears of applicants.”

Thomson Scientific itself addresses the scope of the IF and its potential for misuse in an essay by Eugene Garfield published on the company's website. "Thomson Scientific does not depend on the Impact Factor alone in assessing the usefulness of a journal, and neither should anyone else," the essay states (4). It recognizes that while the IF has been used increasingly in academic evaluation in recent years, the metric provides only an approximation of the prestige of the journals in which individuals have published; it is not an assessment tool for the individuals themselves.

Metrics will never provide a holistic picture of an individual scientist or a journal, and they should certainly not dictate how science is done. They can, however, serve as an initial indicator and thus a starting point for further discussion or assessment.

References:

(1) Garfield, E. (2005) "The agony and the ecstasy: the history and meaning of the Journal Impact Factor", International Congress on Peer Review and Biomedical Publication, Chicago, September 16, 2005.
(2) Colquhoun, D. (2003) "Challenging the tyranny of impact factors", Nature, 423, 479.
(3) Colquhoun, D. (2007) "How should universities be run to get the best out of people?", Physiology News, 69, 12–14.
(4) Garfield, E. "The Thomson Scientific Impact Factor", Thomson Scientific website.