Issue 10 – March 2009



The politics of bibliometrics

As bibliometric indicators gain ground in the measurement of research performance and quality, and researchers and editors understand the importance of citations in these indicators, the potential for citation manipulation is bringing a political dimension to the world of bibliometrics. Research Trends explores the effect of excessive self-citation and spurious co-authorship on citation patterns.


Academic performance indicators are relied upon for their objective insight into prestige. However, the data that they draw upon can be affected by practices such as self-citation and spurious co-authorship.

Mayur Amin and Michael Mabe have identified and discussed several issues with the Impact Factor (IF), probably the most widely used (and abused) of all indicators (1). More recently, its derivation and transparency have been criticized in several articles. These criticisms apply to many bibliometric indicators, but most have focused on the IF because of its prominence.

Self-citation poses a political and ethical dilemma for bibliometrics: although it plays a vital role in research for both journals and authors, it can also be seen as a way to artificially increase the ranking of a journal or an individual. While most self-citation is justified and necessary, the potential for abuse has become a political issue for bibliometrics.

Dropping your own name

In 1974, Eugene Garfield wrote of self-citation rates: “It says something about your field – its newness, size, isolation; it tells us about the universe in which a journal operates.” (2) While this continues to be true, some researchers have found a link between self-citation and the overall citation levels an author receives. James Fowler and Dag Aksnes claim that “a self-cite may yield more citations to a particular author, without yielding more citations to the paper in question” (3).

When self-citation is overused or blatant it can be detected by bibliometric indicators, generating significant attention. Several key researchers have called for bibliometric indicators to be calculated both with and without self-citations to identify their effects or to understand the reason for the self-citation (4,5,6,7).
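In practice, calculating an indicator “with and without self-citations” amounts to filtering out citing–cited pairs that share an author. A minimal sketch of the idea (the papers, authors and citation links here are invented for illustration; real analyses draw on databases such as Scopus or Web of Science):

```python
# Toy illustration: citation counts with and without author self-citations.
# All identifiers are invented for the example.

papers = {
    "P1": {"authors": {"Smith", "Jones"}},
    "P2": {"authors": {"Smith"}},
    "P3": {"authors": {"Lee"}},
}

# (citing paper, cited paper) pairs
citations = [("P2", "P1"), ("P3", "P1"), ("P3", "P2")]

def citation_counts(papers, citations, exclude_self=False):
    counts = {pid: 0 for pid in papers}
    for citing, cited in citations:
        shared = papers[citing]["authors"] & papers[cited]["authors"]
        if exclude_self and shared:
            continue  # skip citations where an author cites their own work
        counts[cited] += 1
    return counts

print(citation_counts(papers, citations))
# {'P1': 2, 'P2': 1, 'P3': 0}
print(citation_counts(papers, citations, exclude_self=True))
# {'P1': 1, 'P2': 1, 'P3': 0} – P2's citation of P1 drops out (shared author Smith)
```

Comparing the two sets of counts makes the size of the self-citation effect visible, which is exactly what the researchers cited above propose.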

Another political consideration concerns the replication of reference lists between articles. This occurs where a set of references is deemed to be important enough to be included in almost every article in the field. It sometimes happens even if there is only a tenuous link to the article in question, and when the author may not even have read the paper. This adds numerous “extra” citations to the pool each year. In fact, after analyzing the references in five issues of different medical journals, Gerald de Lacey, Christopher Record and James Wade found that errors were proliferating through medical literature into other articles at an alarming rate (8).

Do we need a watchdog?

The potential for abuse suggests that we may need to regulate citations. At present, we rely on authors to self-regulate. But this is a sensitive issue.

What John Maddox has described as “the widespread practice of spurious co-authorship” (9) is another political aspect of research. In some extreme cases, as Murrie Burgan indicates, articles list more than 100 authors (10). How can each of those authors have actively contributed to the article? Moreover, John Ioannidis has shown that the average number of authors per paper is increasing, indicating that the problem is growing (11). And, according to research carried out by Richard Slone into authors listed on papers published in the American Journal of Roentgenology, the number of so-called “undeserved authors” rises as the list gets longer: 9% of authors on a three-author paper were undeserved, rising to 30% on papers with more than six authors (12).

Part of the problem is that a researcher’s personal success is intimately intertwined with his or her publication record. And as long as measures such as the h-index fail to distinguish between the first and the 30th author on a paper, undeserved co-authorship will continue. Some believe that the peer-review process should act as the governing body for research, asking journal editors and referees to act as bibliometric police. However, it can be very difficult to spot overactive self-citation, unrelated or incorrect references and undeserved authors while also assessing whether the quality of the research warrants publication.
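The h-index’s indifference to author position follows directly from its definition: a researcher has index h if h of his or her papers have each been cited at least h times. A minimal sketch (the citation counts are invented) shows that the input is just a list of counts; whether the researcher was the first or the 30th author on each paper plays no role:

```python
# Minimal h-index computation. Citation counts are invented for the example.
# Note the input carries no information about author position.

def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
print(h_index([3, 0, 6, 1, 5]))   # 3: three papers with at least 3 citations each
```

A paper on which the researcher is the 30th of 100 authors counts exactly as much as a solo paper, which is the loophole that undeserved co-authorship exploits.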

Another option is to introduce a regulatory body, but the question remains: who should this be? Publishers or associations are obvious candidates, yet it is far from clear whether an independent organization is needed to regulate the system.

As explained above, some researchers have suggested that metrics be developed that account for excessive self-citation, or that cleaner data be used. In the former case, self-citations can be removed and weighted averages introduced, but this can make the metric extremely complex. Meanwhile, publishers are working towards providing increasingly clean data, which makes such analyses easier.

In the end, is it worth all the effort? As long as the community as a whole can bring thoughtful analysis and interpretation, as well as a healthy dose of common sense, to bear on citations, such political considerations should be mitigated. As Winston Churchill once said: “If you have ten thousand regulations, you destroy all respect for the law.”


(1) Amin, M., Mabe, M. (2000) “Impact Factors: use & abuse”, Perspectives in Publishing, No. 1.
(2) Garfield, E. (1974) “Journal self citation rates – there’s a difference”, Current Contents, No. 52, pp. 5–7.
(3) Fowler, J.H. and Aksnes, D.W. (2007) “Does self-citation pay?”, Scientometrics, Vol. 72, No. 3, pp. 427–37.
(4) Schubert, A., Glanzel, W. and Thijs, B. (2006) “The weight of author self-citations. A fractional approach to self-citation counting”, Scientometrics, Vol. 67, No. 3, pp. 503–14.
(5) Hyland, K. (2003) “Self citation and self reference: credibility and promotion in academic publication”, JASIST, Vol. 54, No. 3, pp. 251–59.
(6) Aksnes, D.W. (2003) “A macro study of self citation”, Scientometrics, Vol. 56, No. 2 , pp. 235–46.
(7) Glanzel, W., Thijs, B. and Schlemmer, B. (2004) “A bibliometric approach to the role of author self-citations in scientific communication”, Scientometrics, Vol. 59, No. 1, pp. 63–77.
(8) de Lacey, G., Record, C. and Wade, J. (1985) “How accurate are quotations and references in medical journals?”, British Medical Journal, Vol. 291, September, pp. 884–86.
(9) Maddox, J. (1994) “Making publication more respectable”, Nature, Vol. 369, No. 6479, p. 353.
(10) Burgan, M. (1995) “Who is the author?”, STC Proceedings, pp. 419–20.
(11) Ioannidis, J.P.A. (2008) “Measuring co-authorship and networking adjusted scientific impact”, PLoS ONE, Vol. 3, No. 7, Art. No. e2778.
(12) Slone, R.M. (1996) “Coauthors’ contributions to major papers published in the AJR: frequency of undeserved coauthorship”, American Journal of Roentgenology, Vol. 167, No.3, pp. 571–79.

Pleased to cite you: the social side of citations

The advent of robust data sources for citation analysis and computational tools for social network analysis in recent years has reawakened an old question in the sociometrics of science: how socially connected are citers to those that they cite? Research Trends talks to those in the know.


Charles Oppenheim

While social connectedness correlates with citation counts, science is still more about what you know than who you know. A recent investigation of the social and citation networks of three individual researchers concluded that while a positive correlation exists between social closeness and citation counts, these individuals nevertheless cited widely beyond their immediate social circle (1).

Professor Charles Oppenheim comments on the motivation for this study and its main findings: “Our research started from the hypothesis that people were more likely to cite those close to them, forming so-called ‘citation clubs’ of colleagues in the same department or research unit. There is an allegation that such citation clubs distort citation counts. We took as our primary target the Centre for Information Behaviour and the Evaluation of Research (CIBER) group of researchers based at University College London, well known for their work in deep log analysis.

“The research was quite novel because it used social network analysis (SNA) techniques and UCINET SNA software to analyze the results from questionnaires we sent to CIBER group members and people they had cited. We found no evidence of a citation club – CIBER researchers aren't necessarily socially close to the researchers they cite. However, it must be stressed that this was a small-scale experiment and cannot be generalized to all subject areas, or indeed to anyone apart from the CIBER group.”

A circle of friends and colleagues

Blaise Cronin

Blaise Cronin, Dean and Rudy Professor of Information Science at Indiana University, US, and newly appointed Editor-in-Chief of the Journal of the American Society for Information Science and Technology, agrees that both social and intellectual connections affect citation. “We certainly don’t cite authors just because they are colleagues or friends, but all things being equal, most of us would probably give the nod to those whom we know personally.

“Our colleagues, co-workers, trusted assessors and friends are often to be found nearby – in the lab, along the faculty corridor. Even in an age of hyper-networking, place and physical proximity play a part in determining professional ties and loyalties. And those bonds, in turn, can shape our citation practices.

“Co-citation maps do not merely depict intellectual connections between authors; inscribed in them, in invisible ink as it were, are webs of social ties. A number of bio-bibliometric studies (2) have attempted to combine sociometric and scientometric data to reveal these ties. As the digital infrastructure evolves, we may soon see the emergence of a new sub-field, bio-bibliometrics, and the first generation of socio-cognitive maps of science.”

Paying an intellectual debt

Howard D. White

Howard D. White, Professor Emeritus at the College of Information Science & Technology at Philadelphia’s Drexel University, US, has been interested in the social dimension of citation for some time. His work on the social and citation structure of an interdisciplinary group established to study human development concluded that citations are driven more by intellectual than social ties (3).

White explains: “There is no doubt that citation networks and social networks often overlap. Given the specialization of research fields, how could this not be the case? But no scientist or scholar would fail to cite a useful work simply because it was by a contemporary they had not met or a dead predecessor they could not have met. Citations are made to buttress intellectual points, and perceived relevance toward that end is far more important than social ties in determining who and what gets cited.”

As the nascent field of bio-bibliometrics continues to grow, we will come to a better understanding of the motivations underlying the practice of citation. Yet it is already clear that, in the main, citations mark the acknowledgement of intellectual debt to those who have gone before, rather than mere whimsy: it really is all about what you know, not who you know.


(1) Johnson, B. and Oppenheim, C. (2007) “How socially connected are citers to those that they cite?”, Journal of Documentation, Vol. 63, No. 5, pp. 609–37.
(2) Cronin, B. (2005) “A hundred million acts of whimsy?”, Current Science, Vol. 89, No. 9, pp. 1505–09.
(3) White, H. D. (2004) “Does citation reflect social structure? Longitudinal evidence from the ‘Globenet’ interdisciplinary research group”, Journal of the American Society for Information Science and Technology, Vol. 55, No. 2, pp. 111–26.

Obama’s “Dream Team”

Obama’s new senior science advisory team brings together some of the most successful and influential scientists in the US, ushering in a new era where science is at the centre of policy. We look at the track records of five of the appointees.


New US President Barack Obama’s choices for senior science advisory posts in his new government include some of the most prolific and high-impact scientists working in the US today, earning the group the nickname of Obama’s “Dream Team”.

In his weekly radio address in December 2008, Obama vowed to “put science at the top of our agenda [because] science holds the key to our survival as a planet and our security and prosperity as a nation”.

Environment on the agenda

As Assistant to the President for Science and Technology and Director of the White House Office of Science and Technology Policy, John P. Holdren is Obama’s top science advisor. Based at the Kennedy School of Government at Harvard University, Holdren is a physicist whose publications on sustainable energy technology and energy policy have featured frequently in Science; his seminal 1971 article (with population biologist Paul Ehrlich) entitled “Impact of population growth” (1) continues to be cited strongly (with more than 30 citations during 2007).

Holdren was recently president of the American Association for the Advancement of Science (AAAS) and then chairman of its Board of Directors. In a statement on the AAAS website, the Association’s Chief Executive Officer Alan Leshner noted: “John Holdren’s expertise spans so many issues of great concern at this point in history – climate change, energy and energy technology, nuclear proliferation.”

Another past president of the AAAS, Jane Lubchenco, assumes the role of National Oceanic and Atmospheric Administration (NOAA) Administrator. The first woman to head the agency, Lubchenco has an impressive list of publications in marine ecology, and co-authored a 1997 article warning of the impacts of human activity on the global ecosystem and the immediate need for action that has been cited more than 1,400 times to date (2). Like Holdren, Lubchenco has a Harvard connection, having taken her Ph.D. there in 1975 and holding a teaching post before relocating to Oregon State University in 1978.

Stocking up on Nobel laureates

President Obama’s Secretary of Energy, Steven Chu, Professor of Physics and Molecular & Cellular Biology and Director of the Lawrence Berkeley National Laboratory at the University of California, Berkeley, shared the 1997 Nobel Prize in Physics for his research in cooling and trapping of atoms with laser light. Chu, the first Laureate to be appointed to the Cabinet, has research interests in single-molecule biology that are reflected in his list of more than 140 journal publications since 1996, with more than 7,000 citations to date.

Rounding out President Obama’s “Dream Team” are two biologists, Eric Lander and Harold Varmus, co-chairs of the President’s Council of Advisers on Science and Technology (PCAST) with Holdren. PCAST is a panel of private sector and academic representatives established in 2001 to advise on issues related to technology, research priorities and science education.

Lander, founding Director of the Broad Institute of Massachusetts Institute of Technology and Harvard, was instrumental in the Human Genome Project; his more than 350 journal publications have collectively been cited more than 75,000 times since 1996.

Varmus, former director of the National Institutes of Health and President and CEO of Memorial Sloan-Kettering Cancer Center since 2000, is the second Nobel Prize winner (Physiology or Medicine, 1989) appointed to Obama’s team. His prize-winning research on the cellular origin of retroviral oncogenes published in Nature in 1976 (3) continues to be cited (21 times in 2007).

Towards a well-informed future

President Obama has collected some of the finest scientific talent in the US to advise him, with a particular focus on environmental issues. In fact, the team has also been dubbed the “Green Team”. These five individuals were together cited more than 12,000 times in 2007 and their experience spans the breadth of the physical sciences.

Incidentally, Obama himself is a published author, with a dozen journal publications: his 2006 article (4) with erstwhile presidential rival and now Secretary of State Hillary Clinton on healthcare reform has been cited 28 times to date.

President Obama outlined the key role that science policy will play in the US’s economic recovery in his inauguration speech in January: “The state of the economy calls for action, bold and swift, and we will act […] We will restore science to its rightful place”.


(1) Ehrlich, P.R. and Holdren, J.P. (1971) “Impact of population growth”, Science, Vol. 171, pp. 1212–17.
(2) Vitousek, P.M., Mooney, H.A., Lubchenco, J. and Melillo, J.M. (1997) “Human domination of Earth's ecosystems”, Science, Vol. 277, pp. 494–99.
(3) Stehelin, D., Varmus, H.E., Bishop, J.M. and Vogt, P.K. (1976) “DNA related to the transforming gene(s) of avian sarcoma viruses is present in normal avian DNA”, Nature, Vol. 260, pp. 170–73.
(4) Clinton, H.R. and Obama, B. (2006) “Making patient safety the centerpiece of medical liability reform”, New England Journal of Medicine, Vol. 354, pp. 2205–08.


Inspired by bibliometrics

For many editors, the bibliometrics underlying journal rankings are a fuzzy area. But for those already using similar techniques in their research, bibliometrics is another tool to help increase journal quality. Brian Fath tells us how the work of Derek de Solla Price has opened his eyes to the world of citation analysis.


Brian Fath

Brian Fath is an Associate Professor in the Department of Biological Sciences at Towson University, USA, and Editor-in-Chief of the journal Ecological Modelling. Like all journal editors, he wants his journal to continue improving. However, unlike many editors, he has a passion for network analysis, giving him a unique insight into the way ranking metrics are calculated and an enhanced understanding of how scholarly literature is cited within communities.

Fath uses ecological network analysis to identify relationships between non-connected elements in food webs. He says: “Network analysis is a very powerful tool to identify hidden relationships. We can now integrate the networks of different systems and identify indirect pathways, making it possible for us to see the unexpected consequences of our actions. For example, CFCs looked good in the lab, but it took 40 years to understand their effect on the planet. Through network analysis, we can potentially gauge those effects before we cause them.”
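The “indirect pathways” Fath describes can be illustrated with a tiny, invented food web: species with no direct link may still be connected through an intermediary, and powers of the web’s adjacency matrix count such paths. A minimal sketch (the web and species names are hypothetical, not from Fath’s research):

```python
# Toy sketch: finding indirect pathways in a small food web via powers
# of the adjacency matrix. The web itself is invented for illustration.
# A[i][j] = 1 means there is a direct link from species i to species j.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

species = ["grass", "rabbit", "fox"]
# grass -> rabbit -> fox, with no direct grass -> fox link
A = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]

A2 = matmul(A, A)  # entries of A^2 count paths of length two
print(A2[0][2])    # 1: grass reaches fox indirectly, through rabbit
```

Higher powers reveal longer chains of influence, which is the basic mechanism behind tracing unexpected, delayed consequences through a network.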

In October 2007, he was invited to give a presentation on “Assessing Journal Quality Using Bibliometrics” at the Elsevier Editors’ Conference in Miami. While carrying out background research, he came across Derek de Solla Price. “His 1965 paper (1) was a revelation, and I literally just stumbled upon it,” he recalls.

Eye opener

“I thought this paper was fascinating. For instance, de Solla Price identifies research fronts, marked by review papers. This is important, because he also shows that the frequency of review papers is not linked to time, but to the number of papers published in the field. Hot topics, where a lot of papers are published, prompt review papers more frequently than slower-paced areas. This changed my mind on the frequency of publishing review papers,” says Fath.

He was also interested in de Solla Price’s discussion of non-cited papers. Around 35% of papers in a given year are never cited. Editors obviously want to publish the best research, but how can they recognize the outliers? “Our journal is quite avant-garde. We publish some novel papers, and naturally some don’t get cited. But on the other hand, if we could find a way to reduce the number of non-cited papers, our Impact Factor would go up,” he remarks.

Improving quality

Fath believes that bibliometrics can help editors improve the quality of their journals. “We can improve the field by knowing when to call for a review paper and by promoting timely special issues, and these actions are reflected in our bibliometrics,” he says. For instance, he recently discovered that special issues of his journal were actually less frequently cited than regular issues. “We’ve decided to try doing themed issues next year to see if that serves the community better than traditional conference-based special issues,” he says.

He is also paying more attention to keywords in papers, and especially in abstracts. He believes that, “people are really starting to use search engines to find papers, and it seems logical to use keywords. Abstracts are also very important: well-written, clear English is very attractive.”

He does have one concern, however. “We are going through a period of rapid journal growth, which I don’t think is sustainable. It’s possible to get almost anything published somewhere these days – in fact, it can get quite hard to follow the literature. And all these papers are citing other papers, which means everyone’s Impact Factor is increasing. But I wonder if it’s sustainable; can all these new journals also expect their Impact Factors to rise?”

Yet overall, despite some resistance, Fath is convinced that citation analysis is very valuable: “Communities should be citing each other – this is what marks them out as a community; and if you’re not being cited by your own community, you should want to know this and do something about it.”


(1) de Solla Price, D.J. (1965) “Networks of scientific papers”, Science, Vol. 149, pp. 510–15.

…a Top-Cited marketing paper?

Stephen Vargo and Robert Lusch’s 2004 paper “Evolving to a new dominant logic for marketing” is the top-cited paper in its category. We ask Vargo and one of its many citers why they think this article is so successful.


In the subject area Economics, Econometrics and Finance, the paper “Evolving to a new dominant logic for marketing”, published by Stephen Vargo and Robert Lusch in the Journal of Marketing (1), was the top-cited article between 2004 and 2008, with 282 citations to date.

Relevance and timing count

Professor Vargo from the Shidler College of Business at the University of Hawaii, US, explains: “While we did not fully anticipate the impact the article would have, I think there are several reasons for it. First, it was intended to capture and extend a general evolution in thought about economic exchange, both within and outside of marketing. The most common comment we receive is something like ‘you said what I have been trying to say’ in part or in whole. Thus, although it was published in a marketing journal, it seems to have resonated with a much larger audience.

“We have also said from the outset that what has now become known as service-dominant (S-D) logic is a work in process and have tried to make its development inclusive. As we have interacted with other scholars, we have modified our original views – and the original foundational premises – and expanded the scope of S-D logic. This approach seems to have been well received.”

Professor Vargo also acknowledges an element of “fortuitous timing” in the article’s success: “The role of service in the economy is becoming increasingly recognized and firms such as IBM and GE – and many others – are shifting from thinking about themselves as manufacturing firms to primarily service firms. Similar shifts are taking place in academic and governmental thinking. S-D logic provides a service-based, conceptual foundation for these changes.”

Busting paradigms

Professor Eric Arnould from the Department of Management and Marketing at the University of Wyoming, US, has cited this paper. He explains: “This article is a paradigm buster; it is as simple as that. The paper took under-systematized currents of thought that had been circulating in the marketing discipline for a number of years and codified them. The paper proposes that marketing is about the exchange of services or resources, not things; and that value is always co-created in the exchange of resources, both immaterial (operant) and material (operand), between parties. If widely adopted, their detailed proposals will change marketing theory and practice forever. The paper is widely cited because of the ongoing interest in their recommendations both in practice, such as at IBM, and in the academic world. We cited the paper both for its content and its authority as a paradigm buster.”


(1) Vargo, S.L. and Lusch, R.F. (2004) “Evolving to a new dominant logic for marketing”, Journal of Marketing, Vol. 68, issue 1, pp. 1–17.

Did you know

Bibliometrics originates in law

Some scholars have traced the origins of bibliometrics to the end of the 19th century, with studies in chemistry or in psychology (1). However, there is evidence of publication counts in law as early as the beginning of the 19th century. In 1817, an American law teacher mentioned “at least 530” volumes dedicated to judicial decisions; he repeated the study four years later to find “not less than 600”. This was followed in 1826 by a count of English reports published to date, and in 1876 by an estimate of the number of English, Irish, Scottish, Canadian and American volumes published up to 1874. This is publication analysis in its simplest form, but it could be regarded as a bibliometric study nonetheless.

It can even be argued that the roots of citation indexing reach further back. For instance, in 1743, Raymond’s Reports contained tables listing the cases cited by the cases published in that volume (2). In 1783, Douglas’s Reports included an “index of Cases Cited” as a separate table, and a similar, but full-size, citation index was published in 1821. Surely the use of citations here strengthens the case.

(1) Godin, B. (2006) “On the origins of bibliometrics”, Scientometrics, Vol. 68, Issue 1, pp. 109–33.

(2) Shapiro, F.R. (1992) “Origins of bibliometrics, citation indexing, and citation analysis: the neglected legal literature”, Journal of the American Society for Information Science, Vol. 43, Issue 5, pp. 337–39.

  • Elsevier has recently launched the International Center for the Study of Research (ICSR) to help create a more transparent approach to research assessment. Its mission is to encourage the examination of research using an array of metrics and a variety of qualitative and quantitative methods.