Issue 9 – January 2009


Is e-publishing affecting science?

Recent research indicates that e-publishing is influencing citation patterns and reader behavior, but studies disagree on the nature of these effects. Are researchers taking full advantage of the wider choice of reading materials, or are they searching so specifically that they are missing the reading they might previously have found along the way?


As the world of publishing continues its relentless march towards the electronic medium, researchers in various fields are trying to understand what this means for science – specifically, how this is affecting citation patterns and reader behavior.

While some recent research based on citation data has indicated that the availability of online journals is narrowing science, experts in the field of reader behavior dispute this claim. Studies into reader behavior suggest that the use of online journals has instead broadened scholarship and may be driving a new “information democracy”.

In July 2008, sociologist James Evans reported in Science the results of a study showing that online journal access has led to an increasing concentration of citations to fewer, more recent articles across a narrower range of journals (1). Evans argues that browsing through print journals used to lead to more serendipitous discoveries of knowledge, while the era of online access has resulted in rapid consensus-building and preferential attachment.

However, in the accompanying editorial, Carol Tenopir at the University of Tennessee in Knoxville offers a different perspective. Tenopir, with longtime collaborator Donald W. King, has studied reader behavior in the online journal environment for many years. Their findings suggest that the number of older articles read by researchers has increased in the ten years coinciding with the advent of online journals, as has the number of different journals they use (2).

Online journals broaden reading

Tenopir says: “I do not dispute Evans’ findings, but my research leads me to conclude that e-journals are broadening reading, and therefore science.” Tenopir and King’s latest longitudinal work has been accepted for publication in Aslib Proceedings (3).


She suggests that their different conclusions could be due to the fact that they are actually studying different phenomena: “Evans is looking at citation patterns, while we study reading patterns. Scientists read journal articles for many purposes, not just research and writing, but also for teaching, current awareness and so on. Only readings that are for research within their discipline are likely to result in citations. Even then, scientists read many more articles than they eventually cite.”

Tenopir continues: “There are many motivations to cite, including signaling the most important or best of the whole body of work the scientist has read. Our surveys on readings show a steady increase in the number of reported readings and a broadening in the number of journal titles from which at least one article is read. Papers found by searching are more likely to be for research, and are often found in the broad range of e-journal titles held by the scientists’ university library. Readings for current awareness are more likely to be found by browsing through personal print subscriptions.

“Evans credits our earlier demonstration of increased searching as a factor in the narrowing of citations but this seems unlikely, as finding more articles through searching is almost certainly a factor in the broadening of the sources of reading and thus citation.”

Citations spreading further

Meanwhile, a new study to be published in the Journal of the American Society for Information Science and Technology was recently posted to the pre-print server arXiv by Vincent Larivière, Yves Gingras and Éric Archambault (4). Using more than 25 million papers and 600 million citations, they show that the concentration of article and journal citations has been decreasing over time.

According to their research, the percentage of papers that receive at least one citation has been increasing since the 1970s. At the same time, the percentage of articles needed to account for 20%, 50% and 80% of the citations received has been increasing, and the Herfindahl-Hirschman Index – the concentration index used by Evans – has been steadily decreasing since the beginning of the last century.
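The concentration index mentioned here can be sketched in a few lines of Python. The citation counts below are invented for illustration, not data from any of the studies discussed:

```python
def herfindahl_index(citations):
    """Herfindahl-Hirschman Index of citation concentration: the sum of
    squared shares of total citations. Values near 0 mean citations are
    spread evenly; 1 means all citations go to a single item."""
    total = sum(citations)
    if total == 0:
        return 0.0
    return sum((c / total) ** 2 for c in citations)

# Citations spread evenly over five journals: 5 * (0.2)^2, i.e. about 0.2
print(herfindahl_index([10, 10, 10, 10, 10]))
# Citations concentrated in one journal: close to 1
print(herfindahl_index([96, 1, 1, 1, 1]))
```

A falling index over time, as Larivière and colleagues report, thus indicates citations dispersing across more articles and journals rather than concentrating.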

“Taken together, these results argue for increasing efficiency of information retrieval in an online world, and the information democracy that this entails,” says Larivière. “The scientific system is increasingly efficient at using published knowledge. What our data shows is not a tendency towards an increasingly exclusive and elitist scientific system, but rather one that is increasingly democratic.”

Towards a democracy of citations

In another paper preceding that of Evans, Larivière, Gingras and Archambault also contradict the claim that the age of cited literature is decreasing (5). In Larivière’s view, “Evans’ conclusions reflect a transient phenomenon. The best example of this can be seen in the field of astrophysics, where the authors did observe a decline in the average age of cited literature at the beginning of the open access movement in the 1990s. However, by the beginning of the 2000s, when almost 100% of the papers were available, the average age started to rise again and has not stopped since.”

In fact, while online publishing may have initially narrowed science, as online searching becomes more efficient and researchers learn how to use this wealth of data to greater effect, they are certainly browsing through and reading, if not actually citing, a wider range of materials. In time, we may well see reading and citations broaden further as researchers come across a wider range of readings in the online world.


(1) Evans, J.A. (2008) “Electronic publication and the narrowing of science and scholarship”, Science, Vol. 321, No. 5887, pp. 395–399.
(2) Tenopir, C. and King, D.W. (2002) “Reading behaviour and electronic journals”, Learned Publishing, Vol. 15, No. 4, pp. 259–265.
(3) Tenopir, C., King, D.W., Edwards, S. and Wu, L. (2009) “Electronic journals and changes in scholarly article seeking and reading patterns”, forthcoming in Aslib Proceedings.
(4) Larivière, V., Gingras, Y. and Archambault, É. (2008) “The decline in the concentration of citations, 1900–2007”, forthcoming in the Journal of the American Society for Information Science and Technology. arXiv:0809.5250v1.
(5) Larivière, V., Archambault, E. and Gingras, Y. (2008) “Long-term variations in the aging of scientific literature: from exponential growth to steady-state science (1900–2004)”, Journal of the American Society for Information Science and Technology, Vol. 59, No. 2, pp. 288–296.

Women in science – perception and reality

There is anecdotal and research-based evidence to suggest that women scientists are held back by family commitments and implicit gender bias. While recent literature suggests that these obstacles are beginning to disappear, there is still a long way to go before we reach gender balance in science. Research Trends reviews the changing landscape.


As gender equality in science moves further to the forefront of policy agendas, we are seeing more discussion on the perceived challenges facing women in research careers. But what is the reality of the relative output and quality of the science produced by men and women?

In a 2003 EU report entitled Gender and Excellence in the Making, the EU Commissioner for Research asserted that “the promotion of gender equality in science is a vital part of the European Union’s research policy,” and called for public debate informed by research into the mechanisms by which this inequality has emerged (1). Part of the problem can be encapsulated in terms of two apparent conundrums: the Productivity Puzzle and the Impact Enigma (see box).

New research challenges long-held perceptions
Against this backdrop of perceived gender differences, recent research has cast doubt on the validity of the underlying assumptions about productivity and impact (2). An analysis of the published research of 254 Spanish Ph.D. graduates showed no statistically significant gender differences in output, degree of collaboration or citations per article. The individuals analyzed came from a range of scientific disciplines, but all were awarded their doctorates between 1990 and 1995, and so were of a similar scientific “age”, suggesting that previous differences in output and impact were artifacts of a skewed distribution of women across academic grades.

A puzzle and an enigma

The Productivity Puzzle is the phenomenon whereby women publish fewer articles than men. This observation has been confirmed repeatedly over recent decades, and several reasons have been put forward to explain it. These include sociobiological factors, such as the need for women to balance career with family obligations, and sociopolitical factors, such as systematic gender bias in the process of peer review for journal publication and competitive grant funding.

The Impact Enigma stems from the observation that women have higher citation impact (citations per article) than men. It has been suggested that this might be because women have a publication strategy that emphasizes quality over quantity or that they participate more in collaborative work, resulting in more robust study design and execution.

In keeping with this, a study of radiation oncologists at US academic institutions showed that the h-index (determined for each individual in Scopus) was lower for women than men (mean 6.4 versus 9.4), but that when the results were adjusted for academic ranking, the gender differential almost disappeared.

Gender and productivity
Elba Mauleón and Maria Bordons of the Institute for Documentary Studies on Science and Technology (IEDCYT) at the Spanish National Research Council (CSIC) in Madrid have studied the effects of gender on scientific and technological activity in their own institution.

In Mauleón and Bordons’ recent study of the life sciences (3), no differences by gender were found in productivity, impact factor of publication journals or number of citations received. According to Bordons, “productivity of both men and women increased with professional rank, and inter-gender differences within each rank were not observed.

“Interestingly, among the youngest scientists with less than ten years at CSIC, women were more productive than their male counterparts, while the inverse relation holds for intermediate levels of seniority. Further longitudinal studies will tell us if this means that new generations of women are more competitive or if women change their publication strategy over the years as a response to personal, social or economic reasons.”

While there is clearly a long road ahead until we begin to see truly proportional gender representation in science, it may be that with the aid of objective bibliometric tools, it is already possible to demonstrate that the reality is moving further away from perception all the time.



(1) EU report (2003) “Gender and Excellence in the Making”.
(2) Borrego, A., Barrios, M., Villarroya, A., Fras, A. and Ollé, C. (2008) “Research output of Spanish postdoctoral scientists: does gender matter?”, In: Kretschmer, H. and Havemann, F. (Eds.): Proceedings of WIS (Fourth International Conference on Webometrics, Informetrics and Scientometrics & Ninth COLLNET Meeting). Berlin: Creative Commons.
(3) Mauleón, E., Bordons, M. and Oppenheim, C. (2008) “The effect of gender on research staff success in life sciences in the Spanish National Research Council”, Research Evaluation, Vol. 17, No. 3, pp. 213–225.

THE rankings – a country view

The 2008 Times Higher Education (THE) rankings have just been released, revealing much movement in the rankings of the world’s top 200 universities. We analyze these results at the national level.


Last year, we discussed the annual Times Higher Education (THE) rankings and their relevance to UK institutions. In October 2008, the updated 2008 THE rankings were published, showing that many institutions have improved their performance and, consequently, their ranking. This year, we focus on the countries where the institutions are based to try to identify potential reasons for good performance.

If data for the institutions in the top 200 places is collected and grouped by country, some interesting facts emerge. Table 1 illustrates the positive net change in position for all institutions within countries, along with the total number of institutions from that country that appear in the rankings.

Country         Net change in rank*   Number of institutions in top 200
India           248                   2
Netherlands     230                   11
Switzerland     217                   7
Israel          194                   3
United States   158                   58
South Korea     83                    3
Sweden          80                    4
Denmark         75                    3
Ireland         73                    2
Argentina       67                    1
Thailand        57                    1
Greece          48                    1
Russia          48                    1
Mexico          42                    1
South Africa    21                    1
Norway          11                    1
Finland         9                     1
Spain           8                     1
Hong Kong       4                     4

Table 1 – Country analysis of THE rankings 2008
*Institutes that had no position or were outside of the top 200 in 2007 have not been analyzed in the net change in rank data.
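The net-change column can be reproduced by summing each institution’s year-on-year improvement per country. A minimal sketch, using invented institution names and ranks rather than the actual THE data:

```python
from collections import defaultdict

# (institution, country, rank in 2007, rank in 2008) -- invented example data
ranks = [
    ("University A", "Netherlands", 200, 51),
    ("University B", "Netherlands", 120, 89),
    ("University C", "Switzerland", 90, 40),
]

def net_change_by_country(ranks):
    """Sum each institution's improvement per country.
    A drop in rank number (e.g. 200th to 51st) counts as a positive gain."""
    totals = defaultdict(int)
    for _, country, rank_2007, rank_2008 in ranks:
        totals[country] += rank_2007 - rank_2008
    return dict(totals)

print(net_change_by_country(ranks))
```

As the table footnote indicates, institutions with no 2007 position would simply be left out of the input list.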

As expected, in terms of institutions in the top 200, the rankings continue to be dominated by the global leaders in research performance: the United States, Germany, the United Kingdom, Japan and Australia. The US has an impressive 58 institutions in the rankings, which have seen an overall net increase of 158 places. The overall increases of the other countries listed demonstrate the strong performance of research at their institutions.

Two Indian universities, the Indian Institutes of Technology in Delhi and Bombay, have experienced the greatest increase in ranking – an astonishing 248 places combined – which is testament to the continued development of research in India.

The two countries following India, the Netherlands and Switzerland, have also shown impressive results in the 2008 rankings, with substantial increases in their institutions’ positions. Analysis of these two countries in Scopus shows a very similar growth in published articles, as illustrated in Figure 1.


Figure 1 – Publication output (articles and reviews) of the Netherlands and Switzerland, 2003–2007.

The impact of individual institutions

So what is behind these countries’ increase in rankings? When we analyze the data on a national level, it appears that individual institutions can make a huge impact on the ranking of their home country.

In the Netherlands, VU University Amsterdam rose 149 places – an impressive achievement that makes a positive impact on the overall ranking for the Netherlands. In Switzerland, the Ecole Polytechnique Fédérale de Lausanne and the University of Lausanne rose by 67 and 56 places respectively. Together, these gains make a strong contribution to Switzerland’s overall increase in rank.

This suggests that national improvements in ranking may be at least partially the result of individual universities taking a more strategic approach: targeting international publications, aided by bibliometric tools and building and promoting library collections.

This is not surprising – research institutes the world over are coming to realize that a dedicated effort towards improving strategy can bring significant improvements to the institution. In fact, using bibliometric and other input data to better understand strengths and weaknesses is helping universities compete more successfully against their peers, resulting in impressive improvements for those who are successful.



Using data to drive performance

As research institutes chase dwindling funding sources and manage international collaborations, they are realizing that they need robust business intelligence data. We speak to research strategy expert Daniel Calto.


Grants are the lifeblood of all research universities in the US. Grants support research and defray some of the many indirect research costs across the institute. Yet identifying, applying for and winning funding is becoming increasingly challenging. Research administrators are facing numerous obstacles, including competition for grants, growing compliance requirements – especially in biomedical research – and funding international collaborations.

Daniel Calto recently joined Elsevier; prior to that, he was Director of Research Strategy and Senior Director of Research Administration at Columbia University in New York, where he used grants data to drive improvements in research revenue.

To help research administrators manage this increasing complexity while still being able to respond accurately and rapidly to funding opportunities, Calto worked on benchmarking, identifying historical trends and increasing institutional ranking among peers.

Gathering the data

“Benchmarking in the United States has not relied heavily on bibliometrics, although we did start using bibliometric tools to help structure some of our decision-making data, and I expect this approach to continue,” says Calto.


“Beyond simple benchmarking, we did deeper investigations, such as SWOT analyses. At Columbia, for example, we discovered that we were very strong in applying for training grants, but were lagging behind our peers when it came to funding for large program projects,” he explains. SWOT information and similar analytical interpretations are key to what grant administrators and research institute senior management need in order to pursue better strategies.

As there is no central funding database in the United States, Calto had to gather data from the US’s two biggest funding sources – the National Institutes of Health (NIH) and the National Science Foundation (NSF) – as well as from the many smaller societies and foundations that make funding available.

Calto believes that while he had a lot of success and offered his institution’s administrators some insight into performance, there is still much to do. “Comprehensive data is our greatest challenge. Fragmented, non-standard data is really the Achilles’ heel for many research institutes. For instance, each funding body uses different cataloguing systems, some use annual data, others not.

“And with globalization, we are also dealing with radically different funding systems – the way research is funded in the United States is not the same as in other parts of the world,” he adds.

Making indicators work for you

According to Calto, to correctly interpret any data, it is essential to bring in the qualitative context. This involves conversations with scientists and funding agencies, and a good general knowledge of the research market.

“I like bibliometric and funding data because they are a fair and objective way to rank people, departments and institutes. However, databases are never complete and they must be interpreted carefully. Most department chairs also take into account the importance of originality and innovative research, even though they might not fit into standard metrics,” he explains.

Calto recently joined Elsevier as Director of Product Management for Performance and Planning in the Academic and Government Products Group, where he is now working to develop the very tools that he would have appreciated when he was at Columbia.

“It’s possible to do some very good analyses using bibliometric databases, but for the really detailed information, research institutes now have to allocate resources, such as people and time. This is why dedicated tools that allow senior management to see research performance at a glance are so critical,” he explains.

With access to good data and the tools necessary to carry out efficient analysis, research institutes can ensure that they are applying for the right funding at the right time, with as little internal stress as possible. Eventually, this approach will optimize results and reduce missed opportunities.



…a Nobel Laureate?

Researchers cite particular papers for many reasons; many citations are simply a way of indicating studies that are relevant to current research, but they can also be a means of showing respect. We ask three researchers who have cited a Nobel Laureate about their motivation for the citation.


There are many reasons why authors cite other authors. Often, citations are motivated by the wish to acknowledge the influences of colleagues. Yet, this is clearly not the full picture. An alternative view is that people tend to cite within their social network: authors will cite works by authors they have interpersonal connections with (1).

Roger Tsien, 2008 Nobel Laureate for Chemistry

We have previously discussed how winning a Nobel Prize can affect citations. In Did you know?, we note that 2008 Nobel Laureate in Chemistry, Roger Tsien, has received 38,989 citations*. But, is this because of his large interpersonal network or the influence that his work has had on other researchers?

Tsien’s 1998 paper, “The green fluorescent protein” (2), has been cited 1,814 times*. Professor Uli Nienhaus, from the Institute of Biophysics at the University of Ulm, Germany, has cited this paper on several occasions. He says: “This paper summarizes essential biochemical and biophysical research results on green fluorescent protein up to 1998. It is a comprehensive, clearly written treatise that is an excellent introduction to this field. And this is why we refer readers to this review in the introductory paragraphs of our own research papers.”

Professor Rebekka M. Wachter, from the Center for Bioenergy and Photosynthesis at Arizona State University, US, has also cited Tsien’s 1998 article on more than one occasion. She explains: “Roger Tsien is an eminent authority on fluorescent proteins. His ground-breaking work on green fluorescent protein and its variants is nicely summarized in his 1998 review article. Also, his research on green fluorescent protein maturation paved the way for an active and highly productive project area in my lab on the mechanism of the green fluorescent protein self-processing reaction that yields visible color.”

Looking at an older paper by Tsien from 1980 (3), the same reasons for citing it apply. Dr. Sandra Claro from the Biophysics Department at São Paulo University in Brazil confirms that she cited Tsien’s paper because “he was the first to do experiments chelating intracellular calcium by BAPTA. In addition, he is a respected researcher.”

These researchers cite Tsien to acknowledge his authority in the field rather than for personal reasons. Or, as Professor Nienhaus puts it: “The purpose of citing related work is not to do someone a favor but to provide additional background and support to scientific statements and conclusions.”

However, he adds that citing because of interpersonal connections is not necessarily a bad thing. “Science is a social activity and if I know a researcher in person it is likely that I am also more familiar with his or her work. Moreover, a personal relationship may also build enhanced confidence and trust in someone’s results. That may then lead to a certain bias in the choice of citations. I view this as entirely acceptable and unavoidable.”

While the anecdotal evidence presented here suggests that authors cite other authors to acknowledge scientific influence, Nienhaus’s comment indicates that citing people who are personal acquaintances is not necessarily objectionable.

* Source: Scopus



(1) Bornmann, L. and Daniel, H.-D. (2008) “What do citation counts measure? A review of studies on citing behaviour”, Journal of Documentation, Vol. 64, No. 1, pp. 45–80.
(2) Tsien, R.Y. (1998) “The green fluorescent protein”, Annual Review of Biochemistry, Vol. 67, pp. 509–544.
(3) Tsien, R.Y. (1980) “New calcium indicators and buffers with high selectivity against magnesium and protons: Design, synthesis, and properties of prototype structures”, Biochemistry, Vol. 19, No. 11, pp. 2396–2404.

Did you know…

…how often the 2008 Nobel Laureates for Chemistry and Medicine have been cited?

The 2008 Nobel Laureate in Chemistry, Roger Tsien, received 3,620 citations in 2007. From 1996 to date, his work has been cited a remarkable 38,989 times. His h-index is 67, indicating that 67 of his papers have been cited 67 times or more. He was awarded the Nobel Prize for his discovery and development of green fluorescent protein with two other chemists: Martin Chalfie (571 citations in 2007; h-index of 18) and Osamu Shimomura (50 citations in 2007; h-index of 4).
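The h-index described here is straightforward to compute from a list of per-paper citation counts. A minimal sketch, using invented counts rather than any laureate’s actual record:

```python
def h_index(citation_counts):
    """Largest h such that the author has h papers cited at least h times each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# An author with papers cited [10, 8, 5, 4, 3, 1] times has h = 4:
# four papers with at least 4 citations each, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3, 1]))  # 4
```

Note that, like the figures quoted here, the result depends entirely on which papers and citations the source database (in this case Scopus, from 1996 onwards) covers.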

The Nobel Laureate in Medicine, Harald zur Hausen, was awarded the Nobel Prize for discovering the human papilloma virus that causes cervical cancer. He received 888 citations in 2007, has been cited 9,352 times since 1996 and his h-index is 22. He shares the award with Françoise Barré-Sinoussi (436 citations in 2007; h-index of 35) and Luc Montagnier (240 citations in 2007; h-index of 18).

Source: Scopus

  • Elsevier has recently launched the International Center for the Study of Research (ICSR) to help create a more transparent approach to research assessment. Its mission is to encourage the examination of research using an array of metrics and a variety of qualitative and quantitative methods.