Articles

Research Trends is an online magazine providing objective insights into scientific trends based on bibliometric analyses.

Is e-publishing affecting science?

Recent research indicates that e-publishing is influencing citation patterns and reader behavior, but studies disagree on what the effects are. Are researchers taking full advantage of the wider choice of reading material, or are they searching so specifically that they miss the reading they might previously have found along the way?



As the world of publishing continues its relentless march towards the electronic medium, researchers in various fields are trying to understand what this means for science – specifically, how this is affecting citation patterns and reader behavior.

While some recent research based on citation data has indicated that the availability of online journals is narrowing science, experts in the field of reader behavior dispute this claim. Studies into reader behavior suggest that the use of online journals has instead broadened scholarship and may be driving a new “information democracy”.

In July 2008, sociologist James Evans reported in Science the results of a study showing that online journal access has led to an increasing concentration of citations to fewer, more recent articles across a narrower range of journals (1). Evans argues that browsing through print journals used to lead to more serendipitous discoveries of knowledge, while the era of online access has resulted in rapid consensus-building and preferential attachment.

However, in the accompanying editorial, Carol Tenopir at the University of Tennessee in Knoxville offers a different perspective. Tenopir, with longtime collaborator Donald W. King, has studied reader behavior in the online journal environment for many years. Their findings suggest that the number of older articles read by researchers has increased in the ten years coinciding with the advent of online journals, as has the number of different journals they use (2).

Online journals broaden reading

Tenopir says: “I do not dispute Evans’ findings, but my research leads me to conclude that e-journals are broadening reading, and therefore science.” Tenopir and King’s latest longitudinal work has been accepted for publication in Aslib Proceedings (3).

What our data shows is not a tendency towards an increasingly exclusive and elitist scientific system, but rather one that is increasingly democratic

She suggests that their different conclusions could be due to the fact that they are actually studying different phenomena: “Evans is looking at citation patterns, while we study reading patterns. Scientists read journal articles for many purposes, not just research and writing, but also for teaching, current awareness and so on. Only readings that are for research within their discipline are likely to result in citations. Even then, scientists read many more articles than they eventually cite.”

Tenopir continues: “There are many motivations to cite, including signaling what is the most important or best of the whole body of what the scientist has read. Our surveys on readings show a steady increase in the number of reported readings and a broadening in the number of journal titles from which at least one article is read. Papers found by searching are more likely to be for research, and are often found in the broad range of e-journal titles held by the scientists’ university library. Readings for current awareness are more likely to be found by browsing through personal print subscriptions.

“Evans credits our earlier demonstration of increased searching as a factor in the narrowing of citations but this seems unlikely, as finding more articles through searching is almost certainly a factor in the broadening of the sources of reading and thus citation.”

Citations spreading further

Meanwhile, a new study to be published in the Journal of the American Society for Information Science and Technology was recently posted to the pre-print server arXiv by Vincent Larivière, Yves Gingras and Éric Archambault (4). Using more than 25 million papers and 600 million citations, they show that the concentration of article and journal citations has been decreasing over time.

According to their research, the percentage of papers that receive at least one citation has been increasing since the 1970s. At the same time, the percentage of articles needed to account for 20%, 50% and 80% of the citations received has been increasing, and the Herfindahl-Hirschman Index – the concentration index used by Evans – has been steadily decreasing since the beginning of the last century.
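For readers unfamiliar with the measure, the Herfindahl-Hirschman Index is simply the sum of squared shares: the more concentrated the citations, the closer the index is to 1. Below is a minimal sketch of the standard formulation with invented citation counts; Evans and Larivière et al. of course apply it to real data at far larger scale.

```python
def herfindahl_hirschman(citation_counts):
    """Sum of squared citation shares: 1.0 means every citation goes to a
    single item; values near 1/N mean citations are spread evenly over N items."""
    total = sum(citation_counts)
    return sum((c / total) ** 2 for c in citation_counts)

# Invented counts, for illustration only.
concentrated = [90, 5, 3, 2]     # most citations go to one article
dispersed = [25, 25, 25, 25]     # citations spread evenly

print(herfindahl_hirschman(concentrated))  # ~0.81 -> highly concentrated
print(herfindahl_hirschman(dispersed))     # 0.25 -> evenly spread
```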

“Taken together, these results argue for increasing efficiency of information retrieval in an online world, and the information democracy that this entails,” says Larivière. “The scientific system is increasingly efficient at using published knowledge. What our data shows is not a tendency towards an increasingly exclusive and elitist scientific system, but rather one that is increasingly democratic.”

Towards a democracy of citations

In another paper preceding that of Evans, Larivière, Gingras and Archambault also contradict the claim that the age of cited literature is decreasing (5). In Larivière’s view, “Evans’ conclusions reflect a transient phenomenon. The best example of this can be seen in the field of astrophysics, where the authors did observe a decline in the average age of cited literature at the beginning of the open access movement in the 1990s. However, by the beginning of the 2000s, when almost 100% of the papers were available, the average age started to rise again and has not stopped since.”

In fact, even if online publishing may have initially narrowed science, as online searching becomes more efficient and researchers learn to use this wealth of material to greater effect, they are certainly browsing through and reading, if not yet citing, a wider range of sources. In time, we may well see reading and citations broaden further as researchers encounter a wider range of material in the online world.

References:

(1) Evans, J.A. (2008) “Electronic publication and the narrowing of science and scholarship”, Science, Vol. 321, No. 5887, pp. 395–399.
(2) Tenopir, C. and King, D.W. (2002) “Reading behaviour and electronic journals”, Learned Publishing, Vol. 15, No. 4, pp. 259–265.
(3) Tenopir, C., King, D.W., Edwards, S. and Wu, L. (2009) “Electronic journals and changes in scholarly article seeking and reading patterns”, forthcoming in Aslib Proceedings.
(4) Larivière, V., Gingras, Y. and Archambault, E. (2008) “The decline in the concentration of citations, 1900–2007”, forthcoming in the Journal of the American Society for Information Science and Technology; arXiv:0809.5250v1.
(5) Larivière, V., Archambault, E. and Gingras, Y. (2008) “Long-term variations in the aging of scientific literature: from exponential growth to steady-state science (1900–2004)”, Journal of the American Society for Information Science and Technology, Vol. 59, No. 2, pp. 288–296.

Why am I cited…?

Citations are one of the principal drivers of scientific conversation and, as such, are subject to intense scrutiny. But what motivates citations, and what helps or hinders a paper’s potential to become a future citation classic? We speak to two highly cited Dutch researchers for their views.



Citation is an essential part of science. It places a researcher’s thinking within a continuum of thought, indicating sources of ideas and theories that the author agrees or disagrees with.

A highly cited paper is normally considered to be very relevant within its field, and increasingly across disciplines. It has, somehow, resonated throughout the scientific community.

Professor Jos H. Beijnen, a pharmacist at Slotervaart Hospital and the Netherlands Cancer Institute in Amsterdam who also holds an appointment at Utrecht University, and Peter N. Nijkamp, based at the Department of Spatial Economics, Vrije Universiteit, Amsterdam, both believe that collaboration and relevant research have helped make them two of the most highly cited Dutch researchers.

Jos H. Beijnen

Collaboration is inspirational

Beijnen is the most active Dutch author in Life Sciences and his most cited paper (1) has received over 950 citations. He attributes his remarkable output to efficient use of his time. He adds, “The selection of collaborators, in my case mostly young pharmacy students who want to do their Ph.D. in my group, is crucial. Their enthusiasm for research fuels me and gives me the energy to work seven days a week.”

Nijkamp agrees: “The biggest challenge for a scientist is to find promising and bright young talents. I have been lucky to find so many interesting young people all over the world with whom I have worked and from whom I have learned a lot.” Nijkamp is the most prolific of Dutch authors in Social Sciences. His most-cited document, with 59 citations, covers untolled congestion pricing (2).

Relevant research

Beijnen also believes research should be aimed at tackling issues that directly benefit society. “Our research is always based on a clinical research question. Our research should be beneficial to our patients, and that is what we always keep in mind.”

Nijkamp feels the same: “Most of my research finds its inspiration in pressing societal problems, so it’s no wonder that the information is then shared across a wide audience.”

Peter N. Nijkamp

Cross-discipline, cross-border communication

Nijkamp adds that collaboration with his students and peers, as well as with researchers overseas and in different disciplines, has helped him maintain high output. “Modern quantitative economic research is a fascinating activity, where tools from various disciplines are extensively used. This leads to often surprising findings, with great scientific and policy-making value. Research in the social sciences is no longer a solitary activity. Increasingly, modern research in economics is based on collaboration with dozens of good people abroad. I have produced most of my publications together with many people outside the Netherlands.”

The most relevant and highly cited papers are not produced in a vacuum; they offer insight into important questions in their field, and their value is broadened by collaboration with students, with peers in other geographical regions and, increasingly, with colleagues from other disciplines. As Nijkamp concludes: “The knowledge society is indeed operating on a global market.”

References:

(1) Schinkel, A.H.; Smit, J.J.M.; Van Tellingen, O.; Beijnen, J.H.; Wagenaar, E.; Van Deemter, L.; Mol, C.A.A.M.; Van der Valk, M.A.; Robanus-Maandag, E.C.; te Riele, H.P.J.; Berns, A.J.M.; Borst, P. (1994) “Disruption of the mouse mdr1a P-glycoprotein gene leads to a deficiency in the blood-brain barrier and to increased sensitivity to drugs”, Cell, Vol. 77, pp. 491–502.
(2) Verhoef, E., Nijkamp, P., Rietveld, P. (1996) “Second-best congestion pricing: The case of an untolled alternative”, Journal of Urban Economics, Vol. 40, No. 3, pp. 279–302.

Eigenfactor: pulling the stories out of the data

The Eigenfactor project was set up to provide an alternative way of measuring journal influence and is generating a lot of interest. But the team is doing much more than ranking journals. Jevin West tells us the Eigenfactor story.



Carl Bergstrom has been researching journal economics for over a decade. One fruit of those efforts, the Eigenfactor project, is drawing interest from editors, authors, researchers, policy-makers and evaluators seeking new measures of journal influence.

Jevin West, graduate student at the University of Washington in Bergstrom’s research group, recalls: “It all started with Ted Bergstrom, Carl Bergstrom and I chatting about evaluation tools over a beer in December 2005. Carl was getting a lot of flack for using Impact Factors (IFs) in his work on journal economics, so we decided to come up with another way of evaluating the scholarly literature.

“I come from the theoretical side of biology and I’m interested in applying tools and concepts from network science and information theory to various problems, and that extends beyond biology to other fields, including bibliometrics. Fortunately, the nature of citation networks means that many of the models we use in biology are easily transferable.”

How the tools work

The Eigenfactor works by taking a random journal and following a random citation in that journal to another journal, then selecting a random citation from that second journal and following it to a third journal, and so on. The Eigenfactor score is the percentage of time this random walk would spend at each journal. For instance, a search of all journals in 2006 gives Nature the highest Eigenfactor score: if you followed random citations indefinitely, you would spend 1.9917% of your time at Nature.

The Article Influence score is calculated by dividing a journal’s Eigenfactor score by the number of articles it publishes. Scores are normalized so that the average is 1; the Annual Review of Immunology comes out top, at 27.454 times that norm.
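To make the random-walk description concrete, here is a minimal sketch on a toy citation matrix. It finds the fraction of time the walk spends at each journal by power iteration and then derives a per-article influence score. The journal names, citation counts and article counts are invented, and the published Eigenfactor algorithm adds refinements (for example, teleportation weighted by article counts and the exclusion of journal self-citations) that are omitted here.

```python
import numpy as np

# Toy citation matrix: C[i, j] = citations from journal j to journal i.
# Journals and counts are invented for illustration.
journals = ["A", "B", "C"]
C = np.array([[0, 4, 2],
              [3, 0, 5],
              [1, 2, 0]], dtype=float)

# Column-normalize: each column is the probability distribution over the
# journals a random citation leads to, starting from that journal.
P = C / C.sum(axis=0)

# Power iteration: follow random citations until the fraction of time
# spent at each journal stops changing.
scores = np.full(len(journals), 1.0 / len(journals))
for _ in range(1000):
    scores = P @ scores
scores /= scores.sum()            # Eigenfactor-like score, sums to 1

# Article Influence-like score: per-article influence, rescaled so that
# the average journal sits at 1. Article counts are again invented.
articles = np.array([100.0, 50.0, 20.0])
influence = scores / articles
influence /= influence.mean()

for name, ef, ai in zip(journals, scores, influence):
    print(f"Journal {name}: eigenfactor-like {ef:.3f}, article-influence-like {ai:.2f}")
```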

Phenomenal interest

The Eigenfactor algorithm works much like Google’s PageRank; both are based on social network theory. Where Google follows page links, Eigenfactor follows citations, and each evaluates the importance of a journal (or Web page) based on the structure of the entire network.

The IF, in comparison, only looks one citation away and ignores where those citations come from. “We take into account where the citation came from just as Google takes into account where a hyperlink comes from,” says West.

When the Eigenfactor Web site was launched in January 2007, it attracted comment in numerous blogs, which raised its profile. “It has been far beyond anything we could have imagined. The interest has been phenomenal,” says West.

“We also have our critics, and this is healthy,” West adds. “I think all metrics should be criticized. Nothing beats reading an individual article in a journal to assess its value, and nothing ever will. But with time and budget constraints being what they are, there is a legitimate need for tools like this.”

More and better tools

Journal ranking is just one of many stories the Eigenfactor team are pulling out of the data. They have also created a cost-effectiveness score to help librarians manage their budgets, as well as science maps and motion charts showing trends over time, which are particularly popular with editors, authors and researchers interested in the history of science.

The team are planning to improve the tools they have and develop new ones, and they hope to bring in richer data. Over the longer term, they also want to apply these tools to other areas. “We’re curious about how science has changed over time and we’re interested in applying these tools to non-bibliometric areas as well,” says West.

Even though this is just a side project, the team are enjoying themselves. “This has all come together at the right time. The data is available and some very sophisticated tools have been developed over recent years. We can now analyze data in some very exciting ways. We’re having a blast!”


Geographical trends of research output

While the volume of journal publication can readily be compared with the number of researchers in a given country, article output does not simply track researcher numbers: some countries publish a great many articles with relatively few researchers, while in others tens of researchers correspond to each article published. We take a look behind the numbers.



The publication of journal articles worldwide follows a consistent pattern associated with the number of researchers based in a particular country. Unsurprisingly, the share of world articles is dominated by those countries with the most researchers, with countries such as the United States, Japan, the United Kingdom and Germany ranked highest. The geographical distribution of citations shows a similar pattern, with the same four countries appearing in the top four places according to citations received, albeit in a slightly different order. The growth in Chinese researcher numbers and research output has been previously discussed in Research Trends.

Table 1 illustrates the rank of countries according to their share of world articles and indicates the equivalent rank for each country according to citations received.

In the original article on geographical trends of research output, Table 1 carried an incorrect year-range label stating 2004–2007. The data in the table actually pertain to 1996–2007. We apologize for this oversight.

Rank by articles Rank by citations Country Articles Cites Researchers % docs % cites
1 1 United States 3,437,213 43,436,526 7,442,000 25.9% 37.6%
2 4 Japan 983,020 7,167,200 896,211 7.4% 6.2%
3 2 United Kingdom 962,640 9,895,817 313,848 7.3% 8.6%
4 3 Germany 888,287 8,377,298 470,729 6.7% 7.2%
5 13 China, People’s Republic of 758,042 1,629,993 1,152,617 5.7% 1.4%
6 5 France 640,163 5,795,531 348,714 4.8% 5.0%
7 6 Canada 473,763 4,728,874 199,060 3.6% 4.1%
8 7 Italy 461,292 3,821,440 164,026 3.5% 3.3%
9 11 Spain 330,399 2,350,185 161,932 2.5% 2.0%
10 17 Russian Federation 330,020 1,064,077 951,569 2.5% 0.9%
11 9 Australia 295,977 2,566,649 118,145 2.2% 2.2%
12 19 India 286,109 994,561 N/A 2.2% 0.9%
13 8 Netherlands 264,565 3,012,291 91,565 2.0% 2.6%
14 18 Korea, Republic of 217,879 1,018,532 194,055 1.6% 0.9%
15 12 Sweden 194,921 2,188,026 72,459 1.5% 1.9%
16 10 Switzerland 188,134 2,384,981 52,250 1.4% 2.1%
17 22 Taiwan 164,823 769,206 138,604 1.2% 0.7%
18 23 Brazil 163,550 752,658 N/A 1.2% 0.7%
19 24 Poland 159,536 682,354 78,362 1.2% 0.6%
20 14 Belgium 141,737 1,347,624 52,252 1.1% 1.2%

Table 1 – Geographical distribution of world articles 1996 – 2007 – top 20 countries. Source: Scopus. Researcher data taken from OECD Main Science & Technology Indicators, 2008 edition; data is for 2004 FTE researchers. US Researcher Data taken from Science & Engineering Indicators 2008, Table 3.1.

Table 2 compares the 2004 article output recorded in Scopus with researcher numbers in 2004 for these countries, and some interesting trends emerge. For instance, the number of researchers per research article published varies remarkably. It is important to note that this is different from the number of authors per published article; here we are calculating the ratio of a country’s total researchers to its publication output. In many cases, there are researchers who never appear on articles as authors, and this is an important distinction to keep in mind.

In Russia, there are 30 researchers for each research article published, while in the US there are 23 researchers. Switzerland has the lowest number of researchers per article at 2.5, followed by the UK at 3.2.

Country Number of researchers (2004) Number of articles (2004) Ratio of researchers per article
Russian Federation 951,569 31,134 30.6
United States 7,442,000 315,161 23.6
China, People’s Republic of 1,152,617 101,685 11.3
Japan 896,211 97,579 9.2
Taiwan 138,604 20,054 6.9
Korea, Republic of 194,055 28,943 6.7
France 348,714 64,909 5.4
Germany 470,729 91,881 5.1
Spain 161,932 36,849 4.4
Poland 78,362 18,524 4.2
Canada 199,060 50,904 3.9
Australia 118,145 32,837 3.6
Sweden 72,459 20,057 3.6
Belgium 52,252 15,451 3.4
Italy 164,026 49,592 3.3
United Kingdom 313,848 97,671 3.2
Netherlands 91,565 28,309 3.2
Switzerland 52,250 20,623 2.5

Table 2 – Researcher numbers for 2004 (source: OECD, US Data from NSF Science & Engineering Indicators 2008, table 3.1) and articles published in 2004 (source: Scopus).
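The ratio in the final column of Table 2 is straightforward division of a country’s researcher headcount by its article output, as in this small check against the figures above:

```python
# Figures copied from Table 2: FTE researchers (2004), articles (2004).
table2 = {
    "Russian Federation": (951_569, 31_134),
    "United States": (7_442_000, 315_161),
    "United Kingdom": (313_848, 97_671),
    "Switzerland": (52_250, 20_623),
}

for country, (researchers, articles) in table2.items():
    print(f"{country}: {researchers / articles:.1f} researchers per article")
# Russian Federation: 30.6, United States: 23.6,
# United Kingdom: 3.2, Switzerland: 2.5
```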

The question that follows is: why do these countries show such differences in the researcher-per-article ratio?

This is, of course, a difficult question to answer and one with many dimensions, each contributing to a different degree in different countries.

Fundamentally, overall population density, economic factors such as GDP and per-capita expenditure, and infrastructure all shape a country’s ability to support research. Yet the countries with the lowest ratios, such as the UK and Switzerland, have some of the strongest economies and infrastructures in the world, which illustrates how difficult these differences are to explain. Certainly, research funding from both governmental and private sources affects the maintenance of research institutions and the ability to recruit research personnel. In the US, many research institutions run huge programs that require substantial numbers of staff, which inflates the researcher count in our ratio.

In addition, countries can struggle to encourage students to follow a research path. In the UK, for example, there has been recent commentary on the difficulty of filling university places in subjects such as chemistry and physics, and a study by Olivieri & Rowlands (2006) indicated that acquiring research staff was one of the biggest barriers to research performance – a factor that may help explain these ratios of researchers to articles.


Mapping unknown regions

Maps of science help us visualize and conceptualize how different scientific disciplines relate to each other. Although many maps assume a hierarchy of disciplines, Richard Klavans and Kevin Boyack believe the structure should be circular.



A map of science is a diagram showing how different areas of science are related. The earliest maps tended to be hierarchical, starting with mathematics, then physics, chemistry and biology. Applied sciences would be like branches off this tree – electrical engineering branching off physics, chemical engineering branching off chemistry, and medicine and agricultural science branching off biology (with some chemistry).

What is a map of science?

A map of science consists of a set of elements and the relationships between the elements. These elements can be any unit that represents a partition of science. Maps must have partitions, where science is separated into different parts, and these partitions must be linked, either explicitly (such as a line drawn between two partitions), or through proximate location (or physical adjacency) that explicitly denotes linkage.

But our analysis does not support the hypothesis that science is actually structured as such a hierarchy. We analyzed 20 maps of science. Two of these maps were made by experts, 17 were drawn from analysis of the citation patterns of millions of articles in thousands of peer-reviewed journals, and one was based on course requirements at a university.

We found that science looks more like a circle than a hierarchy. Starting (arbitrarily) at mathematics (see figures 1 and 2), one can proceed through the areas mentioned above (physics, chemistry, engineering, earth sciences, biology, biotechnology, infectious diseases, medicine) and continue around the circle through health services, brain research, humanities, social sciences, computer science and back to mathematics. There isn’t agreement about the order suggested here (some might put computer science next to biotechnology, others might put chemistry closer to medicine), but there is consensus that these are the most common connections for all of the maps we examined.

Why use a circle?
Mapping science as a (non-hierarchical) circle is a useful aid for career counseling. A hierarchical map of science implies that one’s path should always be aimed towards "central" areas (and correspondingly avoiding "peripheral" areas). A circle has the unique characteristic that there is no "center" (or, to say it more accurately, each point is the center). A circle illustrates what we need to communicate to a student – that many paths are equally valid.

Mapping science as a circle is also useful for understanding science policy. Governments support investments in science, just like one places weights on the edge of a wheel. Balancing the wheel of science reflects fundamental tradeoffs between supporting the arts, providing an understanding about how people behave, providing health and well-being to society, pursuing techno-economic goals, and supporting basic research which may have no immediate economic or social impact. Maps, presented as weights on a wheel of science, can play an important role in communicating the national orientation towards these different objectives.

Here there be dragons

In the 13th and 14th centuries, maps of the world showed the known world floating on a sea of uncertainty with unexplored regions marked “here there be dragons”. This metaphor is still important today. Science education should be about communicating that there are many more areas yet to be discovered, students can take part in this process, and society, as a whole, can benefit from this discovery process.

One can communicate this same sense of excitement by placing what we "know" on the edge of a circle and what is "unknown" as the white space inside the circle. We should communicate, to both students and the public at large, that there still need to be explorations into the heart of the unknown. More and more of this exploration is interdisciplinary, which means it’s further from the known edges of the circle of science. Deep inside the circle are the dragons that the next generation must face and conquer.

Figure 1 - In the map above, fields are arranged around a circle based on the meta-analysis of 20 maps of science. The order of the 554 disciplines (journal categories) is based on multiple factor analyses and the 84,000 paradigms (co-citation clusters) are ordered around the circle by discipline.


Figure 2 - A country’s strengths are located in the paradigm clusters, which are idiosyncratically linked by the country and in which the country has at least one form of leadership – in this case, the USA.

The three types of leadership are:

  1. publication leaders: the largest number of current papers (2003–2007);
  2. reference leadership: the largest number of cited papers forming the co-citation clusters;
  3. thought leadership: referencing more recent papers than the #1 competitor AND publication share >0.8.
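As a rough illustration of how these criteria might be applied to country-level statistics within a single co-citation cluster, the sketch below encodes the three tests. The field names, the reading of “publication share” and all numbers are assumptions made for the example, not taken from Klavans and Boyack’s data or code.

```python
def leadership_types(country, top_competitor):
    """Return the leadership types a country holds in one co-citation cluster,
    following the three criteria listed above (a sketch, not the authors' code)."""
    types = []
    if country["current_papers"] > top_competitor["current_papers"]:
        types.append("publication leader")        # most current papers, 2003-2007
    if country["cited_papers"] > top_competitor["cited_papers"]:
        types.append("reference leader")          # most cited papers forming the cluster
    if (country["mean_reference_year"] > top_competitor["mean_reference_year"]
            and country["publication_share"] > 0.8):
        types.append("thought leader")            # more recent references AND share > 0.8
    return types

# Invented numbers, for illustration only.
usa = {"current_papers": 120, "cited_papers": 80,
       "mean_reference_year": 2004.1, "publication_share": 0.85}
rival = {"current_papers": 90, "cited_papers": 95,
         "mean_reference_year": 2003.2, "publication_share": 0.60}

print(leadership_types(usa, rival))   # ['publication leader', 'thought leader']
```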

Useful links:

Places & Spaces
All the original maps and their codings


Custom data fuels OECD’s Innovation Strategy

The Organisation for Economic Co-Operation and Development (OECD) has recently decided to develop an Innovation Strategy to help governments boost innovation performance. We speak to Hiroyuki Tomizawa, Principal Administrator in the Economic Analysis and Statistics Division of the Directorate for Science, Technology and Industry at the OECD.



The Organisation for Economic Co-Operation and Development (OECD) provides a forum for the governments of 30 like-minded market democracies to compare policy experiences, share best practices and seek answers to common economic, social and governance challenges.

Established in 1948 to lead the Marshall Plan for rebuilding Europe after the Second World War, the OECD has been collecting and analyzing statistical, economic and social data at the request of its members since 1961. These data are used to generate collective policy discussions, leading to decision-making and implementation. For instance, its Science, Technology and Industry Scoreboard, which comes out every two years, explores the interaction between knowledge and globalization at the heart of the ongoing transformation of OECD economies.

Impact of globalization on research

In recent years, the OECD has expanded its focus beyond its 30 member countries to offer analytical expertise and experience to over 100 developing and emerging market economies. This has to a certain extent been driven by globalization, which has made it virtually impossible to study specific areas in isolation. As a result, the scope of the OECD’s work has shifted from the examination of individual policy areas within each member country to analysis of how various policy areas interact with each other and with other countries, including those outside the OECD group.

The OECD has an ambitious publishing program, releasing a large amount of its research and accompanying data in 250 new titles every year. Through this output, the OECD aims to help governments foster sustainable economic growth, financial stability, trade, investment and innovation, while at the same time striving for environmental preservation, social equity and poverty reduction.

It has also come to realize that data alone are not enough; to truly help governments foster innovation, strategies are needed. To this end, the OECD is developing an Innovation Strategy, which will provide mutually reinforcing policies and recommendations to boost innovation performance, pointing to general and country-specific practices and, where appropriate, developing guidelines. This work will culminate in a report to ministers in 2010, but some patterns are already clear.

For instance, are governments doing enough to foster collaboration between universities and businesses, and not just within their borders? Many key inventions, such as the World Wide Web, have come from public research. Are governments doing enough to strengthen this bedrock of innovation?

The importance of a reliable data source

In October 2008, the OECD announced it had decided to use Scopus Custom Data in its research, analysis and benchmarking work. Hiroyuki Tomizawa, Principal Administrator in the Economic Analysis and Statistics Division of the Directorate for Science, Technology and Industry at the OECD, explains: “The three key factors behind this decision were the product’s broad (international) coverage, clean, flexible data and advanced features, such as the ability to link between authors and institutions.”

He adds that the OECD anticipates using the Scopus data for three main purposes:

  • to analyze global trends and identify subject areas that are experiencing intense activity;
  • to understand research activities at the country level in order to be able to make comparative analyses between countries;
  • to understand co-authorship and collaboration across borders. In a competitive knowledge society, countries are deploying policies to attract the best talent, but it is not always easy for them to assess whether they were successful or not.

Three groups stand to benefit from the resulting OECD reports: policymakers, funding agencies, and governmental and commercial research organizations. In this way, Scopus data will contribute to the OECD achieving its goals and will help to determine the direction of future economic decision-making.

More information on the OECD, including the full list of members and a wide range of publicly available reports, is available on the OECD website.


Plus ça change, plus c’est la même chose: de Solla Price’s legacy and the changing face of scientometrics

Derek de Solla Price has always been a major source of inspiration for Professor Anthony van Raan. He looks back at the development of the science of science since the 1960s.

Read more >


The invention and development of the Science Citation Index by Eugene Garfield in the 1960s was a major breakthrough in the study of science. This invention enabled statistical analyses of scientific literature on a very large scale. The great scientist Derek de Solla Price immediately recognized the value of Garfield’s invention, particularly from the perspective of the contemporaneous history of science.

Scientists have always been fascinated by basic features such as simplicity, symmetry, harmony and order. The Science Citation Index motivated de Solla Price to work on a ‘physical approach’ to science, in which he tried to find laws to predict further developments, inspired by the principles of statistical mechanics.

Cognitive and social indicators
Specific parameters, ‘indicators’, are guides to finding and understanding such basic features. The most basic feature concerns the cognitive dimension: the development of the content and structure of science. Other indicators relate to the social dimension of science, in particular to aspects formulated in questions such as:

  • How many researchers?
  • How much money is spent on science?
  • How ‘good’ are research groups?
  • How does communication in science work, particularly the role of books, journals and conferences?

And beyond that there is another, often forgotten, question:

  • What is the economic profit of scientific activities?

A landmark in the development of science indicators was the first publication in a biennial series of the Science & Engineering Indicators report (as it is now called) in 1973. Encouraged by the success of economists in developing quantitative measures of political significance for areas such as unemployment and GNP, the US National Science Board started this series of reports, which focus more on the demographic and economic state of science than on its cognitive state.

What is the difference between data and indicators?
An indicator is a measure that explicitly addresses some assumption. To begin with, we need to discover which features of science can be given a numerical expression. Indicators cannot exist without a specific goal; they must address specific questions. They have to be created to gauge important ‘forces’; for example, how scientific progress is related to specific cognitive and socio-economic aspects. If indicators are not problem-driven, they are useless. They have to describe the recent past in such a way that they can guide us in, and inform us about, the near future.

A second and more fundamental role of indicators is their potential to test aspects of theories and models of scientific development and its interaction with society. In this sense, indicators are not only tools for science policymakers and research managers, but also instruments in the study of science.

But we also have to realize that science indicators do not answer typical epistemological questions such as:

  • How do scientists decide what will be called a scientific fact?
  • How do scientists decide whether a particular observation supports or contradicts a theory?
  • How do scientists come to accept certain methods or scientific instruments as valid means of attaining knowledge?
  • How does knowledge selectively accumulate? (1)

De Solla Price strikingly described the mission of the indicator-maker: to find the simplest pattern in the data at hand, and then look for the more complex patterns that modify the first (2). What should be constructed from the data is not a number but a pattern: a cluster of points on a map, a peak on a graph, a correlation of significant elements in a matrix, a qualitative similarity between two histograms.

What has also changed is the mode of publishing. Electronic publishing and electronic archives mark a whole new era.

If these patterns are found, the next step is to suggest models that produce such patterns and to test these models with further data. A numerical indicator or an indicative pattern alone has little significance. The data must be given perspective: the change of an indicator with time, or different rates of change of two different indicators. It is crucial that geometrical or topological objects or relations are used to replace numerical quantities.
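
As a minimal, invented illustration of giving data this kind of perspective, the sketch below compares the year-on-year rates of change of two hypothetical indicators rather than reporting either number in isolation (all figures are made up for the example):

    # Hypothetical yearly values of two indicators (all numbers invented).
    publications = {2004: 1000, 2005: 1100, 2006: 1250, 2007: 1400}
    citations = {2004: 5000, 2005: 5200, 2006: 5500, 2007: 5700}

    def yearly_growth(series):
        """Relative year-on-year change of an indicator."""
        years = sorted(series)
        return {y: series[y] / series[y - 1] - 1 for y in years[1:]}

    pub_growth = yearly_growth(publications)
    cit_growth = yearly_growth(citations)

    # The pattern of interest is not either number alone but their divergence:
    # output grows faster than citation volume in this invented series.
    for year in sorted(pub_growth):
        print(year, f"publications +{pub_growth[year]:.1%}", f"citations +{cit_growth[year]:.1%}")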

Now, 25 years after the passing of de Solla Price, plus ça change, plus c’est la même chose rings true. What has changed is the very significant progress in application-oriented indicator work based on the enormous increase of available data and, above all, the almost unbelievable – compared to the 1970s – increase of computing power and electronic facilities. What has also changed is the mode of publishing. Electronic publishing and electronic archives mark a whole new era.

What has remained the same, however, are some of the most fundamental questions. For instance, to what extent can science maps derived from citation or concept-similarity data be said to exist in a strict spatial sense? In other words, do measures of similarity imply the existence of metric space? This question brings us to an even more fundamental problem formulated by de Solla Price: that the ontological status of maps of science will remain speculative until more has been learned about the structure of the brain itself.

The ideas and work of de Solla Price have always been one of my major sources of inspiration and I take pride in being a winner of an international award that bears his name.

Professor Anthony F.J. van Raan
Centre for Science and Technology Studies, Leiden University, the Netherlands

Contact him directly

Why “The citation cycle” is my favorite de Solla Price paper

Dr. Henk Moed’s research over the past 25 years strongly builds upon the pioneering work of Derek de Solla Price. Here, Moed discusses his favorite de Solla Price paper.

Read more >


My research into quantitative science studies over the past 25 years with my colleagues at the Centre for Science and Technology Studies (CWTS) strongly builds upon the pioneering work of Derek de Solla Price. Most, if not all, of the research topics, ideas and methodologies we have been working on were introduced and explored in his publications.

For instance, in the recent development of research performance assessment methodologies in the social sciences and humanities, de Solla Price’s analysis of “citation measures of hard science, soft science, technology, and nonscience” (1) plays a crucial role. Studies of national research systems and the production of national science indicators build upon de Solla Price’s proposals for a model and a comprehensive system of science indicators (2, 3). Current mapping of the structure and development of scientific activity, and the identification of networks of scientific papers, individual researchers and groups, are all based on de Solla Price’s papers “Networks of scientific papers” (4) and “The citation cycle” (5).

The citation cycle

“The citation cycle” is my favorite paper. It contains many ideas, analyses and suggestions for future research. In my view, this paper is an exemplar for genuine, creative and original bibliometric-scientometric analysis of a large database, such as Eugene Garfield’s Science Citation Index (6). One of our publications presented an update and a further extension of some elements of the citation cycle (7).

For instance, while de Solla Price found that, in 1980, a team of 2.5 authors published 2.2 papers, our later study in 2002 found that a team of 3.8 authors produced on average 2.8 papers. An in-depth analysis inspired by de Solla Price (5) categorized authors publishing in a year into continuants, movers, newcomers and transients. De Solla Price defined active scientists in a year as scientists who published in that year. But since active scientists do not necessarily publish papers in each year of their career, the new study proposed a new measure of the number of scientists active in a year based on the assumption that active scientists publish at least one paper every four years.
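
The four-year convention can be made concrete with a short, hypothetical sketch. Here an author counts as active in a given year if he or she published at least once in that year or in the three preceding years; the publication records are invented and the windowing rule is one reading of the definition described above.

    # Invented publication years per author.
    publication_years = {
        "author_a": [1998, 2002],
        "author_b": [2000],
        "author_c": [1995, 1996, 2001, 2002],
    }

    def is_active(year, years_published, window=4):
        """Active in `year` if at least one paper appeared within the last `window` years."""
        return any(year - window < y <= year for y in years_published)

    for year in range(2000, 2004):
        active = sorted(a for a, ys in publication_years.items() if is_active(year, ys))
        print(year, len(active), active)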

Most, if not all, of the research topics, ideas and methodologies we have been working on were introduced and explored in de Solla Price’s publications.

Publication productivity

Our research question was: did scientists’ publication productivity (defined as the number of published papers per scientist, one of the key parameters in the citation cycle) increase during the 1980s and 1990s? We found that, while the average scientist published more research articles over the period considered, the overall publication productivity, viewed from the global perspective introduced in de Solla Price’s citation cycle and defined as the total number of articles published by all authors in a year divided by the number of scientists active in that year, remained approximately constant during the past two decades.

This paradox is explained by the fact that scientists are collaborating more intensively, so that the teams authoring papers are getting larger. At the level of disciplines, however, basic and applied physics and chemistry tend to show an increase in publication productivity over the years, while publication productivity in the medical and biological sciences has declined.
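
The arithmetic behind the paradox can be shown with deliberately simple, invented numbers: if the same number of active scientists produces the same number of papers but in larger teams, each scientist’s whole-count output rises while total papers per active scientist is unchanged.

    # Invented illustration of the productivity paradox.
    periods = [
        # label, papers, authors per paper, active scientists
        ("earlier period", 100, 2, 100),
        ("later period",   100, 4, 100),
    ]

    for label, papers, authors_per_paper, scientists in periods:
        per_author = papers * authors_per_paper / scientists  # whole counting: papers per scientist
        overall = papers / scientists                         # total papers / active scientists
        print(f"{label}: per-author count {per_author:.1f}, overall productivity {overall:.1f}")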

A detailed interpretation of these outcomes, taking the effects of policy into account, goes beyond the scope of this contribution. Scientists may have successfully increased their individual publication output through more collaboration and authorship inflation, possibly stimulated by the use of ‘crude’ publication counts in research evaluation, without increasing their joint productivity.

An alternative interpretation holds that the amount of energy and resources absorbed by collaborative work may be so substantial that it held overall publication productivity back. More research is needed to further clarify the issues addressed. The main objective of this contribution is to emphasize how strongly these issues are related to de Solla Price’s citation cycle, and how an update, a longitudinal analysis and a further extension of his creative works may generate policy-relevant observations.

Dr. Henk F. Moed
Centre for Science and Technology Studies, Leiden University, the Netherlands

Contact him directly

References:

(1) de Solla Price, D.J. (1970) “Citation measures of hard science, soft science, technology, and nonscience”, In: Nelson, C.E. and Pollock, D.K. (Eds.) Communication Among Scientists and Engineers, pp. 3–22. Lexington, MA, USA: D.C. Heath and Company.
(2) de Solla Price, D.J. (1978) “Towards a model for science indicators”, In: Elkana, Y., Lederberg, J., Merton, R.K., Thackray, A., and Zuckerman, H. (Eds.) Toward a Metric of Science: The Advent of Science Indicators, pp. 69–95. New York, USA: John Wiley and Sons.
(3) de Solla Price, D.J. (July 1980) “Towards a comprehensive system of science indicators”, Conference on Evaluation in Science and Technology – Theory and Practice, Dubrovnik.
(4) de Solla Price, D.J. (1965) “Networks of scientific papers”, Science, Vol. 149, No. 3683, pp. 510–515.
(5) de Solla Price, D.J. (1980) “The citation cycle”, In: Griffith, B.C. (Ed.) Key Papers in Information Science, pp. 195–210. White Plains, NY, USA: Knowledge Industry Publications.
(6) Garfield, E. (1964) “The citation index – a new dimension in indexing”, Science, Vol. 144, pp. 649–654.
(7) Moed, H.F. (2005) Citation Analysis in Research Evaluation. Dordrecht, the Netherlands: Kluwer.

The invisible college: working within the Pricean tradition

It is almost impossible to explain how Derek de Solla Price has influenced her work, says Professor Katherine McCain, since her entire career has been within the Pricean tradition. She discusses what this means to her.

Read more >


It’s hard to isolate and focus on “how the work of Derek de Solla Price influenced and continues to influence your work”, as the invitation put it, because I, and several other Drexel faculty and students, have worked within the ‘Pricean tradition’ of research and scholarship throughout our scholarly careers.

De Solla Price’s contributions to the history of science and developmental trends, quantitative patterns of citation and scholarship in the sciences (and non-sciences), and the role of the invisible college in scientific communication were part of the Drexel research environment in the 1970s and 1980s. This came about directly through the friendship between de Solla Price and the late Belver Griffith and, more generally, through the key role that studies of bibliometrics and scientific communication played in faculty and doctoral research at the time under the guidance of Griffith, Howard White and Carl Drott.

In my case, I came to information science and bibliometrics/scientometrics fairly late in life, after two degrees in the life sciences, five years managing a biology library and a strong interest in the history of science and technology. I had actually encountered de Solla Price’s work at high school through his article on the Antikythera Device (1) but of course had no idea who the author was and no inkling of his influence on my future career.

Index for the natural sciences

I first encountered the kinds of phenomena that de Solla Price focused on when tallying citations to journals in faculty and Ph.D. student papers in the Biology Department at Temple University, Philadelphia, US (2). Most citations could be described by what I subsequently discovered was de Solla Price’s Index for the natural sciences (majority of citations to articles published in the previous five years) (3).
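
Price’s Index, as used here, is simply the share of a publication’s (or a corpus’s) cited references that were published within the five years preceding the citing work. The sketch below uses invented reference years for a single citing paper:

    # Price's Index: proportion of cited references at most five years older than
    # the citing publication. Reference years below are invented.
    citing_year = 1980
    reference_years = [1980, 1979, 1978, 1977, 1976, 1972, 1965]

    recent = sum(1 for y in reference_years if citing_year - y <= 5)
    price_index = recent / len(reference_years)
    print(f"Price's Index: {price_index:.2f}")  # 5 of the 7 references are 'recent' here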

I think (one of) the most important links between De Solla Price's work and my own are... his general thoughts on mapping.

As a doctoral student at Drexel with an interest in quantitative studies of science and mapping, I read most of de Solla Price’s major works in my doctoral coursework and as background for my thesis research. Much of it resonated particularly with me because of my previous experiences in the life sciences. The concepts of the invisible college and research front (4, 5) seemed to fit my observations of the way that zoology and marine biology worked and, as noted above, de Solla Price’s Index described the citation patterns of the biologists I worked with at Temple University.

Mapping as scientific representation

Looking at my post-Ph.D. publications, I see several influences of de Solla Price’s work on my own. Generally, I focus on quantitative data that describe trends and activities in the sciences, as de Solla Price did, supplementing them with interviews and other methods of knowledge elicitation. De Solla Price’s interest in patterns of citation in journal literature is reflected in my studies of journals in the natural and social sciences and what citation and co-citation patterns can tell us about the core literature of a field. Invisible colleges and research fronts can be identified in co-citation maps – a major part of the bibliometric work at Drexel.

However, I think the most important links between de Solla Price’s work and my own are his comment on an early author co-citation map of information science and his general thoughts on mapping. In the first case, Howard White (6) quotes de Solla Price as saying that he “knew everyone about halfway across and then no one”. In the second, de Solla Price commented in 1979 that the major features of science could be represented in two-dimensional maps (7). These two observations validate the usefulness of mapping as a way of representing a field of science more broadly than even the most knowledgeable expert could, and they have continued to inspire me over the past 30 years.

Professor Katherine W. McCain
College of Information Science & Technology, Drexel University, Philadelphia, US

Contact her directly

References:

(1) de Solla Price, D.J. (1959) “An Ancient Greek computer”, Scientific American, Vol. 200, No. 6, pp. 60–67.
(2) McCain, K.W. and Bobick, J.E. (1981) “Patterns of journal use in a departmental science library: a citation analysis”, Journal of the American Society for Information Science, Vol. 32, No. 4, pp. 257–267.
(3) de Solla Price, D.J. (1986) “Citation measures of hard science, soft science, technology, and nonscience”, In: Nelson, C.E. and Pollock, D.K. (Eds.) Communication Among Scientists and Engineers, pp. 155–179. New York, USA: Columbia University Press.
(4) de Solla Price, D.J. (1961) Science Since Babylon. New Haven, USA: Yale University Press.
(5) de Solla Price, D.J. (1965) “Networks of scientific papers”, Science, Vol. 149, No. 3683, pp. 510–515.
(6) White, H.D. (1990) “Author cocitation analysis: overview and defense”, In: Borgman, C. (Ed.) Bibliometrics and Scholarly Communication, pp. 84–106. Newbury Park, CA, USA: Sage.
(7) de Solla Price, D.J. (1979) “The revolution in mapping of science”, Proceedings of the American Society for Information Science Annual Meeting, Vol. 16, pp. 249–253.



Journals as retention mechanisms of scientific growth

Professor Loet Leydesdorff has spent the last 20 years developing an idea first posed by Derek de Solla Price in 1961. He asks whether the aggregated citation relations among journals can be used to study clusters of journals as representations of the intellectual organization of the sciences.

Read more >


Among the many discoveries that Derek de Solla Price made during his lifetime, I find Figure 1 the most inspiring (1). In this picture, de Solla Price provides a graphic illustration of the exponential growth of the scientific journal literature since the appearance of the first journals in 1665. De Solla Price had been fascinated with journals and their exponential growth in size and numbers ever since his first study of the Philosophical Transactions of the Royal Society of London from its very beginning in 1665 (2, 3).
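
In stylized form, such a growth curve is just an exponential with a fixed doubling time. The sketch below assumes, purely for illustration, a doubling time of roughly 15 years (a figure often associated with de Solla Price’s growth studies, and an assumption here rather than a number taken from this article):

    # Stylized exponential growth of the journal literature (doubling time assumed).
    DOUBLING_TIME = 15             # years; illustrative assumption
    START_YEAR, START_COUNT = 1665, 1

    def journals_founded(year):
        """Cumulative journals founded by `year` under pure exponential growth."""
        return START_COUNT * 2 ** ((year - START_YEAR) / DOUBLING_TIME)

    for year in (1665, 1680, 1695, 1710, 1725):
        print(year, round(journals_founded(year)))   # 1, 2, 4, 8, 16: one doubling per 15 years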

On the basis of an experimental version of the Science Citation Index in 1961, de Solla Price formulated a program for mapping the sciences in terms of aggregated journal-journal citation structures as follows:

“The total research front of science has never, however, been a single row of knitting. It is, instead, divided by dropped stitches into quite small segments and strips. From a study of the citations of journals by journals I come to the conclusion that most of these strips correspond to the work of, at most a few hundred men at any one time. Such strips represent objectively defined subjects whose description may vary materially from year to year but which remain otherwise an intellectual whole. If one would work out the nature of such strips, it might lead to a method for delineating the topography of current scientific literature. […] Journal citations provide the most readily available data for a test of such methods” (4)

Organization of knowledge

Over the past 20 years, I have addressed the question of whether the aggregated citation relations among journals can be used to study clusters of journals as representations of the intellectual organization of the sciences. If the intellectual organization of the sciences is operationalized using journal structures, three theoretically important problems can be addressed:

1. In science studies, this operationalization of the intellectual organization of knowledge in terms of texts (journals), as different from the social organization of the sciences in terms of institutions and people, would enable us to explain the scientific enterprise as a result of these two interacting and potentially coevolving dimensions (5, 6, 7).

2. In science policy analysis, the question of whether a baseline can be constructed for measuring the efficacy of political interventions was raised by Kenneth Studer and Daryl Chubin (8; cf. 9, 10). Wolfgang van den Daele et al. distinguished between parametric steering, in terms of more institutional activities due to increased funding, and the relative autonomy and potential self-organization of scientific communication into specialties and disciplinary structures (11).

3. While journal Impact Factors are defined with reference to averages across the sciences (12, 13), important parameters of intellectual organization, such as publication and citation frequencies, vary among disciplines (14). In fact, publication practices across disciplinary divides are virtually incomparable (15, 16, 17). The Impact Factor is a global measure that does not take into account the intellectual structures in the database.
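
The program sketched in the passage quoted above, delineating fields from the citations of journals by journals, can be illustrated with a minimal sketch: build an aggregated journal-to-journal citation matrix, express each journal’s citing behavior as a profile, and group journals whose profiles are similar. The journal names, counts and threshold below are invented, and the single-link grouping is deliberately crude; it merely stands in for the clustering techniques used in the actual studies.

    from math import sqrt

    # Invented aggregated citation counts: matrix[i][j] = citations from journal i to journal j.
    journals = ["Physics A", "Physics B", "Cell Biology", "Molecular Biology"]
    matrix = [
        [50, 30,  1,  0],   # Physics A cites...
        [28, 60,  0,  2],   # Physics B cites...
        [ 1,  0, 70, 40],   # Cell Biology cites...
        [ 0,  1, 35, 55],   # Molecular Biology cites...
    ]

    def cosine(u, v):
        """Cosine similarity between two citing profiles."""
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

    # Crude single-link grouping: a journal joins a group if its citing profile is
    # sufficiently similar to any member already in that group.
    THRESHOLD = 0.8
    groups = []
    for i in range(len(journals)):
        for group in groups:
            if any(cosine(matrix[i], matrix[j]) > THRESHOLD for j in group):
                group.append(i)
                break
        else:
            groups.append([i])

    for group in groups:
        print([journals[i] for i in group])   # the two physics and the two biology journals cluster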

Mapping the data

De Solla Price conjectured that specialties would begin to exhibit ‘speciation’ when the carrying community grows larger than a hundred or so active scientists (18). Furthermore, the proliferation of scientific journals can be expected to correlate with this because new communities will wish to begin their own journals (4, 19). New journals are organized within existing frameworks, but the bifurcations and other network dynamics feed back on the historical organization to the extent that new fields of science and technology become established and existing ones reorganized.

De Solla Price’s dream of making scientometric mapping a relatively hard social science can, with hindsight, be considered as fundamentally flawed.

Whereas the variation is visible in the data, the selection mechanisms remain latent and can therefore only be hypothesized. On the one hand, these constructs are needed as dimensions for the mapping of the data. On the other hand, constructs remain ‘soft’; that is, open for debate and reconstruction. De Solla Price’s dream of making scientometric mapping a relatively hard social science (20) can, with hindsight, be considered as fundamentally flawed (21, 22). When both the data and the perspectives are potentially changing, the position of the analyst can no longer be considered as neutral (23).

Figure 1 – In this graph showing the number of journals founded (not surviving) as a function of date (NB, the two uppermost points are taken from a slightly differently based list), de Solla Price provides a graphic illustration of the exponential growth of the scientific journal literature since the first journals in 1665 (1).

Professor Loet Leydesdorff
Amsterdam School of Communications Research,
University of Amsterdam

Contact him directly

References:

(1) de Solla Price, D.J. (1961) Science Since Babylon. New Haven, USA: Yale University Press.
(2) de Solla Price, D.J. (1951) “Quantitative measures of the development of science”, Archives Internationales d’Histoire des Sciences, Vol. 14, pp. 85–93.
(3) de Solla Price, D.J. (1978) “Toward a model of science indicators”, In: Elkana, Y., Lederberg, J., Merton, R.K., Thackray A. and Zuckerman H. (Eds.) The Advent of Science Indicators, pp. 69–95. New York, USA: John Wiley and Sons.
(4) de Solla Price, D.J. (1965) “Networks of scientific papers”, Science, Vol. 149, pp. 510–515.
(5) Whitley, R.D. (1984) The Intellectual and Social Organization of the Sciences. Oxford, UK: Oxford University Press.
(6) Leydesdorff, L. (1995) The Challenge of Scientometrics: The Development, Measurement, and Self-Organization of Scientific Communications. Leiden, the Netherlands: DSWO Press, Leiden University.
(7) Leydesdorff, L. (1998) “Theories of citation?”, Scientometrics, Vol. 43, No. 1, pp. 5–25.
(8) Studer, K.E. and Chubin, D.E. (1980) The Cancer Mission. Social Contexts of Biomedical Research. Beverly Hills, USA: Sage.
(9) Leydesdorff, L. and van der Schaar, P. (1987) “The use of scientometric indicators for evaluating national research programmes”, Science & Technology Studies, Vol. 5, pp. 22–31.
(10) Leydesdorff, L., Cozzens, S.E. and van den Besselaar, P. (1994) “Tracking areas of strategic importance using scientometric journal mappings”, Research Policy, Vol. 23, pp. 217–229.
(11) van den Daele, W., Krohn, W. and Weingart, P. (Eds.) (1979) Geplante Forschung: Vergleichende Studien über den Einfluss politischer Programme auf die Wissenschaftsentwicklung. Frankfurt a.M., Germany: Suhrkamp.
(12) Garfield, E. (1972) “Citation analysis as a tool in journal evaluation”, Science, Vol. 178, No. 4060, pp. 471–479.
(13) Garfield, E. (1979) Citation Indexing: Its Theory and Application in Science, Technology, and Humanities. New York, USA: John Wiley and Sons.
(14) de Solla Price, D.J. (1970) “Citation measures of hard science, soft science, technology, and nonscience”, In: Nelson, C.E. and Pollock D.K., (Eds.) Communication among Scientists and Engineers, pp. 3–22. Lexington, MA, USA: Heath.
(15) Leydesdorff, L. (2008) “Caveats for the use of citation indicators in research and journal evaluation”, Journal of the American Society for Information Science and Technology, Vol. 59, No. 2, pp. 278–287.
(16) Cozzens, S.E. (1985) “Using the archive: Derek Price’s theory of differences among the sciences”, Scientometrics, Vol. 7, pp. 431–441.
(17) Nederhof, A.J., Zwaan, R.A., Bruin, R.E. and Dekker, P.J. (1989) “Assessing the usefulness of bibliometric indicators for the humanities and the social sciences: A comparative study”, Scientometrics, Vol. 15, pp. 423–436.
(18) de Solla Price, D.J. (1963) Little Science, Big Science. New York, USA: Columbia University Press.
(19) Van den Besselaar, P. and Leydesdorff, L. (1996) “Mapping change in scientific specialties: a scientometric reconstruction of the development of artificial intelligence”, Journal of the American Society for Information Science, Vol. 47, pp. 415–436.
(20) de Solla Price, D.J. (1978) “Editorial Statement”, Scientometrics, Vol. 1, No. 1, pp. 7–8.
(21) Wouters, P. and Leydesdorff, L. (1994) “Has Price’s dream come true: is scientometrics a hard science?”, Scientometrics, Vol. 31, pp. 193–222.
(22) Wouters, P. (1999) The Citation Culture. Amsterdam, the Netherlands: Unpublished Ph.D. Thesis, University of Amsterdam.
(23) Leydesdorff, L. (2006) “Can scientific journals be classified in terms of aggregated journal-journal citation relations using the journal citation reports?”, Journal of the American Society for Information Science & Technology, Vol. 57, No. 5, pp. 601–613.