My research into quantitative science studies over the past 25 years with my colleagues at the Centre for Science and Technology Studies (CWTS) strongly builds upon the pioneering work of Derek de Solla Price. Most, if not all, of the research topics, ideas and methodologies we have been working on were introduced and explored in his publications.

For instance, in the recent development of research performance assessment methodologies in the social sciences and humanities, de Solla Price’s analysis of “citation measures of hard science, soft science, technology, and nonscience” (1) plays a crucial role. Studies of national research systems and the production of national science indicators build upon de Solla Price’s proposals for a model and a comprehensive system of science indicators (2, 3). Current work on mapping the structure and development of scientific activity, and on identifying networks of scientific papers, individual researchers and groups, builds on de Solla Price’s papers “Networks of scientific papers” (4) and “The citation cycle” (5).

The citation cycle

“The citation cycle” is my favorite paper. It contains many ideas, analyses and suggestions for future research. In my view, this paper is an exemplar for genuine, creative and original bibliometric-scientometric analysis of a large database, such as Eugene Garfield’s Science Citation Index (6). One of our publications presented an update and a further extension of some elements of the citation cycle (7).

For instance, while de Solla Price found that, in 1980, a team of 2.5 authors published 2.2 papers, our later study in 2002 found that a team of 3.8 authors produced on average 2.8 papers. An in-depth analysis inspired by de Solla Price (5) categorized authors publishing in a year into continuants, movers, newcomers and transients. De Solla Price defined active scientists in a year as scientists who published in that year. But since active scientists do not necessarily publish papers in each year of their career, the new study proposed a new measure of the number of scientists active in a year based on the assumption that active scientists publish at least one paper every four years.
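The four-year activity window described above can be sketched in a few lines. This is a minimal illustration, not the study's actual method or data: the author records below are invented, and the exact window convention (a paper in the current year or any of the three preceding years) is an assumption.

```python
# Sketch of the "active scientist" measure: an author counts as active in
# year Y if they published at least one paper in the four-year window
# ending in Y (i.e., in years Y-3 .. Y). All data below is invented.

def active_scientists(pub_years_by_author, year, window=4):
    """Count authors with at least one paper in the `window` years up to `year`."""
    return sum(
        1 for years in pub_years_by_author.values()
        if any(year - window < y <= year for y in years)
    )

authors = {
    "A": [1995, 1999],        # publishes again in 1999: active that year
    "B": [1996],              # last paper 1996: drops out of the window by 2000
    "C": [1998, 1999, 2000],  # publishes continuously
}

print(active_scientists(authors, 1999))  # window 1996-1999: A, B, C -> 3
print(active_scientists(authors, 2000))  # window 1997-2000: A, C    -> 2
```

A simple per-year publication count would miss author A in 1996–1998 and author B in 1997–1999; the windowed measure keeps them in the active population as long as their publishing gap stays under four years.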


Publication productivity

Our research question was: did scientists’ publication productivity (defined as the number of published papers per scientist, one of the key parameters in the citation cycle) increase during the 1980s and 1990s? We found that, while the average scientist published more research articles during the period considered, from the global perspective introduced in de Solla Price’s citation cycle the overall publication productivity (defined as the total number of articles published by all authors in a year divided by the number of scientists active in that year) remained approximately constant over those two decades.

This paradox is explained by the fact that scientists collaborate more intensively, so that the teams authoring papers have grown larger. At the level of disciplines, however, basic and applied physics and chemistry tend to show an increase in publication productivity over the years, while the medical and biological sciences show a decline.
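The arithmetic behind the paradox can be illustrated with invented numbers (a sketch only, not the study's actual figures): when team sizes grow, each scientist appears on more papers, yet total papers per active scientist can stay flat.

```python
# Invented illustration of the productivity paradox: whole counts per author
# rise with team size, while overall productivity (total papers divided by
# active scientists) stays constant.

def productivity(n_scientists, n_papers, team_size):
    authorships = n_papers * team_size       # total author slots on all papers
    per_author = authorships / n_scientists  # avg papers a scientist appears on
    overall = n_papers / n_scientists        # papers per active scientist
    return per_author, overall

# Hypothetical earlier period: 1000 scientists, 500 papers, teams of 2.5
print(productivity(1000, 500, 2.5))  # (1.25, 0.5)
# Later period: teams grow to 3.8; per-author count rises to ~1.9,
# but overall productivity is still 0.5
print(productivity(1000, 500, 3.8))
```

The per-author figure is what a 'crude' publication count rewards; the overall figure is the system-level quantity that the study found approximately constant.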

A detailed interpretation of these outcomes, taking the effects of policy into account, goes beyond the scope of this contribution. Scientists may have successfully increased their individual publication output through more collaboration and authorship inflation, possibly stimulated by the use of ‘crude’ publication counts in research evaluation, without increasing their joint productivity.

An alternative interpretation holds that the amount of energy and resources absorbed by collaborative work may be so substantial that it held overall publication productivity back. More research is needed to further clarify the issues addressed. The main objective of this contribution is to emphasize how strongly these issues are related to de Solla Price’s citation cycle, and how an update, a longitudinal analysis and a further extension of his creative works may generate policy-relevant observations.

Dr. Henk F. Moed
Centre for Science and Technology Studies, Leiden University, the Netherlands



(1) de Solla Price, D.J. (1970) “Citation measures of hard science, soft science, technology, and nonscience”, In: Nelson, C.E. and Pollock, D.K. (Eds.) Communication Among Scientists and Engineers, pp. 3–22. Lexington, MA, USA: D.C. Heath and Company.
(2) de Solla Price, D.J. (1978) “Towards a model for science indicators”, In: Elkana, Y., Lederberg, J., Merton, R.K., Thackray, A., and Zuckerman, H. (Eds.) Toward a Metric of Science: The Advent of Science Indicators, pp. 69–95. New York, USA: John Wiley and Sons.
(3) de Solla Price, D.J. (July 1980) “Towards a comprehensive system of science indicators”, Conference on Evaluation in Science and Technology – Theory and Practice, Dubrovnik.
(4) de Solla Price, D.J. (1965) “Networks of scientific papers”, Science, Vol. 149, No. 3683, pp. 510–515.
(5) de Solla Price, D.J. (1980) “The citation cycle”, In: Griffith B.C. Key Papers in Information Science, pp. 195–210. White Plains, NY, USA: Knowledge Industry Publications.
(6) Garfield, E. (1964) “The citation index – a new dimension in indexing”, Science, Vol. 144, pp. 649–654.
(7) Moed, H.F. (2005) Citation Analysis in Research Evaluation. Dordrecht, the Netherlands: Kluwer.