Articles

Research Trends is an online magazine providing objective insights into scientific trends based on bibliometric analyses.

De Solla Price and the evolution of scientometrics

Has scientometrics changed over the last two-and-a-half decades? And would Derek de Solla Price have enjoyed the changes? Professor Wolfgang Glänzel answers our questions.

Wolfgang Glänzel is Professor of Quantitative Science Studies in the Faculty of Business and Economics at Katholieke Universiteit Leuven, Belgium. He is also Director of the Steunpunt O&O Indicatoren, housed within the Faculty of Economics and Applied Economics. The Steunpunt is an inter-university consortium of all Flemish universities; its mission is to develop a consistent system of indicators that the Flemish Government can use to quantify R&D efforts at Flemish universities, research institutes and industry.

Prof. Glänzel answers our questions about his memories of Derek de Solla Price and the changes that have taken place in bibliometrics over the last two-and-a-half decades.

RT: What are your memories of de Solla Price?

WG: I didn’t meet him personally. I studied mathematics in Budapest and joined Tibor Braun’s team in 1980. De Solla Price passed away in 1983, so there was unfortunately little opportunity to meet him. Everything I know about him comes from the literature and from the anecdotes of people who knew him personally. I was shocked by his unexpected passing and felt that his death marked the close of an important chapter in the field.

RT: What elements of de Solla Price’s work were the most influential in the field of scientometrics?

WG: He was one of the founders of scientometrics and he paved the way for future scientometric research. He published books and important papers that addressed fundamental issues for our field, such as how to move beyond methods and models borrowed from other fields towards a methodology specific to scientometrics.

De Solla Price proposed the growth model and studied scientometric transactions, i.e. the network of citations between scientific papers. He found that a frequently cited paper is likely to attract more new citations than one cited less often, and he built a model of this cumulative advantage phenomenon. He also conducted scientometric studies with policy implications and for research evaluation, thus opening the door to present-day evaluative bibliometrics.
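In schematic form (the notation here is ours, not de Solla Price’s original formulation), the two ideas can be written as

    N(t) = N_0 \cdot 2^{t/T}    % exponential growth of the literature, with doubling time T
    P(next citation goes to a paper with k citations) \propto k + c

where c is a small constant that gives as-yet-uncited papers a non-zero chance of attracting their first citation. The second expression is the cumulative advantage mechanism that produces the highly skewed citation distributions observed in practice.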

RT: How did de Solla Price’s work influence your own?

WG: His career as a scientist was an example to me of how to approach and conduct interdisciplinary research. De Solla Price had a Ph.D. in experimental physics, and then gained a second doctorate in the history of science. He founded a new discipline but also remained a prominent member of his own scientific community.

I also learned that scientometrics is much more than a mere umbrella for a diversity of tools used to measure the output of research. In several papers and lectures I have expressed my concerns regarding some recent developments in our field (1, 2).

There are several topics already tackled by de Solla Price that inspired me to continue his research or to answer unresolved questions. Among these are mathematical models for the cumulative advantage principle and for scientometric transactions, and the question of the obsolescence of scientific information in different fields.

RT: When you won the Derek de Solla Price Medal, Le Pair described your work as being broad as well as focused, a combination that was at the heart of de Solla Price’s research. What similarities would you draw from this?

WG: I’m afraid that I’m not objective enough to be able to answer that question.

RT: Twenty-five years after his passing, how do you think bibliometrics has changed and do you think de Solla Price would have enjoyed the new elements of the field?

WG: I think he would have enjoyed several new elements. First, scientometrics has evolved from an invisible college to an established field with its own scientific journals, conference series, an international academic society and institutionalized education.

In de Solla Price’s day, data processing for bibliometrics was still slow, expensive and limited. Access to bibliometric information has since been transformed by the development of information technology, and I think de Solla Price would have enjoyed this development. The World Wide Web would also have interested him: in the 1980s, networked information exchange was still in its infancy, and no one could have predicted the Web’s success.

Important bibliometric results have also been published since, and I think he would have enjoyed reading about these advances in the field. However, his dream that scientometrics would become a hard science has not yet come true, as discussed in “Has Price’s dream come true: is scientometrics a hard science?” (3)

I also see the uninformed use and misuse of bibliometric results. By uninformed use I mean that bibliometric data are unknowingly applied outside their proper context; by misuse I mean that the data are consciously presented and interpreted incorrectly or deliberately used in an inappropriate context. However, I believe the positive achievements of scientometrics over the past 25 years prevail. New elements such as open access, electronic publication and communication, and the extension of the bibliographic databases represent new challenges to be taken on by the scientometric community.

Professor Wolfgang Glänzel
Faculty of Business and Economics at Katholieke Universiteit Leuven, Belgium


References:

(1) Glänzel, W. and Schoepflin, U. (1994) “Little scientometrics – big scientometrics ... and beyond”, Scientometrics, Vol. 30, Nos. 2–3, pp. 375–384.
(2) Glänzel, W. (2008) “Seven myths in bibliometrics – About facts and fiction in quantitative science studies”, ISSI Newsletter, Vol. 4, No. 2, pp. 24–32.
(3) Wouters, P. and Leydesdorff, L. (1994) “Has Price’s dream come true: is scientometrics a hard science?”, Scientometrics, Vol. 31, No. 2, pp. 193–222.

How de Solla Price influenced my work

When Professor Leo Egghe met Derek de Solla Price in 1981, he had little idea of the influence de Solla Price would have on his informetrics career. Here, Egghe recalls how de Solla Price’s universal philosophy on the science of science has inspired his thinking.

I was fortunate enough to meet Derek de Solla Price at a lecture he gave in Brussels in 1981. At that time, I was at a crossroads in my career: after my Ph.D. in mathematics in 1978, I became chief librarian of the Limburgs Universitair Centrum (now Universiteit Hasselt), a position I still occupy. In 1983, together with the then chief librarian of the University of Antwerp, Prof. H. Vervliet, I prepared the foundation of a degree program in library and information science. In that year, I became a part-time professor in this field, and I still teach courses on Quantitative Methods in Library and Information Science and on Information Retrieval. After finishing a book on mathematics in 1984 (1), I switched to informetrics research. When I met de Solla Price, I was not yet an informetrician and had no idea of the influence he was going to have on my future career.

The science of science

It was not so much de Solla Price’s mathematical work that influenced me, as his universal philosophy on the science of science. His book Little Science, Big Science (2) describes growth distributions and size- and rank-frequency distributions of very different phenomena in information science, the physical world, linguistics, econometrics and so on. This book showed me that many of those phenomena have common laws and can be described in one framework, which I called Information Production Processes (IPPs) (3, 4). IPPs can be constructed far beyond information science, as de Solla Price explained (2). I defined an IPP as a system where one has ‘sources’ that have or produce ‘items’.

A classic bibliography is an example of an IPP. Authors have papers, yielding another example. But papers can also be sources, producing or receiving items as references or citations. Books are sources of their borrowings; words are sources (known as ‘types’ in linguistics) and their occurrences in the text are the items (‘tokens’ in linguistics). Beyond informetrics, as de Solla Price describes, we have communities (cities and villages) as sources and their inhabitants as items (demography), and in econometrics one can consider employees as sources and their production or salary as items (2).

This universality is not the only remarkable thing. De Solla Price notices that all these phenomena (or IPPs) also satisfy the same sociometric (informetric) laws:

  • exponential or S-shaped growth functions;
  • size-frequency functions (expressing the number of sources with a certain number of items) of power-law type, such as Lotka’s law (5); and
  • rank-frequency functions (expressing the number of items in the source at rank r, where sources are arranged in decreasing order of the number of items they have), also of power-law type but with a different exponent than in the size-frequency case, such as the laws of Zipf (linguistics) and of Mandelbrot and Pareto (econometrics).

Essentially, these are all the same laws and are equivalent to Lotka’s law.
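In symbols (a standard formulation of these laws; the notation is ours rather than a quotation from the sources above):

    f(n) = C / n^{\alpha}    % size-frequency (Lotka): number of sources with exactly n items
    g(r) = D / r^{\beta}     % rank-frequency (Zipf): number of items of the source at rank r

For pure power laws the equivalence is explicit: a Lotka exponent \alpha corresponds to a Zipf exponent \beta = 1/(\alpha − 1), so the classical case \alpha = 2 gives the familiar \beta = 1.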

Success breeds success

It is remarkable that while rank-frequency functions are studied in informetrics, linguistics and econometrics, only informetrics studies size-frequency functions, via Lotka’s law. De Solla Price introduced Lotka’s law into informetrics, and – although it is equivalent to the rank-frequency laws – the size-frequency function (Lotka’s law) is easier to work with since it does not use source rankings.

The universality of de Solla Price’s view of the science of science has influenced my entire informetrics career.

De Solla Price even introduces the econometric principle of ‘success breeds success’ (SBS) into informetrics, building on the earlier work of Nobel Prize-winner Herbert Simon (6, 7). SBS is the principle that (in my terminology) a new item is more likely to be produced by a source that already has many items than by a source with only a few. This leads de Solla Price to a partial explanation of Lotka’s law (7).
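The SBS mechanism is easy to simulate. The sketch below is our own minimal illustration of the principle (in the spirit of Simon’s urn scheme), not de Solla Price’s or Simon’s actual derivation: each new item founds a new source with probability p, and otherwise is credited to an existing source chosen with probability proportional to the items it already holds.

    import random
    from collections import Counter

    def simulate_sbs(n_items=100_000, p_new=0.1, seed=42):
        # Success breeds success: a new item founds a new source with
        # probability p_new; otherwise it is credited to the source of a
        # uniformly chosen existing item, i.e. to an existing source with
        # probability proportional to its current number of items.
        rng = random.Random(seed)
        owners = [0]                      # owners[i] = source index of item i
        n_sources = 1
        for _ in range(n_items - 1):
            if rng.random() < p_new:
                owners.append(n_sources)  # brand-new source gets its first item
                n_sources += 1
            else:
                owners.append(owners[rng.randrange(len(owners))])
        sizes = Counter(owners).values()  # number of items per source
        return Counter(sizes)             # f(n): number of sources with n items

    freq = simulate_sbs()
    print([(n, freq[n]) for n in sorted(freq)[:5]])

Crediting the source of a uniformly chosen existing item is exactly the proportional choice the principle requires, and for small p the resulting size-frequency distribution f(n) decays roughly as a power law, in line with Lotka’s law.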

More recently, de Solla Price’s work (8) has lent itself to research I am currently undertaking on the relation between productivity (number of papers) and collaboration (co-authorship). He indicates (in my terminology) that if, for a given author, one takes the IPP whose sources are his or her papers and whose items are the co-authors of each paper, one may find that researchers who collaborate more also produce more papers, a finding that seems to be confirmed in my recent work (in progress).

The universality of de Solla Price’s view of the science of science has influenced my entire informetrics career. Since 1985, I have worked so much with IPPs and Lotka’s law that I published a mathematically oriented book (9) in which Lotka’s law is used as an axiom from which many mathematical results in all subfields of informetrics follow.

Professor Leo Egghe
Universiteit Hasselt, Belgium, and Universiteit Antwerpen, Belgium


References:

(1) Egghe, L. (1984) “Stopping time techniques for analysts and probabilists”, London Mathematical Society Lecture Notes Series 100. Cambridge, UK: Cambridge University Press.
(2) de Solla Price, D.J. (1963) Little Science, Big Science. New York, USA: Columbia University Press.
(3) Egghe, L. (1989) The Duality of Informetric Systems with Applications to the Empirical Laws. Ph.D. Thesis, City University, London, UK.
(4) Egghe, L. (1990) “The duality of informetric systems with applications to the empirical laws”, Journal of Information Science, Vol. 16, No. 1, pp. 17–27.
(5) Lotka, A.J. (1926) “The frequency distribution of scientific productivity”, Journal of the Washington Academy of Sciences, Vol. 16, No. 12, pp. 317–324.
(6) de Solla Price, D.J. (1976) “A general theory of bibliometric and other cumulative advantage processes”, Journal of the American Society for Information Science, Vol. 27, pp. 292–306.
(7) Simon, H.A. (1957) “On a class of skew distribution functions”, In: Models of man: Social and Rational, Ch. 9. New York, USA: John Wiley and Sons.
(8) de Solla Price, D.J. and Beaver, D.B. (1966) “Collaboration in an invisible college”, American Psychologist, Vol. 21, pp. 1011–1018.
(9) Egghe, L. (2005) Power Laws in the Information Production Process: Lotkaian Informetrics. Oxford, UK: Elsevier.

Celebrating the legacy of de Solla Price

The relevance of Derek de Solla Price’s work may have taken a long time to be fully recognized, but 25 years after his death he is far from forgotten. Dr. Eugene Garfield looks back at his legacy.

Twenty-five years after his death, Derek de Solla Price is still explicitly cited in about 100 scholarly publications each year. The implicit citation of his work is undoubtedly much greater. Rarely does a week go by without someone referring to his aphorism that “80 percent to 90 percent of the scientists who ever lived are alive now.” Having just reread my own 1984 tribute to Derek, I can say that there is little I could add to those remarks to further demonstrate the impact of his work.

That impact will increase as the field of scientometrics continues to experience its own exponential growth. And the award of the Derek de Solla Price Medal will be a regular reminder of his pioneering role. For those who wish to know more about his influence on me and on several generations of citation analysts, bibliometricians and science policy enthusiasts, I refer them to my personal Web page, where his presence and influence are immediately apparent. Of particular interest is the Citation Classic commentary Derek wrote a few months before his death about his most cited work, Little Science, Big Science.

Delayed recognition

From a long-term historical perspective it is worth noting that de Solla Price’s career exemplifies delayed recognition. His 1951 paper on the exponential growth of science, “Quantitative measures of the development of science”, published in the relatively obscure Archives Internationales d’Histoire des Sciences (1), was essentially ignored.

Over a decade later there was still little or no recognition of his seminal observation. Even after Science Since Babylon was published in 1961, there was only a trickle of recognition. Then, in 1963, his future Citation Classic, Little Science, Big Science (2) was published. However, another two decades would pass before citations to his work would reach their peak.

Although Derek was a few years older than me, when he died it felt like I had lost a younger brother. In many ways Derek was a teenager till the end. He had an impish personality. I often had to chastise him for inappropriate behavior for which he always immediately apologized. Derek’s untimely death denied him the opportunity of using citation analysis to support nominations for the Nobel Prize. He had just been elected to the Royal Swedish Academy of Sciences. Unbeknownst to either of us at the time, the librarian of that prestigious institution had been using the Science Citation Index to provide documentation support to all nominations submitted to the Nobel committees.

To demonstrate the citation impact of Derek’s published work, we recently updated several HistCite collections.

Data with far-reaching potential

In closing, since it is unlikely that most readers will have access to the printed volumes, it is worth calling special attention to Derek’s foreword to volume 3 of Essays of an Information Scientist (3). In it he recalls the day we met when I appeared before the Science Information Council of the National Science Foundation (NSF) seeking support to create the experimental Genetic Citation Index. NSF refused the request but NIH funded the study.

Notwithstanding the refusal, I personally was immediately struck by the realization that citation links represented a radically new kind of data with far-reaching potential. Though we couldn’t predict with absolute certainty how much a citation index might be used, or even to what purpose, it seemed clear to me that such an index must be developed. It also seemed clear to me that such an index would have a good chance of becoming a commercial success, instead of becoming a permanent burden on the Federal budget; though a new immigrant to the land of Federal fiscal matters, I was able to recognize that prospect as being nearly unique.

Bit by bit we have begun to understand how citations work, and in the course of this, there has emerged a new sort of statistical sociology of science that has thrown light on many aspects of the authorship, refereeing and publication of scientific research papers. The Society for Social Studies of Science now has an annual meeting devoted to this new method of understanding science that has grown, almost as an accidental by-product, from the indexing technology developed by the Institute for Scientific Information. Our initial intuitive perceptions have turned out to be correct.

Dr. Eugene Garfield


References:

(1) de Solla Price, D.J. (1951) “Quantitative measures of the development of science”, Archives Internationales d’Histoire des Sciences, Vol. 14, pp. 85–93.
(2) de Solla Price, D.J. (1963) Little Science, Big Science. New York, USA: Columbia University Press.
(3) de Solla Price, D.J. (1977–1978) “Foreword”, Essays of an Information Scientist, Vol. 3, pp. v–ix.

Why did you cite…?

More than 913,700 French articles are referenced in Scopus. Of these, “Note preliminaire sur le traitement des angiomes vertebraux par vertebroplastie acrylique percutane” (1) is ranked as the most cited article, with more than 500 citations to date. To gain some insight into what makes a successful non-English paper, we asked the authors and those […]

More than 913,700 French articles are referenced in Scopus. Of these, “Note preliminaire sur le traitement des angiomes vertebraux par vertebroplastie acrylique percutane” (1) is ranked as the most cited article, with more than 500 citations to date.

To gain some insight into what makes a successful non-English paper, we asked the authors, and those who have cited the paper frequently, why they thought this paper had such an impact. The unanimous response was that the article is cited so frequently because it represented a landmark in the field and was the first to describe a technique that was adopted internationally in the years thereafter.

One of the authors, Professor Deramond from CHU Amiens, says: “It is the first article describing the original vertebroplasty technique […]. A considerable number of articles […] focus on this minimally invasive therapeutic method […] [hence the article] is cited systematically.”

Frequent citers agree with this. Dr. Pflugmacher, from the University of Berlin, says that “the article is cited several times because it is the origin of vertebroplasty.” Dr. Liebermann of the Cleveland Clinic, Dr. Burton from the University of Texas and Dr. Jensen from the University of Virginia expressed very similar views.

Effect of language on diffusion

It seems, however, that the fact that the article was written in French was rather an obstacle to its early diffusion. Professor Deramond notes that “it wasn’t until 1997 and the publication of an article in the American Journal of Neuroradiology that vertebroplasty became really recognized and spread worldwide.” One of the other authors, Professor Le Gars from CHU Amiens, stresses: “This article is often cited because it is the first to describe the vertebroplasty technique, devised in our hospital and now used worldwide. This is what explains the high number of cites, the usage of the French language in an Anglo-Saxon world being rather a penalizing factor.”

Professor Belkoff, a frequent citer from the Johns Hopkins Medical Center, adds: “Vertebroplasty would have become the mainstream practice that it is perhaps 10 years earlier, had the article been written in English. If it were not for Jacques Dion, a French Canadian, hearing about vertebroplasty presented in French at a meeting of radiologists, the introduction of vertebroplasty to the US may have taken even longer. Jacques brought back what he learned to UVA, where he and colleagues Mary Jensen, John Mathis and Avery Evans used it and started spreading the word.”


The misuse of metrics can harm science

When Eugene Garfield devised the Impact Factor (IF) in 1955 to help select journals for the Science Citation Index, he had no idea that ‘impact’ would become so controversial. The IF ranks journals based on how many citations they receive over a particular period. However, in recent years, certain misuses of the IF have been […]

When Eugene Garfield devised the Impact Factor (IF) in 1955 to help select journals for the Science Citation Index, he had no idea that ‘impact’ would become so controversial.

The IF ranks journals according to the citations their recent articles receive over a particular period. However, in recent years, certain misuses of the IF have come to light, notably its adoption as a performance-measurement tool for individuals. Garfield himself has noted that the IF was never intended to assess individuals (1).
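For reference, the standard two-year Impact Factor of a journal J in year Y is the quotient

    IF_Y(J) = C / P, where
    C = the citations received in year Y by items that J published in years Y−1 and Y−2, and
    P = the number of citable items J published in years Y−1 and Y−2.

A journal whose 2006–2007 output of 100 citable items attracted 250 citations to those items in 2008 would thus have a 2008 IF of 2.5. Nothing in this quotient refers to any individual author, which is precisely why it transfers so poorly to the assessment of people.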

Assessing individuals

In a letter to Nature, Professor David Colquhoun of the Department of Pharmacology, University College London, voiced his concerns about the way IFs are being misused to assess people (2). According to him, it is all part of a worrying trend to manage universities like businesses, measuring scientists against key performance indicators. “IFs are of interest only to journal editors. They are a real problem when used to assess people,” he says.

This becomes clear when one looks behind the figures. Bert Sakmann may have won a Nobel Prize in 1991, but under some current assessment criteria, he would have been unemployed long before that happened. From 1976 to 1985, he published between zero and six papers per year (an average of 2.6). Yet despite this low output, he produced scientifically important papers during these years.

Problem of perception

The real problem may be one of perception. Colquhoun says, “No one knows how far IFs are being used to assess people, but young scientists are obsessed with them. Whether departments look at IFs or not is irrelevant; the reality is that people perceive this to be the case and work towards getting papers into good journals rather than writing good papers. This distorts science itself: it is a recipe for short-termism and exaggeration.”

People believe Impact Factors are being used to assess people and work towards getting papers into good journals rather than writing good papers.

He continues, “Good departments don’t measure applicants or staff by arbitrary calculations at all. All universities should select by references and assessment of papers, and those that already do so should publicly declare this to ease the fears of applicants.”

In an essay by Eugene Garfield published on the Thomson Scientific website, the company itself addresses the scope of the IF and the potential for misuse. “Thomson Scientific does not depend on the Impact Factor alone in assessing the usefulness of a journal, and neither should anyone else,” it says (4). It recognizes that while the IF has in recent years been increasingly used in the process of academic evaluation, the metric provides only an approximation of the prestige of the journals in which individuals have published and is not an assessment tool for the individuals themselves.

Metrics will never provide a holistic picture of an individual scientist or journal, and they should certainly not dictate the direction of science. However, they can function as an initial indicator, providing a starting point for further discussion or assessment.

References:

(1) Garfield, E. (2005) “The agony and the ecstasy: the history and meaning of the Journal Impact Factor”, International Congress on Peer Review and Biomedical Publication, Chicago, September 16, 2005.
(2) Colquhoun, D. (2003) “Challenging the tyranny of impact factors”, Nature, Correspondence, 423, 479.
(3) Colquhoun, D. (2007) “How should universities be run to get the best out of people?”, Physiology News, Vol. 69, pp. 12–14.
(4) Garfield, E. “The Thomson Scientific Impact Factor”, essay published on the Thomson Scientific website.
Photograph of David Colquhoun © Mark Thomas

English as the international language of science

Since the end of the Second World War, English has become the established language of scholarly communication, but not without controversy. We examine some of the reasons and the consequences for local-language publishing.

Since the end of the Second World War, English has become the established language of scholarly communication, but not without controversy. In this article we examine some of the reasons for the rise of English and its consequences in the context of national trends in English and local-language publishing.

The underlying reason for the rise of English as the language of science remains a topic of debate, but most frequently it is acknowledged as an accident of 20th century political and economic history (1). The British Empire, which spanned the globe from the late 16th to the early 20th century, was the largest empire in history and made English a truly international language. Today it is the first language of about 400 million people in 53 countries, and the second language of as many as 1.4 billion more. English was therefore well positioned to become the default language of science in the wake of the disruptive wars of the first half of the 20th century.

Shifting language preferences

Whatever the reason, the use of English as the scholarly lingua franca has become self-reinforcing, with academic reward schemes in many countries placing great emphasis on publication in international (mostly English-language) journals. Figure 1 shows the ratio of the number of journal articles published by selected nations’ researchers in English to those published in each nation’s official language, in three consecutive four-year periods.

The Netherlands has always had a strong tradition of publishing in English, and so the ratio of English to Dutch journal articles is quite high and shows no clear trend in this analysis. Conversely, Italy’s ratio has risen dramatically over the period of analysis, suggesting a very strong impetus for Italian authors to publish in English. More modest, but equally important, trends away from local-language authorship are repeated in Germany, France, Spain and the Russian Federation.

Figure 1 – Ratio of the number of journal articles published by researchers in English to those in the official language in six European countries, 1996–2007. Source: Scopus.
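The measure behind Figure 1 is straightforward to reproduce from article-level records. The sketch below is a hypothetical illustration (the records and field names are our invention; an actual Scopus export has a different layout):

    from collections import defaultdict

    # Hypothetical article records: (country, language, publication year).
    records = [
        ("Italy", "English", 2005), ("Italy", "Italian", 2005),
        ("Netherlands", "English", 1998), ("Netherlands", "Dutch", 1998),
        # ... one tuple per journal article
    ]

    LOCAL = {"Netherlands": "Dutch", "Italy": "Italian", "Germany": "German",
             "France": "French", "Spain": "Spanish", "Russian Federation": "Russian"}
    PERIODS = [(1996, 1999), (2000, 2003), (2004, 2007)]

    def english_to_local_ratios(records):
        # (country, period) -> [English-language count, local-language count]
        counts = defaultdict(lambda: [0, 0])
        for country, language, year in records:
            period = next((p for p in PERIODS if p[0] <= year <= p[1]), None)
            if period is None:
                continue
            if language == "English":
                counts[(country, period)][0] += 1
            elif language == LOCAL.get(country):
                counts[(country, period)][1] += 1
        return {key: eng / loc for key, (eng, loc) in counts.items() if loc}

    print(english_to_local_ratios(records))

A rising ratio for a country across the three periods is what Figure 1 reports for Italy; a flat, high ratio is what it reports for the Netherlands.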

Reference:

(1) Tardy, C. (2004) “The role of English in scientific communication: lingua franca or Tyrannosaurus rex?”, Journal of English for Academic Purposes, Vol. 3, No. 3, pp. 247–269.

Journal publication: why the Netherlands is so prolific

The share of world articles is dominated by those countries with the most researchers. However, the geographical distribution of the journals’ publication country does not follow the same pattern. The Netherlands, which is the third largest journal publisher, is a notable case in point. Why is this?

It is generally known that the share of world articles is dominated by the countries with the most researchers. This is unsurprising and has been the case for many years. However, the geographical distribution of the journals’ publication country does not follow the same pattern, as Table 1 reveals.
Table 1 – Geographical distribution of journal publication by country (table not reproduced here).

The Netherlands, which ranks third on the list behind the United States and the United Kingdom, is a particularly notable example of this differential, especially when one considers the size of the country’s population.

According to these data, the Netherlands publishes over 9.0% of all journals in the world. An initial explanation for this is that several of the world’s largest scientific, technical and medical publishers, including Elsevier, Springer and Taylor & Francis, all have offices in the Netherlands. This skews the figures somewhat since the country of publication is linked to the publishers’ head office location and not necessarily to where the journal is physically published. However, this does not explain why these companies chose the Netherlands as their publishing location.

Galileo

Galileo’s last and greatest work, Discorsi e Dimostrazioni Matematiche, published in 1638 by Elzevir, is considered the first important discussion of modern physics.

Location, location, location

A look back at the history of publishing in the Netherlands goes some way towards answering this second question. For centuries, the Netherlands was a haven for scholars escaping religious or creative persecution in their own countries. Famous scholars such as Erasmus, John Locke, John Milton, Descartes and Galileo published their work in the Netherlands rather than in their home countries because of its liberal publishing infrastructure. One of the first publishers in the Netherlands, founded in 1580, was Elzevir; its name was adopted in 1880 by one of the largest science and technology publishers, Elsevier.

By the 19th century, the German language had become the standard scientific language. In many disciplines, knowledge of German was a basic requirement internationally until well into the 20th century. German publishers were well established in the market and at a commercial peak. However, with the rise of Hitler’s Nazi regime in the 1930s, many of Germany’s best scientists fled to neighboring countries as well as the United States.

Moving west

This emigration of scientists led the Noord Hollandsche Uitgevers Maatschappij, which later became part of Elsevier, to believe that the language of science would shift from German to English, a prediction that proved to be true. Elsevier started to publish the work of European scientists in English, one of the first such works being Paul Karrer’s Organic Chemistry in 1937.

After the Second World War, the German publishing industry was in tatters and what remained of it, mainly located in Leipzig and Berlin, found itself within the Soviet occupation zone and later in the GDR. As a consequence there was a movement westwards: the German National Library moved from Leipzig to Frankfurt and Springer from Berlin to Heidelberg.

Dutch publishers took advantage of the situation, and the Netherlands’ location between continental Europe and the English-speaking UK and US made it the perfect center of the new international science-publishing world that emerged after the war. Other international publishing houses also saw the opportunities the Netherlands offered and established offices there. The result is a high concentration of publishing companies relative to the size of the country and its number of researchers, and thus a high number of published journals attributed to it.

Many thanks to Professor Hans Roosendaal for his help with the historical aspects of this article.

Journal analysis

What can journal analysis tell us? Research Trends takes a closer look at a collection of French medical journals and a collection of physics journals.

Read more >


Journal evaluation is becoming increasingly important across academia, from scientists who have been invited to participate in the editorial processes of a journal to librarians who are considering which journals to make available to their users.

Many factors play a part in the evaluation of a journal, and these will be different for various groups of users. At the same time, evaluation usually needs to be performed in the context of other journals in a similar field. In the past, journal evaluation took a lot of time and effort. However, recognizing the growing demand for user-friendly evaluation tools, Scopus has developed the Scopus Journal Analyzer, which displays transparent, objective results for quick and intuitive comparison of up to 10 journals. In addition, the data are updated every two months, which means users have access to the most up-to-date information available.

A clearer picture

To evaluate a journal thoroughly, it is important to look at how it has been performing over time. It is also important to compare it with similar journals to understand the results in context.

To take an example, Presse Médicale is a multidisciplinary French medical review journal that began publication in 1893 under the title La Presse Médicale and later continued as Nouvelle Presse Médicale. It receives most of its citations from itself and from three other French review journals: Revue de Médecine Interne, Revue du Praticien and Revue de Geriatrie.

It is relatively simple to compare the publishing trends of these four journals using the Scopus Journal Analyzer. For instance, Figure 1 shows that the annual output of three of the journals remained roughly steady over the period 1996–2007, with only Presse Médicale reducing the amount of content that it publishes. The low point at the right-hand side of this and the other graphs reflects the fact that the data for 2008 are as yet incomplete.

Despite this drop in output, Presse Médicale first increased and then maintained the level of citations that it attracts; Revue de Médecine Interne shows a similar increase in total annual citations despite its static content output (see Figure 2).

We can also combine these two metrics in the Trend Line, which shows trends in average journal citation per article (see Figure 3). The Trend Line is calculated by dividing the total citations received in a calendar year by the total documents published in that same year. The citations are counted regardless of when the item being cited was published.
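To make the calculation concrete, here is a minimal Python sketch of the Trend Line as described above. The yearly counts are invented for illustration only; they are not real Scopus data for any journal.

    # Trend Line for one journal: citations received in a calendar year
    # divided by the documents published in that same year. The cited
    # items themselves may have been published in any earlier year.

    # Hypothetical yearly counts, invented for this example.
    citations_received = {2005: 1250, 2006: 1380, 2007: 1410}
    documents_published = {2005: 420, 2006: 300, 2007: 280}

    trend_line = {
        year: citations_received[year] / documents_published[year]
        for year in sorted(citations_received)
    }

    for year, value in trend_line.items():
        print(f"{year}: {value:.2f} citations per article")

Note that because the denominator counts only the current year’s output while the numerator counts citations to material of any age, a journal that cuts its output but keeps attracting citations to its back catalogue, as Presse Médicale did, will see its Trend Line rise.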

Figure 3 clearly shows that Presse Médicale and Revue de Médecine Interne are attracting more citations while Revue du Praticien and Revue de Geriatrie have maintained a steady rate.

We spoke to the publishing editor of Presse Médicale to find out whether any editorial changes took place during the period shown that might have affected citation accrual. “Presse Médicale used to be a weekly, then a fortnightly publication. Since 2006, it’s been monthly, so naturally the number of articles decreased,” says Olivier Chabot. “We also have a very exacting editorial board and the rejection rate has increased over the last four years. We now have a rejection rate of 55% for papers and 80% for clinical cases. The quality of our papers could explain why our citation rate has remained steady, even though the quantity has decreased. For the last two years, we have published more papers in English. Perhaps these articles are more highly cited. We also increased our self-citation.”

To take another example, Nuclear Physics B, which commenced publishing in 1967, focuses on original research in high-energy physics and quantum field theory. It is read by particle physicists, field theoreticians and statistical and mathematical physicists. Most of its citations come from Physical Review D, Journal of High Energy Physics, Nuclear Physics B and Physics Letters B.

Again, these journals can be compared in the Scopus Journal Analyzer. Physical Review D is registering an annual increase in citations (see Figure 4). Nuclear Physics B has the highest average journal citation per article: 77.15 in 2007 (see the Trend Line in Table 1). Physics Letters B also shows an upward trend in average journal citation per article (see Figure 5).

“Nuclear Physics B has consistently maintained its high standards despite the reduction in the number of papers being published in particle physics,” says Publishing Director David Clark of his journal.
Fig 1

Figure 1 – The annual output of the four journals under review remained relatively stable over the period 1996–2007, with only Presse Médicale reducing the amount of content that it publishes.

Fig 2

Figure 2 – Despite reducing its output, Presse Médicale has first increased and then maintained the level of citations that it attracts. Revue de Médecine Interne has also experienced a steady increase in citations.

Fig 3

Figure 3 – The Trend Line, which shows average journal citation per article, clearly reveals that Presse Médicale and Revue de Médecine Interne are attracting more citations while Revue du Praticien and Revue de Geriatrie have maintained a steady rate.

Fig 4

Figure 4 – Of the journals under review, only Physical Review D is registering an annual increase in citations.

Fig 5

Figure 5 – Physics Letters B has registered a steady increase in average journal citation per article.

Table 1

Table 1 – Nuclear Physics B has the highest average journal citation per article: 77.15 in 2007.

The h-index and its variants: which works best?

Numerous variants of the h-index have been developed since Jorge Hirsch first proposed the h-index in 2005. Yet, while increasingly refined bibliometric tools can only be a good thing, are so many indices necessary and valuable, or just confusing?

Read more >


The h-index was originally proposed by Jorge Hirsch in 2005 to quantify the scientific output of an individual researcher. It was conceived as an improvement on previous indices, which tended to focus on the impact of the journals in which the researcher had published, and so assumed that the author’s performance was equivalent to the journal’s average. If a scientist’s publications are ranked in order of the number of lifetime citations they have received, the h-index is the highest number, h, of their papers that have each received at least h citations.
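The definition translates directly into a few lines of code. Here is a minimal Python sketch; the citation counts in the example are invented for illustration.

    def h_index(citations):
        # h-index: the largest h such that at least h papers
        # have each received at least h citations.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Five papers with 10, 8, 5, 4 and 3 lifetime citations:
    # four of them have at least 4 citations each, so h = 4.
    print(h_index([10, 8, 5, 4, 3]))  # prints 4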

Quantity versus impact

1. Quantity of the productive core: the h, g and h(2) indices and the m-quotient describe the productive core of a scientist’s output and show the number of papers in the core.

2. Impact of the productive core: the a, m, r, ar and hw indices show the impact of papers in the core. This is closer to peer-assessment results.

Room for improvement

The h-index quickly gained widespread popularity, largely because it is conceptually simple, easy to calculate and gives a robust estimate of the broad impact of a scientist’s cumulative research, explains Dr. Lutz Bornmann, a post-doctoral researcher in bibliometrics, scientometrics and peer-review research at ETH Zurich, the Swiss Federal Institute of Technology (1).

However, the h-index has received some criticism, most notably:

  • It is not influenced by citations beyond what is required for entry to the h-defining class. This means that it is insensitive to one or several highly cited papers in a scientist’s paper set, which are the papers that are primarily responsible for a scientist’s reputation.
  • It is highly dependent on the length of a scientist’s career, meaning only scientists with similar years of service can be compared fairly.
  • A scientist’s h-index can only rise (with time), or remain the same. It can never go down, and so cannot indicate periods of inactivity, retirement or even death.

Variants of the h-index that have been developed in an attempt to solve one or more of its perceived shortcomings include the m-quotient, g-index, h(2)-index, a-index, m-index, r-index, ar-index and hw-index. Hirsch himself proposed the m-quotient, which divides the h-index by the number of years a scientist has been active, thereby addressing the problem of longer careers correlating with higher h scores.
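Two of these variants are simple enough to sketch here in Python. The m-quotient follows the definition given above; for the g-index we rely on Leo Egghe’s published definition (the largest g such that the g most-cited papers have together received at least g² citations), which is background knowledge rather than something stated in this article, and the example figures are invented.

    def m_quotient(h, years_active):
        # Hirsch's m-quotient: the h-index divided by the number
        # of years the scientist has been publishing.
        return h / years_active

    def g_index(citations):
        # Egghe's g-index: the largest g such that the g most-cited
        # papers have together received at least g*g citations.
        ranked = sorted(citations, reverse=True)
        running_total, g = 0, 0
        for rank, cites in enumerate(ranked, start=1):
            running_total += cites
            if running_total >= rank * rank:
                g = rank
        return g

    print(m_quotient(h=20, years_active=10))  # 2.0
    print(g_index([10, 8, 5, 4, 3]))          # 5: the top 5 papers total 30 >= 25

Unlike the h-index, the g-index keeps rising as the most-cited papers accumulate further citations, which is precisely the sensitivity to highly cited work that the h-index lacks.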

Two types of index

For Bornmann, the value of a bibliometric index lies in how closely it predicts the results of peer assessment. In a paper published in the Journal of the American Society for Information Science and Technology, of which he was co-author (1), he analyzed nine indices to find out whether any of them improves upon the original h-index, focusing in particular on their ability to predict peer assessment accurately.

He discovered that there are two basic types of index: those that better represent the quantity of the productive core (defined as the papers that fall into the h-defining class), and those that better represent the impact of the productive core (see sidebar). In a further study to validate these findings, Bornmann tested his results against 693 applicants to the Long-Term Fellowship program of the European Molecular Biology Organization, Heidelberg, Germany. The study confirmed these two basic types.

This is useful, as the indices that better represent the impact of the productive core agree with the opinions of the applicants’ peers. “The results of both studies indicate that there is an empirical incremental contribution associated with some of the h-index variants that have been proposed up to now; that is, with the variants that depict the impact of the papers in the productive core,” Bornmann says.

A balanced approach

Several researchers in the field have suggested that bibliometricians would do well to use several indices when assessing a scientist’s output and impact. Bornmann agrees, adding that the results of his research indicate that the best way to combine the different indices is to ensure that one chooses an index from each category.

“After analysis of all indices, matching the results to the real results of peer assessment, using two indices – one to measure output and another to measure impact – is the closest to peer-assessment results,” he explains. “In the future, we definitely need fewer h-index variants and more studies that test their empirical application with data sets from different fields.”

Dr. Bornmann’s full paper is listed in the reference below.

Reference:

(1) Bornmann, L., Mutz, R., and Daniel, H.D. (2008) “Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine”, Journal of the American Society for Information Science and Technology, Vol. 59, No. 5, pp. 830–837.

United States’ share of research output continues to decline

The US has long led the global knowledge economy, but the last decade has seen its dominant position weakening. What is causing this decline?

Read more >


An extensive body of research has consistently shown that the US share of scientific articles published in peer-reviewed journals has declined over recent decades (see Figure 1). This decline has typically been ascribed to the developing knowledge economies of China and the four Asian Tiger nations (Taiwan, Singapore, Hong Kong and South Korea), and has therefore not been considered a policy concern (1). However, since the 1990s the absolute number of articles published by US-based researchers has plateaued (see Figure 2).

This flattening of scholarly output was confirmed by “Science and Engineering Indicators” (SEI) 2008 (2), published in January by the US National Science Board. The biennial report contrasts the finding with strong annual growth in US research funding over the same period, from US$200 billion in 1997 to around US$340 billion (or 2.6% of GDP) in 2006. A companion policy statement, “Research and Development: Essential Foundation for U.S. Competitiveness in a Global Economy” (3), nevertheless calls for a “strong national response” in the form of further increases in US government funding for basic research.

Despite these trends in article output, the SEI 2008 report demonstrates that the US continues to produce the best-cited research in the world, as indicated by its dominant share of articles in the top 1% of cited articles across all fields. This finding is borne out by comparing the h-index of the US with those of selected world regions (see Figure 3).

By any measure, the US remains the world’s dominant scientific nation. The question facing government policymakers in the age of knowledge-based economies is: for how much longer?

Fig 1

Figure 1 – Share of world articles published by US researchers, 1997–2007.
Source: Scopus

Fig 2

Figure 2 – Number of articles published by US researchers (light blue) versus world (dark blue), 1997–2007.
Source: Scopus

Fig 3

Figure 3 – h-index of the US versus selected global regions. Here, the h-index is the largest number h of documents published in the period 1996–2006 that each received at least h citations during the same period. Source: SCImago SJR – SCImago Journal & Country Rank

References:

(1) Hill, D., Rapoport, A.I., Lehming, R.F., and Bell, R.K. (2007) “Changing U.S. output of scientific articles: 1988–2003”, National Science Foundation special report.
(2) “Science and Engineering Indicators 2008”, National Science Board report.
(3) “Research and Development: Essential Foundation for U.S. Competitiveness in a Global Economy”, National Science Board report.

  • Elsevier has recently launched the International Center for the Study of Research (ICSR) to help create a more transparent approach to research assessment. Its mission is to encourage the examination of research using an array of metrics and a variety of qualitative and quantitative methods.