Articles

Research Trends is an online magazine providing objective insights into scientific trends based on bibliometric analyses.

Bibliometrics comes of age

Almost 40 years ago, when bibliometrics emerged as a field in its own right, no one could have anticipated how developments in technology and research administration would push bibliometrics to center stage in research assessment. Research Trends asks Wolfgang Glänzel, of the Expertisecentrum O&O Monitoring (Centre for R&D Monitoring, ECOOM) in Leuven, how he sees this remarkable “Perspective Shift”.

Wolfgang Glänzel

Research Trends (RT): Once a sub-discipline of information science, bibliometrics has developed into a prominent research field that provides instruments for evaluating and benchmarking research performance. You call this “the Perspective Shift”. Has this Perspective Shift changed the approach of bibliometric research within the community itself; i.e. has it changed the starting points for research projects, shifted the focus of research topics and literature, and so on?
Wolfgang Glänzel (WG): Such a shift can indeed be observed. One must of course distinguish between genuine research projects and projects commissioned, for instance, by national research foundations, ministries or European Framework programs.

Most commissioned work in our field is policy-related and focused on research evaluation. Since this has become one of the main funding pillars of bibliometric centers and, in turn, requires an appropriate methodological foundation, the shift has had a measurable effect on the research profile of the field.

The change is also mirrored by the research literature. In a paper with Schoepflin (2001), we found a specific change in the profile of the journal Scientometrics that supports this statement: 20 years after the journal was launched, case studies and methodological papers have become dominant.

RT: Does the currently available range of bibliometric indicators, including the Impact Factor (IF), h-index, g-index and Eigenfactor, accommodate the new reality of bibliometrics and its applications?
WG: Improvements and adjustments within the bibliometric toolkit are certainly necessary to meet new challenges. This also implies development of new measures and “indicators” for evaluating and benchmarking research performance.

Without a doubt, the quantity and quality of bibliometric tools have increased and improved considerably during the last three decades. The plethora of new metrics, however, most of which are designed to substitute or supplement the h-index and the IF, are not always suited to serve this purpose. Further methodological and mathematical research is needed to distinguish useful tools from “rank shoots”. Time will show which of these approaches will survive and become established as standard tools in our field.

In general, though, I am positive that a proper selection of indicators and methods is sufficient to solve most of today’s bibliometric tasks. And, as these tasks become increasingly complex, each level of aggregation will need specific approaches and standards as well. There will not be any single measure, no single “best” indicator, that could accommodate all facets of the new reality of bibliometrics and its applications.

RT: What do you consider the challenges ahead for bibliometrics and how do you think this will or should be reflected by bibliometric indicators?
WG: There are certainly some major obstacles in bibliometrics, and I will limit my comments to three of them.

First, scientometrics was originally developed to model and measure quantitative aspects of scholarly communication in basic research. The success of scientometrics has led to its extension across the applied and technical sciences, and then to the social sciences, humanities and the arts, despite communication behavior differing considerably between these subject fields. Researchers in the social sciences and humanities use different publication channels and have different citation practices. This requires a completely different approach, not simply an adjustment of indicators.

Of course, this is not a challenge for bibliometrics alone. The development of new methods goes along with the creation of bibliographic databases that meet the requirements of bibliometric use. This implies an important opportunity for both new investments and intensive interaction with information professionals.

The second challenge is brought by electronic communication, the internet and open-access publishing. Electronic communication has dramatically changed scholarly communication in the last two decades. However, the development of web-based tools has not always kept pace with these changes. The demands for proper documentation, compatibility and “cleanness” of data, as well as for reproducibility of results, remain challenges.

Thirdly, scholarly communication – that is, communication among researchers – is not the only form of scientific communication. Modeling and measuring communication outside research communities, in order to gauge the social impact of research and scientific work, can be considered the third important task that bibliometricians will face in the near future.

RT: Inappropriate or uninformed use of bibliometric indicators by laymen, such as science policymakers or research managers, can have serious consequences for institutions or individuals. Do you think bibliometricians have any responsibility in this respect?
WG: In most respects, I could repeat my opinion published 15 years ago in a paper with Schoepflin entitled “Little Scientometrics – Big Scientometrics ... and Beyond”. Rapid technological advances and the worldwide availability of preprocessed data have resulted in the phenomenon of “desktop scientometrics” proclaimed by Katz and Hicks in 1997. Today, even a “pocket bibliometrician” is not an absurd nightmare anymore; such tools are already available on the internet.

Obviously, the temptation to use cheap or even free bibliometric tools that do not require grounded knowledge or skills is difficult to resist. Uninformed use of bibliometric indicators has brought our field into discredit, and it has consequences for the evaluated scientists and institutions as well. Of course, this concerns us. Bibliometricians must not remain silent about the inappropriate use of their research results in science policy and research management.

Bibliometricians should possibly focus more on communicating with scientists and end-users. It is certainly important to stress that bibliometrics is not just a service but, first and foremost, a research field that develops, provides and uses methods for the evaluation of research. Moreover, it should be professionals, not clients or end-users, who select the appropriate methodology underlying evaluation studies.

Despite some negative experiences, the growing number of students and successful PhDs in our field gives me hope that the uninformed use of bibliometric indicators will soon become a thing of the past.

RT: In your opinion, what has been the most exciting bibliometric development of the last decade and why was it so important?
WG: There were many exciting bibliometric developments in the last decade. If I had to name only one, I would probably choose the h-index. Not because it was such a big breakthrough – it is actually very simple, yet ingenious – but because its effects have been so far-reaching. The h-index has brought back our original spirit of the pioneering days by stimulating research and communication on this topic. In fact, scientists from various fields are returning to scientometrics as an attractive research field.

RT: What do you see as the most promising topic, metric or method for the bibliometric future? What makes your heart beat faster?
WG: There’s no particular topic I think is “most promising”, but the fact that scientometrics has become an established discipline certainly makes my heart beat faster. Now and into the future, the necessity of doing research in our field and of teaching professional skills in bibliometrics is becoming more widely recognized.



Sparking debate

Bibliometric indicators have become the standard way to rank journals, but each metric is calibrated to favor specific features. This means that using just one metric can provide only a limited perspective. Research Trends speaks to Prof. Henk Moed about how his new metric offers a context-based ranking of journals.

A major drawback of bibliometric journal ranking is that in the search for simplicity, important details can be missed. As with all quantitative approaches to complex issues, it is vital to take the source data, methodology and original assumptions into account when analyzing the results.

Across a subject field as broad as scholarly communication, assessing journal impact by citations to a journal in a two-year time frame is obviously going to favor those subjects that cite heavily, and rapidly. Some fields, particularly those in the life sciences, tend to conform to this citation pattern better than others, leading to some widely recognized distortions. This becomes a problem when research assessment is based solely on one global ranking without taking its intrinsic limitations into account.

Henk Moed

Context matters
In response to the gap in the available bibliometric toolkit, Prof. Henk Moed of the Centre for Science and Technology Studies (CWTS), in the Department (Faculty) of Social Sciences at Leiden University, has developed a context-based metric called source-normalized impact per paper (SNIP).

He explains that SNIP takes context into account in five ways. First, it takes a research field’s citation frequency into account to correct for the fact that researchers in some fields cite each other more than in other fields. Second, it considers the immediacy of a field, or how quickly a paper is likely to have an impact. In some fields, it can take a long time for a paper to start being cited, while other fields continue to cite old papers for longer. Third, it accounts for how well the field is covered by the underlying database; basically, whether enough of a given subject’s literature is actually in the database. Fourth, delimitation of a journal’s subfield is not based on a fixed classification of journals, but is tailor-made to take a journal’s focus into account, so that each journal has its proper surrounding subject field. And fifth, to counter any potential for editorial manipulation, SNIP is only applied to peer-reviewed papers in journals.

CWTS

The Centre for Science and Technology Studies (CWTS), based at Leiden University, conducts cutting-edge basic and applied research in the field of bibliometrics, research assessment and mapping. The results of this research are made available to science-policy professionals through CWTS B.V.

However, Moed was not simply filling a market gap: “I thought that this would be a useful addition to the bibliometric toolbox, but I also wanted to stimulate debate about bibliometric tools and journal ranking in general.” Moed is at pains to explain that SNIP is not a replacement for any other ranking tool because: “there can be no single perfect measure of anything as multidimensional as journal ranking – the concept is so complex that no single index could ever represent it properly.” He continues: “SNIP is not the solution for anyone who wants a single number for journal ranking, but it does offer a number of strong points that can help shed yet another light on journal analysis.”

He adds, however, that contextual weighting means SNIP is offering a particular view, and it is important to take this into account when using it. He strongly believes that no metric, including SNIP, is useful alone: “it only really makes sense if you use it in conjunction with other metrics.”

Use the right tool

This leads to Moed’s wider aim: by providing a new option and adding to the range of tools available for bibliometricians, he hopes to stimulate debate on journal ranking and assessment in general. He explains: “All indicators are weighted differently, and thus produce different results. This is why I believe that we can never have just one ranking system: we must have as wide a choice of indicators as possible.” Like many in the bibliometric community, Moed has serious concerns about how ranking systems are being used.

It is also very important to combine all quantitative assessment with qualitative indicators and peer review. “Rankings are very useful in guiding opinions, but they cannot replace them,” he says. “You first have to decide what you want to measure, and then find out which indicator is right in your circumstances. No single metric can do justice to all fields and deliver one perfect ranking system. You may even need several indicators to help you assess academic performance, and you certainly need to be ready to call on expert opinions.”

In fact, a European Commission Expert Group on Assessment of University-based Research is working from the same assumption: that research assessment must take a multidimensional view.

Henk Moed

Henk F. Moed has been a senior staff member at the Centre for Science and Technology Studies (CWTS), in the Department (Faculty) of Social Sciences at Leiden University, since 1986. He obtained a PhD in Science Studies at the University of Leiden in 1989. He has been active in numerous research areas, including bibliometric databases and bibliometric indicators. He has published over 50 research articles and is editor of several journals in his field. He won the Derek de Solla Price Award in 1999. In 2005, he published the monograph Citation Analysis in Research Evaluation (Springer), one of the very few textbooks in the field.

Moed believes that what is really required is an assessment framework in which bibliometric tools sit alongside qualitative indicators in order to give a balanced picture. He expects that adoption of a long-term perspective in research policy will become increasingly important, alongside development of quantitative tools that facilitate this. SNIP fits well into this development. “But we must keep in mind that journal-impact metrics should not be used as surrogates for the actual citation impact of individual papers or research group publication œuvres. This is also true for SNIP.”

More information means better judgment
Moed welcomes debate and criticism of SNIP, and hopes to further stimulate debate on assessment of scholarly communication in general. “I realize that having more insight into the journal communication system is beneficial for researchers because they can make well-informed decisions on their publication strategy. I believe that more knowledge of journal evaluation, and more tools and more options, can only help researchers make better judgments.”

His focus on context is also intended to both encourage and guide debate. “Under current evaluation systems, many researchers in fields that have low citation rates, slow maturation rates or partial database coverage – such as mathematics, engineering, the social sciences and humanities – find it hard to advance in their careers and obtain funding, as they are not scoring well against highly and quickly citing, well covered fields, simply because citation and database characteristics in their fields are different. I hope SNIP will help in illuminating this, and that a metric that takes context into account will be useful for researchers in slower citing fields, as they can now really see which journals are having the most impact within their area and under their behavioral patterns.”

Useful links

SNIP
CWTS
“Measuring contextual citation impact of scientific journals”, Henk Moed


A question of prestige

Prestige measured by quantity of citations is one thing, but when it is based on the quality of those citations, you get a better sense of the real value of research to a community. Research Trends talks to Prof. Félix de Moya about SCImago Journal Rank (SJR), which ranks journals based on where their citations originate.

Félix de Moya

Research Trends (RT): SCImago Journal Rank (SJR) has been described as a prestige metric. Can you explain what this means and what its advantages are?
Félix de Moya (FdM): In a social context, prestige can be understood as an actor’s ability or power to influence the remaining actors, which, within the research evaluation domain, can be translated as a journal’s ability to place itself at the center of scholarly discussion; that is, to achieve a commanding position in researchers’ minds.

Prestige metrics aim to highlight journals whose standing depends not exclusively on the number of endorsements – citations – they receive from other journals, but on a combination of the number of endorsements and the importance of each of them. Considered in this way, the prestige of a journal is distributed among the journals it is connected to through citations.
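To make the idea concrete, the toy calculation below is a hypothetical sketch, not the actual SJR formula: the prestige values, journal names and citation counts are all assumed. It simply shows how weighting each citation by the prestige of the citing journal can rank a journal with fewer, better-placed citations above one with more citations from less influential titles.

# Illustrative sketch only -- not the published SJR formula. All values are assumed.
# It shows that endorsements (citations) weighted by the prestige of the citing
# journal can tell a different story than raw citation counts.

# Hypothetical prestige scores of three citing journals.
prestige = {"journal_A": 0.9, "journal_B": 0.2, "journal_C": 0.1}

# Citations received by two target journals, broken down by citing journal (assumed counts).
citations_to_X = {"journal_A": 3}                  # few citations, but from a prestigious source
citations_to_Y = {"journal_B": 5, "journal_C": 5}  # more citations, from less prestigious sources

def weighted_endorsement(citations, prestige):
    """Sum citations weighted by the prestige of the citing journal."""
    return sum(count * prestige[src] for src, count in citations.items())

print(round(weighted_endorsement(citations_to_X, prestige), 2))  # 2.7
print(round(weighted_endorsement(citations_to_Y, prestige), 2))  # 1.5 -- fewer quality-weighted endorsements despite more citations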

SCImago

The SCImago Journal & Country Rank is a portal that includes journal and country scientific indicators developed from information in the Scopus database. These indicators can be used to assess and analyze scientific domains.

This platform takes its name from the SCImago Journal Rank (SJR) indicator, developed by SCImago from the widely known Google PageRank algorithm. This indicator shows the visibility of the journals contained in the Scopus database from 1996 onward.

SCImago is a research group from the Consejo Superior de Investigaciones Científicas (CSIC) and the universities of Granada, Extremadura, Carlos III (Madrid) and Alcalá de Henares. The group conducts research into information analysis, representation and retrieval using visualization techniques.

RT: I understand that SJR is based on the premise that not all citations are equal (much like Google’s PageRank algorithm treats links as more or less valuable). Can you explain why it is so important to consider the value of each citation and what benefits this brings to your final ranking?
FdM: When assigning a value to a journal, the sources of its citations are not the only important consideration. It is also essential to control for the effects of self-citation and other practices that have nothing to do with scientific impact. This method achieves that because citation “quality” cannot be manipulated unless one has control over the whole citation network, which is, of course, impossible.

RT: Why did you decide to develop a new evaluation metric? Were you meeting a perceived market gap or seeking to improve on current methods?
FdM: As researchers working in bibliometric and scientometric fields, we have studied research-evaluation metrics for many years and we are aware of their weaknesses and limitations. The success of the PageRank algorithm and other Eigenvector-based methods to assign importance ranges to linked information resources inspired us to develop a methodology that could be applied to journal citation networks. It is not only a matter of translating previous prestige metrics to citation networks; deep knowledge of citation dynamics is needed in order to find centrality values that characterize the influence or importance that a journal may have for the scientific community.

RT: Since you launched your portal in November 2007, what level of interest have you seen from users?
FdM: The SJR indicator is provided at the SCImago Journal & Country Rank website, which has 50,000 unique visits per month. We attribute this traffic in large part to the open availability of the metrics. And, what is more important to us in terms of value, is the increasing number of research papers that use SJR to analyze journal influence.

RT: I understand that SJR is calculated for many journals that currently have no rank under other metrics, such as those published in languages other than English, from developing countries, or those representing small communities or niche topics of research. The advantages to these journals – and to the researchers who publish in them – are obvious, but what about the advantages to science in general?
FdM: In our opinion, SJR is encouraging scientific discussion about how citation analysis methods can be used in journal evaluation. And this is really happening: in fact, all the new methodological developments in Eigenvector-based performance indicators are encouraging this healthy debate.

However, unlike many other Eigenvector-based performance indicators, SJR is built on the entire Scopus database rather than across a sample set of journals. This has important methodological implications. In addition, SJR’s results are openly available through the SCImago Journal & Country Rank evaluation platform, which makes SJR a global framework to analyze and compare the findings, and allows researchers and users worldwide to reach their own conclusions.

Félix de Moya

Félix de Moya has been a professor in the Library and Information Science Department at the University of Granada since 2000. He obtained a PhD in data structures and library management at the University of Granada in 1992. He has been active in numerous research areas, including information analysis, representation and retrieval by means of visualization techniques. He has been involved in innovative teaching projects, such as “Developing information systems for practice and experimentation in a controlled environment” in 2004 and 2006, and “Self-teaching modules for virtual teaching applications” in 2003.

Full resumé

RT: What particular benefits does SJR bring to the academic community? How can researchers use SJR to support their publishing career?
FdM: Following the reasoning above, SJR is already being used for real research evaluations. Researchers and research groups are using SJR to measure research achievements for tenure and career advancement, and research managers are paying increasing attention to it because it offers a comprehensive and widely available resource that helps them design methods for evaluating research. Universities worldwide are, for example, using SJR as a criterion for journal assessment in their evaluation processes.

RT: One of the main criticisms leveled at ranking metrics is that their simplicity and supposed objectivity are so seductive that more traditional methods of ranking, such as speaking to researchers and reading their papers, are in danger of being completely superseded by ranking metrics. What is your position?
FdM: Ideally, whenever a quantitative measure is involved in research-performance assessment, it should always be supported by expert opinion. Unfortunately, this is not always possible due to the nature of some specific evaluation processes and the resources allocated to them. In cases where the application of quantitative metrics is the only way, efforts should be made to design the assessment criteria and reach consensus among all stakeholders on a combination of indicators and sources that constitutes fair assessment parameters.

RT: Finally, why do you think the world needs another ranking metric?
FdM: The scientific community is becoming accustomed to the availability of several journal indices and rankings, and to the idea that no single indicator can be used in every situation. Many new metrics have been released in recent years, and it is necessary to analyze the strengths and weaknesses of each of these. When a new methodology solves some of the well-known problems of prior metrics, it is certainly needed.

In addition, the research community is moving away from traditional binary assessment methods for journals. My research group believes that new metrics should be oriented toward identifying levels or grades of journal importance, especially considering the rapid increase in scientific sources, which means metrics are frequently calculated on entire universes of sources rather than on samples. In this scenario, we need a measure that can discern journal “quality” in a source where a huge number of publications coexist.

Useful links

SJR
SCImago
Free journal-ranking tool enters citation market: Nature news and comment
Wikipedia – SJR


New perspectives on journal performance

Bibliometric indicators have brought great efficiency to research assessment, but not without controversy. Bibliometricians themselves have long warned against relying on a single measure to assess influence, while researchers have been crying out for transparency and choice. The incorporation of additional metrics into databases offers more options to everyone.

Research has long played an important role in human culture, yet its evaluation remains heterogeneous as well as controversial. For several centuries, review by peers has been the method of choice to evaluate research publications; however, the use of bibliometrics has become more prominent in recent years.

Bibliometric indicators are not without their own controversies (1, 2) and recently there has been an explosion of new metrics, accompanying a shift in the mindset of the scientific community towards a multidimensional view of journal evaluation. These metrics have different properties and, as such, can provide new insights on various aspects of research.

Measuring prestige

SCImago is a research group led by Prof. Félix de Moya at the Consejo Superior de Investigaciones Científicas. The group is dedicated to information analysis, representation and retrieval by means of visualization techniques, and has recently developed SCImago Journal Rank (SJR) (3). This takes three years of publication data into account to assign relative scores to all of the sources (journal articles, conference proceedings and review articles) in a citation network, in this case journals in the Scopus database.

Inspired by the Google PageRank™ algorithm, SJR weights citations by the SJR of the citing journal; a citation from a source with a relatively high SJR is worth more than a citation from a source with a relatively low SJR. The results and methodology of this analysis are publicly available and allow comparison of journals over a period of time, and against each other.
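The sketch below illustrates the underlying idea of iterative, PageRank-style prestige propagation over a small, invented citation network. It is a simplified illustration only: the journal names, citation counts and damping factor are assumptions, and the published SJR methodology adds further refinements (such as normalization by article counts) that are omitted here.

# A minimal PageRank-style sketch of prestige propagation in a journal citation
# network. Simplified illustration only -- not the published SJR algorithm.

# Hypothetical citation counts: cites[a][b] = citations from journal a to journal b.
cites = {
    "J1": {"J2": 10, "J3": 5},
    "J2": {"J1": 2, "J3": 8},
    "J3": {"J1": 1, "J2": 1},
}
journals = list(cites)
d = 0.85                                    # damping factor, as in PageRank (assumed)
scores = {j: 1.0 / len(journals) for j in journals}

for _ in range(50):                         # iterate until the scores stabilize
    new_scores = {}
    for j in journals:
        # Each citing journal passes on a share of its prestige proportional to
        # the fraction of its outgoing citations that point to j.
        inflow = sum(
            scores[src] * cites[src].get(j, 0) / sum(cites[src].values())
            for src in journals
        )
        new_scores[j] = (1 - d) / len(journals) + d * inflow
    scores = new_scores

print(scores)  # journals cited by high-scoring journals end up with higher scores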

Accounting for context

Another new metric based on the Scopus database is Source Normalized Impact per Paper (SNIP) (4), the brainchild of Prof. Henk Moed at the Centre for Science and Technology Studies (CWTS) at Leiden University. SNIP takes into account characteristics of the source’s subject field, especially the frequency at which authors cite other papers in their reference lists, the speed at which citation impact matures, and the extent to which the database used in the assessment covers the field’s literature.

SNIP is the ratio of a source’s average citation count per paper in a three-year citation window over the “citation potential” of its subject field. Citation potential is an estimate of the average number of citations a paper can be expected to receive in a given subject field. Citation potential is important because it accounts for the fact that typical citation counts vary widely between research disciplines, tending to be higher in life sciences than in mathematics or social sciences, for example.

Citation potential can also vary between subject fields within a discipline. For instance, basic research journals tend to show higher citation potentials than applied research or clinical journals, and journals covering emerging topics often have higher citation potentials than periodicals in well-established areas.
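As a rough illustration of the ratio described above, the sketch below computes a SNIP-like value from a journal’s citations per paper and an assumed citation potential. In Moed’s method the citation potential is itself derived from the reference lists of citing papers; here it is simply supplied as a number, and all figures are hypothetical.

# Sketch of the ratio described above, with hypothetical inputs. In the actual
# SNIP method the citation potential is estimated from the reference lists of
# citing papers; here it is passed in as a ready-made number.

def citations_per_paper(citations_in_window, papers_in_window):
    """Average citations per paper over the three-year window."""
    return citations_in_window / papers_in_window

def snip_like(citations_in_window, papers_in_window, citation_potential):
    """Citations per paper divided by the field's citation potential."""
    return citations_per_paper(citations_in_window, papers_in_window) / citation_potential

# A hypothetical journal: 1,200 citations to 400 papers, in a field where a
# typical paper can expect about 2.5 citations.
print(snip_like(1200, 400, 2.5))   # 3.0 / 2.5 = 1.2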

More choices

SNIP and SJR, using the same data source and publication window, can be seen as complementary to each other: SJR can be primarily perceived as a measure of prestige and SNIP as a measure of impact that corrects for context, although there is some overlap between the two.

Both metrics offer several new benefits. For a start, they are transparent: their respective methodologies have been published and made publicly available. These methodologies are community driven, answering the express needs of the people using the metrics. The indicators also account for the differences in citation behavior between different fields and subfields of science. Moreover, the metrics will be updated twice a year, giving users early indication of changes in citation patterns. Furthermore, they are dynamic indicators: additions to Scopus, including historical data, will be taken into account in the biannual releases of the metrics. And lastly, both metrics are freely available, and apply to all content in Scopus.

It should be emphasized that although the impact or quality of journals is an aspect of research performance in its own right, journal indicators should not replace the actual citation impact of individual papers or sets of research group publications. This is true for both existing and new journal metrics.

The fact that SJR and SNIP are relatively new additions to the existing suite of bibliometric indicators is part of their strength. Both build upon earlier metrics, taking the latest thinking on measuring impact into account without being hindered by a legacy at odds with modern publication and citation practices. Their unique properties – including transparency, public availability, dynamism, field normalization and a three-year publication window – mean they offer a step forward in citation analysis and thus provide new insights into the research landscape.

References:

(1) Corbyn, Z. (June 2009) “Hefce backs off citations in favour of peer review in REF”, Times Higher Education Supplement
(2) Corbyn, Z. (August 2009) “A threat to scientific communication”, Times Higher Education Supplement
(3) de Moya, F. (December 2009) “The SJR indicator: A new indicator of journals' scientific prestige”, arXiv
(4) Moed, H. (November 2009) “Measuring contextual citation impact of scientific journals”, arXiv

Jorge Hirsch: the man behind the metric

Four years ago, the h-index burst onto the bibliometric scene, sparking an explosion of studies on the metric itself, its potential use in different contexts, and a host of variant metrics on the same theme. But the man who shares his initial with the index, Professor Jorge Hirsch, is a physicist, not a bibliometrician. Research Trends goes direct to the source to find out where the h-index came from.

Read more >


The h-index, proposed in 2005, is the largest number h such that h of an author's papers have each received at least h citations. The letter 'h' stands for 'highly cited'. It has already become one of the most widely used metrics for research evaluation, and has been adopted by bibliometricians and non-bibliometricians alike. Professor Jorge Hirsch, whose academic career in physics has taken him from Buenos Aires to Chicago to San Diego, talks to Research Trends about where it all started.
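The definition above translates directly into a few lines of code. The following sketch computes the h-index from a list of per-paper citation counts; the citation counts in the example are hypothetical.

def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank          # at least `rank` papers have `rank` or more citations
        else:
            break
    return h

# Hypothetical publication record with seven papers:
print(h_index([25, 18, 9, 6, 4, 2, 1]))   # 4 – four papers have at least 4 citations each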

Research Trends (RT): What triggered your interest in bibliometrics?
Professor Jorge Hirsch (JH): There were two main reasons: I had trouble getting papers accepted in journals with the highest Impact Factors because of the controversial nature of my research. Fortunately, there were journals with lower Impact Factors that did accept my papers. Nonetheless, they were well cited, meaning other researchers found them useful.
A criterion often used in evaluating research achievement was to count papers published in high Impact-Factor journals; I wanted to provide an alternative criterion.

Secondly, I was on committees where I had to evaluate and compare research achievements of candidates for academic positions at my institution. I felt that too much weight was often placed on subjective criteria – such as letters of recommendation – rather than objective ones.

RT: How are bibliometrics perceived by physicists?
JH: Opinions are wide ranging: some hate them, some love them, and some have mixed feelings. There seems to be a strong correlation between how physicists perceive bibliometric indicators and how highly they rank with them as individuals. I imagine this is the case in other disciplines, too.

RT: How did you come up with the h-index?
JH: I have always paid a lot of attention to citations. If somebody writes a lot of papers that aren’t cited, it is very difficult to judge whether those papers have any value. In exceptional cases – for example, when research is very novel and not yet understood by the community – it does. But in most cases, un-cited papers are and remain irrelevant. So the number of papers an author writes is not a good indicator of the research achievement of that individual. The cumulative total number of citations for an individual is often not very useful either, because currently most research is collaborative and an author may receive a lot of citations for papers in which his/her role was not very important.

In response, I tried to look carefully at the entire citation record of the individual I was evaluating – that is, at the citation numbers for a large number of his/her papers. This is both time consuming and often inconsistent between candidates, so I wanted to devise an indicator that could be applied simply and consistently, and reflected achievement as much as possible.

Looking at the citation index of many physicists, I came up with the h-index in 2003 and started applying it to physicists I knew, immediately finding a strong correlation between my subjective opinion of them and the value of their h-index. I shared the idea with colleagues, several of whom gave me very positive feedback. Two years later, I decided to write a paper on it.

RT: Did you foresee the influence that the h-index would have on academia?
JH: I had not worked in bibliometrics before and was not totally familiar with the literature on the subject. I had recently read an article on bibliometrics by S. Redner in Physics Today (June 2005) that I found very interesting, and it made me realize how important people find these issues. But I had no idea how my paper would be received, nor whether it would be publishable in a scientific journal.

So I am certainly surprised and happy that my work has been well received. I am especially pleased that it’s attracted attention across all scientific disciplines, not just in physics or even natural sciences. I have some concern, however, that the h-index may sometimes be misused by over-relying on it, although I don't know of any specific instances.

RT: Do you intend to publish further work in bibliometrics?
JH: Yes. Although it is not the main focus of my research at present, I would like to understand the issues better and contribute to the subject.

RT: What do you think of the use of bibliometric indicators for evaluation purposes (e.g. grants, tenure, career advancement, funding, etc.)?
JH: I certainly think bibliometric indicators should play a role in evaluation, keeping in mind that there is a danger of over-reliance on them. Especially in life-changing decisions such as granting or denying tenure, the role of bibliometric indicators should be limited, and complemented by detailed analysis of the candidate and direct evaluation of the scientific content of their research. Such analysis should be especially thorough in cases where there is a large discrepancy between the direct evaluation and the collective evaluation of the scientific community as reflected in the bibliometric indicators.

I believe bibliometric indicators can be particularly useful in aiding decisions on distributing grant support; although it should be kept in mind that non-mainstream research can be undervalued by bibliometric indicators, and could still be highly deserving of support. Bibliometric indicators should always be used alongside other indicators and good judgment.

Useful links

Hirsch’s original paper: An index to quantify an individual's scientific research output

Iranian universities pushing ahead

Iran is steadily publishing more papers and attracting an increasing number of international citations. Is the Middle East on the brink of a scientific revival? Research Trends investigates.

Read more >


Europe may have eclipsed the Middle East during the Renaissance, but as the number of publications from Iran grows, a revival seems to be gathering pace. It has been suggested that this may be related to the importance that Iran attaches to the development of nuclear technology. Another reason could be the positive effects of reformist president Mohammad Khatami, who has shown a strong commitment to higher education (1).

In a recent study (2), Zouhayr Hayati and Saeideh Ebrahimy analyzed the scientific output produced by institutes and organizations in Iran, motivated by the observation that the “recent policy of government officials to increase participation has substantially increased the number of Iranian scholars in international journals.”

They compared universities to research institutes and other organizations and found that there was no difference in the citation impact of the papers produced by the three groups, but there was a difference in quantity: universities produce more papers.

Productivity reaps citations

Using Scopus data, Research Trends identified the top-five prolific and cited Iranian universities and institutes in 2007 (see Tables 1 and 2 respectively).

Top-five prolific institutes (number of articles in 2007):
1. University of Tehran – 2,006
2. Sharif University of Technology – 1,122
3. Daneshgahe Azad Eslami – 1,011
4. Daneshgahe Tarbiat Modares – 879
5. Amirkabir University of Technology – 746

Table 1 – Scientific output of the most prolific institutes in Iran in 2007. Source: Scopus

Top-five cited institutes (citations, two-year rolling):
1. University of Tehran – 1,960
2. Daneshgahe Tarbiat Modares – 1,260
3. Sharif University of Technology – 1,135
4. Daneshgahe Azad Eslami – 1,027
5. Shiraz University – 778

Table 2 – Number of citations in 2007 to publications from 2005 and 2006 for the most-cited institutes in Iran. Source: Scopus

There is little difference between the two Tables; the most productive institutes are typically also the most cited.

Indeed, Hayati and Ebrahimy show a positive correlation between an institute’s scientific output and the number of citations for all three groups (Pearson’s correlation = 0.94). They also found that the average number of citations per article – a measure of the impact these articles have had in the scientific community – was higher for more productive institutes (Pearson’s correlation = 0.21).

When trying to replicate these correlations with Scopus data, we investigated articles published in 2005 and 2006, and citations to those articles in 2007. We did not distinguish between the three groups of institutions. We found a very strong positive correlation between article output and citations received (0.94), but this can hardly be considered surprising; as the number of articles written increases, it is a given that the number of citations will also increase.

To show that the number of citations per article rises as the number of articles that are published increases, there would need to be a positive correlation between output and citations per article. In Hayati and Ebrahimy’s study the Pearson’s correlation was low, and in this present study it is lower still, at a mere 0.0002. Taken together, this suggests that no such relationship between productivity and citation impact exists for universities and research institutes in Iran.
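For readers who want to reproduce this kind of check on their own data, here is a minimal sketch of the two correlations discussed above. The institute figures in the example are made up for illustration and are not the Scopus numbers used in this study.

# Made-up institute figures, purely to show the calculation:
# (articles published in 2005-2006, citations received in 2007) per institute.
institutes = [(1800, 1700), (1100, 1000), (950, 1030), (800, 760), (700, 540)]

def pearson(xs, ys):
    """Pearson's correlation coefficient, from its textbook definition."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

articles = [a for a, _ in institutes]
citations = [c for _, c in institutes]
citations_per_article = [c / a for a, c in institutes]

print(pearson(articles, citations))              # strong: more output, more total citations
print(pearson(articles, citations_per_article))  # much weaker: output says little about per-paper impact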

Attracting international attention

When looking at international collaboration, we see the same pattern. If an institute publishes many papers, the number of international collaborations is also high (Pearson’s correlation = 0.73). However, when we look at the correlation between the number of papers and the percentage of articles that are written in collaboration with international partners, the correlation becomes less convincing (Pearson’s correlation = 0.53).

In a broader context, Iran as a whole is on the right track. Figure 1 illustrates how the number of Iranian articles published has shown year-on-year growth of 25% over the last 12 years.

Figure 1 – Number of articles from Iran published between 1996 and 2007. Source: Scopus

Figure 2 shows how citations to Iranian research have also increased over the same time period, and that this increase cannot solely be explained by increased self-citations from Iran. Internationally, Iranian research is being cited more and more.

Figure 2 – Percentage of self-citations for Iran as a rolling two-year measure (citations in 2007 to articles published in 2005 and 2006). Source: Scopus

Findings in both the article by Hayati and Ebrahimy and the present study show that Iranian institutes are on the right track when it comes to increasing the total number of articles and the total number of citations. Relatively speaking, the number of citations per Iranian article remains constant, as there is no strong correlation between increased output and the number of citations received per article. As global perceptions of Iranian science shift over the coming years, we may see Iran begin to take its place among the scientific nations of the world.

References:

(1) Editorial 'Revival in Iran' (August 17, 2006) Nature, Issue 442, pp. 719–720
(2) Hayati Z. and Ebrahimi S. (2009) 'Correlation between quality and quantity in scientific production: A case study of Iranian organizations from 1997 to 2006', Scientometrics, Vol. 80 issue 3, pp. 625-636

Small countries lead international collaboration

The global nature of many of science’s most pressing challenges demands greater international collaboration. Research Trends looks at how different countries measure up and finds that smaller nations are leading the way.

Read more >


Recent research has shown that international research collaboration is growing rapidly (1). This is unsurprising given that many of the most pressing challenges in science are global in nature (2). Think of climate change or the H1N1 flu virus: these clearly cross borders and demand a global response. Analyzing data on international collaborative article output by country reveals that researchers in smaller countries carry out proportionally more international research than those in larger countries (see Table 1).

Professor Jean-Claude Thill from the Department of Geography and Earth Sciences at UNC Charlotte explains: “There seems to be an inverse relationship between the degree of internationalization and the size of the country. Small countries offer fewer opportunities for interaction within their borders and therefore present a strong incentive (push factor) for international collaboration. Conversely, large countries offer internally plenty of research collaboration opportunities.”

Professor Richard Sternberg from the University of Washington discusses the particular situation of the USA in this ranking: “In Europe, where many countries are tied together in a union, when a French scientist does field work with a Spanish scientist on a beach near the French/Spanish border and they publish a paper together, it’s considered international collaboration. In America, when a scientist from Oregon does field work with a scientist from North Carolina on a beach on the outer banks of Carolina (5,000km away from Oregon) and they publish a paper together, it’s not considered international collaboration.”

Professor Markus Fischer from the University of Bern, Switzerland – the country that ranked first for international collaboration – agrees: “My first idea is that small countries have higher outside collaboration”. Switzerland occupies first place, even in comparison to smaller countries. Professor Fischer suspects that additional factors, such as high overall output, higher-quality research and some cultural and/or language differences may explain some of the remaining variation.

Funding cross-border research

Funding issues can also play a part, encouraging internationalization in some regions while stifling it in others. Professor Sternberg says: “In the European Community, scientific research money is dedicated to fund collaborative research projects between scientists from different member states. The US government does not have such a mandate, per se.”

Professor Thill agrees: “The structure of national research funding agencies in the USA is such that there are few funding opportunities for cross-national research.” And, even where opportunities do exist, it can take a long time before research can even begin. Professor Sternberg explains that in his experience, “it took at least two, and usually more, years of planning and negotiating to get funded.”

Internationalism as national policy

Ranking second in our table is Chile. Atilio Bustos González, Director Sistema de Biblioteca from the Pontificia Universidad Católica de Valparaíso in Chile, is not at all surprised by Chile’s high ranking: “The research community in Chile is small, with just 2.96 researchers per 1,000 citizens of working age. Therefore, international collaboration is mandatory. We even have a national agency of research and universities, CONICYT, to stimulate international collaboration.”

Part of this high level of international collaboration can be attributed to astrophysics, one of the main areas of output and impact of Chilean research. Bustos González explains: “European South Observatory and Cerro Tololo (USA Observatory) are the main astrophysical installations in the southern hemisphere. American and European researchers work together with Chile on projects financed by these governments. This results in many international publications. The main countries with which Chile collaborates are the USA, Spain, Germany, France, England, Brazil and Argentina.”

Another contributing factor is that many researchers are educated abroad. “For many years, the nation’s strategy for developing researchers has been to stimulate education in developed countries. One consequence of this strategy is that Chilean researchers often publish with their international colleagues,” he adds.

While the nature of contemporary research questions often demands collaboration with researchers across national boundaries, many countries are also forced by geographical limitations or encouraged by national policies to pursue more internationalization than others. The size and resources of a country have a clear effect on the frequency with which local researchers will seek foreign collaborators, but in those regions where government policy restricts or slows the ability of researchers to reach out, even research topics that require international collaboration can be stifled.

Rank – Country – Collaboration % 2007
1. Switzerland – 55.9%
2. Chile – 53.8%
3. Denmark – 51.6%
4. Belgium – 51.6%
5. Bulgaria – 50.9%
6. Hong Kong – 50.7%
7. Austria – 50.0%
8. Sweden – 48.0%
9. Norway – 48.0%
10. Portugal – 47.0%
11. Romania – 46.7%
12. Slovakia – 46.5%
13. New Zealand – 46.2%
14. Ireland – 45.9%
15. Hungary – 45.7%
16. Netherlands – 45.5%
17. Thailand – 45.3%
18. France – 43.8%
19. South Africa – 43.6%
20. Finland – 43.2%
21. Argentina – 42.4%
22. Germany – 41.9%
23. Canada – 39.8%
24. Mexico – 39.5%
25. Ukraine – 39.5%
26. Czech Republic – 39.3%
27. United Kingdom – 39.0%
28. Australia – 38.7%
29. Israel – 38.4%
30. Singapore – 38.4%
31. Slovenia – 37.6%
32. Italy – 36.4%
33. Malaysia – 35.9%
34. Egypt – 35.3%
35. Spain – 34.9%
36. Greece – 33.7%
37. Russian Federation – 33.1%
38. Poland – 31.3%
39. Pakistan – 27.7%
40. Brazil – 27.2%
41. Croatia – 27.0%
42. USA – 26.4%
43. Korea, Republic of – 23.8%
44. Japan – 21.0%
45. Iran, Islamic Republic of – 20.3%
46. India – 17.8%
47. Taiwan, Province of China – 15.7%
48. Turkey – 15.3%
49. China – 13.4%

Table 1 – Countries with an output of more than 5,000 articles in 2007 are ranked on their collaboration percentage. This percentage is calculated by counting the number of articles on which authors from more than one country have collaborated, divided by the total number of articles. Source: Scopus
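The calculation described in the caption is straightforward to reproduce. The sketch below assumes that, for each article, the list of author countries is already known; the sample data are invented.

# Collaboration percentage as defined in the Table 1 caption: articles with
# authors from more than one country, divided by the total number of articles.
# The per-article country lists below are invented sample data.

articles = [
    ["CH", "DE"],          # two-country collaboration
    ["CH"],                # domestic only
    ["CH", "US", "FR"],    # three-country collaboration
    ["CH"],
]

def collaboration_percentage(articles):
    international = sum(1 for countries in articles if len(set(countries)) > 1)
    return 100.0 * international / len(articles)

print(f"{collaboration_percentage(articles):.1f}%")   # 50.0% for this toy sample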

Useful links

In Issue 11, Jarmo Saarti at Kuopio University, Finland, also underlined the importance of international collaboration in research, especially with regard to improving institutional rankings.

References:

(1) Leydesdorff, L. and Wagner, C.S. (2008) ‘International collaboration in science and the formation of a core group’, Journal of Informetrics, Vol. 2, pp. 317–325.
(2) Rees, M. (October 30, 2008) ‘International collaboration is part of science’s DNA’, Nature, 456, p. 31.

Analyzing a multidisciplinary research field

The analysis of multidisciplinary research can be very difficult, in large part due to the fact that scientific terminology is often shared with the traditional fields it draws together. In a multidisciplinary field, such as energy, keywords can be ambiguous. Research Trends explores a delineation based on subject categorization to measure country specialization and collaboration against impact.

Read more >


A researcher who consults a bibliographic database and looks for articles using the keywords “CO2” and “greenhouse” could be a climatologist working on atmospheric models or a botanist interested in boosting crop yields.

This simple example demonstrates the importance of reviewing the context of keywords and finding ways to delineate the field of research. Extending the combination of keywords usually delivers more precise results, but it inevitably leads to reduced completeness, or recall. To increase recall without losing precision, data sets can be expanded using the information in the references from, and citations to, the initial data set. This approach was employed by Eric Archambault et al (1) to chart leading countries in the energy research field, using Scopus data for their analysis.
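The expansion step can be sketched with simple data structures: start from the keyword-matched seed papers and add everything they reference and everything that cites them. The dictionaries below are invented, and this is only a schematic version of the idea, not the exact procedure used by Archambault et al.

# Schematic expansion of a keyword-matched seed set via references and
# citations. The paper identifiers and link data below are invented.

seed = {"p1", "p2"}                                       # papers matching the keyword query
references = {"p1": {"p3"}, "p2": {"p4"}, "p5": {"p1"}}   # paper -> papers it cites
cited_by = {"p1": {"p5"}, "p3": {"p1"}, "p4": {"p2"}}     # paper -> papers citing it

def expand(seed, references, cited_by):
    """Add everything the seed papers cite, plus everything that cites them."""
    expanded = set(seed)
    for paper in seed:
        expanded |= references.get(paper, set())
        expanded |= cited_by.get(paper, set())
    return expanded

print(sorted(expand(seed, references, cited_by)))   # ['p1', 'p2', 'p3', 'p4', 'p5']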

Setting a context

As “energy” is such a generic term in many scientific areas with numerous definitions, Archambault describes the context in his article as “research related to human society”. Archambault also uses the following definition for “energy R&D”, formulated by the Global Climate Change Group (GCCG) at Pacific Northwest National Laboratory, USA:

“['Energy R+D' is] the linked process by which an energy supply, energy end-use or carbon-management technology moves from its conception in theory to its feasibility testing and small-scale deployment. [...It] encompasses activities such as basic and applied research as well as technology development and demonstration in all aspects of production, power generation, transmission, distribution and energy storage and energy-efficiency technologies.”

Archambault’s approach shares common ground with the SciVal method developed by Dick Klavans and Kevin Boyack. The latter employs keyword and co-citation analysis to define dynamic research paradigms or clusters (2). According to this method, a paper is not simply allocated a research cluster based on its subject-area classification, making this mapping of science more realistic and sensitive to trends, notably in the multidisciplinary sciences. (See Research Trends, Issue 12, ‘Analyzing the multidisciplinary landscape’).

Scopus classifies journals in major subject areas, one of which is “Energy”. Journals can be allocated to multiple subject areas as appropriate to their scope. The classification of journals in the “Energy” subject area is based on criteria that bear resemblance to the GCCG “energy R&D” definition. Interestingly, the average number of subject areas to which journals in the “Energy” subject area belong (2.09) is higher than the average across all of science (1.37), indicating that they exhibit a strong degree of interdisciplinarity.

Measuring specialization against impact

Within the Scopus “Energy” subject area data set, a country analysis yields a bubble chart of the 20 most prolific countries (see Figure 1). On the horizontal axis is the Specialization Index, which is a country’s share of the “Energy” subject area compared to all subject areas in which that country has published, relative to the world’s share (1.37%). On the vertical axis, Relative Impact is plotted, which is defined as all citations in 1996–2007 to all articles in the “Energy” subject area produced by one country, relative to the world’s impact in the “Energy” subject area (3.952). The bubble size is proportional to the total article output in 1996–2007.
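Under the definitions given above, both indices reduce to simple ratios. The sketch below assumes that Relative Impact is citations per “Energy” article relative to the world figure of 3.952, and the country totals in the example are placeholders, not Scopus data.

# Sketch of the two indices plotted in Figure 1, with placeholder inputs.

WORLD_ENERGY_SHARE = 0.0137     # world share of "Energy" papers (1.37%)
WORLD_ENERGY_IMPACT = 3.952     # world citations per "Energy" article, 1996-2007 (assumed reading)

def specialization_index(country_energy_papers, country_all_papers):
    """Country's share of 'Energy' output relative to the world share."""
    return (country_energy_papers / country_all_papers) / WORLD_ENERGY_SHARE

def relative_impact(country_energy_citations, country_energy_papers):
    """Country's citations per 'Energy' article relative to the world average."""
    return (country_energy_citations / country_energy_papers) / WORLD_ENERGY_IMPACT

# Placeholder country: 4,000 "Energy" papers out of 200,000 papers overall,
# with 14,000 citations to those "Energy" papers over 1996-2007.
print(specialization_index(4000, 200000))   # ~1.46: more specialized than the world average
print(relative_impact(14000, 4000))         # ~0.89: slightly below the world's "Energy" impact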

Figure 1 – Specialization versus Impact for the 20 most-prolific countries in the subject area “Energy”, 1996–2007. Source: Scopus

Archambault presented a similar bubble chart, but used a different definition of impact: he weighted citations by their subject fields, used multiple, shorter citation windows, and averaged the results over 1996–2007.

It is clear that there is a negative relationship between specialization and impact, which is strongly influenced by the positions of Russia and China on the chart. China pairs the highest level of specialization with the lowest impact of the top 20 countries. However, cultural influences, such as a tendency to publish in the Chinese language, may still hide many citations from view.

There are three countries that score higher than average on both indices: Japan, South Korea and Turkey – the latter being the most notable outlier.

Specialization and international collaboration are vital

In the next chart (see Figure 2), we have replaced the Specialization index with another Scopus indicator: Country collaboration, which measures the international character of research. The average world collaboration rate in this context is 22.5%.

Figure 2 – Country collaboration versus Impact for the 20 most-prolific countries in the subject area “Energy”, 1996–2007. Source: Scopus

We observe a weak positive relationship, where international collaboration is associated with higher citation impact. A closer examination reveals that the horizontal positions of the bubbles on this chart are practically mirrored in Figure 1: countries with a high specialization index generally have a low collaboration rate. Exceptions are the USA, Japan, Turkey and Taiwan, whose impacts are high, even with a relatively low collaboration rate. It must be emphasized that removing China and Russia from this analysis destroys the positive correlation.

To analyze multidisciplinary research fields, advanced bibliographic analysis methods can be advantageous. A simple keyword search to delineate a multidisciplinary field may be insufficient, with unsatisfactory rates of recall and precision. However, this analysis, based on a dataset of papers that are classified under the generic subject area of “Energy”, largely reproduces the same relationships that Archambault found.

The importance of energy research needs no further explanation, but the choice of strategy and approach partially depends on the effectiveness of specialization and international collaboration. In a recent speech at MIT, US President Barack Obama advocated US leadership in the development of clean-energy technologies, which alludes to specialization (3), while he also reached out for international collaboration to mitigate global warming – another energy-related issue (4). Future bibliometric analyses may reveal the effectiveness of his plans in terms of scientific quality.

References:

(1) Archambault, E. et al (2009) Bibliometric analysis of leading countries in energy research. Science-Metrix.
(2) SciVal Spotlight – Information Website.
(3) McKenna, P. (2009) ‘Obama says US in global race to develop clean energy’, New Scientist.
(4) Goldenberg, S. and Watts, J. (2009) ‘US aims for bilateral climate change deals with China and India’, The Guardian.
VN:F [1.9.22_1171]
Rating: 0.0/10 (0 votes cast)

A researcher who consults a bibliographic database and looks for articles using the keywords “CO2” and “greenhouse” could be a climatologist working on atmospheric models or a botanist interested in boosting crop yields.

This simple example demonstrates the importance of reviewing the context of keywords and finding ways to delineate the field of research. Extending the combination of keywords usually delivers more precise results, but it will inevitably lead to reduced completeness, or recall. To increase recall without losing precision, data sets can be expanded using the information in the references from, and citations to, the initial data set. This approach was employed by Eric Archambault et al (1) to chart leading countries in the energy research field, using Scopus data for his analysis.

Setting a context

As “energy” is such a generic term in many scientific areas with numerous definitions, Archambault describes the context in his article as “research related to human society”. Archambault also uses the following definition for “energy R&D”, formulated by the Global Climate Change Group (GCCG) at Pacific Northwest National Laboratory, USA:

“['Energy R+D' is] the linked process by which an energy supply, energy end-use or carbon-management technology moves from its conception in theory to its feasibility testing and small-scale deployment. [...It] encompasses activities such as basic and applied research as well as technology development and demonstration in all aspects of production, power generation, transmission, distribution and energy storage and energy-efficiency technologies.”

Archambault’s approach shares common ground with the SciVal method developed by Dick Klavans and Kevin Boyack. The latter employs keyword and co-citation analysis to define dynamic research paradigms or clusters (2). According to this method, a paper is not simply allocated a research cluster based on its subject-area classification, making this mapping of science more realistic and sensitive to trends, notably in the multidisciplinary sciences. (See Research Trends, Issue 12, ‘Analyzing the multidisciplinary landscape’).

Scopus classifies journals in major subject areas, one of which is “Energy”. Journals can be allocated to multiple subject areas as appropriate to their scope. The classification of journals in the “Energy” subject area is based on criteria that bear resemblance to the GCCG “energy R&D” definition. Interestingly, the average number of subject areas that journals in the “Energy” papers belong to (2.09) is higher than the average value of all science (1.37), indicating that they exhibit a strong degree of interdisciplinarity.

Measuring specialization against impact
Within the Scopus “Energy” subject area data set, a country analysis yields a bubble chart of the 20 most prolific countries (see Figure 1). On the horizontal axis is the Specialization Index, which is a country’s share of the “Energy” subject area compared to all subject areas in which that country has published, relative to the world’s share (1.37%). On the vertical axis, Relative Impact is plotted, which is defined as all citations in 1996–2007 to all articles in the “Energy” subject area produced by one country, relative to the world’s impact in the “Energy” subject area (3.952). The bubble size is proportional to the total article output in 1996–2007.

Figure 1 – Specialization versus Impact for the 20 most-prolific countries in the subject area “Energy”, 1996–2007. Source: Scopus

Figure 1 – Specialization versus Impact for the 20 most-prolific countries in the subject area “Energy”, 1996–2007. Source: Scopus

Archambault presented a similar bubble chart, but he used another definition of the impact. He weighted the citations by their subject fields, took multiple, smaller citation time windows and averaged the results over 1996–2007 afterwards.

It is clear that there is a negative relationship between specialization and impact, which is strongly influenced by the positions of Russia and China on the chart. China pairs the highest level of specialization with the lowest impact of the top 20 countries. However, cultural influences, such as a tendency to publish in the Chinese language, may still hide many citations from view.

There are three countries that score higher than average on both indices: Japan, South Korea and Turkey – the latter being the most notable outlier.

Specialization and international collaboration are vital

In the next chart (see Figure 2), we have replaced the Specialization Index with another Scopus indicator: Country collaboration, which measures the international character of research. The average world collaboration rate in this context is 22.5%.
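
The article does not spell out how the collaboration rate is computed; a common operationalization is the share of a country’s papers that have at least one co-author from another country, which the sketch below assumes.

# Country collaboration rate, assumed here to be the share of a country's
# papers with at least one foreign co-author.

def collaboration_rate(papers, country):
    """papers: iterable of sets of author country codes, one set per paper."""
    own = [countries for countries in papers if country in countries]
    if not own:
        return 0.0
    international = sum(1 for countries in own if len(countries) > 1)
    return international / len(own)

# Invented example: two of the three US papers involve a foreign partner.
sample = [{"US"}, {"US", "JP"}, {"US", "DE", "CN"}, {"JP"}]
print(collaboration_rate(sample, "US"))  # ~0.67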

Figure 2: Country collaboration versus Impact for the 20 most-prolific countries in the subject area “Energy”, 1996–2007. Source: Scopus

We observe a weak positive relationship, in which international collaboration is associated with higher citation impact. A closer examination reveals that the horizontal positions of the bubbles on this chart practically mirror those in Figure 1: countries with a high Specialization Index generally have a low collaboration rate. Exceptions to the overall pattern are the USA, Japan, Turkey and Taiwan, whose impact is high even with a relatively low collaboration rate. It must be emphasized that removing China and Russia from this analysis destroys the positive correlation.

To analyze multidisciplinary research fields, advanced bibliographic analysis methods can be advantageous. A simple keyword search to delineate a multidisciplinary field may be insufficient, with unsatisfactory rates of recall and precision. However, this analysis, based on a dataset of papers that are classified under the generic subject area of “Energy”, largely reproduces the same relationships that Archambault found.

The importance of energy research needs no further explanation, but the choice of strategy and approach partially depends on the effectiveness of specialization and international collaboration. In a recent speech at MIT, US President Barack Obama advocated US leadership in the development of clean-energy technologies, which alludes to specialization (3), while he also called for international collaboration to mitigate global warming – another energy-related issue (4). Future bibliometric analyses may reveal the effectiveness of his plans in terms of scientific quality.

References:

(1) Archambault, E. et al. (2009) ‘Bibliometric analysis of leading countries in energy research’, Science-Metrix.
(2) SciVal Spotlight – information website.
(3) McKenna, P. (2009) ‘Obama says US in global race to develop clean energy’, New Scientist.
(4) Goldenberg, S. and Watts, J. (2009) ‘US aims for bilateral climate change deals with China and India’, The Guardian.

Research supports UN millennium development goals

The UN Millennium Development Goals aim to combat the effects of extreme poverty. Bringing together governments, industry and research, this global effort hopes to solve some of our greatest challenges by 2015. Research Trends looks at how research on HIV/AIDS, tuberculosis and malaria measures up to the impact of these diseases.

Read more >



In September 2000, the member states of the United Nations (UN) gathered at the Millennium Summit in New York to discuss the organization’s role at the beginning of the 21st century (1). The meeting culminated in the adoption of the UN Millennium Declaration by the heads of state present (2). The Millennium Development Goals (MDGs) were derived from this declaration, and include targets that were adopted in 2001 by 192 UN member states and at least 23 international organizations (3).

Millennium Development Goals

1. End poverty and hunger
2. Universal education
3. Gender equality
4. Child health
5. Maternal health
6. Combat HIV/AIDS
7. Environmental sustainability
8. Global partnership

Targeting the biggest killers

The eight MDGs represent a global commitment to improve the most basic indicators of standard of living for all (see box). Goal 6 is to combat HIV/AIDS, as well as other diseases, such as malaria and tuberculosis, and is divided into three main targets:

Target 6.A: Have halted by 2015 and begun to reverse the spread of HIV/AIDS

Target 6.B: Achieve, by 2010, universal access to treatment for HIV/AIDS for all those who need it

Target 6.C: Have halted by 2015 and begun to reverse the incidence of malaria and other major diseases (including tuberculosis)

Nearly a decade on, progress has been made, but will the 2015 targets be reached? According to a recent UN report, “most countries are struggling to meet the Goal 6 targets of achieving universal access to treatment for HIV/AIDS by 2010 and of halting and reversing the spread of HIV/AIDS by 2015. […] Large increases in funding and attention to malaria have accelerated malaria-control activities in many countries. […] The incidence of TB is expected to be halted and begin to decline before the target date of 2015.” (4)

A little help goes a long way

By highlighting inequalities between countries, the UN Millennium Declaration also stands as a moral imperative for wealthier countries to assist in relieving the burden of disease in the most-afflicted countries. An analysis of research output in HIV, malaria and tuberculosis reveals the commitment of developed nations to help, even though these diseases are less prevalent in their populations.

The maps (see Figures 1, 2 and 3) show estimated 2004 death rates per 100,000 for HIV/AIDS, malaria, and tuberculosis in each country (5), as well as the 10 most prolific countries in terms of research articles published between 2004 and 2008 on these diseases.

Figure 1 – Deaths from HIV/AIDS are highest in Sub-Saharan Africa

The burden of HIV/AIDS is heaviest in Sub-Saharan Africa. Although research on HIV is concentrated in the Western world (North America, Europe and Australia), it is interesting to note that China and South Africa are exceptions to this generalization, ranking second and fifth, respectively, in terms of article output on the subject.

Figure 2 – Malaria death rates and research high in Thailand, Kenya and India

The world’s highest reported death rates due to malaria are in Sub-Saharan Africa. The bulk of recent research on the disease comes from the USA, Europe and Australia, where disease burden is low; however, Thailand, Kenya and India do suffer significant malaria death rates and are also some of the most productive countries in terms of research on malaria.

Figure 3 – Tuberculosis death rates high in Sub-Saharan Africa and Eurasia

Population-adjusted deaths from tuberculosis are greatest in Sub-Saharan Africa and Eurasia. The USA and Europe publish a large proportion of the research on tuberculosis, but other countries such as Brazil, China, South Africa, Japan and India (with significant death rates due to the disease) also make it into the top-10 most-prolific countries.

“India’s focus on tuberculosis research right now is phenomenal – and this is matched by the volume of publication emerging from the country,” observes Dr Brian D. Robertson, Deputy Director of the Centre for Integrative Systems Biology at Imperial College London.

Alan D. Lopez, University of Queensland, Australia, adds: “It is no surprise that most articles come from the countries shown, and in particular the USA. However, the five-fold variation in HIV/AIDS papers compared with malaria and tuberculosis from the USA is interesting, given that there is at most a two-fold variation in death rates.”

This analysis shows that as far as HIV, malaria and tuberculosis are concerned, countries do seem to be pulling together, regardless of their respective burden of disease, in an effort to meet the MDGs. However, as highlighted in the recent UN report, there is still much to be done, and only time will tell if current efforts are sufficient to reach the 2015 targets.

References:

(1) The Millennium Assembly of the United Nations – Millennium Summit
(2) United Nations Millennium Declaration
(3) The United Nations Millennium Development Goals
(4) United Nations (September 2008) ‘Fact Sheet – Goal 6: Combat HIV/AIDS, malaria and other diseases’, UN Department of Public Information, DPI/2517 L.
(5) World Health Organization – Department of Measurement and Health Information (February 2009) ‘Global disease burden’.

Climate research outstrips CO2 emissions

The greenhouse effect was first described in the 1820s, but only recently have its impacts been fully recognized. Research Trends looks at the relationship between economic growth, CO2-led climate change and article output.

Read more >


In the 1820s, the French scientist Joseph Fourier formulated the idea that some gases in the atmosphere freely let through the visible and ultraviolet sunlight that heats the earth’s surface, but absorb and scatter the infrared radiation that the surface radiates back toward space. As a result, heat is trapped in the atmosphere, which causes the temperature on earth to rise. This is known as the greenhouse effect.

In the late 19th century, the Swedish scientist Svante Arrhenius was the first to speculate that rising carbon dioxide (CO2) levels in the atmosphere could change the earth’s surface temperature through this greenhouse effect. He calculated that cutting CO2 levels by half would lower the earth’s temperature by 4–5°C.

His ideas were generally dismissed or simply ignored by contemporary scientists, but in 1938, Guy S. Callendar revisited his ideas and brought up more arguments in favor of Arrhenius’s hypothesis. More and more scientists became convinced that atmospheric CO2 strongly influenced the temperature on earth and that anthropogenic carbon emissions contributed significantly to atmospheric CO2 levels. In 1960, Charles D. Keeling was the first to start measuring the carbon dioxide level in the atmosphere very precisely, and on the basis of these data was able to conclude that it was rising rapidly (1).

With climate research in its infancy, Helmut E. Landsberg stressed in his 1970 Science paper (2) that very little was known about how human activity might change the climate. His article marked the establishment of modern climate change research, which continues to thrive today.

Article output and the economy

Bibliographic analysis of research articles on climate change that refer to CO2 in peer-reviewed journals reveals that those specifically mentioning anthropogenic CO2 form a major subset (up to half) of all articles mentioning CO2 over the period 1996–2006 (see Figure 1). Also visible are stagnations in growth around 1998–1999 and 2001–2002, and a plateau around 2004–2006. These periods coincide with global economic recessions.

To investigate whether there might be a relationship with the economy, the article-output data were compared with global gross domestic product (GDP). And, since economic growth is driven by energy, which is predominantly generated by burning fossil fuels, another relevant data set is the growth of anthropogenic carbon emissions (3). Putting these data together reveals a cycle whereby rising CO2 levels drive research on CO2-led climate change, but where funding for such research is ultimately dependent on the CO2 emissions that drive economic growth (see Figure 2).

There is a clear relationship between both article-output curves and the GDP curve. In addition, the carbon-emission profile seems to run either a year behind or a year ahead of the GDP curve between 1997 and 2002, but follows the same general trend. The relationship between article output and GDP may be explained by governmental and corporate research budgets that depend on tax revenues, and thus on economically productive (CO2-generating) activity.
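
Because articles, emissions and GDP are measured in incompatible units, Figure 2 rescales each series to arbitrary units before comparing their shapes. The exact rescaling is not specified; indexing each series to its 1996 value is one plausible choice, sketched below with invented numbers.

# Indexing heterogeneous annual series (articles, emissions, GDP) to a base
# year so their growth patterns can be compared on a single chart.

def index_to_base_year(series, base_year=1996):
    """series: dict mapping year to value; returns multiples of the base-year value."""
    base = series[base_year]
    return {year: value / base for year, value in series.items()}

# Invented illustrative values:
articles = {1996: 1_200, 2001: 1_900, 2006: 2_600}   # article counts
gdp = {1996: 31.0, 2001: 33.4, 2006: 51.4}           # trillions of USD

print(index_to_base_year(articles))  # {1996: 1.0, 2001: ~1.58, 2006: ~2.17}
print(index_to_base_year(gdp))       # {1996: 1.0, 2001: ~1.08, 2006: ~1.66}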

Do national research outputs correlate with carbon emissions? Looking at cumulative carbon emissions and article output on CO2 and climate change per country (see Figure 3) suggests that six of the top 10 countries publishing on this topic are also among the top 10 carbon emitters: the US, UK, Japan, Germany, China and Canada. China appears to be a notable outlier, with research output failing to keep pace with carbon emissions, and its articles are also cited less than those of other high-emission nations.

With increasing carbon emissions and corresponding atmospheric CO2 levels, climate change research is becoming more urgent as the potential for drastic impacts on humanity becomes more certain. Governments around the world are responding with a focus on research, and this focus is often on a par with the magnitude of each nation’s carbon emissions. While it appears that the economic activity that drives CO2 growth may also drive research on the effects of anthropogenic CO2 on climate change, it is clear that the rate of production of scientific knowledge on anthropogenic CO2 is outstripping the growth of those emissions.

Figure 1 – Article output (articles), carbon emissions (metric tons) and GDP (arbitrary units) have all risen between 1996 and 2006, with small plateaus around major recessions. Source: Scopus

Figure 2 – Rescaling the annual variations in article output, carbon emissions and GDP in arbitrary units between 1996 and 2006 allows for direct comparison, indicating a cyclical relationship. Source: Scopus

Figure 3 – Comparing the relationship between article output, average citation per article and carbon emissions (metric tons) between 1996 and 2006 indicates that countries with high CO2 output are also among the most prolific in terms of article output. Source: Scopus

References:

(1) Weart, S. (2009) “The Discovery of Global Warming – The Carbon Dioxide Greenhouse Effect”
(2) Landsberg, H.E. (1970) “Man-Made Climatic Changes: Man's activities have altered the climate of urbanized areas and may affect global climate in the future”, Science, Vol. 170 (3964), p. 1265. DOI: 10.1126/science.170.3964.1265
(3) “International Carbon Dioxide Emissions and Carbon Intensity”, Energy Information Administration, Official Energy Statistics from the U.S. Government

Interesting article:

Tucker, M. (1995) “Carbon dioxide emissions and global GDP”, Ecological Economics, Vol. 15, Issue 3, pp. 215–223