Articles

Research Trends is an online magazine providing objective insights into scientific trends, based on bibliometric analyses.

Standardizing research metrics and indicators – Perspectives & approaches

Following the STI 2014 conference, Lisa Colledge and Gali Halevi examine the implementation of standards in research metrics, drawing on interviews with leading researchers.



Increased data availability and computational advances have led to a plethora of metrics and indicators being developed for different levels of research evaluation. Whether at the individual, program, department or institution level, numerous methodologies and indicators are on offer to capture the impact of research output. These advances have also highlighted the fact that metrics must be applied appropriately, depending on the goal and subject of the evaluation, and should be used alongside qualitative inputs such as peer review.

However, this has not solved the challenge of finding core quality and validity measures to guide the current and future development of evaluative metrics and indicators. While innovation in the field of research metrics is ongoing, funders, institutions and departments are already using output metrics to measure specific elements, and the metrics in use cannot be scaled up to global indicators. The field thus faces a divide: new metrics exist, but they are often unsuitable for, or cannot be scaled up to, the global research ecosystem. Evaluators therefore still use metrics that have already been recognized as unsuitable measures of individual performance, such as journal-level indicators; for lack of agreed-upon alternatives, such metrics are routinely applied in inappropriate circumstances despite their shortcomings.

The need for quality and validity measures that will guide the development of research metrics and ensure that they are applied in an appropriate and fair way is at the heart of several discussions carried out at conferences and on listservs, especially in the Scientometrics, Science Policy, and Research Funding communities.

One such panel discussion was held at the Science and Technology Indicators (STI) 2014 conference in Leiden. The panel focused on the need for standardization in the field of research metrics that would speak to their validity, quality and appropriate use, and on ways to arrive at a consensus. The panel consisted of Dr. Lisa Colledge (Director of Research Metrics, Elsevier), Stephen Curry (professor of Structural Biology at Imperial College London, and member of the Higher Education Funding Council for England (HEFCE) steering group on the use of research metrics in performance measurement), Stefanie Haustein (University of Montreal), Jonathan Adams (Chief Scientist at Digital Science), and Diana Hicks (Georgia Institute of Technology).

The Snowball Metrics initiative (1), presented by Dr. Lisa Colledge, is an example of research universities collaborating internationally to arrive at a commonly agreed upon set of measures of research (outputs as well as other aspects of the research process). Snowball Metrics’ aim is for universities to agree on a set of metrics methodologies that give strategic insight into all of a university’s activities. These metrics should be understood by everyone in the same way, so that when universities calculate metrics using these “recipes” they all follow the same protocol (2).
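
The recipes themselves are prose definitions, not code, but the spirit of the approach is that a metric's inputs and calculation are pinned down so tightly that every institution computes the same number from its own data. As a loose illustration only (not an actual Snowball recipe; the function and data layout here are invented), a minimal sketch in Python:

```python
# Illustrative sketch only: Snowball Metrics recipes are prose definitions,
# not code. This hypothetical function shows the idea behind a "recipe":
# the inputs and calculation are fixed, so every institution that follows
# the protocol computes the metric identically.

def citations_per_output(outputs):
    """Mean citations per scholarly output for a list of publications.

    `outputs` is a list of dicts, each with a 'citations' count. The records
    can come from any bibliographic data source - the recipe itself is
    independent of the source and tools used to generate it.
    """
    if not outputs:
        return 0.0
    return sum(pub["citations"] for pub in outputs) / len(outputs)

# Two universities running the same recipe over their own publication lists
# can compare results directly, because the protocol leaves nothing ambiguous.
print(citations_per_output([{"citations": 4}, {"citations": 0}, {"citations": 8}]))  # 4.0
```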

Lisa emphasized that Snowball Metrics welcomes feedback from the research community, both to improve the existing recipes and to expand the set of recipes available. Elsevier is involved in Snowball Metrics at the invitation of the universities that drive it, providing project management and technical expertise where needed. The Snowball Metrics program has responded to the HEFCE review (3), and the initiative has significantly influenced Elsevier’s overall approach to the use of research metrics, expressed in a response to the same HEFCE review (4). The main principles of Elsevier’s manifesto are:

  1. A set of multiple metrics distributed across the entire research workflow is needed.
  2. Metrics must be available to be selected for all relevant peers.
  3. The generation and use of metrics should be automated and scalable.
  4. Quantitative information provided by metrics must be complemented by qualitative evidence to ensure the most complete and accurate input to answer a question.
  5. The combination of multiple metrics gives the most reliable quantitative input.
  6. Disciplinary and other characteristics that affect metrics, but that do not indicate different levels of performance, must be taken into account.
  7. Metrics should be carefully selected to ensure that they are appropriate to the question being asked.
  8. We cannot prevent the inappropriate or irresponsible use of metrics, but we can encourage responsible use by being transparent, and intolerant of “gaming”.
  9. Those in the research community who apply metrics in their day-to-day work, and who are themselves evaluated through their use, should ideally define the set of metrics to be used. It is highly desirable that this same community, or those empowered by the community on their behalf, maintains the metric definitions.
  10. There should be no methodological black boxes.
  11. Metric methodologies should be independent of the data sources and tools needed to generate them, and also independent of the business and access models through which the underlying data are made available.
  12. Aggregated or composite metrics should be avoided.

Dr. Ian Viney, Director of Strategic Evaluation and Impact at the Medical Research Council, supports this approach, saying that “standards, at least properly described metrics, are important if you want to have reproducibility for your analyses, across different organizations and/or timescales. Evaluation of research is itself research and development – success and failure should be properly documented.” Therefore, “‘recipes’ should be available for discussion, testing and modification, and effective approaches should become accepted standards – methods that everyone can apply.” Dr. Viney also commented on the gap between research metrics and the research community:

“The link between these outputs and research activity or impact is little understood. What is most interesting is the development of metrics relating to other logically important areas of research activity – e.g. the ways in which researchers influence policy-setting processes, or research feeds into policies, the ways in which research teams develop new processes and products, and the ways in which research materials are disseminated and used. We can make a good argument that these activities are intermediate indicators of impact; they logically describe steps along a pathway to impact. These activities, however, are not well reported in any standard format, and data on these outputs is not readily available.”

Dr. Ian Viney: “We should be open about our methods; discussion across stakeholders is helpful, and work such as Snowball Metrics will help accelerate the field. I will be convinced that a particular method should become a standard when it has been successfully and reproducibly applied, and when it helps us better understand research progress, productivity and/or quality. The scientometrics community should provide expert advice to stakeholders regarding the development of suitable approaches. This community has a central role in proposing the most promising methods for wider use.”

Dr. Jonathan Adams, Chief Scientist at Digital Science, who participated in the panel, cautioned against the rigid setting of standards. In his view, “It is infeasible to set comprehensive written standards for metrics, indicators or evaluation methodologies when there is a diverse range of contexts, cultures and jurisdictions in which they might be applied and when data access and data diversity are changing very rapidly.” He therefore believes that any attempt to create such standards would create “an artificial vision of security and stability” that might be used inappropriately by research agencies and managers.

Dr. Paul Wouters, Director of the Centre for Science and Technology Studies (CWTS) and professor of Scientometrics, added his concerns regarding the standardization of metrics, stating that “standards may be important for the construction of databases of research products. So at the technical level they can be useful. However, standards can mislead users if they are essentially captured by narrow interests.”

Following the conference, CWTS, part of Leiden University, published “The Leiden Manifesto in the Making: proposal of a set of principles on the use of assessment metrics” (5).

In the manifesto Paul Wouters, Sarah de Rijcke and their colleagues summarized some principles around which the debate about standardization and quality should revolve:

  1. There should be a connection between assessment procedures and the primary process of knowledge creation. If such a connection doesn’t exist then the assessment loses a part of its usefulness for researchers and scholars.
  2. Standards developed by universities and data providers should be monitored and should benefit from the technical expertise of the Scientometrics community. Although the Scientometrics community does not want to set standards itself, it should take an active part in documenting them and ensuring their validity and quality.
  3. There is a need to strengthen the public nature of the infrastructure of metadata, including current research information systems, publication databases and citation indexes, among them those available from for-profit companies.
  4. Taking these issues together provides an inspiring collective research agenda for the Scientometrics community.

Dr. Wouters added that the main motivation should be to “prevent misuse or harmful applications by deans, universities or other stakeholders in scientometrics. Although many studies in scientometrics suffer from deficient methods, this problem cannot be solved with standards, but only with better education and software (which may build on some technical standards).”

Dr. Paul Wouters:

“I do not think that global standards are currently possible or even desirable. Principles of good evaluation practices: YES. Universal technical standards: NO.

The Scientometrics community should analyze, train, educate, clarify, and also take on board the study of how scientometric indicators influence the conduct of science and scholarship.”

Dr. Peter Dahler-Larsen, a professor at the Department of Political Science at the University of Copenhagen, recently contributed to The Citation Culture blog on the topic of developing quality standards for Science & Technology indicators. Dr. Dahler-Larsen commented on his contribution to Research Trends, saying that “it is important to follow the discussion of standards, because in some fields standards pave the way for a particular set of practices that embody particular values – for better or for worse.” The main motivation for the development of standards, added Dr. Dahler-Larsen, is “NOT their agreed-upon character” but rather their ability to “inspire ethical and methodological awareness, and this can take place even without much consensus.” Yet, Dr. Dahler-Larsen says that in spite of their importance he “does not have high hopes about the adoption of standards in policy-making.”

Dr. Dahler-Larsen:

“The most important function of standards is to raise awareness and debate. Standards can be helpful in discussions of problematic policy-making initiatives. The Scientometric community has an important role to play because, presumably, it comprises experts who know about the pros and cons and pitfalls related to particular measurement approaches, etc. Their accumulated experience should inform better practice.”

Dr. John T. Green, who chairs the Snowball Metrics Steering Committee, believes that “whilst some argue that it is impossible to define or agree standard metrics because of the diverse range of contexts and geographies, like it or not, funders and governments are using such measures – some almost slavishly and exclusively (as in Taiwan to allocate government funding). Therefore, whilst it is ideologically acceptable for the Scientometric community to take the high ground and claim that because metrics cannot be perfect therefore none should be developed, to do so is ignoring reality – let us at least do our best and develop metrics as best we can (as indeed has happened over time with bibliometrics). I believe it is important for the academic community to engage and ensure that whatever is used to measure them is fit for purpose, or as fit as can be, especially given that they should never be used in isolation – metrics are only a part, albeit an important part, of the evaluation landscape. Thus the approach of Snowball – bottom-up and owned by the academic community.”

Professor Jun Ikeda, Chief Advisor to the President of the University of Tsukuba, Japan, supports the development of standards in metrics. In his view, they will save researchers time when reporting to funders. Prof. Ikeda pointed out that universities’ performance is often genuinely difficult to compare: “If every university defines things in their own way, and calculates metrics in their own way, then seeing a metric that is higher or lower than someone else's is meaningless because the difference might not be real, but just due to different ways of working with the data. I want to do apples-to-apples comparisons, to be sure that I can be confident in differences that I see, and confident in taking decisions based on them.”

Prof. Ikeda:

“The biggest gap is for the research community to drive the direction that this whole area is going in. A lot is happening, but we feel a bit like it is all being done to us. There is space for us to take control of our own destiny, and shape things as we would like them to be, and as they make the most sense to us.”

Research-focused universities need to be active in defining the metrics that they want to use to give insight into their strategies, Prof. Ikeda said. “Ideally the researchers within our universities would also support and use the same metrics to help them to promote their careers and to understand how they are performing relative to their own peers.” Whatever the outcome of the debate about whether standardization in research metrics is necessary or even desirable, there is no doubt that the discussion itself is important: it raises awareness of the complexity of the topic as a whole. Standards may not be easy to develop or implement, but there is little doubt that consensus regarding their proper use is needed. As more data become available and more metrics are developed, the question of their usefulness and accuracy in different settings becomes crucial. Data providers, evaluators, funders and the Scientometric community must work together not only to aggregate, calculate and produce metrics, but also to test them in different contexts and educate the wider audience as to their proper use.

 

References

(1) Colledge, L. (2014) “Snowball Metrics Recipe Book”, Available at: http://www.snowballmetrics.com/wp-content/uploads/snowball-recipe-book_HR.pdf
(2) Snowball Metrics, “Why is this initiative important to the higher education sector?”, Available at: http://www.snowballmetrics.com/benefits
(3) Snowball Metrics (2014) “Response to the call for evidence to the independent review of the role of metrics in research assessment”, Available at: http://www.snowballmetrics.com/wp-content/uploads/Snowball-response-to-HEFCE-review-on-metrics-300614F.pdf
(4) Elsevier (2014) “Response to HEFCE’s call for evidence: independent review of the role of metrics in research assessment”, Available at: http://www.elsevier.com/__data/assets/pdf_file/0015/210813/Elsevier-response-HEFCE-review-role-of-metrics.pdf
(5) CWTS (2014) “The Leiden Manifesto in the Making: proposal of a set of principles on the use of assessment metrics”, Available at: http://citationculture.wordpress.com/2014/09/15/the-leiden-manifesto-in-the-making-proposal-of-a-set-of-principles-on-the-use-of-assessment-metrics-in-the-st-indicators-conference/



Reporting Back: STI 2014 Leiden, The Netherlands

The latest STI conference, held at Leiden University in September, focused on the topic of implementing standards in research metrics. Gali Halevi attended the conference and reports back.



One of the main attractions at the Science and Technology Indicators (STI) conference (1), held in Leiden in September 2014, was “The Daily Issue”. Conceived by Diana Wildschut, Harmen Zijp and Patrick Nederkoorn, the publication gave its reporters three hours to find out what was happening at the conference and report on it using 1950s equipment, without telephones or internet (2). The result was a hilarious newsletter, published every day and handed to the audience, who came to realize how the world of Scientometrics looks to outsiders. One example was an item on serendipity in the scientific process, which resulted in the invention of “Serendipimetry”, a metric that measures serendipity (see additional interpretation in the text box below).

SERENDIPIMETRY

(The Daily Issue; No. 72 http://sti2014.cwts.nl/download/f-z2r2.pdf)

Some of the most valuable scientific outcomes are the result of accidental discoveries. This article explores the possibility of a metric of serendipity. Firstly, a clear distinction has to be made between a serendipity indicator and a serendipitous indicator. The latter may only be meaningful in the way it could assist chance events in finding information. More interesting, however, would be to actually measure, or at least estimate, the degree of serendipity that led to a research result. And yet another angle would be the presentation of research that might facilitate its receivers, e.g. the readers of an article, in making odd detours, living through paradigm shifts et cetera.

Alongside the traditional topics often discussed at the STI conference, such as the statistical representation of scientific output in the form of performance indicators and metrics, this year’s conference put a strong focus on innovation. New datasets and algorithms were among the topics given significant attention. Examples include new data derived from funding systems, which were explored in relation to productivity, efficiency, and patenting. Looking at the factors that influence participation in government-funded programs, Lepori et al. (3) found a very strong concentration of participations in a very small number of European research universities. They also showed that these numbers can be predicted with high precision from organizational characteristics, especially size and international reputation. Findings on the relationship between funding, competitiveness and performance (4) contradicted previous research: here the researchers found that the share of institutional funding does not correlate with competitiveness, overall performance, or top performance. Additional research papers using funding-systems data are available in the conference proceedings.

New gender and career data currently available brought forth a series of studies dedicated to the relationship between gender, career level and scientific output. Van der Weijden and Calero Medina (5) studied the oeuvres of female and male scientists in academic publishing using bibliometrics. Using data from the ACUMEN survey (6), their analysis confirmed the traditional gender pattern: men produce on average a higher number of publications compared to women, regardless of their academic position and research field, and women are not evenly represented across authorship positions. Paul-Hus et al. (7) studied the place of women in the Russian scientific research system in various disciplines and how this position has evolved during the last forty years.  They found that gender parity is far from being achieved and that women remain underrepresented in terms of their relative contribution to scientific output across disciplines. Sugimoto et al. (8) presented a study featuring a global analysis of women in patents from 1976 to 2013, which found that women’s contribution to patenting remains even lower than would be predicted given their representation in Science, Technology, Engineering, and Mathematics.

Career-related studies also open new paths to understanding the relationships between academic positions, publishing, and relative scientific contributions of researchers throughout their careers. Derycke et al. (9) studied the factors influencing PhD students’ scientific productivity and found that scientific discipline, phase of the PhD process, funding situation, family situation, and organizational culture within the research team are important factors predicting the number of publications. Van der Weijden (10) used a survey to study PhD students’ perceptions of career perspectives in academic R&D, non-academic R&D, and outside R&D, and assessed to what extent these career perspectives influence their job choice. She found that several career-related aspects, such as long-term career perspectives and the availability of permanent positions, are judged much more negatively for the academic R&D sector.

A session on University-Industry collaborations featured research topics such as the relationship between industry-academia co-authorship and its influence on academic commercialization output (Wong & Singh (11); Yegros-Yegros et al. (12)), as well as global trends in University-Industry relationships based on affiliation analysis of dual publications (Yegros-Yegros & Tijssen (13)). Related to this was a session on patent analysis, applied to topics such as scientific evaluation and strategic priorities (Ping Ho & Wong (14)).

Measures of online attention, a topic of discussion over the past couple of years, were given special focus at the conference, with probably the largest number of studies featured in a single session. Studies covered topics such as Mendeley readership and its relationship with academic status (Zahedi et al. (15)), tweets on the Nobel Prize awards and their impact (Levitt & Thelwall (16)), and gender biases (Bar-Ilan & Van der Weijden (17)).

True to its slogan “Context counts: Pathways to master big and little data”, this conference selected a wide range of studies using newly available data to explore topics that provide context to scientific output, including gender, career, university-industry relationships and the measurement of engagement. In addition, the keynote lectures provided strategic insight into metrics development. Diana Hicks and Henk Moed encouraged the audience to think more strategically about the application of metrics for evaluation purposes. The seven-principle manifesto suggested by Diana Hicks provides evaluators with a framework for assessing researchers, institutions and programs. This manifesto was picked up by the CWTS group in Leiden, headed by Paul Wouters, which is now working on an agreed-upon set of principles that could potentially inform evaluation and funding systems (18).

Henk Moed (19) called for special attention to be given to the context and purpose of evaluation, using meta-analysis to inform the choice of data and methodology. Presenting “The multi-dimensional research assessment matrix”, he gave examples of how to compile correct and fair evaluation indicators using guiding questions that inform the process (20).

If there is one message to be drawn from this conference, it is that the plethora of newly available data, statistical analyses and indicators is a positive development only if they are used in the correct context and can answer the questions posed. No single metric fits all evaluation objectives; the data selected, the methods used and the conclusions drawn must therefore be chosen carefully, keeping in mind that context is probably the key factor in successful assessment.

Daily Issue example

The Daily Issue | Edition 73, commenting on Diana Hicks’ 7 principles of research evaluation manifesto (comments in italics)

  1. Metrics are not a substitute for assessment – Don’t blame it on the metrics
  2. Spend time and money to produce high quality data – Print your results on glossy paper
  3. Metrics should be transparent and accessible – Everyone can have a say even if they don’t know s**
  4. Data should be verified by those evaluated – Be careful not to insult anyone
  5. Be sensitive to field differences – Use long words to avoid homonyms
  6. Normalize data to account for variations by fields and over time – If your data is useless for one field, make slight adaptations and use them for another field or try again in 10 years
  7. Metrics should align with strategic goals – Follow the money

The Daily Issue: Edition 73: http://sti2014.cwts.nl/News?article=n-w2&title=Daily+Issues+now+online!

References

(Full text of all articles is available here: http://sti2014.cwts.nl/download/f-y2w2.pdf)

(1) http://sti2014.cwts.nl/Home
(2) http://www.spullenmannen.nl/index.php?lang=en&page=werk&proj=29
(3) Lepori, B., Heller-Schuh, B., Scherngell, Th., Barber, M. (2014) “Understanding factors influencing participation in European programs of Higher Education Institutions”, Proceedings of STI 2014 Leiden, p. 345
(4) Sandström, U., Heyman, U. and Van den Besselaar, P. (2014) “The Complex Relationship between Competitive Funding and Performance”, Proceedings of STI 2014 Leiden, p. 523
(5) Van der Weijden, I. and Calero Medina, C. (2014) “Gender, Academic Position and Scientific Publishing: a bibliometric analysis of the oeuvres of researchers”, Proceedings of STI 2014 Leiden, p. 673
(6) ACUMEN Survey (2011), available http://cybermetrics.wlv.ac.uk/survey-acumen.html
(7) Paul-Hus, A., Bouvier, R., Ni, C., Sugimoto, C.R., Pislyakov, V. and Larivière, V. (2014) “Women and science in Russia: a historical bibliometric analysis”, Proceedings of STI 2014 Leiden, p. 411
(8) Sugimoto, C.R., Ni, C., West, J.D., and Larivière, V. (2014) “Innovative women: an analysis of global gender disparities in patenting”, Proceedings of STI 2014 Leiden, p. 611
(9) Derycke, H., Levecque, K., Debacker, N., Vandevelde, K. and Anseel, F. (2014) “Factors influencing PhD students’ scientific productivity”, Proceedings of STI 2014 Leiden, p. 155
(10) Van der Weijden, I., Zahedi, Z., Must, U. and Meijer, I. (2014) “Gender Differences in Societal Orientation and Output of Individual Scientists”, Proceedings of STI 2014 Leiden, p. 680
(11) Wong, P.K. and Singh, A. (2014) “A Preliminary Examination of the Relationship between Co-Publications with Industry and Technology Commercialization Output of Academics: The Case of the National University of Singapore”, Proceedings of STI 2014 Leiden, p. 702
(12) Yegros-Yegros, A., Azagra-Caro, J.M., López-Ferrer, M. and Tijssen, R.J.W. (2014) “Do University-Industry co-publication volumes correspond with university funding from business firms?”, Proceedings of STI 2014 Leiden, p. 716
(13) Yegros-Yegros, A. and Tijssen, R. (2014) “University-Industry dual appointments: global trends and their role in the interaction with industry”, Proceedings of STI 2014 Leiden, p. 712
(14) Ho, Y.P. and Wong, P.K. (2014) “Using Patent Indicators to Evaluate the Strategic Priorities of Public Research Institutions: An exploratory study”, Proceedings of STI 2014 Leiden, p. 276
(15) Zahedi, Z., Costas, R. and Wouters, P. (2014) “Broad altmetric analysis of Mendeley readerships through the ‘academic status’ of the readers of scientific publications”, Proceedings of STI 2014 Leiden, p. 720
(16) Levitt, J. and Thelwall, M. (2014) “Investigating the Impact of the Award of the Nobel Prize on Tweets”, Proceedings of STI 2014 Leiden, p. 359
(17) Bar-Ilan, J. and Van der Weijden, I. (2014) “Altmetric gender bias? – Preliminary results”, Proceedings of STI 2014 Leiden, p. 26
(18) CWTS (2014) “The Leiden manifesto in the making: proposal of a set of principles on the use of assessment metrics in the S&T indicators conference”, Available at: http://citationculture.wordpress.com/2014/09/15/the-leiden-manifesto-in-the-making-proposal-of-a-set-of-principles-on-the-use-of-assessment-metrics-in-the-st-indicators-conference/
(19) Moed, H.F., Halevi, G. (2014) “Metrics-Based Research Assessment”, Proceedings of STI 2014 Leiden, p. 391
(20) Moed, H.F., Plume, A. (2011) “The multi-dimensional research assessment matrix”, Research Trends, Issue 23, May 2011, Available at: https://www.researchtrends.com/issue23-may-2011/the-multi-dimensional-research-assessment-matrix/
VN:F [1.9.22_1171]
Rating: 7.0/10 (2 votes cast)

One of the main attractions at the Science and Technology Indicators (STI) conference (1) held in Leiden in September 2014, was “The Daily Issue”. Invented by Diana Wildschut, Harmen Zijp and Patrick Nederkoorn, the reporters had three hours to find out what was happening at the conference and report about it using 1950s equipment and without telephones or internet (2). The result was a hilarious newsletter published every day and handed to the audience who came to realize how the world of Scientometrics looks to outsiders. An example included an item on the issue of serendipity in scientific process which resulted in the invention of “Serendipimetry”, a metric that measures serendipity (see additional interpretation in the text box below).

SERENDIPIMETRY

(The Daily Issue; No. 72 http://sti2014.cwts.nl/download/f-z2r2.pdf)

Some of the most valuable scientific outcomes are the result of accidental discoveries. This article explores the possibility of a metric of serendipity. Firstly, a clear distinction has to be made between a serendipity indicator and a serendipitous indicator. The latter may only be meaningful in the way it could assist chance events in finding information. More interesting however, it could be to actually measure, or at least estimate, the degree of serendipity that led to a research result. And yet another angle would be the presentation of research that might facilitate its receivers, e.g. the readers of an article, in making odd detours, living through paradigm shifts et cetera.

Alongside the traditional topics often discussed at the STI conference such as statistical representation of scientific output in forms of performance indicators and metrics, this year the conference put a strong focus on innovation. New datasets and algorithms were among the topics given significant attention. Examples include new data derived from funding systems which were explored in relation to productivity, efficiency, and patenting. Looking at the factors that influence participation in government funded programs, Lepori et al. (3) found a very strong concentration of participations from a very small number of European research universities. They also showed that these numbers can be predicted with high precision from organizational characteristics and, especially, size and international reputation. Relationships between funding, competitiveness and performance (4) were found to contradict previous findings, whereas here the researchers found that the share of institutional funding does not correlate with competitiveness, overall performance, and top performance. Additional research papers using funding systems data are available here.

New gender and career data currently available brought forth a series of studies dedicated to the relationship between gender, career level and scientific output. Van der Weijden and Calero Medina (5) studied the oeuvres of female and male scientists in academic publishing using bibliometrics. Using data from the ACUMEN survey (6), their analysis confirmed the traditional gender pattern: men produce on average a higher number of publications compared to women, regardless of their academic position and research field, and women are not evenly represented across authorship positions. Paul-Hus et al. (7) studied the place of women in the Russian scientific research system in various disciplines and how this position has evolved during the last forty years.  They found that gender parity is far from being achieved and that women remain underrepresented in terms of their relative contribution to scientific output across disciplines. Sugimoto et al. (8) presented a study featuring a global analysis of women in patents from 1976 to 2013, which found that women’s contribution to patenting remains even lower than would be predicted given their representation in Science, Technology, Engineering, and Mathematics.

Career-related studies also open new paths to understanding the relationships between academic positions, publishing, and relative scientific contributions of researchers throughout their careers. Derycke et al. (9) studied the factors influencing PhD students’ scientific productivity and found that scientific discipline, phase of the PhD process, funding situation, family situation, and organizational culture within the research team are important factors predicting the number of publications. Van der Weijden (10) used a survey to study PhD students’ perceptions of career perspectives in academic R&D, non-academic R&D, and outside R&D, and assessed to what extent these career perspectives influence their job choice. She found that several career-related aspects, such as long-term career perspectives and the availability of permanent positions, are judged much more negatively for the academic R&D sector.

A session on University-Industry collaborations featured interesting research topics such as the relationship between industry-academia co-authorships and their influence on academic commercialization output (Wong & Singh (11); Yegros-Yegros et al. (12)) as well as global trends in University-Industry relationships using affiliation analysis of dual publications (Yegros-Yegros & Tijssen (13)). Related to this topic was a session on patents analysis which was used to study topics such as scientific evaluation and strategic priorities (Ping Ho & Wong (14)).

Measures of online attention, a topic of discussion in the past couple of years, was given special focus at the conference with probably the most studies featured in a session. Studies covered topics such as Mendeley readership analysis and their relationship with academic status (Zahedi et al (15)), Tweets on the Nobel Prize awards and their impact (Levitt & Thelwall (16)), and gender biases (Bar-Ilan & Van der Weijden (17)).

True to its slogan “Context counts: Pathways to master big and little data”, this conference selected a wide range of studies using newly available data to explore topics that provide context to scientific output, including gender, career, university-industry and measurement of engagement. In addition, the selected keynote lectures provided some overall strategic insight into metrics development. Diana Hicks and Henk Moed encouraged the audience to think more strategically about the application of metrics for evaluation purposes. The 7 principles manifesto suggested by Diana Hicks provides evaluators with a framework which can be used to perform assessments of researchers, institutions and programs. This manifesto was picked up by the CWTS group in Leiden headed by Paul Wouters, who is now working on creating an agreed upon set of principles that could potentially inform evaluation and funding systems (18).

Henk Moed (19) called for special attention to be given to the context and purpose of evaluation, using meta-analysis to inform the choice of data and methodology of the evaluation. Presenting the “The multi-dimensional research assessment matrix”, he gave some examples of how to compile correct and fair evaluation indicators using guiding questions that inform the process (20).

If there is one message that could be drawn from this conference it is that the plethora of recently available data, statistical analysis and indicators is an overall positive development only if they are used in the correct context and are able to answer the questions posed. There is no one metric that fits all evaluation objectives and therefore the data selected, the method used and the conclusions drawn must be made carefully, keeping in mind that context is probably the key factor to successful assessment.

Daily Issue example

The Daily Issue | Edition 73 commenting on Diana Hick’s 7 principles of research evaluation manifesto (comments in italics)

  1. Metrics are not a substitute for assessment – Don’t blame it on the metrics
  2. Spend time and money to produce high quality data – Print your results on glossy paper
  3. Metrics should be transparent and accessible- Everyone can have a say even if they don’t know s**
  4. Data should be verified by those evaluated – Be careful not to insult anyone
  5. Be sensitive to field differences – Use long words to avoid homonyms
  6. Normalize data to account for variations by fields and over time – If your data is useless for one field, make slight adaptations and use them for another field or try again in 10 years
  7. Metrics should align with strategic goals – Follow the money

The Daily Issue: Edition 73: http://sti2014.cwts.nl/News?article=n-w2&title=Daily+Issues+now+online!

 References

(Full text of all articles is available here: http://sti2014.cwts.nl/download/f-y2w2.pdf)

(1) http://sti2014.cwts.nl/Home
(2) http://www.spullenmannen.nl/index.php?lang=en&page=werk&proj=29
(3) Lepori, B., Heller-Schuh, B., Scherngell, Th., Barber, M. (2014) “Understanding factors influencing participation in European programs of Higher Education Institutions”, Proceedings of STI 2014 Leiden, p. 345
(4) Sandström, U., Heyman, U. and Van den Besselaar, P. (2014) “The Complex Relationship between Competitive Funding and Performance”, Proceedings of STI 2014 Leiden, p. 523
(5) Van der Weijden, I. and Calero Medina, C. (2014) “Gender, Academic Position and Scientific Publishing: a bibliometric analysis of the oeuvres of researchers”, Proceedings of STI 2014 Leiden, p. 673
(6) ACUMEN Survey (2011), available http://cybermetrics.wlv.ac.uk/survey-acumen.html
(7) Paul-Hus, A., Bouvier, R., Ni, C., Sugimoto, C.R., Pislyakov, V. and Larivière, V. (2014) “Women and science in Russia: a historical bibliometric analysis”, Proceedings of STI 2014 Leiden, p. 411
(8) Sugimoto, C.R., Ni, C., West, J.D., and Larivière, V. (2014) “Innovative women: an analysis of global gender disparities in patenting”, Proceedings of STI 2014 Leiden, p. 611
(9) Derycke, H., Levecque, K., Debacker, N., Vandevelde, K. and Anseel, F. (2014) “Factors influencing PhD students’ scientific productivity”, Proceedings of STI 2014 Leiden, p. 155
(10) Van der Weijden, I., Zahedi, Z., Must, U. and Meijer, I. (2014) “Gender Differences in Societal Orientation and Output of Individual Scientists”, Proceedings of STI 2014 Leiden, p. 680
(11) Wong, P.K. and Singh, A. (2014) “A Preliminary Examination of the Relationship between Co-Publications with Industry and Technology Commercialization Output of Academics: The Case of the National University of Singapore”, Proceedings of STI 2014 Leiden, p. 702
(12) Yegros-Yegros, A., Azagra-Caro, J.M., López-Ferrer, M. and Tijssen, R.J.W. (2014) “Do University-Industry co-publication volumes correspond with university funding from business firms?”, Proceedings of STI 2014 Leiden, p. 716
(13) Yegros-Yegros, A. and Tijssen, R. (2014) “University-Industry dual appointments: global trends and their role in the interaction with industry”, Proceedings of STI 2014 Leiden, p. 712
(14) Ho, Y.P. and Wong, P.K. (2014) “Using Patent Indicators to Evaluate the Strategic Priorities of Public Research Institutions: An exploratory study”, Proceedings of STI 2014 Leiden, p. 276
(15) Zahedi, Z., Costas, R. and Wouters, P. (2014) “Broad altmetric analysis of Mendeley readerships through the ‘academic status’ of the readers of scientific publications”, Proceedings of STI 2014 Leiden, p. 720
(16) Levitt, J. and Thelwall, M. (2014) “Investigating the Impact of the Award of the Nobel Prize on Tweets”, Proceedings of STI 2014 Leiden, p. 359
(17) Bar-Ilan, J. and Van der Weijden, I. (2014) “Altmetric gender bias? – Preliminary results”, Proceedings of STI 2014 Leiden, p. 26
(18) CWTS (2014) “The Leiden manifesto in the making: proposal of a set of principles on the use of assessment metrics in the S&T indicators conference”, Available at: http://citationculture.wordpress.com/2014/09/15/the-leiden-manifesto-in-the-making-proposal-of-a-set-of-principles-on-the-use-of-assessment-metrics-in-the-st-indicators-conference/
(19) Moed, H.F., Halevi, G. (2014) “Metrics-Based Research Assessment”, Proceedings of STI 2014 Leiden, p. 391
(20) Moed, H.F., Plume, A. (2011) “The multi-dimensional research assessment matrix”, Research Trends, Issue 23, May 2011, Available at: https://www.researchtrends.com/issue23-may-2011/the-multi-dimensional-research-assessment-matrix/

Building research capacity in Sub-Saharan Africa through inter-regional collaboration

George Lan discusses findings from a study on collaboration in Sub-Saharan Africa, carried out by the World Bank and Elsevier.

Read more >


Over the next few decades, Sub-Saharan Africa (SSA) will benefit from a “demographic dividend” – a dramatic increase in its working-age population (1). Yet an expanding labor pool alone cannot push Africa toward a developed, knowledge-based society; the subcontinent’s capacity to train and educate that talent must grow in step.

The strength of a region’s research enterprise is closely correlated with its long-term development and is an important driver of economic success. Research suggests that bibliometric indicators on publications can help characterize the stage of a country’s scientific development (2).

A recent study conducted by the World Bank and Elsevier looked at the state of Science, Technology, Engineering, and Mathematics research in SSA (3). For this analysis, Sub-Saharan Africa is divided into three regions (West & Central Africa, Southern Africa, and East Africa). The country of South Africa is considered separately from the rest of Sub-Saharan Africa due to large differences in research capacity and output between the two.

By many measures, SSA has made great strides in its research performance, doubling its overall research output over the past decade and significantly increasing its global article share (4). However, as past studies show, article growth in other countries and regions of the developing world – particularly Asia – outpaced that of SSA in recent years (5).

SSA researchers also collaborate extensively with international colleagues. Between 2003 and 2012, international collaborations as a percentage of Southern Africa’s total article output increased from 60.7% to 79.1% (Note 1). For East Africa, international collaborations consistently comprised between 65% and 71% of the region’s total output. Although West & Central African researchers collaborate with international colleagues at relatively lower levels (between 40% and 50% of the region’s total research output came from international collaborations), those rates are still well above the world average.

However, echoing the findings of past studies (6), collaboration between different African regions remains low. To calculate the number of collaborations between East Africa and West & Central Africa, for example, we counted all publications in which at least one author holds an affiliation to an East African institution and another author holds an affiliation to a West & Central African institution. As Figure 1 shows, inter-regional collaborations constitute a small fraction of Sub-Saharan Africa’s total international collaborations. In 2012, less than 6% of the region’s total output resulted from inter-regional collaborations, while nearly 60% came from international collaborations. Moreover, more than half of those inter-regional collaborations were co-authored with colleagues from institutions in OECD countries (Note 2).
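
To make this counting rule concrete, the sketch below expresses it in a few lines of Python. It is a minimal illustration only: it assumes each publication has already been reduced to the set of regions found among its author affiliations, and the publication data and region labels are invented.

# Minimal sketch of the counting rule described above: a publication counts as an
# East Africa / West & Central Africa collaboration if at least one author is
# affiliated with an institution in each region. All data here are invented.

def is_interregional(pub_regions, region_a="East Africa", region_b="West & Central Africa"):
    """Return True if the publication has author affiliations in both regions."""
    return region_a in pub_regions and region_b in pub_regions

# Each publication reduced to the set of regions among its author affiliations.
publications = [
    {"East Africa"},
    {"East Africa", "West & Central Africa"},
    {"East Africa", "OECD"},
    {"West & Central Africa", "OECD"},
]

count = sum(is_interregional(p) for p in publications)
share = count / len(publications)
print(f"{count} of {len(publications)} publications ({share:.0%}) are EA-WC collaborations")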


Figure 1 - International and Inter-regional collaborations as percentage of Sub-Saharan African total research output, 2003-2012. Source: Scopus.

Figure 2 displays the trends in inter-regional collaboration specifically for East Africa vis-à-vis the other regions and South Africa. The top three trend lines (drawn with heavier lines) correspond to all collaborations between East Africa (EA) and West & Central Africa (WC), Southern Africa (SA), and South Africa (ZA), respectively. The bottom three trend lines correspond to collaborations in which no co-authors were affiliated with institutions in OECD countries.



Figure 2 - Different types of inter-regional collaborations as percentage of East Africa’s total research output, 2003-2012. (EA = East Africa, WC = West & Central Africa, SA = Southern Africa, and ZA = South Africa). Source: Scopus.

Relative to East Africa’s overall rate of international collaboration (over 60% of East Africa’s total output), its level of inter-regional collaboration with other SSA regions is low, at about 2%. East Africa’s collaborations with South Africa have increased considerably over time, from 3.9% in 2003 to 7.9% in 2012. This growth has been driven mostly by collaborations involving partners at institutions in developed countries. The annual growth rate of East Africa-South Africa collaborations with an additional OECD partner was 8.2%, compared to 3.3% for those collaborations without an OECD partner.
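
The annual growth rates quoted above are compound annual growth rates. For readers who want to reproduce such figures, a minimal sketch follows; the publication counts are invented placeholders, not numbers from the report.

# Compound annual growth rate (CAGR) between a first and a last year.
def cagr(first_value, last_value, years):
    return (last_value / first_value) ** (1 / years) - 1

# Hypothetical counts of East Africa-South Africa co-publications.
pubs_2003, pubs_2012 = 100, 203
print(f"annual growth rate: {cagr(pubs_2003, pubs_2012, 2012 - 2003):.1%}")  # 8.2%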

These patterns of low inter-regional collaboration rates (especially without another OECD country as a partner) hold for West & Central Africa as well. In 2012, collaborations between West & Central Africa and other SSA regions accounted for only 0.9% of the former’s total research output.

Overall, the level of inter-regional collaborations in Sub-Saharan Africa has increased over the past decade, and this has been largely driven by collaborations involving OECD countries. On the one hand, this is welcome news, bolstering the subcontinent’s research capacity. On the other hand, in order for the regions to further develop, there needs to be a greater focus on Africa-centric collaboration.

Notes

(1) NB: this analysis defines international collaboration as multi-authored research outputs with authors affiliated with institutions in at least one region in SSA (West & Central Africa, Southern Africa, or East Africa) and elsewhere (including another SSA region).
(2) OECD member countries include: Australia, Austria, Belgium, Canada, Chile, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea, Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, United Kingdom, and the United States.

References

(1) Drummond, P. et al. (2014) “Africa Rising: Harnessing the Demographic Dividend”, IMF Working Paper. Retrieved from https://www.imf.org/external/pubs/ft/wp/2014/wp14143.pdf.
(2) Moed, H. and Halevi, G. (2014) “International Scientific Collaboration”. In Higher Education in Asia: Expanding Out, Expanding Up (Part 3), UNESCO Institute for Statistics. Retrieved from http://www.uis.unesco.org/Library/Documents/higher-education-asia-graduate-university-research-2014-en.pdf
(3) Lan, G. J., Blom, A., Kamalski, J., Lau, G., Baas, J., & Adil, M. (2014) “A Decade of Development in Sub-Saharan African Science, Technology, Engineering & Mathematics Research”, Washington, D.C. Retrieved from www.worldbank.org/africa/stemresearchreport.
(4) Schemm, Y. (2013) “Africa doubles research output over past decade, moves towards a knowledge-based economy”, Research Trends, Issue 35, December 2013. Retrieved from https://www.researchtrends.com/issue-35-december-2013/africa-doubles-research-output/
(5) Huggett, S. (2013). “The bibliometrics of the developing world”, Research Trends, Issue 35, December 2013. Retrieved from https://www.researchtrends.com/issue-35-december-2013/the-bibliometrics-of-the-developing-world/
(6) Boshoff, N. (2009) “South–South research collaboration of countries in the Southern African Development Community (SADC)”, Scientometrics, Vol. 84, No. 2, pp. 481–503. doi:10.1007/s11192-009-0120-0

Brain research: Mining emerging trends and top research concepts

In their contribution, Georgin Lau and Judith Kamalski present the most important findings from a recent report on Brain Research and explain the multi-method and iterative approach that was used to define the field of Brain and Neuroscience research.

Read more >


Like the brain itself, brain research is complex, encompassing Brain Anatomy, Neuroscience, Cognitive Science, and interrelated disciplines. Disciplinary silos are breaking down, with investigators from fields including Medicine, Biology, Engineering, Computer Science, and Psychology working within large collaborative research initiatives. The growing interest in new ways to treat or even prevent brain disorders, together with the push toward cross-disciplinary research, provides the context for a recently launched Brain Research Report (1) offering an overview of the state of research in Brain and Neuroscience. The report was discussed at Neuroscience 2014, the Society for Neuroscience’s Annual Meeting, held in November 2014 in Washington, DC. Beyond charting the publication output, growth and impact of key countries in Brain and Neuroscience research, we experimented with new methodologies to mine emerging trends in the field and to discern differences in research emphasis between funded grant awards and existing Brain and Neuroscience publications.


Brain research is Neuroscience and more

The document sets underlying our analyses were created using the text mining and natural language processing techniques of the semantic Elsevier Fingerprint Engine™. Our approach to defining Brain and Neuroscience is multi-method and iterative, relying on both automatic and manual input to select relevant articles for analysis. By combining three approaches – an initial journal-based classification system, semantic fingerprinting using the Fingerprint Engine, and internal and external expert review and selection of key concepts – we were able to identify a broad set of articles that best represents the entire field of Brain and Neuroscience research. For example, our document set comprised about 91% of all articles in the Neuroscience journal category in 2009-2013, and 64% of the articles in the Psychology journal category in Scopus (see Figure 1). Figure 2 shows the concepts whose selection rate was 100%, meaning that all documents containing these concepts were included.



Figure 1 – Selected articles were not only from the Neuroscience journal category in Scopus, but also other related journal categories. The top 10 journal categories are shown in this figure, along with the proportion of all documents in each journal category which were included in our selected document set. Source: Scopus.


Figure 2 - Concepts from selected document set where the selection rate was 100%, meaning that all relevant documents that contained these concepts were included in our analysis. The size of each concept is weighted by the number of occurrences in the selected document set. Source: Scopus. (Note: Click on image to enlarge it).


Emerging trends

Trends and correlations can provide insight into how topics of interest emerge from research outputs; the challenge, however, is to differentiate obvious trends from genuinely emergent ones. One approach is to compute candidate trends over big data and then have the results validated by experienced practitioners and scientists in the field. The burst detection algorithm proposed by Kleinberg (2) provides a model for the robust and efficient identification of word bursts, and allows rapid growth within categories or thesauri to be identified. By applying the burst detection algorithm, we were able to find concepts that displayed rapid growth over the years, signaling a “burst of activity”. Compared to the period 2003-2008, both broad and specific Brain and Neuroscience concepts grew rapidly in 2009-2013, including “High-throughput Nucleotide Sequencing,” “Molecular Targeted Therapy,” “Molecular Docking Simulation,” “Sirtuin 1,” “Purinergic P2X Receptor Antagonists” and “Anti-N-Methyl-D-Aspartate Receptor Encephalitis.”
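
Kleinberg’s algorithm models bursts with a probabilistic two-state automaton; the sketch below is not that algorithm, but a rough proxy for the screening step it supports: flag concepts whose relative frequency grew sharply between the two periods. All counts are invented for illustration.

# Rough proxy for burst screening (a simplification, not Kleinberg's automaton):
# flag concepts whose relative frequency in the later period grew by more than a
# chosen factor. All counts below are invented.

def bursting(counts_early, counts_late, total_early, total_late, factor=2.0):
    flagged = []
    for concept, late in counts_late.items():
        early = counts_early.get(concept, 1)  # smooth concepts unseen earlier
        growth = (late / total_late) / (early / total_early)
        if growth >= factor:
            flagged.append((concept, growth))
    return sorted(flagged, key=lambda pair: -pair[1])

early = {"Sirtuin 1": 40, "Stroke": 9000}
late = {"Sirtuin 1": 400, "Stroke": 11000, "Molecular Docking Simulation": 250}
print(bursting(early, late, total_early=1_000_000, total_late=1_200_000))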

Top concepts in published Brain and Neuroscience research were organized by overall theme (semantic group). Under the disorders group, concepts such as “Stroke,” “Depression,” “Neoplasms,” and “Alzheimer Disease” were seen most often, while under anatomy, “Brain,” “Eye,” and “Neurons” were most common (see Table 1).


Activities & Behaviors: Exercise (12,473); Suicide (6,106); Motor Activity (6,454); Speech (8,055); Behavior (11,274); Smoking (4,667); Costs and Cost Analysis (6,437); Residence Characteristics (7,277); Walking (5,517); Work (7,139)

Anatomy: Eye (14,836); Neurons (14,388); Cells (15,167); Muscles (10,758); Stem Cells (7,034); Brain (15,980); T-Lymphocytes (6,261); Bone and Bones (7,257); Spermatozoa (3,944); Face (5,974)

Chemicals & Drugs: Proteins (12,255); Glucose (7,423); Food (8,477); Alcohols (6,396); Insulin (6,021); MicroRNAs (4,180); Pharmaceutical Preparations (10,822); Peptides (6,718); Acids (5,225); Cocaine (3,153)

Disorders: Stroke (21,404); Depression (21,668); Neoplasms (25,047); Alzheimer Disease (14,522); Pain (16,719); Schizophrenia (13,752); Parkinson Disease (11,366); Wounds and Injuries (13,414); Syndrome (13,258); Multiple Sclerosis (9,275)

Genes & Molecular Sequences: Single Nucleotide Polymorphism (4,007); Alleles (3,248); Genome (2,742); Quantitative Trait Loci (590); Major Histocompatibility Complex (450); Homeobox Genes (449); Catalytic Domain (811); Transcriptome (777); Transgenes (513); Oncogenes (394)

Table 1 - Top 10 concepts that occurred in Brain and Neuroscience research articles from Scopus between 2008 and 2013, based on the semantic groups to which they belong, sorted by the sum of term frequency-inverse document frequency (tf-idf) of the concept in the document set, where the tf-idf value reflects the relevance and importance of the concept in the document. Figures in parentheses are the frequency with which the concept occurred in the set of Brain and Neuroscience research articles from Scopus between 2008 and 2013 (i.e. the tf value). Source: Scopus.
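
The tf-idf ranking behind Table 1 can be sketched as follows, treating each document as a list of extracted concepts. The toy data are invented, and the weighting shown is the standard tf × log(N/df) form, which may differ in detail from the report’s implementation.

# Rank concepts by summed tf-idf across a document set (standard tf * log(N/df)).
import math
from collections import Counter

docs = [
    ["Stroke", "Brain", "Neurons"],
    ["Stroke", "Depression", "Brain"],
    ["Alzheimer Disease", "Brain"],
]

n_docs = len(docs)
df = Counter(c for doc in docs for c in set(doc))  # document frequency per concept

scores = Counter()
for doc in docs:
    for concept, freq in Counter(doc).items():
        scores[concept] += freq * math.log(n_docs / df[concept])  # tf * idf

# "Brain" scores 0: appearing in every document, it carries no discriminating power.
for concept, score in scores.most_common(3):
    print(f"{concept}: {score:.2f}")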


Different research emphasis in the EU and the US

Next, we compared the top concepts within the Brain and Neuroscience research publications from Scopus against publications produced by the recipients of funded grant awards related to Brain and Neuroscience research from the National Institutes of Health (NIH) (3), and against project abstracts available from the list of brain research projects supported by the European Commission (EC) (4). Concepts were extracted from about 2 million Brain and Neuroscience articles from Scopus, 59,637 articles produced by recipients of NIH-funded grant awards relating to Brain and Neuroscience research, and 136 project abstracts from the brain research projects supported by the EC. As expected, concepts such as “Brain,” “Neurons,” “Seizures,” and “Brain Neoplasms” were seen with similar frequency in the published articles and the NIH-funded grant abstracts. However, concepts such as “Eye,” “Pain,” and “Stress, Psychological” were more highly represented in published articles than in NIH-funded abstracts, suggesting a divergence between what is funded and what is ultimately published.

Not surprisingly, NIH-funded abstracts more often contained disease-related concepts, consistent with the NIH’s focus on areas of research with perceived high societal impact. Compared to the research funded by the EC, US research focused on the concepts “Glioma,” “Child Development Disorders, Pervasive,” and “Bipolar Disorder.” Conversely, concepts such as “Memory Disorders,” “Vision Disorders,” “Myasthenia Gravis,” “Hearing Loss,” and “Alkalosis” were more frequent in the EC-funded research compared to the US, suggesting a different emphasis in research relating to disorders in Brain and Neuroscience (see Table 2). In the US, drugs related to substance abuse were highly researched, with the appearance of concepts such as “Methamphetamine,” “Nicotine,” and “Cannabis.” In contrast, antipsychotic drugs such as “Risperidone” and “Clozapine” that are mainly used to treat schizophrenia were areas of focus in the EC-funded research (see Table 3).

Top 10 concepts relating to disorders in:

Set A - Brain and Neuroscience articles from Scopus: Stroke (21,404); Depression (21,668); Neoplasms (25,047); Alzheimer Disease (14,522); Pain (16,719); Schizophrenia (13,752); Parkinson Disease (11,366); Wounds and Injuries (13,414); Syndrome (13,258); Multiple Sclerosis (9,275)

Set B - Brain and Neuroscience funded grant awards from the NIH: Alzheimer Disease (842); Stroke (328,070); Schizophrenia (19,489); Pain (15,742); Parkinson Disease (15,963); Depression (6,028); Neoplasms (14,585); Glioma (9,271); Child Development Disorders, Pervasive (4,062); Bipolar Disorder (2,571)

Set C - Brain research project synopses supported by the European Commission: Stroke (6); Parkinson Disease (7); Schizophrenia (5); Memory Disorders (3); Vision Disorders (2); Alzheimer Disease (4); Myasthenia Gravis (1); Hearing Loss (3); Alkalosis (1); Pain (1)

Table 2 - Top 10 concepts that occurred in Brain and Neuroscience research articles relating to disorders from document sets A, B and C, based on the sum of term frequency-inverse document frequency (tf-idf) of the concept in the document set that it belonged to. Figures in parentheses are the frequency with which the concept occurred in the document set. Highlighted in grey are concepts that appeared in the top 10 disorder-related concepts in all three document sets, reflecting common areas of focus. Highlighted in orange are concepts that only appeared in Set A and Set B. Concepts that are not highlighted were those unique to each document set, indicating different areas of focus in disorder-related concepts in Brain and Neuroscience research.
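
The highlighting logic of Tables 2 and 3 amounts to simple set operations on the three top-10 lists. A minimal sketch, using truncated subsets of the concepts in Table 2:

# Classify concepts by which document sets' top-10 lists contain them,
# mirroring the highlighting in Table 2 (truncated subsets of the real lists).
set_a = {"Stroke", "Pain", "Depression", "Neoplasms", "Syndrome"}
set_b = {"Stroke", "Pain", "Depression", "Neoplasms", "Glioma"}
set_c = {"Stroke", "Pain", "Memory Disorders", "Alkalosis"}

print(sorted(set_a & set_b & set_c))    # common to all three: ['Pain', 'Stroke']
print(sorted((set_a & set_b) - set_c))  # shared by A and B only: ['Depression', 'Neoplasms']
print(sorted(set_c - set_a - set_b))    # unique EC emphasis: ['Alkalosis', 'Memory Disorders']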

Top 10 concepts relating to chemicals & drugs in:

Set A - Brain and Neuroscience articles from Scopus: Proteins (12,255); Glucose (7,423); Food (8,477); Alcohols (6,396); Insulin (6,021); MicroRNAs (4,180); Pharmaceutical Preparations (10,822); Peptides (6,718); Acids (5,225); Cocaine (3,153)

Set B - Brain and Neuroscience output from funded grant awards from the NIH: Alcohols (663); Cocaine (4,670); Ethanol (653); Methamphetamine (13,551); Analgesics, Opioid (1,068); Nicotine (14,836); MicroRNAs (407,989); Dopamine (6,756); Cannabis (3,270); Prions (17,586)

Set C - Brain research project synopses supported by the European Commission: Enzymes (2); NADPH Oxidase (1); Inflammation Mediators (1); Anticonvulsants (2); Quantum Dots (1); Iron (1); Peptides (1); Risperidone (1); Clozapine (1); Phosphotransferases (2)

Table 3 - Top 10 concepts that occurred in Brain and Neuroscience research articles relating to chemicals & drugs from document sets A, B and C, based on the sum of term frequency-inverse document frequency (tf-idf) of the concept in the document set that it belonged to. Figures in parentheses are the frequency at which the concept occurred in the document set. Highlighted in orange are concepts that only appeared in Set A and Set B. Highlighted in grey are concepts that only appeared in Set A and Set C. Concepts that are not highlighted were those unique to each document set, indicating different areas of focus in chemicals & drugs-related concepts in Brain and Neuroscience research.


Conclusion

The hidden complexities of the brain are being explored by scientists working across boundaries and disciplines to overcome technological challenges and to develop new techniques, methods, and better equipment to study the brain. Our study of the top concepts in funded grant awards shows that research is driven toward a better understanding of diseases and disorders related to Brain and Neuroscience, such as autism and Alzheimer Disease. This is coupled with an emphasis on drug development, for instance in the area of schizophrenia treatment. Strong research is also evident in the area of genes and molecular sequences, where concepts such as connectome and transcriptome have either been detected as growing rapidly or are already considered important concepts in Brain and Neuroscience research publications.

As a first attempt to understand the overall state of Brain and Neuroscience research, the report reveals global patterns of activity, which we hope will be useful to policy makers and decision makers in steering future strategy in brain research. There is also potential for deeper analysis of specific semantic groups of Brain and Neuroscience research, for example focusing only on disorders, or only on chemicals & drugs related publications and concepts.

Exploring the brain is akin to exploring the mind and exploring the self. Thus it is with great interest and anticipation that we watch for further developments in this important field of science, which will certainly affect us in one way or another as we learn more about our own brains.


References

(1) “Brain Research Report,” Available at: http://www.elsevier.com/research-intelligence/brain-science-report-2014
(2) Kleinberg, J. (2002) “Bursty and hierarchical structure in streams,” Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’02. New York, New York, USA: ACM Press, p. 91. doi: 10.1145/775060.775061
(3) National Institutes of Health (NIH), Available at: http://projectreporter.nih.gov/reporter.cfm
(4) European Commission (2013) “Brain Research supported by the European Union 2007-2012,” Available at: http://ec.europa.eu/research/health/pdf/brain-research_en.pdf

Level the playing field in scientific international collaboration with the use of a new indicator: Field-Weighted Internationalization Score

Hans Pohl, Guillaume Warnan and Jeroen Baas propose a new indicator to measure collaboration at researcher and institute level.

Read more >


Introduction

International relations are and have always been inherent in higher education and research (1). However, the internationalization of higher education institutions (HEIs) exhibits a growing trend, as illustrated by bibliometric data (2). Amongst other things, internationalization trends challenge the leadership of HEIs and lead to changes in management structures (3).

Any assessment of the impact of internationalization has to be aligned with the core missions of the HEI (4), and there is a need to manage and measure its various aspects:

“Without a clear set of rationales, followed by a set of objectives or policy statements, a plan or set of strategies, and a monitoring and evaluation system, the process of internationalization is often an ad hoc, reactive, and fragmented response to the overwhelming number of new international opportunities available” (5).

Common internationalization indicators include the share of international students and staff and the share of international co-publications. Indicators of this type are widely used for comparisons, for rankings such as the QS World University Rankings, and even for the allocation of funding to HEIs (6).

This paper addresses one clearly defined but rather crude indicator: the share of international co-publications (for a given researcher or institution). The indicator has several advantages, among them relatively unbiased data, applicability at all levels from individual researchers to countries, and ease of interpretation. But there are also weaknesses. Comparisons of researchers, groups of researchers or even HEIs with different scientific profiles are difficult, as the typical share of international co-publications varies substantially between scientific fields. This is illustrated in Figure 1, which also shows how the share of international co-publications has increased over time in all scientific fields.


Figure 1 - Share of international co-publications per scientific field, 2009 and 2013. Source: Scopus.

Another weakness is that the share of international co-publications varies across publication types (see Table 1).

Scientific field     Share of international co-publications
                     All types   Articles   Conference proceedings   Reviews
Medicine             16.8%       18.8%      17.1%                    17.7%
Chemistry            19.6%       20.9%      17.7%                    19.3%
Social Sciences      12.0%       12.6%      12.4%                     8.3%
Global               18.2%       20.2%      14.9%                    17.5%

Table 1 - Share of international co-publications per publication type, overall and for three different scientific fields, 2013

The aim of this paper is to develop and test an indicator that eliminates these weaknesses without losing the advantages. The indicator described in this piece, named the Field-Weighted Internationalization Score (FWIS), builds on the Field-Weighted Citation Impact (FWCI) calculation.


Theoretical framework

While many articles study the concept of scholarly collaboration (7) or point out the importance of international collaboration (8), the assessment of international collaboration remains a more limited field of study. It only really became a subject of interest in the 1990s (9-11). Indexes were created (12), but they were never aimed at comparing institutions or research entities with one another.

The FWIS is calculated using the same base normalization as is applied in the calculation of the Field-Weighted Citation Impact (13). This in turn is based on the scientific consensus reached in recent years (14), following arguments that normalization scores should be calculated at the publication level (15) and that the contributing counts need to be fractionalized (16). In essence, this means that each publication has a calculated expected value, normalized for publication year, document type, and field. The FWCI score for each publication is the actual value divided by the expected value.


FWIS Methodology

The same logic is used for the calculation of the FWIS, but instead of citation counts, a simple binary indication of the presence of international collaboration on the publication is used. Citation counts behave a little differently, since one publication can be cited, for instance, twice as often as another. The binary indicator recognizes only two states: either the publication is internationally co-authored (value 1) or it is not (value 0). The calculation therefore relates to the percentage of internationally co-authored publications, rather than to an average “internationality” of publications (whereas FWCI does relate to the average number of citations per publication).

In order to overcome the pitfall of measuring collaboration rates against a global rate – whereby most entities would appear to achieve collaboration rates higher than expected – the expected value of collaboration per publication is calculated by weighting each publication by the number of countries that appear on it.

To illustrate the methodology with an example (see Table 2 - we will first assume all documents are from the same year, document type and subject, and gradually add complexity to the example to fully understand the calculations): suppose we have a total of 4 publications in our database, which includes 3 different countries: China, USA and UK. The global share of international co-publications is 50% as 2 out of 4 are internationally co-authored.

Publication   China   USA   UK   International?
#1            1       -     -    0 (no)
#2            -       1     -    0 (no)
#3            -       1     1    1 (yes)
#4            1       1     1    1 (yes)

Table 2 - First example with 4 publications and 3 countries

In our example, China has 50% international publications, the USA 67% and the UK 100%. If you were to compare these percentages to the global average, all of them would appear to be at or above it. To remedy this effect, we weight each publication by the number of collaborating countries contributing to it. In our example, that gives a global average of (1*0+1*0+2*1+3*1)/(1+1+2+3)=71% rather than 50%.
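
This weighted average is easy to verify with a couple of lines of code; a sketch of the toy example, with the values from Table 2:

# Verify the country-weighted global average from the example above.
# Each tuple is (number of countries on the publication, international? 1 or 0).
pubs = [(1, 0), (1, 0), (2, 1), (3, 1)]

weighted = sum(n * intl for n, intl in pubs) / sum(n for n, _ in pubs)
print(f"{weighted:.0%}")  # 71%, not the naive 50%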

Multiplying by the number of countries on a publication means that the expected share of internationally co-authored publications is affected by the average number of countries per publication. When comparing values calculated for different fields (which have different average numbers of countries per publication), this indirectly causes different results. If, for instance, in one field a group of researchers has 30% international co-publications against a global average of 15% (twice as high), and in another field the same group has 10% international collaboration against a global average of 5% (also twice as high), the FWIS derived from the publications in each field may still differ if the average number of countries on international publications differs. The rationale of this difference is that fields with more countries per publication have a higher likelihood of international collaboration.

FWIS uses a publication-oriented approach, which means that an expected and actual value for each publication is calculated. The expected count is derived by taking the total number of international co-publications divided by the total number of publications, and by weighting these counts with the number of countries involved. This would mean in our example (see Table 3): (1*0+1*0+2*1+3*1)/(1+1+2+3)=0.71. The FWIS for each publication is derived by dividing the actual value (0 or 1) by the expected value.

Publication   China   USA   UK   Count of countries   International?   Expected score per publication   FWIS
#1            1       -     -    1                    0 (no)           0.71                             0
#2            -       1     -    1                    0 (no)           0.71                             0
#3            -       1     1    2                    1 (yes)          0.71                             1.41
#4            1       1     1    3                    1 (yes)          0.71                             1.41

Table 3 - Addition of the count of countries and FWIS taking into account the number of countries

In order to calculate the score for an entity (entity could for example be a country, institution or group of researchers), we simply take the arithmetic mean of each FWIS score for the entity’s publications. For instance, for China this would be: (0+1.41)/2=0.71.

When calculating the global score for the entire dataset – as is required to validate the calculation and end up with a score of 1.00 – each publication again needs to be weighted, using the count of countries present on the publication. In our example, the global value is derived by: (0*1+0*1+1.41*2+1.41*3)/(1+1+2+3)=1.00. The same weighting is required when calculating the score for entities that span multiple countries. For a continent, for instance, the score is derived from the publications of all of its member countries, again weighting each publication by its count of countries.

To fully understand the model, we also need to consider the properties inherited from FWCI: normalization by subject, publication type and year. Normalizing by publication type and year is relatively straightforward: we simply take the average international rate within each subgroup (for example, reviews published in 2008). Subjects are a little more complicated, because publications can belong to multiple subjects at the same time.

Let’s return to our initial example, this time adding subject classifications to the publications (see Table 4).

Publication   China   USA   UK   Count of countries   Subject classification
#1            1       -     -    1                    A
#2            -       1     -    1                    B
#3            -       1     1    2                    B, C
#4            1       1     1    3                    A, C

Table 4 - Addition of subject classifications

In order to account for which subjects a publication belongs to, and thus to calculate the expected value per subject, each publication is weighted towards its subjects by fractionalization. For publication #3, this means that 50% of the publication counts towards subject B and 50% towards subject C. For each subject, the expected value in this example is therefore (multiplying by country count and fractional subject assignment):

Subject A: (1*1*0 + 3*0.5*1)/(1*1+3*0.5)=0.6
Subject B: (1*1*0 + 2*0.5*1)/(1*1+2*0.5)=0.5
Subject C: (2*0.5*1 + 3*0.5*1)/(2*0.5+3*0.5)=1.0

To form the expected value per publication from these normalized subject scores, we take the harmonic mean over the publication’s subjects (see Table 5). For publication #3 this is 2/((1/0.5)+(1/1))=0.67, and for publication #4 it is 2/((1/0.6)+(1/1))=0.75. The FWIS per publication is then derived by dividing the international indicator (1 or 0) by the expected score.

Publication   China   USA   UK   Count of countries   Subject classification   International?   Expected score per publication   FWIS
#1            1       -     -    1                    A                        0 (no)           0.6                              0
#2            -       1     -    1                    B                        0 (no)           0.5                              0
#3            -       1     1    2                    B, C                     1 (yes)          0.67                             1.5
#4            1       1     1    3                    A, C                     1 (yes)          0.75                             1.33

Table 5 - Addition of the FWIS per publication taking into account the number of countries and the subject classification

In this example, the FWIS for China is (0+1.33)/2=0.66. To validate the model, the global average still needs to be 1.0 across the subject fields. Applying the same country-count weighting as before, this is calculated as (0*1+0*1+1.5*2+1.33*3)/(1+1+2+3)=1.0.
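
The full toy example – country-count weighting, fractional subject assignment and the harmonic mean – fits in a short script. The sketch below reproduces the numbers of Tables 4 and 5; it is an illustration of the calculation, not the production implementation.

# Toy FWIS calculation with subject normalization (reproduces Tables 4 and 5).
from collections import defaultdict

pubs = [
    {"countries": {"China"},              "subjects": ["A"]},
    {"countries": {"USA"},                "subjects": ["B"]},
    {"countries": {"USA", "UK"},          "subjects": ["B", "C"]},
    {"countries": {"China", "USA", "UK"}, "subjects": ["A", "C"]},
]

for p in pubs:
    p["intl"] = 1 if len(p["countries"]) > 1 else 0  # binary international indicator
    p["weight"] = len(p["countries"])                # country-count weight

# Expected international rate per subject, using country-count weighting and
# fractional subject assignment.
num, den = defaultdict(float), defaultdict(float)
for p in pubs:
    frac = 1 / len(p["subjects"])
    for s in p["subjects"]:
        num[s] += p["weight"] * frac * p["intl"]
        den[s] += p["weight"] * frac
expected_subject = {s: num[s] / den[s] for s in num}  # A: 0.6, B: 0.5, C: 1.0

# Expected value per publication: harmonic mean over its subjects; FWIS = actual/expected.
for p in pubs:
    p["expected"] = len(p["subjects"]) / sum(1 / expected_subject[s] for s in p["subjects"])
    p["fwis"] = p["intl"] / p["expected"]

china = [p["fwis"] for p in pubs if "China" in p["countries"]]
print("China FWIS:", round(sum(china) / len(china), 2))  # 0.67 (the text's 0.66 rounds 1.33/2)

global_fwis = sum(p["weight"] * p["fwis"] for p in pubs) / sum(p["weight"] for p in pubs)
print("Global FWIS:", round(global_fwis, 2))  # 1.0 by construction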


Testing the new metric

The FWIS was recently tested on a real-world case. In collaboration with the Swedish Foundation for International Cooperation in Research and Higher Education (STINT), we compared 28 Swedish universities on the basis of their level of internationalization.

A first analysis was based on the share of international co-publications (see Figure 2, right part). That analysis favored institutions focused on disciplines where international collaboration is naturally strong, such as Economics, Econometrics and Finance (Stockholm School of Economics) or Life Sciences (Stockholm University).

A second analysis used the FWIS (see Figure 2, left part), and the two resulting rankings were then compared (see Figure 3).


Figure 2 - FWIS and share of international co-publications per Swedish university - 2013. Source: SciVal and Scopus.

Figure 3 - Comparison of the ranking of Swedish universities based on the share of international co-publications or FWIS - 2013. Source: SciVal and Scopus. Note: This Figure was updated on 21 November 2014 to correct the placement of the blue captions.

40% of the institutions (11 out of 28) experienced a major change (greater than four places) in their ranking position due to the change of indicator used as the basis for the ranking.

The example of Luleå University of Technology illustrates well the impact of using the FWIS instead of the share of international co-publications. Luleå focuses predominantly on engineering disciplines (see Figure 4), which typically show relatively low rates of international collaboration (see Figure 1).


Figure 4 - Split of publication output per journal category for Luleå University of Technology - 2013. Source: SciVal.

Luleå’s shares of international co-publications in those disciplines may appear limited (around 50%), but they are much greater than the global averages (see Table 6). When changing from a ranking based on the share of international co-publications to one based on FWIS, Luleå moves up 7 positions.


Table 6 - Share of international co-publications and FWIS for a selected number of disciplines - 2013. Source: SciVal and Scopus.

The FWIS indicator gives Luleå University of Technology a better value as it takes the specific mix of the university’s scientific fields into account, i.e. the output of Luleå is compared fairly with that of peers instead of assuming that all universities have the same mix of scientific production.


Conclusions

Responding to the need for better management and understanding of the internationalization of research and higher education, this paper elaborates and tests a new indicator of international research collaboration. The proposed FWIS indicator enhances the ability to measure and compare the internationalization of HEIs. The widely used share of international co-publications is biased by scientific profile, publication type and publication year; using a method similar to the FWCI calculation, the proposed indicator eliminates these biases while relying on the same underlying dataset.


References

(1) Smeby, J.C. & Trondal, J. (2005) “Globalisation or europeanisation? International contact among university staff”, Higher Education, Vol. 49, No. 4, pp. 449–466.
(2) Smith, C.L. et al. (2011) Knowledge, networks and nations: Global scientific collaboration in the 21st century. London, UK: The Royal Society.
(3) Sporn, B. (2007) “Governance and Administration: Organizational and Structural Trends”. In J.J.F. Forest & P.G. Altbach, eds. International Handbook of Higher Education Part One: Global Themes and Contemporary Challenges. Amsterdam, Netherlands: Springer, pp. 141–157.
(4) Hudzik, J.K. & Stohl, M. (2009) “Modelling assessment of the outcomes and impacts of internationalisation”. In L. Johnson, ed. Measuring success in the internationalisation of higher education. Amsterdam, Netherlands: European Association for International Education (EAIE), pp. 9–21.
(5) Knight, J. (2005) “An internationalization model: responding to new realities and challenges”. In H. de Wit et al., eds. Higher Education in Latin America: The International Dimension. Washington D.C., USA: The World Bank, pp. 1–39.
(6) Swedish Research Council (2013) Kartläggning av olika nationella system för utvärdering av forskningens kvalitet – förstudie inför regeringsuppdraget U2013/1700/F. Stockholm, Sweden.
(7) Katz, J.S. & Martin, B.R. (1997) “What is research collaboration?”, Research Policy, Vol. 26, No. 1, pp. 1–18.
(8) Adams, J. (2013) “Collaborations: The fourth age of research”, Nature, No. 497, pp. 557–560.
(9) Luukkonen, T. et al. (1993) “The measurement of international scientific collaboration”, Scientometrics, Vol. 28, No. 1, pp. 15–36.
(10) De Lange, C. & Glänzel, W. (1997), "Modelling and measuring multilateral co-authorship in international scientific collaboration. Part I. Development of a new model using a series expansion approach", Scientometrics, Vol. 40, No. 3, pp. 593-604.
(11) Zitt, M. & Bassecoulard, E. (1998) “Internationalization of scientific journals: A measurement based on publication and citation scope”, Scientometrics, Vol. 41, No. 1-2, pp. 255–271.
(12) Glänzel, W. & De Lange, C. (2002) “A distributional approach to multinationality measures of international scientific collaboration”, Scientometrics, Vol. 54, No. 1, pp. 75–89.
(13) Colledge, L. & Verlinde, R. (2014) Scival Metrics Guidebook, Available at: http://www.elsevier.com/__data/assets/pdf_file/0006/184749/scival-metrics-guidebook-v1_01-february2014.pdf.
(14) Waltman, L. et al. (2011), “Towards a new crown indicator: An empirical analysis”, Scientometrics, Vol. 87, No. 3, pp. 467–481.
(15) Lundberg, J. (2007) “Lifting the crown-citation z-score”, Journal of Informetrics, Vol. 1, No. 2, pp. 145–154.
(16) Opthof, T. & Leydesdorff, L. (2010) “Caveats for the journal and field normalizations in the CWTS (“Leiden”) evaluations of research performance”, Journal of Informetrics, Vol. 4, No. 3, pp. 423–430.
VN:F [1.9.22_1171]
Rating: 10.0/10 (1 vote cast)

Introduction

International relations have always been inherent in higher education and research (1). Internationalization of higher education institutions (HEIs) is, however, a growing trend, as illustrated by bibliometric data (2). Amongst other things, internationalization challenges the leadership of HEIs and leads to changes in management structures (3).

An assessment of the internationalization impact has to be aligned with the core missions of the HEI (4) and there is a need to manage and measure various internationalization aspects:

“Without a clear set of rationales, followed by a set of objectives or policy statements, a plan or set of strategies, and a monitoring and evaluation system, the process of internationalization is often an ad hoc, reactive, and fragmented response to the overwhelming number of new international opportunities available” (5).

Common internationalization indicators include the share of international students and staff, and the share of international co-publications. Indicators of this type are widely used for comparisons, for rankings such as the QS World University Rankings, and even for the allocation of funding to HEIs (6).

This paper addresses one clearly defined but rather crude indicator: the share of international co-publications (for a given researcher or institution). The indicator has several advantages, among them relatively unbiased data, the possibility of studying all levels from individual researchers to countries, and ease of interpretation. But there are also weaknesses. Comparisons of researchers, groups of researchers or even HEIs with different scientific profiles are difficult, as the typical share of international co-publications varies substantially between scientific fields. This is illustrated in Figure 1, which also shows how the share of international co-publications has increased over time in all scientific fields.

FWIS fig 1

Figure 1 - Share of international co-publications per scientific field, 2009 and 2013. Source: Scopus.

Another weakness is that the share of international co-publications varies between publication types (see Table 1).

Scientific field     Share of international co-publications
                     All types    Articles    Conference proceedings    Reviews
Medicine             16.8%        18.8%       17.1%                     17.7%
Chemistry            19.6%        20.9%       17.7%                     19.3%
Social Sciences      12.0%        12.6%       12.4%                     8.3%
Global               18.2%        20.2%       14.9%                     17.5%

Table 1 - Share of international co-publications per publication type, overall and for three scientific fields, 2013

The aim of this paper is to develop and test an indicator that eliminates these weaknesses without losing the advantages. The indicator described in this piece, named the Field-Weighted Internationalization Score (FWIS), builds on the Field-Weighted Citation Impact (FWCI) calculation.

 

Theoretical framework

While many articles study the concept of scholarly collaboration (7) or point out the importance of international collaboration (8), the assessment of international collaboration remains a more limited field of study. It only really became a subject of interest in the 1990s (9-11). Indexes were created (12), but they were never aimed at comparing institutions or research entities with one another.

The FWIS is calculated using the same base normalization as is applied in the calculation of the Field-Weighted Citation Impact (13). This normalization in turn reflects the scientific consensus reached recently (14), after criticism that normalization scores should be calculated at the publication level (15) and that the contributing counts need to be fractionalized (16). In essence, this means that each publication has a calculated expected value, normalized for publication year, document type, and field. The FWCI score for each publication is the actual value divided by the expected value.
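In symbols (a compact notation introduced here for convenience; it is not taken from the cited sources), the normalization can be written as

\[
\mathrm{FWCI}_p = \frac{c_p}{e_p}
\]

where \(c_p\) is the number of citations actually received by publication \(p\), and \(e_p\) is the expected number of citations for a publication of the same year, document type and field. The FWIS introduced below keeps this style of normalization but replaces the citation count with a binary internationalization indicator.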

 

FWIS Methodology

The same logic is used for the calculation of the FWIS, except that instead of a citation count, a simple binary indication of international collaboration on the publication is used. Citation counts behave a little differently, as one publication can, for instance, be cited twice as often as another. The binary indicator recognizes only two states: either the publication is internationally co-authored (value 1), or it is not (value 0). The calculation therefore relates to the percentage of internationally co-authored publications, rather than to an average internationality of publications (whereas FWCI relates to the average number of citations per publication).

In order to overcome the pitfall of measuring collaboration rates against a global rate – whereby most entities will appear to achieve collaboration rates that are higher than expected – the expected value of collaboration per publication is calculated by weighting each publication by the number of countries that appear on it.

To illustrate the methodology with an example (see Table 2), we will first assume that all documents are from the same year, document type and subject, and gradually add complexity to fully understand the calculations. Suppose we have a total of 4 publications in our database, involving 3 different countries: China, the USA and the UK. The global share of international co-publications is 50%, as 2 out of 4 are internationally co-authored.

Publication    China    USA    UK    International?
#1             1        -      -     0 (no)
#2             -        1      -     0 (no)
#3             -        1      1     1 (yes)
#4             1        1      1     1 (yes)

Table 2 - First example with 4 publications and 3 countries

In our example, China has 50% international publications, the USA has 67% and the UK 100%. If you were to compare these percentages to the global average of 50%, all three countries would appear to be at or above average. To remedy this effect, we weight each publication by the number of collaborating countries contributing to it. In our example, that gives a global average of (1*0+1*0+2*1+3*1)/(1+1+2+3)=71%, not 50%.
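A minimal sketch in Python of this weighting, using only the four publications of Table 2 (the data structure and variable names are ours, for illustration; this is not the authors' implementation):

```python
# The four publications of Table 2: (countries on the paper, international flag).
pubs = [
    (["China"], 0),               # #1
    (["USA"], 0),                 # #2
    (["USA", "UK"], 1),           # #3
    (["China", "USA", "UK"], 1),  # #4
]

# Naive global share: 2 of the 4 publications are international.
naive = sum(intl for countries, intl in pubs) / len(pubs)

# Country-weighted share: each publication counts once per contributing
# country, so multi-country (i.e. international) papers weigh more.
weighted = (
    sum(len(countries) * intl for countries, intl in pubs)
    / sum(len(countries) for countries, intl in pubs)
)

print(naive)               # 0.5
print(round(weighted, 2))  # 0.71
```

Weighting by country count makes a publication contribute once per participating country, which is what pulls the benchmark up from 50% to 71%.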

Multiplying by the number of countries on a publication means that the expected share of internationally co-authored publications is affected by the average number of countries per publication. When comparing values calculated for different fields, which have different average numbers of countries per publication, this indirectly causes different results. If, for instance, a group of researchers has 30% international co-publications in one field against a global average of 15% (twice as high), and 10% in another field against a global average of 5% (also twice as high), the FWIS derived from the publications in each field may still differ if the average number of countries on international publications differs. The rationale for this difference is that fields with more countries per publication have a higher likelihood of international collaboration.

FWIS uses a publication-oriented approach, meaning that an expected and an actual value are calculated for each publication. The expected value is derived by dividing the total number of international co-publications by the total number of publications, weighting both counts by the number of countries involved. In our example (see Table 3) this gives (1*0+1*0+2*1+3*1)/(1+1+2+3)=0.71. The FWIS for each publication is then the actual value (0 or 1) divided by the expected value.

Publication    China    USA    UK    Count of countries    International?    Expected score    FWIS
#1             1        -      -     1                     0 (no)            0.71              0
#2             -        1      -     1                     0 (no)            0.71              0
#3             -        1      1     2                     1 (yes)           0.71              1.41
#4             1        1      1     3                     1 (yes)           0.71              1.41

Table 3 - Addition of the count of countries and FWIS taking into account the number of countries

In order to calculate the score for an entity (an entity could be, for example, a country, an institution or a group of researchers), we simply take the arithmetic mean of the FWIS scores of the entity's publications. For China, this is (0+1.41)/2=0.71.

When calculating the global score for the entire dataset – as is required to validate the calculation and end up with a score of 1.00 – each publication again needs to be weighted by the count of countries present on it. In our example, the global value is (0*1+0*1+1.41*2+1.41*3)/(1+1+2+3)=1.00. The same weighting is required when calculating the score for entities that span multiple countries; for a continent, for instance, the value is derived by weighting the score of each member country in the same way.
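Continuing the same illustrative sketch (self-contained below; again, names and structure are ours, not the authors' code), the per-publication FWIS, the entity score for China, and the weighted global check can be computed as follows:

```python
# Same four publications as above: (countries on the paper, international flag).
pubs = [(["China"], 0), (["USA"], 0), (["USA", "UK"], 1), (["China", "USA", "UK"], 1)]

# Country-weighted expected value (0.71; identical for every publication here,
# since all are assumed to share year, document type and subject).
expected = sum(len(c) * i for c, i in pubs) / sum(len(c) for c, i in pubs)

# Per-publication FWIS: actual value (0 or 1) divided by the expected value.
fwis = [i / expected for c, i in pubs]

# Entity score: arithmetic mean over the entity's publications (here China).
china = [f for (c, i), f in zip(pubs, fwis) if "China" in c]
print(round(sum(china) / len(china), 2))  # 0.7 (the article's 0.71 comes from rounding 1.41 first)

# Global check: weight each publication by its country count -> 1.0.
weights = [len(c) for c, i in pubs]
print(round(sum(w * f for w, f in zip(weights, fwis)) / sum(weights), 2))  # 1.0
```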

To fully understand the model, we also need to consider the properties inherited from FWCI: normalization by subject, publication type and year. Normalizing for publication type and year is relatively straightforward: we simply take the average international rate within each subgroup (for example, reviews published in 2008). Subjects are a little more complicated, because a publication can belong to multiple subjects at the same time.

Let’s return to our initial example, this time adding subject classifications to the publications (see Table 4).

Publication    China    USA    UK    Count of countries    Subject classification
#1             1        -      -     1                     A
#2             -        1      -     1                     B
#3             -        1      1     2                     B, C
#4             1        1      1     3                     A, C

Table 4 - Addition of subject classifications

In order to account for the subjects a publication belongs to, and thus to calculate the expected value per subject, each publication is weighted towards each of its subjects by fractionalizing it. For publication #3, this means that 50% of the publication counts towards subject B and 50% towards subject C. In this example, the expected value for each subject is therefore (multiplying by country count and fractional subject weights):

Subject A: (1*1*0 + 3*0.5*1)/(1*1+3*0.5)=0.6
Subject B: (1*1*0 + 2*0.5*1)/(1*1+2*0.5)=0.5
Subject C: (2*0.5*1 + 3*0.5*1)/(2*0.5+3*0.5)=1.0

To form the expected score per publication from these per-subject values, we take the harmonic mean over the publication's subjects (see Table 5). For publication #3 this is 2/((1/0.5)+(1/1))=0.67, and for publication #4 it is 2/((1/0.6)+(1/1))=0.75. The FWIS per publication is again derived by dividing the international indicator (1 or 0) by the expected score.

Publication    China    USA    UK    Count of countries    Subject classification    International?    Expected score    FWIS
#1             1        -      -     1                     A                         0 (no)            0.6               0
#2             -        1      -     1                     B                         0 (no)            0.5               0
#3             -        1      1     2                     B, C                      1 (yes)           0.67              1.5
#4             1        1      1     3                     A, C                      1 (yes)           0.75              1.33

Table 5 - Addition of the FWIS per publication taking into account the number of countries and the subject classification

In this example, the FWIS for China is (0+1.33)/2=0.66. To validate the model, the global average still needs to be 1.0 across the subject fields. Applying the same country-count weighting as before, this is calculated as (0*1+0*1+1.5*2+1.33*3)/(1+1+2+3)=1.0.
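The full subject-normalized example of Tables 4 and 5 can be sketched in the same illustrative Python style (field names are ours; exact outputs differ slightly from the rounded figures in the text):

```python
# Tables 4 and 5 of the example: n = count of countries on the publication.
pubs = [
    {"n": 1, "intl": 0, "subjects": ["A"]},       # #1: China
    {"n": 1, "intl": 0, "subjects": ["B"]},       # #2: USA
    {"n": 2, "intl": 1, "subjects": ["B", "C"]},  # #3: USA, UK
    {"n": 3, "intl": 1, "subjects": ["A", "C"]},  # #4: China, USA, UK
]

# Per-subject expected value: weight by country count (n) and by the
# publication's fractional membership in the subject.
subjects = sorted({s for p in pubs for s in p["subjects"]})
expected = {}
for s in subjects:
    members = [p for p in pubs if s in p["subjects"]]
    num = sum(p["n"] / len(p["subjects"]) * p["intl"] for p in members)
    den = sum(p["n"] / len(p["subjects"]) for p in members)
    expected[s] = num / den
print(expected)  # {'A': 0.6, 'B': 0.5, 'C': 1.0}

# Expected score per publication: harmonic mean over its subjects.
# FWIS per publication: the international flag divided by that expectation.
for p in pubs:
    per_subject = [expected[s] for s in p["subjects"]]
    p["exp"] = len(per_subject) / sum(1 / e for e in per_subject)
    p["fwis"] = p["intl"] / p["exp"]

# Country-count-weighted global average: must come out at 1.0.
print(round(sum(p["n"] * p["fwis"] for p in pubs) / sum(p["n"] for p in pubs), 2))
```

The final printed value confirms the validation property stated above: with country-count weighting, the dataset-wide FWIS is 1.0.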

 

Testing the new metric

The FWIS was recently tested on a real-case example. In collaboration with the Swedish Foundation for International Cooperation in Research and Higher Education (STINT), we compared 28 Swedish universities on the basis of their level of internationalization.

A first analysis was based on the share of international co-publications (see Figure 2 – right part). That analysis favored institutions focused on disciplines where international collaboration is naturally strong, such as Economics, Econometrics and Finance (Stockholm School of Economics) or Life Sciences (Stockholm University).

A second analysis used the FWIS (see Figure 2 – left part), and both rankings were finally compared (see Figure 3).

FWIS fig 2

Figure 2 - FWIS and share of international co-publications per Swedish university - 2013
40% of the institutions (11 out of 28) experienced a major change (greater than four places) in their ranking position due to the change of indicator used as a basis for the ranking. Source: SciVal and Scopus.

Figure 3 - Comparison of the ranking of Swedish universities based on the share of international co-publications or FWIS - 2013. Source: SciVal and Scopus. Note: This Figure was updated on 21 November 2014 to correct the placement of the blue captions.

The example of Luleå University of Technology illustrates well the impact of using the FWIS instead of the share of international co-publications. Luleå focuses predominantly on engineering disciplines (see Figure 4), which typically show relatively weak international collaboration (see Figure 1).

FWIS fig 4

Figure 4 - Split of publication output per journal category for Luleå University of Technology - 2013. Source: SciVal.

Luleå’s share of international co-publications in those disciplines may appear limited (around 50%), but it is much greater than the global averages for those fields (see Table 6). When changing from a ranking based on the share of international co-publications to one based on FWIS, Luleå moves up 7 positions.

FWIS Table 6

Table 6 - Share of international co-publications and FWIS for a selected number of disciplines - 2013. Source: SciVal and Scopus.

The FWIS gives Luleå University of Technology a higher score because it takes the university's specific mix of scientific fields into account, i.e. Luleå's output is compared fairly with that of its peers instead of assuming that all universities have the same mix of scientific production.

 

Conclusions

Responding to the need for better management and understanding of the internationalization of research and higher education, this paper elaborates and tests a new indicator of international research collaboration. The proposed FWIS indicator enhances the possibilities to measure and compare the internationalization of HEIs. The very common indicator based on the share of international co-publications suffers from biases due to scientific profile, publication type and publication year. Using a method similar to the calculation of the FWCI, the proposed indicator eliminates these biases on the same underlying dataset.

 

References

(1) Smeby, J.C. & Trondal, J. (2005) “Globalisation or europeanisation? International contact among university staff”, Higher Education, Vol. 49, No. 4, pp. 449–466.
(2) Smith, C.L. et al. (2011) Knowledge, networks and nations: Global scientific collaboration in the 21st century. London, UK: The Royal Society.
(3) Sporn, B. (2007) “Governance and Administration: Organizational and Structural Trends”. In J.J.F. Forest & P.G. Altbach, eds. International Handbook of Higher Education Part One: Global Themes and Contemporary Challenges. Amsterdam, Netherlands: Springer, pp. 141–157.
(4) Hudzik, J.K. & Stohl, M. (2009) “Modelling assessment of the outcomes and impacts of internationalisation”. In L. Johnson, ed. Measuring success in the internationalisation of higher education. Amsterdam, Netherlands: European Association for International Education (EAIE), pp. 9–21.
(5) Knight, J. (2005) “An internationalization model: responding to new realities and challenges”. In H. de Wit et al., eds. Higher Education in Latin America: The International Dimension. Washington D.C., USA: The World Bank, pp. 1–39.
(6) Swedish Research Council (2013) Kartläggning av olika nationella system för utvärdering av forskningens kvalitet – förstudie inför regeringsuppdraget U2013/1700/F. Stockholm, Sweden.
(7) Katz, J.S. & Martin, B.R. (1997) “What is research collaboration?”, Research Policy, Vol. 26, No. 1, pp. 1–18.
(8) Adams, J. (2013) “Collaborations: The fourth age of research”, Nature, Vol. 497, pp. 557–560.
(9) Luukkonen, T. et al. (1993) “The measurement of international scientific collaboration”, Scientometrics, Vol. 28, No. 1, pp. 15–36.
(10) De Lange, C. & Glänzel, W. (1997) “Modelling and measuring multilateral co-authorship in international scientific collaboration. Part I. Development of a new model using a series expansion approach”, Scientometrics, Vol. 40, No. 3, pp. 593–604.
(11) Zitt, M. & Bassecoulard, E. (1998) “Internationalization of scientific journals: A measurement based on publication and citation scope”, Scientometrics, Vol. 41, No. 1-2, pp. 255–271.
(12) Glänzel, W. & De Lange, C. (2002) “A distributional approach to multinationality measures of international scientific collaboration”, Scientometrics, Vol. 54, No. 1, pp. 75–89.
(13) Colledge, L. & Verlinde, R. (2014) SciVal Metrics Guidebook. Available at: http://www.elsevier.com/__data/assets/pdf_file/0006/184749/scival-metrics-guidebook-v1_01-february2014.pdf.
(14) Waltman, L. et al. (2011) “Towards a new crown indicator: An empirical analysis”, Scientometrics, Vol. 87, No. 3, pp. 467–481.
(15) Lundberg, J. (2007) “Lifting the crown-citation z-score”, Journal of Informetrics, Vol. 1, No. 2, pp. 145–154.
(16) Opthof, T. & Leydesdorff, L. (2010) “Caveats for the journal and field normalizations in the CWTS (“Leiden”) evaluations of research performance”, Journal of Informetrics, Vol. 4, No. 3, pp. 423–430.

Reporting Back: The APAC research intelligence conference

Alexander van Servellen and Ikuko Oba report back from the first APAC research intelligence conference, which focused on the challenges institutions face with regard to managing research and the best practices employed to optimize research strategy and impact. 

Read more >


The first APAC Research Intelligence Conference was attended by 109 people from 70 institutions across 8 countries. The topic of discussion at this two-day event, hosted at the Nanyang Executive Centre at NTU in Singapore, was Research Excellence: the challenges institutions face with regard to managing research and the best practices employed to optimize research strategy and impact. The idea to organize this event stemmed from a common interest in having a platform for open discussion on the topic by dedicated professionals, and that was certainly achieved as 9 speakers took the stage to share their insights and experiences. This article reviews selected parts of each speaker’s presentation.

Group photo of speakers

Group photo taken during the conference, with presenters mentioned in the article in bold.
Back row from the left – Hiroshi Fukunari, Marcel Vonder, Thomas Thayer, Anders Karlsson, William Gunn, Kevin Carlsten, Lim Kok Keng
Front row from the left - Byoung Yoon Kim, Hirofumi Seike, Douglas Robertson, Giles Carden, Michael Khor.

 

Day 1

Professor Bertil Andersson, President of Nanyang Technological University (NTU) in Singapore, presented ‘Nanyang Technological University, Singapore: A Drive in Excellence’.

Professor Andersson described Singapore as a country with a vibrant ecosystem of world-class research-producing institutions. He highlighted the important role of the Singapore government, which not only talks about developing the knowledge economy but also walks the talk: providing funding, having a dedicated Research Innovation & Enterprise Council chaired by the Prime Minister, using 5-year planning cycles, and maintaining a tradition of philanthropic endowments and incentivized private donations. NTU is one of the fastest-rising universities in both world rankings and research impact. Figure 1 shows NTU’s Field-Weighted Citation Impact surpassing that of Asia’s top institutes by 2012.

Professor Andersson attributed NTU’s success to being young and having been able to start from scratch rather than reorganizing an existing structure, to receiving generous long-term finance, and to being able to recruit senior and junior faculty from abroad by maintaining a strong international profile.

 

“A university is about its people, its people and its people… its good people. I think personally the biggest secret to our success has been that we’ve been able to recruit top people from Europe, United States and Asia… of a very high caliber. And we also recruited many young people. The superstars of tomorrow have come to NTU in big numbers and we had funding for that.”

Professor Bertil Andersson, NTU

 

 

 

Rep Back RT 38  Fig 1
Figure 1 – The field-weighted citation impact of NTU and selected comparator institutions, 2004-2012. Source: SciVal.

 

Professor Byoung Yoon Kim, Vice President of Research at the Korea Advanced Institute of Science and Technology (KAIST) in South Korea, presented ‘Strategic Role of KAIST in Advancing Korean Economic Development’.

Professor Kim outlined the role KAIST has played in developing Korea’s economy over the last 40 years and spoke about the role it hopes to play in the next 40. KAIST was established in 1971 with a mission to produce the professionals needed to transform Korea into an industrialized nation. As an initiative for change and development, it was not only a new university but was also placed under a different ministry, and therefore did not share a budget with the other universities. KAIST recruited the best professors worldwide and successfully contributed to Korea’s economic growth by fostering talent who established companies now known worldwide, which generate the majority of Korea’s income.

Looking forward, Professor Kim spoke about the Startup-KAIST movement, which aims to establish a model for the country to follow by spreading a culture of entrepreneurship and developing an ecosystem that helps establish and globalize company activities. Professor Kim echoed Professor Andersson in attributing the success of KAIST in part to having started as an independent university rather than changing an existing system and culture. He said that if the same money had gone to an existing university, it would not have produced the same results; KAIST represented a departure from the old system.

 

“KAIST has to also find out what it should be doing for the next 40 years in order to be different and justify its existence. We should not compare our university with SNU… it has a different mission. Although my president (laughs) and most government officers are very interested in university rankings, I try not to talk about it, because it is important in a sense, but it should not be the goal…”

Professor Byoung Yoon Kim, KAIST

 

 

 

Dr. Anders Karlsson, Vice President Global Academic Relations APAC, Elsevier, presented ‘The Global Trends on Internationalization and Assessing Impact Beyond Research’.

Dr. Karlsson posed a number of questions, the most central being ‘is collaborative work better?’ He showed the positive correlation between a country’s international collaboration share and its Field-Weighted Citation Impact, found in the report prepared by Elsevier for the Department for Business, Innovation and Skills (BIS) in the UK (1), and was quick to point out that correlation does not explain causality. From the same study, he presented data showing that the UK’s international collaborative papers were cited 60% more often than papers collaborated on only within the UK. That data was positioned as strong evidence of the leverage the UK gets from collaborating internationally, in terms of the positive effect on overall scholarly influence.

Dr. Karlsson also investigated whether international articles are judged better in peer review. He cited evidence from a study (2) which looked at papers submitted for peer review in Italy, and which found that papers with more authors were judged higher in excellence.

 

“If you collaborate more, your citation impact increases; basically you have a broader base, and you reach out more broadly.

International collaboration should be high on the strategic agenda of countries which want to increase their citation impact.”

Dr. Anders Karlsson, Elsevier

 

 

 

 

Dr. Giles Carden, Director, Strategic Planning and Analytics, Warwick University presented ‘Research Planning: Embedding analytics in a new research performance challenge process at the University of Warwick’.

Dr. Carden introduced Warwick University’s approach to using analytics for managing research performance, and explained their strategic rationale, achievements, and future direction. The context for developing analytics was to support Warwick’s goal of becoming an undisputed world leader in research and scholarship, plus the fact that the UK’s national research assessment exercise in part bases its assessment on these types of analytics. The distribution of the UK’s 1.6 billion pounds in government block grants is informed by the assessment outcome of the Research Excellence Framework (REF). Warwick therefore developed an analytics and planning process in tandem, embedding the analytics into the process in order to be successful in this very competitive environment.

Dr. Carden shared analytics showing Warwick’s collaboration with the USA (see Figure 2), stating this was important to boost citation impact. He also revealed that Warwick’s Research Assessment and Planning group reviews the performance of each individual academic in a substantive post, and showed an author profile in SciVal (see Figure 3). Communication was the key to the project’s success: it was not about being out to get people, but about identifying patterns that can help researchers turn their performance around. As a result, Warwick University improved academic staff accountability, grew its research income and research student numbers, published more in high-impact journals, and increased citations – along with a cultural shift within the University. In closing, Dr. Carden discussed the future of analytics and big data as likely involving predictive analytics, and also highlighted the limitations of analytics.

Rep Back RT 38  Fig 2

 

Figure 2 – The collaboration map shows the institutions with which Warwick University has collaborated, each represented by a bubble sized by the number of co-authored publications (2011-2013). Source: SciVal.

 

Rep Back RT 38  Fig 3

 

Figure 3 – Author profile showing the publications, citations, citations per paper and h-index of a specific author. Source: SciVal.

 

Dr. Douglas Robertson, Director of Research Services Division, The Australian National University (ANU), presented ‘The Changing World of Research Support and the Challenges of Impact from Basic Science: Some Reflections’.

Dr. Robertson has been active in research administration since 1983, and reflected on the changing nature in university research support and on some concerns. Research administration has become much more complex, and he questioned whether the quality of research is any better as a consequence. He encouraged contemplation about whether the development and current practice of research administration is really to the benefit of science and society.

“Life was very simple in 1983. When you were sent a research award, it ran to one side of A4 that said ‘we’d like to give you some money, will you please write back and say whether you’d like it. And if you could tell us what you did in three years’ time, we’d be very grateful.’ Now research contracts in the UK can run to 90 or 100 pages of very closely typed script… there has been quite a lot of change…”

Dr. Douglas Robertson, ANU

 

 

He asked whether we are spending too much money on administering research and not enough on actually doing it. He also noted that several Nobel Prize winners have questioned whether they would have been funded under the current systems. Today, researchers have to report more often, seek more permissions, and justify at greater length why their research is worth investing in, while the focus has shifted towards societal impact rather than the impact on research and other researchers.

Dr. Robertson also questioned whether the race to publish is a good thing, citing a number of studies reporting a lack of reproducibility, including one from the pharmaceutical industry which revealed that “in only ~20–25% of the projects were the relevant published data completely in line with our in-house findings” (3).

“I find it challenging to figure out how we create an effective research environment rather than one that is easy to measure. I am of the opinion that if you are using public money, and produce work that cannot be reproduced, it is not a good outcome; the aim is that you publish so that others can build on your publication, that you patent so that others can build on your invention, and if your publication does not achieve that, we have concerns. Particularly in the life sciences, the pressure on scientists is phenomenal…”

Dr. Douglas Robertson, ANU

Finally, Dr. Robertson raised the importance of curiosity driven research and concerns about the increased shift in focus to applied science. Scientists are increasingly required to indicate what their research will be used for rather than being left to freely explore the unknown. He underlined the importance of basic research, stating that applied research is only possible when you have a solid foundation of basic research.

 

Day 2

Dr. Hirofumi Seike, University Research Administrator, Management Associate Professor, Tohoku University presented ‘University Internationalization and its Impact’.

Dr. Seike discussed why internationalization is a challenge. Tohoku commits to providing students with the best possible international perspective, and the university believes international experience ensures high-quality education and research, as well as expanding students’ human networks. Many global issues can only be solved through international collaboration, but Japan faces the problem of students not wanting to study abroad; in this sense, he feels Japan is falling behind.

In terms of research, he feels that Japan has stagnated while other Asian countries are increasing their presence. The government shares this strong sense of urgency, which has led it to initiate multiple globalization projects and to set targets such as placing 10 universities in the top 100 of the world rankings.

Dr. Seike introduced one of these government initiatives, the World Premier International Research Center Initiative (WPI), which aims to establish world-class research institutes; Tohoku University was chosen to host one of them. WPI empowered the awardees with their own governance, which allowed competitive recruitment to assemble world-class innovative scientists who can lead from basic research to industrial application.

 

“WPI established a special zone within the existing university framework. It’s a new approach… not just the expansion of the existing system. It should be the showcase of the best research… the best of the best.”

Dr. Hirofumi Seike, Tohoku University

 

 

 

 

Professor Paul K.H. Tam, Pro-Vice-Chancellor and Vice-President (Research), University of Hong Kong (HKU) presented ‘Research Excellence and Internationalization at the University of Hong Kong – Striking the Right Mix of Metrics and Faculty Expertise’.

Professor Tam described the University of Hong Kong (HKU) as an institution of great heritage but with many structural issues that needed to be resolved, and shared the ways they overcame these challenges: a transformation from a predominantly teaching university to a comprehensive research university.

A major motivation for institutions of higher education is competition, and the introduction of other universities in Hong Kong ‘awoke the giant from a deep sleep’. While the transformation was also self-motivated, there were important external factors from the government: the establishment of the Research Grants Council, followed by the introduction of the Research Assessment Exercise. The previous funding system allocated 75% of the money to a recurrent grant that supported continuity and sustainability; distribution was based on student places (75%) and only 25% was related to research. The government changed the system to drive major change, and allocation is now judged using performance indicators.

What does it mean to be a ‘world-class’ university? HKU agreed on having a tradition of research excellence with internationally competitive staff and, more importantly, a strong culture that will attract students globally as the institution of choice for those who want a career in research. HKU has been very successful despite there now being 8 institutions in Hong Kong: it is responsible for over half of the large program grants and holds the top position in every assessment indicator, be it grant amount or research output.

Talking about university transformation, Professor Tam spoke about the guiding principles of providing an enabling environment for researchers and respecting academic freedom by keeping a bottom-up approach that is facilitated from the top.

“What I consider the greatest asset of the university is human resources, the talents. It is the role of the university leaders to provide an enabling environment for the researchers – this is my guiding principle. The other principle I have is that we have enjoyed the principle of academic freedom and we respect that and continue to cherish it. To respect that means the approach is bottom up. There can be a lot of debate between top down and bottom up approaches. We have kept a bottom up approach but introduced a top facilitated bottom up approach.”

Professor Paul Tam, HKU

 

Dr. John Green, Life Fellow, Queens’ College, University of Cambridge, presented ‘Evidence Based decision making in academic research’.

First, Dr. Green set the context by talking about increasing interdisciplinarity and internationalization in science, and the increasing demand for evidence to evaluate outcomes and justify funding expenditure. He spoke about how Imperial College evaluates its cross-departmental interdisciplinary institutes every 3 or 4 years, and on what basis it closes institutes down.

Dr. Green touched on the risk of getting lost in the avalanche of data available today and the importance of extracting meaningful information from it: understanding where the strength of an institution lies, where to focus its strategy, and with whom to collaborate. He explained the importance of due diligence on specific partnerships and the need to find ways to connect researchers and facilitate the mobility that creates collaboration. He stated firmly that these things do not happen bottom up; they need to be facilitated, with evidence informing the facilitation. At Imperial, he created and used a system that presented research performance dashboards at the departmental level.

 

“The world has changed now, and if only some of the systems which are available to you now were available to me then, I would not have re-invented the wheel… Pure has now come into the market, which does exactly what we were trying to do, but it does it better. It is a system which sits on top of your internal IT systems and harvests information from it and provides you with dashboards, and that is exactly the concept that I have been talking about.” (see Figure 4)

Dr. John Green, Life Fellow, Queens’ College, University of Cambridge

 

 

Rep Back Fig 4

Figure 4 – Example of a dashboard in Pure showing research outputs, journals and activities for a university.

Having spoken about metrics, he pointed out the need to standardize the definitions and methodology used to generate them, because everyone tends to do it differently, which means the results cannot accurately be compared. How can we compare researcher numbers if each university defines researcher counts differently? The Snowball Metrics project, a non-commercial initiative in which Dr. Green and Elsevier are involved, has resulted in an agreed methodology for these metrics, endorsed by a group of distinguished UK universities.

“I don’t want to give the impression that metrics are everything. Metrics are one of a number of ways to come to a judgment… and help you come to a view of something. In no sense are they a way to navigate your car. A Satellite Navigation system is something that tells you what the best route is and how you might change the route if there are traffic jams… But you have to decide which is the best route for you, based on that information and other information too (for example where you want to go for lunch, do you want the prettiest route or the autoroute). That is why we need other measures such as peer review to complement metrics.”

Dr. John Green, Life Fellow, Queens’ College, University of Cambridge

 

Dr. William Gunn, Head of Academic Outreach, Mendeley, presented ‘Innovation – Scientific and technical foundation development for altmetrics in the US’.

Dr. Gunn spoke about entirely new metrics which may complement, or arguably even provide an alternative to, traditional metrics – hence the term ‘altmetrics’. He suggests that new forms of scholarship need new metrics. Altmetrics accrue faster than citation data, and they draw on research and social media data that sit entirely outside traditional research metrics. Altmetrics for impact include article usage, peer review via post-publication commentary services, and social media activity such as discussions of blog posts, measuring the attention a work has received and the impact it has had on others. He drove home the point that there are many ways to look at the overall influence of a paper or group of papers, and that citations are just a tiny fraction of that.

“There are 125 times more downloads of papers (than citations) and a universe of social activities that are being aggregated… so there is a lot more data out there that we can gather, work with and use to understand the impact our work is having.”

Dr. William Gunn, Head of Academic Outreach, Mendeley

 

 

 

 

Nonetheless, altmetrics are not without challenges regarding transparency and consistency. Different services provide altmetrics, such as Plum Analytics, Impact Story, and Altmetric, and if you query them all for one specific DOI, the metric values reported back differ, which raises the question of which value is correct. There are also problems with identity attribution if researchers use fake identities, and altmetrics can be gamed, although gaming is difficult if one draws on many sources and many different metrics.

Looking back, the conference was fascinating in that the speakers and participants alike were passionate and often candid in sharing their views and experiences, resulting in lively discussions, which we all could learn from.

References

(1) Elsevier (2013) “International Comparative Performance of the UK Research Base – 2013”. Available at: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/263729/bis-13-1297-international-comparative-performance-of-the-UK-research-base-2013.pdf
(2) Franceschet, M. & Costantini, A. (2010) “The effect of scholar collaboration on impact and quality of academic papers”, Journal of Informetrics, Vol. 4, No. 4, pp. 540–553.
(3) Prinz, F., Schlange, T. & Asadullah, K. (2011) “Believe it or not: How much can we rely on published data on potential drug targets?”, Nature Reviews Drug Discovery, Vol. 10, No. 9, pp. 712–713.

 

 

VN:F [1.9.22_1171]
Rating: 0.0/10 (0 votes cast)

The first APAC Research Intelligence Conference was attended by 109 people from 70 institutions, coming from 8 different countries worldwide. The topic of discussion at this two day event hosted at The Nanyang Executive Center at NTU in Singapore was Research Excellence, the challenges which institutions face with regard to managing research and the best practices employed to optimize research strategy and impact. The idea to organize this event stemmed from a common interest in having a platform to facilitate open discussion on the topic by dedicated professionals, and that was certainly achieved as 9 speakers took the stage to share their insight and experiences. This article reviews selected parts of each speaker’s presentation.

Group photo of speakers

Group photo taken during the conference, with presenters mentioned in the article in bold.
Back row from the left – Hiroshi Fukunari, Marcel Vonder, Thomas Thayer, Anders Karlsson, William Gunn, Kevin Carlsten, Lim Kok Keng
Front row from the left - Byoung Yoon Kim, Hirofumi Seike, Douglas Robertson, Giles Carden, Michael Khor.

 

Day 1

Professor Bertil Andersson, President of Nanyang Technological University (NTU) in Singapore, presented ‘Nanyang Technological University, Singapore: A Drive in Excellence’.

Professor Andersson described Singapore as a country with a vibrant eco-system of world-class research producing institutions. He highlighted the important role of the Singapore government, as not only talking about developing the knowledge economy, but also walking the talk by providing funding, having a dedicated Research Innovation & Enterprise Council chaired by the Prime Minister, using 5-year planning cycles, and by having a tradition of philanthropic endowments and incentivizing private donations. NTU is one of the fast-rising universities in both the world ranking and research impact. Figure 1 shows NTU’s Field-Weighted Citation Impact surpassing that of Asia’s top institutes by 2012.

Professor Andersson attributed their success to being young and having been able to start from scratch rather than reorganizing an existing structure, receiving long-term generous finance, and being able to recruit senior and junior faculty from abroad by maintaining a strong international profile.

 

NTU AnderssonA university is about its people, its people and its people……its good people. I think personally the biggest secret to our success has been that we’ve been able to recruit top people from Europe, United States and Asia… of a very high caliber. And we also recruited many young people. The superstars of tomorrow have come to NTU in big numbers and we had funding for that”.

Professor Bertil Andersson, NTU

 

 

 

Rep Back RT 38  Fig 1
Figure 1 –The field-weighted citation impact of NTU and selected comparator institutions 2004-2012. Source: SciVal.

 

Professor Byoung Yoon Kim, Vice President of Research at the Advanced Institute of Science and Technology (KAIST) in South Korea presented ‘Strategic Role of KAIST in Advancing Korean Economic Development.

Professor Kim outlined the role KAIST has played in developing Korea’s economy in the last 40 years and spoke about the role they hope to play in the next 40 years. KAIST was established in 1971 with a mission to produce professionals to transform Korea into an industrialized nation. As an initiative for change and development, it was not only a new university, but was also under a different ministry, and therefore did not share budget with the other universities. KAIST recruited the best professors worldwide and successfully contributed to Korea’s economic growth by fostering talents who established companies now known worldwide, which generate the majority of Korean’s income.

Looking forward, Professor Kim spoke about the Startup-KAIST movement, which aims to establish a model that the country should follow by spreading a culture of entrepreneurship, to develop an eco-system to help establish and globalizecompany activities. Professor Kim echoed Professor Andersson in attributing the success of KAIST in part to having started as an independent university rather than changing an existing system and culture. He said if the same money went to another existing University, it would not have produced the same results. KAIST represented a departure from the old system.

 

yoonkim_hires1948X1461KAIST has to also find out what it should be doing for the next 40 years in order to be different and justify its existence.  We should not compare our university with SNU… it has a different mission. Although my president (laughs) and most government officers are very interested in university rankings, I try not to talk about it, because it is important in a sense, but it should not be the goal…”

Professor Byoung Yoon Kim, KAIST

 

 

 

Dr. Anders Karlsson, Vice President Global Academic Relations APAC, Elsevier, presented ‘The Global Trends on Internationalization and Assessing Impact Beyond Research.

Dr. Karlsson posed a number of questions; the most central being ‘is collaborative work better?’  He showed the positive correlation between the international collaboration share of a country and their Field-Weighted Citation Impact, found in the report prepared by Elsevier for the Department of Business Innovation and Skills (BIS) in the UK (1), and was quick to point out that correlation does not explain causality. From the same study, he presented data thatshowsthe UK’s international collaborative papers were cited 60% more often than papers collaborated on only within UK. That data was positioned as strong evidence demonstrating the leverage the UK gets from collaborating internationally, in terms of the positive effect on overall scholarly influence.

Dr. Karlsson investigated whether international articles are judged better in peer review. He used evidence provided from a study (2) which looked at papers submitted in Italy for peer review, and found that papers with more authors were judged higher in excellence.

 

Anders KarlssonIf you collaborate more, your citation impact increases, basically you have a broader base, and you reach out more broadly

International collaboration should be high on the strategic agenda of countries which want to increase their citation impact.

Dr. Anders Karlsson, Elsevier

 

 

 

 

Dr. Giles Carden, Director, Strategic Planning and Analytics, Warwick University presented ‘Research Planning: Embedding analytics in a new research performance challenge process at the University of Warwick’.

Giles CardenDr. Carden introduced Warwick University’s approach to using analytics for managing research performance, and explainedtheir imperative strategic rationales, achievements, and future direction. The context for developing analytics was to supportWarwick’s goal of becoming an undisputed world leader in research and scholarship, plus the fact that the UK’s national research exercise in part based their assessment on these types of analytics. Distribution of UK’s 1.6 billion pounds in block grants coming from the government is informed by the assessment outcome of the Research Excellence Framework (REF). Thus, Warwick developed an analytics and planning process in tandem and embedded the analytics into the process to be successful in this very competitive environment.

Dr. Carden shared analytics showing Warwick’s collaboration with the USA (see Figure 2), stating this was important to boost citation impact. He also revealed that Warwick’s Research Assessment and Planning group reviews the performance of each individual academic in a substantial post, and showed an author profile in SciVal (see Figure 2). Communication was the key to the project’s success.It was not about being out to get people, but to identify patterns that can help researchers turn their performance around. As a result, Warwick University improved academic staff accountability, grew in research income and research students, published more in high impact journals, and increased citations – along with a cultural shift within the University. In closing, Dr. Carden discussed the future of analytics and big data as likely involving predictive analytics, and also highlighted the limitations of analytics.

Rep Back RT 38  Fig 2

 

Figure 2 The collaboration map shows the institutions which Warwick University has collaborated with represented by a bubble which shows the number of co-authored publications (2011-2013). Source: SciVal.

 

Rep Back RT 38  Fig 3

 

Figure 3 Author profile showing the publications, citations, citations per paper and h-index of a specific author. Source: SciVal.

 

Dr. Douglas Robertson, Director of Research Services Division, The Australian National University (ANU), presented ‘The Changing World of Research Support and the Challenges of Impact from Basic Science: Some Reflections’.

Dr. Robertson has been active in research administration since 1983, and reflected on the changing nature in university research support and on some concerns. Research administration has become much more complex, and he questioned whether the quality of research is any better as a consequence. He encouraged contemplation about whether the development and current practice of research administration is really to the benefit of science and society.

Douglas Robertson“Life was very simple in 1983. When you were sent a research award, it ran to one side of A4 that said ‘we’d like to give you some money, will you please write back and say whether you’d like it. And if you could tell us what you did in three years’ time, we’d be very grateful.’ Now research contracts in the UK can run to 90 or 100 pages, of very closely typed script…there has been quite a lot of change… “

Dr. Douglas Robertson, ANU

 

 

He asked whether we are spending too much money on administrating research and not enough money on actually doing it. He also stated that several Nobel Prize winners have questioned whether they would have been funded under the current systems. Today, researchers have to report more often, get more permission, andjustify more why their research is worth investing in, while the focus is now more on the societal impact than the impact on research and other researchers.

Dr. Robertson also questioned whether the race to publish is a good thing, citing a number of studies which report observed lack of reproducibility, including one in the pharmaceutical industry where it was revealed that in only ~20–25% of the projects, were the relevant published data completely in line with our in-house findings (3).

I find it challenging to figure out how we create an effective research environment rather than one that is easy to measure. I am of the opinion that if you are using public money, and produce work that cannot be reproduced, it is not a good outcome, the aim is that you publish so that others can build on your publication, that you patent so that others can build on your invention, and if your publication does not achieve that, we have concerns. Particularly in the life sciences, the pressure on scientists is phenomenal…”

Dr. Douglas Robertson, ANU

Finally, Dr. Robertson raised the importance of curiosity driven research and concerns about the increased shift in focus to applied science. Scientists are increasingly required to indicate what their research will be used for rather than being left to freely explore the unknown. He underlined the importance of basic research, stating that applied research is only possible when you have a solid foundation of basic research.

 

Day 2

Dr. Hirofumi Seike, University Research Administrator, Management Associate Professor, Tohoku University presented ‘University Internationalization and its Impact’.

Dr. Seike raised internationalization as a challenge, and why? Tohoku commits to providing students the best quality international perspective possible, and the university believes international experience will ensure a high quality education and research,  as well as expanding their human networks.Many global issues can only be solved through international collaboration, but Japan encounters a problem of students not wanting to study abroad. In this sense, he feels Japan is falling behind.

In terms of research, he feels that Japan has stagnated, while other Asian countries are increasing their presence. The government shares a strong sense of urgency which leads them to initiate multiple globalization projects and set targets such as to include 10 universities in the top 100 in world rankings.

Dr. Seike introduced one of the government initiatives, WPI,which aims to establish world-class research institutes. Tohoku University was chosen as one of them. WPI empowered the awardees to have their own governance,which allowed competitive recruitment to assemble world-class innovative scientists that can lead from basic research to industry application.

 

Tohoku Seike"WPI established a special zone within the existing university framework.  It’s a new approach… not just the expansion of the existing system. It should be the showcase of the best research… the best of the best.”

Dr. Hirofumi Seike, Tohoku University

 

 

 

 

Professor Paul K.H. Tam, Pro-Vice-Chancellor and Vice-President (Research), University of Hong Kong (HKU) presented ‘Research Excellence and Internationalization at the University of Hong Kong – Striking the Right Mix of Metrics and Faculty Expertise’.

Professor Tam described the University of Hong Kong (HKU) as an institute of great heritage but with many structural issues that needed to be resolved – and shared the ways they overcame these challenges. It was a transformation from a predominately teaching university to a comprehensive research university.

A major motivation for institutions of Higher Education is competition, and the introduction of other universities in Hong Kong ‘awoke the giant from a deep sleep’. While the transformation was also self-motivated, there were important external factors which came from the government; the establishment of The Research Grants Council followed by the introduction of the Research Assessment Exercise.The previous funding system allocated 75% of the money into a recurrent grant that supported continuity and sustainability. Distribution was based on student places (75%) and only 25% was related to research. The government changed the system to drive major change, and allocation is now judged using performance indicators.

What does it mean to be a ‘world-class’ university? HKU agreed upon having a tradition of research excellence with internationally competitive staff and more importantly, a strong culture that will attract students globally as the choice of institution for those who want a career in research. HKU has been very successful despite there being 8 institutions. They are responsible for over half of large program grants, and have the top position in every assessment indicator, be it grant amount or research output.

Talking about university transformation, Professor Tam spoke about the guiding principles of providing an enabling environment for researchers and respecting academic freedom by keeping a bottom up approach which is top facilitated.

HKU_TamWhat I consider the greatest asset of the university is human resources, the talents. It is the role of the university leaders to provide an enabling environment for the researchers – this is my guiding principle. The other principle I have is that we have enjoyed the principle of academic freedom and we respect that and continue to cherish it. To respect that means the approach is bottom up. There can be a lot of debate between top down and bottom up approaches. We have kept a bottom up approach but introduced a top facilitated bottom up approach.”

Professor Paul Tam, HKU

 

Dr. John Green, Life Fellow, Queens’ College, University of Cambridge, presented ‘Evidence Based decision making in academic research’.

First Dr. Green created context by talking about increasing interdisciplinarity and internationalization in science, and the increasing demand for evidence to evaluate outcomes and justify funding expenditure. He spoke about how Imperial College evaluates interdisciplinary institutes that work cross-departmentally every 3 or 4 years, and on what basis they close institutes down.

Dr. Green touched on the potential of getting lost in the avalanche of data available today and the importance of extracting meaningful information from the data. It is important to understand where the strength of an institution lies, where to focus its strategy, and with whom to collaborate. He explained the importance of due diligence about specific partnerships, and the need to find ways to connect researchers and facilitate the mobility that will create collaboration. He stated firmly that these things do not happen bottom up; there is a need to facilitate them, with evidence to inform the facilitation. At Imperial, he created and used a system that presents research performance dashboards at the departmental level.

 

“The world has changed now, and if only some of the systems which are available to you now were available to me then, I would not have re-invented the wheel… Pure has now come into the market, which does exactly what we were trying to do, but it does it better. It is a system which sits on top of your internal IT systems and harvests information from it and provides you with dashboards, and that is exactly the concept that I have been talking about.” (see Figure 4)

Dr. John Green, Life Fellow, Queens’ College, University of Cambridge

 

 

Rep Back Fig 4

Figure 4 Example of a dashboard in Pure which shows research outputs, journals and activities for a university

Having spoken about the metrics, he pointed out the need to standardize the definitions and methodologies involved in generating metrics, because everyone tends to do it differently, which means that the results cannot accurately be compared. How can we compare the number of researchers if each university defines researcher counts differently? The Snowball Metrics project, a non-commercial initiative in which Dr. Green and Elsevier are involved, resulted in agreed-upon methodologies for these metrics that are endorsed by a group of distinguished UK universities.

“I don’t want to give the impression that metrics are everything. Metrics are one of a number of ways to come to a judgment… and help you come to a view of something. In no sense are they a way to navigate your car. A Satellite Navigation system is something that tells you what the best route is and how you might change the route if there are traffic jams… But you have to decide which is the best route for you, based on that information and other information too (for example where you want to go for lunch, do you want the prettiest route or the autoroute). That is why we need other measures such as peer review to complement metrics.”

Dr. John Green, Life Fellow, Queens’ College, University of Cambridge

 

Dr. William Gunn, Head of Academic Outreach, Mendeley, presented ‘Innovation – Scientific and technical foundation development for altmetrics in the US’.

Dr. Gunn spoke about totally new metrics, which may complement, or arguably even be an alternative to, traditional metrics – hence the term ‘altmetrics’. He suggests that new forms of scholarship need new metrics. Altmetrics accrue faster than citation data, and they use research and social media data that lie entirely outside traditional research metrics. Altmetrics for impact include article usage, peer review via post-publication commentary services, and social media activity such as discussions of blog posts, measuring the attention a work has received and the impact it has had on others. He drives home the point that there are many ways to look at the overall influence of a paper or group of papers, and that citations are just a tiny fraction of that.

“There are 125 times more downloads of papers (than citations) and a universe of social activities that are being aggregated…, so there is a lot more data out there that we can gather, work with and use to understand the impact our work is having.”

Dr. William Gunn, Head of Academic Outreach, Mendeley

Nonetheless, altmetrics are not without challenges regarding transparency and consistency. Several services provide altmetrics, such as Plum Analytics, Impact Story, and Altmetric, and querying them all for one specific DOI returns different metric values, which raises the question of which value is correct. There are also problems with identity attribution if researchers use a fake identity, and altmetrics can be gamed, although gaming is difficult if one makes use of many sources and many different metrics.
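As a small illustration of the consistency problem, the sketch below queries a single provider for one DOI. It assumes Altmetric’s public DOI endpoint (api.altmetric.com/v1/doi/) and its *_count response fields, which should be verified against the current API documentation; the other providers require API keys and are omitted. This is a minimal sketch, not any speaker’s method.

```python
import json
import urllib.error
import urllib.request

def altmetric_counts(doi):
    """Fetch attention counts for one DOI from Altmetric's public API.

    The endpoint and the *_count field names are assumptions based on
    Altmetric's public documentation; verify before relying on them.
    """
    url = f"https://api.altmetric.com/v1/doi/{doi}"
    try:
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError:
        return None  # DOI not tracked by this service
    return {k: v for k, v in data.items() if k.endswith("_count")}

# Running the same comparison against Plum Analytics and Impact Story
# (which require API keys, so they are omitted here) would expose the
# discrepancies between services discussed above.
print(altmetric_counts("10.1371/journal.pone.0000308"))  # an arbitrary DOI
```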

Looking back, the conference was fascinating in that the speakers and participants alike were passionate and often candid in sharing their views and experiences, resulting in lively discussions, which we all could learn from.


 

 


Tracking scientific development and collaborations – The case of 25 Asian countries

Henk Moed and Gali Halevi explain how a country’s current stage of scientific development can be determined through the use of a bibliometric model, and illustrate its use by examining 25 countries in Asia.

Read more >


Bibliometric indicators based on publications in international, peer reviewed journals can be used to characterize the current stage of a country’s scientific development. A simple bibliometric model for the development of a country’s national research system distinguishes four phases: (1) pre-development; (2) building-up; (3) consolidation and expansion; and (4) internationalization (see Figures 1, 2). The model assumes that during the various phases of a country’s scientific development, the number of published articles in peer reviewed journals shows a more or less continuous increase, although the rate of increase may vary substantially over the years. But a bibliometric indicator measuring the share of a country’s internationally co-authored articles discriminates between the various phases of development; a simple classification sketch follows the numbered list below.

  1. Pre-development phase:
    In this phase the level of research activity in a country is low. Research oriented towards the international research front is carried out by only a limited number of researchers. There is no clear research policy or structural funding of research. Activities result from initiatives by a limited number of active researchers, who may in some years seek collaborations with foreign colleagues. The publication output is low. From a statistical point of view, indicators are based on low numbers and may show large annual fluctuations. This is especially true for the percentage of internationally co-authored articles.
  2. Building-up phase:
    Researchers in the country start establishing projects with foreign research teams, often funded by foreign or international agencies and focusing on a particular topic. They begin collaborating with colleagues from more developed countries. Internationally co-authored articles constitute one of the outputs. National researchers enter international scientific networks. The role of the country’s authors in the collaboration is secondary rather than primary. The percentage of internationally co-authored articles relative to a country’s total publication output tends to increase, but the increase is often not statistically significant, because the absolute number of annual publications from the country is low and the internationally co-authored papers may be concentrated in particular years.
  3. Consolidation and expansion:
    The country develops its own scientific infrastructure. The amount of funds available for research increases. The national research capacity increases. Nationally oriented journals internationalize and have a larger probability of being indexed in Scopus and other international scientific literature databases. More and more research papers are based on research carried out by national institutions only. The number of internationally co-authored papers increases as well, but at a rate that is lower than that of the country’s total output; hence, the percentage of internationally co-authored papers declines.
  4. Internationalization:
    National research capacity expands further; research institutions in the country start functioning as fully fledged partners and more and more often take the lead in international collaborations. Overall impact increases; the country’s researchers influence the global research agenda; the country increasingly becomes one of the world leaders, at least in specific research domains. Both the number of publications and the share of internationally co-authored articles increase.
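As a concrete reading of the model, the sketch below classifies a country’s phase from two time series – annual publication counts and the share of internationally co-authored articles. The numeric thresholds are illustrative assumptions; the model itself specifies only the direction and volatility of the trends.

```python
import statistics

def development_phase(pub_counts, intl_share):
    """Classify a country's phase under the four-phase model above.

    pub_counts: annual publication counts, in chronological order.
    intl_share: annual shares (0-1) of internationally co-authored articles.
    The numeric thresholds are illustrative assumptions only.
    """
    def slope(series):  # sign of a crude least-squares trend
        n = len(series)
        xbar, ybar = (n - 1) / 2, statistics.fmean(series)
        return sum((i - xbar) * (y - ybar) for i, y in enumerate(series))

    low_output = statistics.fmean(pub_counts) < 1000    # assumption
    volatile = statistics.pstdev(intl_share) > 0.15     # assumption

    if low_output and volatile:
        return "pre-development"
    if low_output and slope(intl_share) > 0:
        return "building-up"
    if slope(intl_share) < 0:
        return "consolidation and expansion"
    return "internationalization"

# A country with growing output and a falling share of international
# co-authorships lands in phase 3:
print(development_phase([800, 1500, 2600, 4000], [0.45, 0.40, 0.33, 0.28]))
```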

Asia Figure 1

Figure 1 - Bibliometric model for capturing the state of scientific development

 

Asia Figure 2
Figure 2 - Schematic overview of trends in bibliometric indicators per development phase.
Source: UNESCO report “Higher Education in Asia – Expanding Out, Expanding Up”, p. 80.

  

Concept | Main questions | Indicators; classifications
Publication output | How many articles did a country publish and how did this number develop over time? | The number of research articles, reviews and conference papers published in journals and conference proceedings indexed in Scopus during 1997-2012
Disciplinary specialization | In which subject fields does a country specialize? | A subject classification into 26 main disciplines available in Scopus
Distribution by institutional sector | How important are the various institutional sectors in research? | A classification into 4 institutional sectors: Higher Education; Government; Private; Health
Global and regional collaboration | How frequently do Asian countries collaborate with each other and with countries outside the region? | The number of articles co-authored by researchers from different countries; the percentage share of a country’s articles co-authored with researchers working abroad
State of scientific development | In what phase of its scientific development is a country? | A simple model taking into account the trend in a country’s annual number of publications and the percentage share of internationally co-authored articles

Table 1 - Main bibliometric indicators and classifications used in this study

This study analyzed data on scientific publications from 25 Asian countries (see Table 2), extracted from Scopus, a multidisciplinary database covering publications in 20,000 peer-reviewed, mostly international journals. Data on all publications indexed in the Scopus database were organized by country and sorted into three adjacent time periods: (a) 1997-2001, (b) 2002-2007 and (c) 2008-2012. This yielded approximately 6.5 million records for the region as a whole over these three time periods. These publication records were sorted into the 26 research disciplines implemented in Scopus. Publications were coded to capture co-authorship between authors from countries in the study set and authors from countries outside it. Publications were further categorized by the authors’ type of institutional affiliation, i.e., whether they were affiliated with a higher education institution, a government body, a private sector organization, or a health sector employer. Table 1 lists the most important indicators and document classifications applied in the following analysis.
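As an illustration of this kind of record processing, the sketch below bins hypothetical publication records into the three study periods and flags international co-authorship with pandas; all column names and values are invented for the example, not taken from the study’s data.

```python
import pandas as pd

# Hypothetical publication records; the real study worked from Scopus
# exports, and all column names here are invented for illustration.
records = pd.DataFrame({
    "year": [1999, 2004, 2010, 2011],
    "country": ["Japan", "China", "India", "Vietnam"],
    "n_foreign_coauthor_countries": [0, 2, 1, 0],
})

# Bin each record into the three adjacent periods used in the study.
records["period"] = pd.cut(
    records["year"],
    bins=[1996, 2001, 2007, 2012],  # yields 1997-2001, 2002-2007, 2008-2012
    labels=["1997-2001", "2002-2007", "2008-2012"],
)

# Flag internationally co-authored papers for the collaboration indicator.
records["intl_coauthored"] = records["n_foreign_coauthor_countries"] > 0

# Share of internationally co-authored papers per country and period.
print(records.groupby(["country", "period"], observed=True)["intl_coauthored"].mean())
```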

 

Countries

Afghanistan, Bangladesh, Bhutan, Brunei, Cambodia, China, Hong Kong, India, Indonesia, Iran, Japan, Laos, Macao, Malaysia, Maldives, Myanmar, Nepal, North Korea, Pakistan, Philippines, Singapore, South Korea, Sri Lanka, Thailand, Vietnam

Table 2 - List of countries included in the analysis


Trends in scientific output 1997-2012

Figure 3 shows that there are substantial differences, of up to 400%, among countries in their average number of publications per year. Among countries with more than 1,000 papers per year, the largest output is from China. However, Iran, Malaysia and Pakistan stand out with compound annual growth rates above 15 per cent.

Asia Figure 3

Figure 3 - Number and annual growth rate of publications indexed in Scopus 1997-2012
Note: The horizontal axis gives the average number of publications indexed in Scopus per year over the time period 1997-2012 on a logarithmic scale. The vertical axis gives the compound annual growth rate (CAGR) in the number of publications over the same time period. If $P_1$ and $P_2$ denote the number of publications from a country in 1997 and 2012, respectively, CAGR is defined as $(P_2/P_1)^{1/15} - 1$.
Source: UNESCO report “Higher Education in Asia – Expanding Out, Expanding Up”, p. 85.
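As a worked example of the definition in the note above (with 15 years between 1997 and 2012), using invented publication counts:

```python
def cagr(p1, p2, years=15):
    """Compound annual growth rate over 'years' years (1997-2012 spans 15):
    (p2 / p1) ** (1 / years) - 1, per the definition in the note above."""
    return (p2 / p1) ** (1 / years) - 1

# A hypothetical country growing from 500 papers in 1997 to 4,000 in 2012:
print(f"{cagr(500, 4000):.1%}")  # -> 14.9% per year
```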

 

Scientific output in relation to PhD student enrollment and FTE researchers

The data below compare two bibliometric indicators – the number of published articles and the number of publishing authors in a year – with two non-bibliometric indicators, namely the number of FTE researchers in a country and the number of doctoral degrees awarded by that country. Figure 4 indicates that the number of publications generated within a country increases in almost linear fashion with the number of doctoral degrees. This suggests that doctoral students play a key role in the production of a country’s publication output in international, Scopus indexed journals.

Asia Figure 4
Figure 4 - Number of publications indexed in Scopus in relation to doctoral enrollment by country (UNESCO, 2006). Note: Publication counts relate to the average number of publications from a country per year during 1997-2012, and the number of doctoral degrees to the most recent year for which data are available (mostly 2011). The dashed line represents the best fit of a power law relationship of the type $y = c x^{\alpha}$. Plotted on a double logarithmic scale, this functional relationship yields a straight line. The exponent $\alpha$ is called the scaling parameter, and in a double-log plot it is represented by the slope of the straight line. If $\alpha = 1$, y increases linearly with x; if $\alpha > 1$, y increases superlinearly with x, indicative of a cumulative advantage; if $\alpha < 1$, y increases sublinearly with x, indicative of a cumulative disadvantage. The $R^2$ value is a measure of the goodness of fit of the power law relationship; it ranges between 0 (no fit) and 1 (perfect fit).
Source: UNESCO report “Higher Education in Asia – Expanding Out, Expanding Up”, p. 82.
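The power-law fit described in the figure note can be reproduced by ordinary least squares on the log-log scale. The sketch below does this with NumPy on invented data; it illustrates the method, not the study’s actual fitting code.

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = c * x**alpha by least squares on the log-log scale, as in
    the note above; returns (c, alpha, r_squared of the log-log fit)."""
    lx, ly = np.log(x), np.log(y)
    alpha, logc = np.polyfit(lx, ly, 1)
    residuals = ly - (logc + alpha * lx)
    r_squared = 1 - residuals.var() / ly.var()
    return np.exp(logc), alpha, r_squared

# Invented (doctoral degrees, publications) pairs for four countries:
x = np.array([200.0, 1500.0, 9000.0, 30000.0])
y = np.array([300.0, 2000.0, 11000.0, 40000.0])
c, alpha, r2 = fit_power_law(x, y)
print(f"alpha = {alpha:.2f}, R^2 = {r2:.3f}")  # alpha near 1 => linear scaling
```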

Likewise, the number of authors from a country publishing research articles (at least in Scopus) increases with the number of FTE researchers (Figure 5). Further, research intensive countries, i.e., countries that have a large number of FTE researchers per inhabitant, tend to have a higher share of researchers in the business sector than do less research intensive countries. Since researchers in the business sector tend to publish less in international journals, this factor may explain why the increase in the number of publishing authors has a somewhat weaker relationship to FTE researchers in the country.

Asia Figure 5

Figure 5 - The relationship between the number of FTE researchers (UNESCO, 2006) in a country and the number of authors of publications indexed in Scopus. Note: Author counts relate to the average number of publishing authors from a country per year during 1997-2012, and FTE researcher counts to the most recent year for which data are available (mostly 2011). For the meaning of the dashed line and the parameters in the functional relationship, see the legend of Figure 4. For the full country name corresponding to a country code, see Table 2.
Source: UNESCO report “Higher Education in Asia – Expanding Out, Expanding Up”, p. 83.

International co-authorships

The trend in the percentage of internationally co-authored papers for 13 countries between 2003 and 2011 is presented in Figure 6. Three out of five high-income countries, such as China, Singapore and Japan, show a positive trend in international co-authorship. Seven out of nine middle-income countries, such as India and Indonesia, show a significant decline in the percentage of internationally co-authored articles, and none shows a significant positive trend. A declining trend could be a sign of the consolidation and expansion phase of scientific development, which is apparently dominant in middle-income countries.

Asia Figure 6
Figure 6 - Trends in the percentage of internationally co-authored articles in selected countries, 2003-2011.
Source: UNESCO report “Higher Education in Asia – Expanding Out, Expanding Up”, p. 84.

 

Trends in scientific collaborations

Figure 7 shows that there are tight co-authorship clusters within the region. Japan has a central role in the collaborative co-authorship scheme of the region: its research focus on Medicine, Biochemistry, Physics and Engineering enables it to act as a central hub of collaborations, bringing together research from different areas in the region. In addition, three clusters of research collaboration have formed within the region. The first cluster includes China, Hong Kong (Special Administrative Region of China), Singapore and Macao (SAR of China), which constitute the East Asian region. As can be seen, China also serves as a link between Hong Kong (SAR of China), Macao (SAR of China) and Singapore and other members of the region such as Japan, India and Thailand. The China / Hong Kong (SAR of China) / Singapore / Macao (SAR of China) cluster focuses for the most part on Engineering, Physics and Astronomy, and Computer Science. The second cluster, which includes India, Malaysia, Bangladesh, Pakistan and Afghanistan, constitutes the South Asian region and focuses on Medicine, Agriculture, Chemistry and Engineering. The third cluster, with Thailand at its center, closely connects Indonesia, Sri Lanka, Brunei, Nepal, Laos, Cambodia, Vietnam and Myanmar, and constitutes the Southeast Asian region. This cluster focuses mostly on Agriculture, Medicine and Earth Sciences. Finally, the map shows that the Republic of Korea and China play an essential role in bridging between the Democratic People's Republic of Korea and other countries.

Asia Figure 7
Figure 7 - Regional scientific collaborations.
Source: UNESCO report “Higher Education in Asia – Expanding Out, Expanding Up”, p. 88.

 

International scientific collaboration

Figure 8 shows the international scientific collaborations between Asian countries and the global community. There are four distinct “pockets” of international collaborations in the region. The United States, Canada, Germany, Spain and Italy form close collaborative relations with China, India, the Republic of Korea and Singapore. Secondly, the United Kingdom has a major role in connecting other European countries, such as France, Belgium and Switzerland, with Southeast Asian countries that display lower scientific output. The United Kingdom also serves as a bridge between Laos, Cambodia, Myanmar, Nepal, Bangladesh, Bhutan and others and the European scientific community. Australia forms a third circle of collaborations, bridging among Indonesia, the Philippines, Malaysia, Sri Lanka and Brunei. The map also shows that the Russian Federation is somewhat of an outlier, forming individual collaborations with the Republic of Korea, Japan, India, and Pakistan.

Asia Figure 8
Figure 8 - International Scientific collaborations between Asian countries and the global community.
Source: UNESCO report “Higher Education in Asia – Expanding Out, Expanding Up”, p. 89.

 

Conclusions

  1. Scientific output:
    The region has seen a significant increase in its scientific output from 1997 to 2012. There are, however, large differences between individual countries within the region. China has a leading role in scientific output and growth. However, attention should be given to countries such as Malaysia and Pakistan, which have compound annual growth rates above 15 percent over this period.
  2. Regional and international collaborations:
    The most evident progress seen through the bibliometric analysis is both the increasing scientific collaboration between the countries of the region and the significant growth of international collaborations between the countries of the region and the international scientific community. The regional co-authorship networks show that smaller countries entering the scientific arena, such as Nepal, Bhutan and Sri Lanka, increasingly collaborate with larger countries in the region, thus gaining expertise and increasing their output. These countries also use their collaborators as a bridge to the international scientific community. Larger countries such as China, Japan and Thailand show increased international collaborative ties in the form of co-authorships and function as hubs for smaller countries in their international scientific endeavors.

 

Disclaimer: This article is an extract of a study conducted for the latest UNESCO report “Higher Education in Asia: Expanding Out, Expanding Up”. All figures are property of UNESCO Institute for Statistics (UIS), United Nations University-International Institute for Software Technology (UNU-IIST), Elsevier Inc. and UNESCO International Institute for Educational Planning (IIEP) (2014). Higher Education in Asia: Expanding Out, Expanding Up. ISBN 978-92-9189-147-4, licensed under CC-BY-SA 3.0 IGO. Montreal: UIS. http://www.uis.unesco.org.

 


The black eagle soars: Germany’s bibliometric trends 2004-2013

In this piece, Stephanie Oeben and Sarah Huggett take a bibliometric look at Germany’s research performance during the past decade, and discuss trends in German publication output and its citation impact.

Read more >


Since the scientific revolution, Germany has been a major contender in Science and Technology, and throughout the 19th Century, German was a preponderant language in scholarly communications around the globe. Although two World Wars took their toll on Germany’s scientific progress, in the modern era the country is still the home or birthplace of many Nobel Laureates. In today’s world, Germany remains a major scientific hub, producing over 6% of the world’s scholarly output in 2012, and German scholars are particularly active in disciplines such as Mathematics and Physical Sciences (1). In recent years, the country has seen a fairly steady rise in internal R&D expenditure, approaching 80 billion Euros in 2012 (2). Germany exceeded 10% of the world’s citations in 2012, leading to high relative citation impact of its research in all fields. German research also leads to technological innovations – Germany is second only to the USA in patent citation share (1). In this piece Research Trends takes a bibliometric look at trends in German research during the past decade.

 

Germany now

In the past five years (2009-2013), 497,212 Germany-based authors published 726,090 papers which were cited 5,045,807 times, resulting in a Field Weighted Citation Impact (FWCI) of 1.43. The country is highly internationally collaborative, with 48.3% of 2013 German scholarly papers resulting from international collaborations (source: SciVal).

MEASURING IMPACT: CITATION WINDOWS AND FIELD-WEIGHTING

Citations accrue to published articles over time, as articles are first read and subsequently cited by other authors in their own published articles. Citation practices, such as the number, type and age of articles cited in the reference list, may also differ by research field. As such, in comparative assessments of research outputs, citations must be counted over consistent time windows, and publication and field-specific differences in citation frequencies must be accounted for. Field-weighted citation impact is an indicator of mean citation impact, and compares the actual number of citations received by an article with the expected number of citations for articles of the same document type (article, review or conference proceeding paper), publication year and subject field. Where the article is classified in two or more subject fields, the harmonic mean of the actual and expected citation rates is used. The indicator is therefore always defined with reference to a global baseline of 1.0 and intrinsically accounts for differences in citation accrual over time, differences in citation rates for different document types (reviews typically attract more citations than research articles, for example) as well as subject-specific differences in citation frequencies overall and over time and document types. It is one of the most sophisticated indicators in the modern bibliometric toolkit. (1)
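To make the mechanics concrete, here is a minimal sketch of the calculation described above. The expected citation counts would in practice come from the underlying database; the multi-field averaging shown (harmonic mean of the expected rates) is one reading of the description, and SciVal’s exact implementation may differ.

```python
from statistics import harmonic_mean

def fwci(actual_citations, expected_by_field):
    """Field-weighted citation impact for a single article.

    expected_by_field maps each subject field the article is classified
    in to the expected citation count for that field, publication year
    and document type (values a real system derives from the database).
    Multi-field articles combine the expected rates with a harmonic
    mean -- one reading of the description above; SciVal's exact
    averaging may differ.
    """
    expected = list(expected_by_field.values())
    baseline = harmonic_mean(expected) if len(expected) > 1 else expected[0]
    return actual_citations / baseline

# An article with 12 citations in two fields whose expected counts are
# 8 and 10; a result above 1.0 means above the world average.
print(round(fwci(12, {"Physics": 8.0, "Materials Science": 10.0}), 2))  # 1.35
```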

 

Germany 2004-2013

Germany has seen increases in international collaboration over time, as have several of its European neighbors (see Figure 1). The UK in particular has seen a higher increase rate than Germany in the past decade: while the UK was less internationally collaborative than Germany in 2004, by 2013, nearly half of its scholarly output (49.7%) was the result of international collaboration. That same year, more than half of the scholarly outputs of France and the Netherlands were internationally collaborative. Meanwhile, Spain and Italy show parallel increasing trends but lower percentages of international collaboration over the whole period, whilst Poland, the least internationally collaborative country selected, shows overall decreases in international collaboration over time, amounting to less than a third of its 2013 output.

Germany fig1

Figure 1 - Germany and selected European countries’ 2004-2013 international collaboration percentages. Source: SciVal (Scopus data)

Germany’s scholarly output has grown to reach 137,865 papers in 2013. Among its selected European neighbors it is second only to the UK, which published about 10,000 more papers that same year. Other selected European countries also see growth over time, but their scholarly outputs remain significantly below that of Germany and the UK (see Figure 2).

Germany fig2

Figure 2 - Germany and selected European countries’ 2004-2013 scholarly output. Source: SciVal (Scopus data). (Note: Owing to usual indexing lags for some recently-published content at the time of data extraction (mid 2014), the 2013 data point may not reflect a complete view of the final 2013 publication outputs of each country shown).

Some of the German outputs show high and increasing citability; for instance, German publications that are amongst the top 1% cited papers rose strongly over time, to reach nearly 2.4% of the country’s scholarly output in 2013. For comparison, 2.5% of the UK’s scholarly output was in the top 1% cited papers in 2013, and a significantly higher 3.1% of the Netherlands’ (see Figure 3). Germany and the UK have higher absolute numbers of papers in the top 1% cited papers than the Netherlands, but normalizing for output size reveals that a higher proportion of the Netherlands’ scholarly output is in the top 1% cited papers.

Germany fig3

Figure 3 - Proportion of 2004-2013 German and selected European countries’ publications in the top 1% cited papers. Source: SciVal (Scopus data).

Germany’s growth is not limited to the top cited outputs either, as demonstrated by the rising trend of Germany’s FWCI, from an already high 1.27 in 2004 to an impressive 1.49 in 2013. The Netherlands and the UK have higher FWCIs across the whole decade, and so does Italy in 2013 (1.60). Although in 2004 Italy’s FWCI was lower than Germany’s, it has seen strong increases over the past 10 years, catching up with Germany in 2010 and 2011 before clearly overtaking it in 2012 and 2013, when it even marginally surpassed the UK’s (see Figure 4).

 

Germany fig4
Figure 4 - Germany and selected European countries’ 2004-2013 FWCI. Source: SciVal (Scopus data).

Finally, looking at the language diversity of scholarly publications, research has shown that non-English outputs tend to have lower citation impact (3). Taken together with the steadily decreasing proportion of German research published in German (see Figure 5), this may help explain some of the increase observed in FWCI.

 

Germany fig5

Figure 5 - Proportion of German-language German output 2004-2013. Source: Scopus.

 

Conclusion

Germany’s academic achievements are long-standing, and despite some historical turbulence, Germany has maintained its status as one of the main scientific powers in Europe and on the global scene. Compared to selected European neighbors, Germany remains a solid contender with a robust performance, particularly in terms of output, even though in the last decade it has been overtaken by the UK in terms of international collaboration and by Italy in terms of FWCI. Recent trends such as increases in funding and output bode well for the country’s bibliometric future, while boosting international collaboration could help further improve the nation’s citation impact (4).

 

References

(1) BIS report. http://www.elsevier.com/__data/assets/pdf_file/0018/171711/Elsevier_BIS_2013_web_Dec2013-2.pdf
(2) Federal Statistical Office. https://www.destatis.de/DE/ZahlenFakten/GesellschaftStaat/BildungForschungKultur/ForschungEntwicklung/Tabellen/ForschungEntwicklungSektoren.html
(3) Van Leeuwen, T.N., Moed, H.F., Tijssen, R.J.W., Visser, M.S., van Raan, A.F.J. (2001) “Language biases in the coverage of the Science Citation Index and its consequences for international comparisons of national research performance”, Scientometrics, Vol. 51, No. 1, pp. 335-346.
(4) Science Europe & Elsevier (2013), “Comparative Benchmarking of European and US Research Collaboration and Researcher Mobility”, retrieved from http://www.scienceeurope.org/uploads/Public documents and speeches/SE and Elsevier Report Final.pdf; The Royal Society (2011), “Knowledge, networks and nations: Global scientific collaboration in the 21st century”, (J. Wilson, L. Clarke, N. Day, T. Elliot, H. Harden-Davies, T. McBride, … R. Zaman, Eds.) (p. 113). London: The Royal Society. Retrieved from http://royalsociety.org/policy/projects/knowledge-networks-nations/report/

 

 

 


A quick look at references to research data repositories

In this contribution, Sarah Huggett investigates whether there is a way to estimate the visibility of research data in the published literature, and presents some initial findings.

Read more >


Introduction

While published papers are one of the most visible outputs of the research process, in a way they are only the tip of the iceberg: the research workflow is composed of much more than meets the eye of the external observer (see Figure 1).

Data fig 1

Figure 1 - Researcher Workflow. Source: Elsevier’s Response to HEFCE’s call for evidence: independent review of the role of metrics in research assessment

 

Most scholarly research uses data in one guise or another, and recently there have been calls for data to become a more systematically visible research output rather than remain a background variable of academic endeavors. For instance, the Force11 community movement, which aims to support the advancement of scholarly communications, has issued eight Data Citation Principles that stress the importance of data being “considered legitimate, citable products of research” (1). These principles highlight key citation issues, such as access, unique identification, interoperability and flexibility. Research Trends’ curiosity was piqued: could there be a way to estimate the visibility of research data in the published literature?

 

Methodology

Researchers may make data available in data repositories, and authors may subsequently reference these data in their scholarly outputs. So how could these data citations be analyzed?

One of the challenges mentioned by Force11 is unique identification: researchers may refer to the datasets they cite by various names; however, the web addresses of the repositories in which the data reside can be used as reliable identifiers. So first, a list of data repositories was needed; this was extracted in June 2014 from databib (a website describing itself as “a searchable catalog / registry / directory bibliography of research data repositories”). This yielded 971 data repositories (see examples in the text box below) spanning various fields, countries, and sizes. Notably, nearly half of the listed repositories originate from the USA (see Figure 2).

Data fig2

Figure 2 - Geographical distribution of data repositories. Source: databib

 

Examples of data repositories from the databib list:

1000 Genomes (Thousand Genomes) (A deep catalog of human genetic variation)
DataONE (Data Observation Network for Earth)
Dryad
Flybase
Freebase
Marine Geoscience Data System
Ontario Data Documentation, Extraction Service and Infrastructure (ODESI)
Sloan Digital Sky Survey
TreeBASE
World Data Center
WormBase

 

Second, papers citing these repositories’ websites needed to be identified. The Scopus advanced search function allows searching the reference fields of papers for websites; this was done for all URLs on the databib list, truncating the addresses and using wildcards as appropriate. The records of the papers identified as containing the URLs in their reference lists were then extracted.
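As an illustration of this step, the short Python sketch below (not the script actually used for this analysis) shows how a list of repository URLs might be turned into queries against Scopus’s advanced-search reference field, REF(); the truncation and wildcard rules shown are illustrative assumptions rather than the exact ones applied.

# Hypothetical sketch: build Scopus advanced-search queries from repository
# URLs. REF() searches the reference fields; "*" is the Scopus wildcard.
# The truncation below (keep the host, drop "www.") is an assumption.
from urllib.parse import urlparse

repository_urls = [
    "http://www.1000genomes.org/",
    "http://datadryad.org/",
    "http://www.treebase.org/",
]

def to_query(url: str) -> str:
    host = urlparse(url).netloc or url   # keep only the host part
    host = host.removeprefix("www.")     # truncate the address
    return f'REF("{host}*")'             # wildcard to catch sub-pages

query = " OR ".join(to_query(u) for u in repository_urls)
print(query)
# REF("1000genomes.org*") OR REF("datadryad.org*") OR REF("treebase.org*")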
There are two main potential caveats to this approach:

  • If an author fails to include the website in the references, or mentions the website in the full text but not the references, their papers will not be retrieved by this search method.
  • Some of the websites listed by databib are more than just data repositories. If a researcher references the website with a purpose other than data citation, then their paper will still be retrieved by this search method.

 

Results

This analysis returned 178,909 documents published between 1996 and 2014, with a striking 19% compound annual growth rate (CAGR) between 2009 and 2013, leading to over 30,000 papers in 2013. Most of the documents are articles (113,618, with a 24% 2009-2013 CAGR), conference papers (37,410, with a 7% 2009-2013 CAGR), and reviews (19,334, with a 16% 2009-2013 CAGR) (see Figure 3).
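As a quick consistency check on these figures (a back-of-the-envelope sketch, not part of the original analysis), a 19% CAGR over the four year-steps from 2009 to 2013 implies that the annual count roughly doubled, from an implied ~15,000 documents in 2009 to the reported 30,000+ in 2013:

def cagr(start: float, end: float, years: int) -> float:
    # compound annual growth rate over `years` year-steps
    return (end / start) ** (1 / years) - 1

papers_2013 = 30_000                  # "over 30,000 papers in 2013"
implied_2009 = papers_2013 / 1.19**4  # ~14,960; implied, not reported
print(f"{implied_2009:,.0f}")         # 14,960
print(f"{cagr(implied_2009, papers_2013, 4):.0%}")  # 19%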

Data fig3

Figure 3 - Documents citing databib-listed websites, 1996-2013. Source: Scopus

 

These documents received 1,879,964 citations, and a word cloud of 2013 papers’ document titles (see Figure 4) shows the preponderance of health-related topics.

Data fig 4

Figure 4 - Word cloud of the titles of 2013 documents citing databib-listed websites. Source: Scopus and Tagxedo

 

Conclusion

The visibility of research data, as estimated by references to data repositories in the published literature, has seen strong growth in recent years, and the topics covered by these papers are predominantly health-related. Research data is currently a topical issue, with initiatives under way to integrate data more fully into scholarly communications, the more traditional outputs of research (1). There are still challenges ahead, in particular regarding unique identification and metadata integration, which would allow more rigorous and accurate bibliometric analyses. Nevertheless, with current computational storage capacities and increasing demand from the research community, the future of research data appears full of promise.

 

References

(1)    Force 11 - “Joint Declaration of Data Citation Principles”, accessed at https://www.force11.org/datacitation in August 2014.

Publish or perish? The rise of the fractional author…

Andrew Plume and Daphne van Weijen investigate how the pressure researchers feel to publish their work has affected co-authorship patterns over the past 10 years. Are researchers publishing more unique articles or co-authoring more articles?

Read more >


“Publish or perish” is a common phrase used to describe the pressure researchers feel to publish their research findings in order to stay relevant and be successful within the academic community. It’s been around a very long time, although the origins of the phrase are somewhat unclear. Some researchers attribute the phrase to Kimball C. Atwood III, who is said to have coined it in 1950 (1, 2). A 1996 article by Eugene Garfield (3) traces the phrase back to at least 1942, while according to Wikipedia (4) the term was used even earlier, in a 1932 non-academic book by Harold Jefferson Coolidge (5). The phenomenon has become a focus of academic research itself: a search for the phrase in Scopus retrieved 305 documents published on the topic from 1962 to date. On average, more than 20 articles per year were published on the topic over the past 5 years (2009-2013), with 37 articles published in 2013 alone. Whatever its origins, it seems clear that researchers experience this pressure on an increasing scale.

One common belief is that as a result of the rise of the “publish or perish” culture, and in order to remain successful in academia, each researcher is publishing more and more articles every year. But is this true? Are researchers publishing more unique articles or co-authoring more articles? One of the earliest studies in our literature search that tried to answer this question, by F.P. De Villiers, was published in 1984 and focused on changes in authorship in the South African Medical Journal from 1971 to 1982 (6). Results of the study indicated that:

“the mean number of authors per article increased from 1.77 in 1971 to 2.35 in 1982, while the proportion of articles with only 1 author decreased from 60.8% to 40.8%. Possible reasons for this are mentioned, of which the pressure to publish may not be the least.” (6).

Although this sounds intuitively plausible, these results were restricted to articles published in a single journal, and in only one research area, about 30 to 40 years ago. Since then, we’ve seen an increase in papers authored by an extremely large number of researchers, most notably the ATLAS collaboration papers published in 2008 (2,926 authors) (7) and 2012 (3,171 authors) (8), and a Nature article on the initial sequencing and analysis of the human genome by the International Human Genome Sequencing Consortium, with about 2,900 authors, published in 2001 (9). But the question remains how researchers are currently dealing with the increased pressure to publish. In other words, are individual researchers actually writing more articles every year, or are there just more authors writing more collaboratively? To answer this question we collected trend data from Scopus for 2003-2013 and examined different characteristics of authorship patterns over time; the data simply counted the number of articles (articles, reviews and conference papers) published each year and the counts of authorships and unique author names associated with these. Here we use the term ‘authorship’ to denote the occurrence of an individual on an article, while a ‘unique author’ is an individual who has appeared on one or more articles in a given period (here a single year).
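To make these definitions concrete, here is a minimal sketch in Python, on toy data (not the actual Scopus extraction), of how authorships, unique authors, and the derived ratios relate for a single year of articles:

from itertools import chain

# toy corpus for one year: article id -> author names on its byline
articles = {
    "a1": ["Kim", "Lopez", "Singh"],
    "a2": ["Kim"],
    "a3": ["Lopez", "Okafor"],
}

authorships = sum(len(names) for names in articles.values())       # 6
unique_authors = len(set(chain.from_iterable(articles.values())))  # 4

print(authorships / unique_authors)    # authorships per unique author: 1.5
print(len(articles) / unique_authors)  # articles per unique author: 0.75
print(authorships / len(articles))     # authorships per article: 2.0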

 

Main findings

Results of our analysis show that there has been consistent growth in the number of articles published over the past decade, from 1.3 million in 2003 to 2.4 million in 2013 (see Figure 1). At the same time, the number of authorships has increased at a far greater rate, from 4.6 million in 2003 to 10 million in 2013.

Authorship fig1

Figure 1 - Growth in volume of articles published, authorships and unique authors from 2003 – 2013. Source: Scopus.

 

Over the past ten years or so, the number of authorships per unique author (2.31 in 2013) has increased while the number of articles per unique author (0.56 in 2013) has declined (see Figure 2), even as the total number of articles published per year has increased (see Figure 1). At the same time, the average number of authorships per article has risen from 3.5 in 2003 to 4.15 in 2013, which suggests that authors are collaborating and co-authoring more now than they were 10 years ago. (Meanwhile, the percentage of single-authored papers has declined from 20% in 2003 to 13% in 2013; data not shown.)
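These three ratios are tied together by a simple identity: authorships per article equals authorships per unique author divided by articles per unique author. Plugging in the 2013 values quoted above reproduces the reported average byline length to within rounding:

authorships_per_author = 2.31  # 2013 value from the text
articles_per_author = 0.56     # 2013 value from the text
print(authorships_per_author / articles_per_author)  # ~4.13 vs 4.15 reported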

Authorship fig2

Figure 2 - Authorship patterns over time (2003-2013). Source: Scopus.

 

In other words, the number of authorships per article is rising: 10 years ago, an average paper had about 3.5 authors; now it has over 4. This rise in ‘fractional authorship’ (the claiming of credit for authorship of a published article by more than one individual) is most likely driven by research collaboration, and is an efficient mechanism by which each author can increase their apparent productivity from the same underlying research contribution of 0.56 articles per unique author per year.

This means that a single author can produce a single-authored article once every two years, or a co-authored article with one other author every year. Now, with the rise of ‘fractional authorship’, or fractional contributions to papers, we’re seeing that the way in which authors use this half a paper’s capacity per year is changing. A given author may achieve this output by appearing as ninth author on 5 different papers (5 x 0.1 authorships per paper), instead of as second author on a pair of 4-author papers per year (2 x 0.25 authorships per paper).
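A quick worked check of that arithmetic (assuming, as the fractions above imply, ten authors on each of the five papers and four authors on each of the pair) shows that both patterns spend the same half an article of fractional credit per year:

route_a = 5 * (1 / 10)  # one of ten authors on 5 papers  -> 0.5
route_b = 2 * (1 / 4)   # one of four authors on 2 papers -> 0.5
assert route_a == route_b == 0.5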

These findings build on earlier observations (10), in which increases in authorships per article (a 1.9% mean annual growth rate over 1980-2002) and in authorships per unique author (1.2%) were contrasted with a decline in articles per unique author (-0.7%). In the current data the comparable rates are 1.8%, 0.9% and -0.8%, suggesting the continuation of a long-term trend stretching back not just one decade but at least three.

These findings are confirmed by research in several specialty fields, including software engineering, where the average number of authors per paper has risen on average by about 0.4 authors per decade from 1970 to 2012 (11), and pediatric surgery, which has seen a marked increase in papers authored by 6 or more authors and also in multi-national papers (12).

If each active author does not increase their fractional article output each year, what is driving the observed volume increase in research outputs globally? Here the answer is quite simple: the research workforce is growing year-on-year at a rate similar to that of article production (about 3-4% p.a.; data not shown), and so new entrants into research fields are responsible for creating the new knowledge that eventually sees publication in the peer-reviewed literature.


Conclusion

Despite opinions to the contrary, these data suggest that there has been no apparent increase in overall productivity per active author over the last decade. Rather, authors are using their authorship potential more wisely by becoming more collaborative in the way they work, which is driving an apparent inflation both in each author’s productivity and in the length of author bylines. The underlying driver of the volume increase in articles published is simply the entry of new authors into the market. That is not surprising: the total population of researchers globally continues to rise every year, and they become increasingly subject to the principles of “publish or perish”, and so the cycle continues.

 

References

(1) Research Trends (2010) Did you know…  “Publish or perish” has been worrying researchers for 60 years?  Research Trends, issue 16, March 2010.
(2) Sojka, R.E. and Mayland, H.F. (1991) Driving Science With One Eye On the Peer Review Mirror
(3) Garfield, E. (1996). “What Is The Primordial Reference For The Phrase ‘Publish Or Perish’?”, The Scientist, 10(12), 11.
(4) http://en.wikipedia.org/wiki/Publish_or_perish, accessed July 7th, 2014.
(5) Coolidge, Harold Jefferson, (1932) Archibald Cary Coolidge: Life and Letters, p. 308 (source: Wikipedia). 
(6) De Villiers, F.P. (1984) South African Medical Journal, Volume 66, Issue 23, 8 December 1984, Pages 882-883.
(7) The ATLAS Collaboration et al (2008). JINST 3 S08003 doi:10.1088/1748-0221/3/08/S08003
(8) The ATLAS Collaboration et al (2012). Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC, Physics Letters B, Volume 716, Issue 1, 17 September 2012, Pages 1–29, DOI: 10.1016/j.physletb.2012.08.020.
(9) International Human Genome Sequencing Consortium (2001). Initial sequencing and analysis of the human genome. Nature, 412, 565.
(10) Moed, H.F. (2005). Citation Analysis in Research Evaluation. Dordrecht (Netherlands): Springer. ISBN 1-4020-3713-9, 346 pp.
(11) Fernandes, J.M. (2014) Authorship trends in software engineering. Scientometrics, DOI 10.1007/s11192-014-1331-6.
(12) Pinter, A. (2014), Changing Authorship Patterns and Publishing Habits in the European Journal of Pediatric Surgery: A 10-Year Analysis, European Journal of Pediatric Surgery, [Epub ahead of print]