Welcome to Research Trends

Research Trends is an online publication providing objective, up-to-the-minute insights into scientific trends based on bibliometric analysis. Worldwide, there is a growing demand for quality research performance measurement and trend-related information by deans, faculty heads, researchers, funding bodies and ranking agencies.


Research Indicators

Standardizing research metrics and indicators – Perspectives & approaches

Lisa Colledge and Gali Halevi examine the implementation of standards in research metrics, following the STI 2014 conference, drawing on the opinions of leading researchers whom they interviewed.


Increased data availability and computational advances have led to a plethora of metrics and indicators being developed for different levels of research evaluation. Whether at the individual, program, department or institution level, numerous methodologies and indicators are offered to capture the impact of research output. These advances have also highlighted the fact that metrics must be applied appropriately, depending on the goal and subject of the evaluation, and should be used alongside qualitative inputs such as peer review.

However, this has not solved the challenge of finding core quality and validity measures to guide the current and future development of evaluative metrics and indicators. While innovation in the field of research metrics is ongoing, funders, institutions and departments are already using output metrics to measure specific elements. The field therefore faces a divide: new metrics exist, but they are often unsuitable or cannot be scaled up to the global research ecosystem. As a result, evaluators still rely on metrics that have already been recognized as unsuitable measures of individual performance, such as journal-level indicators; for lack of agreed-upon alternatives, these metrics are routinely used in inappropriate circumstances despite their shortcomings.

The need for quality and validity measures that will guide the development of research metrics and ensure that they are applied in an appropriate and fair way is at the heart of several discussions carried out via conferences and listservs, especially in the Scientometrics, Science Policy, and Research Funding communities.

One such panel discussion was held at the Science and Technology Indicators (STI) 2014 conference in Leiden. The panel focused on the need for standardization in the field of research metrics, addressing their validity, quality and appropriate use, and on ways to arrive at a consensus. The panel consisted of Dr. Lisa Colledge (Elsevier, Director of Research Metrics), Stephen Curry (Professor of Structural Biology at Imperial College London and member of the Higher Education Funding Council for England (HEFCE) steering group on the use of research metrics in performance measurement), Stefanie Haustein (University of Montreal), Jonathan Adams (Chief Scientist at Digital Science), and Diana Hicks (Georgia Institute of Technology).

The Snowball Metrics initiative (1), presented by Dr. Lisa Colledge, is an example of research universities collaborating internationally to arrive at a commonly agreed upon set of measures of research (outputs as well as other aspects of the research process). Snowball Metrics’ aim is for universities to agree on a set of metrics methodologies that give strategic insight into all of a university’s activities. These metrics should be understood by everyone in the same way, so that when universities calculate metrics using these “recipes” they all follow the same protocol (2).

Lisa emphasized that Snowball Metrics welcomes feedback from the research community, to improve the existing recipes and to expand the set of recipes available. Elsevier is involved in Snowball Metrics at the invitation of the universities who drive it, to project manage and to provide technical expertise where needed. The Snowball Metrics program has responded to the HEFCE review (3), and this initiative has significantly influenced Elsevier’s overall approach to the use of research metrics, expressed in a response to the same HEFCE review (4). The main principles of Elsevier’s manifesto are:

  1. A set of multiple metrics distributed across the entire research workflow is needed.
  2. Metrics must be available to be selected for all relevant peers.
  3. The generation and use of metrics should be automated and scalable.
  4. Quantitative information provided by metrics must be complemented by qualitative evidence to ensure the most complete and accurate input to answer a question.
  5. The combination of multiple metrics gives the most reliable quantitative input.
  6. Disciplinary and other characteristics that affect metrics, but that do not indicate different levels of performance, must be taken into account.
  7. Metrics should be carefully selected to ensure that they are appropriate to the question being asked.
  8. We cannot prevent the inappropriate or irresponsible use of metrics, but we can encourage responsible use by being transparent, and intolerant of “gaming”.
  9. Those in the research community who apply metrics in their day-to-day work, and who are themselves evaluated through their use, should ideally define the set of metrics to be used. It is highly desirable that this same community, or those empowered by the community on their behalf, maintains the metric definitions.
  10. There should be no methodological black boxes.
  11. Metric methodologies should be independent of the data sources and tools needed to generate them, and also independent of the business and access models through which the underlying data are made available.
  12. Aggregated or composite metrics should be avoided.

Dr. Ian Viney, Director of Strategic Evaluation and Impact at the Medical Research Council, supports this approach, saying that “standards, at least properly described metrics, are important if you want to have reproducibility for your analyses across different organizations and/or timescales. Evaluation of research is itself research and development – success and failure should be properly documented.” Therefore, “‘recipes’ should be available for discussion, testing and modification, and effective approaches should become accepted standards – methods that everyone can apply.” Dr. Viney also commented on the gap between research metrics and the research community, saying:

“The link between these outputs and research activity or impact is little understood. What is most interesting is the development of metrics relating to other logically important areas of research activity – e.g. the ways in which researchers influence policy-setting processes, or the ways in which research feeds into policies, the way in which research teams develop new processes and products, and the way in which research materials are disseminated and used. We can make a good argument that these activities are intermediate indicators of impact; they logically describe steps along a pathway to impact. However, these activities are not well reported in any standard format, and data on these outputs is not readily available.”

Dr. Ian Viney: “We should be open about our methods; discussion across stakeholders is helpful, and work such as Snowball Metrics will help accelerate the field. I will be convinced that a particular method should become a standard when it has been successfully and reproducibly applied, when it helps us better understand research progress, productivity and/or quality. The scientometrics community should provide expert advice to stakeholders regarding the development of suitable approaches. This community has a central role in proposing the most promising methods for wider use.”


Dr. Jonathan Adams, Chief Scientist at Digital Science, who participated in the panel, cautioned against rigid setting of standards. In his view, “It is infeasible to set comprehensive written standards for metrics, indicators or evaluation methodologies when there is a diverse range of contexts, cultures and jurisdictions in which they might be applied and when data access and data diversity are changing very rapidly.” He therefore believes that any attempt to create such standards would create “an artificial vision of security and stability” that might be used inappropriately by research agencies and managers.

Dr. Paul Wouters, Director of The Centre for Science and Technology Studies (CWTS) and professor of Scientometrics, added his concern regarding standardization of metrics stating that “standards may be important for the construction of databases of research products. So at the technical level they can be useful. However, standards can mislead users if they are essentially captured by narrow interests.”

Following the conference, CWTS, a part of Leiden University, published “The Leiden Manifesto in the Making: proposal of a set of principles on the use of assessment metrics” (5).

In the manifesto Paul Wouters, Sarah de Rijcke and their colleagues summarized some principles around which the debate about standardization and quality should revolve:

  1. There should be a connection between assessment procedures and the primary process of knowledge creation. If such a connection doesn’t exist then the assessment loses a part of its usefulness for researchers and scholars.
  2. Standards developed by universities and data providers should be monitored and should benefit from the technical expertise of the Scientometrics community. Although the Scientometrics community does not want to set standards itself, it should take an active part in documenting them and ensuring their validity and quality.
  3. There is a need to strengthen the working relationship with the public infrastructure of meta-data, including current research information systems, publication databases and citation indexes, including those available from for-profit companies.
  4. Taking these issues together provides an inspiring collective research agenda for the Scientometrics community.

Dr. Wouters added that the main motivation should be to “prevent misuse or harmful applications by deans, universities or other stakeholders in scientometrics. Although many studies in scientometrics suffer from deficient methods, this problem cannot be solved with standards, but only with better education and software (which may build on some technical standards).”

Dr. Paul Wouters:

“I do not think that global standards are currently possible or even desirable. Therefore: principles of good evaluation practices, YES; universal technical standards, NO.

The Scientometrics community should analyze, train, educate, clarify, and also take on board the study of how the Scientometric indicators influence the conduct of science and scholarship.”


Dr. Peter Dahler-Larsen, a professor at the Department of Political Science at the University of Copenhagen, recently contributed to The Citation Culture blog on the topic of developing quality standards for Science & Technology indicators. Commenting to Research Trends on his contribution, Dr. Dahler-Larsen said that “it is important to follow the discussion of standards, because in some fields standards pave the way for a particular set of practices that embody particular values - for better or for worse.” The main motivation for the development of standards, he added, is “NOT their agreed-upon character” but rather their ability to “inspire ethical and methodological awareness, and this can take place even without much consensus.” Yet, in spite of their importance, Dr. Dahler-Larsen “does not have high hopes about the adoption of standards in policy-making.”

Dr. Dahler-Larsen:

“The most important function of standards is to raise awareness and debate. Standards can be helpful in discussions of problematic policy-making initiatives.
The Scientometric community has an important role to play because, presumably, it comprises experts who know about the pros and cons and pitfalls related to particular measurement approaches. Their accumulated experience should inform better practice.”

Dr. John T. Green, who chairs the Snowball Metrics Steering Committee, believes that “whilst some argue that it is impossible to define or agree standard metrics because of the diverse range of contexts and geographies, like it or not, funders and governments are using such measures - some almost slavishly and exclusively (as in Taiwan to allocate government funding). Therefore, whilst it is ideologically acceptable for the Scientometric community to take the high ground and claim that because metrics cannot be perfect therefore none should be developed, to do so is ignoring reality – let us at least do our best and develop metrics as best we can (as indeed has happened over time with bibliometrics). I believe it is important for the academic community to engage and ensure that whatever is used to measure them is fit for purpose, or as fit as can be, especially given that they should never be used in isolation – metrics are only a part, albeit an important part, of the evaluation landscape. Thus the approach of Snowball – bottom-up and owned by the academic community.”

Professor Jun Ikeda, Chief Advisor to the President of the University of Tsukuba, Japan, supports the development of standards in metrics. In his view, they will save researchers time when reporting to funders. Prof. Ikeda pointed out that comparing universities’ performance is often genuinely difficult, saying: “If every university defines things in their own way, and calculates metrics in their own way, then seeing a metric that is higher or lower than someone else's is meaningless, because the difference might not be real, but just due to different ways of working with the data. I want to do apples-to-apples comparisons, to be sure that I can be confident in differences that I see, and confident in taking decisions based on them.”

Prof. Ikeda:

“The biggest gap is for the research community to drive the direction that this whole area is going in. A lot is happening, but we feel a bit like it is all being done to us. There is space for us to take control of our own destiny, and shape things as we would like them to be, and as they make the most sense to us.”

Research-focused universities need to be active in defining the metrics that they want to use to give insights into their strategies, Prof. Ikeda said. “Ideally the researchers within our universities would also support and use the same metrics to help them to promote their careers and to understand how they are performing relative to their own peers.” Whether standardization in research metrics is necessary, or even desirable, remains a matter of debate, but there is no doubt that the discussion itself is important: it raises awareness of the complexity of the topic as a whole. Standards may not be easy to develop or implement, but there is little doubt that consensus regarding their proper use is needed. As more data become available and more metrics are developed, the question of their usefulness and accuracy in different settings becomes crucial. Data providers, evaluators, funders and the Scientometric community must work together not only to aggregate, calculate and produce metrics, but also to test them in different contexts and to educate the wider audience in their proper use.



(1) Colledge, L. (2014) “Snowball Metrics Recipe Book”, Available at: http://www.snowballmetrics.com/wp-content/uploads/snowball-recipe-book_HR.pdf
(2) Snowball Metrics, “Why is this initiative important to the higher education sector?”, Available at: http://www.snowballmetrics.com/benefits
(3) Snowball Metrics (2014) “Response to the call for evidence to the independent review of the role of metrics in research assessment”, Available at: http://www.snowballmetrics.com/wp-content/uploads/Snowball-response-to-HEFCE-review-on-metrics-300614F.pdf
(4) Elsevier (2014) “Response to HEFCE’s call for evidence: independent review of the role of metrics in research assessment”, Available at: http://www.elsevier.com/__data/assets/pdf_file/0015/210813/Elsevier-response-HEFCE-review-role-of-metrics.pdf
(5) CWTS (2014) “The Leiden manifesto in the making: proposal of a set of principles on the use of assessment metrics in the S&T indicators conference”, Available at: http://citationculture.wordpress.com/2014/09/15/the-leiden-manifesto-in-the-making-proposal-of-a-set-of-principles-on-the-use-of-assessment-metrics-in-the-st-indicators-conference/

Reporting Back

Reporting Back: STI 2014 Leiden, The Netherlands

The latest STI conference, held at Leiden University in September, focused on the topic of implementing standards in research metrics. Gali Halevi attended the conference and reports back.


One of the main attractions at the Science and Technology Indicators (STI) conference (1), held in Leiden in September 2014, was “The Daily Issue”. Created by Diana Wildschut, Harmen Zijp and Patrick Nederkoorn, this conference newspaper gave its reporters three hours to find out what was happening at the conference and report on it using 1950s equipment, without telephones or internet (2). The result was a hilarious newsletter, published every day and handed to an audience who came to realize how the world of Scientometrics looks to outsiders. One example was an item on serendipity in the scientific process, which resulted in the invention of “Serendipimetry”, a metric that measures serendipity (see the text box below).


(The Daily Issue; No. 72 http://sti2014.cwts.nl/download/f-z2r2.pdf)

Some of the most valuable scientific outcomes are the result of accidental discoveries. This article explores the possibility of a metric of serendipity. Firstly, a clear distinction has to be made between a serendipity indicator and a serendipitous indicator. The latter may only be meaningful in the way it could assist chance events in finding information. More interesting however, it could be to actually measure, or at least estimate, the degree of serendipity that led to a research result. And yet another angle would be the presentation of research that might facilitate its receivers, e.g. the readers of an article, in making odd detours, living through paradigm shifts et cetera.

Alongside the traditional topics often discussed at the STI conference, such as statistical representation of scientific output in the form of performance indicators and metrics, this year’s conference put a strong focus on innovation. New datasets and algorithms were among the topics given significant attention. Examples include new data derived from funding systems, which were explored in relation to productivity, efficiency, and patenting. Looking at the factors that influence participation in government-funded programs, Lepori et al. (3) found a very strong concentration of participations in a very small number of European research universities. They also showed that these numbers can be predicted with high precision from organizational characteristics, especially size and international reputation. Findings on the relationship between funding, competitiveness and performance (4) contradicted previous studies: the researchers found that the share of institutional funding does not correlate with competitiveness, overall performance, or top performance. Additional research papers using funding systems data are available in the full conference proceedings.

New gender and career data currently available brought forth a series of studies dedicated to the relationship between gender, career level and scientific output. Van der Weijden and Calero Medina (5) studied the oeuvres of female and male scientists in academic publishing using bibliometrics. Using data from the ACUMEN survey (6), their analysis confirmed the traditional gender pattern: men produce on average a higher number of publications compared to women, regardless of their academic position and research field, and women are not evenly represented across authorship positions. Paul-Hus et al. (7) studied the place of women in the Russian scientific research system in various disciplines and how this position has evolved during the last forty years.  They found that gender parity is far from being achieved and that women remain underrepresented in terms of their relative contribution to scientific output across disciplines. Sugimoto et al. (8) presented a study featuring a global analysis of women in patents from 1976 to 2013, which found that women’s contribution to patenting remains even lower than would be predicted given their representation in Science, Technology, Engineering, and Mathematics.

Career-related studies also open new paths to understanding the relationships between academic positions, publishing, and relative scientific contributions of researchers throughout their careers. Derycke et al. (9) studied the factors influencing PhD students’ scientific productivity and found that scientific discipline, phase of the PhD process, funding situation, family situation, and organizational culture within the research team are important factors predicting the number of publications. Van der Weijden (10) used a survey to study PhD students’ perceptions of career perspectives in academic R&D, non-academic R&D, and outside R&D, and assessed to what extent these career perspectives influence their job choice. She found that several career-related aspects, such as long-term career perspectives and the availability of permanent positions, are judged much more negatively for the academic R&D sector.

A session on University-Industry collaborations featured interesting research topics such as the relationship between industry-academia co-authorships and their influence on academic commercialization output (Wong & Singh (11); Yegros-Yegros et al. (12)) as well as global trends in University-Industry relationships using affiliation analysis of dual publications (Yegros-Yegros & Tijssen (13)). Related to this topic was a session on patents analysis which was used to study topics such as scientific evaluation and strategic priorities (Ping Ho & Wong (14)).

Measures of online attention, a topic of discussion over the past couple of years, received special focus at the conference, with probably the most studies featured in a single session. Studies covered topics such as Mendeley readership and its relationship with academic status (Zahedi et al. (15)), the impact of Nobel Prize awards on tweets (Levitt & Thelwall (16)), and gender biases (Bar-Ilan & Van der Weijden (17)).

True to its slogan “Context counts: Pathways to master big and little data”, the conference selected a wide range of studies using newly available data to explore topics that provide context to scientific output, including gender, career, university-industry relationships and measurement of engagement. In addition, the keynote lectures provided some overall strategic insight into metrics development. Diana Hicks and Henk Moed encouraged the audience to think more strategically about the application of metrics for evaluation purposes. The 7 principles manifesto suggested by Diana Hicks provides evaluators with a framework that can be used to assess researchers, institutions and programs. This manifesto was picked up by the CWTS group in Leiden, headed by Paul Wouters, which is now working on creating an agreed-upon set of principles that could potentially inform evaluation and funding systems (18).

Henk Moed (19) called for special attention to be given to the context and purpose of evaluation, using meta-analysis to inform the choice of data and methodology. Presenting “The multi-dimensional research assessment matrix”, he gave some examples of how to compile correct and fair evaluation indicators using guiding questions that inform the process (20).

If there is one message to be drawn from this conference, it is that the plethora of recently available data, statistical analyses and indicators is a positive development only if they are used in the correct context and can answer the questions posed. No single metric fits all evaluation objectives, and therefore the data selected, the methods used and the conclusions drawn must be chosen carefully, keeping in mind that context is probably the key factor in successful assessment.

Daily Issue example

The Daily Issue | Edition 73, commenting on Diana Hicks’ 7 principles of research evaluation manifesto (comments in italics)

  1. Metrics are not a substitute for assessment – Don’t blame it on the metrics
  2. Spend time and money to produce high quality data – Print your results on glossy paper
  3. Metrics should be transparent and accessible – Everyone can have a say even if they don’t know s**
  4. Data should be verified by those evaluated – Be careful not to insult anyone
  5. Be sensitive to field differences – Use long words to avoid homonyms
  6. Normalize data to account for variations by fields and over time – If your data is useless for one field, make slight adaptations and use them for another field or try again in 10 years
  7. Metrics should align with strategic goals – Follow the money

The Daily Issue: Edition 73: http://sti2014.cwts.nl/News?article=n-w2&title=Daily+Issues+now+online!


(Full text of all articles is available here: http://sti2014.cwts.nl/download/f-y2w2.pdf)

(1) http://sti2014.cwts.nl/Home
(2) http://www.spullenmannen.nl/index.php?lang=en&page=werk&proj=29
(3) Lepori, B., Heller-Schuh, B., Scherngell, Th., Barber, M. (2014) “Understanding factors influencing participation in European programs of Higher Education Institutions”, Proceedings of STI 2014 Leiden, p. 345
(4) Sandström, U., Heyman, U. and Van den Besselaar, P. (2014) “The Complex Relationship between Competitive Funding and Performance”, Proceedings of STI 2014 Leiden, p. 523
(5) Van der Weijden, I. and Calero Medina, C. (2014) “Gender, Academic Position and Scientific Publishing: a bibliometric analysis of the oeuvres of researchers”, Proceedings of STI 2014 Leiden, p. 673
(6) ACUMEN Survey (2011), Available at: http://cybermetrics.wlv.ac.uk/survey-acumen.html
(7) Paul-Hus, A., Bouvier, R., Ni, C., Sugimoto, C.R., Pislyakov, V. and Larivière, V. (2014) “Women and science in Russia: a historical bibliometric analysis”, Proceedings of STI 2014 Leiden, p. 411
(8) Sugimoto, C.R., Ni, C., West, J.D., and Larivière, V. (2014) “Innovative women: an analysis of global gender disparities in patenting”, Proceedings of STI 2014 Leiden, p. 611
(9) Derycke, H., Levecque, K., Debacker, N., Vandevelde, K. and Anseel, F. (2014) “Factors influencing PhD students’ scientific productivity”, Proceedings of STI 2014 Leiden, p. 155
(10) Van der Weijden, I., Zahedi, Z., Must, U. and Meijer, I. (2014) “Gender Differences in Societal Orientation and Output of Individual Scientists”, Proceedings of STI 2014 Leiden, p. 680
(11) Wong, P.K. and Singh, A. (2014) “A Preliminary Examination of the Relationship between Co-Publications with Industry and Technology Commercialization Output of Academics: The Case of the National University of Singapore”, Proceedings of STI 2014 Leiden, p. 702
(12) Yegros-Yegros, A., Azagra-Caro, J.M., López-Ferrer, M. and Tijssen, R.J.W. (2014) “Do University-Industry co-publication volumes correspond with university funding from business firms?”, Proceedings of STI 2014 Leiden, p. 716
(13) Yegros-Yegros, A. and Tijssen, R. (2014) “University-Industry dual appointments: global trends and their role in the interaction with industry”, Proceedings of STI 2014 Leiden, p. 712
(14) Ho, Y.P. and Wong, P.K. (2014) “Using Patent Indicators to Evaluate the Strategic Priorities of Public Research Institutions: An exploratory study”, Proceedings of STI 2014 Leiden, p. 276
(15) Zahedi, Z., Costas, R. and Wouters, P. (2014) “Broad altmetric analysis of Mendeley readerships through the ‘academic status’ of the readers of scientific publications”, Proceedings of STI 2014 Leiden, p. 720
(16) Levitt, J. and Thelwall, M. (2014) “Investigating the Impact of the Award of the Nobel Prize on Tweets”, Proceedings of STI 2014 Leiden, p. 359
(17) Bar-Ilan, J. and Van der Weijden, I. (2014) “Altmetric gender bias? – Preliminary results”, Proceedings of STI 2014 Leiden, p. 26
(18) CWTS (2014) “The Leiden manifesto in the making: proposal of a set of principles on the use of assessment metrics in the S&T indicators conference”, Available at: http://citationculture.wordpress.com/2014/09/15/the-leiden-manifesto-in-the-making-proposal-of-a-set-of-principles-on-the-use-of-assessment-metrics-in-the-st-indicators-conference/
(19) Moed, H.F., Halevi, G. (2014) “Metrics-Based Research Assessment”, Proceedings of STI 2014 Leiden, p. 391
(20) Moed, H.F., Plume, A. (2011) “The multi-dimensional research assessment matrix”, Research Trends, Issue 23, May 2011, Available at: http://www.researchtrends.com/issue23-may-2011/the-multi-dimensional-research-assessment-matrix/

Building research capacity in Sub-Saharan Africa through inter-regional collaboration

George Lan discusses findings from a study on collaboration in Sub-Saharan Africa, carried out by the World Bank and Elsevier.


Over the next few decades, Sub-Saharan Africa (SSA) will benefit from a “demographic dividend” – a dramatic increase in its working-age population (1). Yet, increasing the subcontinent’s labor pool cannot push Africa toward a developed, knowledge-based society without simultaneously increasing the subcontinent’s capacity to train and educate that talent.

The strength of a region’s research enterprise is closely correlated with its long-term development and is an important driver of economic success. Research suggests that bibliometric indicators on publications can help characterize the stage of a country’s scientific development (2).

A recent study conducted by the World Bank and Elsevier looked at the state of Science, Technology, Engineering, and Mathematics research in SSA (3). For this analysis, Sub-Saharan Africa was divided into three regions (West & Central Africa, Southern Africa, and East Africa). South Africa is considered separately from the rest of Sub-Saharan Africa due to large differences in research capacity and output.

By many measures, SSA has made great strides in its research performance, doubling its overall research output over the past decade and significantly increasing its global article share (4). However, as past studies show, article growth in other countries and regions of the developing world, particularly Asia, has outpaced that of SSA in recent years (5).

SSA researchers also collaborate extensively with international colleagues. Between 2003 and 2012, international collaborations as a percentage of Southern Africa’s total article output increased from 60.7% to 79.1% (Note 1). For East Africa, international collaborations consistently comprised between 65% and 71% of the region’s total output. Although West & Central African researchers collaborate with international colleagues at relatively lower levels (between 40% and 50% of the region’s total research output came from international collaborations), those rates are still well above the world average.

However, echoing the findings of past studies (6), collaboration between different African regions remains low. To calculate the number of collaborations between East Africa and West & Central Africa, for example, we counted all publications in which at least one author holds an affiliation to an East African institution and another author holds an affiliation to a West & Central African institution. As Figure 1 shows, inter-regional collaborations constitute a small fraction of Sub-Saharan Africa’s total international collaborations. In 2012, less than 6% of the region’s total output resulted from inter-regional collaborations, while nearly 60% came from international collaborations. Moreover, more than half of those inter-regional collaborations were co-authored with colleagues from institutions in OECD countries (Note 2).
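The counting rule described above is simple enough to sketch in code. The snippet below is a minimal illustration only, not the study’s actual methodology or data model: the record layout and region codes ("EA", "WC", "OECD") are hypothetical stand-ins for Scopus affiliation data.

```python
# Minimal sketch of the inter-regional collaboration count described above.
# The record structure and region codes are hypothetical, not the Scopus data
# model actually used in the World Bank/Elsevier study.

def count_interregional(publications, region_a, region_b):
    """Count publications with at least one author affiliation in each of the two regions."""
    count = 0
    for pub in publications:
        regions = {aff["region"]
                   for author in pub["authors"]
                   for aff in author["affiliations"]}
        if region_a in regions and region_b in regions:
            count += 1
    return count

# Two toy records: only the first pairs an East African (EA) affiliation
# with a West & Central African (WC) one, so the count is 1.
publications = [
    {"authors": [{"affiliations": [{"region": "EA"}]},
                 {"affiliations": [{"region": "WC"}, {"region": "OECD"}]}]},
    {"authors": [{"affiliations": [{"region": "EA"}]},
                 {"affiliations": [{"region": "OECD"}]}]},
]

print(count_interregional(publications, "EA", "WC"))  # -> 1
```

The same record structure could also be filtered for the presence or absence of an OECD affiliation to reproduce the split reported above.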


Figure 1 - International and Inter-regional collaborations as percentage of Sub-Saharan African total research output, 2003-2012. Source: Scopus.

Figure 2 displays the trends in inter-regional collaboration specifically for East Africa vis-à-vis the other regions and South Africa. The top three trend lines (drawn with heavier lines) correspond to all collaborations between East Africa (EA) and West & Central Africa (WC), Southern Africa (SA), and South Africa (ZA), respectively. The bottom three trend lines correspond to collaborations in which no co-authors were affiliated with institutions in OECD countries.



Figure 2 - Different types of inter-regional collaborations as percentage of East Africa’s total research output, 2003-2012. (EA = East Africa, WC = West & Central Africa, SA = Southern Africa, and ZA = South Africa). Source: Scopus.

Relative to East Africa’s overall rates of international collaboration (which comprise over 60% of East Africa’s total output), its level of inter-regional collaboration with other SSA regions is low, at about 2%. East Africa’s collaborations with South Africa have increased considerably over time, from 3.9% in 2003 to 7.9% in 2012. This growth has been driven mostly through collaborations involving partners at institutions in developed countries. The annual growth rate of East Africa-South Africa collaborations with an additional OECD partner was 8.2%, compared to 3.3% for those collaborations without an OECD partner.
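The article does not spell out how these annual growth rates were calculated; reading them as compound annual growth rates (CAGR) is an assumption. The sketch below shows that calculation, using the 2003 and 2012 East Africa-South Africa collaboration shares quoted above purely as example inputs.

```python
# Minimal CAGR sketch. Treating the reported "annual growth rate" as a compound
# annual growth rate is an assumption; the inputs are the collaboration shares
# quoted in the text (3.9% of output in 2003, 7.9% in 2012).

def cagr(start_value, end_value, n_years):
    """Compound annual growth rate between two observations n_years apart."""
    return (end_value / start_value) ** (1.0 / n_years) - 1.0

growth = cagr(start_value=3.9, end_value=7.9, n_years=2012 - 2003)
print(f"{growth:.1%}")  # -> roughly 8.2% per year for this series
```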

This pattern of low inter-regional collaboration (especially without an OECD partner) holds for West & Central Africa as well. In 2012, collaborations between West & Central Africa and other SSA regions accounted for only 0.9% of the former’s total research output.

Overall, the level of inter-regional collaborations in Sub-Saharan Africa has increased over the past decade, and this has been largely driven by collaborations involving OECD countries. On the one hand, this is welcome news, bolstering the subcontinent’s research capacity. On the other hand, in order for the regions to further develop, there needs to be a greater focus on Africa-centric collaboration.


Note 1: This analysis defines international collaboration as multi-authored research outputs with authors affiliated with institutions in at least one region of SSA (West & Central Africa, Southern Africa, or East Africa) and elsewhere (including another SSA region).
Note 2: OECD member countries include: Australia, Austria, Belgium, Canada, Chile, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea, Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, United Kingdom, and the United States.


(1) Drummond, P. et al. (2014) “Africa Rising: Harnessing the Demographic Dividend”, IMF Working Paper. Retrieved from https://www.imf.org/external/pubs/ft/wp/2014/wp14143.pdf.
(2) Moed, H. and Halevi, G. (2014) “International Scientific Collaboration”. In Higher Education in Asia: Expanding Out, Expanding Up (Part 3), UNESCO Institute for Statistics. Retrieved from http://www.uis.unesco.org/Library/Documents/higher-education-asia-graduate-university-research-2014-en.pdf
(3) Lan, G. J., Blom, A., Kamalski, J., Lau, G., Baas, J., & Adil, M. (2014) “A Decade of Development in Sub-Saharan African Science, Technology, Engineering & Mathematics Research”, Washington, D.C. Retrieved from www.worldbank.org/africa/stemresearchreport.
(4) Schemm, Y. (2013) “Africa doubles research output over past decade, moves towards a knowledge-based economy”, Research Trends, Issue 35, December 2013. Retrieved from http://www.researchtrends.com/issue-35-december-2013/africa-doubles-research-output/
(5) Huggett, S. (2013). “The bibliometrics of the developing world”, Research Trends, Issue 35, December 2013. Retrieved from http://www.researchtrends.com/issue-35-december-2013/the-bibliometrics-of-the-developing-world/
(6) Boshoff, N. (2009) “South–South research collaboration of countries in the Southern African Development Community (SADC)”, Scientometrics, Vol. 84, No. 2, pp. 481–503. doi:10.1007/s11192-009-0120-0
