Articles

Research Trends is an online magazine providing objective insights into scientific trends based on bibliometric analyses.

The Beginning of a Beautiful Friendship – Social Sciences Research and Industry Products

If you think the term ‘co-citation analysis’ is strictly related to bibliometrics and lies within the realm of Library and Information Science, think again!

Co-citation analysis morphs into patents

The term was coined in 1973, when Henry Small described the methodology in his renowned article ‘Co-citation in the scientific literature: a new measure of the relationship between two documents’, which was cited hundreds of times in the following years. Co-citation analysis has since been used for numerous purposes, such as uncovering author networks in a particular field of investigation and mapping the emergence of a scientific field1–3. Small’s technique of using co-citations to analyze a body of research in order to discover scientific trends and collaborations (among other things) was for the most part used in the social sciences; then, in the late 1990s, the term suddenly started appearing not only in academic discussions but also in the patent literature, and the approach has been used by industry pioneers such as Xerox, IBM and Microsoft, and more recently Google.
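
To make the mechanics concrete, here is a minimal sketch of how co-citation counts could be computed from a set of reference lists. The document identifiers and reference lists are purely illustrative, not data from any of the studies cited here.

```python
from itertools import combinations
from collections import Counter

# Each citing paper is represented by the set of documents it references.
# These papers and reference lists are invented for illustration.
citing_papers = [
    {"doc_A", "doc_B", "doc_C"},
    {"doc_A", "doc_B"},
    {"doc_B", "doc_C", "doc_D"},
]

# Two documents are co-cited whenever they appear together in the same
# reference list; their co-citation strength is the number of such papers.
co_citations = Counter()
for references in citing_papers:
    for pair in combinations(sorted(references), 2):
        co_citations[pair] += 1

for (doc1, doc2), count in co_citations.most_common():
    print(f"{doc1} and {doc2} are co-cited {count} time(s)")
```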

Technology catches up

How did co-citation analysis, which was conceived as a way to analyze relationships between articles, morph into a product apparatus? Looking at the time frame in which patents using the term appear (see Figure 1), it seems that computational developments at the time may give us an answer. The late 1990s saw a surge in different types of electronic databases that enabled the indexing and retrieval of documents. These years also saw an increase in co-citation analysis research publications and patents alike, including some interesting surges in 2009 and 2010, when semantic information retrieval and web page ranking took center stage in web-based applications. The first patent to use co-citation analysis as a method is a Xerox patent that applies it to generate clusters of documents in a database4.

Figure 1 – Occurrence of the term ‘co-citation analysis’ in peer reviewed articles versus patents. Sources: TotalPatent and Scopus.

This patent (US Patent 6038574) describes a co-citation analysis method in which hyperlinks in web pages are viewed as references, and the relationships found are used to help create a web page index. Following the Xerox patent application, a series of patents mentioning co-citation analysis in their methods, claims, or references to prior publications appears throughout the patent literature, filed by prominent technology companies (see Figure 2).

Figure 2 – Co-citation patents assignees. Source: TotalPatent.

Looking at some of these patents, one can see interesting applications of co-citation analysis: for example, a Google patent (WO2008134373A1) uses “co-citation analysis, or anchor text analysis (e.g., analysis of text in or near links to the multimedia events) to determine related multimedia events” (Description of Embodiment; 0035). IBM applied co-citation analysis to link analysis on the Web (US7792827B2), and Microsoft used it in the development of latent semantic analysis (US20070239431A1).

Technology? Transferred!

This type of relationship between academic research and product development is a phenomenon that we have learned to expect in areas of medicine or engineering, for example. In these areas, we can more easily find a direct link between theoretical models and their growth into tangible products. What this analysis shows is that social science research can go through a similar evolution, but the cycle through which it develops takes longer to deploy; in this case more than 20 years.

References

1. Culnan, M. J. (1987) Mapping the intellectual structure of MIS, 1980-1985: a co-citation analysis. MIS Quarterly: Management Information Systems, Vol. 11, pp. 341–350.
2. Chen, C. (1999) Visualising semantic spaces and author co-citation networks in digital libraries. Information Processing and Management, Vol. 35, pp. 401–420.
3. Zhao, D., & Strotmann, A. (2011) Intellectual structure of stem cell research: a comprehensive author co-citation analysis of a highly collaborative and multidisciplinary field. Scientometrics, Vol. 87, pp. 115–131.
4. US Patent 6038574 (2000) Method and apparatus for clustering a collection of linked documents using co-citation analysis.

Two’s company: how scale affects research groups

In the 1880s, French agricultural engineer Max Ringelmann carried out a series of experiments exploring how much effort people put into a task in individual and group settings. To do so, Ringelmann had male students pull on a rope and measured the force exerted, first individually and then when they were part of a team. Ringelmann found that while overall performance increased as students were added to the team, the average force exerted per worker, or individual performance, decreased linearly with each additional worker1. In other words, if each individual had put the same effort into their own work, the combined result would have been greater than when they worked as a team. The effect would come to be studied by social psychologists looking at group performance throughout the twentieth century, but, until 1986, the earliest known source of information about Ringelmann’s studies was a 1927 paper by the German industrial psychologist Walther Moede.
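
As a rough numerical illustration of the effect, one can model individual effort shrinking with each added co-worker while the group total still grows. The baseline force and the rate of decline below are invented for illustration, not Ringelmann’s actual measurements.

```python
# Illustrative only: the baseline pull and the decline rate are assumed values.
baseline_pull = 63.0          # force (kg) an individual exerts when working alone
decline_per_coworker = 0.07   # fractional drop in individual effort per added worker

for group_size in range(1, 9):
    individual_share = baseline_pull * max(0.0, 1 - decline_per_coworker * (group_size - 1))
    total_pull = individual_share * group_size
    print(f"group of {group_size}: {individual_share:.1f} kg per person, "
          f"{total_pull:.1f} kg in total")
```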

In 1974, Alan Ingham and colleagues performed similar rope-pulling experiments in an attempt to verify the existence of the Ringelmann effect2. Their findings showed individual performance to decrease significantly with the first few additional co-workers, up to a group of three workers — beyond which additional workers did not significantly decrease individual performance. Ingham et al. described two possible causes for the decrease in individual performance: “It remained unclear whether group members pulled less hard because of incoordination or because of losses in motivation”2. Controlling for incoordination in a further experiment in which an individual pulled on the rope — but in some cases believed he was part of a group — the study found a very similar pattern of decreased performance, showing that loss of motivation occurred when co-workers (whether perceived or real) were added to the team.

Ringelmann in research

The Ringelmann effect has real implications for scientific research. Scientific researchers are not simply pulling a rope in unison — but if individual performance decreases as additional researchers come to work on a particular problem, this makes large-scale research projects an unattractive prospect not only for individuals, but also for funding bodies. As Andrea Bonaccorsi and Cinzia Daraio put it, “pressure on public budgets in almost all industrialised countries has led governments to pursue (or at least declare they pursue) efficiency in the allocation and management of resources in the public research sector. The increasing societal demand for accountability and transparency of science also makes it important to demonstrate that public funding follows clear rules”3.

Ton van Raan has investigated the relationship between a variety of bibliometric indicators of size and research quality, at the level of the research group4. Van Raan states that “[t]he research group is the most important working floor entity in science, as clearly shown by the internal structure of universities and research institutes”; however, van Raan goes on to say that obtaining data at the level of a research group is far more difficult than for individual authors, institutes or even for whole countries: this is because research groups are not captured in the bibliographic fields attached to papers, such as author names or institutional addresses.

Sizing up subject fields

When Ralph Kenna and Bertrand Berche started to investigate the relationship between the size of research groups and their performance, they turned to the UK’s Research Assessment Exercise (RAE) as a source of data. The RAE captures data regarding not only the quality of research groups, but also their field of research and their size. In a rapid succession of papers, Kenna and Berche have compared the sizes of research groups in various fields with the quality assigned to those groups’ research5–9. Their findings show that in every field there exists a critical mass for increased productivity, with an upper and a lower boundary. The lower boundary relates to the classical notion of critical mass, “loosely described as the minimum size a research team must attain for it to be viable in the longer term”; between the lower and upper boundaries, “the overall strength of research teams tends to rise quadratically with increasing size”; beyond the upper boundary, “research quality levels out”. The basic implication is that “this levelling off refutes arguments which advocate ever increasing concentration of research support into a few large institutions”, and their research identifies optimal research group sizes in a number of disciplines8. Lately, Kenna and Berche have even used their approach to develop a method of normalizing quality between different research disciplines9.
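
As a schematic illustration of the pattern described above (a quadratic rise between a lower and an upper critical-mass boundary, then a levelling off), the sketch below uses an assumed functional form and invented boundary values; it is not Kenna and Berche’s fitted model.

```python
def schematic_group_quality(size, lower=4, upper=12):
    """Schematic research-quality curve: negligible below the lower critical
    mass, roughly quadratic growth between the boundaries, and flat above the
    upper boundary. All parameters are illustrative, not fitted values."""
    if size < lower:
        return 0.0
    capped = min(size, upper)       # growth stops at the upper boundary
    return float((capped - lower) ** 2)

for n in (2, 4, 8, 12, 16, 24):
    print(f"group size {n}: schematic quality {schematic_group_quality(n):.0f}")
```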

While Kenna and Berche have developed a way to analyze the effect of research group size on overall performance, issues remain regarding how research groups can be assessed. Currently, data supplied for national research assessment programs seem to be the best option, where available; however, tools such as SciVal Strata are starting to address the problem, opening up ways of assessing research groups by citation analysis. (For more information on this approach, the reader is referred to the article by Judith Kamalski and Colby Riese in this issue.)

References

1. Kravitz, D.A. & Martin, B. (1986) Ringelmann rediscovered: the original article. Journal of Personality and Social Psychology, Vol. 50, No. 5, pp. 936–941.
2. Ingham, A.G. et al. (1974) The Ringelmann effect: studies of group size and group performance. Journal of Experimental Social Psychology, Vol. 10, pp. 371–384.
3. Bonaccorsi, A. & Daraio, C. (2005) Exploring size and agglomeration effects on public research productivity. Scientometrics, Vol. 63, No. 1, pp. 87–120.
4. van Raan, A.F.J. (2008) Scaling rules in the science system: influence of field-specific citations characteristics on the impact of research groups. Journal of the American Society for Information Science and Technology, Vol. 59, No. 4, pp. 565–576.
5. Kenna, R. & Berche, B. (2010) The extensive nature of group quality. Europhysics Letters, Vol. 90, DOI: 10.1209/0295-5075/90/58002.
6. Kenna, R. & Berche, B. (2011) Critical mass and the dependency of research quality on group size. Scientometrics, Vol. 86, pp. 527–540.
7. Kenna, R. & Berche, B. (2011) Statistics of statisticians: critical mass of statistics and operational research groups in the UK. arXiv:1102.4914v2.
8. Kenna, R. & Berche, B. (2011) Concentration versus dispersion of research resources: a contribution to the debate. arXiv:1006.3701v1.
9. Kenna, R. & Berche, B. (2011) Normalization of peer-evaluation measures of group research quality across academic disciplines. Research Evaluation, Vol. 20, No. 2, pp. 107–116.

How do European universities perceive the rankings? Global University rankings and their impact

Research Trends reports back on a meeting on global university rankings organized by the European University Association (EUA), with a focus on how European universities perceive such rankings.

On June 15, 2011, the European University Association (EUA) made public the results of the report ‘Global University rankings and their impact’. This report, led by Professor Andrejs Rauhvargers, provides a comparative analysis of the methodologies used in the most popular rankings*. The presentation of the report’s results was followed by a panel discussion with university leaders and higher education experts** about the impact of rankings on universities. The report does not intend to rank the rankings themselves, but to analyze their methodologies and to indicate current developments in alternative ways of measuring university quality and performance in all their dimensions and complexity.

Useful rankings

The authors of the report recognize that rankings are here to stay, given their high level of acceptance by various stakeholders. The report acknowledges the positive aspects of the rankings for universities: they draw the attention of governments to higher education and research; they improve accountability and management methods; and they demonstrate the importance of collecting reliable data. Regarding the robustness of the data on the output, both Web of Science and Scopus were mentioned as reliable databases as far as the sciences and medicine are concerned.

Main findings and criticisms

Going through the comparison of the various methodologies, the report details what is actually measured, how the scores for indicators are measured, and how the final scores are calculated — and therefore what the results actually mean.

The first criticism of university rankings is that they tend to principally measure research activities and not teaching. Moreover, the ‘unintended consequences’ of the rankings are clear, with more and more institutions tending to modify their strategy in order to improve their position in the rankings instead of focusing on their main missions.

For some ranking systems, lack of transparency is a major concern, and the QS World University Ranking in particular was criticized for not being sufficiently transparent.

The report also reveals the subjectivity in the proxies chosen and in the weight attached to each, which leads to composite scores that reflect the ranking provider’s concept of quality (for example, it may be decided that a given indicator counts for 25% or 50% of the overall assessment score, yet this choice reflects a subjective assessment of what is important for a high-quality institution). In addition, indicator scores are not absolute but relative measures, which can complicate comparisons between them. For example, if the indicator is the number of students per faculty, what does a score of, say, 23 mean? That there are 23 students per faculty member? Or that this institution has 23% of the students per faculty of the institution with the highest number of students per faculty? Moreover, the choice between simple counts and relative values is not neutral: the Academic Ranking of World Universities, for example, does not take the size of institutions into consideration.
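
The sketch below illustrates how such subjective choices play out in practice: two invented indicators are normalized relative to the best-performing institution and combined using weights picked by a hypothetical ranking provider. All institution names, indicator values and weights are illustrative; this is not any actual ranking’s methodology.

```python
# Illustrative indicator values for three hypothetical universities.
universities = {
    "Univ A": {"citations_per_paper": 12.0, "students_per_faculty": 9.0},
    "Univ B": {"citations_per_paper": 8.0,  "students_per_faculty": 23.0},
    "Univ C": {"citations_per_paper": 5.0,  "students_per_faculty": 40.0},
}

# Subjective weights chosen by the (hypothetical) ranking provider.
weights = {"citations_per_paper": 0.5, "students_per_faculty": 0.5}

def relative_score(value, best, higher_is_better=True):
    """Score an indicator relative to the best-performing institution (0-100)."""
    return 100 * (value / best if higher_is_better else best / value)

best = {
    "citations_per_paper": max(u["citations_per_paper"] for u in universities.values()),
    "students_per_faculty": min(u["students_per_faculty"] for u in universities.values()),
}

for name, ind in universities.items():
    composite = (
        weights["citations_per_paper"]
        * relative_score(ind["citations_per_paper"], best["citations_per_paper"])
        + weights["students_per_faculty"]
        * relative_score(ind["students_per_faculty"], best["students_per_faculty"],
                         higher_is_better=False)
    )
    print(f"{name}: composite score {composite:.1f}")
```

Changing the weights, or switching from scores relative to the best performer to absolute counts, reorders the table without any change in the underlying data, which is exactly the subjectivity the report highlights.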

Indicators measuring teaching quality, meanwhile, are perceived as far more questionable than those measuring research. Moreover, the EUA report describes how differences in the way academics publish and cite each other in different fields can create a strong bias in rankings. Accordingly, attempts have been made to normalize across disciplines, and the field normalization in the Leiden Ranking is a highly regarded example of this effort.

Recommendations

The EUA report makes several recommendations for ranking-makers, including the need to mention what the ranking is for, and for whom it is intended. Among the suggestions to improve the rankings, the following received the greatest attention from the audience:

  1. Include non-journal publications properly, including books, which are especially important for social sciences and the arts and humanities;
  2. Address language issues (is an abstract available in English, as local language versions are often less visible?);
  3. Include more universities: currently the rankings assess only 1–3% of the 17,000 existing universities worldwide;
  4. Take into consideration the teaching mission with relevant indicators.

Which ranking, which evaluation tool for which purpose?

Going further, the panel discussion’s participants recommended going beyond the rankings and analysing in detail what information institutions need in order to assess the diversity of their research activities, take strategic decisions and implement those choices. Citation analysis, or any other single indicator, is obviously not sufficient to make decisions on a well-informed basis. The discussion should instead focus on determining which tools are best suited to each question.

Expectations for the U-Multirank project are high, given its aim of showing the various missions of universities, moving away from the league-table format and helping students to make informed choices.

In conclusion, Jean-Marc Rapp, President of the EUA, outlined the next steps on the EUA’s agenda: to analyze both the desired and unintended consequences of the ranking systems, and to compare the different ways of assessing universities (ranking/rating, benchmarking, quality assessment and so on). This shows once again how crucial these evaluation matters are for universities, which are looking for advice on how to make the most informed choices.

Useful links

The original report: http://www.eua.be/pubs/Global_University_Rankings_and_Their_Impact.pdf

Notes

*The EUA looked at the following rankings: Shanghai Academic Ranking of World Universities (ARWU); Times Higher Education World University Ranking (in cooperation with Quacquarelli Symonds until 2009, and Thomson Reuters from 2010); World’s Best Universities Ranking (QS); Global Universities ranking (Reitor); HEEACT Rankings (Taiwan); EU University-based Research Assessment (AUBR Working Group, EU); Leiden Ranking; CHE University/Excellence rankings; U-Multirank; U-Map classification; and Webometrics.

**Participants involved in the meeting: 1) Presentation of the report: Jan Truszczynski, Director General for Education & Culture, European Commission; Allan Päll, Vice-Chairperson, European Students’ Union; Gero Federkeil, Vice President (VP), International Observatory on Academic Rankings and Excellence. 2) Panel discussion: chaired by Professor Ellen Hazelkorn, VP Research & Enterprise, Dublin, Ireland; Professor Jean-Pierre Finance, President of Université Henri Poincaré; Sir Howard Newby, Vice-Chancellor, University of Liverpool, United Kingdom; Jens Oddershede, Rector, University of Southern Denmark, Chairman of Universities Denmark.

Curriculum Vitae: Andrejs Rauhvargers

Andrejs Rauhvargers was born in 1952 in Riga, Latvia, and holds a Ph.D. in Chemistry from the University of Latvia. He is Secretary General of the Latvian Rectors’ Conference and Professor at the Faculty of Education at the University of Latvia. He has also served as Deputy State Secretary at the Latvian Ministry of Education, where he participated in developing legislation for higher education and was closely involved in the establishment of the higher education quality assurance system in Latvia and its coordination with the neighboring countries Estonia and Lithuania. He was also responsible for establishing a system for academic recognition in Latvia.

Internationally, Rauhvargers is a member of the Bologna Follow-Up Group and since 2005 has chaired the working group studying the progress in the 46 ‘Bologna’ countries and preparing the Bologna Stocktaking reports published in 2007 and 2009. He served as president of the European Network of Academic Recognition Centres (ENIC) from 1997 to 2001 and as President of the Intergovernmental Committee of the Lisbon Recognition Convention from 2001 to 2008.

He is the author of several other major reports and a number of publications on various aspects of higher education, both national and international, and has participated in and managed several higher education reform projects in Croatia, Latvia, Lithuania, Montenegro and Poland. He has been an invited speaker in his field in more than 20 countries. In 2006 Andrejs Rauhvargers was awarded the European Association for International Education’s Constance Meldrum Award for innovation, leadership and inspiration in international higher education.

Source: http://www.eua.be/about/who-we-are/secretariat/Andrejs-Rauhvargers.aspx


Individual Researcher Assessment: from Newby to Expert

This article shows how differences in publication counts between individual researchers may say more about where they are in their scientific careers than about the quality of their research.

How do we know whether a particular researcher is promising, or to whom funding should be allocated, or who the best candidate for a certain position is? Research Trends tries to offer some guidance.

A basic, but important, question relates to who is doing the assessment, and why. Is it a line manager, a funding body, or potential collaborators? The evaluator and the evaluated may even be one and the same, as when a researcher sets out to benchmark their own performance. In these scenarios, the goal might be to recruit, promote, or retain a researcher; to allocate time or equipment to a certain researcher; to determine who should receive awards or money; or to assess one’s own position. Depending on the goal, different aspects of the evaluation of scientific quality may come into play.

In all these cases, one can turn to bibliometric indicators to supplement other methods, such as peer review and interviews. Whether the focus is on productivity, impact, or collaborations, the numbers can shed light on an individual’s performance. When the researcher in question is established and has published extensively, this should be quite straightforward. But in the early days of a scientific career, it is much harder.

We can distinguish several stages in a researcher’s career. To keep it simple, here we look at three different and somewhat arbitrary career stages: Years 1, 5 and 10.

Year 1: a promising future

Imagine a young and promising researcher in Year 1 of their career, who has not published anything yet, though there is plenty in the pipeline and no shortage of exciting ideas. Their time is mostly spent reading and forming ideas for future research. So how can we judge this individual’s performance? Bibliometric analysis is not going to be helpful here, so we might consider looking at their examination results or peer-review comments. Other factors might include their networking activities: are they members of a scientific network? Do they contribute to the network’s discussions? Have they run any workshops or given conference presentations?

Year 5: underway to the next stage

By now the researcher has published a few articles, and is slowly building a reputation in the field. Their time is mostly spent conducting experiments, networking, and writing up articles. Here, metrics can be more useful in assessing performance than in the earlier stages, but traditional metrics based on averages will not provide an accurate measure because of the small number of publications and citations involved. More immediate metrics could be provided by looking at usage, or downloads, of the researcher’s articles (e.g., how many times have their publications been viewed in a certain database, such as Scopus?). Another relevant aspect could be collaboration: has the researcher published with other reputable researchers in different institutions and countries?

Year 10: established and independent

By the time a researcher gets to this stage of their career, the track record is sufficient for metrics such as the h-index to provide a meaningful measure of output. But one could also look at public presence: how does this person contribute to conferences and events in the relevant subject area, or as a keynote speaker, or in the media? In some fields, patent data may also be relevant.
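
For reference, here is a minimal sketch of the standard h-index calculation: the largest h such that at least h of the researcher’s papers have each received at least h citations. The citation counts used are invented for illustration.

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank          # the paper at this rank still has enough citations
        else:
            break
    return h

# Invented per-paper citation counts for an established researcher.
print(h_index([42, 30, 18, 11, 9, 6, 4, 2, 1, 0]))  # prints 6
```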

Let the data do the talking

Research Trends conducted a quick analysis in SciVal Strata (for previous analyses by Research Trends using Strata, see here), a new tool by Elsevier intended to provide a visualization of a researcher’s activity at different stages of their career. Researchers are able to assess their own complete scientific impact, and also to view themselves as part of a team. Strata is not constrained by one particular metric, as a single performance measure is certainly not sufficient to represent an individual researcher’s performance accurately or fairly.

In Figure 1, we see an anonymous young researcher a few years into their scientific career. In Figure 2, we see an experienced, established researcher. While this is just one of many possible ways of looking at performance, it is easy to see how the two profiles differ. The young researcher has only just started to publish and has not yet received any citations. The established researcher has papers in almost every year: the older ones have all been cited, and among the more recent ones there are only a few uncited papers.

Figure 1 – Profile for a young researcher. Only one paper has been published and this is yet to be cited. Source: SciVal Strata, data from Scopus.

Figure 2 – Profile for an established researcher. Dark-colored bars represent cited documents, and lighter-colored bars denote uncited documents. Source: SciVal Strata, data from Scopus.

Take-away lessons for individual assessment

For every stage in a researcher’s career, and for every goal that the person doing the assessment has in mind, there are appropriate tools and measurements. But it is important to bear in mind that consideration must also be given to non-bibliometric indicators that have value for a particular assessor or institution, and which can provide extra information that might tip the balance one way or the other in an overall assessment.

Some quotes by researchers on measuring performance:

“At the start of my scientific career, I measured my performance by looking at the respect I gained from my colleagues. Later, my measure of performance became the public significance of my work as expressed at local and international conferences. And in the final stage of my career, I aim at obtaining results which I can compare with the best achievements in my area of research”

“In my opinion the ‘research performance’ of an individual is very much defined by the influence of his/her ideas over the colleagues in his/her scientific field”


Is science in your country declining? Or is your country becoming a scientific super power, and how quickly?

This article looks at the interpretation of trends in publication counts by country.

Analysing longitudinal trends in the publication output of nations has a long tradition in the field of bibliometrics. Derek de Solla Price (1978)1 and Francis Narin (1976)2, two founding fathers of the field, began exploring the utility of this type of bibliometric analysis in the 1970s, and such analyses continue to have a considerable impact on both scientific and research policy debates.

A history of analysis

This national-level focus on scientific output is captured in many articles published over the past 25 years: ‘The continuing decline of British science’ (Martin et al., 1986)3; ‘La recherche française est-elle en bonne santé?’ (Callon & Leydesdorff, 1987)4; ‘The emergence of China as a leading nation in science’ (Zhou & Leydesdorff, 2006)5; ‘The race for world leadership of science and technology: status and forecasts’ (Shelton & Foland, 2009)6; ‘Tipping the balance: the rise of China as a science superpower’ (Plume, 2011; The Royal Society, 2011)7,8; and ‘Is Italian science declining?’ (Daraio & Moed, 2011)9.

Publication counting seems so simple. But one has to make a series of methodological decisions that specify precisely how the counting is carried out. These decisions determine the numbers that are generated, and should be taken into account when interpreting the outcomes and drawing conclusions from these figures. Table 1 lists ten crucial methodological factors in this process.

A case study: China versus US

Several recent studies have assessed trends in US publication output compared with that of China. They differ with respect to many of the factors listed in Table 1. All of these studies found both a decline in US output and an increase for China during specific sub-intervals of the 2000–2010 time period. They even ‘predict’, by means of extrapolation, the year in which China will surpass the US in total publication output. The extrapolated crossover years differ among the various studies, but range between 2013 and a date in the following decade.
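
A minimal sketch of the kind of extrapolation such studies perform is shown below: fit a straight line to each country’s trend and solve for the year the lines cross. The annual publication counts are invented for illustration and are not Scopus or WoS data.

```python
import numpy as np

# Invented annual publication counts (in thousands); not real Scopus or WoS data.
years = np.array([2006, 2007, 2008, 2009, 2010], dtype=float)
us_output = np.array([290.0, 295.0, 300.0, 305.0, 310.0])
china_output = np.array([150.0, 175.0, 200.0, 225.0, 250.0])

# Fit a straight line to each trend (years are centered to keep the fit
# well conditioned) and extrapolate to the year in which the lines cross.
t = years - years.mean()
us_slope, us_level = np.polyfit(t, us_output, 1)
cn_slope, cn_level = np.polyfit(t, china_output, 1)

crossover_t = (us_level - cn_level) / (cn_slope - us_slope)
print(f"Extrapolated crossover year: {years.mean() + crossover_t:.1f}")  # ~2013 with these inputs
```

With only five data points and an assumption of linear growth, the predicted crossover shifts substantially when any of the methodological choices in Table 1 changes the underlying counts, which is why the studies disagree.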

A recent study carried out by Loet Leydesdorff10 compared measures of scientific publication output generated by web versions of Web of Science (WoS) and Scopus. While the WoS analysis showed a steady decline in US output during 2000–2010, the Scopus results suggested that the US had a constant world share of publications during 2004–2009, and increased its share in 2010. A study conducted at Elsevier replicated the findings derived from Scopus’ web version. However, Elsevier’s study also used results derived from a special bibliometric version of Scopus created at Elsevier, one that draws on the same raw data as in the web version but loads it into a different software environment and applies several data-cleaning processes.

Figure 1 shows the outcomes of this comparison. Notably, the results for the US differ considerably between the two Scopus versions. These discrepancies are due to the fact that not all author affiliations contain the name of the country in which the authors’ institutions are located. This is especially true for US affiliations: many indicate the US state, but not the country name. In Chinese publications, such a phenomenon occurs less frequently, possibly because Chinese authors find it important to highlight their country of origin.

In Elsevier’s bibliometric version of Scopus a large fraction of the missing country names has been added, which increased the measured number of publications; in the web version of Scopus, however, this data cleaning is still ongoing (at present, only missing affiliations from 2010 have been added). The process operates backwards in time: by the end of the year, the years 2005–2009 will also be covered. So the increase in the US world share in 2010 previously derived from the web version of Scopus is due to the more complete capturing of affiliation countries in that year.
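
A minimal sketch of this kind of data-cleaning step is shown below: inferring ‘United States’ when an affiliation string ends in a state name rather than a country. The affiliation strings, the abbreviated state list and the matching rule are illustrative assumptions, not Elsevier’s actual cleaning procedure.

```python
# Abbreviated lookups; a real cleaning step would cover all states, territories,
# postal abbreviations and many more countries and spelling variants.
US_STATES = {"California", "Massachusetts", "New York", "Texas", "Michigan"}
KNOWN_COUNTRIES = {"United States", "China", "United Kingdom", "Germany"}

def infer_country(affiliation):
    """Return the affiliation country, inferring 'United States' when only a
    US state name is present (illustrative records, not real Scopus data)."""
    parts = [p.strip() for p in affiliation.split(",")]
    if parts[-1] in KNOWN_COUNTRIES:
        return parts[-1]
    if any(part in US_STATES for part in parts):
        return "United States"
    return None  # country genuinely missing; leave for manual cleaning

print(infer_country("Dept. of Physics, University of Michigan, Ann Arbor, Michigan"))
print(infer_country("Institute of Chemistry, Chinese Academy of Sciences, Beijing, China"))
```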

What does this mean?

This case illustrates once more how careful one must be when interpreting bibliometric trend data (even at the level of countries), how outcomes can differ between one database version and another, how affiliation practices can differ among countries, and how these differences can affect both numbers and annual trends.

There is no absolute norm for what constitutes good database coverage. Scopus tends to have more comprehensive coverage, especially of Chinese journals, while WoS has more selective journal coverage; each gives a specific view of US and Chinese output. Both databases show a declining trend for the US and an increasing one for China. The cross-over times differ, and come sooner for Scopus than for WoS, but this is to be expected from a database with more comprehensive coverage of Chinese journals.

Selection of a database: Which database does one use in the measurement? Coverage may differ substantially from one database to another.
Different versions of a database: Different versions of a database may exist. For instance, several groups have created their own bibliometric versions based on raw data from Scopus or Web of Science, adding information, performing data cleaning and so on. Results from such bibliometric versions may differ from those obtained with the web versions of the same databases.
Changes in database coverage: Database coverage may change over time; for instance, new journals may be added from a particular year onwards. How does one deal with these changes?
Adequacy of database coverage: Does a database cover the publication output of a country and/or research field sufficiently well? For instance, databases principally covering journals miss important output in the social sciences and humanities (published in books) and in engineering (published in conference proceedings).
Fractional versus integer counting: How should one count a paper co-published between a US and a UK author? As one US paper and one UK paper (integer counts)? Or as 0.5 US and 0.5 UK papers (fractional counts)? More sophisticated schemes can also be explored (a short counting sketch follows Table 1).
Absolute or relative counts: Does one analyze the absolute number of published articles, or article shares (for instance, the percentage of papers from a particular country relative to the total number of articles indexed in the database)?
Time period considered: To which time period does the data collection relate? This is especially important when examining longitudinal data. For instance, a country may show an increase in some years, and a steady state or even decline in a subsequent period.
Document types included in the counts: Databases index many types of documents: full research articles, but also shorter letters, reviews, editorials, discussion papers, and more. Which types should be included in the counts?
Publication year vs. database or tape year: A paper published at the end of a calendar year (e.g., in December 2010) may be included in the database in the next year (e.g., in March 2011). Is such a paper counted as a 2010 or as a 2011 paper?
Country delimitation: Papers are assigned to countries according to the geographical location of the institutions of the publishing authors. But how precisely is this done? Does the database include the affiliations of all authors? Have variations in country names been taken into account?

Table 1 – Methodological issues in bibliometric analysis of nations
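To make the difference between the two counting schemes in Table 1 concrete, here is a minimal sketch in Python with three invented papers: the integer scheme credits every country appearing on a paper with a full count, while the fractional scheme splits one credit over the paper’s affiliations.

    from collections import defaultdict

    # Each invented paper is represented by the countries of its author affiliations;
    # the first entry is the US/UK co-publication example from Table 1.
    papers = [
        ["US", "UK"],        # one internationally co-authored paper
        ["US"],              # a purely domestic US paper
        ["CN", "CN", "US"],  # two Chinese affiliations and one US affiliation
    ]

    integer_counts = defaultdict(int)
    fractional_counts = defaultdict(float)

    for countries in papers:
        for country in set(countries):
            integer_counts[country] += 1                      # full credit per country
        for country in countries:
            fractional_counts[country] += 1 / len(countries)  # credit split over affiliations

    print(dict(sorted(integer_counts.items())))
    # {'CN': 1, 'UK': 1, 'US': 3}
    print({c: round(v, 2) for c, v in sorted(fractional_counts.items())})
    # {'CN': 0.67, 'UK': 0.5, 'US': 1.83}

Note that the two schemes rank the same countries differently as soon as international co-authorship rates differ, which is one reason published national trends can diverge.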

Figure 1 – Scopus Bib V: data from bibliometric version of Scopus created at Elsevier; Scopus Web V: The Web version of Scopus. Source: Scopus.

References

1. Price, D.J.D. (1978) Towards a model for science indicators. In Toward a Metric of Science: The Advent of Science Indicators (eds Elkana, Y., Lederberg, J., Merton, R.K., Thackray, A. & Zuckerman, H.) (New York: John Wiley, pp. 69–95).
2. Narin, F. (1976) Evaluative Bibliometrics: The Use of Publication and Citation Analysis in the Evaluation of Scientific Activity. (Washington D.C.: National Science Foundation).
3. Martin, B.R., Irvine, J., Narin, F. & Sterritt, C. (1987) The continuing decline of British science. Nature, Vol. 330, pp. 123–126.
4. Callon, M. & Leydesdorff, L. (1987). La recherche française est-elle en bonne santé? La Recherche Vol. 18, pp. 412–419.
5. Zhou, P., & Leydesdorff, L. (2006) The emergence of China as a leading nation in science. Research Policy, Vol. 35, pp. 83–104.
6. Shelton, R. D. & Foland, P. (2009) The race for world leadership of science and technology: status and forecasts. Proceedings of the 12th International Conference of the International Society for Scientometrics and Informetrics (eds Larsen, B & Larsen, J.), Volume I, pp. 369–380 (Rio de Janeiro, Brazil, July 14–17, 2009).
7. Plume, A. (2011) Tipping the balance: The rise of China as a science superpower. Research Trends, Issue 22.
8. The Royal Society (2011) Knowledge, Networks and Nations: Global Scientific Collaboration in the 21st Century.
9. Daraio, C. & Moed, H.F. (2011). Is Italian science declining? Research Policy, Vol. 40, pp. 1380–1392.
10. Leydesdorff, L. (2011). World shares of publications of the USA, EU-27, and China compared and predicted using the new interface of the Web-of-Science versus Scopus. arXiv:1110.1802v2 [cs.DL].

Comment by Loet Leydesdorff:

When can the cross-over between China and the USA be expected using Scopus data?

Moed et al.’s article1 is a reaction to a recent paper2 in which I showed that the cross-over between China and the USA would be postponed until after 2020 when using the Science Citation Index-Expanded of Thomson Reuters. By contrast, a team at Elsevier had argued, in a report of the Royal Society and on the basis of Scopus data, for a possible cross-over as early as 2013 (Refs 3, 4).

Figure 1 – Predicted cross-over between the USA and China based on the new Scopus data; confidence intervals at the 95%-level. (SPSS, v.18.) Sources: Moed et al. (2011)1; the open circles are from Leydesdorff (2011)2.

The new analysis additionally clarifies why the linear fit for the US data remains poor (R² = 0.71): it is because of problems with this data. The fit for China, however, does not differ from previously reported studies (R² = 0.97). Using the Science Citation Index (WoS v.5), one finds more precise fits and therefore a more reliable prediction of a cross-over occurring after 2020. As noted2, such longer-term predictions are unlikely to be valid because of decreasing marginal returns in competitive markets. The metrics are embedded in a long-standing debate which I first entered in 1987 (see Refs 5, 6). Given the new data, the prediction in the report of the Royal Society that the cross-over in the Scopus database would take place as early as 2013 can be postponed by approximately two years.


Loet Leydesdorff
Amsterdam School of Communication Research,
University of Amsterdam,
http://www.leydesdorff.net; loet@leydesdorff.net

References:

1. Moed, H.F., Plume, A., Aisati, M. & Berkvens, P. (2011). Is science in your country declining? Or is your country becoming a scientific super power, and how quickly? Research Trends, Issue 25.
2. Leydesdorff, L. (2011). World shares of publications of the USA, EU-27, and China compared and predicted using the new interface of the Web-of-Science versus Scopus. arXiv:1110.1802v2 [cs.DL].
3. The Royal Society (2011) Knowledge, Networks and Nations: Global Scientific Collaboration in the 21st Century.
4. Plume, A. (2011) Tipping the balance: The rise of China as a science superpower. Research Trends, Issue 22.
5. Callon, M. & Leydesdorff, L. (1987). La recherche française est-elle en bonne santé? La Recherche Vol. 18, pp. 412–419.
6. Shelton, R. D., & Leydesdorff, L. (in press). Publish or patent: bibliometric evidence for empirical trade-offs in national funding strategies. Journal of the American Society for Information Science and Technology.

Letter to the editor

Dear Sarah Huggett, dear Editor, We read with great interest your recent article “Heading for success: or how not to title your paper” in Issue 24 (September 2011) of Research Trends. For articles focusing on trends it is important to cover the most recent literature, and we would therefore like to draw your attention to the […]

Read more >


Dear Sarah Huggett, dear Editor,

We read with great interest your recent article “Heading for success: or how not to title your paper” in Issue 24 (September 2011) of Research Trends. For articles focusing on trends it is important to cover the most recent literature, and we would therefore like to draw your attention to the following.

In our recent contribution to the Journal of Informetrics1, we present a broad analysis of the occurrence and impact of all non-alphanumeric characters in 650,000 titles of peer-reviewed publications published between 1999 and 2008. In this analysis, we did not limit our investigation to a single field, but sampled publications from all fields, including the social sciences and humanities, insofar as they are available in the Web of Science database.

Based on this extensive analysis, we draw the following main conclusions regarding the effect on impact:

1. Inclusion of a non-alphanumeric character has a positive effect on impact.

2. However, this effect is not visible if the most frequent non-alphanumeric characters (the hyphen, colon, comma, and left and right parenthesis) are excluded from the calculation.

3. In specific major fields, the effect can:

  • be positive, like in the overall case;
  • be negative, i.e. including a non-alphanumeric character has an adverse effect;
  • be absent, i.e. there appears to be no significant effect.

Additionally, we found that the relative occurrence of non-alphanumeric characters in titles did not increase between 1999 and 2008. This result came as something of a surprise, because some of the most cited analyses of specific non-alphanumeric characters in titles (see for instance Ref. 2) could give the impression that such characters (at least the colon and the question mark) “are on the rise” in science. Although this may be true within the limited scope of those analyses, we conclude that it does not hold for scientific publications in general.
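A minimal sketch of this kind of measurement, using invented example records rather than the actual Web of Science data, is to compute, per year, the share of titles containing at least one non-alphanumeric character:

    import re
    from collections import defaultdict

    # Invented (year, title) records standing in for the 650,000 WoS titles.
    records = [
        (1999, "Impact of the colon: preliminary reactions"),
        (1999, "A plain descriptive title"),
        (2008, "Are shorter titles more attractive for citations?"),
        (2008, "Another plain descriptive title"),
    ]

    def has_special(title):
        # 'Non-alphanumeric' here means neither a letter, a digit, nor a space.
        return bool(re.search(r"[^A-Za-z0-9\s]", title))

    totals, with_special = defaultdict(int), defaultdict(int)
    for year, title in records:
        totals[year] += 1
        with_special[year] += has_special(title)

    for year in sorted(totals):
        share = with_special[year] / totals[year]
        print(f"{year}: {share:.0%} of titles contain a non-alphanumeric character")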

In the discussion of our results, we hypothesize that they can be explained by the need for authors to conform to a general title format. If an author strays too far from this format, he or she may run an increased risk that the title is disregarded by peers, reducing the chance of being cited. Such a conclusion is clearly in line with both Srivastava’s and Blencowe’s remarks, from which we gather that they would refrain from using a title that is “special” in any way and might disturb the message they want to convey.

Although our conclusions do not disagree with the results of Jamali and Nikzad (2011), nor with your analysis of the Cell publications, we think that our contribution adds important nuance to both results. For an author it is advisable to take a good look at the general format of titles in the field they want to publish in. So, if that field features a lot of question marks or exclamation marks, then do include such characters in a title. Conversely, if that field rarely uses colons in the titles of regular articles, then it may not be advisable to include such a colon (let alone three).

Leiden, October 2011

Reindert (Renald) K. Buter
Anthony (Ton) F.J. Van Raan

References

  • Buter, R.K. & Van Raan, A.F.J. (2011) Non-alphanumeric characters in titles of scientific publications: an analysis of their occurrence and correlation with citation impact. Journal of Informetrics, Vol. 5, pp. 608–617.
  • Dillon, J.T. (1982) Impact of the colon: Preliminary reactions. American Psychologist, Vol. 37, p. 716.
  • Jamali, H.R. & Nikzad, M. (2011) Article title type and its relation with the number of downloads and citations. Scientometrics, Vol. 88, pp. 653–661.

Mapping & Measuring Scientific Output

This event focused on scientific output measurements, methodologies and mapping techniques. Research Trends summarizes and reports back.

Read more >


Hundreds of delegates participated in a day-long symposium on scientific evaluation metrics, held in Santa Fe, New Mexico, on May 10, 2011, the result of a collaboration between Elsevier and Miriam Blake, Director of the Los Alamos National Laboratory (LANL) Research Library. The symposium focused on scientific output measurements, methodologies and mapping techniques, and was broadcast globally from the conference ballroom, where world-renowned speakers presented and discussed existing and emerging metrics used to evaluate the value and impact of research publications.

Measuring the impact and value of scientific publications is critical as governments increasingly seek to distribute research funds in ways that support high-quality research in strategically important fields. Over the years, several methodologies and metrics have been developed to address some of the many factors that can be taken into consideration when assessing scientific output. New evaluative metrics have emerged to capitalize on the wealth of bibliometric data by analyzing citation counts, article usage, and the emergence and significance of collaborative scientific networks. In addition, ever-increasing computational power enables rigorous relative and comparative analyses of journal citations and publication relationships to be calculated and used in novel ways, including a variety of visualization solutions.

Symposium themes generated from registrants’ feedback and comments

New developments

The symposium offered insight into the topic of research evaluation metrics as a whole. The discussion was headed by Dr. Eugene Garfield and Dr. Henk Moed, who addressed established and emerging trends in bibliometrics research, with both stressing the necessity of using more than one method to accurately capture the impact of research publications and authors. Emerging and innovative approaches using journal and scientific networks, weighted reference analysis and article usage data were also presented. Dr. Jevin West discussed the latest developments in the EigenFactor project (http://www.eigenfactor.org); Dr. Henry Small unveiled a new method for citation text mining in emerging scientific communities; Dr. Johan Bollen presented the MESUR project (http://www.mesur.org/MESUR.html); and Dr. Kevin Boyack (http://mapofscience.com) demonstrated how co-citation analysis can be used to identify emerging and established scientific competencies within institutions as well as countries. The methodological discussion was accompanied by a demonstration of visualization solutions that capture these relationships and enable a broad view of scientific trends, networks and research foci.

More than a thousand words

The power of visualizing such networks was demonstrated by Dr. Katy Börner, who headed the discussion on the variety and diversity of scientific mapping tools. Dr. Börner brought a wealth of examples through the “Places & Spaces” (http://scimaps.org/) exhibition, which was displayed in the conference room and via her presentation. Maps for scientific policy and economic decision makers, along with maps for forecasting and research references, were among the examples displayed and discussed by Dr. Börner. This was followed by Mr. Bradford Paley, who discussed visual and cognitive engineering techniques that support the analysis of scientometric networks (http://wbpaley.com/brad/Elsevier.html).

Multidimensionality

Given the evident paradigm shift from print and paper to both official and unofficial online networks, and the wealth of usage data they offer, the main discussion point of the symposium was the need for multidimensional measurements that capture and represent the complex arena we call “Scientific Impact”. Today, using a single method, value or score to determine whether a researcher, research group or institution is truly impactful seems untenable. If there is a lesson to be learned from this research event, it is that the scientific community has to find the correct and fair balance between a variety of computational metrics and qualitative peer-review processes.

Presentations & Audio recordings of this event are available:
http://www.elsevier.com/wps/find/librarianshome.librarians/LCPresentations


A “democratization” of university rankings: U-Multirank

Research Trends reports on a new multi-dimensional approach to ranking universities. Its unique property is that it does not collapse different scores into one overall score, thereby increasing transparency.

Read more >


The rise of university ranking systems has engendered status anxiety among many institutions, and created a “reputation race” in which they strive to place higher up the university charts year on year. Concerns have been aired that this is leading to a homogenization of the university sector, as aspiring institutions imitate the model of more successful research-intensive institutions. And while ranking scores do capture an important aspect of each university’s overall quality, they do not speak to a diverse range of other issues, such as student satisfaction within these institutions.

U-Multirank is a new initiative to change this. The system — designed and tested by the Consortium for Higher Education and Research Performance Assessment, and supported by the European Commission — aims to increase transparency in the information available to stakeholders about universities, and encourage functional diversity of the institutions1. Unlike traditional university rankings such as the ARWU2, QS3 and THE4 rankings, U-Multirank features separate indicators that are not collapsed into an overall score. In this article Frans van Vught, project leader of U-Multirank, discusses development of the system and his hopes for it.

Traditional university ranking systems encourage institutions to focus on areas that carry the greatest ranking weight, such as scientific research performance. One benefit of these rankings is that they publicize the achievements of universities that perform well, albeit in this specific range of activities. Will U-Multirank move away from a culture of looking for success stories?

U-Multirank is a multi-dimensional, user-driven ranking tool, addressing the functions of higher education and research institutions across five dimensions: research, education, knowledge exchange, regional engagement and international orientation. In each dimension it offers indicators to compare institutions. In this sense it certainly focuses on the goals institutions set themselves. But unlike most current rankings, U-Multirank does not limit itself to one dimension only (research). It allows institutions to show whether they are winners or improvers over a range of dimensions.

As it is impossible to directly measure the quality of an institute, proxy measures, such as graduation rates and publication output, have to be used instead. Yet as Geoffrey Boulton argues, “[i]f ranking proxies are poor measures of the underlying value to society of universities, rankings will at best be irrelevant to the achievement of those values, at worst, they will undermine it.”5 What criteria have you considered when selecting indicators, and are there indicators you would like to include but cannot at present?

When ranking in higher education and research we need to work with proxy indicators, since a comprehensive and generally acceptable set of indicators for ‘quality’ does not exist. Quality and excellence are relative concepts and can only be judged in the context of the purposes stakeholders relate to these concepts. Quality in this sense is ‘fitness for purpose’, and purposes are different for different stakeholders.

For the selection of U-Multirank’s indicators we made use of a long and intensive process of stakeholder consultation, which included a broad variety of stakeholders, including the higher education and research institutions themselves. This stakeholder consultation reflected the criterion of ‘relevance’ in the process of indicator selection. In addition we used the criteria of validity, reliability, comparability and feasibility. For ‘feasibility’ we focused on the availability of data and the effort required to collect extra data. We tried to ensure that data availability would not become the most important factor in the selection process. However, the empirical pilot test of the feasibility of U-Multirank indicators showed that particularly in the dimensions of ‘knowledge exchange’ and ‘regional engagement’ data availability is limited.

A recent report drew attention to U-Multirank’s ‘traffic light’ rating system, commenting that “institutions should not be ranked on aspects that they explicitly choose not to pursue within their mission.”6 Is this a valid criticism? Could it lead — as the authors suggest — to a decrease in functional diversity as “institutions compet[e] to avoid being awarded a poor ranking against any of the criteria”?

I think this argument is invalid. U-Multirank is user-driven. This is based on the fundamental epistemological position that any description of reality is conceptually driven; rankings imply a selection of reality aspects that are assumed to be relevant. Any ranking reflects the conceptual framework of its creator, who should therefore be a user of the ranking. U-Multirank is a ‘democratization’ of rankings.

We designed a tool that allows users to select the institutions or programs they are interested in. This is U-Map7, a mapping instrument that allows the selection of institutional activity profiles. In U-Multirank only comparable institutions are compared: apples are compared with apples, not oranges. Institutions that do not pursue certain mission aspects should not be compared on these aspects. U-Multirank is designed to avoid this, so as not to encourage imitation or discourage functional diversity. On the contrary, U-Multirank shows and supports the rich diversity in higher education systems.

However harmful they may be to the goal of encouraging diversity, traditional ranking systems have the advantage that people know how to read them: the simplest comparison between universities is seeing which has the higher rank. Will U-Multirank’s users need guidance to compare institutions?

We hope to address both the wish to have a general picture of institutional performances and the wish to go into detail. U-Multirank offers a set of presentation modes that allow both a quick and general overview of multidimensional performance on the one hand, and a more detailed comparison per dimension on the other. Testing these presentation modes with different groups of stakeholders showed that our approach was highly appreciated and additional guidance was not needed. The general overview is presented in the so-called ‘sunburst charts’ that show a multidimensional performance profile per institution (see Figure 1). The detailed presentations are offered as tables in which performance categories are shown per indicator. 

Figure 1 – U-Multirank’s ‘sunburst’ charts “[give] an impression ‘at a glance’ of the performance of an institution”.8 The charts show the performance of each institution across a number of indicators, with one ‘ray’ per indicator: where an institution ranks highly in an indicator, the ‘ray’ is larger. These indicators are grouped into categories around the chart. These two charts show the performance of two institutions: a large Scandinavian university (top) and a large southern European university (bottom).
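The charts themselves cannot be reproduced here, but the idea can be sketched: each indicator becomes one radial bar whose length reflects the institution’s performance on that indicator. The following is a rough, hypothetical matplotlib sketch with invented scores, not the actual U-Multirank implementation:

    import numpy as np
    import matplotlib.pyplot as plt

    # Invented indicator scores (0-1) for one institution; one 'ray' per indicator.
    indicators = {
        "research": 0.9,
        "education": 0.6,
        "knowledge exchange": 0.4,
        "regional engagement": 0.7,
        "international orientation": 0.8,
    }

    labels = list(indicators)
    scores = np.array(list(indicators.values()))
    angles = np.linspace(0, 2 * np.pi, len(scores), endpoint=False)

    ax = plt.subplot(projection="polar")
    ax.bar(angles, scores, width=2 * np.pi / len(scores) * 0.9, bottom=0.1)
    ax.set_xticks(angles)
    ax.set_xticklabels(labels, fontsize=8)
    ax.set_yticklabels([])  # only the relative length of each ray matters here
    ax.set_title("Sunburst-style performance profile (invented data)")
    plt.show()

Because the rays are never summed into a single score, two institutions with very different profiles can both look “strong” without being forced into one league-table position.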

Curriculum Vitae: Frans van Vught

Frans van Vught (1950) is a high-level expert and advisor at the European Commission. In addition he is president of the European Center for Strategic Management of Universities (ESMU), president of the Netherlands House for Education and Research (NETHER), and member of the board of the European Institute of Technology Foundation (EITF), all based in Brussels. He was president and Rector of the University of Twente, the Netherlands (1997–2005). He has been a higher education researcher for most of his life and published widely in this field. His many international functions include membership of the University Grants Committee of Hong Kong, the board of the European University Association (EUA) (2005–2009), the German ‘Akkreditierungsrat’ (2005–2009), and the L.H. Martin Institute for higher education leadership and management in Australia. Van Vught is a sought-after international speaker and has been a consultant to many international organizations, national governments and higher education institutions all over the world. He is honorary professor at the University of Twente and at the University of Melbourne, and holds several honorary doctorates.

References:

  1. http://www.u-multirank.eu/
  2. http://www.arwu.org/index.jsp#
  3. http://www.topuniversities.com/university-rankings/world-university-rankings
  4. http://www.timeshighereducation.co.uk/world-university-rankings
  5. Boulton, G. (2010). University rankings: Diversity, excellence and the European initiative. League of European Research Universities Advice Paper. No. 3, June.
  6. Beer, J. et al. (2011). Let variety flourish. Times Higher Education, 2 June.
  7. http://www.u-map.eu/
  8. U-Multirank (2011). The design and testing the feasibility of a multi-dimensional global university ranking. Draft version for distribution at the U-Multirank conference, Brussels, Thursday 9 June 2011.

Heading for success: or how not to title your paper

Does the choice of a certain title influence citation impact? Research Trends investigates the relevant properties of a scientific title, from length to punctuation marks.

Read more >


The title of a paper acts as a gateway to its content. It’s the first thing potential readers of the paper see, before deciding to move on to the abstract or full text. As academic authors want to maximize the readership of their papers it is unsurprising that they usually take a lot of care in choosing an appropriate title. But what makes a title draw in citations?

Is longer better?

Bibliometric analyses can be used to illuminate the influence of titles on citations. Jamali and Nikzad, for example, found differences between the citation rates of articles with different types of titles. In particular, they found that articles with a question mark or colon in their title tend to be cited less1. The authors noted that “no significant correlation was found between title length and citations”, a result that conflicts with a study by Habibzadeh and Yadollahie, which found that “longer titles seem to be associated with higher citation rates”2.

Research Trends investigates

Faced with inconsistent evidence, Research Trends decided to conduct its own case study of scholarly papers published in Cell between 2006 and 2010, and their citations within the same window. Overall, there was no direct correlation between title length (measured in number of characters) and total citations. However, comparing the citation rates of articles with titles of different lengths revealed that papers with titles between 31 and 40 characters were cited the most (see Figure 1). There were also differences in the average number of citations per paper depending on the punctuation used in the titles: for instance, the few papers with question marks in their titles were clearly cited less, but titles containing a comma or colon were cited more (see Figure 2). There were no papers with a semicolon in their title, and only one (uncited) paper with an exclamation mark in its title. It is interesting to note that the ten most cited papers in Research Trends’ case study did not contain any punctuation at all in their titles.
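A minimal sketch of the binning used in this kind of case study, assuming a simple list of (title, citation count) pairs rather than the actual Scopus extract:

    from collections import defaultdict

    # Invented (title, citations) pairs standing in for the Cell 2006-2010 corpus.
    papers = [
        ("A short title", 120),
        ("A considerably longer and more descriptive title", 45),
        ("Does punctuation matter?", 10),
        ("Cardiac development: a regulatory view", 80),
    ]

    bins = defaultdict(lambda: [0, 0])  # bin start -> [citation sum, paper count]
    for title, cites in papers:
        start = (len(title) - 1) // 10 * 10 + 1   # bins of 1-10, 11-20, 21-30, ... characters
        bins[start][0] += cites
        bins[start][1] += 1

    for start in sorted(bins):
        total, n = bins[start]
        print(f"{start}-{start + 9} characters: {n} paper(s), {total / n:.1f} citations on average")

A similar grouping by punctuation mark (question mark, colon, comma, and so on) produces averages of the kind shown in Figure 2.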

The authors explain

Research Trends contacted authors from highly cited papers in its corpus for their take on the influence of titles on citations. For some authors, such as Professor Deepak Srivastava — who published a paper in Cell with a title that included three commas3 — the main emphasis when choosing a title is semantics: "We chose a title that would reflect the major findings of the paper and the conclusion we would like the field to derive from the contribution. I don't pay too much attention to the title's effect on citations." Interestingly, different criteria are used for title assignment depending on the type of paper, as explained by Professor Ben Blencowe: "For research articles, I try to use titles that are concise while conveying the most interesting and surprising new results from the study. For review titles, I generally start with the main overall subject followed by a colon and then one or more subtopics that best describe the contents of the review. My 2006 Cell review on alternative splicing4 followed this format. It is not clear to me that this format increases citation impact — I would hope that the overall information content, timeliness and quality of writing in a review are directly related to citation impact! — but using punctuation in this way helps to convey at a glance what the review is about."

 

Are you having a laugh?

Given that straightforwardly descriptive paper titles run the risk of being dull, some authors are tempted to spice them up with a touch of humour, which may be a pun, a play on words, or an amusing metaphor. This, however, is a risky strategy. An analysis5 of papers published in two psychology journals, carried out by Sagi and Yechiam, found that “articles with highly amusing titles […] received fewer citations”, suggesting that academic authors should leave being funny to comedians.

In sum, the citation analysis of papers according to title characteristics is better at telling authors what to avoid than what to include. Our results, combined with others, suggest that a high-impact paper should have a title that is neither too short nor too long (somewhere between 30 and 40 characters appears to be the sweet spot for papers published in Cell). It may also be advisable to avoid question marks and exclamation marks (though colons and commas do not seem to have a negative impact on subsequent citation). And even when you think you have a clever joke to work into a title, it probably won’t help you gain citations. Finally, while a catchy title can help get readers to look at your paper, it’s not going to turn a bad paper into a good one.

Figures

Figure 1 – Average number of citations per paper by title length for papers published in Cell 2006–2010, and their citations within the same window. Data labels show number of papers. Source: Scopus.

Figure 2 – Average number of citations per paper by punctuation mark for papers published in Cell 2006–2010, and their citations within the same window. Data labels show number of papers. Source: Scopus.

References

  1. Jamali, H.R. & Nikzad, M. (2011) Article title type and its relation with the number of downloads and citations. Scientometrics, online first. DOI: 10.1007/s11192-011-0412-z
  2. Habibzadeh, F. & Yadollahie, M. (2010) Are Shorter Article Titles More Attractive for Citations? Cross-sectional study of 22 scientific journals. Croatian Medical Journal, Vol. 51, No. 2, pp. 165–170.
  3. Zhao, Y., Srivastava, D., Ransom, J.F., Li, A., Vedantham, V., von Drehle, M., Muth, A.N., Tsuchihashi, T., McManus, M.T. & Schwartz, R.J. (2007) Dysregulation of cardiogenesis, cardiac conduction, and cell cycle in mice lacking miRNA-1–2. Cell, Vol. 129, No. 2, pp. 303–317.
  4. Blencowe, B.J. (2006) Alternative splicing: new insights from global analyses. Cell, Vol. 126, No. 1, pp. 848–858.
  5. Sagi, I. & Yechiam, E. (2008) Amusing titles in scientific journals and article citation. Journal of Information Science, Vol. 34, No. 5, pp. 680–687.


Emerging scientific networks

In countries with an emerging science base, how much do key institutions participate in “outward” versus “inward” collaboration networks? This article shows different patterns in these networks for different countries.

Read more >


Examining the scientific output of countries around the world is an effective way to identify emerging scientific competencies at the national, institutional and topical levels. Various studies in this area have identified up-and-coming countries in South America, Africa, Asia and Europe1–5. These studies not only describe how particular countries invest in their science, by mapping publications across research topics and disciplines, but can also help identify the factors that stimulate or hinder scientific development by pointing to over- or under-investment in particular fields or research groups.

Inward or outward?

In this piece we focus not only on the scientific output as seen in publications, but also on the formation of emerging scientific networks in a number of countries from different geographical regions. These networks were defined in terms of “Inward” and “Outward” connections. “Inward” connections denote scientific collaborations mostly conducted between institutions in the same country; “Outward” connections are those between institutions in different countries. Looking at the Inward/Outward characteristics of these emerging scientific networks reveals the differences between institutions and countries at the level of international versus domestic scientific participation, and also helps identify the specific disciplines and topics that foster such scientific network exchanges.
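To make this classification concrete, the sketch below shows one way the Inward/Outward labelling could be automated once co-author affiliations have been extracted. The sample records and the simple majority rule used to summarize each country's orientation are assumptions made purely for illustration; they are not the exact procedure used in this study.

```python
# A rough sketch of the Inward/Outward classification described above, under
# simplifying assumptions. Each hypothetical record gives the focal
# institution's country and the countries of its partners on one paper;
# real affiliation data would come from bibliographic records.
from collections import Counter

papers = [
    ("Saudi Arabia", ["India", "Pakistan"]),
    ("Saudi Arabia", ["India"]),
    ("Singapore", ["Singapore"]),
    ("Singapore", ["Singapore", "Singapore"]),
    ("South Africa", ["South Africa"]),
    ("Czech Republic", ["France", "United Kingdom"]),
]

def classify(focal, partners):
    """Label a paper Outward if any partner is in another country, else Inward."""
    return "Outward" if any(p != focal for p in partners) else "Inward"

tallies = Counter()
for focal, partners in papers:
    tallies[(focal, classify(focal, partners))] += 1

for focal in sorted({country for country, _ in tallies}):
    inward = tallies[(focal, "Inward")]
    outward = tallies[(focal, "Outward")]
    orientation = "Inward" if inward >= outward else "Outward"
    print(f"{focal}: {inward} inward, {outward} outward -> mostly {orientation}")
```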

Countries of interest

This study focuses on the analysis and identification of institutions in selected countries in Africa, Central America, Eastern Europe, Arab nations and South Asia, all of which have shown a surge in scientific output in the past five years. The analysis was conducted in four steps. First, a list of countries per region was compiled (see Table 1). Second, these countries were searched in the Scopus database for publications from 2005–2010. Third, the country with the highest number of publications in each region was examined individually to identify the institution with the highest scientific output. Finally, using the Scopus Affiliation Profile, each institution's topics and collaborations were analyzed further. The results are presented in Table 2.

Africa: South Africa, Nigeria, Egypt, Kenya, Tunisia, Algeria, Morocco, Uganda, Namibia, Ghana, Cameroon
Eastern Europe: Estonia, Latvia, Lithuania, Poland, Czech Republic, Slovakia, Hungary, Romania, Bulgaria, Slovenia, Croatia, Bosnia-Herzegovina, Serbia, Kosovo, Albania, Montenegro, Macedonia
Arab Countries: Iran, Iraq, Jordan, Lebanon, Qatar, Saudi Arabia, Syria, Turkey, United Arab Emirates, Yemen
Central & South America: Panama, Costa Rica, El Salvador, Nicaragua, Honduras, Guatemala, Belize, Argentina, Brazil, Bolivia, Chile, Colombia, Ecuador, Paraguay, Peru, Uruguay, Venezuela
South Asia: Indonesia, Laos, Malaysia, Philippines, Singapore, Thailand, Vietnam

Table 1 – List of selected regions and countries.

Table 2 shows, for each region, the country with the highest number of publications in 2005–2010 and the institution within that country with the largest output. These results suggest that the nature of scientific collaborations (Outward or Inward) is not determined by the scientific field under study. For example, both Singapore and Saudi Arabia published heavily in Engineering, yet the two countries show different collaborative characteristics: the former's collaborations tend to be Inward, while the latter's are Outward. Similarly, while both South Africa and the Czech Republic are strong in Medicine, they typically engage in Inward and Outward collaborations, respectively.

Close to sight, close to heart

In fact, the distinction between Outward and Inward collaborations obscures the fact that even Outward collaborations tend to occur between geographically close countries. For example, Saudi Arabia’s Outward collaborations are largely carried out with groups based in India and Pakistan, countries that are relatively close to Saudi Arabia compared with, say, the US or Western Europe. Likewise, while the Czech Republic collaborates with institutions outside its borders, those institutions are typically geographically close (that is, in other European countries).
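One informal way of quantifying “relatively close” is to compare great-circle distances between the countries involved. The short sketch below does this for Saudi Arabia's main partner countries against a Western reference point; the choice of cities and the rounded coordinates are illustrative assumptions made for this example, not part of the study's method.

```python
# Back-of-the-envelope check of the proximity observation: compare great-circle
# distances from Riyadh to capitals of partner countries and to a Western
# reference point. Coordinates are approximate; city choices are illustrative.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

riyadh = (24.7, 46.7)
cities = {
    "New Delhi (India)": (28.6, 77.2),
    "Islamabad (Pakistan)": (33.7, 73.1),
    "Washington, DC (US)": (38.9, -77.0),
}

for name, (lat, lon) in cities.items():
    print(f"Riyadh -> {name}: {haversine_km(*riyadh, lat, lon):.0f} km")
```

Even this rough calculation shows the partner capitals lying several thousand kilometres closer to Riyadh than the Western reference point.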

This examination of disciplinary foci and collaborative formations shows that, despite differences in research activities and collaborative trends, collaborations are typically formed between institutions that are relatively close geographically. This trend could be the result of many factors. For example, researchers may be more likely to form personal connections with colleagues from nearby countries, perhaps because they encounter each other at regional talks and conferences more often than colleagues from countries further afield. In addition, researchers may find it easier to work with colleagues who share the same language or other cultural characteristics.

Region | Country | Most productive institution | Dominant disciplines | Collaborative orientation | Major collaborators
Africa | South Africa | University of Cape Town | Medicine; Agricultural & Biological Sciences | Inward | Univ Stellenbosch; Univ Witwatersrand; Groote Schuur Hospital; South African Medical Research Council
Central America | Costa Rica | Universidad de Costa Rica | Agricultural and Biological Sciences | Outward | Texas A&M Univ; Smithsonian Tropical Research Institute; Univ Nacional Autónoma de México; Univ Sao Paulo
Eastern Europe | Czech Republic | Univerzita Karlova v Praze (Charles University, Prague) | Medicine; Biochemistry | Outward | Institutions in Russia, France and the UK
Arab | Saudi Arabia | King Fahd Univ of Petroleum and Minerals | Engineering | Outward | Institutions in IEEE, India and Pakistan
South Asia | Singapore | National University of Singapore | Engineering; Physics and Astronomy | Inward | Inst. Materials Research and Engineering, A-Star; Inst. Infocomm Research, A-Star; Yong Loo Lin School of Medicine

Table 2 – Most productive institutions and their collaborators in five countries.

References

  1. Toivanen, H. & Ponomariov, B. (2011) African regional innovation systems: Bibliometric analysis of research collaboration patterns 2005–2009. Scientometrics, Vol. 88, No. 2, pp. 471–493.
  2. Zhou, P. & Glänzel, W. (2010) In-depth analysis on China's international cooperation in science. Scientometrics, Vol. 82, No. 3, pp. 597–612.
  3. De Castro, L.A.B. (2005) Strategies to assure adequate scientific outputs by developing countries – A Scientometrics evaluation of Brazilian PADCT as a case study. Cybermetrics, Vol. 9, No. 1.
  4. García-Carpintero, E., Granadino, B. & Plaza, L.M. (2010) The representation of nationalities on the editorial boards of international journals and the promotion of the scientific output of the same countries. Scientometrics, Vol. 84, No. 3, pp. 799–811.
  5. Nguyen, T.V. & Pham, L.T. (2011) Scientific output and its relationship to knowledge economy: An analysis of ASEAN countries. Scientometrics, online first (1 July 2011), pp. 1–11.

  • Elsevier has recently launched the International Center for the Study of Research (ICSR) to help create a more transparent approach to research assessment. Its mission is to encourage the examination of research using an array of metrics and a variety of qualitative and quantitative methods.