Articles

Research Trends is an online magazine providing objective insights into scientific trends based on bibliometric analyses.

On the assessment of institutional research performance

This article distinguishes between top-down and bottom-up approaches to assessing institutional performance, and contrasts ‘article downloads’ with citation data.

Introduction

A standard way of bibliometrically analyzing the performance of an institution is to select all of its publications and then calculate publication- and citation-based indicators for the institution as a whole. But there are other ways of assessing performance, and these come in top-down and bottom-up varieties. In general, bottom-up approaches tend to produce more reliable results than top-down ones, and also make it possible to look at performance at the level of groups of departments within an institution. Below, we illustrate a new set of indicators based on “usage”.

Top-down and bottom-up approaches 

One of the most challenging tasks in bibliometric studies is to correctly identify and assign scientific publications to the institutions and research departments in which the authors of the paper work. Over the years, two principal approaches have been developed to tackle this task.

 The first is the top-down approach, which is used in many, if not all, ranking studies of universities. In a top-down assessment, one typically notes the institutional affiliations of authors on scientific publications, and then selects all publications with a specific institutional affiliation. Even though this process is very simple, difficulties can arise. These can be conceptual issues (e.g., are academic hospitals always a part of a university?) or problems of a more technical nature (e.g., an institution’s name may appear in numerous variations). A bibliometric analyst must therefore be aware of these potential problems, and address them properly.

 The second, bottom-up approach begins with a list of researchers who are active in a particular institution. The next step is to create a preliminary list of all articles published by each researcher, which are sent to these individuals for verification to produce a verified database. This approach allows for the grouping of authors into research groups, departments, research fields, networks, or for an analysis of the entire institution.

 While top-down approaches can be conducted more easily than bottom-up studies, mainly because they do not directly involve the researchers themselves, they are often less informative than bottom-up ones. For example, top-down approaches cannot inform managers about which particular researchers or groups are responsible for a certain outcome, nor can they identify collaborations between departments. So despite the ease of use of top-down approaches, there is a need to supplement them with bottom-up analyses to create a comprehensive view of an institution’s performance.

The analysis of usage data

A different method of assessing an institute’s performance is to analyze the ‘usage’ of articles, as opposed to citations of articles. Usage, in our analysis, is measured as the number of clicks on links to the full text of articles in Scopus.com, each click indicating a Scopus.com user’s intention to view a full-text article. Here we use a case study of an anonymous “Institute X” in the United Kingdom as an example of what usage data analysis has to offer.

In this case study, we analyzed papers from 2003–2009 and usage data from 2009. We first identified the countries that clicked through to the full text of articles with at least one author based at Institute X. Next, we determined the total number of full-text UK articles accessed by each country, and calculated the proportion of these that were linked to Institute X (that is, articles with at least one author based at the Institute). Finally, we identified the 30 countries with the highest proportion of downloads of articles affiliated with Institute X. The results are shown in Figure 1.

Figure 1 – For the Top 30 countries viewing UK articles, the percentage of downloads of articles with at least one author from Institute X compared to downloads of all articles with at least one author from the UK. Source: Scopus.
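
To make the calculation described above concrete, here is a minimal sketch (illustrative only: the record format and numbers are assumptions, not actual Scopus data) that computes, for each country, the percentage of its downloads of UK-authored articles that involve an Institute X author, and ranks the countries:

```python
from collections import defaultdict

# Hypothetical download records: (country, article_id, has_institute_x_author).
# In practice these would be derived from Scopus full-text click-through logs.
downloads = [
    ("Australia", "a1", True),
    ("Australia", "a2", False),
    ("Canada", "a3", True),
    ("Canada", "a4", True),
    ("Germany", "a5", False),
]

uk_totals = defaultdict(int)    # downloads of UK-authored articles, per country
institute_x = defaultdict(int)  # the subset with at least one Institute X author

for country, _article, from_institute_x in downloads:
    uk_totals[country] += 1
    if from_institute_x:
        institute_x[country] += 1

# Percentage of each country's UK downloads that involve Institute X,
# ranked to find the top countries (Figure 1 keeps the top 30).
shares = {c: 100 * institute_x[c] / uk_totals[c] for c in uk_totals}
for country, share in sorted(shares.items(), key=lambda kv: kv[1], reverse=True)[:30]:
    print(f"{country}: {share:.1f}%")
```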

Figure 1 also shows that, of the 30 countries clicking through to the greatest number of full-text UK articles, the English-speaking countries of Australia, Canada and the US download the greatest proportion of articles originating from Institute X. This is shown geographically in the map in Figure 2.

Figure 2 – Who is viewing articles from Institute X? Source: Scopus.

Similarly, one can look at downloads per discipline to assess the relative strengths of an institute.

Figure 3 – Relative usage of Institute X’s papers per academic discipline compared with UK papers in the discipline. The relative usage in Figure 3 is calculated as follows: (Downloads of Institute X / Papers from Institute X) / (Downloads UK / Papers UK). For Mathematics, Neuroscience, Nursing, Psychology and Health Professionals, Institute X’s publications have a higher relative usage than those for the entire UK.
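
The relative usage defined in this caption is a simple ratio of ratios; the snippet below shows the arithmetic with made-up numbers (a value above 1.0 means Institute X’s papers are downloaded more often per paper than UK papers in that discipline):

```python
def relative_usage(downloads_x, papers_x, downloads_uk, papers_uk):
    """Relative usage as defined for Figure 3:
    (downloads per Institute X paper) / (downloads per UK paper)."""
    return (downloads_x / papers_x) / (downloads_uk / papers_uk)

# Hypothetical numbers for a single discipline.
print(relative_usage(downloads_x=1200, papers_x=150,
                     downloads_uk=40000, papers_uk=8000))  # 1.6
```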

We can also look at downloads over time. Figure 4 shows the increasing contribution of Institute X’s downloads to all UK downloads, suggesting that Institute X is playing an increasingly important role in UK research.

Figure 4 – Downloads of Institute X’s papers as a percentage of downloads of all UK papers, per year.

As these examples demonstrate, usage data can be used for a number of different types of analyses. One major advantage they have over citation analyses is that citations of papers only accrue in the months and years following their publication, as new papers cite the article under analysis. Usage statistics, by contrast, begin to emerge as soon as an article is available for download, and so can give a more immediate view of how researchers, and the groups and institutes to which they belong, are performing. And while the full meaning and value of usage data remain up for debate, usage analysis nonetheless represents a useful addition to the more conventional bibliometric analysis based on citations.

Layered assessment: Using SciVal Strata to examine research performance

Tracking the publication output of a single researcher is straightforward, but what about tracking the contributions of individuals to research groups whose composition changes dynamically over time? SciVal Strata provides the answer.

Strata is the latest addition to Elsevier’s SciVal suite of research performance and planning tools, and provides methods of assessing individuals and groups through their publication histories. In the last issue, we showed how SciVal Spotlight could visualize the research landscape of the United States; in this article we look at the ways SciVal Strata can chart the research performance of an individual or wider group, either alone or in comparison.

Reading between the lines

Science is an inherently progressive, cumulative enterprise. Each year brings more qualified scientists and researchers, more papers and ever more citations to those papers. So the standard view of citations over time in SciVal Strata might come as a surprise. Figure 1 shows the average number of citations per year of papers published in the field of Ecology, Evolution, Behavior and Systematics. At a glance it appears that citations in the field are plummeting, perhaps signaling the implosion of these scientific disciplines.

Figure 1 – Average citations per paper published in the field of Ecology, Evolution, Behavior and Systematics: the three benchmarks show UK papers (purple), European papers (green), and all world papers (blue). Source: SciVal Strata.

Of course, that isn’t the case. The downward-sloping chart reflects neither a decline in scientific quality nor in citation quantity: the default view shows, for each publication year, the average citations received to date per document published in that year, rather than counting citations cumulatively over time. Since recent publications will typically have received fewer citations to date on average – as they have had less time to accumulate those citations – the shape of the curves makes sense.
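
As a rough illustration of this kind of view (with invented numbers; this is not the SciVal Strata implementation), the snippet below computes average citations per document grouped by publication year:

```python
from collections import defaultdict

# Hypothetical records: (publication_year, citations_received_to_date).
papers = [(2003, 40), (2003, 25), (2005, 18), (2005, 30), (2008, 4), (2008, 6)]

totals = defaultdict(lambda: [0, 0])  # year -> [citation sum, paper count]
for year, cites in papers:
    totals[year][0] += cites
    totals[year][1] += 1

# Average citations per document by publication year: recent years come out
# lower simply because those papers have had less time to be cited.
for year in sorted(totals):
    cite_sum, count = totals[year]
    print(year, round(cite_sum / count, 1))
```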

With benchmarks quickly set up in Strata by the user, one can compare researchers to average citation rates in their field. Figure 2 shows a researcher compared with the world average. The previously noted downward slope can be seen, but when looking at an individual’s performance, highs and lows can be spotted that tend to be absent from averages. This researcher clearly had success with papers published in 2000, shown by the sharp rise in the line at that year. Smaller rises can also signify success: while 2008 shows a value that is low relative to earlier years, once the shorter time that 2008 papers have had to accumulate citations is taken into account, it was clearly a successful year.

Figure 2 – Average citations per document by publication year of a researcher in the field of Ecology, Evolution, Behavior and Systematics (red) compared with the world average (blue). Source: SciVal Strata.

Team players

SciVal Strata enables the analysis of citation patterns in entire research fields, as well as by individual researcher. SciVal Strata also has various ways of showing the contribution an individual makes to a research group; Figures 3 and 4 show two methods. In Figure 3, a researcher is directly compared with his or her team: the two lines weave in and out, as the individual or group outperforms the other, and it is easy to see some disparity in the years 2001 and 2003.

Figure 3 – A researcher (red) compared with their research group (brown) and the world average (blue), showing average citations per document by publication year. Source: SciVal Strata.

Another way of examining the contribution this researcher makes to the research group is to compare two versions of the research group: one with the researcher included, and one without (see Figure 4).

Figure 4 – A comparison of the same research group either with (brown) or without (red) one of its researchers, showing average citations per document by publication year. World average is also shown (blue). Source: SciVal Strata.

Open to other views

Bibliometricians commonly warn against the use of a single indicator to make assessments of research output and quality: different measures must often be used to evaluate different aspects of performance. For instance, in an assessment of an individual, the number of invited lectures at international conferences is a useful, non-bibliometric indicator. In SciVal Strata, any comparison — whether between individuals, groups, or any other ‘cluster’ of researchers — can be made looking not only at average citations per paper, but also at h-index, citation and publication counts, or the ratio of cited to uncited papers. Figure 5 shows two researchers compared using their h-index values, and Figure 6 their cited and uncited papers from each year. This range of indicators, and the flexibility they allow, means that a comprehensive view of a researcher or group can be used to aid important decisions about promotion, recruitment, and collaboration.

Figure 5 – Comparison of two researchers’ h-index values. The curves show the citations received by each researcher’s papers when arranged in descending order of citations. Dropping a line to either axis from the intersection of each curve with the black line (at 45 degrees from the origin) shows the h-index: here one researcher (green) shows a higher h-index than the other (red). Source: SciVal Strata.
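
For readers unfamiliar with the indicator, the sketch below computes an h-index from a list of per-paper citation counts; the numbers are invented, and the descending sort mirrors the ranked curves described in the caption.

```python
def h_index(citations):
    """h-index: the largest h such that h of the papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # descending order, as in Figure 5
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Two hypothetical researchers, mirroring the comparison in Figure 5.
print(h_index([25, 18, 12, 9, 6, 3, 1]))  # 5
print(h_index([10, 8, 5, 4, 2]))          # 4
```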

Figure 6 – Comparison of the outputs of two researchers per year. The bars show total number of documents, and each is split into solid and faded sections showing the documents that are, respectively, cited and uncited to date. Comparison of the faded and solid areas shows the uncited rate of documents published each year: as expected, this is higher in more recent years as recent documents have had less time to become cited. Source: SciVal Strata.

However, while bibliometric indicators offer a clear view of an individual’s performance — particularly when a wide range are available — it is important to note that they may not tell the whole story. For example, if each co-author of an article is assigned one full credit for the publication, this can mask differences in their actual contributions to the article: one author may have done the majority of the work. Rather than hide such difficulties, bibliometricians and others involved in research assessment need to either use more sophisticated approaches, such as the comparisons available in SciVal Strata, or combine bibliometric assessment with other indicators of research performance and researcher prestige. 

Excellence in Research for Australia — a new ERA

Research Trends talks to Margaret Sheil, Chief Executive Officer of the Australian Research Council, about Australia’s evolving research assessment framework.

As Chief Executive Officer of the Australian Research Council, Professor Margaret Sheil is responsible for the Excellence in Research for Australia (ERA) initiative. ERA aims to assess research quality in the Australian higher education context by combining indicators with expert review. In this article Research Trends talks with Professor Sheil about ERA and its consequences.

Broadly speaking, Australian universities receive funding through two routes: competitive research grants, which are awarded to academics based on the strength of their proposals; and ‘block funding’, which is given to universities to cover the indirect costs of doing research in a university, such as maintaining libraries and IT systems, hiring support staff, and so on. In general, universities get more block funding if they pull in more competitive grants, and generate more publications and graduates. “This can lead to a mindset of ‘doing better by doing more’,” says Sheil. “Research excellence assessments ask whether we’re driving research excellence rather than just research quantity.”

How does the Excellence in Research for Australia framework achieve this goal?

The Excellence in Research for Australia (ERA) framework came into existence in 2009, replacing the Research Quality Framework established under the previous government (but never actually implemented). Whereas the RQF tried to develop a one-size-fits-all model across the university sector, “we know that different disciplines judge quality differently,” says Sheil. So ERA takes a ‘matrix approach’ that draws on a range of indicators that collectively can be applied to the whole sector, even though some components carry more weight in certain disciplines than others. “We decided to look at indicators of quality that are accepted within disciplines, clustered like-minded disciplines together, and said ‘If there are robust metrics, we’ll use them’”. ERA breaks down the universe of research into 157 disciplines, and for 101 of these (especially in the physical sciences) citation analyses were a key indicator. “Where there wasn’t confidence that metrics would work, or in areas such as the humanities where books are more important than journal publications, we used expert peer review as an indicator of quality,” says Sheil. “There’s a lot of confidence about citation analysis in many disciplines, but we believe we need experts to judge whether this makes sense for a particular discipline, and what it means in the context of other indicators.” These include ‘esteem’, such as how many members of a department belong to learned academies; ‘applied indicators’, such as the number of patents produced and income generated through commercialization of research; and success in gaining competitive grants, which have an in-built quality control component. Finally, ERA has produced a list of journals ranked by quality, which has been used to look at how many publications from a discipline get into the higher-ranking journals. “All of these indicators are grouped by discipline and by university, and then expert committees look at the total of the indicators and derive an overall assessment score,” says Sheil.

The first ERA report was released in 2010, and another will be published in 2012. What has been learned in these early days?

“There has been a misunderstanding that ERA is about ranking the quality of whole universities, rather than individual disciplines,” says Sheil. “We don’t think university rankings are meaningful, but we could have been clearer about this.” For instance, The Australian, a leading national newspaper, took the assessment scores of disciplines within universities, combined these scores and then averaged them to create an overall ‘university ranking’ score. “That doesn’t make any sense,” says Sheil. A small university specializing in, for example, theology, may be world-class in this discipline and would thus be placed high in university rankings, above larger universities that score highly on some disciplines but lower on others. This can lead to the masking of pockets of excellence in institutions with a broad disciplinary remit.

The past couple of years have also revealed controversy over ERA’s journal ranking, a core element of the system. “We learned that we haven’t managed to stop the obsession with journal rankings, which is the most commonly misunderstood aspect of ERA.”

Some commentators have claimed that the journal rankings are not fair, and that some disciplines suffer as a result of the rankings settled on in ERA. How do you respond?

Some observers, says Sheil, seem to think that ERA is solely about the journal rankings (each journal is given a single quality rating of A*, A, B or C), which is why they’ve received such attention. “There’s a view that if you don’t have a high number of A and A* journals in your discipline, it’s disadvantaged. But if you look at zoology, which is a strong focus of Australian biology, there are hardly any A* journals but the discipline still performed very well because the work done in this area is highly cited and scores well on other indicators in the assessment matrix,” says Sheil. That’s not to suggest that the journal ranking system is perfect. “Did we get some journals wrong? For sure — there are 22,000 journals to rank, after all! But because people have got a bit obsessive about this, especially journal editors, we’re currently looking at what impact the journal ranking element had on overall assessments.”

Is there an inherent danger in performance-based research assessment that it can discourage exploratory, novel research?

“If we used the assessment outcomes to decide every allocation of research funding, this would be a real concern,” says Sheil. “But we’re introducing other things into the grant side of the business to counterbalance that.” In addition, while ERA “looks backwards” to enable the government to assess whether direct block funding pumped into universities has been well spent, grants are essentially “forward looking”, and therefore based on different criteria. “It’s really important that when it comes to grant giving — where we’re assessing potential — that we recognise these issues, and continue to invest in and take risks on the next generation of research.”

Professor Margaret Sheil (FTSE FRACI C Chem)

1990–2000: Lecturer in the Department of Chemistry, University of Wollongong (UoW), Australia

2001: Dean of Science at UoW

2002–2007: Deputy Vice-Chancellor (Research) at UoW

2007– : Chief Executive Officer of the Australian Research Council

Professor Sheil is a member of the Cooperative Research Centres Committee, the Prime Minister’s Science Innovation and Engineering Council and the National Research Infrastructure Council. She is also a member of the Board of the Australia-India Council, the Advisory Council of the Science Industry Endowment Fund and the National Research Foundation of Korea. She is a Fellow of the Academy of Technological Sciences and Engineering (FTSE) and the Royal Australian Chemical Institute (FRACI).

Performance-Based Research Funding Systems: Rewarding (only) quality research?

In this era of fiscal austerity, government funding of scholarly research has become a pressing issue for most countries. While in the past universities and research institutions could rely on a continuous, steady flow of financial support, today governments increasingly tie funding to metrics. Therefore, the performance of tertiary education institutions has turned into a […]

In this era of fiscal austerity, government funding of scholarly research has become a pressing issue for most countries. While in the past universities and research institutions could rely on a continuous, steady flow of financial support, today governments increasingly tie funding to metrics. Therefore, the performance of tertiary education institutions has turned into a metric-enhancing competition. As such, the development and use of Performance-Based Research Funding Systems (PRFS) has become a prominent topic of discussion among academics and government officials alike. December 2010 saw the publication of the proceedings of the OECD-Norway workshop “Performance-based Funding for Public Research in Tertiary Education Institutions“. Diana Hicks, Professor and Chair of the School of Public Policy, Georgia Institute of Technology, Atlanta, contributed the opening chapter to this volume, providing a complete literature review on PRFS across thirteen different countries. In discussion with Research Trends, Diana shares her views on some of the hot topics in the discussions on PRFS.

Critics of PRFS sometimes argue that such systems reinforce the influence of conservative scientific elders by punishing novel research and inhibiting the emergence of new and interdisciplinary research fields and departments, raising questions about conservatism and innovation in science.

Diana stresses that PRFS do not disadvantage researchers by distinguishing between young and old researchers as such, but by differentiating between established and new research departments. “One universal element of PRFS used amongst governments is that they reward demonstrated research excellence rather than potential or promise. Any department or institution new in research will go by unrecognized in PRFS in the form they currently stand”. Similar concerns apply to the assessment of research in interdisciplinary scientific fields. Although such research is not necessarily seen as ‘new’ or ‘novel’, current evaluation systems disadvantage interdisciplinary research because evaluations tend to be based on publications in the core of a field. “What we see here is a cycle of accumulation: people in evaluation committees often form a representation of the best academics within a (core) field, coming from the best (core) departments, at the same time being editors of the best (core) journals, with all focus on the core of field. If, however, the aim of research is to link core A with core B it would typically be evaluated as not belonging to either core, and thus not good”.

It seems a simple solution would be to build into the system a way of evaluating research relative to discipline, with the effect of adding an extra level of complexity to the system. But how complex can a system be, and still remain practically useful?

Diana recalls that systems of research evaluation typically start out as relatively simple, and become more complex over time as committees deal with criticisms coming from academics. “Because stakeholders of the system are in fact academics, there will always be on-going research on how systems can be improved in parts where it is unfair. Governments, striving for fairness and objectivity within their system, in their turn answer by implementing elaborations to the system”. Levels of complexity now vary among PRFS used across countries by different governments. “What we do not know though is what the actual cost and benefits are of increased complexity in any PRFS”. How much complexity and costs a system can bear while at the same time remaining manageable and workable is a question which current PRFS have not answered. “In doing specific cost-benefit calculations, governments may decide how much complexity is actually worthwhile”.

PRFS are inherently competitive, and so there are incentives to manipulate the system for self-serving ends. How significant is this issue in current PRFS, and how should agencies assessing research respond?

“This is an unfortunate side effect in most PRFS,” says Diana. The solution, Diana believes, lies in making small, incremental adaptations to the system in every round of evaluation. “Governments can tweak the rules of assessment so that universities and institutions are hampered in manipulations aimed solely to improve their score. The aim of doing this is to minimize the focus on the metrics used for evaluation, while maximizing the focus on performance”.

Professor Diana Hicks

1985-1999: Faculty member at SPRU - Science and Technology Policy Research, University of Sussex, UK

1998-2003: Senior Policy Analyst at CHI Research, US

2003- : Professor and Chair of the School of Public Policy, Georgia Tech, US

Professor Hicks has taught at the Haas School of Business at the University of California, Berkeley and worked at the National Institute of Science and Technology Policy (NISTEP) in Tokyo. She is an honorary fellow of the Science Policy Research Unit, University of Sussex, UK and on the Academic Advisory Board for Center for Science, Policy and Outcomes, Washington D.C.

Further reading:

OECD (2010). Performance-based Funding for Public Research in Tertiary Education Institutions: Workshop Proceedings, OECD Publishing.  

http://dx.doi.org/10.1787/9789264094611-en

The multi-dimensional research assessment matrix

Research output can be measured in many different ways, and no single measure is sufficient to capture the diverse aspects of research performance. Research Trends invites you to explore the options…

Research performance can be assessed along a number of different dimensions. In this article, we explore the notion of the multi-dimensional research assessment matrix, which was introduced in a report published in 2010 by an Expert Group on the Assessment of University-Based Research (AUBR) set up by the European Commission. Table 1 presents a part of this matrix.

Research assessment is a complicated business. To design a practical, informative process requires making decisions about which methodology should be used, which indicators calculated, and which data collected. These decisions in turn reflect answers to a number of questions about the scope and purpose of the research assessment process in hand. A thorough exploration of many of these questions has been presented in Moed (2005).

Table 1 — The multi-dimensional research assessment matrix. This table presents a core part of the matrix, not the entire matrix. It aims to illustrate what the matrix looks like. It should be read column-wise: each column represents a different dimension. See AUBR (2010) for more information.

What, how, and why?

A fundamental question is the unit of the assessment: is it a country, institution, research group, individual, research field or an international network? Another basic question revolves around the purpose of the assessment: is it to inform the allocation of research funding, to improve performance, or to increase regional engagement? Then there are questions about which output dimensions should be considered: scholarly impact, innovation and social benefit, or sustainability?

The matrix distinguishes four assessment methodologies: i) peer review, which provides a judgment based on expert knowledge; ii) end-user reviews, such as customer satisfaction; iii) quantitative indicators, including bibliometric and other types of measures; and iv) self evaluation. These four methodologies can be — and often are — combined into a multi-dimensional assessment.

Bibliometric indicators have a central role in research assessment systems, and the main types are listed in Table 1. Table 2 distinguishes three generations of such indicators. Typical examples from each generation are: the Thomson Reuters journal impact factor; relative or field-normalized citation rates; and citation impact indicators giving citations from ‘top’ journals a higher weight than citations from more peripheral publications. These examples and others are explored in the boxed text.

Table 2 — Types of bibliometric indicators.

Table 1 also lists typical examples of non-bibliometric indicators. These include knowledge transfer activities reflected in the number of patents, licenses and spin offs; invited lectures at international conferences; the amount of external funding; Ph.D. completion rates; and the share of research-active staff relative to total staff.

The unit of assessment, the purpose of the assessment, and the output dimension considered determine the type of indicators to be used in the assessment. One indicator can be highly useful within one assessment context, but less so in another. This is illustrated in three examples presented in Figure 1.

Figure 1 – Three examples from the multi-dimensional research assessment matrix (MD-RAM) showing how the unit of assessment, purpose of the assessment, and output dimension determine the type of indicators to be used.
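
One way to picture how the matrix drives indicator choice is as a lookup keyed by the three decisions just discussed. The entries below are illustrative stand-ins drawn loosely from this article, not the actual cells of Figure 1 or the official AUBR matrix:

```python
# Toy encoding of the matrix idea: the indicator set is determined by the
# combination of unit of assessment, purpose and output dimension.
# Entries are illustrative only.
MATRIX = {
    ("country", "allocate research funding", "scholarly impact"):
        ["field-normalized citation rate", "share of papers in top journals"],
    ("research group", "improve performance", "scholarly impact"):
        ["citations per paper vs. world average", "external research funding"],
    ("institution", "increase regional engagement", "social benefit"):
        ["patents, licenses and spin-offs", "end-user reviews"],
}

def indicators_for(unit, purpose, dimension):
    """Return the indicators suggested for one cell of the matrix."""
    return MATRIX.get((unit, purpose, dimension), ["no indicator defined for this cell"])

print(indicators_for("research group", "improve performance", "scholarly impact"))
```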

Entering the Matrix

The concept of the multi-dimensionality of research performance, and the notion that the choice of which indicators to apply is determined by the questions to be addressed and the aspects to be assessed, are also clearly expressed in the recent “Knowledge, Networks and Nations” report from the Royal Society (Royal Society (2011), pp. 24-25):

“In the UK, the impact and excellence agenda has developed rapidly in recent years. The Research Assessment Exercise, a peer review based benchmarking exercise which measured the relative research strengths of university departments, is due to be replaced with a new Research Excellence Framework, which will be completed in 2014. The UK Research Councils now (somewhat controversially) ask all applicants to describe the potential economic and societal impacts of their research. The Excellence in Research for Australia (ERA) initiative assesses research quality within Australia’s higher education institutions using a combination of indicators and expert review by committees comprising experienced, internationally recognised experts.

“The impact agenda is increasingly important for national and international science (in Europe, the Commissioner for Research, Innovation and Science has spoken about the need for a Europe-wide ‘innovation indicator’). The challenge of measuring the value of science in a number of ways faces all of the scientific community. Achieving this will offer new insights into how we appraise the quality of science, and the impacts of its globalisation.”

Exploring the indicators

  • Journal Impact Factors. The Thomson Reuters Journal Impact Factor was originally invented by Eugene Garfield to expand the coverage of his Science Citation Index with the most useful journals, but is nowadays often used in many types of research assessment processes. It is defined as the average number of citations in a particular year to documents published in a journal in the two preceding years. A minimal computational sketch of this indicator and the next appears after this list.
  • Relative citation rates. The relative, field-normalised citation rate is based on the notion that citation frequencies differ significantly between subject fields. For instance, authors in molecular biology publish more frequently and cite each other more often than do authors in mathematics. In its simplest form the indicator is defined as the average citation rate of a unit’s papers divided by the world citation average in the subject fields in which the unit is active.
  • Influence weights. Pinski and Narin (1976) developed an important methodology for determining citation-based influence measures of scientific journals and (sub-)disciplines. One of their methodology’s key elements is that it assigns a higher weight to citations from a prestigious journal than to a citation from a less prestigious or peripheral journal.
  • Google PageRank. Pinski and Narin’s ideas also underlie Google’s measure of PageRank. The “value” of a web page is measured by the number of other web pages linking to it, but in this value assessment links from pages that are themselves frequently linked to have a higher weight than links from those to which only few other pages have linked.
  • Other studies. Similar notions may play an important role in the further development of citation impact measures. Good examples are the work by Bollen et al. (2006) on journal status, and the SCImago Journal Rank (SJR) developed by the SCImago group (González-Pereira et al., 2010), one of the two journal metrics included in Scopus.
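
A minimal notational sketch of the three measures referred to above (the journal impact factor, the field-normalised citation rate, and the influence-weight idea), using our own symbols rather than any producer's official formula:

\[
\mathrm{IF}_j(y) \;=\; \frac{C_j(y \mid y-1,\, y-2)}{N_j(y-1) + N_j(y-2)}
\qquad\qquad
\mathrm{RCR}_u \;=\; \frac{\bar{c}_u}{\bar{c}_{\mathrm{world},\, f(u)}}
\]

Here $C_j(y \mid y-1, y-2)$ counts the citations received in year $y$ by the items that journal $j$ published in the two preceding years, $N_j(y')$ is the number of citable items journal $j$ published in year $y'$, $\bar{c}_u$ is the average number of citations per paper of the unit under assessment, and $\bar{c}_{\mathrm{world},\, f(u)}$ is the world average in the unit's subject fields. Influence weights and PageRank-style measures additionally solve a fixed-point equation in which a journal's weight is the sum of the weights of the journals citing it, each normalised by that journal's total outgoing citations:

\[
w_j \;=\; \sum_i w_i \, \frac{c_{i \to j}}{\sum_k c_{i \to k}}
\]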

 

References:

AUBR (2010). Expert Group on the Assessment of University-Based Research. Assessing Europe’s University-Based Research. European Commission – DG Research. http://ec.europa.eu/research/era/docs/en/areas-of-actions-universities-assessing-europe-university-based-research-2010-en.pdf

Bollen J., Rodriguez, M.A., Van De Sompel, H. (2006). Journal status. Scientometrics, Vol. 69, pp. 669-687.

González-Pereira, B., Guerrero-Bote, V.P., Moya-Anegón, F. (2010). A new approach to the metric of journals’ scientific prestige: The SJR indicator. Journal of Informetrics, Vol. 4, pp. 379-391.

Moed, H.F. (2005). Citation Analysis in Research Evaluation. Dordrecht (Netherlands): Springer. ISBN 1-4020-3713-9, 346 pp.

Pinski, G., Narin, F. (1976). Citation influence for journal aggregates of scientific publications: theory, with application to the literature of physics. Information Processing and Management, Vol. 12, pp. 297–312.

Royal Society (2011). “Knowledge, Networks and Nations: Global scientific collaboration in the 21st century”. http://royalsociety.org/policy/reports/knowledge-networks-nations


Research Assessment 101: An introduction

What is research assessment, what purposes does it serve, and how is it carried out? Research Trends introduces the basic components of this complex process.

Read more >


In upcoming issues of Research Trends we dedicate attention to research assessment. Here we explain why we have chosen this subject, how it is defined, its historical background, how the article series is built up, and which topics will be addressed. We also highlight a few fundamental principles that underlie the subsequent articles in the series.

Measuring returns on investment

Research assessment is a broad endeavour. At root it is an attempt to measure the return on investment in scientific-scholarly research. Research assessment includes the evaluation of research quality and measurements of research inputs, outputs and impacts, and embraces both qualitative and quantitative methodologies, including the application of bibliometric indicators and mapping, and peer review.

Research performance is increasingly regarded as a key factor in economic performance and societal welfare. As such, research assessment has become a major issue for a wide range of stakeholders, and there is consequently an increasing focus on research quality and excellence, transparency, accountability, comparability and competition.

 

Institutions compete for students, staff and funding through international rankings.

This focus means that government funding of scientific research – especially in universities – tends to be based more and more on performance criteria. Such a policy requires the organization of large-scale research assessment exercises by national governmental agencies. The articles in this issue are intended to provide a concise overview of the various approaches towards performance-based funding in a number of OECD member states.

The institutional view

Today, research institutions and universities operate in the context of a global market. International comparisons or rankings of institutions are published on a regular basis, with the aim of informing students and knowledge-seeking external groups about their quality. Research managers also use this information to benchmark their own institutions against their competitors.

In light of these developments, institutions are increasingly setting up internal research assessment processes, and building research management information systems. These are based on a variety of relevant input and output measures of the performance of individual research units within an institution, enabling managers to allocate funds within the institution according to the past performance of the research groups.

At the same time, trends in publishing have had a crucial impact on assessing research output. Major publishers now make all their content electronically available online, and researchers consistently report that their access to the literature has never been better. In addition, disciplinary or institutionally oriented publication repositories are being built, along with the implementation of institutional research management systems, which include metadata on an institution’s publication output. Currently, three large multidisciplinary citation indexes are available: Elsevier’s Scopus, Thomson Reuters’ Web of Science, and Google Scholar.

In conjunction with the increasing access to journals and literature databases, more indicators of research quality and impact are becoming available. Many bibliographical databases implement bibliometric features such as author h-indexes, as well as publication and citation charts. More specialized institutes produce other indicators, often based on raw data from the large, multi-disciplinary citation indexes. Today, the calculation of indicators is not merely the province of experts in bibliometrics, and the concept of “desktop bibliometrics” is increasingly becoming a reality.
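
To illustrate how simple some of these desktop measures are to compute, here is a minimal sketch of the h-index (the largest h such that an author has h papers with at least h citations each). The function name and input format are our own illustration, not the interface of any particular database.

    def h_index(citation_counts):
        """Return the largest h such that h papers have at least h citations each."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # An author whose papers are cited 10, 8, 5, 4 and 3 times has an h-index of 4.
    print(h_index([10, 8, 5, 4, 3]))  # prints 4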

An overview of the various topics covered in this series is presented in Table 1 below. These topics will be presented in short review articles, illustrative case studies, and interviews with research assessment experts and research managers. The main principle underlying the various articles in this series is that the future of research assessment exercises lies in the intelligent combination of metrics and peer review. A necessary condition is a thorough awareness of the potentialities and limitations of each of these two broad methodologies. This article series aims to increase such awareness.

Table 1 — Overview of topics addressed


Tipping the balance: The rise of China as a science superpower

The balance of global economic and political power is set to shift from West to East in the coming decades. Might the West also lose its scientific preeminence? To explore these questions, Research Trends looks at the rise, fall, and rise again of China as a scientific powerhouse.

Read more >


The Chinese mainland is home to one of the oldest civilizations in the world, and has seen massive technological and social change since it emerged as the modern state we know today as the People’s Republic of China. In the past 30 years, under reforming leaders such as Deng Xiaoping, China has undergone the fastest Industrial Revolution in history, and is set to be the dominant global economic force within decades. Yet as China catches up with and overtakes the West as an economic and political powerhouse, will its scientific achievements keep pace? China once led the West technologically. Could it do so again — or does it already?

Early innovation, stagnation and re-emergence

Ancient China saw numerous technological innovations including paper and papermaking, woodblock and movable type printing, the invention of matches, the magnetic compass, cast iron and the iron plough, chain and belt drives, the propeller, and machines of war such as the crossbow, gunpowder, the cannon, and the rocket. However, while Europe underwent a scientific revolution starting in the 16th century, science and technology in China stagnated, a trend that accelerated with the creation of the People’s Republic of China in 1949 under the Communist rule of Mao Zedong. This was a period during which science in many industrialized nations was undergoing a post-war transition from “Little Science” characterized by individual or small-group efforts into the present era of “Big Science” typified by large-scale projects usually funded by national governments or inter-governmental groups. It was not until the establishment of a technocracy in the 1980s under the leadership of Deng Xiaoping that China started on its current path of scientific and technological advancement. The rise of China as a scientific nation can be documented on both the input and output sides, with a clear causal relationship between the two.

Returns on investment in Chinese science

In 2006, China embarked upon an ambitious 15-year plan1 to raise R&D expenditure to 2.5% of GDP, identifying energy, water resources, and environmental protection as research priorities. As part of this plan, investment in human resources is emphasized. China has become a higher education powerhouse, turning out more Ph.D. graduates than the UK annually since 2002 and closing on the US (see Figure 1). Perhaps a more immediate indicator of the scientific might of China is the size of the Chinese R&D workforce, which by 2007 already stood at 1.2 million people, exceeding that of the entire EU-27 grouping and poised to overtake the US (see Figure 2). In the absence of more recent figures, China may well already have surpassed the US in both of these key input metrics, but especially in number of researchers.

Figure 1 - Number of PhD graduates per country or country group in the period 2000-2006. Data from Appendix table 2-40 of Science and Engineering Indicators 2010 (National Science Foundation).

 

Figure 2 - Number of researchers per country or country group in the period 1995-2007. Data from Figure 3-48 of Science and Engineering Indicators 2010 (National Science Foundation).

 

The future of Chinese science

In terms of research output, China has shown remarkable growth. The number of articles appearing in the international literature — the most commonly used indicator of research productivity — has risen exponentially in recent years. To see how this is perturbing the global balance of scientific output, each nation’s share of global article output can be determined. In 2008, China stood second only to the US by this metric, with 11.6% versus 20% of the global output, respectively. However, forecasting these shares suggests that the dominance of the US is almost a thing of the past: based on a linear trend, China’s article share will surpass that of the US in 2013. Of course, this projection assumes that the trends observed to date will be maintained in the coming years, but it remains in keeping with another recent estimate of 2014 for China to surpass the US2. Indeed, the recent growth trends in the Chinese research workforce highlighted above are likely to manifest in key output metrics with a delay of just a few years.
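
The projection method can be made concrete with a short sketch: fit a straight line to each country's share and solve for the year at which the two fitted lines cross. The share values below are illustrative placeholders, not the Scopus figures behind Figure 3.

    import numpy as np

    years = np.arange(2004, 2009)                          # illustrative window only
    us_share = np.array([23.0, 22.2, 21.5, 20.7, 20.0])    # placeholder values
    cn_share = np.array([6.5, 7.8, 9.1, 10.4, 11.6])       # placeholder values

    # Least-squares linear fits: share = slope * year + intercept
    us_slope, us_intercept = np.polyfit(years, us_share, 1)
    cn_slope, cn_intercept = np.polyfit(years, cn_share, 1)

    # Year at which the two fitted lines intersect
    crossing = (us_intercept - cn_intercept) / (cn_slope - us_slope)
    print(f"Projected crossing year: {crossing:.1f}")
    # With the real Scopus shares, this style of extrapolation yields the 2013 estimate cited above.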

Figure 3 - Share of global articles per country or country group in the period 1996-2016 (actual 1996-2009, projected 2010-2016). Data from Scopus.

China lost its scientific and technological edge from the 15th century onwards by becoming culturally insular, shunning exploration of the wider world and remaining suspicious of importing outside ideas and influences. Just as business depends on trading goods and services, so science depends on exchanging ideas and data, and self-imposed isolation is disastrous in either case. Now, in the 21st century, as China opens itself up to global markets — both of commerce and ideas — it again looks set to lead the world.

References:

1. Medium- and Long-term National Plan for Science and Technology Development 2006-2020

2. Leydesdorff, L., & Wagner, C. S. (2009). Macro-level indicators of the relations between research funding and research output. Journal of Informetrics, Vol. 3, No. 4, pp. 353–362.


Science, music, literature and the one-hit wonder connection

Most scientists dream of getting their work into top journals such as Science and Nature. But does achieving this goal reflect the brilliance of researchers, or simply scientific good luck? Professor Isaiah T. Arkin at The Hebrew University of Jerusalem explores the issues in this guest feature.

Read more >


It is well known that publishing in Science or Nature, the scientific world’s top journals, is an incredibly difficult task. Although these journals are near-compulsory reading for any scientist, most researchers never get the chance to air their findings in their pages. Yet in the event of success, one’s career may take a turn for the better, with doors opening to lucrative academic positions, conference invitations, funding possibilities, and more.

Chance favors the prepared mind

What then does it take for a scientist to publish in Science or Nature? Is it that those who publish in the “top two” are simply better scientists, in terms of skill, funding, infrastructure, co-worker availability and so on? Or is publication simply a matter of chance that depends on researchers stumbling upon an interesting finding? Clearly, both factors are important for success, as eloquently stated by Pasteur1, yet their relative contributions remain unknown.

In an attempt to address this question, an analysis was undertaken with the aim of estimating the probability of repeat publication in Science or Nature. The rationale was that if most publications in the top journals turn out to be by authors who publish in them repeatedly, then sheer chance is unlikely to be a major contributing factor to publication.

Yet if a publication in Science or Nature is a singular event, then one might conclude that the success might have been fortuitous, in a sense that the same individual is unlikely to publish there ever again. The results of such an analysis on 37,181 Science and 28,004 Nature publications are presented in Figure 1. Of these, 71% are by authors who have just one Science or Nature paper to their credit, with 15% of papers by researchers with two, and 6% with three. Interestingly, a slightly more polarized distribution is obtained when analyzing repeat publications by “last authors”, taken to represent the principal scientist of a particular study. Here, 74% of last authors in Science or Nature are unlikely to be last authors again in the same venue.

Figure 1 - Percentage of all publications in Science and Nature as a function of the number of publications per individual researcher (all authors or last authors). In order to focus on scientific publications, rather than editorials and commentaries, the following limiting criteria for a publication’s “eligibility” were used: the presence of an abstract; no review qualifier in the PubMed database; and article length of at least three pages. Finally, in an attempt to minimize grouping publications from different individuals, only publications in which the author has at least two initials were selected for analysis. The bibliographic database used was the PubMed portal of the United States National Library of Medicine. Also shown is an analysis of authors whose books reached the top of the New York Times’ bestsellers list (according to the data assembled by Hawes Publications). A similar analysis is also presented of the probability of musical artists (both groups and individuals) repeatedly placing their songs in the top 40 chart based on data compiled by the MBG top 40.
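
A minimal sketch of the counting step behind Figure 1, assuming a pre-filtered list of (author, paper) records; this illustrates the general approach rather than reproducing the author's actual PubMed pipeline.

    from collections import Counter

    # Hypothetical records, already filtered by the eligibility criteria described
    # in the Figure 1 caption: each tuple is (author name with initials, paper id).
    records = [
        ("Smith AB", "p1"), ("Smith AB", "p2"), ("Jones CD", "p3"),
        ("Lee EF", "p4"), ("Lee EF", "p5"), ("Lee EF", "p6"), ("Wu GH", "p7"),
    ]

    papers_per_author = Counter(author for author, _ in records)

    # Share of all papers written by authors with exactly k papers in the set
    total_papers = len(records)
    papers_in_bin = Counter()
    for author, k in papers_per_author.items():
        papers_in_bin[k] += k      # each of that author's k papers falls in bin k

    for k in sorted(papers_in_bin):
        share = 100 * papers_in_bin[k] / total_papers
        print(f"{share:.0f}% of papers are by authors with {k} paper(s)")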

These findings suggest the following conclusions:

• There is less than a 30% chance of repeat publication in Science or Nature. Moreover, the odds are slightly worse for repeating last authors. Thus, by a ratio of more than 3:1, a scientist who has published in the top two journals is unlikely to repeat the endeavor.

• The chances of publishing repeatedly in Science or Nature are slightly smaller for the principal authors of the work in comparison with the other authors.

• The above potential success rate of repeat publication in Science or Nature is much higher than that of an “average” scientist, whose probability of publishing in Science or Nature is vanishingly small. Thus a publication in Science or Nature is an indication that the scientist is far more likely to publish there again compared with one who has not done so.

• Despite being in the minority, articles by authors who publish in Science or Nature repeatedly make up a notable proportion of the total. This group includes, not surprisingly, some of the most famous and influential scientists of our times.

Taken together, since most articles in the top two scientific journals are written by authors who are unlikely ever to publish there again, these articles may be vernacularly classified as “one-hit wonders”.

The one-hit wonder phenomenon

In line with the above, it is intriguing to repeat this analysis for other human creative endeavors, such as literature and music, and thereby compare the sporadic nature of scientific productivity (as manifested in publications in the top two scientific journals) with that of other vocations. Specifically, the analysis was repeated for singers (or groups) whose songs reached the “top 40” charts, and for authors of books that topped the New York Times’ bestsellers list. As seen in Figure 1, there is a similarity in the repeat probability of success among singers, authors and scientists.

Once more, nearly two-thirds of all the songs at the top of the charts, or books that make it to the top of the bestsellers list, are by individuals that will never repeat this feat. Thus, one finds that the sporadic nature of scientific creativity is mirrored to an extent in other human activities, such as literature and music. Finally it is notable that music, the field from which the term one-hit wonder arose, is the one in which the probability of repeat success is comparatively the highest.

One final analysis was undertaken with the aim of examining the predictive power of a publication in the best journals by potential academic recruits. In other words, in some of the world’s best academic institutions, candidates for tenure track positions are normally expected to have published in the top two journals prior to appointment. It is therefore interesting to examine whether researchers who published in Science or Nature during their post-doctoral fellowship or Ph.D. studentships are likely to publish in the top journals as independent group leaders. This question may be answered by examining the likelihood that an individual who has published in Science or Nature as a first author (as is common for post-docs and students) will later have publications in these journals in which they are listed as a last author (as is common for principal investigators/corresponding authors).
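
In notation, the quantity being estimated is a simple conditional proportion (the symbols are ours, not the author's):

\[
P(\text{last author later} \mid \text{first author earlier})
\;=\;
\frac{\lvert A_{\mathrm{first}} \cap A_{\mathrm{last}} \rvert}{\lvert A_{\mathrm{first}} \rvert}
\]

where $A_{\mathrm{first}}$ is the set of researchers who have appeared as first authors on a Science or Nature paper and $A_{\mathrm{last}}$ is the set who subsequently appear there as last authors.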

As seen in Figure 2, more than 87% of all scientists that have published in Science or Nature as first authors are unlikely to publish in the same venue later on as last authors. Furthermore, less than 7% of middle authors in Science or Nature will ever become last authors. Thus candidates who successfully published papers in the world’s top journals during the course of their studies are highly unlikely to repeat this feat as independent researchers.

Figure 2 - Probability that an individual who has published in Science or Nature as a first author (light blue) or middle author (dark blue) will publish there later on as a last author. The same qualifying limitations were applied as in the analysis of Figure 1.

In conclusion, it is possible to state that for the significant majority of Science or Nature authors publication represented a one-hit wonder, and the transition from a first (or middle) author to an article’s principal investigator is highly unlikely. Thus, chance seems to be of paramount importance in relation to preparedness1 for the majority of scientists.

References

1. Dans les champs de l’observation le hasard ne favorise que les esprits préparés. “In the fields of observation chance favors only the prepared mind”. Louis Pasteur, Lecture, University of Lille (7 December 1854).

Useful links:

The Arkin Lab Home Page

Author’s note: The author wishes to thank Joshua Manor, Hadas Leonov and Prof. Joseph S.B. Mitchell for helpful discussions. This work was supported in part by a grant from the Israeli science foundation (784/01,1249/05,1581/08). ITA is the Arthur Lejwa Professor of Structural Biochemistry at the Hebrew University of Jerusalem.

Editor's note: Professor Arkin's address is The Alexander Silberman Institute of Life Sciences, Department of Biological Chemistry, The Hebrew University of Jerusalem, Edmund J. Safra Campus, Givat-Ram, Jerusalem 91904, Israel and can be contacted at arkin (at) huji.ac.il.


An update on Obama and American science: Uncovering US competencies

President Obama has emerged as a staunch supporter of science, and an enthusiastic advocate of its power for social good. Yet does the US have what it takes to deliver on this potential? Research Trends finds out.

Read more >


During months of presidential campaigning Barack Obama spoke with great energy and enthusiasm about the power and promise of science. Soon after his election in November 2008 Obama took a bold stand for making decisions based on science and announced that he had assembled a scientific ‘Dream Team’, in which he brought together the highest impact scientists working in the US today to provide policy advice1. In his inauguration speech at the start of 2009 Obama’s priorities in science became evident, when he spoke of “…wield[ing] technology's wonders to raise health care's quality and lower its costs; harness the sun, winds and the soil to fuel our cars and run our factories; transform[ing] our schools and colleges and universities to meet the demands of a new age”2. In February 2009, Obama signed into law The American Recovery and Reinvestment Act (ARRA), a stimulus package setting aside US$21.5 billion for federal research and development funding — one of the largest increases in research funding in decades3,4.

President Barack Obama. Image from www.whitehouse.gov.

American science in the spotlight

Obama’s conviction in the power of science and technology is not in doubt. But how likely is Obama to succeed in achieving breakthroughs in his target fields of clean energy, biomedical research and information technology? More specifically, within the broadness of these interdisciplinary fields of science, where do the US’s (hidden) competencies lie? Research Trends uses SciVal Spotlight to find out.

SciVal Spotlight is a web-based strategic analysis tool, based on Scopus data, which offers an interdisciplinary perspective of research performance that helps institutional and government leaders identify their institution’s and/or country’s academic strengths. SciVal Spotlight differs from the more traditional method of evaluating research performance based on the broad classifications of journals in main subject areas, and instead follows a bottom-up aggregation of research activity that classifies all articles published within a given institution or country based on co-citation analysis. On a country level, it creates ‘country maps’ that illustrate academic performance across scientific fields, as well as in relation to other countries, and therefore provides a much more detailed view of clustered research output per country5,6.
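
To illustrate what a co-citation analysis involves at its simplest, the sketch below counts how often pairs of papers appear together in the same reference list; strongly co-cited papers can then be clustered into research areas. This is a toy illustration of the general technique, with invented identifiers, not the actual SciVal Spotlight implementation.

    from collections import Counter
    from itertools import combinations

    # Hypothetical reference lists: citing paper -> the papers it cites
    reference_lists = {
        "citing1": ["A", "B", "C"],
        "citing2": ["A", "B"],
        "citing3": ["B", "C"],
    }

    co_citations = Counter()
    for cited in reference_lists.values():
        # Every unordered pair cited together in one reference list is co-cited once
        for pair in combinations(sorted(set(cited)), 2):
            co_citations[pair] += 1

    print(co_citations.most_common())
    # [(('A', 'B'), 2), (('B', 'C'), 2), (('A', 'C'), 1)]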

Overall, Spotlight distinguishes 13 main research areas, with 554 underlying scientific disciplines spread between them. A Spotlight view of all US academic papers published over the five years ending 2009 reveals 1,707 distinctive competencies (DCs) within the US, that is, niches of excellence in which the US has a relatively large article market share (see Figure 1). These distinctive competencies become most informative when one drills down to see the main keywords, journals, top disciplines, top authors and number of articles associated with them.

Figure 1 -  SciVal Spotlight map for the 5 years ending 2009, showing 1,707 distinctive competencies (DCs). For more information on the Spotlight approach, see www.scival.com/spotlight.

US distinctive competencies

So which — and how many — of these DCs relate to disciplines that are central to Obama’s vision of the promise of science? Given the broadness of these key fields, we chose to search by three selected disciplines which are sure to underlie each of the key fields: “biotechnology”, “energy fuel” and “computer networks”.

Our search on “biotechnology” revealed 24 DCs — that is, 24 areas within the broad field of biotechnology in which the US excels — encompassing studies ranging from analysis of gene expression, metabolic acids, and the plasma membrane to enzymatic hydrolysis and ethanol production, with the percentage of articles by authors from US institutions (the “article market share”) ranging between 30% and 54%. In the “energy fuel” discipline the US has six DCs, one of which, for example, relates to studies of carbon dioxide and supercritical carbon dioxide. Just three institutions — the University of Texas at Austin, the University of North Carolina at Chapel Hill and North Carolina State University — hold a 30% market share of articles on these topics. The “computer networks” discipline revealed five DCs, of which one, with an article market share of nearly 50%, includes studies on such topics as energy consumption and energy efficiency — a perfect example of one of Obama’s key fields overlapping with another.

Although a much more thorough and in-depth analysis is required to get a complete answer to our leading question — where do the US’s underlying competencies lie? — with 1,707 DCs related to the research fields he aims to invest heavily in, Obama can rest assured: the US has great potential to meet the scientific and technological goals his administration has in its sights.

References:

1. Research Trends (2009) “Obama’s Dream Team”, Issue 3.

2. BBC News (January 2009) “Barack Obama's inaugural address in full”.

3. Reisch, M.S. (2009) “Equipping the science agenda”, Chemical and Engineering News, Vol. 87, No. 33, pp. 13–16.

4. SciVal White Paper (2009) “Navigating the Research Funding Environment”.

5. SciVal Spotlight Prospectus “Establish, Evaluate and Execute Informed Strategies”.

6. ElsevierNews (2009) “Elsevier Launches SciVal Spotlight: New Tool Provides Multidisciplinary View Of Research Performance”.

Useful links:

SciVal Spotlight

US Government Recovery Act

VN:F [1.9.22_1171]
Rating: 0.0/10 (0 votes cast)

During months of presidential campaigning Barack Obama spoke with great energy and enthusiasm about the power and promise of science. Soon after his election in November 2008 Obama took a bold stand for making decisions based on science and announced that he had assembled a scientific ‘Dream Team’, in which he brought together the highest impact scientists working in the US today to provide policy advice1. In his inauguration speech at the start of 2009 Obama’s priorities in science became evident, when he spoke of “…wield[ing] technology's wonders to raise health care's quality and lower its costs; harness the sun, winds and the soil to fuel our cars and run our factories; transform[ing] our schools and colleges and universities to meet the demands of a new age”2. In February 2009, Obama signed into law The American Recovery and Reinvestment Act (ARRA), a stimulus package setting aside US$21.5 billion for federal research and development funding — one of the largest increases in research funding in decades3,4.

President Barack Obama. Image from www.whitehouse.gov.

American science in the spotlight

Obama’s conviction in the power of science and technology is not in doubt. But how likely is Obama to succeed in achieving breakthroughs in his target fields of clean energy, biomedical research and information technology? More specifically, within the broadness of these interdisciplinary fields of science, where do the US’s (hidden) competencies lie? Research Trends uses SciVal Spotlight to find out.

SciVal Spotlight is a web-based strategic analysis tool, based on Scopus data, which offers an interdisciplinary perspective of research performance that helps institutional and government leaders identify their institution’s and/or country’s academic strengths. SciVal Spotlight differs from the more traditional method of evaluating research performance based on the broad classifications of journals in main subject areas, and instead follows a bottom-up aggregation of research activity that classifies all articles published within a given institution or country based on co-citation analysis. On a country level, it creates ‘country maps’ that illustrate academic performance across scientific fields, as well as in relation to other countries, and therefore provides a much more detailed view of clustered research output per country5,6.

Overall Spotlight distinguishes 13 main research areas, with 554 underlying scientific disciplines spread between them. A Spotlight view of all US academic papers published over the five years ending 2009 reveals 1,707 distinctive competencies (DCs) within the US (that is, niches of excellence for which the US has a relative large article market share) - see Figure 1. These distinctive competencies, or DCs, become most informative when one drills down to see the main key words, journals, top disciplines, top authors and number of articles associated with them.

Figure 1 -  SciVal Spotlight map for the 5 years ending 2009, showing 1,707 distinctive competencies (DCs). For more information on the Spotlight approach, see www.scival.com/spotlight.

US distinctive competencies

So which — and how many — of these DCs relate to disciplines that are central to Obama’s vision of the promise of science? Given the broadness of these key fields, we chose to search by three selected disciplines which are sure to underlie each of the key fields: “biotechnology”, “energy fuel” and “computer networks”.

Our search on “biotechnology” revealed 24 DCs — that is, 24 areas within the broad field of biotechnology in which the US excels — encompassing studies that span topics from analysis of gene expression, metabolic acids and the plasma membrane to enzymatic hydrolysis and ethanol production, with the percentage of articles by authors from US institutions (“article market share”) ranging between 30% and 54%. In the “energy fuel” discipline the US has six DCs, one of which, for example, relates to studies of carbon dioxide and supercritical carbon dioxide. Just three institutions — the University of Texas at Austin, the University of North Carolina at Chapel Hill and North Carolina State University — together hold a 30% article market share on these topics. The “computer networks” discipline revealed five DCs, one of which, with an article market share of nearly 50%, includes studies on such topics as energy consumption and energy efficiency — a perfect example of one of Obama’s key fields overlapping with another.

Although a much more thorough and in-depth analysis is required to fully answer our leading question — where do the US’s underlying competencies lie? — with 1,707 DCs overall, and dozens of them in the disciplines underlying the fields he aims to invest in heavily, Obama can rest assured: the US has great potential to meet the scientific and technological goals his administration has in its sights.

References:

1. Research Trends (2009) “Obama’s Dream Team”, Issue 3.

2. BBC News (January 2009) “Barack Obama's inaugural address in full”.

3. Reisch, M.S. (2009) “Equipping the science agenda”, Chemical and Engineering News, Vol. 87, No. 33, pp. 13–16.

4. SciVal White Paper (2009) “Navigating the Research Funding Environment”.

5. SciVal Spotlight Prospectus “Establish, Evaluate and Execute Informed Strategies”.

6. ElsevierNews (2009) “Elsevier Launches SciVal Spotlight: New Tool Provides Multidisciplinary View Of Research Performance”.

Useful links:

SciVal Spotlight

US Government Recovery Act


University rankings – what do they measure?

Universities have their own “Top 40” charts, in the form of worldwide rankings of university excellence. Research Trends explores what determines the ranking of institutions, and whether regional differences in higher education make meaningful international comparisons problematic.

Read more >


In Issue 9 Research Trends examined the Times Higher Education-QS World University Rankings, and we explored how the rankings of institutions in different countries have changed over the years. In this article we revisit university rankings from a country and regional perspective.  Two of the most widely known world university rankings — the THE-QS World University Rankings* and the Academic Ranking of World Universities (ARWU) produced by Shanghai Jiaotong University — measure the performance of universities using a range of indicators, which are combined into an overall score that is used to determine the university’s rank. But how do different countries and regions perform for the various indicators on which their overall scores, and therefore rankings, are based? We investigate this question using data from the 2009 ranking exercises.

*In 2010 the THE-QS ranking split into two new ranking schemes: the first, produced by QS, continued with the same methodology; the second, produced by THE in collaboration with the information company Thomson Reuters, used modified methodologies, indicators, and data sources.

International flavour

The THE-QS ranking uses six indicators: academic peer review; employer review; faculty to student ratio; citations per faculty member; proportion of faculty that are international; and proportion of students that are international. An analysis of the average values for these indicators by country reveals an interesting pattern in the two ‘internationality’ indicators (see Figure 1): these measures tend to be higher in small, wealthy countries with a strong research base, such as Singapore, Ireland, Switzerland, and Hong Kong. Because these countries have a small domestic pool of students and researchers relative to the global pool, international students and researchers tend to be strongly represented at their universities. We also see high values of these measures for countries with a global research reputation, an English-language culture, and strong international links (see Figure 1), such as the UK (popular with students globally, owing to its historic research culture and reputation) and Australia, which is a popular higher education centre in the South Asia region. Interestingly, the US, whose institutions dominate the THE-QS top 200, scores relatively low for measures of ‘internationality’; this could be attributable to a number of factors, including regional isolation, high costs, and the large pool of domestic students and researchers available to populate its universities.

Figure 1 – Average scores for ‘internationality’ indicators in the 2009 THE-QS World University rankings, selected countries.
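As a concrete illustration of how the two ‘internationality’ indicators could be computed and placed on a common scale, the sketch below takes head counts for two invented universities and rescales each indicator so that the best performer scores 100. The real THE-QS scaling is more involved (it standardises scores across the whole ranked population), so both the data and the normalisation here are assumptions made purely for illustration.

# Minimal sketch of the two 'internationality' indicators, using invented head
# counts; the 0-100 rescaling against the best performer is only illustrative
# and is not the exact THE-QS normalisation.
universities = {
    "Univ A": {"faculty": 2000, "intl_faculty": 700,
               "students": 20000, "intl_students": 6000},
    "Univ B": {"faculty": 1500, "intl_faculty": 150,
               "students": 30000, "intl_students": 1500},
}

def internationality(u):
    """Raw proportions of international faculty and international students."""
    return {
        "intl_faculty": u["intl_faculty"] / u["faculty"],
        "intl_students": u["intl_students"] / u["students"],
    }

raw = {name: internationality(u) for name, u in universities.items()}

# Rescale each indicator so the best-performing university scores 100.
scores = {name: {} for name in raw}
for indicator in ("intl_faculty", "intl_students"):
    best = max(r[indicator] for r in raw.values())
    for name in raw:
        scores[name][indicator] = 100 * raw[name][indicator] / best

print(scores)
# {'Univ A': {'intl_faculty': 100.0, 'intl_students': 100.0},
#  'Univ B': {'intl_faculty': 28.57..., 'intl_students': 16.66...}}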

Language bias

The ARWU scheme also uses six indicators: major academic awards to alumni; major academic awards to staff; researchers in Thomson’s Highly Cited lists; publications in Nature and Science; volume of publications in Thomson’s Science Citation Index; and per capita performance (a score based on the first five indicators that is weighted by the number of staff at the university). A country-by-country breakdown of indicator scores (see Figure 2) reveals that countries with a strong English language culture perform well for the Highly Cited indicator (a measure based on data from the Web of Science database, which has a strong English language emphasis).

Figure 2 – Average scores for each indicator in the 2009 ARWU ranking, selected countries. Alumni = weighted count of alumni winning Nobel Prizes or Fields Medals; Award = weighted count of staff winning Nobel Prizes or Fields Medals; HiCi = count of highly cited researchers in 21 subject categories; N&S = count of articles published in Nature and Science in the most recent five years; PUB = count of articles indexed in the Science Citation Index-Expanded plus double the count of articles indexed in the Social Science Citation Index in the most recent year; PCP = Per Capita Performance, the weighted scores of the above five indicators divided by the number of FTE academic staff. Asterisks (*) indicate countries with a notably high HiCi score.
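To show how such indicator scores feed into a single rank-determining number, the sketch below combines six already-scaled scores using the published ARWU weighting (Alumni 10%, Award 20%, HiCi 20%, N&S 20%, PUB 20%, PCP 10%). The indicator values for the example institution are invented, and PCP is supplied as a ready-made score rather than recomputed from staff numbers.

# Sketch of an ARWU-style overall score: a weighted sum of six indicator scores,
# each assumed to be pre-scaled to 0-100 with the top institution at 100.
WEIGHTS = {
    "Alumni": 0.10,  # alumni winning Nobel Prizes / Fields Medals
    "Award": 0.20,   # staff winning Nobel Prizes / Fields Medals
    "HiCi": 0.20,    # highly cited researchers
    "N&S": 0.20,     # Nature and Science articles
    "PUB": 0.20,     # SCIE/SSCI-indexed articles
    "PCP": 0.10,     # per capita performance (see Figure 2 caption)
}

def overall_score(indicator_scores):
    """Weighted sum of the six indicator scores."""
    return sum(weight * indicator_scores[name] for name, weight in WEIGHTS.items())

# Hypothetical institution; all values are invented for illustration.
example = {"Alumni": 25.0, "Award": 30.0, "HiCi": 60.0, "N&S": 55.0, "PUB": 70.0, "PCP": 45.0}
print(round(overall_score(example), 1))  # 50.0

The country and regional averages plotted in Figures 2 and 3 can then be read as means of these per-institution indicator scores across the ranked universities of each country or region.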

Scaling up to regional level in the ARWU rankings, it emerges that institutions based in North America (the US and Canada) outperform institutions in other regions on average, according to the Highly Cited and Nature/Science publication indicators, both of which are measures of high impact research (see Figure 3). In contrast, we see that institutions in the Asia-Pacific region perform poorly for the two indicators that measure major awards to alumni and staff.

Figure 3 – Average scores by region, for indicators in the 2009 ARWU ranking. Indicators as per caption for Figure 2.

In 2009, both the THE-QS and ARWU university ranking schemes showed substantial country-level and regional-level differences in indicator scores. This raises the question of whether the indicator scores effectively measure the quality of individual universities, or whether they are too strongly influenced by global variation in the higher education/research system to allow meaningful comparisons between institutions that are located in different geographical zones.

Useful links:

THE-QS World University Rankings

Academic Ranking of World Universities


  • Elsevier has recently launched the International Center for the Study of Research (ICSR) to help create a more transparent approach to research assessment. Its mission is to encourage the examination of research using an array of metrics and a variety of qualitative and quantitative methods.