Introduction

A standard way of bibliometrically analyzing the performance of an institution is to select all of its publications and then calculate publication- and citation-based indicators for the institution as a whole. But there are other ways of assessing performance, and these come in top-down and bottom-up varieties. In general, bottom-up approaches tend to produce more reliable results than top-down ones, and they also make it possible to examine performance at the level of groups and departments within an institution. Below, we also illustrate a new set of indicators based on “usage”.

Top-down and bottom-up approaches 

One of the most challenging tasks in bibliometric studies is to correctly identify and assign scientific publications to the institutions and research departments in which the authors of the paper work. Over the years, two principal approaches have been developed to tackle this task.

The first is the top-down approach, which is used in many, if not all, ranking studies of universities. In a top-down assessment, one typically notes the institutional affiliations of authors on scientific publications, and then selects all publications with a specific institutional affiliation. Even though this process is very simple, difficulties can arise. These can be conceptual issues (e.g., are academic hospitals always a part of a university?) or problems of a more technical nature (e.g., an institution’s name may appear in numerous variations). A bibliometric analyst must therefore be aware of these potential problems, and address them properly.
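To make the name-variation problem concrete, the sketch below matches raw affiliation strings against a small list of known variants. This is purely illustrative: the institution name and its variants are invented, and real studies rely on curated thesauri and fuzzy matching rather than a hand-made list.

```python
# A minimal sketch of selecting publications by institutional affiliation,
# allowing for name variants. All names here are hypothetical.
VARIANTS = {
    "university of exampletown",
    "univ. of exampletown",
    "exampletown university",
}

def belongs_to_institution(affiliation: str) -> bool:
    """Return True if a raw affiliation string matches a known variant."""
    normalized = affiliation.lower().strip().rstrip(".")
    return normalized in VARIANTS

records = [
    {"title": "Paper A", "affiliation": "University of Exampletown"},
    {"title": "Paper B", "affiliation": "Someplace Institute"},
]
selected = [r for r in records if belongs_to_institution(r["affiliation"])]
print([r["title"] for r in selected])  # ['Paper A']
```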

The second, bottom-up approach begins with a list of researchers who are active in a particular institution. The next step is to compile a preliminary list of all articles published by each researcher; these lists are then sent to the researchers themselves for verification, producing a verified publication database. This approach allows authors to be grouped into research groups, departments, research fields or networks, or the institution to be analyzed as a whole.
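In code, the bottom-up logic amounts to a grouping step over the verified researcher-publication records. A minimal sketch, with invented names and departments:

```python
# A minimal sketch of the bottom-up approach: roll verified
# researcher-publication records up to the department level.
from collections import defaultdict

verified = [
    {"researcher": "A. Smith", "department": "Physics", "paper": "P1"},
    {"researcher": "B. Jones", "department": "Physics", "paper": "P2"},
    {"researcher": "C. Brown", "department": "Biology", "paper": "P3"},
    {"researcher": "A. Smith", "department": "Physics", "paper": "P3"},
]

papers_by_department = defaultdict(set)
for record in verified:
    papers_by_department[record["department"]].add(record["paper"])

for department, papers in sorted(papers_by_department.items()):
    print(department, len(papers))
# Biology 1, Physics 3 -- and paper P3, shared by both departments,
# reveals a cross-department collaboration.
```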

While top-down approaches can be conducted more easily than bottom-up studies, mainly because they do not directly involve the researchers themselves, they are often less informative than bottom-up ones. For example, top-down approaches cannot inform managers about which particular researchers or groups are responsible for a certain outcome, nor can they identify collaborations between departments. So despite the ease of use of top-down approaches, there is a need to supplement them with bottom-up analyses to create a comprehensive view of an institution’s performance.

The analysis of usage data

A different method of assessing an institute’s performance is to analyze the ‘usage’ of articles, as opposed to citations of articles. Usage, in our analysis, is measured and quantified as the number of clicks on links to the full text of articles on Scopus.com, each click indicating a Scopus.com user’s intention to view the full text of an article. Here we use a case study of an anonymous “Institute X” in the United Kingdom as an example of what usage data analysis has to offer.
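At its simplest, this means tallying click events into per-article counts. A minimal sketch, assuming each click is logged as an (article, country) pair; the log format is invented for illustration:

```python
# A minimal sketch: aggregating full-text click events into usage counts.
# The event records are hypothetical; real logs carry far more detail.
from collections import Counter

click_events = [
    ("article-1", "US"),
    ("article-1", "AU"),
    ("article-2", "US"),
    ("article-1", "CA"),
]

usage_per_article = Counter(article for article, _country in click_events)
print(usage_per_article)  # Counter({'article-1': 3, 'article-2': 1})
```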

In this case study, we analyzed papers published from 2003 to 2009, and usage data from 2009. We first identified the countries that clicked through to the full text of articles with at least one author based at Institute X. Next, we determined the total number of full-text UK articles accessed by each country, and calculated the proportion of these that were linked to Institute X (that is, articles with at least one author based at the Institute). Finally, we identified the 30 countries with the highest proportion of downloads of articles affiliated with Institute X. The results are shown in Figure 1.
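Computationally, the steps above reduce to one ratio per country, sorted in descending order. A minimal sketch, assuming per-country download counts are already available (the numbers are invented):

```python
# A minimal sketch of the Figure 1 calculation: for each country, the share
# of its UK full-text downloads that involve Institute X. Counts are invented.
downloads_uk = {"US": 12000, "AU": 3000, "CA": 2500, "DE": 8000}
downloads_institute_x = {"US": 900, "AU": 240, "CA": 190, "DE": 320}

share = {
    country: downloads_institute_x.get(country, 0) / total
    for country, total in downloads_uk.items()
}
top_countries = sorted(share.items(), key=lambda item: item[1], reverse=True)[:30]
for country, fraction in top_countries:
    print(f"{country}: {fraction:.1%}")
# AU: 8.0%, CA: 7.6%, US: 7.5%, DE: 4.0%
```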

Figure 1 – For the top 30 countries viewing UK articles, the percentage of downloads of articles with at least one author from Institute X, compared to downloads of all articles with at least one author from the UK. Source: Scopus.

Figure 1 also shows that, of the 30 countries clicking through to the greatest number of full-text UK articles, the English-speaking countries of Australia, Canada and the US view the greatest proportion of articles originating from Institute X. This is shown geographically in the map in Figure 2.

Figure 2 – Who is viewing articles from Institute X? Source: Scopus.

Similarly, one can look at downloads per discipline to assess the relative strengths of an institute.

Figure 3 – Relative usage of Institute X’s papers per academic discipline, compared with UK papers in the same discipline. Relative usage is calculated as follows: (Downloads of Institute X / Papers from Institute X) / (Downloads UK / Papers UK). For Mathematics, Neuroscience, Nursing, Psychology and Health Professions, Institute X’s publications have a higher relative usage than those of the UK as a whole.
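The formula in the caption is easy to compute from four counts per discipline; a value above 1 means that, per paper, Institute X’s output is downloaded more often than UK output in that discipline. A minimal sketch with invented counts:

```python
# A minimal sketch of the relative-usage formula from Figure 3:
# (downloads_x / papers_x) / (downloads_uk / papers_uk). Counts are invented.
def relative_usage(downloads_x, papers_x, downloads_uk, papers_uk):
    return (downloads_x / papers_x) / (downloads_uk / papers_uk)

# Institute X: 10 downloads per paper; UK overall: 8 downloads per paper.
print(relative_usage(downloads_x=500, papers_x=50,
                     downloads_uk=80000, papers_uk=10000))  # 1.25
```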

We can also look at downloads over time. Figure 4 shows the increasing contribution of Institute X’s downloads to all UK downloads, suggesting that Institute X is playing an increasingly important role in UK research.

Figure 4 – Downloads of Institute X’s papers as a percentage of downloads of all UK papers, per year.
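The series in Figure 4 is simply Institute X’s downloads divided by all UK downloads in the same year. A minimal sketch, with invented yearly counts:

```python
# A minimal sketch of the Figure 4 series: Institute X's downloads as a
# percentage of all UK downloads, per year. The counts are invented.
downloads_x_by_year = {2007: 4000, 2008: 5200, 2009: 7100}
downloads_uk_by_year = {2007: 400000, 2008: 430000, 2009: 450000}

for year in sorted(downloads_x_by_year):
    pct = 100 * downloads_x_by_year[year] / downloads_uk_by_year[year]
    print(year, f"{pct:.2f}%")
# 2007 1.00%, 2008 1.21%, 2009 1.58% -- a rising share, as in Figure 4.
```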

As these examples demonstrate, usage data can be used for a number of different types of analyses. One major advantage they have over citation analyses is that citations only accrue in the months and years following publication, as new papers cite the article under analysis. Usage statistics, by contrast, begin to accumulate as soon as an article is available for download, and so can give a more immediate view of how researchers, and the groups and institutes to which they belong, are performing. And while the full meaning and value of usage data remain up for debate, usage analysis nonetheless represents a useful addition to the more conventional bibliometric analyses based on citations.
