Nobel Prizes can bring the winner fame, fortune and respect, but their impact can be felt beyond the individual winner. Established to recognize scientific and cultural achievements that benefit mankind, Nobel Prizes can also be used by bibliometricians to assess scientific research. Many Nobel Prize winners, particularly in the sciences, are affiliated with university departments; this brings recognition to the department and university and, more formally, has been used as a means of assessing research departments and universities.

Selling out?
According to many practitioners in the field of quantitative research assessment, the Academic Ranking of World Universities and other similar university rankings are primarily marketing tools rather than research-management tools. The Expert Group on the Assessment of University-Based Research underlined in its 2009 report (AUBR 2009) that institutional research performance is a multidimensional concept that may be poorly reflected in the currently available global rankings. A rank position in itself does not tell managers how to improve their institution’s performance. They need more detailed and accurate data on the research performance of their personnel, and they need to take the context and mission of their particular institution into account.

Over the last decade, there has been growing interest in the production of rankings to assess the performance of universities on a global scale. One particular ranking exercise, the Academic Ranking of World Universities (ARWU), produced since 2003 by Shanghai Jiao Tong University, includes an indicator based on Nobel Prizes and Fields Medals awarded to staff and alumni of the university that accounts for 30% of the overall score in the rankings.
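To see how awards can carry 30% of the total, it may help to sketch how ARWU combines its indicators. The following is a minimal illustration, not ARWU’s actual code; the weights follow the published methodology (Alumni 10%, Award 20%, HiCi 20%, N&S 20%, PUB 20%, PCP 10%), and the example university is hypothetical.

```python
# Sketch of ARWU-style weighted scoring. Weights follow the published
# methodology; the award-related indicators (Alumni + Award) together
# account for 30% of the overall score.
WEIGHTS = {
    "Alumni": 0.10,  # alumni winning Nobel Prizes / Fields Medals
    "Award":  0.20,  # staff winning Nobel Prizes / Fields Medals
    "HiCi":   0.20,  # highly cited researchers
    "N&S":    0.20,  # papers published in Nature and Science
    "PUB":    0.20,  # papers indexed in citation databases
    "PCP":    0.10,  # per-capita academic performance
}

def overall_score(indicator_scores):
    """Combine indicator scores (each on a 0-100 scale) into one overall score."""
    return sum(WEIGHTS[k] * indicator_scores.get(k, 0.0) for k in WEIGHTS)

# Hypothetical university: strong publication record, no awards.
scores = {"Alumni": 0, "Award": 0, "HiCi": 60, "N&S": 55, "PUB": 70, "PCP": 50}
print(round(overall_score(scores), 1))  # 42.0
```

A single new Nobel Prize to a staff member moves the Award indicator, which alone outweighs either of the per-capita or alumni components, so one rare event can shift the overall score noticeably.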

Winning a Nobel Prize is a rare event, and the distribution of Prizes across institutions changes little year on year (the Prizes have been running for over 100 years, with only a handful awarded each year), so do they have any effect on the evolution of rankings such as ARWU? To explore this question, we examined the relationship between large year-to-year rank changes and individual indicators used to determine universities’ overall scores in the ranking.

Winning by association

Having a staff member or alumnus win an award can give a substantial boost to a university’s position in the ARWU rankings. All institutions that rose by at least eight places in one year are associated with Nobel Prizes or Fields Medals awarded to staff or alumni (see Table 1). The emphasis given to these Prizes in the rankings reflects their rarity value and the fact that they mark the best research. But does such an award, given to one or a few individuals, tell you much about the overall quality of the broad range of research at large, multi-faculty universities?

Rank change (places gained) | Number of institutions | Reasons for change (Alumni / Award / HiCi / N & S / PUB / PCP)
20 1 1 1
18 1 1
16 2 1 1 1 1 1
15 1 1 1
14 2 1 1 1 1
12 2 1 1
10 6 1 1 2 1 1 4
9 3 1 2 2
8 4 1 1 1 1 1
7 1 1 1
6 4 1 2 3 1
5 2 1 1 1
Total 7 8 8 9 10 6

Table 1 – Large rank gains in the ARWU rankings 2004–2009, and the indicator changes associated with these rank changes. All universities that gained at least eight places between consecutive rankings did so in years when staff or alumni received a Nobel Prize or Fields Medal (highlighted). For full details of the indicators, see the ARWU website.

On the flip side, failure to win a Nobel Prize or Fields Medal in a given year does not tend to harm a university’s position in the ranking. Significant drops in rank are more often associated with the other ARWU indicators, which describe a university’s publication record (see Table 2).

Rank change (places) | Number of institutions | Reasons for change (Alumni / Award / HiCi / N & S / PUB / PCP)
-13 2 2 2 1
-10 4 1 3 3
-9 3 1 3
-8 3 2 1 2
-7 8 2 6 5
-6 5 2 2 4
-5 18 1 1 4 11 8 1
Total 43 1 1 14 28 23 1

Table 2 – Large rank declines in the ARWU university rankings 2004–2009 and the associated indicator changes. Big rank falls were not associated with falls in the score for Nobel Prizes and Fields Medals, but rather with the publication record (highlighted). This is because an institution cannot lose an award once it has been won, although ARWU does have a built-in mechanism that values an award less the longer it has been held. This may account for the small declines in ranking associated with the alumni and award scores.

This highlights an important point about the rankings: these big, rare awards, once won, continue to contribute to the overall score of a university even when the awards are effectively historical. Although an award’s effect on a university’s score does decline over the decades, these effects are negligible within the timescale that ARWU has been creating these rankings.
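The decay mechanism described above can be sketched as follows. This is an illustration based on the published methodology, under the assumption that an award’s weight falls by roughly 10 percentage points per decade since it was won; the function name and exact step are ours, not ARWU’s.

```python
# Sketch of decade-based decay for award scores (assumption: an award's
# weight drops by 10 percentage points per full decade elapsed since it
# was won, and never falls below zero).
def award_weight(award_year, ranking_year):
    """Fractional weight of an award when computing a given year's ranking."""
    decades_elapsed = max(0, (ranking_year - award_year) // 10)
    return max(0.0, 1.0 - 0.1 * decades_elapsed)

# A 1999 prize would still count in full in 2004, and at 90% by 2009:
print(award_weight(1999, 2004))  # 1.0
print(award_weight(1999, 2009))  # 0.9
```

Under this scheme, even a prize several decades old retains most of its weight across the 2003–2009 window in which ARWU has produced rankings, consistent with the negligible decay effects noted above.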

Furthermore, since Nobel Prizes are often awarded decades after the ground-breaking work was carried out, they do not reflect the current research strength of an institution. In addition, the prize-winning research may have been conducted at a completely different university, even though the university where the winner is currently employed receives the credit for the award.

In fact, Anthony van Raan has commented on the limitations of using Nobel Laureates as an indicator of institutional research performance: “‘Affiliation’ is a serious problem. A scientist may have an (emeritus) position at [ARWU] University A at the time of the award (which seems to be the criterion in the Shanghai study), but the prize-winning work was done at University B. The 1999 physics Nobel Laureate Veltman is a striking example (A = University of Michigan, Ann Arbor; B = University of Utrecht).” (1)

Tipping the scales

This raises two important questions about the value of using awards to assess universities. Can awards given to only a few individuals each year really contribute to an effective means of assessing a huge number of large, multi-faculty institutions? And is it right to use a measure that incorporates rapid gains, but does not allow for rapid declines?

If anything, this indicates the power of such awards, not only in recognizing specific examples of excellence, but in the way they are also taken as indicators of prestige for the universities, departments, and even research teams, associated with the recipient.


(1) van Raan, A.F.J. (2005) “Fatal Attraction: Conceptual and methodological problems in the ranking of universities by bibliometric methods”, Scientometrics, Vol. 62, No. 1, pp. 133–143.

Further reading:

Billaut, J-C.; Bouyssou, D. and Vincke, P. (2010) “Should you believe in the Shanghai ranking? An MCDM view”, Scientometrics, Vol. 84, pp. 237–263.
