The rise of university ranking systems has engendered status anxiety among many institutions and created a “reputation race” in which they strive to place higher up the university charts year on year. Concerns have been aired that this is leading to a homogenization of the university sector, as aspiring institutions imitate the model of more successful research-intensive institutions. And while ranking scores do capture an important aspect of each university’s overall quality, they say nothing about a diverse range of other issues, such as student satisfaction within these institutions.

U-Multirank is a new initiative to change this. The system — designed and tested by the Consortium for Higher Education and Research Performance Assessment, and supported by the European Commission — aims to increase transparency in the information available to stakeholders about universities, and to encourage functional diversity among institutions [1]. Unlike traditional university rankings such as the ARWU [2], QS [3] and THE [4] rankings, U-Multirank features separate indicators that are not collapsed into an overall score. In this article Frans van Vught, project leader of U-Multirank, discusses the development of the system and his hopes for it.

Traditional university ranking systems encourage institutions to focus on areas that carry the greatest ranking weight, such as scientific research performance. One benefit of these rankings is that they publicize the achievements of universities that perform well, albeit in this specific range of activities. Will U-Multirank move away from a culture of looking for success stories?

U-Multirank is a multi-dimensional, user-driven ranking tool, addressing the functions of higher education and research institutions across five dimensions: research, education, knowledge exchange, regional engagement and international orientation. In each dimension it offers indicators to compare institutions. In this sense it certainly focuses on the goals institutions set themselves. But unlike most current rankings, U-Multirank does not limit itself to one dimension only (research). It allows institutions to show whether they are winners or improvers over a range of dimensions.

As it is impossible to measure the quality of an institution directly, proxy measures, such as graduation rates and publication output, have to be used instead. Yet as Geoffrey Boulton argues, “[i]f ranking proxies are poor measures of the underlying value to society of universities, rankings will at best be irrelevant to the achievement of those values, at worst, they will undermine it.” [5] What criteria have you considered when selecting indicators, and are there indicators you would like to include but cannot at present?

When ranking in higher education and research we need to work with proxy indicators, since a comprehensive and generally accepted set of indicators for ‘quality’ does not exist. Quality and excellence are relative concepts and can only be judged in the context of the purposes that stakeholders attach to them. Quality in this sense is ‘fitness for purpose’, and purposes differ from stakeholder to stakeholder.

For the selection of U-Multirank’s indicators we used a long and intensive process of stakeholder consultation, involving a broad variety of stakeholders, including the higher education and research institutions themselves. This consultation reflected the criterion of ‘relevance’ in the indicator selection process. In addition we used the criteria of validity, reliability, comparability and feasibility. For ‘feasibility’ we focused on the availability of data and the effort required to collect extra data, while trying to ensure that data availability would not become the most important factor in the selection process. However, the empirical pilot test of the feasibility of U-Multirank’s indicators showed that data availability is limited, particularly in the dimensions of ‘knowledge exchange’ and ‘regional engagement’.

A recent report drew attention to U-Multirank’s ‘traffic light’ rating system, commenting that “institutions should not be ranked on aspects that they explicitly choose not to pursue within their mission.” [6] Is this a valid criticism? Could it lead — as the authors suggest — to a decrease in functional diversity as “institutions compet[e] to avoid being awarded a poor ranking against any of the criteria”?

I think this argument is invalid. U-Multirank is user-driven. This reflects the fundamental epistemological position that any description of reality is conceptually driven: rankings imply a selection of those aspects of reality that are assumed to be relevant. Any ranking therefore reflects the conceptual framework of its creator, which is why the creator of a ranking should be its user. In that sense U-Multirank is a ‘democratization’ of rankings: users themselves select the aspects on which institutions are compared.

We designed a tool that allows users to select the institutions or programs they are interested in. This is U-Map [7], a mapping instrument that allows the selection of institutional activity profiles. In U-Multirank only comparable institutions are compared: apples are compared with apples, not with oranges. Institutions that do not pursue certain mission aspects should not be compared on those aspects, and U-Multirank is designed to avoid this, so as not to encourage imitation or discourage functional diversity. On the contrary, U-Multirank shows and supports the rich diversity of higher education systems.

However harmful they may be to the goal of encouraging diversity, traditional ranking systems have the advantage that people know how to read them: the simplest comparison between universities is seeing which one has the higher rank. Will U-Multirank’s users need guidance to compare institutions?

We hope to address both the wish to have a general picture of institutional performance and the wish to go into detail. U-Multirank offers a set of presentation modes that allow a quick, general overview of multidimensional performance on the one hand and a more detailed comparison per dimension on the other. Testing these presentation modes with different groups of stakeholders showed that our approach was highly appreciated and that additional guidance was not needed. The general overview is presented in so-called ‘sunburst charts’ that show a multidimensional performance profile per institution (see Figure 1). The detailed presentations are offered as tables in which performance categories are shown per indicator.

Figure 1 – U-Multirank’s ‘sunburst’ charts “[give] an impression ‘at a glance’ of the performance of an institution”. [8] The charts show the performance of each institution across a number of indicators, with one ‘ray’ per indicator: the better an institution performs on an indicator, the longer the ‘ray’. The indicators are grouped into categories around the chart. These two charts show the performance profiles of two institutions: a large Scandinavian university (top) and a large southern European university (bottom).
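To give a rough idea of how such a chart encodes performance, below is a minimal sketch in Python with matplotlib. It is not the project’s own code, and the indicator names and scores are invented for illustration; the point is simply that each indicator becomes one ‘ray’ on a polar axis whose length grows with performance.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical indicator scores on a five-step performance scale;
# both the indicator names and the values are invented for illustration.
indicators = {
    "publication output": 5,
    "citation rate": 4,
    "graduation rate": 3,
    "patents awarded": 2,
    "regional joint publications": 3,
    "international staff": 4,
}

labels = list(indicators)
scores = np.array(list(indicators.values()), dtype=float)
n = len(scores)

# One "ray" per indicator, evenly spaced around a polar axis.
angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.bar(angles, scores, width=2.0 * np.pi / n * 0.9, bottom=0.0)

ax.set_xticks(angles)
ax.set_xticklabels(labels, fontsize=8)
ax.set_yticklabels([])  # hide radial ticks; the ray length alone carries the message
ax.set_title("Illustrative sunburst-style performance profile")

plt.tight_layout()
plt.show()
```

A fuller version would also colour the rays by dimension (research, education, and so on), which the same bar call supports via a per-bar list of colours.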

Curriculum Vitae: Frans van Vught

Frans van Vught (b. 1950) is a high-level expert and advisor at the European Commission. In addition he is president of the European Center for Strategic Management of Universities (ESMU), president of the Netherlands House for Education and Research (NETHER) and a member of the board of the European Institute of Technology Foundation (EITF), all based in Brussels. He was president and rector of the University of Twente, the Netherlands (1997–2005). He has been a higher education researcher for most of his career and has published widely in this field. His many international functions include membership of the University Grants Committee of Hong Kong, the board of the European University Association (EUA) (2005–2009), the German Akkreditierungsrat (2005–2009) and the L.H. Martin Institute for higher education leadership and management in Australia. Van Vught is a sought-after international speaker and has been a consultant to many international organizations, national governments and higher education institutions all over the world. He is honorary professor at the University of Twente and the University of Melbourne, and holds several honorary doctorates.

References:

  1. http://www.u-multirank.eu/
  2. http://www.arwu.org/index.jsp#
  3. http://www.topuniversities.com/university-rankings/world-university-rankings
  4. http://www.timeshighereducation.co.uk/world-university-rankings
  5. Boulton, G. (2010). University rankings: Diversity, excellence and the European initiative. League of European Research Universities Advice Paper. No. 3, June.
  6. Beer, J. et al. (2011). Let variety flourish. Times Higher Education, 2 June.
  7. http://www.u-map.eu/
  8. U-Multirank (2011). The design and testing of the feasibility of a multi-dimensional global university ranking. Draft version for distribution at the U-Multirank conference, Brussels, 9 June 2011.