As Chief Executive Officer of the Australian Research Council, Professor Margaret Sheil is responsible for the Excellence in Research for Australia (ERA) initiative. ERA aims to assess research quality in the Australian higher education context by combining indicators with expert review. In this article Research Trends talks with Professor Sheil about ERA and its consequences.

Broadly speaking, Australian universities receive funding through two routes: competitive research grants, which are awarded to academics based on the strength of their proposals; and ‘block funding’, which is given to universities to cover the indirect costs of doing research in a university, such as maintaining libraries and IT systems, hiring support staff, and so on. In general, universities get more block funding if they pull in more competitive grants, and generate more publications and graduates. “This can lead to a mindset of ‘doing better by doing more’,” says Sheil. “Research excellence assessments ask whether we’re driving research excellence rather than just research quantity.”

How does the Excellence in Research for Australia framework achieve this goal?

The Excellence in Research for Australia (ERA) framework came into existence in 2009, replacing the Research Quality Framework (RQF) established under the previous government (but never actually implemented). Whereas the RQF tried to develop a one-size-fits-all model across the university sector, “we know that different disciplines judge quality differently,” says Sheil. So ERA takes a ‘matrix approach’ that draws on a range of indicators that collectively can be applied to the whole sector, even while some components carry more weight in certain disciplines than others. “We decided to look at indicators of quality that are accepted within disciplines, clustered like-minded disciplines together, and said ‘If there are robust metrics, we’ll use them’”. ERA breaks down the universe of research into 157 disciplines, and for 101 of these, citation analysis was a key indicator (especially in the physical sciences). “Where there wasn’t confidence that metrics would work, or in areas such as the humanities where books are more important than journal publications, we used expert peer review as an indicator of quality,” says Sheil. “There’s a lot of confidence about citation analysis in many disciplines, but we believe we need experts to judge whether this makes sense for a particular discipline, and what it means in the context of other indicators.” These include ‘esteem’ indicators, such as how many members of a department belong to learned academies; ‘applied’ indicators, such as the number of patents produced and the income generated through commercialization of research; and success in gaining competitive grants, which have an in-built quality-control component. Finally, ERA has produced a list of journals ranked by quality, which has been used to look at how many publications from a discipline get into the higher-ranking journals. “All of these indicators are grouped by discipline and by university, and then expert committees look at the total of the indicators and derive an overall assessment score,” says Sheil.

The first ERA report was released in 2010, and another will be published in 2012. What has been learned in these early days?

“There has been a misunderstanding that ERA is about ranking the quality of whole universities, rather than individual disciplines,” says Sheil. “We don’t think university rankings are meaningful, but we could have been clearer about this.” For instance, The Australian, a leading national newspaper, took the assessment scores of disciplines within universities, combined these scores and then averaged them to create an overall ‘university ranking’ score. “That doesn’t make any sense,” says Sheil. A small university specializing in, for example, theology may be world-class in that discipline and would thus be placed high in such a ranking, above larger universities that score highly on some disciplines but lower on others. Averaging in this way can mask pockets of excellence in institutions with a broad disciplinary remit.
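Purely as an illustration of why this kind of averaging is misleading, consider the toy comparison below. The institutions and scores are hypothetical, not ERA results, and the simple mean is only a stand-in for the newspaper’s combined score.

```python
# Hypothetical discipline scores for two made-up institutions (not ERA data),
# illustrating how averaging discipline scores can rank a small specialist
# institution above a broad one and hide the latter's pockets of excellence.

small_specialist = {"Theology": 5}                      # world-class in its single field
broad_university = {"Physics": 5, "Chemistry": 5,       # two world-class disciplines...
                    "History": 3, "Economics": 2}        # ...diluted by weaker areas

def naive_ranking_score(scores):
    """Mean of discipline scores, as used in the illustrative 'university ranking'."""
    return sum(scores.values()) / len(scores)

print(naive_ranking_score(small_specialist))   # 5.0  -> tops the combined ranking
print(naive_ranking_score(broad_university))   # 3.75 -> ranked lower despite two world-class disciplines
```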

The past couple of years have also seen controversy over ERA’s journal ranking, a core element of the system. “We learned that we haven’t managed to stop the obsession with journal rankings, which is the most commonly misunderstood aspect of ERA.”

Some commentators have claimed that the journal rankings are not fair, and that some disciplines suffer as a result of the rankings settled on in ERA. How do you respond?

Some observers, says Sheil, seem to think that ERA is solely about the journal rankings (each journal is given a single quality rating of A*, A, B or C), which is why they’ve received such attention. “There’s a view that if you don’t have a high number of A and A* journals in your discipline, it’s disadvantaged. But if you look at zoology, which is a strong focus of Australian biology, there are hardly any A* journals but the discipline still performed very well because the work done in this area is highly cited and scores well on other indicators in the assessment matrix,” says Sheil. That’s not to suggest that the journal ranking system is perfect. “Did we get some journals wrong? For sure — there are 22,000 journals to rank, after all! But because people have got a bit obsessive about this, especially journal editors, we’re currently looking at what impact the journal ranking element had on overall assessments.”

Is there an inherent danger in performance-based research assessment that it can discourage exploratory, novel research?

“If we used the assessment outcomes to decide every allocation of research funding, this would be a real concern,” says Sheil. “But we’re introducing other things into the grant side of the business to counterbalance that.” In addition, while ERA “looks backwards” to enable the government to assess whether direct block funding pumped into universities has been well spent, grants are essentially “forward looking”, and therefore based on different criteria. “It’s really important that when it comes to grant giving — where we’re assessing potential — that we recognise these issues, and continue to invest in and take risks on the next generation of research.”

Professor Margaret Sheil (FTSE FRACI C Chem)

1990–2000: Lecturer in the Department of Chemistry, University of Wollongong (UoW), Australia

2001: Dean of Science at UoW

2002–2007: Deputy Vice-Chancellor (Research) at UoW

2007– : Chief Executive Officer of the Australian Research Council

Professor Sheil is a member of the Cooperative Research Centres Committee, the Prime Minister’s Science, Engineering and Innovation Council and the National Research Infrastructure Council. She is also a member of the Board of the Australia-India Council, the Advisory Council of the Science Industry Endowment Fund and the National Research Foundation of Korea. She is a Fellow of the Academy of Technological Sciences and Engineering (FTSE) and the Royal Australian Chemical Institute (FRACI).
