In this era of fiscal austerity, government funding of scholarly research has become a pressing issue in most countries. While universities and research institutions could once rely on a continuous, steady flow of financial support, governments today increasingly tie funding to metrics, and the performance of tertiary education institutions has turned into a competition over those metrics. As a result, the development and use of Performance-Based Research Funding Systems (PRFS) has become a prominent topic of discussion among academics and government officials alike. December 2010 saw the publication of the proceedings of the OECD-Norway workshop “Performance-based Funding for Public Research in Tertiary Education Institutions”. Diana Hicks, Professor and Chair of the School of Public Policy, Georgia Institute of Technology, Atlanta, contributed the opening chapter to this volume, providing a comprehensive literature review of PRFS across thirteen countries. In discussion with Research Trends, Diana shares her views on some of the hot topics surrounding PRFS.


Critics of PRFS sometimes argue that such systems reinforce the influence of conservative scientific elders by punishing novel research and inhibiting the emergence of new and interdisciplinary research fields and departments. This raises questions about conservatism and innovation in science.

Diana stresses that PRFS disadvantage researchers not by distinguishing between young and old researchers as such, but by differentiating between established and new research departments. “One universal element of PRFS used amongst governments is that they reward demonstrated research excellence rather than potential or promise. Any department or institution new to research will go unrecognized in PRFS in the form they currently stand”. Similar concerns apply to the assessment of research in interdisciplinary scientific fields. Although such research is not necessarily seen as ‘new’ or ‘novel’, current evaluation systems disadvantage interdisciplinary research because evaluations tend to be based on publications in the core of a field. “What we see here is a cycle of accumulation: people on evaluation committees often form a picture of the best academics within a (core) field, coming from the best (core) departments, while at the same time being editors of the best (core) journals, with all focus on the core of the field. If, however, the aim of research is to link core A with core B, it would typically be evaluated as not belonging to either core, and thus not good”.


A simple solution, it seems, would be to build into the system a way of evaluating research relative to its discipline, though this would add an extra level of complexity. But how complex can a system become and still remain practically useful?

Diana recalls that systems of research evaluation typically start out relatively simple and become more complex over time as committees respond to criticism from academics. “Because the stakeholders of the system are in fact academics, there will always be ongoing research on how systems can be improved where they are unfair. Governments, striving for fairness and objectivity within their systems, respond in turn by implementing elaborations to the system”. Levels of complexity now vary among the PRFS used by different governments. “What we do not know, though, is what the actual costs and benefits are of increased complexity in any PRFS”. How much complexity and cost a system can bear while remaining manageable and workable is a question that current PRFS have not answered. “In doing specific cost-benefit calculations, governments may decide how much complexity is actually worthwhile”.


PRFS are inherently competitive, so there are incentives to manipulate the system for self-serving ends. How significant is this issue in current PRFS, and how should agencies assessing research respond?

“This is an unfortunate side effect in most PRFS,” says Diana. The solution, she believes, lies in making small, incremental adaptations to the system in every round of evaluation. “Governments can tweak the rules of assessment so that universities and institutions are hampered in manipulations aimed solely at improving their score. The aim of doing this is to minimize the focus on the metrics used for evaluation, while maximizing the focus on performance”.

Professor Diana Hicks

1985-1999: Faculty member at SPRU - Science and Technology Policy Research, University of Sussex, UK

1998-2003: Senior Policy Analyst at CHI Research, US

2003- : Professor and Chair of the School of Public Policy, Georgia Tech, US

Professor Hicks has taught at the Haas School of Business at the University of California, Berkeley, and worked at the National Institute of Science and Technology Policy (NISTEP) in Tokyo. She is an honorary fellow of the Science Policy Research Unit, University of Sussex, UK, and serves on the Academic Advisory Board of the Center for Science, Policy and Outcomes, Washington D.C.

Further reading:

OECD (2010). Performance-based Funding for Public Research in Tertiary Education Institutions: Workshop Proceedings, OECD Publishing.

http://dx.doi.org/10.1787/9789264094611-en
