Impact assessment is now a prominent technology for research governance in the United Kingdom (UK). The current focus on the impact of research beyond academia – “impact” being clearly the buzzword of the moment in UK research policy – has complex roots in policy discourses around wealth creation, user relevance, public accountability, and evidence-based decision-making (some of which I unpack in a forthcoming paper). Against this complex background, a grudging consensus is being forged around the importance of strengthening the connections between academic and non-academic contexts, while controversy continues around performance-based higher education funding and the extent to which universities ought to be held accountable by government (on behalf of the taxpayer) for the non-academic implications and outcomes of their research. While these pivotal principles, and the values underpinning them, are being renegotiated, much of the attention of both government and higher education institutions has been diverted towards the technicalities of designing and using measures of impact, under the direct influence of the forthcoming national research assessment exercise, the Research Excellence Framework (REF), due in 2014.


The impact agenda and outcomes-based allocation of public funding for research

In a policy and governance context that favors selectivity and concentration, and against the background of economic crisis, research funding is no longer defined in policy circles as a long-term investment in intrinsically worthwhile activities. Rather, in what is described as a knowledge and innovation economy, research is expected to make a case for funding in terms of external value (1, 2). Assessing and demonstrating the non-academic impact of publicly funded university research has thus become a key element of recent UK research policy. The pursuit of research impact is now a priority for both arms of the UK public research funding system, known as the “dual support” system (3), as well as for the direct commissioning of research by government departments and agencies. The “dual support” system comprises two separate funding streams: support for core research infrastructure, in the shape of outcome-based block grants distributed by the four national higher education funding councils and informed by the outcomes of the Research Excellence Framework, or REF (which was preceded by the Research Assessment Exercises that, between 1986 and 2008, informed the selective allocation of these core public grants to higher education research); and support for project expenditure, allocated competitively by the seven UK Research Councils.

The Royal Charters of the UK Research Councils and their current strategic framework, “Excellence with Impact”, draw direct links between good research and social, cultural, health, economic and environmental impacts. At proposal stage, the Councils are interested in potential impacts and in the ways in which they will be pursued; for example, they require impact summaries and “pathways to impact” statements in applications for funding. At the end-of-award reporting stage, the Councils are also interested in the actual impacts achieved by a project over its lifetime. The Research Councils’ interest in impact pre-dates the REF (e.g. 4, 5, and 6) and is also evident in their commissioning of studies of research impact, knowledge transfer, practice-based research, and industry engagement, many of which are evaluation studies. Examples include the areas of engineering and physical sciences (7), medical research (8), arts and humanities (9), and the social sciences (10, 11, 12, and 13). There is also a wealth of commissioned impact “case studies” that showcase successful practice (14). On this basis, the Councils have produced guidelines and “toolkits” for impact – see, for example, the Economic and Social Research Council’s online “Impact Toolkit” and Impact Case Studies. Other key players in the recent impact debates have been the British Academy, which produced its own reports on the role of the humanities and social sciences in policy-making (15, 16), the Royal Society, the Universities UK action group, various learned societies, and research charities such as the Wellcome Trust and Jisc (formerly the Joint Information Systems Committee for higher education, now a charity aiming to foster “engaged research”).

The most controversial and publicly visible move towards prioritizing research impact was its introduction, following public consultation and a pilot exercise in 2010, as one of the three key components (alongside the quality of research outputs and the research environment) of the Research Excellence Framework, the national exercise for the assessment of higher education research in the UK, due in 2014. Impact has thus become part of the mechanism for performance-based research governance, as the REF is intended to inform the selective allocation of public funds, to function as a mechanism for accountability, and to enable higher education benchmarking. The current documentation for the REF gives impact a 20% weighting in the final grade profile awarded to a submitting institution – down, following public consultation, from an initially proposed 25%. For the purposes of the REF, impact is defined as “an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia” (17, 18). It will be assessed by academic and user reviewers on the basis of standard-format case studies and unit-level strategic statements, using the twin criteria of “reach” (or breadth) and “significance” (or depth) of impact. In preparing their submissions, universities are currently grappling with the need simultaneously to define, track and demonstrate the impacts of their research, a task for which they have been largely ill-prepared in terms of infrastructure, capacity, management and strategy. Important challenges at the moment concern the variable time lag between carrying out research, achieving impact, and documenting and reporting it; the difficulties involved in either attributing (parts of) non-academic changes and benefits to particular research projects and outcomes, or demonstrating the material and distinctive contribution of this research to such changes; and the evidencing of chains of action and influence that may not have been documented at the time of their occurrence.
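
To illustrate the arithmetic with a hypothetical example: alongside the 20% weighting for impact, the REF documentation assigns 65% to research outputs and 15% to environment, and the three sub-profiles are combined into an overall quality profile by weighted averaging at each star level. A submission judged to have, say, 30% of its outputs, 60% of its impact and 40% of its environment activity at the highest (4*) level would thus have 0.65 × 30 + 0.20 × 60 + 0.15 × 40 = 37.5% of its overall profile at 4*, and similarly for the lower star levels.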

As a consequence of these initiatives, UK higher education-based researchers are now subject to multiple requirements to assess and demonstrate the impact of their work, in a variety of contexts and for a range of different purposes. The impact to be “demonstrated” could be that of a project or research unit, of a program, of a funding body or strategy, of an area of research, or of the research system as a whole – each captured at different points in time, and relative to varying time horizons and to different types and methodologies of research. Additional pressure is exerted on academic research by competition from other research settings, such as private and third-sector research, both of which may have a sharper focus on non-academic benefits as part of their rationale. Increasingly, too, public expectations of higher education-based research are influenced by the fact that other areas of public service – including health, transport and urban planning, but also culture and heritage, media, and sports – face tighter requirements to account for their use of public funding in terms of outcomes and benefits.


Capturing research impacts

The current interest in research impact, spurred on by the forthcoming REF 2014, has stimulated a growing body of literature (6, 19). Together with practical experience in program evaluation and policy analysis, this literature already underpins a small industry around designing and using instruments for measuring and reporting the socio-economic impacts of research. It has also inspired the production of various freely or commercially available tools for impact tracking and visualization, such as ImpactStory and Altmetric. Examples of methodological literature include the report to HEFCE and BIS on the REF impact pilot (20); the report on frameworks for assessing research impact (21); the report on knowledge transfer metrics commissioned by UNICO (22); and, internationally, the guides produced by projects such as ERiC (23) and SIAMPI (24). This technical literature is complemented by more conceptual work on higher education, research policy-making, and the relationships between research and processes of change at all levels of society.

There is also wide recognition that in the current context for research it is particularly important to reflect critically on the various strategies for increasing and demonstrating research impacts being used or promoted in different institutions and disciplines (see 25, 26, and the LSE Impact blog). In the UK, a number of centers have made notable contributions to this process, among them the Research Unit for Research Utilisation at the Universities of Edinburgh and St Andrews (6, 19), the Health Economics Research Group at Brunel University (27), the Public Policy Group’s HEFCE-funded Impact of Social Sciences project at the London School of Economics (11), the Science and Technology Policy Research Unit (SPRU) at the University of Sussex (28, 29), and, most recently, the DESCRIBE project at the University of Exeter.


Concluding comment

Additional studies and evaluations based in, and commissioned by, individual universities and university mission groups have highlighted the connections between institutional contexts and interpretations and practices of impact; examples include reports for the University of Oxford (25); for the University of Cambridge (9); for the Russell Group universities (30); for the 1994 Group (31); and for the Million+ group, formerly the Coalition of Modern Universities (32). These studies explore the ways in which universities have adapted the policy-driven impact agenda to their own ways of working and to their longer-term concerns with the quality, sustainability and benefits of research activity. Impact may be the buzzword of the moment, but universities had reflected on their wider mission long before impact was deemed a metaphor worth turning into a governance technology. Many have embedded the effort to capture research impact in their wider social accountability projects and plugged it into their continued public engagement, community interaction and outreach activities (26). In doing so, they are reinterpreting the official agenda and articulating alternatives. These reinterpretations – and their visibility and weight in the public domain – are essential if impact is not to become yet another measure rendered meaningless by being reduced to a performance target.

For impact indicators to be an adequate proxy for research value, they need not only to be technically refined measures, but also to be pitched at the right level, so that they catalyze, rather than destabilize, higher education activity. To function in this way, they depend on a healthy ecology of higher education, which in turn requires intellectual autonomy, financial sustainability and insightful governance. Without these preconditions, the high-stakes assessment of impact may fail to reflect and support ongoing research value, and end up simply capturing assessment-driven hyperactivity.


References

(1) BIS (2011) Innovation and Research Strategy for Growth. London: Department for Business, Innovation and Skills.
(2) Arts Council England (2012) Measuring the Economic Benefits of Arts and Culture. BoP Consulting.
(3) Hughes, A., Kitson, M., Bullock, A. and Milner, I. (2013) The Dual Funding Structure for Research in the UK: Research Council and Funding Council Allocation Methods and the Pathways to Impact of UK Academics. BIS report.
(4) RCUK (2002) Science Delivers. Available at: http://www.rcuk.ac.uk/documents/publications/science_delivers.pdf
(5) RCUK (2006) Increasing the Economic Impact of the Research Councils. Available at: http://www.rcuk.ac.uk/documents/publications/ktactionplan.pdf
(6) Davies, H., Nutley, S. and Walter, I. (2005) Approaches to Assessing the Non-academic Impact of Social Science Research. Report of an ESRC symposium on assessing the non-academic impact of research, 12-13 May 2005.
(7) Salter, A., Tartari, V., D’Este, P. and Neely, A. (2010) The Republic of Engagement: Exploring UK Academic Attitudes to Collaborating with Industry and Entrepreneurship. Advanced Institute of Management Research.
(8) UK Evaluation Forum (2006) Medical Research: Assessing the benefits to society. London: Academy of Medical Sciences, Medical Research Council and Wellcome Trust.
(9) Levitt, R., Celia, C., Diepeveen, S., Ní Chonaill, S., Rabinovich, L. and Tiessen, J. (2010) Assessing the Impact of Arts and Humanities Research at the University of Cambridge. RAND Europe.
(10) LSE Public Policy Group (2007) The Policy and Practice Impacts of the ESRC’s ‘Responsive Mode’ Research Grants in Politics and International Studies. ESRC report.
(11) LSE Public Policy Group (2008) Maximizing the Social, Policy and Economic Impacts of Research in the Humanities and Social Sciences. British Academy report.
(12) Meagher, L.R. and Lyall, C. (2007) Policy and Practice Impact Case Study of ESRC Grants and Fellowships in Psychology. ESRC report.
(13) Oancea, A. and Furlong, J. (2007) Expressions of excellence and the assessment of applied and practice-based research, Research Papers in Education, Vol. 22, No. 2.
(14) RCUK (2012) RCUK Impact Report 2012. Available at: http://www.rcuk.ac.uk/Documents/publications/Impactreport2012.pdf
(15) British Academy (2008) Punching Our Weight: The role of the humanities and social sciences in policy-making. London: BA.
(16) British Academy (2004) ‘That Full Complement of Riches’: The contributions of the Arts, Humanities and Social Sciences to the nation’s wealth. London: BA.
(17) REF (2011, updated 2012) Assessment framework and guidance on submissions. REF 02.2011.
(18) REF (2011) Decisions on assessing research impact. REF 01.2011.
(19) Nutley, S., Percy-Smith, J. and Solesbury, W. (2003) Models of Research Impact: A cross-sector review of literature and practice. London: LSDA.
(20) Technopolis Ltd (2010) REF Research Impact Pilot Exercise Lessons-Learned Project: Feedback on Pilot Submissions. HEFCE.
(21) Grant, J., Brutscher, P.-B., Kirk, S.E., Butler, L. and Wooding, S. (2009) Capturing Research Impacts: A review of international practice. HEFCE/RAND Europe.
(22) Holi, M.T., Wickramasinghe, R. and van Leeuwen, M. (2008) Metrics for the Evaluation of Knowledge Transfer Activities at Universities. UNICO report.
(23) ERiC (2010) Evaluating the Societal Relevance of Academic Research: A guide. Evaluating Research in Context, Netherlands.
(24) SIAMPI (2010) SIAMPI Approach for the Assessment of Social Impact. Report of SIAMPI Workshop 10.
(25) Oancea, A. (2011) Interpretations and Practices of Research Impact across the Range of Disciplines. HEIF/Oxford University.
(26) Ovseiko, P.V., Oancea, A. and Buchan, A.M. (2012) Assessing research impact in academic clinical medicine: a study using Research Excellence Framework pilot impact indicators. BMC Health Services Research, 12:478.
(27) HERG and RAND Europe (2011) Project Retrosight: Understanding the returns from cardiovascular and stroke research. Cambridge: RAND Europe.
(28) Martin, B.R., and Tang, P. (2007) The benefits from publicly funded research. University of Sussex.
(29) Molas-Gallart, J., Tang, P., Sinclair, T., Morrow, S., and Martin, B.R. (1999) Assessing Research Impact on Non-academic Audiences. Swindon: ESRC.
(30) Russell Group (2012) The Social Impact of Research Conducted in Russell Group Universities. Russell Group Papers, 3.
(31) McMillan, T., Norton, T., Jacobs, J.B., and Ker, R. (2010) Enterprising Universities: Using the research base to add value to business. 1994 Group report.
(32) Little, A. (2006) The Social and Economic Impact of Publicly Funded Research in 35 Universities. Coalition of Modern Universities.