Research Trends recently spoke to Diana Hicks, Professor and Chair of the School of Public Policy, Georgia Institute of Technology, Atlanta, GA, USA, on “Powerful numbers”, or the question of how bibliometrics are used to inform science policy.

Diana Hicks

You recently gave a talk at the Institute of Science & Technology of China, where you gave examples of how bibliometric data were used by government officials to inform science funding decisions. Could you tell us how you discovered these numbers, and about the work you did in investigating their use?

In the talk there were five famous examples of science policy analyses that have influenced policy, plus one of my own.  Two I learned about because I was in the unit that produced the analyses at about the same time – Martin & Irvine’s influential analysis of the effect of declining science funding on UK output, and Francis Narin’s discovery of the strong reliance of US patents on scientific papers in areas such as biotechnology.  Two are very famous in science policy – Edwin Mansfield’s calculation of the rate of return to basic research (1), and the NSF’s flawed demographic prediction of a decline in US engineers and scientists.  Another I had seen presented at conferences and was able to follow in real time over the subsequent years – Linda Butler’s identification of the declining influence of Australian publications.  And the final one was my own: President Obama using a number related to one of my analyses in a speech.  I communicated with the authors to gather inside information on the influence the analyses exerted, and when that was not possible I searched the internet.  The grey literature in which policy influence is recorded is all indexed these days, making it possible to go back (not too far in time) and put together these kinds of stories.

Do you think bibliometric indicators make a difference and raise the level of policy debates, or are they only used when they agree with notions or objectives that policy makers had anyway, and ignored if they point in a different direction?

I think policy making is influenced by a complex mix of information, including anecdotes, news coverage and lobbying, as well as academic analyses.  And while it would be naïve to expect a few numbers to eliminate all debate and determine policy in a technocratic way, if we don’t bother to develop methods of producing numbers to inform the debates, only anecdotes will be available, and that would be a worse situation.

In a recent article you published in Research Policy (2) you gave an overview of country-based research evaluation systems, demonstrating the different approaches and metrics used.  In your view, should or could these systems be merged to create a comprehensive evaluation framework, or do different countries indeed need different systems?

The bulk of university funding comes from the national level in most countries, so systems to inform the distribution of that funding should be designed to meet the needs of national decision makers and their universities.  On the other hand, national leaders also want a university system that is internationally competitive; therefore international evaluation systems, such as global university rankings, would also be relevant, and high rankings could be rewarded with more resources.

Do you think these rankings have had an impact upon university research managers?

In the United States the domestic rankings of universities and of departments, especially business schools, have certainly influenced university management.  I think that, going forward, the global rankings will be very influential.  They allow universities to demonstrate achievement in a globally competitive environment.  Universities can also use a lower-than-ideal ranking as a resource when arguing for more money to improve their position.

Based on your experience studying research networks and collaborations, do you think that there is a way to direct these through strategic funding or other external methods, or are they organic processes led by research interests?

I consulted my colleague Juan Rogers, who has conducted studies of centers and various evaluation projects showing that US Federal research funding agencies have consistently tried to direct research networks and collaborations. He informs me that, arguably, the research center programs, especially those aiming at interdisciplinary research, are thought of either as facilitating collaborations by reducing transaction costs, or as capturing existing distributed networks and putting them under one roof to manage them as concentrated human capital. The results have been mixed vis-à-vis the management question. Networks with other shapes emerged, in which the centers are now nodes rather than informal teams of individual researchers (one of the points of distinguishing broad informal networks, which we labeled "knowledge value collectives", from networks with more explicit, agreed-upon goals and procedures, which we labeled "knowledge value alliances").

The agencies have also attempted to broker collaborations by taking a set of proposals that were submitted independently and asking the PIs to get together and submit a joint proposal that is bigger than each individual proposal (but maybe not as large as the sum of the individual proposals), with the intent not only to save some money and "spread the wealth around" but also in the hope of improving the science through the expanded collaborative arrangement. Again, results are mixed. It depends on whether those involved in the "shotgun marriage" can get along. We've seen cases that had huge qualitative consequences for a field (plant molecular biology, for example); others that fell apart, mainly due to personality clashes (according to our informants); and others that continued to coordinate their work to simulate the collaboration and satisfy the funding agency, but didn't do anything very differently than they would have if they'd worked on their original proposals (areas of earth science were examples here).

So to my mind the answer is that networks are de facto managed and manipulated, but that gaining control of them in order to set common goals and measure success in achieving them against invested resources, as an organization would do, is futile. If the networks are big enough, they will adapt, and many of the cliques will figure out how to game the manipulators and self-appointed managers. At the same time, more modest aims, such as drawing attention to problems that seem to be under-researched, may be reasonable goals for the agencies that intervene in the networks.

From your international experience working with science policy authorities, what are the main differences and/or similarities that you see between Western and Asian approaches to science funding, encouraging innovation and strengthening the ties between research and industry? (i.e. do you think the Western world pushes more for innovation that will translate into business outcomes, or vice versa?)

My experience, along with that of my colleague John Walsh, suggests that at least the US and Japan may look different, but they end up achieving much the same thing.  For example, people used to think there was very little collaboration between industry and universities in Japan because of restrictive rules concerning civil service employment.  But the data showed collaboration rates were similar in Japan and western countries.  Further investigation revealed that the Japanese had developed informal mechanisms that “flew below the radar” but were just as effective as the high profile, big money deals that pharmaceutical companies were signing with US universities in those days.

The push for applied research has been said to be very strong in Japan and China, to the extent that the governments are said not to be interested in basic research.  However, with so much research these days falling in Pasteur’s Quadrant (3), where contributions to both knowledge and innovation result, it is not clear that carefully constructed data would support the existence of big differences between East and West in this dimension.

What do you think are essential elements to creating a balanced and sustainable evaluative infrastructure for science? (e.g. diversified datasets, international collaborations)

There are several challenges in creating such an infrastructure, including private ownership of key resources, long-term continuity, and great expense.  An evaluative infrastructure must bring together disparate data resources and add value to them by federating different databases and identifying actors – people, institutions, agencies.  It must do this in real time.  And it must somehow provide access to resources that are at present accessed individually, in small chunks, because database owners are wary of losing their intellectual property.  This will cost a lot of money, so it doesn’t make sense for one agency, or maybe even one country, to do it.  Also, once you have set up the infrastructure, you want to keep it going.  All this suggests that the best solution is a non-profit institute, jointly funded by several governments, to engage in curating, federating, ensuring quality control and mounting the databases so that they are available to the global community.  The institute would need to be able to hire high-level systems engineers as well as draw on cheap but skilled manual labor for data cleaning.  This project would cost a lot of money, more money than funders are typically willing to spend on the social sciences.  This means we would need to get maximum value by using the infrastructure for more than just evaluation.

This vision is analogous to the way economic statistics are produced.   Governments spend a great deal of money administering the surveys that underpin standard economic measures such as GDP and employment.  Government departments do this year after year, so there is continuity in the time series.  Economists can gain access to the data, under specified conditions, to use for their research.  Unfortunately, one-off research grants are not going to get us to this end point.  Nor are resources designed for search and retrieval ever going to be enough without that extra added value that makes them analytically useful.
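
To make the federating and actor-identification step a little more concrete, here is a minimal sketch in Python, under stated assumptions: the record sets, the normalize_institution helper and the sample DOI are invented for illustration and do not describe any real system or database mentioned in the interview. The sketch merely links publication records from two differently formatted sources by normalizing institution names, a crude stand-in for the far richer disambiguation work such an institute would do.

# Illustrative sketch only (hypothetical data and helper): federate two toy
# publication record sets by grouping them under a normalized institution key.
import re

def normalize_institution(name):
    """Crudely normalize an institution name: lowercase, drop punctuation and filler words."""
    name = re.sub(r"[^\w\s]", " ", name.lower())                 # strip punctuation
    name = re.sub(r"\b(univ|university|inst|institute|of|the)\b", " ", name)
    return " ".join(name.split())                                # collapse whitespace

# Two hypothetical sources recording the same paper with differently formatted affiliations
source_a = [{"doi": "10.1000/x1", "institution": "Georgia Institute of Technology"}]
source_b = [{"doi": "10.1000/x1", "institution": "Georgia Inst. of Technology"}]

# Federate: group records from both sources under a shared, normalized actor key
federated = {}
for record in source_a + source_b:
    key = normalize_institution(record["institution"])
    federated.setdefault(key, []).append(record)

for key, records in federated.items():
    print(key, "->", len(records), "records")    # both records land under the same key

In practice, name normalization alone is nowhere near sufficient: real actor identification relies on authority files, persistent identifiers such as ORCID, and a great deal of the skilled manual cleaning Hicks mentions above.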

References

1. Mansfield, E. (1980) “Basic Research and Productivity Increase in Manufacturing”, The American Economic Review, 70(5), 863-873.
2. Hicks, D. (2012) “Performance-based university research funding systems”, Research Policy, 41(2), 251-261.
3. Stokes, D.E. (1997) Pasteur's Quadrant – Basic Science and Technological Innovation, Washington, DC: Brookings Institution Press.