Julia Lane (jlane@air.org)

Q: You have an economics and statistics background. Can you tell us how that background was leveraged in the development of the Science of Science & Innovation Policy (SciSIP) program?
A: It helped in two ways. First, it helped me engage with much of the social science community and get them interested in studying the very interesting problems in science and innovation policy. Developing a strong researcher community is the most important part of the program. The second was in working with colleagues to build a strong data infrastructure. The need for a standardized way to connect scientific researchers receiving funding with the output that they produce was apparent from the beginning, as data were scattered around many different systems and couldn’t be patched together. I spent a lot of my career working in areas related to labour, education and health policy – particularly building datasets necessary to understand the results of policy interventions. That meant that I had a strong background to draw on, particularly when the focus of the Federal stimulus package was to track how the money created jobs.

Q: STAR METRICS might be the first serious attempt to use a triangulated approach to evaluate the impact of government funding. What were the major forces that influenced the development of STAR METRICS? (e.g. government mandate? market forces?)
A: The overarching goal of the STAR METRICS program is to provide a better empirical basis for science policy. The program resulted from a federal mandate that asked institutions receiving stimulus grants to report on the jobs resulting from them. Responding to this mandate was difficult because there was no single system that captured these data in an automated, consistent and measurable way. We developed an approach that enabled the information to be captured in a relatively low-burden way. In addition, the federal agencies and the research institutions felt that this focus was far too narrow and that more aspects should be measured. Researchers funded by the SciSIP program had already developed some data, models and tools to respond to this need, and the Science of Science Policy Interagency group had developed a Roadmap (in 2008) that identified what key elements were necessary. This foundation, combined with input from agencies and research institutions, enabled us to start to build an open and automated data infrastructure that can be used by federal agencies, research institutions and researchers to document federal investments in science and to analyze the resulting relationship between inputs, outputs and outcomes.

Q: From your experience what are the major forces that inform and drive Science Policy? (e.g. scientific advancements, the scientists, Government budgets, public opinion)
A: I, and many others, believe that there is no single factor and that everything is endogenous. As with everything else, when it comes to funding and budgets there are many forces involved and everything depends on everything else. One of my favourite articles on this exact matter was written by Daniel Sarewitz in 2010 (1). In it he points to the importance of public opinion and, as a consequence, the politics of funding and the gaps between scientists’ perceptions and the public’s. One factor is interwoven with the other, really. We hope that our efforts to build an open data infrastructure that incorporates as many of these factors as possible will help inform this complex process.

Q: Do you see differences between countries in their approaches and methodologies in the evaluation of science? Can you name a few?

A: Most countries still use the number of publications and citations as an indicator of quality and productivity, and that is worrying. We want to identify and support the best science, and I think there is good evidence that counting publications is not sufficient. We do know that it is possible to identify what makes good science; tenure committees, academic administrators and peers routinely make decisions based on who they think is doing good science. The challenge is to get the community to identify what data form the basis for the decisions made by these committees. In the past we relied on personal judgements and close networks of people in a certain field who knew each other and each other’s work. Nowadays, with the growth in international collaboration and team science, as well as the interdisciplinary nature of science, these types of personal evaluations are no longer sustainable.

Q: There is a lot of buzz around the term “science policy” and its implications for innovation. In your opinion, does science policy encourage or discourage scientific novelty, or is it more of an organic process driven by discovery, budgets or other factors?

A: As an economist, I would describe it as an endogenous process, which means that funding is driven by science and science is driven by funding. Funding agencies always look for the next hot area of science to invest in. When funding is allocated, the particular field sees growth, which in turn attracts more funding. There is a constant exchange between scientific innovation and discovery on the one hand and investment on the other. The challenge is to sustain scientific progress so that funding remains available. This is an interesting process because we can see many examples of areas of research that died when funding was no longer available and, on the other hand, areas that stayed active and flourished even after funding was no longer available. That in itself is an indicator of influence and impact.

Q: Traditionally scientific impact was measured by citations and journals’ Impact Factors. Can you give an example of how the STAR METRICS’ triangulated approach integrated traditional methodologies as well as social, workforce and economic indicators?

A: We are just starting down that path – we hope that the community will help the program develop new and better approaches. We have started to build an Application Programming Interface (API) that, once launched, will permit the community to contribute their own insights. The API is based on NSF data, but will be extended to USDA data shortly. It uses new approaches, such as topic modelling techniques to mine large amounts of text (thanks to David Newman’s work at the University of California, Irvine), to describe NSF’s research portfolio. This work was combined with other new approaches, such as Lee Fleming’s work (at Harvard) to disambiguate the names of patent grantees in US Patent and Trademark Office data. A very skilled group of individuals worked to build that data infrastructure; the website that provides different lenses into this infrastructure can be seen here.
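To give a flavour of the topic-modelling idea mentioned above, the sketch below uses scikit-learn’s LDA implementation on a few invented award abstracts. It is only an illustration of the general technique, not the STAR METRICS pipeline itself (which builds on David Newman’s methods), and all of the sample text is made up.

    # Illustrative sketch: discover topics in a handful of invented award abstracts.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    abstracts = [
        "graphene transistor fabrication at the nanoscale",
        "protein folding simulation on high performance clusters",
        "labor market outcomes of federally funded graduate students",
        "nanoscale materials for energy storage devices",
    ]

    # Convert the abstracts into a bag-of-words matrix, dropping common stop words.
    vectorizer = CountVectorizer(stop_words="english")
    word_counts = vectorizer.fit_transform(abstracts)

    # Fit a small LDA model; n_components is the number of topics to extract.
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(word_counts)

    # Print the top words for each discovered topic.
    terms = vectorizer.get_feature_names_out()
    for i, weights in enumerate(lda.components_):
        top_words = [terms[j] for j in weights.argsort()[::-1][:5]]
        print(f"Topic {i}: {', '.join(top_words)}")

In a portfolio-description setting, the per-document topic weights (from lda.transform) could then be aggregated by funding program to summarize what kinds of research an agency supports.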

Q: What future developments would you like to see for STAR METRICS and Science Policy in general?
A: First, I’m encouraged by the growth in participating agencies and institutions, both domestically and internationally; in addition to major federal agencies (OSTP, NIH, NSF, DOE, USDA and EPA), more than 85 universities are participating. Internationally, Japan, Brazil, China and a number of European countries are actively exploring ways to evaluate science and innovation. There are plans to translate the Handbook of Science of Science Policy, which I edited with Kaye Husbands Fealing, Jack Marburger and Stephanie Shipp, into Japanese and Chinese.

I would like STAR METRICS to be thought of as more than a dataset and to be seen as an approach. We always have to remember that the mission is to identify the best science, and to keep the focus on that mission by employing modern approaches. We owe it to the taxpayer and to ourselves to make funding and other decisions in a scientific manner; we must make these investments as wisely as possible. At the very least, we must have some understanding of how these investments make their way through the economic and scientific system.

Q: Can you tell us about your new position and what you hope to achieve in your new role?

A: I joined the American Institutes for Research (AIR) as a Senior Managing Economist because of both their reputation for producing high-quality research and their international reach. As a government employee I wasn’t always able to work internationally, and that has always been a great interest of mine. AIR is a very high-quality research institution with a great deal of expertise in impact assessment and evaluation at both the international and domestic levels. I look forward to collaborating with institutions around the world.

Q: If there is one highlight or accomplishment that you could pick in your impressive career – what would it be?

A: Do you mean other than my children?
As far as my career goes, I’m very proud of the creation of the Longitudinal Employer-Household Dynamics (LEHD) program, which started as a small research project of mine and was eventually expanded to all 50 states. [Note: Julia won the Vladimir Chavrid Memorial Award for this program.]

About STAR METRICS

STAR METRICS is a collaboration between federal agencies and research institutions to create a repository of data and tools for assessing the impact of federal R&D investments. The National Institutes of Health (NIH) and the National Science Foundation (NSF), under the auspices of the Office of Science and Technology Policy (OSTP), are leading the project, which was developed after a successful pilot conducted with several research institutions in the Federal Demonstration Partnership (FDP). For more information, visit: https://www.starmetrics.nih.gov/

References

  1. Sarewitz, D. (2010). Nature: http://www.nature.com/news/2010/101110/full/468135a.html