Issue 33 – June 2013

Articles

The science that changed our lives

In this contribution Dr. Gali Halevi pays tribute to Francis Narin and his work on measuring the connection between science and innovation, and provides access to the original TRACES report, first published in 1968.



A tribute to Francis Narin and his contribution to understanding the linkage between science and innovation

A discussion about the societal effects of science would not be complete without discussing the linkage between basic science and patents. Patents are seen as the embodiment of research, as they describe the unique processes, methodologies and products that result from extensive scientific work. Patents are the link between science and the market, between concepts and prototypes, and they serve as a step in the process of converting ideas into economic growth.

This topic was the focus of the American Competitiveness Initiative of 2006. One of the examples given by the White House at that time was the basic science that led to the development of the iPod™ (see Figure 1). This type of linkage between basic science and innovative products is at the heart of Francis Narin’s work: he was the first researcher to investigate it by studying the connections between basic research and innovation.

Figure 1 - Impact of basic research on innovation. Source: American Competitiveness Initiative of 2006

In what he himself described as “probably his last paper”, “Tracing the Paths from Basic Research to Economic Impact” (1), Francis Narin provides a glimpse into his pioneering work, which changed the way government and industry measure the value of basic science. In his long career, Narin published over 50 articles on this linkage, examining citation exchanges between basic research and intellectual property in numerous subject areas, such as Biotechnology (2), Agriculture (3), Human Genome Mapping (4), and Eye Care Technologies (5). Collaborating with researchers from around the world, Narin dedicated his career to studying the connections between scientific citations and patents, and to measuring the economic strength of countries, companies and even the stock market (6) through their scientific and intellectual property capabilities. Through the years, Narin and his colleagues were able to show that basic science not only strengthens a country’s academic and scientific competency, but also has a direct effect on its economic prosperity through the translation of science into products and services.

One of the examples given by Narin in his last article is the connection between his own citation influence methodology and the development of Google. The “citation influence methodology”, developed in the 1970s, maps the citation links from a specific journal to the journals it cites most heavily, allowing an influence map of sub-fields to be created. This methodology was later cited by Sergey Brin and Larry Page as the basis for their PageRank internet search algorithm. PageRank became Google’s most distinctive feature, differentiating it from other search engines and enabling its enormous success.
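The family resemblance between citation influence weighting and PageRank can be illustrated in a few lines of code: both let a node’s weight depend on the weights of the nodes citing it, computed by iteration. The sketch below is a generic PageRank-style power iteration over a toy journal citation matrix, not a reconstruction of either algorithm’s published details; all numbers are invented.

```python
import numpy as np

def influence_weights(citation_counts, damping=0.85, tol=1e-9, max_iter=200):
    """PageRank-style influence scores for journals.

    citation_counts[i][j] = number of citations from journal i to journal j.
    A journal is influential when it is cited by influential journals.
    """
    C = np.asarray(citation_counts, dtype=float)
    n = C.shape[0]
    # Each journal spreads its influence over the journals it cites;
    # a journal that cites nothing spreads it evenly over all journals.
    row_sums = C.sum(axis=1, keepdims=True)
    safe = np.where(row_sums == 0, 1.0, row_sums)
    P = np.where(row_sums > 0, C / safe, 1.0 / n)
    scores = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        updated = (1 - damping) / n + damping * (scores @ P)
        converged = np.abs(updated - scores).sum() < tol
        scores = updated
        if converged:
            break
    return scores

# Toy matrix: the third journal is cited most heavily and gets the top score.
print(influence_weights([[0, 1, 5],
                         [2, 0, 3],
                         [1, 1, 0]]))
```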

The original "Technology in Retrospect and Critical Events in Science" (TRACES) report (1968) is available here.

In the words of Francis Narin and his colleagues at CHI Research, a firm pioneering in the analysis of patent citations:

“Science Linkage is a measure of the extent to which a company’s technology builds upon cutting edge scientific research. It is calculated on the basis of the average number of references on a company’s patents to scientific papers, as distinct from references to previous patents. Companies whose patents cite a large number of scientific papers are assumed to be working closely with the latest scientific developments.” (7)
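Read literally, the Science Linkage indicator reduces to a simple average over a patent portfolio. The sketch below assumes hypothetical patent records with invented field names and counts; it illustrates the definition rather than CHI Research’s actual implementation.

```python
def science_linkage(patents):
    """Average number of references to scientific papers per patent,
    as distinct from references to earlier patents (kept in the records
    only to mirror the definition; not used in the average)."""
    if not patents:
        return 0.0
    return sum(p['refs_to_papers'] for p in patents) / len(patents)

# Hypothetical portfolio: field names and counts are invented.
portfolio = [
    {'refs_to_papers': 7,  'refs_to_patents': 12},
    {'refs_to_papers': 3,  'refs_to_patents': 9},
    {'refs_to_papers': 11, 'refs_to_patents': 4},
]
print(f"Science Linkage: {science_linkage(portfolio):.2f}")  # 7.00
```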

Economic strains and government deficits make Narin’s work more important than ever. As governments look to cut funding budgets in order to balance national debt, scientific activities often face dwindling resources. Narin’s work plays a central role in proving the importance of continuous government support for the sciences, as they are directly linked to industrial advancement and economic growth. The article “The Increasing Linkage between U.S. Technology and Public Science” (8), published in 1997 by Narin, Hamilton and Olivastro, is one of Narin’s seminal articles and has been cited over 300 times by researchers from various disciplines (see Figures 2-3).

Figure 2 - Number of citations to “The Increasing Linkage between U.S. Technology and Public Science” over time

In this article the authors performed a systematic examination which demonstrated the direct linkage between publicly funded science and industrial technology, providing the empirical and methodological evidence needed to justify continuous government support of the basic sciences. Whether conducted at universities or in laboratories, publicly funded research supported by government agencies such as the NIH and the NSF has been shown to be heavily cited in technological and innovative patents. The importance of such evidence for facilitating budgetary allocations to scientific endeavors is illustrated by the fact that citations to this article still grow every year.

Figure 3 - Disciplines citing “The Increasing Linkage between U.S. Technology and Public Science”

This innovative methodological investigation has led to an explosion of studies into the connection between basic science and innovation, with a marked surge in publications after the economy plunged in the 2008 financial crisis (see Figure 4).

Figure 4 - Publications focusing on basic science and innovation, 1996–2012

Narin’s contribution to our understanding of the connection between basic research, innovation, industry and the economy brought forth the need to demonstrate the importance of other disciplines to this process, for example the Social Sciences. Using Narin’s methodology of tracing non-patent literature citations in patents, Halevi and Moed demonstrated in this publication (9) how basic research in Library & Information Science was used in the development of search engines by technology companies, including the above-mentioned citation influence methodology. The contribution of the Social Sciences to innovation was the subject of a 1982 article by Tornatzky et al. (10), which argued that the Social Sciences have been ignored in the general debate regarding national productivity and innovation mainly because their findings are usually nonproprietary in nature. Yet Social Science has been shown to be instrumental as a decision aid, a source of social technology, and a tool for understanding innovation and productivity. An example of this can be seen in Lavoie (11), who demonstrated the vital role of social scientists and their expertise in the field of regenerative medicine in “providing a comprehensive framework to include both technology and market conditions, as well as considering social, economic, and ethical values” (p. 613).

Regardless of the discipline, tracking the connection between research and innovation is of immense importance, especially in turbulent economic times, when research must prove its economic and social value. Many factors in today’s scientific landscape, the most prevalent being budgetary constraints, make the ability to measure Return on Investment (ROI) crucial for funding decisions. Academic and other publicly funded research is being scrutinized in search of a metric or evaluative model that will enable decision makers to assess its impact on the economy and society as a whole. Francis Narin offers a sound methodology and empirical measurements to track these linkages and demonstrate the crucial role science plays in building a sustainable economy based on technological and industrial innovation. This type of study will remain important in the years to come, as interest in assessing the societal impact of scientific research is rapidly increasing and the public becomes more involved in, and better informed about, funding policies that use taxpayers’ money.

References

(1) Narin, F. (2013) “Tracing the Paths from Basic Research to Economic Impact”, F&M Scientist, Winter 2013, http://www.fandm.edu/fandm-scientist
(2) McMillan, G.S., Narin, F., & Deeds, D.L. (2000) “An analysis of the critical role of public science in innovation: The case of biotechnology”, Research Policy, Vol. 29, No. 1, pp. 1-8.
(3) Perko, J.S., & Narin, F. (1997) “The transfer of public science to patented technology: A case study in agricultural science”, Journal of Technology Transfer, Vol. 22, No. 3, pp. 65-72.
(4) Anderson, J., Williams, N., Seemungal, D., Narin, F., & Olivastro, D. (1996) “Human genetic technology: Exploring the links between science and innovation”, Technology Analysis and Strategic Management, Vol. 8, No. 2, pp. 135-156.
(5) Ellwein, L.B., Kroll, P., & Narin, F. (1996) “Linkage between research sponsorship and patented eye-care technology”, Investigative Ophthalmology and Visual Science, Vol. 37, No. 12, pp. 2495-2503.
(6) Deng, Z., Lev, B. and Narin, F. (1999) “Science & Technology as Predictors of Stock Performance”, Financial Analysts Journal, Vol. 55, No. 3, pp. 20-32.
(7) Narin, F., Breitzman, A. & Thomas, P. (2004) “Using Patent Citation Indicators to Manage a Stock Portfolio”, in Moed, H.F. & Schmoch, U. (Ed.), Handbook of Quantitative Science and Technology Research. The Use of Publication and Patent Statistics in Studies of S&T Systems, pp. 553-568. Dordrecht, The Netherlands: Springer Netherlands.
(8) Narin, F., Hamilton, K.S., & Olivastro, D. (1997) “The increasing linkage between U.S. technology and public science”, Research Policy, Vol. 26, No. 3, pp. 317-330.
(9) Halevi, G. & Moed, H. (2012) “The Technological Impact of Library Science Research: A Patent Analysis”, 17th International Conference on Science and Technology Indicators (STI), Montreal, Canada 2012. http://sticonference.org/Proceedings/vol1/Halevi_Technological_371.pdf
(10) Tornatzky, L.G. et al. (1982) “Contributions of social science to innovation and productivity”, American Psychologist, Vol. 37, No. 7, pp. 737-746.
(11) Lavoie, M. (2011) “The Role of Social Scientists in Accelerating Innovation in Regenerative Medicine”, Review of Policy Research, Vol. 28, No. 6, pp. 613-630.

Buzzwords and Values: The prominence of “impact” in UK research policy and governance

Dr. Alis Oancea describes the role societal impact plays in UK research policy. How important has this become in recent years?



Impact assessment is now a prominent technology for research governance in the United Kingdom (UK). The current focus on the impact of research beyond academia – while clearly the buzzword of the moment in UK research policy – has complex roots in policy discourses around wealth creation, user relevance, public accountability, and evidence-based decision-making (some of which I unpack in a forthcoming paper). Given this complexity, a grudging consensus is currently being forged around the importance of strengthening the connections between academic and non-academic contexts, while controversy continues around performance-based higher education funding and the extent to which universities ought to be held accountable by the government (on behalf of the taxpayer) for the non-academic implications and outcomes of their research. While these pivotal principles, and the values underpinning them, are being renegotiated, much of the attention of both the government and the higher education institutions has been diverted, under the direct influence of the forthcoming national assessment exercise for research (REF, due in 2014), towards the technicalities of designing and using measures of impact.


The impact agenda and outcomes-based allocation of public funding for research

In a policy and governance context that favors selectivity and concentration, and against the background of economic crisis, research funding is no longer defined in policy circles as a long-term investment in intrinsically worthwhile activities. Rather, in what is described as a knowledge and innovation economy, research is expected to make a case for funding in terms of external value (1, 2). Assessing and demonstrating the non-academic impact of publicly funded university research has thus become a key element of recent UK research policy. The pursuit of research impact is now a priority for both arms of the UK public research funding system, known as the “dual support” system (3), as well as for the direct commissioning of research by government departments and agencies. The “dual support” system comprises two separate funding streams: core research infrastructure funding, in the shape of outcome-based block grants distributed by the four national higher education funding councils and informed by the outcomes of the Research Excellence Framework, or REF (the REF was preceded by the Research Assessment Exercises, which, between 1986 and 2008, informed the selective allocation of core public grants to higher education research); and project funding, allocated competitively by the seven Research Councils UK.

The Royal Charters and the current strategic framework, “Excellence with Impact”, of the UK Research Councils draw direct links between good research and social, cultural, health, economic and environmental impacts. At the proposal stage, the Councils are interested in potential impacts and in the ways in which they will be pursued; for example, they require impact summaries and “pathways to impact” statements in applications for funding. At the end-of-award reporting stage, the Councils are also interested in the actual impacts achieved by a project over its lifetime. The Research Councils’ interest in impact pre-dates the REF (e.g. 4, 5, and 6) and is also evident in their commissioning of studies of research impact, knowledge transfer, practice-based research, and industry engagement, many of which are evaluation studies. Examples include the areas of engineering and physical sciences (7), medical research (8), arts and humanities (9), and the social sciences (10, 11, 12, and 13). There is also a wealth of commissioned impact “case studies” which showcase successful practice (14). On this basis, the Councils have produced guidelines and “toolkits” for impact – see, for example, the Economic and Social Research Council’s online “Impact Toolkit” and Impact Case Studies. Other key players in the recent impact debates have been the British Academy, which produced its own reports on the role of the humanities and social sciences in policy-making (15, 16), the Royal Society, the Universities UK action group, various learned societies, and research charities such as the Wellcome Trust and Jisc (formerly the Joint Information Systems Committee, now a charity aiming to foster “engaged research”).

The most controversial and publicly visible move towards prioritizing research impact was its introduction, following public consultation and a pilot exercise in 2010, as one of the three key components (alongside the quality of research outputs and the research environment) of the Research Excellence Framework, the national exercise for the assessment of higher education research in the UK, due in 2014. Impact has thus become part of the mechanism for performance-based research governance, as the REF is intended to inform the selective allocation of public funds, to function as a mechanism for accountability, and to enable higher education benchmarking. The current documentation for the REF gives impact a 20% weighting in the final grade profile awarded to a submitting institution – down, following public consultation, from an initially proposed 25%. For the purposes of the REF, impact is defined as “an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia” (17; and 18). It will be assessed by academic and user reviewers on the basis of standard-format case studies and unit-level strategic statements, using the twin criteria of “reach” (or breadth) and “significance” (or depth) of impact. In preparing their submissions, universities are currently grappling with the need simultaneously to define, track and demonstrate the impacts of their research, a task for which they were largely ill-prepared in terms of infrastructure, capacity, management and strategy. Important challenges at the moment concern the variable time lag between carrying out research, achieving impact, and documenting and reporting it; the difficulties involved in either attributing (parts of) non-academic changes and benefits to particular research projects and outcomes, or demonstrating the material and distinctive contribution of this research to such changes; and evidencing chains of action and influence that may not have been documented at the time of their occurrence.

As a consequence of these initiatives, UK higher education-based researchers are now subject to multiple requirements to assess and demonstrate the impact of their work, in a variety of contexts and for a range of different purposes. The impact to be “demonstrated” could be that of a project or research unit, of a program, of a funding body or strategy, of an area of research, or of the research system as a whole – each captured at different points in time, and relative to varying time horizons and to different types and methodologies of research. Additional pressure is exerted on academic research by competition from other research settings, such as private and third-sector research, both of which may have a sharper focus on non-academic benefits as part of their rationale. Increasingly, too, public expectations of higher education-based research are influenced by the fact that other areas of public service – including health, transport, urban planning, but also culture and heritage, media, and sports – face tighter requirements to account for their use of public funding in terms of outcomes and benefits.


Capturing research impacts

The current interest in research impact, spurred on by the forthcoming REF 2014, has stimulated a growing body of literature (6, 19). Together with practical experience in program evaluation and policy analysis, this literature is already underpinning a small industry around designing and using instruments for measuring and reporting the socio-economic impacts of research. It has also inspired the production of various open-access or commercially available tools for impact tracking and visualization, such as ImpactStory and Altmetric. Examples of methodological literature include the report to HEFCE and BIS on the REF impact pilot (20); the report on frameworks for assessing research impact (21); the report on knowledge transfer metrics commissioned by UNICO (22); and, internationally, the guides produced by projects such as ERiC (23) and SIAMPI (24). This technical literature is complemented by more conceptual work on higher education, research policy-making, and the relationships between research and processes of change at all levels of society.

There is also wide recognition that in the current context for research it is particularly important to reflect critically on the various strategies for increasing and demonstrating research impacts being used or promoted in different institutions and disciplines (see 25, 26, and the LSE Impact blog). In the UK, a number of centers, such as the Research Unit for Research Utilisation at the Universities of Edinburgh and St Andrews (6, 19), the Health Economics Research Group at Brunel University (27), the Public Policy Group’s HEFCE-funded Impact of Social Sciences project at the London School of Economics (11), the Science & Technology Policy Research Centre at the University of Sussex (28, 29), and, most recently, the DESCRIBE project at the University of Exeter, have made notable contributions to this process.


Concluding comment

Additional studies and evaluations based in, and commissioned by, individual universities and university mission groups have highlighted the connections between institutional contexts and impact interpretations and practices; examples include reports for the University of Oxford (25); for the University of Cambridge (9); for the Russell Group universities (30); for the 1994 Group (31); and for the Million+ group, formerly the Coalition of Modern Universities (32). These studies explore the ways in which universities have adapted the policy-driven impact agenda to their own ways of working and to their longer-term concerns with the quality, sustainability and benefits of research activity. Impact may be the buzzword of the moment, but universities had reflected on their wider mission long before impact was deemed a metaphor worth turning into a governance technology. Many have embedded their efforts to capture research impact in their wider social accountability projects and plugged them into their continued public engagement, community interaction and outreach activities (26). In order to do this, they are reinterpreting the official agenda and articulating alternatives. These reinterpretations – and their visibility and weight in the public domain – are essential if impact is not to become yet another measure rendered meaningless by being reduced to a performance target.

For impact indicators to be an adequate proxy of research value, they need not only to be technically refined measures, but also to be pitched at the right level, so that they function as catalysts of higher education activity rather than destabilizing it. To do this, they depend on a healthy ecology of higher education, which in turn requires intellectual autonomy, financial sustainability and insightful governance. Without these preconditions, the high-stakes assessment of impact may fail to reflect and support ongoing research value, and end up simply capturing assessment-driven hyperactivity.


References

(1) BIS (2011) Innovation and Research Strategy for Growth. London: Department for Business, Innovation and Skills.
(2) Arts Council England (2012) Measuring the Economic Benefits of Arts and Culture. BoP Consulting.
(3) Hughes, A., Kitson, M., Bullock, A. and Milner, I. (2013) The Dual Funding Structure for Research in the UK: Research Council and Funding Council Allocation Methods and the Pathways to Impact of UK Academics. BIS report.
(4) RCUK (2002) Science Delivers. Available at: http://www.rcuk.ac.uk/documents/publications/science_delivers.pdf
(5) RCUK (2006) Increasing the Economic Impact of the Research Councils. Available at: http://www.rcuk.ac.uk/documents/publications/ktactionplan.pdf
(6) Davies, H., Nutley, S. and Walter, I. (2005) Approaches to Assessing the Non-academic Impact of Social Science Research. Report of an ESRC symposium on assessing the non-academic impact of research, 12th/13th May 2005.
(7) Salter, A., Tartari, V., D’Este, P. and Neely, A. (2010) The Republic of Engagement Exploring UK Academic Attitudes to Collaborating with Industry and Entrepreneurship. Advanced Institute of Management Research.
(8) UK Evaluation Forum (2006) Medical Research: Assessing the benefits to society. London: Academy of Medical Sciences, Medical Research Council and Wellcome Trust.
(9) Levitt, R., Celia, C., Diepeveen, S., Ní Chonaill, S., Rabinovich, L. and Tiessen, J. (2010) Assessing the Impact of Arts and Humanities Research at the University of Cambridge. RAND Europe.
(10) LSE Public Policy Group (2007) The Policy and Practice Impacts of the ESRC’s ‘Responsive Mode’ Research Grants in Politics and International Studies. ESRC report.
(11) LSE Public Policy Group (2008) Maximizing the Social, Policy and Economic Impacts of Research in the Humanities and Social Sciences. British Academy report.
(12) Meagher, L.R. and Lyall, C. (2007) Policy and Practice Impact Case Study of ESRC Grants and Fellowships in Psychology. ESRC report.
(13) Oancea, A. and Furlong, J. (2007) Expressions of excellence and the assessment of applied and practice-based research, Research Papers in Education, Vol. 22, No. 2.
(14) RCUK (2012) RCUK Impact Report 2012. Available at: http://www.rcuk.ac.uk/Documents/publications/Impactreport2012.pdf
(15) British Academy (2008) Punching Our Weight: The role of the humanities and social sciences in policy-making. London: BA.
(16) British Academy (2004) ‘That Full Complement of Riches’: The contributions of the Arts, Humanities and Social Sciences to the nation’s wealth. London: BA
(17) REF (2012/11) Assessment framework and guidance on submissions.
(18) REF (2011) Decisions on assessing research impact. REF 01.2011.
(19) Nutley, S., Percy-Smith, J. and Solesbury, W. (2003) Models of Research Impact: A cross-sector review of literature and practice. London: LSDA.
(20) Technopolis Ltd (2010) REF Research Impact Pilot Exercise Lessons-Learned Project: Feedback on Pilot Submissions. HEFCE.
(21) Grant, J., Brutsher, P.-B., Kirk, S. E., Butler, L., & Wooding, S. (2009) Capturing Research Impacts: A review of international practice. HEFCE/RAND Europe.
(22) Holi, M.T., Wickramasinghe, R. and van Leeuwen, M. (2008) Metrics for the Evaluation of Knowledge Transfer Activities at Universities. UNICO report.
(23) ERiC (2010) Evaluating the Societal Relevance of Academic Research: A guide. Evaluating Research in Context, Netherlands.
(24) SIAMPI (2010) SIAMPI Approach for the Assessment of Social Impact. Report of SIAMPI Workshop 10.
(25) Oancea, A. (2011) Interpretations and Practices of Research Impact across the Range of Disciplines. HEIF/Oxford University.
(26) Ovseiko, P.V., Oancea, A., and Buchan, A.M. (2012) Assessing research impact in academic clinical medicine: a study using Research Excellence Framework pilot impact indicators. In: BMC Health Services Research 2012, 12:478 .
(27) HERG and Rand Europe (2011) Project Retrosight. Understanding the returns from cardiovascular and stroke research. Cambridge: RAND Europe.
(28) Martin, B.R., and Tang, P. (2007) The benefits from publicly funded research. University of Sussex.
(29) Molas-Gallart, J., Tang, P., Sinclair, T., Morrow, S., and Martin, B.R. (1999) Assessing Research Impact on Non-academic Audiences. Swindon: ESRC.
(30) Russell Pioneering Research Group (2012) The Social Impact of Research Conducted in Russell Group Universities. Russell Group Papers, 3.
(31) McMillan, T., Norton, T., Jacobs, J.B., and Ker, R. (2010) Enterprising Universities: Using the research base to add value to business. 1994 Group report.
(32) Little, A. (2006) The Social and Economic Impact of Publicly Funded Research in 35 Universities. Coalition of Modern Universities.

The use of assessment reports to generate and measure societal impact of research

How can societal impact be measured? Dr. Lutz Bornmann & Dr. Werner Marx propose a new method based on assessment reports, which summarize research findings for a non-scientific audience.



Since the 1990s, research evaluation has been extended to include measures of the (1) social, (2) cultural, (3) environmental and (4) economic returns from publicly funded research. The best-known national evaluation system in the world is without a doubt the UK Research Assessment Exercise (RAE), which has evaluated research in the UK since the 1980s and is due to be replaced by the Research Excellence Framework (REF) in 2014. The REF defines research impact as “... the social, economic, environmental and/or cultural benefit of research to end users in the wider community regionally, nationally, and/or internationally” (1). In the new REF, research impact on society will not only be quantified; expert panels will also review narrative evidence in case studies supported by appropriate indicators (informed peer review).

Scientific impact measurement is carried out using a number of established methods (such as the statistical analysis of bibliometric data), which undergo continual development and are supported by a dedicated community. Research on societal impact measurement, by contrast, is still in its infancy: so far, it has no community of its own with conferences and journals. Godin and Dore see research on the measurement of societal impact as being at the stage where the measurement of research and development (R&D) was in the early 1960s (2). Even though no robust or reliable methods for measuring societal impact have yet been developed, societal impact is already being measured in budget-relevant ways (or will be in the near future). In the REF, 20% of the evaluation of a research unit for the purpose of allocations will be determined by the societal influence dimension (65% by research output and 15% by environment). This imbalance between research and practice is astonishing when one considers how long it took the methods for measuring scientific impact to develop sufficiently to reach the stage of budget-relevant practice.
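As a simple illustration, the 65/20/15 split amounts to a weighted sum. The sketch below uses invented component scores; the REF’s actual aggregation works on quality profiles rather than single unit scores, so this is only the arithmetic skeleton.

```python
# REF component weights as described above (the scores below are invented).
WEIGHTS = {'outputs': 0.65, 'impact': 0.20, 'environment': 0.15}

def overall_score(components):
    """Weighted sum of component scores, e.g. on the REF's 0-4 star scale."""
    assert set(components) == set(WEIGHTS), "unexpected component names"
    return sum(WEIGHTS[name] * score for name, score in components.items())

# Hypothetical unit: strong outputs, weaker demonstrated societal impact.
unit = {'outputs': 3.2, 'impact': 2.5, 'environment': 3.0}
print(round(overall_score(unit), 2))  # 0.65*3.2 + 0.20*2.5 + 0.15*3.0 = 3.03
```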

The lack of an accepted and standardized framework for evaluating societal impact has resulted in the "case studies" approach being preferred, not only in the planned REF but also in other evaluation contexts (3). Although this method is very labor-intensive and very much a ‘craft activity’ (4), it is currently considered the best available. Other approaches, such as the "payback framework", are similarly or even more laborious (5). We have developed an approach which, unlike the case study approach (and others), is relatively simple, can be used in almost every subject area, and delivers results regarding societal impact which can be compared between disciplines (6). Our approach to societal impact starts with the actual function of science in society: to generate reliable knowledge. Robert K. Merton, who founded the modern sociology of science, used the term communalism to describe one of the norms of science: that scientific knowledge should be considered "public knowledge" and should be communicated not only to other scientists and students, but also to society at large (7).

That is why a document which we would like to refer to as an assessment report – a summary of the status of the research on a certain subject – represents knowledge which is available for society to access. A summary like this should be couched in generally understandable terms, so that readers who are not familiar with the subject area or the scientific discipline can make sense of it. Assessment reports can be seen as part of the secondary literature of science, which has up to now consisted of review journals, monographs, handbooks and textbooks (the primary literature being made up of the publications of the original research). They would be items of communal knowledge made available to society. To ensure that they are of high quality, they should be written by scientists (specialists in their field) and should undergo peer review to determine their correctness. The reviewers are asked to recommend publication or rejection of the report, and possibly to formulate suggestions for improving the submitted work (8). Since the report will be read by scientists from other fields and by non-scientists, it should be reviewed not only by an expert in the field but also by a stakeholder from government, industry or an advice centre.

What is an assessment report?

  • It can be produced for almost every discipline
  • It summarizes the status of research for those outside of the expert community
  • It should be written by scientists (specialists in their field), reflect above all research excellence and undergo peer review to determine its quality
  • It could take the form of a narrative review reporting just the research results from the primary literature; for subjects in which empirical studies provide effect sizes, it could instead be based on meta-analyses (the statistical analysis of a large collection of analysis results from individual studies)
  • It should be couched in generally understandable terms so that readers who are not familiar with the subject area or the scientific discipline (e.g. stakeholders from government, industry or an advice centre) can make sense of it
  • It could be produced separately, or be integrated as part of literature reviews written for the scientific community (as sections summing up the situation for those outside of the community)
  • In order to establish the assessment report as a service provided by science for society, it would be important, firstly, for research funders to make the production of assessment reports obligatory and, secondly, for assessment reports to be regarded, when research is evaluated (at the level of institutes, research groups and individuals), as content produced for society to generate societal impact
  • Societal impact arises when the content of an assessment report is addressed outside of science (in a government document, for example). This can be verified with tools which measure the attention that academic papers receive online (for example, Altmetric or ImpactStory).

Societal impact of research is thus obtained when the content of a report is addressed outside of science, for example in a government document. This can be verified with tools which measure the attention that academic papers receive online. Altmetric, for example, captures hundreds of thousands of tweets, blog posts, news stories and other pieces of content that mention academic documents. In Scopus, the Elsevier literature database, it is possible to display not only the extent of the scientific impact of individual publications, but also that of their societal impact. This type of societal impact measurement would be carried out in a similar way to the measurement of scientific impact; in other words, an established and successful method of measuring scientific impact (the analysis of citations in scientific publications) would be applied to the measurement of societal impact, which has clear benefits. For tools like Altmetric, citations should ideally be classified and weighted: for example, a citation by a member of the President’s Council of Economic Advisers should carry a different weight than a mention in a random blog post.
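Such a classification-and-weighting scheme might look like the following sketch. The source categories and weights are invented placeholders rather than features of Altmetric or any existing tool.

```python
# Invented weights: a mention in a policy context should count for more
# than a passing mention in a random blog post or tweet.
SOURCE_WEIGHTS = {
    'government_document': 10.0,
    'policy_advisor_citation': 8.0,
    'news_story': 3.0,
    'blog_post': 1.0,
    'tweet': 0.5,
}

def weighted_societal_attention(mentions):
    """mentions: {source_category: count} for one assessment report."""
    return sum(SOURCE_WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())

report = {'government_document': 2, 'news_story': 5, 'blog_post': 12, 'tweet': 40}
print(weighted_societal_attention(report))  # 20 + 15 + 12 + 20 = 67.0
```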

The assessment reports produced by the Intergovernmental Panel on Climate Change (IPCC) are a good example of public knowledge intended to generate societal impact from one subject area. The panel was founded in 1988 by the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO) to summarize the status of climate research for political decision-makers. Its main function is to assess the risks and consequences of global warming for the environment and society, and to develop strategies for avoiding them. Climate research is exceptionally interdisciplinary, encompassing many of the Natural Sciences, such as the Biological and Environmental Sciences, Atmospheric Chemistry, Meteorology and Geophysics, and intersecting at many points with Politics and Economics. With a broad focus in many fields and a rapidly expanding volume of publications, this area of research has become confusing even for insiders; it requires processing and summarizing to allow the results to be used outside of science and implemented in the form of policies. Working groups involving numerous scientists collate the results of research for the assessment report at regular intervals, with the goal of a coherent representation of the research. As the reports reflect the current consensus of science on climate change, they have become the most important basis for scientific discussion and political decisions in this area. The scientific impact of the IPCC reports can be measured using citations in scientific publications (via literature databases such as the Web of Science and Scopus); their societal impact can be quantified with tools such as Altmetric (see above).

Measuring scientific impact with citations in journal papers works well in the Physical and Life Sciences but hardly at all in the Social Sciences and Arts & Humanities. An assessment report, on the other hand, can be produced for almost every discipline, and its societal impact can be clearly measured. Since the Social Sciences and Arts & Humanities are disciplines where impact is generally very difficult to measure, assessment reports offer the advantage of reporting not only on journal papers and monographs, but also on exhibitions and art objects. In our view, it is of fundamental importance that an assessment report reflects above all research excellence in the subject area; thus, only publications which have previously been subjected to peer review should be included. For certain issues or in certain subject areas, it could be helpful if the reports for society were not produced separately but integrated into the literature reviews written for the scientific community, as sections summing up the situation for those outside of the community (a sort of comprehensive layman’s summary).

Although in many countries there is a wish (and a will) to measure societal impact, “it is not clear how to evaluate societal quality, especially for basic and strategic research” (9). In many studies in which societal impact has been measured, it is more often postulated than demonstrated by research. With the breadth of subject matter and complex content of the challenges facing society today (such as population growth or environmental pollution), the demand for available information to be summarized and evaluated for social and political purposes is rising. We have presented an approach with which the societal impact of research outcomes can be initiated and measured (6). We suggest that, as with the IPCC, assessment reports are written on certain research subjects which summarize the status of the research for those outside of the expert community. Tools such as Altmetric can verify the extent to which the assessment reports generate an impact. It would be desirable if these tools were to search through documents for citations used in various contexts for decisions, such as documents from governmental bodies, advisory bodies and consumer organizations.


References

(1) RQF development advisory group (2006) Research quality framework: assessing the quality and impact of research in Australia. The recommended RQF (report by the RQF development advisory group). Canberra, Australia: Department of Education, Science and Training.
(2) Godin, B., & Dore, C. (2005) Measuring the impacts of science: beyond the economic dimension, INRS Urbanisation, Culture et Société. Paper presented at the HIST Lecture, Helsinki Institute for Science and Technology Studies, Helsinki, Finland. Available at: http://www.csiic.ca/PDF/Godin_Dore_Impacts.pdf
(3) Bornmann, L. (2012). Measuring the societal impact of research. EMBO Reports, Vol. 13, No. 8, pp. 673-676.
(4) Martin, B. R. (2011) “The Research Excellence Framework and the 'impact agenda': are we creating a Frankenstein monster?”, Research Evaluation, Vol. 20, No. 3, pp. 247-254.
(5) Bornmann, L. (2013) “What is societal impact of research and how can it be assessed? A literature survey”, Journal of the American Society of Information Science and Technology, Vol. 64, No. 2, pp. 217-233.
(6) Bornmann, L., & Marx, W. (in press) “How should the societal impact of research be generated and measured? A proposal for a simple and practicable approach to allow interdisciplinary comparisons”, Scientometrics.
(7) Merton, R. K. (1973) The sociology of science: theoretical and empirical investigations. Chicago, IL, USA: University of Chicago Press.
(8) Bornmann, L. (2011) “Scientific peer review”, Annual Review of Information Science and Technology, Vol. 45, pp. 199-245.
(9) van der Meulen, B., & Rip, A. (2000) “Evaluation of societal quality of public sector research in the Netherlands”, Research Evaluation, Vol. 9, No. 1, pp. 11-25.

The Challenges of Measuring Social Impact Using Altmetrics

In his contribution Mike Taylor investigates how altmetrics can be used to measure social impact. What are some of the obstacles that need to be overcome to make this possible?



Abstract

Altmetrics gives us novel ways of detecting the use and consumption of scholarly publishing beyond formal citation, and it is tempting to treat these measurements as proxies for social impact. However, altmetrics is still too shallow and too narrow, and needs to increase its scope and reach before it can make a significant contribution to computing relative values for social impact. Furthermore, in order to go beyond limited comparisons of like-for-like and to become generally useful, computation models must take into account different socio-economic characteristics and legal frameworks. However, much of the necessary work can be borrowed from other fields, and the author concludes that – with certain extensions and added sophistication – altmetrics will be a valuable element in calculating social reach and impact.


Altmetrics is the collective term for scholarly usage data that goes beyond formal citation counts. Typically, altmetric data comes from specialist platforms and research tools but can also include data from general applications and technical platforms. Sometimes the term also encompasses mass-media references, and data from publishers, such as web page views and PDF downloads (see Table 1).

Class of platform/tool | Types of data | Examples
General social networking applications | Mentions, links, ‘likes’, bookmarks to articles | Twitter, Facebook, Del.icio.us
Specialized research tools | Links, bookmarks, recommendations, additions to reading groups | Zotero.org, Mendeley.com, Citeulike.org
Publisher platforms | Web page views, PDF downloads, abstract views | PLoS, Scopus, PubMed
Research output, publishing components | Views, recommendations, shares | Github.com, Datadryad.org, Slideshare.net, Figshare.com

Table 1 - Classes of platform and tool that provide data for altmetrics applications. (Source: ImpactStory)

The principal use of altmetrics has been to study and describe the wider scholarly impact of research articles (1). Some researchers have concluded that altmetric activity might act as an indicator for eventual citation count (2) and that it might reveal academic engagement not recorded in citation count (3). As scholarly material becomes more widely available with increasing open access publishing, and as people increasingly use social networks, altmetrics could become a valuable part of understanding and measuring social impact.
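Claims like that in (2) are typically tested with a rank correlation between early altmetric activity and later citation counts. The sketch below shows the computation only; the paired counts are invented.

```python
from scipy.stats import spearmanr

# Invented paired observations per paper: early altmetric mentions and
# the citation counts the same papers accrued two years later.
altmetric_mentions = [12, 3, 45, 0, 7, 22, 1, 9]
later_citations    = [15, 4, 60, 1, 5, 30, 0, 12]

rho, p_value = spearmanr(altmetric_mentions, later_citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.4f})")
```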

The interest in quantifying social impact is not restricted to research: it is a field of increasing importance in the not-for-profit sector – both philanthropic and institutional (4) – and there have been attempts to measure the impact of investments in the arts (5). Within the philanthropic field, there is an emerging paradigm that borrows from business, with financial investment reaping social return. Unsurprisingly, there are agencies that endeavor to assess and compare social impact, and businesses that attempt to do likewise for pure profit investment.

The movement towards Gold open access publishing as promoted by the UK’s Finch Report and the EU’s Horizon 2020 project – where funding agencies become responsible for paying the cost of dissemination via research grants to scholars – enables a parallel with not-for-profit investment. As with charitable funding bodies, it may be predicted that research funding agencies will increase their efforts to monitor the social impact of research outcomes in published articles. Thus, we can expect to see an increase in the amount of attention paid to assessing the social impact and social reach of research outcomes.

Social impact is often quantified in economic terms, using approaches that attempt to put a value on the benefits to the economy. However, while the social impact of a vaccine might be measured by computing the days lost to the economy, the loss of tax revenue and the cost of healthcare, applying the same approach in other fields – for example, studying the roots of cultural resistance to vaccination (6) - is considerably harder.

In this article, I outline a methodological approach for computing relative social reach – in other words, how research findings propagate from the published article into the public domain – while accounting for differences in social capacity: the means by which research can influence society through socio-economic structure, legislation and influential discourse. I also touch on the idea of social accessibility: how research findings vary in how readily they can be communicated to, and understood by, a lay population.

As altmetric data can detect non-scholarly, non-traditional modes of research consumption, it seems likely that parties interested in social impact assessment via social reach may well start to develop altmetric-based analyses, to complement the existing approaches of case histories, and bibliometric analysis of citations within patent claims and published guidelines.


Understanding the social space

In order to begin the task of computing social impact using altmetric data, it is important to understand the varying socio-economic and legislative spaces in which disciplines exist, and to understand the limitations of what activity can be measured. The social space that scholarly endeavor occupies is not common for all disciplines, and it is not necessarily common across national boundaries. The social impact of Medicine is likely to be greater than that of Limnology or pure Mathematics; the study of Literature is politicized in some countries, but not in others (see Table 2).

Furthermore, research that delivers knowledge to practitioners and offers practical help to the lay community is likely to have more potential for a higher social impact, and to affect more people, if the authors are careful to increase their articles’ social accessibility by including keywords, links to glossaries and a lay abstract. Here, publishers have a degree of responsibility to support researchers in framing descriptions of their work and in developing platforms that are responsive to changing vocabularies. In the case history below, I describe how Nature went to some lengths to provide a social context for a complex story about genetic markers and tests.

Although this effort is commendable when publishing articles that have a high capacity for social influence, in an environment where research is becoming more accessible and competition for funds is increasing, it behooves both researcher and publisher to increase social accessibility.

Obviously the bulk of most research articles is necessarily written in specialized language, but the addition of keywords, links and a sentence explaining the context of the work would do much to improve the semantic infrastructure and the social accessibility through which research finds its social impact. An interesting essay on the importance of, and skills necessary for, communicating research to the wider public may be read in Nature (7).

As the potential for social impact varies, so do the social and government structures that offer a legal and quasi-legal framework in which the research may be expressed: these, in turn, alter a discipline’s capacity for achieving social impact.

 | Medicine | Nursing | Economics | Pure mathematics
Number of papers published in 2011 | 123,771 | 5,759 | 23,727 | 14,379
Number of practitioners in the UK | c. 250,000 (8) | c. 700,000 (9) | Thousands; 1,000 in government | 3,000 (globally)
Professional governance | Medical Research Council, General Medical Council, NICE | Nursing and Midwifery Council, Royal College of Nursing, NICE | None | None
Scholarly impact (5FWRI 2011) | 0.91 | 0.73 | 0.74 | 0.81
Number of UK Acts of Legislation relating to the practice of this profession (10) | 78 UK Acts relating to “General Medical Council”, with more than 200 of wider relevance | 152 UK Acts specifically relating to Nursing, with more than 200 of wider relevance | 3 Acts for “economists” | 30 UK Acts for “mathematics” (all education) and 3 Acts for “mathematician”
Social impact | High | High | High | Low

Table 2 - The socio-legal structure and potential for social impact of four research disciplines in the UK

Clearly, different disciplines and discoveries will reach their maximum impact over highly varying timescales. For example, one of the greatest discoveries was probably the development of the concept and number zero, which took place in several cultures and over many centuries, whereas the hypothetical discovery of a large meteorite heading for Earth would have a larger impact within a considerably shorter period.

The differences between disciplines’ structures and their relationships with the tools that effect social change imply that – at best – a multifactorial approach that can be tuned to different disciplines would be needed to quantify the social impact of scholarly research. In the light of the lack of agreement on what social impact means, and the manifestly complicated background, it is hardly surprising that Bornmann concluded in 2012 that, in the absence of any robust evaluations, the best way ahead is peer review.
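To make the idea of a tunable multifactorial approach concrete, here is one possible shape for such a model. Every factor name, weight and value below is hypothetical, chosen only to echo the contrasts in Table 2; it is a sketch, not a validated impact model.

```python
from dataclasses import dataclass

@dataclass
class DisciplineProfile:
    """Hypothetical tuning factors per discipline (cf. Table 2)."""
    practitioner_reach: float   # size of the professional audience
    governance_channels: float  # strength of formal bodies and legislation
    lay_accessibility: float    # how readily findings reach the public

def social_impact_score(raw_altmetric_signal, profile, weights=(0.4, 0.4, 0.2)):
    """Scale a raw attention signal by discipline-specific capacity factors,
    so the same signal reads differently in Medicine and pure Mathematics."""
    w_pract, w_gov, w_lay = weights
    capacity = (w_pract * profile.practitioner_reach
                + w_gov * profile.governance_channels
                + w_lay * profile.lay_accessibility)
    return raw_altmetric_signal * capacity

medicine = DisciplineProfile(0.9, 0.9, 0.7)
pure_maths = DisciplineProfile(0.1, 0.1, 0.2)
print(round(social_impact_score(100, medicine), 1))    # 86.0
print(round(social_impact_score(100, pure_maths), 1))  # 12.0
```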

One profound difficulty in measuring social impact is the complex set of ways in which research can effect change. For example, there are relatively few economists, and while primary economic research rarely makes headline news, its impact through politics, finance and international agencies is dramatic and far-reaching (see Figure 1).

An interesting example of primary economic research coming to public attention – and an illustration of the disproportionate nature of social mentions and impact – can be seen in the 2013 criticism of Reinhart and Rogoff’s 2010 paper “Growth in a Time of Debt” (12). The paper has been described as a ‘foundational text’ (13) of austerity programs, yet according to ImpactStory it received fewer than 100 social mentions; the methodological critique that discovered Excel errors and other problems in it received 250 social citations.

Figure 1 - Google search trends for “Reinhart Rogoff”

In the UK, there is no professional governance for economists; this contrasts with the various healthcare professions, which have many complex layers of professional and governing bodies, all of which work to effect social impact as delivered by practitioners. Within these formal channels, it is possible to apply bibliometrics through a citation analysis of the documents produced by governing bodies. However, as the distance from primary research to the lay population increases, formal citation and linking fall away.

Although it is tempting to equate social reach (i.e., getting research into the hands of the public) with social impact, the two are not the same. At the moment, altmetrics provides us with a way of detecting when research is being passed on down the information chains – to be specific, altmetrics detects sharing, or propagation, events. However, even though altmetrics offers us a much wider view of how scholarly research is being accessed and discussed than bibliometrics does, the discipline currently lacks an approach to the wider context necessary to understand both the social reach and the social impact of scholarly work.

There have been attempts to create a statistical methodology that defines different types of consumption. Priem et al. (14) reported finding five patterns of usage:

  • Highly rated by experts and highly cited
  • Highly cited
  • Highly shared
  • Highly bookmarked, but rarely cited
  • Uncited

Although these patterns of behavior are of potential interest, the authors do not attempt to correlate the clusters with scholarly and non-scholarly use. In fact, a literature search found no research currently available that compared disciplines or readership background using altmetric data. It is not surprising, therefore, to find that there is no research that focuses on the relationship between scholarly research and social consumption using altmetric data.


The challenge of measuring social impact and social reach with altmetrics

In order to provide some insight into how altmetrics might be used to measure social reach, and potentially enable the measurement of social impact, I investigated a high profile story that originated in primary research.

On March 27/28, 2013, all the major UK news outlets carried stories based on research that found genetic markers for breast, prostate and bowel cancer. The research reported significantly better accuracy for these markers than previous work. Mass media reports suggested the possibility that within eighteen months (15) or five years (16), a saliva-based screening test for the genetic markers might become available via the UK’s National Health Service, at a cost to the NHS of between £5 and £30.

Some of the commentary included in the reporting came from the principal authors of the research, although there was no obvious linguistic cue or statement of interest, making the assignment of provenance a separate research project in itself.

This research is likely to have a strong social impact, as the tests are expected to be more accurate than those presently available, can be undertaken at any stage of life, and can be coupled with higher detection rates at earlier stages of cancer, with corresponding improvements in lifespan and quality of life. This impact is likely to be expressed through practitioners and their governing bodies, Government agencies, etc.

Despite the high potential for social impact, and links from the most-read online news stories to a dedicated home page set up by Nature to enable lay consumption of the primary research, there was very little social activity relating to either the original research or the essays that Nature had commissioned. Of all the papers linked from this dedicated page, only one was behind a paywall (see Figure 2; a live altmetric report of this story may be viewed at ImpactStory (17)).

Only two of the mass media articles (from the BBC (18) and The Guardian (19)) provided links to the original research. Unsurprisingly, the stories resulted in a great deal of engagement in social media. However, a review of tweets, comments (323 on The Guardian’s article) and links to the mass media reports found that none linked to the research or used any helpful hashtag that would have disambiguated tweets about the test from other news relating to these forms of cancer.

As the collection of altmetrics is based around following links, a proportion of stories originating from the primary research are immeasurable, and research that confines itself purely to an altmetric analysis is unlikely, at present, to add any helpful indication of social impact.

As the findings of the research flow out from the research papers, they undergo a series of transformations: they lose their technical language in favor of a lay presentation, the precise findings are replaced with interpretation, and information is added that attempts to predict social impact. In the case of the “£5 Spit Test for Cancer”, some of this interpretative layer is added by the primary researchers and some by other agents. In the course of this evolution, some terms emerge that fit the story, and it is typically these terms that are used by the lay community to discuss the research, along with links back to the mass media articles.

The failure of social and mass media reports to formally cite or link the journalism and commentary to the original research – despite Nature’s best efforts to make the research accessible to the general public – indicates that any effort to use existing altmetrics to gauge the social reach of primary research is likely to be a worthless endeavor, or at best requires considerably more research. Unfortunately, the altmetric figures for the primary research, like the number of visits to the Nature story page, are too low to be used for statistically extrapolating social reach from direct social mentions.

Clearly this research was subject to discussion and sharing amongst the population, but equally clearly, the bulk of this interest is at present as invisible to altmetrics as it is to bibliometrics. In part, this problem is conceptual, perhaps derived from a desire to maintain comparability between bibliometrics and altmetrics by restraining the latter’s reach to citation-like counts; perhaps it is purely technological. Whatever the cause, the result is the same: altmetrics currently provides a very weak picture of social reach and social impact.

To some extent, it is possible to address the technological issues by extending existing altmetric tools to capture a richer set of data, for example by counting the comments made on correctly linked articles. Unfortunately, these comments are three steps away from a link to the original research, as The Guardian links not to the papers, but to the dedicated page published by Nature (see Table 3, and the sketch that follows it).

Distance between social reference and original research
0: Original research paper, linked from:
1: Nature’s dedicated page, linked from:
2: Article in The Guardian, linked from:
3: Comments on The Guardian, and tweets about the newspaper article

Table 3 - As currently formulated, altmetrics only counts direct links to research material and therefore excludes many mass media and social media mentions. In the example in this table, only the page on nature.com links to the original research.
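To ground this, here is a minimal sketch, in Python, of what an extended harvester might record: it walks the chain of Table 3 outward from the paper and attributes engagement found at any distance back to the original research. The node structure, URLs and counts are illustrative assumptions (only the 323 Guardian comments come from the example above); a real tool would need to resolve live links and scrape such counts.

from dataclasses import dataclass, field

@dataclass
class Node:
    """A page in the link chain of Table 3."""
    url: str
    distance: int          # steps away from the original research paper
    comments: int = 0      # comments observed on this page
    tweets: int = 0        # tweets linking to this page
    linked_from: list = field(default_factory=list)

def total_engagement(paper: Node) -> dict:
    """Sum engagement over every page that directly or indirectly
    links to the paper, unlike direct-link-only altmetrics."""
    totals = {"comments": 0, "tweets": 0}
    stack = [paper]
    while stack:
        node = stack.pop()
        totals["comments"] += node.comments
        totals["tweets"] += node.tweets
        stack.extend(node.linked_from)
    return totals

# The chain of Table 3: paper <- Nature page <- Guardian article <- comments/tweets
paper = Node("http://dx.doi.org/...", distance=0)
nature_page = Node("http://nature.com/...", distance=1)
guardian = Node("http://guardian.co.uk/...", distance=2, comments=323, tweets=40)
nature_page.linked_from.append(guardian)
paper.linked_from.append(nature_page)

print(total_engagement(paper))   # {'comments': 323, 'tweets': 40}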

We cannot expect or mandate people to cite original research in their social dialogue, but it is possible to consider an approach that would allow us to study trends in related terms, and to incorporate these data points in our analyses. Within the field of natural language parsing, it is common to look at the co-occurrence of terms in formally linked articles, and to use these data to infer meanings and relationships, which could then be used to classify articles that lack a formal link or citation.

For example, in the mass media articles relating to the “£5 cancer test”, there are a number of entities – researchers, commentators, funding agencies, specific references to particular formal terms – that are common to many of the stories and blog posts that cover and interpret this research. That these are published within a similar time frame, and share much of their semantics, should allow researchers to compute a similarity analysis; by mapping these articles and mining the internet, it should be possible to achieve a wider understanding of the social reach of research. Such a study – the quantification of semantics, which might be known as semantometrics – would form ad hoc networks of related stories, commentary and other social media, from which altmetric data could be harvested for an analysis of social reach (see Table 4; a minimal sketch of the grouping step follows the table).

                   Primary     Practitioner   Governance and   Mass     Social
                   research    research       Government       Media    Media
Bibliometrics      x           x              x
Usage statistics   x           x              x                x
Altmetrics         x           x              x                x        x
Semantometrics                                x                x        x

Table 4 - The development of analytics to compute social reach requires a variety of linking approaches, including extending altmetrics beyond direct linking and the application of semantic technology to discover non-linked influence.
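As a minimal sketch of the grouping step such a semantometric study would need, one can link stories into an ad hoc network whenever their lexical similarity exceeds a threshold, with no formal citation required. The texts, tokenizer and threshold below are purely illustrative assumptions.

import math
import re
from collections import Counter
from itertools import combinations

def vectorize(text: str) -> Counter:
    """Bag-of-words term vector; a crude stand-in for real NLP."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented snippets standing in for harvested article texts.
docs = {
    "guardian":  "A 5 pound spit test could detect prostate and breast cancer",
    "mail":      "Simple saliva test for breast and prostate cancer for just 5 pounds",
    "unrelated": "Electric cars were already popular at the end of the 19th century",
}

THRESHOLD = 0.25   # arbitrary; would need tuning on real data
vectors = {name: vectorize(text) for name, text in docs.items()}
network = [(x, y) for x, y in combinations(docs, 2)
           if cosine(vectors[x], vectors[y]) > THRESHOLD]
print(network)     # [('guardian', 'mail')] - the two cancer-test stories cluster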


Conclusion

Although altmetrics has the potential to be a valuable element in calculating social reach – with the hope that this would provide insights into social impact – a number of essential steps are needed to place this work on the same footing as bibliometrics and other forms of assessment.

There needs to be more effort on the part of altmetricians to extend their platforms to harvest data by following direct relationships (e.g., comments on stories that contain formal links, retweets, social shares), to give a wider picture of social reach, both in terms of the depth (or complexity) of the communication and the breadth of relatively simple messages.

As highly influential stories have, at best, idiosyncratic links to the primary research, there should be investigation into the use of semantics and natural language parsing to trace the spread of scientific ideas through society, and in particular into the application of semantic technologies to extend the scope of altmetrics.

The differences between the ways in which disciplines discuss, interpret and share research findings need to be understood. This step should enable publishers and researchers to improve the accessibility of research to practitioners and academics in response to experimental data.

Usage patterns will vary between disciplines according to differences in their social, legislative, economic and national characteristics and infrastructure. Research has a complex and dynamic context, and attempts to make comparisons must acknowledge these variations.

Figure 2 - Counts of tweets linking to primary research and a selection of online reports, and to the Nature dedicated page. The sum of all tweets linking to the primary research was 133 in March 2013.


References

(1) A good introduction to the ambitions of altmetrics may be found at altmetrics.org/manifesto
(2) Thelwall, M., Haustein, S., Larivière, V., Sugimoto, C.R. (2013) “Do altmetrics work? Twitter and ten other social web services”. Available at: http://www.scit.wlv.ac.uk/~cm1993/papers/Altmetrics_%20preprintx.pdf
(3) Priem, J., Piwowar, H.A., Hemminger, B.H. (2011) “Altmetrics in the wild: An exploratory study of impact metrics based on social media (presentation)”. Available at: http://jasonpriem.org/self-archived/PLoS-altmetrics-sigmetrics11-abstract.pdf
(4) Ebrahim, A. (2013) “Let’s be realistic about measuring impact”, http://blogs.hbr.org/hbsfaculty/2013/03/lets-be-realistic-about-measur.html
(5) Reeves, M., (2002) “Measuring the economic and social impact of the arts: a review”, http://www.artscouncil.org.uk/media/uploads/documents/publications/340.pdf
(6) Davis, V. (2012) “Humanities: the unexpected success story of the 21st century”, http://www.ioe.ac.uk/Virginia_Davis_2012.pdf
(7) Radford, T. (2011) “Of course scientists can communicate”, http://www.nature.com/news/2011/110126/full/469445a.html
(8) General Medical Council, “The state of medical education and practice in the UK: 2012”, http://data.gmc-uk.org.
(9) According to the Nursing and Midwifery Council, http://www.nmc-uk.org/About-us/Annual-reports-and-statutory-accounts, there are 671,668 nurses and midwives who are legally allowed to practice in the UK. Approximately 350,000 are employed by the NHS. http://www.nhsconfed.org/priorities/political-engagement/Pages/NHS-statistics.aspx
(10) UK Legislation, Full text searches on April 24, 2013 on http://www.legislation.gov.uk
(11) Wikipedia, “0 (number)”, http://en.wikipedia.org/wiki/0_%28number%29#History
(12) Reinhart, C.M., Rogoff, K.S., (2010) “Growth in a Time of Debt”, American Economic Review, American Economic Association, Vol. 100, No. 2, pp. 573-578, http://www.nber.org/papers/w15639
(13) Linkins, J. (2013) “Reinhard Rogoff austerity research errors may give unemployed someone to blame”, Huffington Post, http://www.huffingtonpost.com/2013/04/16/reinhart-rogoff-austerity_n_3095200.html
(14) Priem, J., Piwowar, H.A., Hemminger, B.H. (2012) “Altmetrics in the wild: Using social media to explore scholarly impact”, http://arxiv.org/html/1203.4745v1
(15) Mail Online, http://www.dailymail.co.uk/sciencetech/article-2299971/Simple-saliva-test-breast-prostate-cancer-soon-available-GP-just-5.html
(16) The Times, http://www.thetimes.co.uk/tto/health/news/article3724498.ece
(17) ImpactStory, http://www.impactstory.org/collection/dnwpb3
(18) BBC, http://www.bbc.co.uk/news/health-21945812
(19) The Guardian, http://www.guardian.co.uk/science/2013/mar/27/scientists-prostate-breast-ovarian-cancer

 


The impact of science on technology, as measured by patent citations

How do citations in patents and scientific articles differ? Steven Scheerooren and Dr. Judith Kamalski discuss the use and limitations of citation data from patents and present a case study on Civil Nuclear Energy research.

Read more >


Much has already been written about the linkage between science and technology, and the validity of using (non-patent) citations in patents as a measurement of the bond between them. In this article, Research Trends presents the current thinking on this topic, as well as our own standpoint on the use and limitations of citation data from patents. In addition, we present a case study on Civil Nuclear Energy research and citations in patents for three different countries: UK, US and China.

 

“Science” & “Technology”

In early citation studies, technological progress was viewed as more or less a direct result of scientific progress. To paraphrase Bassecoulard & Zitt (1), it had been assumed that there is a diachronic relationship in which the science of today is the technology of tomorrow. However, as many authors have since made clear, there are several issues related to using a linear model.

Firstly, the problem of definitions: what is ‘science’, and what sets it apart from ‘technology’? While the two may have been distinct fields in the past (‘science’ being more theoretical and ‘technology’ more practical), over the last decades they have become closely intertwined. In light of this development, Narin & Noma (2) mention Arnold Toynbee’s analogy of a pair of dancers: “…science and technology [are] intimately related as a pair of dancers (…) locked in an embrace from which it is virtually impossible to separate the partners.” After all, university researchers also patent inventions, and inventors also publish papers. It is even becoming increasingly common for a researcher to be active in both worlds; i.e. one may work at a corporate R&D lab but also hold an academic position (adjunct professorship), or vice versa. Meyer (3) adds to the dancer analogy by saying that “with dancers dancing an ever closer dance, it also gets increasingly difficult to say who is the partner that determines the direction.” He suggests there may well be technology-pulled science alongside science-pushed technology.

Secondly, we must bear in mind that “citations are not intended to be an indication of technology [or knowledge] flows or spillovers.” (4). Patent applicants cite papers not (just) to show what inspired an invention, but rather to avoid future legal battles over its novelty while at the same time indicating interesting areas for potential licensees. Moreover, citations are added not only by the applicants themselves, but also by the examiners of a patent. Depending on the patent office, it may be in this step that most citations are added. On the one hand this would mean that we cannot regard such citations as a ‘science push’. On the other hand it does point to a link, whether the applicant is aware of the papers or not.

 

An analogy with citations in scientific literature

Even though many authors admit that, for various reasons, citations in patents are not very reliable proof of an article’s influence on an applicant’s thought process, the notion nevertheless persists that such non-patent citations are indicators of science’s influence on technology. We wish to argue that this is justifiable, within certain limits. The following factors influence the likelihood that an applicant has in fact come into contact with the cited literature, and should be taken into account when interpreting bibliometric data.

 

  • Country. A few authors argue that a so-called ‘domestic bias’ (i.e. a relatively large share of citations referring to research papers from the same country) indicates localized knowledge flows (5). However, this perceived bias may also stem from the fact that European or American patent offices have different requirements for considering patentability. This results in different citation strategies between countries, with some examiners preferring to cite national papers, and some preferring to cite international ones (4, 3).
  • Field. Not all technological fields in which inventions are patented use the same citation methods. Some, such as Pharmacology, (Bio)Chemistry or Genetics, tend to cite a much higher number of non-patent documents than fields such as Engineering (3, 6, 7).
  • Journal. There is a distinct connection between citations in patents and the citation impact of a paper, which relates to the journal in which it is published. Papers that are cited in patents are published, on average, in journals with higher impact than papers that are cited less often in patents, or not at all. They also tend to receive more citations from other papers. But there is a causality issue here: papers might be more readily cited in patents because they are more visible, or they might appear in top journals because they have “broken the technology barrier” (7, 8).

 

In fact, many of these factors are similar to those affecting the use of citations within the scientific literature as a proxy for impact: some subject areas tend to include more references than others; some countries tend to cite articles from their own country, while others are more international. The fact that these statements are true does not mean that we should not use citations as a proxy for impact, or quality. It only means that we should be aware of these issues and try to find the best ways to analyze the data while taking them into account. The same applies to the use of patent citations. We would argue that, as long as a comparison is made within a single patent office and within a subject area, the data can yield interesting insights. It is important to keep in mind, as Tijssen puts it, that for the above reasons “patent-paper citation data are more appropriate for statistics on the interaction between science and technology, rather than the strength of those linkages or the degree of connectedness.” (5)

 

A case study: Civil Nuclear Energy

In this case study, we analyze how often patents cite the civil nuclear energy literature of four different countries: the UK, the US, France and China. In Figure 1, the patent citation counts have been normalized for each country’s output: the number of patent citations is divided by the number of Civil Nuclear Energy articles the country published that year, and the result is multiplied by 10.
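Restated as a simple calculation (the counts below are invented purely for illustration):

def citations_per_article(patent_citations: int, articles: int) -> float:
    """Patent citations received by a country's articles from a given
    publication year, divided by its article output, multiplied by 10."""
    return 10 * patent_citations / articles

# e.g. a country that published 200 Civil Nuclear Energy articles in a year,
# collectively cited 14 times in WIPO patents:
print(citations_per_article(14, 200))   # 0.7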

The fairest comparison between countries can be made by taking patent data from WIPO, the World Intellectual Property Organization (9). Selecting the US Patent and Trademark Office would show a bias towards the US, and the same applies to the other national patent offices. Still, the results for the US may be disadvantaged because we are not looking at US patent office data, and that office would contain most of the US patent citations.

Figure 1 focuses on the UK, the US and China. Per article publication year, it shows how frequently this research was cited in patents. It is clear that UK research from 2007 and 2008 was relatively well cited in WIPO patents, but that its relative position varies from one year to the next. This variance is due to the low volumes involved in this case study. Regardless of the variance, China seems to show a downward trend in patent citations per article, while the position of the US is relatively stable.

Figure 1 – Patent citations per article, multiplied by 10, for UK, China and US. Source: WIPO

The same analysis can be repeated for US Patent Office data, reflected in Figure 2. As the most recent years show little activity, we have selected a time frame further back for the US data. Even though one can expect these data to be biased towards the US, they still show meaningful patent citations of, for instance, UK literature on civil nuclear energy.

Figure 2 – Patent citations per article, multiplied by 10, for UK, China and US. Source: USPTO

 

The patent citation analysis forms an interesting addition to more traditional metrics such as output or field-weighted citation impact. In this particular case study, China consistently has the lowest citation impact, but not the lowest patent citations per article. Depending on the angle chosen, the countries show different strengths and foci. One metric can never provide the whole picture, but we would argue that patent citations are a useful addition to the expanding mix of metrics that can be used to assess different aspects of impact.

 

References

(1) Bassecoulard, E., Zitt, M. (2004) “Patents and publications: The Lexical Connection”, in: Moed, H.F., Glänzel, W., Schmoch, U. (eds.), Handbook of quantitative science and technology research, pp.665-694. Dordrecht, The Netherlands: Springer Netherlands.
(2) Narin, F., Noma, E. (1985) “Is science becoming technology?”, Scientometrics, Vol. 7, No. 3-6, pp.369-381.
(3) Meyer, M. (2000) “Does science push technology? Patents citing scientific literature”, Research Policy, Vol. 29, pp.409-434.
(4) Criscuolo, P., Verspagen, B. (2008) “Does it matter where patent citations come from? Inventor vs. examiner citations in European patents”, Research Policy, Vol. 37, pp.1892-1908.
(5) Tijssen, R.J.W. (2004) “Measuring and evaluating science-technology connections and interactions: Towards International Statistics”, in: Moed, H.F., Glänzel, W., Schmoch, U. (eds.), Handbook of quantitative science and technology research, pp.695-715. Dordrecht, The Netherlands: Springer Netherlands.
(6) Lo, S.S. (2010) “Scientific linkage of science research and technology development: a case of genetic engineering research”, Scientometrics, Vol. 82, pp.109-120.
(7) Glänzel, W., Zhou, P. (2011) “Publication activity, citation impact and bi-directional links between publications and patents in biotechnology”, Scientometrics, Vol. 86, pp.505-525.
(8) Meyer, M., Debackere, K., Glänzel, W. (2010) “Can applied science be ‘good science’? Exploring the relationship between patent citations and citation impact in nanoscience”, Scientometrics, Vol. 85, pp.527-539.
(9) Patents are obtained via a partnership with LexisNexis®, a leader in comprehensive and authoritative legal, news and business information and tailored applications. LexisNexis® is a member of Reed Elsevier Group plc.

Women’s Not-for-Profit Organizations’ Research Output – Characterizations and Trends

Dr. Gali Halevi investigates the main characteristics of research sponsored by, or associated with, not-for-profit organizations which focus on women’s issues. What are the main topics the research focuses on and are these the same across the globe?

Read more >


In this article we investigate the main characteristics of research output sponsored by, or associated with, not-for-profit organizations which focus on women’s issues. Women’s foundations, associations and societies play an important role in our communities by promoting, creating awareness of, collaborating on or conducting research on topics that are at the heart of women’s existence in society. Issues such as health, family, education, equality, employment and empowerment are among the foci of thousands of women’s foundations and other not-for-profit organizations around the world. Through a variety of activities, such as workshops, special events, and local and community publications, these organizations impact the lives of women and their families every day.

 

Women’s not-for-profit organizations and research output

The work of women’s not-for-profit institutions includes conducting, collaborating on or funding research in their areas of interest, for the benefit of planning and policy making and to promote awareness of major issues, among other things. Therefore, we focused on research output in the form of journal articles or conference proceedings covered by Scopus (www.scopus.com) that are either generated by or associated with not-for-profit women’s organizations. The analysis focuses on (1) the main subject areas covered by these publications, (2) the main research collaborations formed by these organizations, and (3) the research foci of this scientific output around the world.

A Scopus search was conducted to retrieve documents affiliated with women’s foundations, associations or societies. In order to retrieve only documents in which at least one of the authors was affiliated with a not-for-profit women’s organization, we excluded publications whose only women-related affiliations were women’s medical universities or departments, hospitals or medical research centers. The dataset included 764 documents (as of March 19th, 2013), each having at least one women’s foundation, association or society in its listed affiliations. To validate our set, we manually examined 60 random records to confirm that they all had at least one affiliation which was a not-for-profit women’s organization.
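A minimal sketch of this kind of filter, assuming records exported with a simple affiliations field; the term lists and record structure are illustrative assumptions, not the actual Scopus query:

# Keep records with at least one women's foundation/association/society
# affiliation; treat medical institutions as exclusions, per the method above.
INCLUDE = ("foundation", "association", "society")
EXCLUDE = ("university", "hospital", "medical center", "department")

def is_womens_npo(affiliation: str) -> bool:
    a = affiliation.lower()
    return ("women" in a
            and any(term in a for term in INCLUDE)
            and not any(term in a for term in EXCLUDE))

def keep_record(record: dict) -> bool:
    return any(is_womens_npo(aff) for aff in record.get("affiliations", []))

records = [
    {"title": "...", "affiliations": ["Women and Infants Research Foundation"]},
    {"title": "...", "affiliations": ["Tokyo Women's Medical University, Dept. of Surgery"]},
]
print([keep_record(r) for r in records])   # [True, False]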

 

Findings

Research subject fields and topics covered

Women-related issues span a variety of subjects, such as education, employment, and civil and domestic equality. However, the research output shows a clear dominance of health-related publications, with very little research focused on social, educational or economic issues (see Figure 1).

Figure 1 - Research areas output affiliated with women’s not-for-profit organizations

As seen in Figure 1, Medical research accounts for over 50% of the output, while Social Sciences account for 10%, and Psychology and Arts & Humanities for 5% or less. Figure 2 represents the words that occur most frequently within the titles of the retrieved articles (a sketch of how such a count is computed follows the figure); the larger the word, the more often it appears. Looking at the titles of the articles retrieved, it is evident that publication focuses on health issues, especially breast cancer, disease, and sexual and reproductive issues.


Figure 2 - Most frequently occurring words in article titles (all subject fields)
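A count like the one behind Figure 2 can be produced in a few lines; the titles and stop-word list below are illustrative assumptions, not the actual dataset:

import re
from collections import Counter

# A tiny stop-word list; a real analysis would use a fuller one.
STOPWORDS = {"the", "a", "an", "of", "and", "in", "on", "for", "with", "to"}

def top_title_words(titles, n=10):
    """Most frequent non-stop-words across a list of article titles."""
    words = (w for t in titles for w in re.findall(r"[a-z]+", t.lower()))
    return Counter(w for w in words if w not in STOPWORDS).most_common(n)

titles = [
    "Breast cancer screening and awareness in rural communities",
    "Reproductive health and breast cancer risk",
]
print(top_title_words(titles, 3))   # [('breast', 2), ('cancer', 2), ('screening', 1)]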

 

In order to better understand the issues covered by Social Sciences, Arts & Humanities, Psychology and other non-health-related publications, we limited the set to these subjects. Detailed examination of these, and of related subjects such as Economics and Business, reveals more of the same phenomenon: health-related research is dominant. While topics such as welfare, children and law can be seen in Figure 3, which depicts the titles of publications classified under the above subjects, health is still far more prominent.


Figure 3 - Most frequently occurring words in article titles in Social Sciences and Humanities

 

Co-affiliated relations with other institutions

The above findings coincide with the fact that medical departments within universities, hospitals and medical research centers are heavily represented amongst the associated affiliations of these publications. As mentioned above, each of the records retrieved for this article has at least one author associated with a not-for-profit women’s organization, and most articles have more than one author. The co-authorship patterns reveal strong collaborative ties between the researchers in women’s foundations on the one hand, and scientists from medical universities, research centers and hospitals on the other (see Figure 4). The prominent position of universities’ medical departments as collaborating partners seems to illustrate the orientation of academic investigators towards health-related research.

A closer examination of the foundations within the overall affiliations showed that most of the research is generated by women’s health foundations (e.g. the International Women's Health Coalition, the Women and Infants Research Foundation, and the Birmingham Women's Foundation Trust), which can explain the heavy general focus on health-related issues.

Figure 4 - Collaboration partners of research conducted by women’s organizations

 

Research focus around the world

We conducted an analysis of the affiliations’ countries of origin and looked at the author keywords and index keywords for the articles. Figure 5 shows the subjects studied most in Africa and the Middle East, based on the most frequently recurring keyword descriptors. In the Middle East, in addition to breast cancer and childbearing, research is also being performed on subjects such as feminism and family. In Africa, women’s foundations’ research focuses on pregnancy, and on HIV and other sexually transmitted diseases. This finding coincides with the UNAIDS 2012 fact sheet, which states that Sub-Saharan Africa remains the most heavily affected region in the global HIV epidemic, with an estimated 23.5 million [22.1–24.8 million] people living with HIV in 2011. According to this report, women in Sub-Saharan Africa remain disproportionately affected by the HIV epidemic, accounting for 58% of all people living with HIV in that region in 2011.

Figure 5 - Women-related research foci in Africa and Middle East

The fact that issues of feminism and family are quite dominant in the Middle East coincides with the growing awareness of the importance of education for women and female adolescents. According to a report by Farzaneh Roudi-Fahimi and Valentine M. Moghadam from the Population Reference Bureau (PRB), “access to education has improved dramatically over the past few decades, and there have been a number of encouraging trends in girls' and women's education. Primary school enrollment is high or universal in most Middle East and North African countries, and gender gaps in secondary school enrollment have already disappeared in several countries. Women in (these) countries are also more likely to enroll in universities than they were in the past.” (1)

The research foci in North America and Europe, shown in Figures 6 and 7, are very similar. On both continents, women-related publications focus on breast cancer and heart disease, as well as on aging and pregnancy-related issues. This finding is not surprising: the CDC reports heart disease as the leading cause of women’s deaths in the USA, while breast cancer is the most common cancer among women in the United States. The same holds for Europe.

Figure 6 - Women-related research foci in Europe

Figure 7 - Women-related research foci in North America

In Asia, too, there is much focus on health-related issues, as shown in Figure 8. Yet the focus there is not so much on specific diseases as on health awareness programs and health care practices, with some additional attention to young adults and adolescents. This could be because well-researched diseases that are common in North America and Europe, such as breast cancer and heart disease, are not as prevalent in Asia. Another reason could be that Asian women seem to be in better health than women in other regions, but lack access to health systems (2).

Figure 8 - Women-related research foci in Asia

 

Conclusions

Our research focused on the publication output affiliated with women’s not-for-profit organizations such as foundations, societies and trusts. The results show an overwhelming focus on health-related research with very little research dedicated to topics related to social issues. Although health and wellness are incredibly important, issues such as equality, employment and education should also be a part of the research corpus as these have an equal effect on women’s daily lives. Women’s foundations should strive to encourage, sponsor and collaborate on the production of more data and findings related to social issues. Peer reviewed research publications on these topics have the potential to raise the awareness of governments and to enable better policy-making and enforcement. Equal education opportunities, and equal employment benefits and opportunities, for example, are not fully available yet in many countries and are just as important to women’s well-being as finding cures to diseases and illnesses.

 

References

(1) Roudi-Fahimi, F. and Moghadam, V.M. (2003) “Empowering Women, Developing Society: Female Education in the Middle East and North Africa”. Available at: http://www.prb.org/pdf/EmpoweringWomeninMENA.pdf
(2) The United Nations. “The World's Women 2010: Trends and Statistics”. Available at: http://unstats.un.org/unsd/demographic/products/Worldswomen/wwhealth2010.htm

 


Beyond the PDF 2

Dr. Daphne van Weijen and Mike Taylor report on the Beyond the PDF 2 conference held in Amsterdam in March. What is being done to change the future of scholarly communication?

Read more >


In a recent issue of Research Trends, we featured a piece by Anita de Waard and Maryann Martone on FORCE11, an international, multidisciplinary group trying to shape the future of Scholarly Communication. In the piece, De Waard and Martone described some of the key proposals from the FORCE11 Manifesto:

(a) Defining new publishable objects;

(b) Collating innovative publishing tools;

(c) Creating access to research data online;

(d) Collectively developing new business models for scholarly publication;

(e) Exploring new metrics of impact.

Since its formation in 2011, the group has worked towards achieving these aims, both online and offline. The latest offline activity was the organization of the Beyond the PDF 2 Conference (#btpdf2) in March. Around 200 participants gathered for two days in Amsterdam to listen to presentations on topics related to the Manifesto and to discuss ways to keep moving forward. Thanks to the live stream of the event and an onsite Twitter wall, which provided a live feed of all tweets containing the hash tag #btpdf2, the discussion was also picked up online, resulting in nearly 3,700 tweets from 678 different accounts (for more details see the archive (1), or the Storify (2)). The event clearly got people’s tongues wagging and got them tweeting, but who were the participants, what were the main topics of discussion, and, most importantly, what’s next?

Figure 1 - Wordle based on the content of 4 different discussions of the BTPDF2 conference found online (3, 4, 5 & 6).

 

Participants

The conference drew around 200 participants from across the research continuum, including librarians, researchers, publishers, technical developers, and policy makers. They came from many different fields, including the Humanities, Social and Life Sciences, although the biomedical fields were somewhat overrepresented (3).

 

Conference Content

The conference program clearly reflected FORCE11’s main aims, providing a great mix of presentations, flash talks, discussion sessions, and break-out sessions related to innovative publishing & communication tools, sharing research data, new business models for scholarly publishing, defining publishable objects and exploring new metrics for measuring impact (see Figure 1).

Day one of the conference had “State of the Art” as its central theme. The day started with an excellent keynote by Kathleen Fitzpatrick (Director of Scholarly Communication, Modern Language Association), who provided input from a humanities perspective. Her presentation, entitled “Planned Obsolescence: Publishing, Technology, and the Future of the Academy”, focused on the fact that academics need to learn to see the benefits of new potential forms of scholarly communication, instead of sticking with the outlets they are familiar with, such as books and print journals. The other presentations and sessions focused on new models of content creation & dissemination and on business cases, and the day was rounded off with a “Demos & Dinner” session.

The second day’s theme was “Where are we going?”, and it too started with a keynote. Carol Tenopir (Chancellor’s Professor at the School of Information Sciences at the University of Tennessee, Knoxville) gave another excellent presentation, entitled “Shaping the Future of Scholarly Communication”. Tenopir’s talk generated a lot of interest and quotes on Twitter related to her research on academics’ reading behavior.

The other presentations and sessions that day focused mainly on tools, data access and sharing, and included sessions entitled “Making it Happen” (with lots of 3-5 minute flash talks on topics such as ORCID and PDF metadata) and “New Models of Evaluation for Research & Researchers”. This last session included a panel representing different roles (publisher, researcher, dean, and funding body representative), which held a dynamic discussion/live-action role play, ably led by Carole Goble, on who should be the first to lead the change. The day ended with a “Visions for the Future” session.

Other themes that resonated with us as attendees included:

  • The “eScholarship in Context” session, hosted by Laura Czerniewicz and Michelle Willmers, focused in part on internationalism. Michelle Willmers provided input from an African perspective and spoke of the challenges faced by the Scholarly Communication in Africa Program.
  • Traditional forms of publication take too long to get content out there; one of the main problems is peer-review delay. Alternatives offered included tools for facilitating peer review, and Carole Goble’s suggestion: don’t publish, release!
  • We need standards for publication, data-sharing, citations etc. But according to NISO’s Executive Director, Todd Carpenter, the problem is that: “Standards are like toothbrushes, everybody has one, but no one wants to use someone else’s” (7).
  • In a discussion on data sharing and data reuse, Carole Goble suggested that “there is no reproducible science”. In order to help science move forward, we need to develop and implement tools to host datasets online so others can re-use them.


So now what?

So, you may wonder, now what? Was the meeting a success? Based on the response on Twitter (#btpdf2), the number of summaries of the event that appeared online shortly afterwards, and the soldiers of the Scholarly Revolution (#scholrev), we’d say yes, it was a great success. But now it’s up to everyone who attended to help support the FORCE11 movement and keep the ball rolling in the right direction…


References

(1) Twitter archive for #btpdf2: http://glimmer.rstudio.com/bodong/btpdf/
(2) Storify for #btpdf2: http://storify.com/mcdawg/beyond-the-pdf-2
See also: http://figshare.com/blog/Beyond_The_PDF_2_-_Resources_and_Highlights/76
(3) Paul Groth’s blog: http://thinklinks.wordpress.com/2013/03/21/beyond-the-pdf-2-quick-recap/
(4) Van Korlaar, I, Riese, C, (2013) “Beyond the PDF2 (#btpdf2) challenges status quo of #ScholarlyCommunication”, http://elsevierconnect.com/beyond-the-pdf2-btpdf2-challenges-status-quo-of-scholarlycommunication/
(5) Lukas Koster’s blog post on the conference, March 22 2013: http://commonplace.net/2013/03/beyond-the-library/
(6) Peter Brantley’s Conference Report: http://publishersweekly.com/pw/by-topic/industry-news/libraries/article/56519-conference-report-beyond-pdf-2.html
(7) Source: tweet by @Tac_Niso.

 

 

 


Did you know

… electric cars were already around in the 19th century?

Steven Scheerooren

Not only were electrically powered cars already available by the end of the 19th century, they were also quite popular (1, 2, 3). In the early 20th century, for example, Thomas Edison and Henry Ford cooperated on a project to produce affordable electric cars on a large scale, with Edison firmly believing that they would become the mode of transport of the future (4). So why haven’t we been using them for the past hundred years? Some would say it is due to a lack of scientific advances in battery development, while others point to the discovery of oil deposits in Texas, which made the electric car’s competitor, the gasoline engine, much more affordable.
References
1. [author unknown], “The popularity of electric cars”, Science 14;336 (1899), p. 34
2. [author unknown], “An improved truck for electric cars”, Science 15;372 (1890), p. 183
3. The Horseless Age, April 5 1899 – September 27 1899 compendium, http://www.carsandracingstuff.com/library/h/horselessage1899b.pdf
4. Strohl, Dan. “Ford, Edison and the cheap EV that almost was”, http://www.wired.com/autopia/2010/06/henry-ford-thomas-edison-ev/
