Issue 27 – March 2012

Articles

Editorial

Editorial: Societal Impact

This issue of Research Trends focuses on the measurement of the societal impact of research. Research performance is a multi-dimensional concept. Scientific impact is always a key dimension of measurement; however, there are many other ways in which research can be useful for society. Hence, an increasing number of researchers and research managers underline the importance of measuring the technological, social, economic and cultural impact of science. For the measurement of scientific and technological impact, bibliometric methods based on research publications and patents are available. But how does one measure the various forms of societal impact?

One may wonder whether measuring societal impact can in fact be done in a politically neutral way, without any explicit or implicit appreciation of the social significance of research results. What for some may be considered a solution to a social problem may for others be thought of as merely controlling a symptom. Following this line of reasoning, one may even argue that using societal impact as a criterion for the evaluation of research is dangerous – it opens doors to political control of research institutions and the research they carry out.

On the other hand, we are all well aware of the fact that science may provide very valuable and key solutions to issues in our society. Discussions about the danger of political control over research should not prevent scientists from contributing to solving these societal issues. Nor should they prevent scientists from being guided by societal considerations in choosing their topics of research.

We therefore face a dilemma. In measuring societal impact in the assessment of research, the best approach seems to be to experiment in a cautious, open and reflective manner. A good example is the approach proposed in the Research Excellence Framework in the UK, which invites researchers to submit reports explicitly indicating – demonstrating, if you like – the way in which they believe their work has had societal relevance and impact.

In the meantime, I would like to invite readers to express their views on this dilemma. Moreover, I invite them to submit reports demonstrating social impact to Research Trends for publication (print or online). In this way, the Research Trends Editorial Team hopes to contribute to the discussion of the appropriate assessment and use of societal impact in research assessment.

Kind regards,

Henk F. Moed


The evolution of brain drain and its measurement: Part II

Brain circulation in the UK context: a sea of talent

As part of the report ‘International Comparative Performance of the UK Research Base: 2011’, commissioned by the UK’s Department for Business, Innovation and Skills (BIS), a fresh way of looking at researcher mobility was sought. In the report, published in October 2011, Scopus data were used to produce a conceptual map of the stocks and flows of human capital (i.e. researchers) in the UK over the 15-year period 1996–2010 (conceptual and methodological details were discussed in Part I of this article in the previous issue of Research Trends). Thinking of the global researcher population as a sea of talent, the study aimed to quantify the size of the waves and the direction of the current from the UK’s perspective.

The main findings of the analyses are (see Figure 1):

Using each author’s affiliation(s) listed in their published articles to determine their mobility patterns, 37.2% of active UK researchers appear never to have published outside the UK in the period 1996–2010. While it is possible that many of these researchers did travel and collaborate internationally, such activities never resulted in published articles in which they listed their address as being outside the UK. These researchers show low ‘productivity’ (articles published per year since their first appearance as an author, relative to a benchmark of 1.00 for all UK researchers over this period) at just 0.60. They also display a low relative ‘seniority’ (i.e. number of years since their first appearance as an author, relative to a benchmark of 1.00 for all UK researchers over this period) of 0.82. (A computational sketch of these relative metrics follows the findings below.)

5.8% of UK researchers moved out of the UK and showed no indication of having returned to the UK since, while 5.8% of UK researchers moved into the UK and showed no indication of having left the UK since. The actual difference in this period was a net inflow of just 61 researchers to the UK (of the 210,923 total researchers in the dataset). Researchers moving out of the UK were slightly less productive than average (0.91) but also slightly more senior (1.15), and those moving to the UK had a very similar profile (0.89 and 1.13, respectively). The most common destination countries were the US, Australia, Canada, Germany and France, while the most common source nations were the US, Germany, Australia, France and Italy.

2.6% of UK researchers moved out of the UK and subsequently returned after more than two years abroad (“returnees inflow”), while 4.2% of UK researchers moved into the UK and subsequently left after more than two years in the country (“returnees outflow”). While the latter group are slightly less productive than average (0.95), the former group are highly productive (1.66). Both groups have a very similar relative seniority, at 1.20 for the returnees outflow and 1.23 for the returnees inflow. The most common destination countries amongst the returnees outflow group were the US, Australia, Germany, France and Canada, while the most common source nations in the returnees inflow group were the US, Australia, Canada, Germany and Ireland. Owing to their small number, these two groups of “returnees” contributed a relatively small amount to the UK’s brain circulation, compared to the whole. Despite this, returnees may contribute a great deal to their home country after their return.

Taking together the outflow and returnees outflow group and the inflow and returnees inflow group, the net brain outflow from the UK is about 1.5%. However, the inflow groups together constitute a more productive population than the outflow groups, despite their very similar seniority profiles.

The most prominent groups identified in this analysis are the large numbers of researchers with transitory mobility (stays either in the UK or outside it of less than two years, as indicated by the country listed in their published articles). In the period 1996–2010, 13.6% of researchers based mainly in the UK showed transitory mobility to non-UK countries, while a very large number (30.8%) of researchers based mainly in non-UK countries showed transitory mobility into the UK. While the former group is about as productive as the average (0.98) and slightly more senior (1.05), the latter group is highly productive (1.35) and somewhat more senior (1.11). The most common destination countries for the mainly UK-based group were the US, Australia, Germany, Canada and France, while the most common source nations for the mainly non-UK-based group were the US, Germany, France, Italy and Australia.
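As referenced above, here is a minimal sketch of how the relative ‘productivity’ and ‘seniority’ metrics could be computed. The values are hypothetical; the report itself gives only the definitions, not its pipeline.

```python
def relative_metric(group_values, all_values):
    """Mean of a metric within a group, expressed relative to a
    benchmark of 1.00 defined as the mean over all UK researchers."""
    group_mean = sum(group_values) / len(group_values)
    benchmark = sum(all_values) / len(all_values)
    return group_mean / benchmark

# Hypothetical articles-per-year values since first authorship:
all_uk = [1.2, 0.8, 2.0, 0.5, 1.5, 0.9]
never_published_abroad = [0.7, 0.5, 0.9]

print(f"relative productivity: {relative_metric(never_published_abroad, all_uk):.2f}")
```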

Thinking about brains: refining the map

While clearly of great value in showing the overall ebbs and flows of researchers in and out of the UK, the conceptual map derived using the above approach does come with some caveats and areas for future improvement. For example, while the map shown in Figure 1 shows the rest of world as a single collective entity, the data behind it contain the source and destination (and often intervening) countries for all the researchers it represents; these data have yet to be exploited fully (for a preview, see the report’s Appendix F here). Moreover, only two national brain circulation maps have been produced to date: one for the UK and a comparative map for Germany, the latter with an overall pattern similar to the former but with a slightly higher proportion of researchers who have apparently never been affiliated with institutions beyond Germany, and therefore a lower proportion flowing in and out of the country.

Dr Grit Laudel of the University of Twente, Netherlands, has pioneered the development of a methodological framework for bibliometric studies of brain circulation over the last decade. We asked Dr Laudel to offer her thoughts on future refinements of this approach, and her comments are reflected in the discussion below.

In contrast to the seminal works on bibliometric approaches to brain circulation by Laudel (see Part I of this article in the previous issue of Research Trends), the analyses presented here do not take a subject-level view but look across all disciplines. How does the picture differ for mathematics versus the life sciences, or the social sciences versus physics? Laudel notes: “The most important differentiation that needs to be introduced concerns scientific specialties. The present picture of mobility aggregates researchers from all fields, masking any differences between scientific specialties. However, the specialty is the locus of knowledge production. Conditions of research such as positions available and funding (which are likely to have a strong effect on mobility and migration) are specific for each specialty.” A disaggregated view would therefore be of great value for studies of the science system and research policy. Assigning authors to subject field(s) is not without problems, but if a reasonable approach could be devised (such as using the most common subject classification applied to the journals used by each author as a proxy) it would clearly yield valuable insights. Laudel agrees: “Measuring scientific mobility on the level of specialties is methodologically challenging. The approach suggested - to use journal classifications - seems to be promising, at least for mobility patterns in the disciplines whose publication oeuvre is well presented in the publication database and if a specialty’s core journals are used.”

Still thinking in terms of differences between subjects, thought could be given to subject-specific thresholds for the publication productivity filters applied to focus on ‘active researchers’, as the filters used currently have a clear potential for bias against those working in fields with a reduced focus on publication in journals (humanities and some social sciences, for example) or researchers working not in academia but in industry. It is also quite likely that, given differences in the lifecycle of research projects across different disciplines, the definitions of migratory and transitory mobility applied here may not be appropriate for all fields. Laudel says: “The authors distinguish between transitory and migratory mobility. This distinction between moves to another country for a limited period of time, which is a normal part of many researchers’ career (transitory mobility), and the less common migration (permanent moves to another country) is important because science policy wants to encourage the first but to prevent the second. However, the empirical operationalisation of this conceptual distinction is extremely difficult. The two-year threshold applied by the authors for assuming migratory mobility appears to be too short. My own recent studies of academic careers show that it is common for postdocs to stay abroad for two years; and that even longer stays in a foreign lab – three or even four years - occur too frequently to be negligible. For future research I suggest experiments with varying thresholds of two, three, four, and five years.”
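Laudel’s suggested sensitivity analysis is straightforward to prototype. The sketch below is hypothetical: it assumes stay lengths (in years) have already been derived from the affiliation countries in each researcher’s publication sequence, and simply re-classifies moves under each candidate threshold.

```python
# Hypothetical stay lengths abroad (in years), one per researcher,
# derived from affiliation countries in successive publications.
stays_abroad = {"researcher_A": 1, "researcher_B": 2, "researcher_C": 4, "researcher_D": 7}

for threshold in (2, 3, 4, 5):
    migratory = [r for r, years in stays_abroad.items() if years > threshold]
    transitory = [r for r in stays_abroad if r not in migratory]
    print(f"threshold {threshold} yr: {len(migratory)} migratory, {len(transitory)} transitory")
```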

The UK brain circulation map looks at researcher productivity and seniority over the entire 15-year span of the analysis, which offers an overview of the stocks and flows of human capital in that period but ignores the temporal dynamics of this complex system. On the basis of a detailed temporal analysis of the career trajectories of 20 individual scientists, Laudel made two very important observations: i) current elites recruit future elites, and a country needs elites to generate elites; ii) it is not necessarily the current elite that migrate, but those who will go on to become the elite later in their careers — a country needs strategies to attract the potential elite (1). It would be of great interest to see how these observations on a handful of individuals in selected specialties scale to the active researcher population of the UK: can these findings be confirmed, or can they be refined even further?

Finally, Laudel suggests that more sophisticated metrics to describe the researchers comprising each of the mobility groups shown on the UK map could be devised: “While this information is very interesting, the relative productivity is very likely to be read as a proxy for quality, which is unfortunate. It is of course very important for science policy to know, for example, about the performance levels of researchers ‘gained’ and ‘lost’. However, this requires better indicators than those which are not intended to represent quality but will inevitably be interpreted that way.”

The brain circulation map presented in the ‘International Comparative Performance of the UK Research Base: 2011’ report offers empirical progress on an important but difficult question. As Laudel concludes: “…the map provides not only interesting information, but also many suggestions for further research. Hopefully those will be taken up.”

Figure 1 – International mobility of UK researchers, 1996–2010. See article text for further details. The original figure (Figure 3.3, pg. 21) appeared in the ‘International Comparative Performance of the UK Research Base: 2011’ report.

References

1. Laudel, G. (2005) “Migration currents among the scientific elite”, Minerva, Vol. 43, pp. 377–395.

The influence of free encyclopedias on science

Wikipedia’s birth and growth

Since its launch in 2001, Wikipedia has seen incredible growth worldwide, counting more than 21 million articles published in around 280 languages (including nearly 4 million articles in English) in 2012 (1). Wikipedia has grown in size (the number of Wikipedia entries/articles has been increasing over time) and shows high reliability: a recent study (2) of historical entries found 80% accuracy for Wikipedia, compared to 95–96% for other sources. This means that, for the entries checked in the study, Wikipedia contained on average only about 15 percentage points more errors than other sources, including traditionally authoritative ones such as Encyclopaedia Britannica; the research considered this difference negligible. Add to this Wikipedia’s ease of access and wide coverage of topics, and it is clear why, for many people, it has become the first port of call for instant general knowledge on a variety of subjects.

Wikipedia enters scholarly communications

What is perhaps surprising is that Wikipedia appears to be increasingly used by scholars in their research. Research published in 2011 (3) looked at the visibility of Wikipedia in scholarly content, and found a steady increase in the amount of work about Wikipedia from 2002 to 2010. Research Trends replicated the study, looking for “*wikipedia*” in the titles, keywords, or abstracts of scholarly papers published in journals covered in Scopus (see Figure 1), and found a staggering Compound Annual Growth Rate (CAGR) of 69% per annum from the first paper in 2002 to the 158 papers published in 2011. Even over the past 5 years (2007–2011), CAGR was an impressive nearly 19% per annum.
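For readers who want to check such figures, CAGR follows the standard formula (last/first)^(1/years) − 1. The sketch below uses illustrative inputs, not the article’s underlying Scopus counts.

```python
def cagr(first: float, last: float, years: int) -> float:
    """Compound annual growth rate over `years` periods."""
    return (last / first) ** (1 / years) - 1

# Illustrative only: a count growing from 100 to 260 papers over 5 years.
print(f"{cagr(100, 260, 5):.1%} per annum")  # ~21.1%
```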

Figure 1 – Annual number of scholarly papers with “*wikipedia*” in their titles, keywords, or abstracts, published in journals only. Source: Scopus (note: data for 2011 may be incomplete)

Through the back door of references

More interestingly, there has also been a dramatic increase in the number of publications referring to Wikipedia as a source. The aforementioned study (3) limited its search to mentions of Wikipedia as a reference title, but extending the search to all reference fields reveals much wider use, even when restricted to scholarly content published in journals (see Figure 2). CAGR was an unbelievable 88% per annum from the first paper in 2002 to the 4,006 papers published in 2011. Focusing on the past 5 years (2007–2011), CAGR was still impressive at more than 31% per annum.

Figure 2 – Annual number of scholarly papers with “*wikipedia*” in their references, published in journals only. Source: Scopus (note: data for 2011 may be incomplete)

Wikipedia as a topic versus Wikipedia as a reference

Figures 1 and 2 show data trends similar to a logistic growth curve, characterised by near-exponential growth at the beginning, followed by levelling off and then saturation. Interestingly, whilst Figure 2 does show some level of saturation in recent years, Figure 1 does not: the use of Wikipedia as a reference in scholarly communications may be approaching a plateau, but the scientific community’s interest in Wikipedia as a topic of research carries on growing rapidly.
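The saturation described here can be made quantitative by fitting a logistic curve to the annual counts. The following sketch uses SciPy with placeholder data; the article’s own counts sit behind Figures 1 and 2.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: carrying capacity K, growth rate r, midpoint year t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

years = np.arange(2002, 2012)
counts = np.array([1, 4, 12, 35, 90, 200, 380, 600, 760, 840])  # placeholder data

(K, r, t0), _ = curve_fit(logistic, years, counts, p0=(1000.0, 0.5, 2007.0))
print(f"estimated saturation level: {K:.0f} papers per year")
```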

At subject level, overall there is a strong correlation (correlation coefficient 0.83) between the number of papers about Wikipedia and the number of papers referencing Wikipedia: Social Sciences, Computer Science, Medicine, and Engineering all make the top 5 most prolific areas for both (see Figures 3a and 3b).
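The quoted coefficient is a standard Pearson correlation across subject areas; a minimal sketch with hypothetical per-subject counts:

```python
import numpy as np

# Hypothetical counts per subject area (not the article's data).
papers_about_wikipedia = np.array([420, 310, 150, 140, 90])
papers_referencing_wikipedia = np.array([900, 1100, 600, 450, 200])

r = np.corrcoef(papers_about_wikipedia, papers_referencing_wikipedia)[0, 1]
print(f"Pearson correlation: {r:.2f}")
```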

Figure 3a – Subject area distribution of 2002-2011 scholarly papers with “*wikipedia*” in their titles, keywords, and abstracts, published in journals only. Source: Scopus (note: data for 2011 may be incomplete)

Figure 3b – Subject area distribution of 2002-2011 scholarly papers with “*wikipedia*” in their references, published in journals only. Source: Scopus (note: data for 2011 may be incomplete)

The correlation between the number of papers about Wikipedia and the number of papers referencing Wikipedia is even stronger at country level (correlation coefficient 0.96; see Figure 4a).

Figure 4a – Comparison of the number of 2002–2011 scholarly papers with “*wikipedia*” in their references and the number of 2002–2011 scholarly papers with “*wikipedia*” in their titles, keywords, or abstracts, aggregated by country and published in journals only. Source: Scopus (note: data for 2011 may be incomplete)

The zoomed Figure 4b reveals some outliers: European countries such as Germany, France, the Netherlands, Italy, and Spain tend to study Wikipedia proportionally more than they cite it, while the reverse is observed for Asian countries such as China and India.

Figure 4b – Comparison of the number of 2002–2011 scholarly papers with “*wikipedia*” in their references and the number of 2002–2011 scholarly papers with “*wikipedia*” in their titles, keywords, or abstracts, aggregated by country and published in journals only – restricted to countries with 200–1,000 papers referencing “*wikipedia*”. Source: Scopus (note: data for 2011 may be incomplete)

Which other ones?

Research Trends also wondered whether similar trends would be observed for other free online encyclopedias (see the box below for brief descriptions). The above analysis was replicated, looking for mentions of these other free online encyclopedias in the references of scholarly papers published in journals covered in Scopus (see Figure 5 for the most referenced). Although growing trends were observed for most of the terms, the actual values were much lower than those observed for Wikipedia: the closest contender was Scholarpedia, with an astounding 80% growth per annum from 2007 to 2011 (27% for 2009–2011), but in 2011 it reached only about 5% of the number of papers referencing Wikipedia. None of the other sources came close, with fewer than 50 papers referencing each of them in 2011.

  • Citizendium: “an English-language free encyclopaedia project launched by Wikipedia’s co-founder.”
  • Knol: “Knol is a Google project including user-written articles on a range of topics.”
  • PlanetMath: “a collaborative encyclopaedia focussing on mathematics.”
  • Scholarpedia: “peer-reviewed open-access encyclopedia, where knowledge is curated by communities of experts.”
  • Wikibooks: “a free library of educational textbooks that anyone can edit.”
  • Wikipedia: “a free, collaborative, multilingual Internet encyclopedia.”
  • Wikisource: “Wikisource is an online library of free content publications, collected and maintained by the Wikisource community.”
Figure 5 – Annual number of scholarly papers referencing various free online encyclopedias, published in journals only. Source: Scopus (note: data for 2011 may be incomplete)

Reference work in action

Although the growth of Wikipedia’s influence on scholarly publications is impressive, the enthusiasm of researchers for referencing free online encyclopedias has not yet transferred to other free online encyclopedia sources en masse. It could be that acceptance of these alternative reference works will take time, or that scientists find Wikipedia to be a sufficient and well-established source within the free online encyclopedia category.

Wikipedia is frequently updated, making it a very dynamic resource. This raises potential issues of version control and instability of references: a Wikipedia entry referenced in a paper published 5 years ago may have changed so considerably that it is no longer applicable to the paper citing it. As Wikipedia’s content is edited to reflect the latest scientific advancements (especially in fast-moving fields such as the biomedical sciences), it may retrospectively invalidate references found in older papers. In the coming years, academics will decide through their citation and referencing practices whether this is acceptable, and whether the advantages of free online encyclopedias outweigh their disadvantages.

References

1. Wikimedia Foundation, Inc. (2012) “Wikipedia” entry, retrieved 13 March 2012 from: http://en.wikipedia.org/wiki/Wikipedia
2. Giles, J. (2005) “Internet encyclopaedias go head to head”, Nature, Vol. 438, No. 7070, pp. 900–901, http://www.nature.com/nature/journal/v438/n7070/full/438900a.html
3. Park, T. (2011) “The visibility of Wikipedia in scholarly publications”, First Monday [Online], Vol. 16, No. 8

    Patenting Library Science Research Assets

Many factors in today’s scientific landscape, the most prevalent being budgetary constraints, make the ability to measure Return on Investment (ROI) crucial for funding decisions. Academic research is being scrutinized in search of a metric or evaluative model that will enable decision makers to understand the potential of its results and the ways in which it will impact the economy and society as a whole. One of the most frequently used and most natural ways to measure science’s impact has been to measure its patentability, as is evident in the numerous studies that have explored the phenomenon of basic research patenting and its effects on both academic and industrial progress (1,2,3). The passage of the Bayh-Dole Act in 1980 contributed to the increase in university patent applications. This act gave universities the right to own and license the results of their government-funded research and, in return, to share a portion of the revenue derived from such patents with the inventor. It has been noted that this increase is more evident in certain disciplines and fields of research, such as Biotechnology, Pharmacy and Engineering (4,5).

Unlike research in the natural and life sciences, research in the social sciences, as well as the arts and humanities, is more difficult to measure on the research-patent-revenue scale. These disciplines, by their very nature, explore personal, social, national and international phenomena over time, and their results qualitatively inform policy and the economy in ways that are not necessarily patentable.

The field of Library Science has always been considered a hybrid area of research, and it evolved over time to include Information Science. Aligning more closely with the Social Sciences in its early years, the field expanded to include elements of computer science and information management. Examining the field and its development from paper to electronic information solutions, one might assume that technology was the driver of this transformation. This article will show that, in fact, it was Library Science research that informed and inspired the development of information retrieval solutions, sometimes years before the technology was available to translate it into viable algorithms and computerized modules.

The purpose of this study is to demonstrate the technological and economic viability of Library Science and to show the areas of technology where research in this discipline has had the most influence. Influence was measured by analyzing the manner in which articles published in library journals are cited in patents.

The analysis addressed the following aspects:

(a) How many library journals were cited in the patents covered by TotalPatent™ between 1992 and 2011, and how often?

(b) Which articles were cited most frequently?

(c) How can one characterize the content of the cited articles and the patents citing them, using keywords or subject classification systems?

(d) Who were the assignees of the patents citing library journals?

(e) What was the time delay between the publication year of the cited work and that of the citing (granted) patent?

Methodology

Leading library journals with a high SNIP score were analyzed. SNIP (Source Normalized Impact per Paper) is a journal metric available in Scopus™ which takes into account the citation behaviour and characteristics of the subject field covered by a journal, and thus allows a comparison of subject-related journals; in this case, Library & Information Science journals.

In the first phase, the Scopus™ SNIP journal ranking analysis retrieved 42 journals, which were then searched for using the Non-Patent-Literature citation field in TotalPatent™ (Note 1).

TotalPatent™ is a comprehensive database covering applications and patents granted by a large number of patent offices around the world, including the US (USPTO) and European (EPO) patent offices and the World Intellectual Property Organization (WIPO), from 1992 onwards.

In the second phase, all patents citing these journals were retrieved and the non-patent literature cited in them was extracted. These citations were manually analyzed and all the library journal articles were collected.

The third phase of the study involved building a database with the following data fields: Journal Title, Total Number of Citations, Number of Unique Cited Articles, Unique Article Titles, Year of Publication, Number of Citations, Patent Numbers, Patent Titles, Filing/Issue Dates, Inventor, Assignee, and Classifications. It must be noted that the numbers of citations presented are approximate, owing to unexpected variations in the journal titles included in the non-patent citations, and to double counts caused by patent families of more or less identical patents submitted to multiple patent offices.
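As a rough illustration of this third phase, the database could be set up as follows. The field names come from the article; the storage engine (SQLite) and the column types are assumptions made for the sketch.

```python
import sqlite3

conn = sqlite3.connect("library_patent_citations.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS citations (
        journal_title          TEXT,
        total_citations        INTEGER,
        unique_cited_articles  INTEGER,
        article_title          TEXT,
        publication_year       INTEGER,
        citation_count         INTEGER,
        patent_number          TEXT,
        patent_title           TEXT,
        filing_issue_date      TEXT,
        inventor               TEXT,
        assignee               TEXT,
        classification         TEXT
    )
""")
conn.commit()
conn.close()
```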

    Results

Of the initial list of 42 library journals, 8 were found to be cited in patents covered by TotalPatent™; these are listed in Figure 1 below. In addition to the total number of citations, the number of unique articles cited was also analyzed. The Journal of the American Society for Information Science and Technology was the most highly cited, with 76 citations overall and 24 unique articles cited. Library Hi Tech and Library Journal followed with 58 and 50 total citations, and 17 and 13 unique cited articles, respectively.

    Figure 1- Citations to Library Science journals. Source: Scopus

In order to better understand the themes covered in the articles and sketch the domains to which they pertain, the articles’ author-given and indexed keywords, as well as their titles, were collected from Scopus™ and used to build a word cloud featuring these keywords, presented in Image 1.

    Image 1 - Emerging topics based on article keywords. Source for data: Scopus

The word cloud was created using Wordle™, a free web-based application that enables the generation of word clouds from free text. In order to create as accurate a word cloud as possible, phrases within the titles and keywords were kept intact using Wordle™’s advanced functionality.
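Wordle™ is web-based; for readers who prefer a scriptable route, a comparable cloud can be generated with the Python wordcloud package, feeding whole phrases as frequency keys so that multi-word terms stay intact. The frequencies below are hypothetical.

```python
from wordcloud import WordCloud

# Hypothetical phrase frequencies drawn from article titles and keywords.
phrase_frequencies = {
    "information retrieval": 24,
    "digital libraries": 18,
    "document management": 15,
    "indexing": 12,
    "online catalogs": 9,
}

cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(phrase_frequencies)
cloud.to_file("article_keywords_cloud.png")
```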

Analyzing the articles’ keywords as visualised in the word cloud shows that the articles feature information retrieval and indexing, and information and document management systems pertaining to the development of electronic and digital libraries. This finding was of particular interest because the publication years showed peaks at the times when electronic libraries and automated information retrieval systems were beginning to be investigated. Figure 2 below, which indicates the publication years of the cited articles, clearly demonstrates relatively high numbers of citations to articles published at the end of the 1980s and in the late 1990s, when information retrieval and management research were flourishing.

    Figure 2 - Distribution of articles publications by year. Source: Scopus

An analysis of the correlation between the year of an article’s publication and its citation in a patent showed that the time lapse between the two is significant, ranging from 10 to 20 years. This indicates that both technical and conceptual developments took place within the field before the technology existed to apply its broader concepts, such as online commerce.
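A sketch of the lag computation, with hypothetical article/patent year pairs standing in for the study’s extracted citation records:

```python
import pandas as pd

# Hypothetical (article publication year, citing patent year) pairs.
df = pd.DataFrame({
    "article_year": [1985, 1986, 1990, 1995],
    "patent_year":  [1999, 2003, 2005, 2008],
})
df["lag_years"] = df["patent_year"] - df["article_year"]
print(df["lag_years"].describe())  # lags of roughly 10-20 years
```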

Two examples further illustrate these results. The article “NOTIS: The System and Its Features”, published by James Meyer (1985) in Library Hi Tech (6), was cited 11 times in patents published between 1999 and 2006. The article featured an online library management system that integrated the public access catalog and, in addition, included acquisitions, serials management, authority control, and circulation. Patents citing the article include information management systems as well as online purchasing systems that handle product management, purchasing and exchanges. The second example is the article “MAGGIE III: The Prototypical Library System”, published by Kenneth E. Dowlin (1986) in Library Hi Tech (7), featuring an integrated library system that supported a public access catalog and included a cataloging interface, bibliographic maintenance, circulation, electronic mail, and community information databases. The article was cited 10 times in patents published between 1999 and 2008. The patents citing it made use of some of its concepts to develop electronic commerce sites that manage information such as sales transactions and the processing of product registrations and returns.

To visualize the subject fields covered by the citing patents, the words in their titles were collected and constructed into a word cloud (see Image 2 below). As can be seen, the patents focus on electronic information administration, navigation, and product and service management in commercial systems.

    Image 2 - Patents titles keywords. Source for data: Scopus

The subject areas as they emerge from the title words correspond to the major classes to which the patents were assigned. Analysis of the classifications of the citing patents showed that a large majority fall in the area of Data Processing, with subcategories ranging from financial, business, and database structure to digital processing (see Figure 3 below). For example, in the patent keywords word cloud, the topics information systems, personalization, and computers clearly dominate, while the classifications pertain to parallel applications in areas of computer processing. These, in turn, correspond to the heavy emphasis on information management in the journal articles. The thread of information and data management, combined with customer management and personalization, is carried through the article keywords and the patent titles and classifications.

    Figure 3 - Distribution of patent classifications

An examination of assignees revealed 55 unique corporate entities, with the single exception of one university. Looking at the top 5 assignees, one can see the dominance of information management companies as well as online purchasing and commercial corporations.

    Figure 4 - Top Assignees

    Discussion

Earlier studies examined the relationships between research and patents with the aim of finding a direct link between a researcher and his or her patent application. The study presented in this paper focused instead on finding citations of Library Science journals in patents filed between 1992 and 2011 and covered by TotalPatent™.

The analysis of the citation of Library Science journals in patents revealed some interesting observations. First, the most cited journals in this field are those covering research that pertains to software development, especially in the domains of information and/or data management.

Second, the articles’ keywords, as shown in the word cloud, strongly indicate the themes of information and document retrieval, including indexing, mining, browsing and so on. Other themes, indicating the diversity within the field, pertained to multimedia management, graphics retrieval and the web. This is of particular interest considering that these articles were mostly written when the internet was in its infancy, indicating forward-looking and innovative approaches within the field.

Thirdly, examining the citing patents and analyzing their title words showed a strong focus on information systems but also on products, which correlates with the articles’ content and with the overall classifications being in the area of data processing.

Lastly, the modules featured in these articles were originally developed for library transaction management systems and have inspired commercial uses in online commerce. The library system, serving the public and exchanging different types of commodities such as books and audio and video items, has unique properties that allow for this relationship between commercial and public purchasing. Library systems support exchanges, client information management and public interfaces, which are similar in essence to those needed for online purchasing.

Overall, the analysis showed that library systems were developed before online commerce was conceived and, in a way, inspired its development. The time lapse between the articles’ publication years and the years of their citation in patents featuring systems and modules is important: these library systems were developed at a time when the internet as we know it today did not exist, and they demonstrate forward thinking and innovative breakthroughs that were turned into far-reaching applications.

    Notes

1. The 42 Library Science journals included in the study are: Library; Library and Information Science Research; Library Collections, Acquisition and Technical Services; Journal of Library Administration; Library Quarterly; Electronic Library; Library Hi Tech; Journal of the Medical Library Association: JMLA; School Library Media Research; Huntington Library Quarterly; Library Resources and Technical Services; International Information and Library Review; Library Review; Journal of Interlibrary Loan, Document Delivery and Electronic Reserve; Library Management; Library Trends; Malaysian Journal of Library and Information Science; Library and Archival Security; New Library World; Journal of Educational Media and Library Science; Library Philosophy and Practice; Law Library Journal; Journal of Library and Information Services in Distance Learning; Public Library Quarterly; Library Hi Tech News; Canadian Journal of Information and Library Science; Library Administration and Management; Library Leadership and Management; Library Journal; African Journal of Library Archives and Information Science; Library and Information Science; Australian Library Journal; Journal of Hospital Librarianship; Issues in Science and Technology Librarianship; Journal of Business and Finance Librarianship; Journal of Electronic Resources Librarianship; Advances in Librarianship; Journal of Web Librarianship; Journal of Librarianship and Information Science; Journal of Academic Librarianship; Journal of Librarianship; New Review of Children's Literature and Librarianship

    Acknowledgement

    The authors would like to thank Jon Klein, Eric Van Stegeren and Oliver Curtis from the TotalPatent™ team at Lexis-Nexis for their generous assistance with collecting and accessing the patent data used in this article.

    References

1. Kirschenbaum, S. R. (2002) “Patenting basic research: Myths and realities”, Nature Neuroscience, Vol. 5 (Suppl.), pp. 1025–1027.
2. Saathoff, J. (2010) “Technology transfer at the Technical University of Braunschweig: Cooperation projects, patents and start-ups” [Technologietransfer an der Technischen Universität Braunschweig: Kooperationsprojekte, Patente und Existenzgründungen], PTB-Mitteilungen Forschen und Prüfen, Vol. 120, No. 4, pp. 308–311.
3. Sampat, B. N. (2006) “Patenting and US academic research in the 20th century: The world before and after Bayh-Dole”, Research Policy, Vol. 35, No. 6, pp. 772–789.
4. Thursby, J. G. & Thursby, M. C. (2011) “Has the Bayh-Dole act compromised basic research?”, Research Policy, Vol. 40, No. 8, pp. 1077–1083.
5. Rosell, C. & Agrawal, A. (2009) “Have university knowledge flows narrowed? Evidence from patent data”, Research Policy, Vol. 38, No. 1, pp. 1–13.
6. Meyer, J. (1985) “NOTIS: The System and Its Features”, Library Hi Tech, Vol. 3, No. 2, pp. 81–90.
7. Dowlin, K. E. (1986) “MAGGIE III: The Prototypical Library System”, Library Hi Tech, Vol. 4, No. 4, pp. 7–21.

    Scientific Evaluation and Metrics – an Interview with Julia Lane

    Julia Lane (jlane@air.org)

    Q: You have an economics and statistics background. Can you tell us about how that was leveraged and used in the development of the Science of Science & Innovation Policy (SciSIP) program?
    A: It helped in two ways. First, it helped me engage with much of the social science community and get them interested in studying the very interesting problems in science and innovation policy. Developing a strong researcher community is the most important part of the program. The second was in working with colleagues to build a strong data infrastructure. The need for a standardized way to connect scientific researchers receiving funding with the output that they produce was apparent from the beginning, as data were scattered around many different systems and couldn’t be patched together. I spent a lot of my career working in areas related to labour, education and health policy – particularly building datasets necessary to understand the results of policy interventions. That meant that I had a strong background to draw on, particularly when the focus of the Federal stimulus package was to track how the money created jobs.

Q: STAR METRICS might be the first serious attempt to use a triangulated approach to evaluate the impact of Government funding. What were the major forces that influenced the development of STAR METRICS? (e.g. government mandate? market forces?)
    A: The overarching goal of the STAR METRICS program is to provide a better empirical basis for science policy. The program resulted from a federal mandate that asked institutions receiving stimulus grants to report on jobs resulting from them. Responding to this mandate was difficult because there was not one system that captured these data in an automated, consistent and measurable way. We developed an approach that enabled the information to be captured in a relatively low burden way. In addition, the federal agencies and the research agencies felt that this focus was far too narrow and that more aspects should be measured. Researchers funded by the SciSIP program had already developed some data, models and tools to respond to this need, and the Science of Science Policy Interagency group had developed a Roadmap (in 2008) that identified what key elements were necessary. This foundation, combined with input from agencies and research institutions, enabled us to start to build an open and automated data infrastructure that can be used by federal agencies, research institutions and researchers to document federal investments in science and to analyze the resulting relationship between inputs, outputs, and outcomes.

    Q: From your experience what are the major forces that inform and drive Science Policy? (e.g. scientific advancements, the scientists, Government budgets, public opinion)
A: I and many others believe that there is no single factor and that everything is endogenous. As with everything else, when it comes to funding and budgets there are many forces involved and everything depends on everything else. One of my favourite articles on this exact matter was written by Daniel Sarewitz in 2010 (1). In this article he points to the importance of public opinion and, as a consequence, the politics of funding and the gaps between scientists’ perceptions and the public’s. One factor is interwoven with the other, really. We hope that our efforts to build an open data infrastructure that incorporates as many of these factors as possible will help inform this complex process.

Q: Do you see differences between countries in their approach and methodologies in the evaluation of science? Can you name a few?

A: Most countries still use the number of publications and citations as an indicator of quality and productivity, and that is worrying. We want to identify and support the best science, and I think there is good evidence that counting publications is not sufficient. We do know that it is possible to identify what it is that makes good science; tenure committees, academic administrators and peers routinely make decisions based on who they think is doing good science. The challenge is to get the community to identify what data form the basis for the decisions made by these committees. In the past we relied on personal judgements and close networks of people in a certain field who knew each other and each other’s work. Nowadays, with the boost in international collaborations and team science, as well as the interdisciplinary nature of science, these types of personal evaluations are no longer sustainable.

Q: There is a lot of buzz around the term “science policy” and its implications for innovation. In your opinion, does science policy encourage or discourage scientific novelty, or is it more of an organic process driven by discovery, budgets or other factors?

A: As an economist I would describe it as an endogenous process, which means that funding is driven by science and science is driven by funding. Funding agencies always look for the next hot area of science to invest in. When funding is allocated, the particular field will see growth, which in turn attracts more funding. There’s a constant exchange between scientific innovation and discovery and investment. The challenge is to sustain scientific progress so that funding will remain available. This is an interesting process because we can see many examples of areas of research that died when funding was no longer available and, on the other hand, areas which stayed active and flourished even after funding was no longer available. This in itself is an indicator of influence and impact.

    Q: Traditionally scientific impact was measured by citations and journals’ Impact Factors. Can you give an example of how the STAR METRICS’ triangulated approach integrated traditional methodologies as well as social, workforce and economic indicators?

    A: We are just starting down that path – we hope that the community will help the program develop new and better approaches. We have started to build an Application Program Interface (API) that, once launched, will permit the community to contribute their own insights. The API is based on NSF data, but will be extended to USDA data shortly. It uses new approaches, such as topic modelling techniques to mine large amounts of text (thanks to David Newman’s work at the University of California, Irvine) to describe NSF’s research portfolio. This work was combined with other new approaches, such as Lee Fleming’s work (at Harvard) to disambiguate the names of patent grantees from US Patent and Trademark Office data. A very skilled group of individuals worked to build that data infrastructure; the website that provides different lenses into this infrastructure can be seen here.

    Q: What future developments would you like to see for STAR METRICS and Science Policy in general?
A: First, I’m encouraged by the growth in participating agencies and institutions, both domestically and internationally; in addition to major federal agencies (OSTP, NIH, NSF, DOE, USDA and EPA), more than 85 universities are participating. Internationally, Japan, Brazil, China and a number of European countries are actively exploring ways to evaluate science and innovation. There are plans to translate the Handbook of Science of Science Policy, which I edited with Kaye Husbands Fealing, Jack Marburger and Stephanie Shipp, into Japanese and Chinese.

I would like STAR METRICS to be thought of as more than a dataset, and to be seen as an approach. We always have to remember that the mission is to identify the best science and to keep the focus on it by employing modern approaches. We owe it to the taxpayer and to ourselves to make funding and other decisions in a scientific manner; we must make these investments as wisely as possible. At the very least, we must have some understanding of how these investments make their way through the economic and scientific system.

    Q: Can you tell us about your new position and what you hope to achieve in your new role?

    A: I joined the American Institutes for Research (AIR) as a Senior Managing Economist both because of their reputation for producing high quality research and their international reach. As a government employee I wasn’t always able to work internationally and that has always been a great interest of mine. AIR is a very high quality research institution with a great deal of expertise in impact assessment and evaluation on both international and domestic levels. I look forward to collaborating with institutions around the world.

    Q: If there is one highlight or accomplishment that you could pick in your impressive career – what would it be?

    A: Do you mean other than my children?
    As far as my career, I’m very proud of the creation of the Longitudinal Employment-Household Dynamics (LEHD) program which started as a small research project of mine, and was eventually expanded to all 50 states. [Note: Julia won the Vladimir Chavrid Memorial Award for this program].

    About STAR METRICS

STAR METRICS is a federal and research institution collaboration to create a repository of data and tools that will be useful to assess the impact of federal R&D investments. The National Institutes of Health (NIH) and the National Science Foundation (NSF), under the auspices of the Office of Science and Technology Policy (OSTP), are leading this project. The project was developed after a successful pilot conducted with several research institutions in the Federal Demonstration Partnership (FDP). For more information visit: https://www.starmetrics.nih.gov/

    References

1. Sarewitz, D. (2010) Nature, Vol. 468, p. 135, http://www.nature.com/news/2010/101110/full/468135a.html

    Research Impact in the broadest sense: REF 14

    How do we know the return-on-investment for academic research? What is the impact of the academic studies that have been carried out? What is the value for money of the research that a university has performed?

    In search of excellence

These questions, and more, have been important but difficult to answer for many higher education institutions. That is why they are the focus of the Research Excellence Framework (REF), a revised system for assessing the quality of research in UK higher education institutions, whose results will be finalised in 2014. The REF is undertaken by the four UK higher education funding bodies (HEFCE, SFC, HEFCW and DELNI) to help them decide where to allocate funding, to provide accountability for public investment in research, and to provide benchmarks for universities in the UK. It is important to note that the REF is a selective assessment exercise, not an audit: institutions make their own submissions, and they can choose who is included, what constitutes their best work, and how to demonstrate the social impact derived from it. Its focus will therefore truly be on excellence.

    In time

    In 2006, the UK Government announced its intention to reform its current framework for assessing and funding research. What followed was (1):

    • some initial studies on the potential use of bibliometric indicators;
    • a bibliometrics pilot exercise;
    • proposals to assess the social impact of research;
    • another pilot exercise to test and develop the proposed approach.

In March 2011 the funding bodies announced their decisions on the weighting and assessment of impact within the REF. In November 2011, a conference was organized at the Royal Society in London to examine in detail how the REF will work in practice (2). In this article, Research Trends combines insights from that meeting with background information to give you the complete and up-to-date picture.

    Force of impact

    Impact is defined in the broadest sense. The REF looks at several aspects of impact, such as scientific, economic and social, in particular using case studies to demonstrate social impact. Impact is evaluated by panels conducting peer review, and these experts will make use of different types of information and different sources as they deem appropriate. In doing so, they aim to arrive at the fairest evaluation possible, as it is based on many different aspects of impact. In order to ensure that the expert panels include a sufficient breadth and depth of expertise to produce robust assessments and carry the confidence of the community, submissions can be made to 36 different units of assessment, or subject areas.

Bibliometric indicators derived from SciVerse Scopus will be available to 11 of the 36 panels (see Table 1 for details) to complement and/or confirm their peer review findings, should they wish to use them. Most panels in the Health Sciences, Life Sciences and Physical Sciences will have bibliometric information available. Fields such as Engineering and the Social Sciences, where citation information is known to have less uptake, will not make use of this option.

REF unit of assessment | Bibliometrics data available?
1. Clinical Medicine | Yes
2. Public Health, Health Services and Primary Care | Yes
3. Allied Health Professions, Dentistry, Nursing and Pharmacy | Yes
4. Psychology, Psychiatry and Neuroscience | Yes
5. Biological Sciences | Yes
6. Agriculture, Veterinary and Food Science | Yes
7. Earth Systems and Environmental Sciences | Yes
8. Chemistry | Yes
9. Physics | Yes
10. Mathematical Sciences | No
11. Computer Science and Informatics | Yes
12. Aeronautical, Mechanical, Chemical and Manufacturing Engineering | No
13. Electrical and Electronic Engineering, Metallurgy and Materials | No
14. Civil and Construction Engineering | No
15. General Engineering | No
16. Architecture, Built Environment and Planning | No
17. Geography, Environmental Studies and Archaeology | No
18. Economics and Econometrics | Yes
19. Business and Management Studies | No
20. Law | No
21. Politics and International Studies | No
22. Social Work and Social Policy | No
23. Sociology | No
24. Anthropology and Development Studies | No
25. Education | No
26. Sports-Related Studies | No
27. Area Studies | No
28. Modern Languages | No
29. English Language and Literature | No
30. History | No
31. Classics | No
32. Philosophy | No
33. Theology and Religious Studies | No
34. Art and Design: History, Practice and Theory | No
35. Music, Drama, Dance and Performing Arts | No
36. Communication, Cultural and Media Studies, Library and Information Management | No

    Table 1 - Units of assessment in REF 2014, indicating which ones will have bibliometric information available as part of the toolkit to evaluate impact.

    A rather unique example of impact

You may know that Amy Williams won the gold medal in skeleton bobsleigh at the 2010 Winter Olympics. But did you know that two PhD students, Rachel Blackburn and James Roche of the University of Southampton, helped realize this achievement by tailoring the sled’s design to her body contours and method of steering? Dr Stephen Turnock, Blackburn and Roche’s supervisor at the University of Southampton’s School of Engineering Sciences, said that they had “demonstrated that engineering excellence can be delivered by a small dedicated team with a clear vision”. (3)

    What’s your number?

Some quotes from the panel criteria and working methods document (4) clarify the REF’s vision on the use of bibliometrics in this exercise.

    On using more than one indicator:

    “Where available and appropriate, citation data will be considered as a positive indicator of the academic significance of the research output. This will only be one element* to inform peer-review judgments about the quality of the output, and will not be used as a primary tool in the assessment.” (p. 13)

    On reliability and comparability:

    “… the citation count is sometimes, but not always, a reliable indicator. (…) such data may not always be available, and the level of citations can vary across disciplines (…). Sub-panels will be mindful that citation data may be an unreliable indicator for some forms of output (for example, relating to applied research) and for recent outputs.” (p.42)

    On putting a number into context:

“Where available on the Scopus citation database, the REF team will provide citation counts for submitted outputs, at a pre-determined date and in a standard format. The sub-panels will also receive discipline-specific contextual information about citation rates for each year of the assessment period to inform, if appropriate, the interpretation of citation data.” (p.42)

    Cause for concern

Much of the original criticism of the REF focused on the measurement of impact and whether it could be done in an objective way (see, for instance, (5)). It was often commented that impact can’t include everything: it relies on strong underlying science, and several speakers at the conference underlined that “curiosity science” or “risk science” is not something an institution should be penalized for, even if it will not consistently pay off as much in terms of impact as more “conservative science” inevitably will.

Other concerns have been expressed about specific subject areas, especially the Arts & Humanities, where it may be more difficult to show impact, not only in terms of citation counts but also in terms of impact on society. In this issue of Research Trends we describe the role of library and information science journals in generating patents, which is one potential way of showing concrete impact. Other examples of impact could be improving public understanding, improving patient outcomes, or influencing policy.

    Watch this space

Final results will not be published until 2014, but Research Trends will follow up and report on any interesting developments, as fostering excellence is crucial for the research of the future. The REF is not simply an exercise in assessing what was done, but what was done over and above the expected.

    Links

    1. http://www.hefce.ac.uk/research/ref/
    2. http://www.hepi.ac.uk/478-2001/HEPI's-Autumn-Conference-will-focus-on-the-new-Research-Excellence-Framework-which-is-due-to-go-live-in-2014.html
    3. http://www.epsrc.ac.uk/newsevents/news/2010/Pages/gold-winningsled.aspx
    4. http://www.hefce.ac.uk/research/ref/pubs/2012/01_12/
    5. http://www.brass.cf.ac.uk/uploads/Research_Excellence_Framework290410.pdf

    Notes

    *emphasis by authors


    Did you know

    …Scopus is home to stars?

    Matthew Richardson

In 2011 a paper published in Current Biology listed Colin Firth as co-author — Firth’s lab skills weren’t involved, but his guest editor spot at Radio 4’s Today programme prompted the research (1) — and with publication comes a Scopus author profile.

Firth joins others more familiar for their screen careers, such as Natalie Hershlag (alias Portman, star of films such as Black Swan and V for Vendetta) and Danica McKellar (best known for her role in The Wonder Years, an American drama series from the late 1980s). McKellar even lends her name to the Chayes–McKellar–Winn theorem (which relates the Curie temperature of an iron bar magnet to the temperature below which percolation can occur) (2).

Each of these authors’ papers has been cited since publication — and, unsurprisingly, each actor has an h-index equal to their number of publications.
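A quick sketch of the h-index computation behind that observation; the citation counts are hypothetical:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

# Two papers, each cited at least twice: h equals the number of publications.
print(h_index([12, 7]))  # 2
```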
    More generally speaking, celebrities can be found in Scopus as a topic of research. A search across the database looking for “celebrity” or “celebrities” in titles, abstracts and keywords of journal articles finds almost 3,000 papers, with more than 200 per year from 2008 to 2011.

    1. “Colin Firth credited in brain research”, BBC News, 5 June 2011
    2. http://terrytao.wordpress.com/2007/08/20/math-doesnt-suck-and-the-chayes-mckellar-winn-theorem/
