Articles

Research Trends is an online magazine providing objective insights into scientific trends based on bibliometric analyses.

The Challenges of Measuring Social Impact Using Altmetrics

In his contribution, Mike Taylor investigates how altmetrics can be used to measure social impact. What are some of the obstacles that need to be overcome to make this possible?



Abstract

Altmetrics gives us novel ways of detecting the use and consumption of scholarly publishing beyond formal citation, and it is tempting to treat these measurements as proxies for social impact. However, altmetrics is still too shallow and too narrow, and needs to increase its scope and reach before it can make a significant contribution to computing relative values for social impact. Furthermore, in order to go beyond limited comparisons of like-for-like and to become generally useful, computation models must take into account different socio-economic characteristics and legal frameworks. However, much of the necessary work can be borrowed from other fields, and the author concludes that – with certain extensions and added sophistication – altmetrics will be a valuable element in calculating social reach and impact.

 

Altmetrics is the collective term for scholarly usage data that goes beyond formal citation counts. Typically, altmetric data comes from specialist platforms and research tools but can also include data from general applications and technical platforms. Sometimes the term also encompasses mass-media references, and data from publishers, such as web page views and PDF downloads (see Table 1).

Class of platform or tool | Types of data | Examples
General social networking applications | Mentions, links, ‘likes’, bookmarks to articles | Twitter, Facebook, Del.icio.us
Specialized research tools | Links, bookmarks, recommendations, additions to reading groups | Zotero.org, Mendeley.com, Citeulike.org
Publisher platforms | Web page views, PDF downloads, abstract views | PLoS, Scopus, PubMed
Research output, publishing components | Views, recommendations, shares | Github.com, Datadryad.org, Slideshare.net, Figshare.com

Table 1 - Classes of platform and tool that provide data for altmetrics applications. (Source: ImpactStory)

The principal use of altmetrics has been to study and describe the wider scholarly impact of research articles (1). Some researchers have concluded that altmetric activity might act as an indicator for eventual citation count (2) and that it might reveal academic engagement not recorded in citation count (3). As scholarly material becomes more widely available with increasing open access publishing, and as people increasingly use social networks, altmetrics could become a valuable part of understanding and measuring social impact.

The interest in quantifying social impact is not restricted to research: it is a field of increasing importance in the not-for-profit sector – both philanthropic and institutional (4) – and there have been attempts to measure the impact of investments in the arts (5). Within the philanthropic field, there is an emerging paradigm that borrows from business, with financial investment reaping social return. Not surprisingly, there are agencies that endeavor to assess and compare social impact, and businesses that attempt to do likewise for pure profit investment.

The movement towards Gold open access publishing, as promoted by the UK’s Finch Report and the EU’s Horizon 2020 program – where funding agencies become responsible for paying the cost of dissemination via research grants to scholars – invites a parallel with not-for-profit investment. As with charitable funding bodies, it may be predicted that research funding agencies will increase their efforts to monitor the social impact of the research outcomes reported in published articles. Thus, we can expect to see an increase in the amount of attention paid to assessing the social impact and social reach of research outcomes.

Social impact is often quantified in economic terms, using approaches that attempt to put a value on the benefits to the economy. However, while the social impact of a vaccine might be measured by computing the days lost to the economy, the loss of tax revenue and the cost of healthcare, applying the same approach in other fields – for example, studying the roots of cultural resistance to vaccination (6) - is considerably harder.

In this article, I outline a methodological approach for computing relative social reach – in other words, how research findings propagate from the published article into the public domain – while taking into account differences in social capacity: the means by which research can influence society through socio-economic structure, legislation and influential discourse. I also touch on the idea of social accessibility, or how research findings vary in their ability to be communicated to and understood by a lay population.

As altmetric data can detect non-scholarly, non-traditional modes of research consumption, it seems likely that parties interested in social impact assessment via social reach may well start to develop altmetric-based analyses, to complement the existing approaches of case histories, and bibliometric analysis of citations within patent claims and published guidelines.

 

Understanding the social space

In order to begin the task of computing social impact using altmetric data, it is important to understand the varying socio-economic and legislative spaces in which disciplines exist, and to understand the limitations of what activity can be measured. The social space that scholarly endeavor occupies is not common for all disciplines, and it is not necessarily common across national boundaries. The social impact of Medicine is likely to be greater than that of Limnology or pure Mathematics; the study of Literature is politicized in some countries, but not in others (see Table 2).

Furthermore, research that delivers knowledge to practitioners and offers practical help to the lay community has greater potential for social impact, and will reach more people, if the authors are careful to increase their articles’ social accessibility by including keywords, links to glossaries and a lay abstract. Here, publishers have a degree of responsibility: to support researchers in framing descriptions of their work and to develop platforms that are responsive to changing vocabularies. In the case history below, I describe how Nature went to some lengths to provide a social context to a complex story about genetic markers and tests.

Although this effort is commendable when publishing articles that have a high capacity for social influence, in an environment where research is becoming more accessible and where competition for funds is increasing, it behooves both researcher and publisher to increase social accessibility.

Obviously, the bulk of most research articles is necessarily written in specialized language, but the addition of keywords, links and a sentence explaining the context of the work would do much to improve the semantic infrastructure and social accessibility through which research finds its social impact. An interesting essay on the importance of communicating research to the wider public, and the skills necessary to do so, may be read in Nature (7).

As the potential for social impact varies, so do the social and government structures that offer a legal and quasi-legal framework in which the research may be expressed: these, in turn, alter a discipline’s capacity for achieving social impact.

 | Medicine | Nursing | Economics | Pure mathematics
Number of papers published in 2011 | 123,771 | 5,759 | 23,727 | 14,379
Number of practitioners in the UK | c. 250,000 (8) | c. 700,000 (9) | Thousands; 1,000 in government | 3,000 (globally)
Professional governance | Medical Research Council, General Medical Council, NICE | Nursing and Midwifery Council, Royal College of Nursing, NICE | None | None
Scholarly impact (5FWRI 2011) | 0.91 | 0.73 | 0.74 | 0.81
Number of UK Acts of Legislation relating to the practice of this profession (10) | 78 UK Acts relating to “General Medical Council”, with more than 200 of wider relevance | 152 UK Acts specifically relating to Nursing, with more than 200 of wider relevance | 3 Acts for “economists” | 30 UK Acts for “mathematics” (all education) and 3 Acts for “mathematician”
Social impact | High | High | High | Low

Table 2 - the socio-legal structure and potential for social impact of four research disciplines in the UK

Clearly, different disciplines and discoveries will reach their maximum impact within highly varying timescales. For example, one of the greatest discoveries was probably the development of the concept and number zero, which took place in several cultures and over many centuries (11), whereas the hypothetical discovery of a large meteorite heading for Earth would have a larger impact in a considerably shorter period.

The differences between disciplines’ structures, and their relationships with the tools that effect social change, imply that – at best – a multifactorial approach that can be tuned to focus on different disciplines would be needed to quantify the social impact of scholarly research. In light of the lack of agreement on what social impact means, and the manifestly complicated background, it is hardly surprising that Bornmann concluded in 2012 that, in the absence of any robust evaluations, the best way ahead is by peer review.
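
As an illustration of what such a multifactorial, discipline-tuned computation might look like, the following sketch combines a handful of normalized factors using per-discipline weights. The factor names and weights are invented for illustration only; they are not values proposed in this article.

```python
# Illustrative sketch of a multifactorial, discipline-tuned score.
# Factor names and weights are hypothetical, not proposed values.

DISCIPLINE_WEIGHTS = {
    "medicine":    {"practitioner_reach": 0.4, "legislative_links": 0.3,
                    "media_mentions": 0.2, "social_shares": 0.1},
    "mathematics": {"practitioner_reach": 0.1, "legislative_links": 0.1,
                    "media_mentions": 0.4, "social_shares": 0.4},
}

def social_impact_score(discipline: str, factors: dict) -> float:
    """Combine normalized factor values (0..1) using discipline-specific weights."""
    weights = DISCIPLINE_WEIGHTS[discipline]
    return sum(weights[name] * factors.get(name, 0.0) for name in weights)

# The same raw profile scores differently under different disciplinary tunings.
profile = {"practitioner_reach": 0.8, "legislative_links": 0.6,
           "media_mentions": 0.3, "social_shares": 0.2}
print(social_impact_score("medicine", profile))      # 0.58
print(social_impact_score("mathematics", profile))   # 0.34
```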

One profound difficulty in measuring social impact is the complexity of the ways in which research can effect change. For example, there are relatively few economists, and while primary economic research rarely makes headline news, its impact through politics, finance and international agencies is dramatic and far-reaching (see Figure 1).

An interesting example of primary economic research coming to public attention, and an illustration of the disproportionate relationship between social mentions and impact, can be seen in the 2013 criticism of Reinhart and Rogoff’s 2010 paper “Growth in a Time of Debt” (12). The paper has been described as a ‘foundational text’ (13) of austerity programs, yet according to ImpactStory it received fewer than 100 social mentions; the methodological critique that discovered Excel errors and other problems received 250 social citations.

Figure 1 - Google search trends for “Reinhart Rogoff”

In the UK, there is no governing body for economists; this contrasts with the various healthcare professions, which have many complex layers of professional and governing bodies, all of which work to deliver social impact through practitioners. Within these formal channels, it is possible to apply bibliometrics through citation analysis of the documents produced by governing bodies. However, as the distance from primary research to the lay population increases, formal citation and linking become increasingly rare.

Although it is tempting to equate social reach (i.e., getting research into the hands of the public) with social impact, the two are not the same. At the moment, altmetrics provides us with a way of detecting when research is being passed on down the information chain – to be specific, altmetrics detects sharing, or propagation, events. However, even though altmetrics offers a much wider view of how scholarly research is being accessed and discussed than bibliometrics does, the discipline currently lacks an approach to the wider context necessary to understand both the social reach and the social impact of scholarly work.

There have been attempts to create a statistical methodology that defines different types of consumption. Priem et al (14) reported finding five patterns of usage:

  • Highly rated by experts and highly cited
  • Highly cited
  • Highly shared
  • Highly bookmarked, but rarely cited
  • Uncited

Although these patterns of behavior are of potential interest, the authors do not attempt to correlate the clusters with scholarly and non-scholarly use. In fact, a literature search found no research currently available that compares disciplines or readership backgrounds using altmetric data. It is not surprising, therefore, to find that there is no research that focuses on the relationship between scholarly research and social consumption using altmetric data.
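
As an illustration of how usage patterns of this kind can be surfaced, the sketch below clusters articles by their usage profiles. The feature names and figures are invented, and this is not a reconstruction of the method used by Priem et al.

```python
# Minimal sketch: cluster articles by usage profile to surface patterns
# such as "highly shared" versus "highly bookmarked, rarely cited".
# Features and data are invented; a real analysis would use thousands of articles.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row: [citations, tweets, bookmarks, expert_ratings] for one article.
usage = np.array([
    [120, 15, 30, 9],    # highly rated by experts and highly cited
    [95,  10, 22, 2],    # highly cited
    [3,  400, 12, 1],    # highly shared
    [1,    5, 150, 0],   # highly bookmarked, rarely cited
    [0,    1,  2,  0],   # uncited
])

# Standardize so no single metric dominates, then look for five clusters.
X = StandardScaler().fit_transform(usage)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster assignment per article
```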

 

The challenge of measuring social impact and social reach with altmetrics

In order to provide some insight into how altmetrics might be used to measure social reach, and potentially to enable the measurement of social impact, I investigated a high-profile story that originated in primary research.

On March 27/28, 2013, all the major UK news outlets carried stories based on research that found genetic markers for breast, prostate and ovarian cancer. The research reported significantly better accuracy for these markers than previous work. Mass media reports of the research suggested the possibility that within eighteen months (15) or five years (16), a saliva-based screening test for the genetic markers might become available via the UK’s National Health Service, at a cost to the NHS of between £5 and £30.

Some of the commentary included in the reporting came from the principal authors of the research, although there was no obvious linguistic cue or statement of interest in, for example, The Guardian’s coverage (19), making the assignment of provenance a separate research project in itself.

This research is likely to have a strong social impact: the tests are expected to be more accurate than present ones, can be undertaken at any stage of life, and could be coupled with higher detection rates at earlier stages of cancer, with corresponding improvements in lifespan and quality of life. This impact is likely to be expressed through practitioners and their governing bodies, Government agencies, and so forth.

Despite the high potential for social impact, and despite links in the most widely read online news stories to a dedicated home page set up by Nature to enable lay consumption of the primary research, there was very little social activity relating to either the original research or the essays that Nature had commissioned. Of all the papers linked from this dedicated page, only one was behind a pay wall (see Figure 2; a live altmetric report of this story may be viewed at ImpactStory (17)).

Only two of the mass media articles (the BBC (18) and The Guardian (19)) provided links to the original research. Not surprisingly, the stories resulted in a great deal of engagement in social media. However, a review of tweets, comments (323 on The Guardian’s article) and links to the mass media reports found that none linked to the research or used any hash tag that would have disambiguated tweets about the test from other news relating to these forms of cancer.

As the collection of altmetrics is based around following links, a proportion of the stories originating from the primary research is immeasurable, and research that constrains itself purely to an altmetric analysis is unlikely, at present, to add any helpful indication of social impact.

As the findings of the research flow out from the research papers, they undergo a series of transformations: they lose their technical language in favor of a lay presentation, the precise findings are replaced with interpretation, and information is added that attempts to predict social impact. In the case of the “£5 Spit Test for Cancer”, some of this interpretative layer is added by the primary researchers and some by other agents. In the course of this evolution, some terms emerge that fit the story, and it is typically these terms that are used by the lay community to discuss the research, along with links back to the mass media articles.

The failure of social and mass media reports to formally cite or link journalism and commentary to the original research – despite Nature’s best efforts to make the research accessible to the general public – indicates that any effort to use existing altmetrics to gauge the social reach of primary research is likely to be a worthless endeavor, or at best requires considerably more research. Unfortunately, the altmetric figures for the primary research, like the number of visits to the Nature story page, are too low to allow social reach to be extrapolated statistically from direct social mentions.

Clearly this research was subject to discussion and sharing amongst the population, but equally clearly, the bulk of this interest is at present as invisible to altmetrics as it is to bibliometrics. In part, this problem may be conceptual, perhaps derived from a desire to maintain a comparison between bibliometrics and altmetrics by restricting the latter’s reach to citation-like counts; perhaps it is purely a technological problem. Whatever the cause, the result is the same: altmetrics provides a very weak picture of social reach and social impact.

To some extent, it is possible to address the technological issues by extending existing altmetric tools to capture a richer set of data, for example by counting the comments that have been made on correctly linked articles (a sketch of such a harvester follows Table 3). Unfortunately, these comments are three steps away from a link to the original research, as The Guardian links not to the papers but to the dedicated page published by Nature (see Table 3).

Distance between social reference and original research | Item at this distance
0 | Original research paper, linked from:
1 | Nature’s dedicated page, linked from:
2 | Article in The Guardian, linked from:
3 | Comments on The Guardian, tweets about the newspaper article

Table 3 - As currently formulated, altmetrics only counts direct links to research material and therefore excludes many mass media and social media mentions. In the example in this table, only the page on nature.com links to the original research.
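
As a sketch of how an extended altmetric harvester might record such indirect relationships, the following walks the link chain outward from a research paper and notes how many hops each mention is from it. The link graph and identifiers are invented to mirror Table 3; a real harvester would discover inbound links by crawling or via a link-search service.

```python
# Sketch: breadth-first walk of the link chain outward from a research
# article, recording how many hops each mention is from the paper.
from collections import deque

# Toy link graph mirroring Table 3 (who links to whom); a real harvester
# would discover these by crawling or via a link-search API.
INBOUND_LINKS = {
    "paper_doi":             ["nature_dedicated_page"],
    "nature_dedicated_page": ["guardian_article"],
    "guardian_article":      ["guardian_comment_1", "tweet_about_article"],
}

def fetch_pages_linking_to(url):
    """Placeholder harvester: look up inbound links in the toy graph."""
    return INBOUND_LINKS.get(url, [])

def mention_distances(research_url, max_depth=3):
    """Map each discovered page to its link distance from the research paper."""
    distances = {research_url: 0}
    queue = deque([research_url])
    while queue:
        page = queue.popleft()
        if distances[page] >= max_depth:
            continue
        for referrer in fetch_pages_linking_to(page):
            if referrer not in distances:   # avoid revisiting pages
                distances[referrer] = distances[page] + 1
                queue.append(referrer)
    return distances

print(mention_distances("paper_doi"))
# e.g. distance 1: Nature's dedicated page; 2: The Guardian article;
#      3: comments on, and tweets about, the newspaper article.
```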

We cannot expect or mandate people to cite original research in their social dialogue, but it is possible to consider an approach that would allow us to study trends in related terms and to incorporate these data points in our analyses. Within the field of natural language processing, it is common to look at the co-occurrence of terms in formally linked articles and to use these data to infer meanings and relationships, which could then be used to classify articles that lack a formal link or citation.

For example, in the mass media articles relating to the “£5 cancer test”, there are a number of entities – researchers, commentators, funding agencies, specific references to particular formal terms – that are common to many stories and blog posts covering and interpreting this research. That these are published within a similar time frame, and share a commonality of semantics, should allow researchers to compute measures of similarity; by mapping these articles and mining the internet, it should be possible to achieve a wider understanding of the social reach of research. Such a study – the quantification of semantics, which might be known as semantometrics – would form ad hoc networks of related stories, commentary and other social media, from which altmetric data could be harvested for an analysis of social reach (see Table 4).
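
The following sketch illustrates the kind of similarity computation this implies, comparing a paper’s abstract with news stories by TF-IDF cosine similarity. The documents are invented, and this is only one of many ways such a semantometric grouping could be attempted.

```python
# Sketch: relate mass-media stories to the research they discuss by
# textual similarity, even when no hyperlink exists.
# Documents are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = {
    "paper_abstract": "genetic markers associated with risk of breast and prostate cancer ...",
    "news_story_1":   "a five pound spit test could screen for breast and prostate cancer ...",
    "news_story_2":   "transfer window rumours dominate football coverage ...",
}

names = list(documents)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(documents.values())
similarity = cosine_similarity(tfidf[0:1], tfidf[1:])  # compare the paper to the rest

# Higher scores suggest stories discussing the same research; a real study
# would need an empirically tuned threshold before treating them as related.
for name, score in zip(names[1:], similarity[0]):
    print(f"similarity of {name} to paper_abstract: {score:.2f}")
```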

 | Primary research | Practitioner research | Governance and Government | Mass Media | Social Media
Bibliometrics | ✓ | ✓ | ✓ |  |
Usage statistics | ✓ | ✓ | ✓ | ✓ |
Altmetrics | ✓ | ✓ | ✓ | ✓ | ✓
Semantometrics |  |  | ✓ | ✓ | ✓

Table 4 - the development of analytics to compute social reach requires a variety of linking approaches, including extending altmetrics beyond direct linking and the application of semantic technology to discover non-linked influence


Conclusion

Although altmetrics has the potential to be a valuable element in calculating social reach – with the hope that this would provide insights into understanding social impact – a number of essential steps are necessary to place this work on the same standing as bibliometrics and other forms of assessment.

There needs to be more effort on the part of altmetricians to extend their platforms to harvest data using direct relationships (e.g., comments on stories that contain formal links, retweets, social shares), to give a wider picture of social reach both in terms of the depth (or complexity) of the communication and the breadth of relatively simple messages.

As highly influential stories have – at best – idiosyncratic links to the primary research, there should be investigation into the use of semantics and natural language processing to trace the spread of scientific ideas through society, and in particular into the application of semantic technologies to extend the scope of altmetrics.

The difference between the ways in which different disciplines discuss, interpret and share research findings needs to be understood. This step should enable publishers and researchers to improve the accessibility of research to practitioners and academics in response to experimental data.

For different disciplines, usage patterns will vary according to differences in their social, legislative, economic and national characteristics and infrastructure. Research has a complex and dynamic context, and attempts to make comparisons must acknowledge these variations.

Figure 2 - Counts of tweets linking to primary research and a selection of online reports, and to the Nature dedicated page. The sum of all tweets linking to the primary research was 133 in March 2013.


References

(1) A good introduction to the ambitions of altmetrics may be found at altmetrics.org/manifesto
(2) Thelwall, M., Haustein, S., Larivière, V., Sugimoto, C.R. (2013) “Do altmetrics work? Twitter and ten other social web services”. Available at: http://www.scit.wlv.ac.uk/~cm1993/papers/Altmetrics_%20preprintx.pdf
(3) Priem, J., Piwowar, H.A., Hemminger, B.H. (2011) “Altmetrics in the wild: An exploratory study of impact metrics based on social media (presentation)”. Available at: http://jasonpriem.org/self-archived/PLoS-altmetrics-sigmetrics11-abstract.pdf
(4) Ebrahim, A. (2013) “Let’s be realistic about measuring impact”, http://blogs.hbr.org/hbsfaculty/2013/03/lets-be-realistic-about-measur.html
(5) Reeves, M., (2002) “Measuring the economic and social impact of the arts: a review”, http://www.artscouncil.org.uk/media/uploads/documents/publications/340.pdf
(6) Davis, V. (2012) “Humanities: the unexpected success story of the 21st century”, http://www.ioe.ac.uk/Virginia_Davis_2012.pdf
(7) Radford, T. (2011) “Of course scientists can communicate”, http://www.nature.com/news/2011/110126/full/469445a.html
(8) General Medical Council, “The state of medical education and practice in the UK: 2012”, http://data.gmc-uk.org.
(9) According to the Nursing and Midwifery Council, http://www.nmc-uk.org/About-us/Annual-reports-and-statutory-accounts, there are 671,668 nurses and midwives who are legally allowed to practice in the UK. Approximately 350,000 are employed by the NHS. http://www.nhsconfed.org/priorities/political-engagement/Pages/NHS-statistics.aspx
(10) UK Legislation, Full text searches on April 24, 2013 on http://www.legislation.gov.uk
(11) Wikipedia, “0 (number)”, http://en.wikipedia.org/wiki/0_%28number%29#History
(12) Reinhart, C.M., Rogoff, K.S., (2010) “Growth in a Time of Debt”, American Economic Review, American Economic Association, Vol. 100, No. 2, pp. 573-578, http://www.nber.org/papers/w15639
(13) Linkins, J. (2013) “Reinhart-Rogoff austerity research errors may give unemployed someone to blame”, Huffington Post, http://www.huffingtonpost.com/2013/04/16/reinhart-rogoff-austerity_n_3095200.html
(14) Priem, J., Piwowar, H.A., Hemminger, B.H. (2012) “Altmetrics in the wild: Using social media to explore scholarly impact”, http://arxiv.org/html/1203.4745v1
(15) Mail Online, http://www.dailymail.co.uk/sciencetech/article-2299971/Simple-saliva-test-breast-prostate-cancer-soon-available-GP-just-5.html
(16) The Times, http://www.thetimes.co.uk/tto/health/news/article3724498.ece
(17) ImpactStory, http://www.impactstory.org/collection/dnwpb3
(18) BBC, http://www.bbc.co.uk/news/health-21945812
(19) The Guardian, http://www.guardian.co.uk/science/2013/mar/27/scientists-prostate-breast-ovarian-cancer

 


The use of assessment reports to generate and measure societal impact of research

How can societal impact be measured? Dr. Lutz Bornmann & Dr. Werner Marx propose a new method based on assessment reports, which summarize research findings for a non-scientific audience.



Since the 1990s, research evaluation has been extended to include measures of the (1) social, (2) cultural, (3) environmental and (4) economic returns from publicly funded research. The best known national evaluation system in the world is without a doubt the UK Research Assessment Exercise (RAE), which has evaluated research in the UK since the 1980s. It is due to be replaced by the Research Excellence Framework (REF) in 2014. The REF defines research impact as “... the social, economic, environmental and/or cultural benefit of research to end users in the wider community regionally, nationally, and/or internationally” (1). In the new REF, research impact on society will not only be quantified, but expert panels will also review narrative evidence in case studies supported by the appropriate indicators (informed peer review).

Scientific impact measurement is carried out using a number of established methods (such as the statistical analysis of bibliometric data), which undergo continual development, and is supported by a dedicated community. Research on societal impact measurement is still in its infancy: so far, it does not have its own community with conferences and journals. Godin and Dore see research on the measurement of societal impact as being at the stage where the measurement of research and development (R&D) was in the early 1960s (2). Even though no robust or reliable methods for measuring societal impact have yet been developed, societal impact is already measured in terms of its budget relevance (or will be in the near future). In the REF, 20% of the evaluation of a research unit for the purpose of allocations will be determined by the societal influence dimension (65% by research output and 15% by environment). This uneven relationship between research and practice is astonishing when one considers how long it has taken for the methods for measuring scientific impact to develop sufficiently to reach the stage of budget-relevant practice.

The lack of an accepted and standardized framework for evaluating societal impact has resulted in the "case studies" approach being preferred, not only by the planned REF, but also in other evaluation contexts (3). Although this method is very labor-intensive and very much a ‘craft activity’ (4), it is currently considered the best method. Other approaches such as the "payback framework" are, however, similarly or even more laborious (5). We have developed an approach which, unlike the case study approach (and others), is relatively simple, can be used in almost every subject area and delivers results regarding societal impact which can be compared between disciplines (6). Our approach to societal impact starts with the actual function of science in society: to generate reliable knowledge. Robert K. Merton, who founded the modern sociology of science, used the term communalism to describe one of the norms of science: that scientific knowledge should be considered "public knowledge" and should be communicated not only to other scientists and students, but also to society at large (7).

That is why a document which we would like to refer to as an assessment report, summarizing the status of the research on a certain subject, represents knowledge which is available for society to access. A summary like this should be couched in generally understandable terms so that readers who are not familiar with the subject area or the scientific discipline can make sense of it. Assessment reports can be seen as part of the secondary literature of science, which has up to now drawn on review journals, monographs, handbooks and textbooks (the primary literature is made up of the publications of the original research literature). They would be items of communal knowledge made available to society. To ensure that they are of high quality, they should be written by scientists (specialists in their field) and should undergo peer review to determine their correctness. The reviewers are asked to recommend the publication or rejection of the report and possibly to formulate suggestions for improvement to the submitted work (8). Since the report will be read by scientists from other fields and by non-scientists, it should be reviewed not only by an expert in the field but also by a stakeholder from government, industry or an advice centre.

What is an assessment report?

  • It can be produced for almost every discipline
  • It summarizes the status of research for those outside of the expert community
  • It should be written by scientists (specialists in their field), reflect above all research excellence and undergo peer review to determine its quality
  • It could take the form of a narrative review reporting just the research results from the primary literature; for subjects in which effect sizes are available from empirical studies, it would alternatively be possible to carry out meta-analyses (the statistical analysis of a large collection of results from individual studies)
  • It should be couched in generally understandable terms so that readers who are not familiar with the subject area or the scientific discipline (e.g. stakeholders from government, industry or an advice centre) can make sense of it
  • It could be produced separately, or be integrated as part of literature reviews written for the scientific community (as sections summing up the situation for those outside of the community)
  • In order to establish the assessment report as a service provided by science for society, it would be important, firstly, for research funders to make the production of assessment reports obligatory and, secondly, for assessment reports to be regarded, when research is evaluated (by institutes, research groups and individuals), as content produced for society to generate societal impact
  • Societal impact is given when the content of an assessment report is addressed outside of science (in a government document, for example). This can be verified with tools which measure the attention that academic papers receive online (for example, Altmetric or ImpactStory).

Societal impact of research is obtained when the content of a report is addressed outside of science (in a government document, for example). This can be verified with tools which measure the attention that academic papers receive online. Altmetric, for example, captures hundreds of thousands of tweets, blog posts, news stories and other pieces of content that mention academic documents. In Scopus, the Elsevier literature database, it is possible to display not only the extent of the scientific impact of individual publications, but also that of their societal impact. This type of societal impact measurement would be carried out in a similar way to the measurement of scientific impact; in other words, it would mean applying an established and successful method of measuring scientific impact (the analysis of citations in scientific publications) to the measurement of societal impact, which has clear benefits. For tools like Altmetric, citations should ideally be classified and weighted: for example, a citation by a member of the President’s Council of Economic Advisers should carry a different weight than a mention in a random blog post.
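
As a sketch of this classification-and-weighting idea, the following assigns each class of mention an illustrative weight and sums them into a single score. The classes and weights are assumptions made for this example; they are not values used by Altmetric or proposed by the authors.

```python
# Sketch of weighting mentions by source class, as suggested above.
# The classes and weights are illustrative only.
SOURCE_WEIGHTS = {
    "government_document": 10.0,
    "policy_advisory_body": 8.0,
    "mainstream_news": 3.0,
    "blog_post": 1.0,
    "tweet": 0.25,
}

def weighted_societal_score(mentions):
    """mentions: list of (source_class, count) pairs for one assessment report."""
    return sum(SOURCE_WEIGHTS.get(source, 0.0) * count for source, count in mentions)

# Example: one government citation outweighs many tweets.
print(weighted_societal_score([("government_document", 1), ("tweet", 40)]))  # 20.0
```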

The assessment reports produced by the Intergovernmental Panel on Climate Change (IPCC) are a good example of public knowledge which aims to generate societal impact from one subject area. This panel was founded in 1988 by UNEP, the United Nations Environment Programme, and the World Meteorological Organization (WMO) to summarize the status of climate research for political decision-makers. Its main function is to assess the risks and consequences of global warming for the environment and for society, and to develop strategies for avoiding it. Climate research is exceptional in that it is a strongly interdisciplinary field like almost no other, encompassing many of the Natural Sciences, such as the Biological and Environmental Sciences, Atmospheric Chemistry, Meteorology and Geophysics, with many points of intersection with Politics and Economics. With a broad focus in many fields and a rapidly expanding volume of publications, this area of research has become confusing even for insiders; it requires processing and summarizing to allow the results to be used outside of science and made available for implementation in the form of policies. Working groups involving numerous scientists collate the results of research for the assessment report at regular intervals. The goal is a coherent representation of the research. As the reports reflect the current consensus of science on climate change, they have become the most important basis for scientific discussion and political decisions in this area. The scientific impact of the IPCC reports can be measured using citations in scientific publications (via the Web of Science and Scopus literature databases); the societal impact can be quantified with tools such as Altmetric (see above).

Measuring scientific impact through citations in journal papers works well in the Physical and Life Sciences but hardly at all in the Social Sciences and Arts & Humanities. An assessment report, on the other hand, can be produced for almost every discipline, and its societal impact can be clearly measured. Since the Social Sciences and Arts & Humanities are disciplines where impact is generally very difficult to measure, assessment reports offer the advantage of being able to report not only on journal papers and monographs, but also on exhibitions and art objects. In our view, it is of fundamental importance that an assessment report reflects above all research excellence in the subject area; thus, only publications which have previously been subjected to peer review should be included. For certain issues or in certain subject areas, it could be helpful if the reports for society were not produced separately, but integrated into the literature reviews written for the scientific community, in sections summing up the situation for those outside of the community (as a sort of comprehensive layman’s summary).

Although in many countries there is a wish (and a will) to measure societal impact, “it is not clear how to evaluate societal quality, especially for basic and strategic research” (9). In many studies, societal impact is postulated rather than demonstrated by research. With the breadth and complexity of the challenges facing society today (such as population growth or environmental pollution), the demand for available information to be summarized and evaluated for social and political purposes is rising. We have presented an approach with which the societal impact of research outcomes can be initiated and measured (6): we suggest that, as with the IPCC, assessment reports be written on certain research subjects which summarize the status of the research for those outside of the expert community. Tools such as Altmetric can verify the extent to which the assessment reports generate an impact. It would be desirable if these tools also searched documents used in decision-making contexts, such as those from governmental bodies, advisory bodies and consumer organizations, for citations of such reports.
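
How such a search might work can be sketched very simply: given the full texts of a set of policy or advisory documents, one scans them for the report’s DOI or title. The directory layout, file format and matching rules below are assumptions made purely for illustration.

import pathlib
import re

# Sketch: scan a local folder of plain-text policy documents for mentions of
# an assessment report, by DOI or by title (illustrative assumptions only).
def find_mentions(report_doi, report_title, corpus_dir):
    pattern = re.compile(
        re.escape(report_doi) + "|" + re.escape(report_title), re.IGNORECASE)
    hits = []
    for path in pathlib.Path(corpus_dir).glob("*.txt"):
        if pattern.search(path.read_text(errors="ignore")):
            hits.append(path.name)
    return hits

# e.g. find_mentions("10.1000/placeholder-doi",
#                    "Assessment Report on Climate Research",
#                    "government_documents/")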

 

References

(1) RQF development advisory group (2006) Research quality framework: assessing the quality and impact of research in Australia. The recommended RQF (report by the RQF development advisory group). Canberra, Australia: Department of Education, Science and Training.
(2) Godin, B., & Dore, C. (2005) Measuring the impacts of science; beyond the economic dimension, INRS Urbanisation, Culture et Société. Paper presented at the HIST Lecture, Helsinki Institute for Science and Technology Studies, Helsinki, Finland. Available at: http://www.csiic.ca/PDF/Godin_Dore_Impacts.pdf
(3) Bornmann, L. (2012) “Measuring the societal impact of research”, EMBO Reports, Vol. 13, No. 8, pp. 673-676.
(4) Martin, B. R. (2011) “The Research Excellence Framework and the 'impact agenda': are we creating a Frankenstein monster?”, Research Evaluation, Vol. 20, No. 3, pp. 247-254.
(5) Bornmann, L. (2013) “What is societal impact of research and how can it be assessed? A literature survey”, Journal of the American Society for Information Science and Technology, Vol. 64, No. 2, pp. 217-233.
(6) Bornmann, L., & Marx, W. (in press) “How should the societal impact of research be generated and measured? A proposal for a simple and practicable approach to allow interdisciplinary comparisons”, Scientometrics.
(7) Merton, R. K. (1973) The sociology of science: theoretical and empirical investigations. Chicago, IL, USA: University of Chicago Press.
(8) Bornmann, L. (2011) “Scientific peer review”, Annual Review of Information Science and Technology, Vol. 45, pp. 199-245.
(9) van der Meulen, B., & Rip, A. (2000) “Evaluation of societal quality of public sector research in the Netherlands”, Research Evaluation, Vol. 9, No. 1, pp. 11-25.


Buzzwords and Values: The prominence of “impact” in UK research policy and governance

Dr. Alis Oancea describes the role societal impact plays in UK research policy. How important has this become in recent years?

Read more >


Impact assessment is now a prominent technology for research governance in the United Kingdom (UK). The current focus on the impact of research beyond academia – while clearly the buzzword of the moment in UK research policy – has complex roots in policy discourses around wealth creation, user relevance, public accountability, and evidence-based decision-making (some of which I unpack in a forthcoming paper). Given this complexity, a grudging consensus is currently being forged around the importance of strengthening the connections between academic and non-academic contexts, while controversy continues around performance-based higher education funding and the extent to which universities ought to be held accountable by the government (on behalf of the taxpayer) for the non-academic implications and outcomes of their research. While these pivotal principles, and the values underpinning them, are being renegotiated, much of the attention of both the government and the higher education institutions has been diverted, under the direct influence of the forthcoming national assessment exercise for research (REF, due in 2014), towards the technicalities of designing and using measures of impact.


The impact agenda and outcomes-based allocation of public funding for research

In a policy and governance context that favors selectivity and concentration, and against the background of economic crisis, research funding is no longer defined in policy circles as a long-term investment in intrinsically worthwhile activities. Rather, in what is described as a knowledge and innovation economy, research is expected to make a case for funding in terms of external value (1, 2). Assessing and demonstrating the non-academic impact of publicly funded university research has thus become a key element of recent UK research policy. The pursuit of research impact is now a priority for both arms of the UK public research funding system, known as the “dual support” system (3), as well as for the direct commissioning of research by government departments and agencies. The “dual support” system comprises separate funding streams for core research infrastructure (outcome-based block grants distributed by the four national higher education funding councils and informed by the outcomes of the Research Excellence Framework, or REF; the REF was preceded by the Research Assessment Exercises, which, between 1986 and 2008, informed the selective allocation by the funding councils of core public grants to higher education research) and for project expenditure (allocated competitively by the seven Research Councils UK).

The Royal Charters and the current strategic framework, “Excellence with Impact”, of the UK Research Councils draw direct links between good research and social, cultural, health, economic and environmental impacts. At proposal stage, the Councils are interested in potential impacts and in the ways in which they will be pursued; for example, they require impact summaries and “pathways to impact” statements in applications for funding. At the end-of-award reporting stage, the Councils are also interested in the actual impacts achieved by a project over its lifetime. The Research Councils’ interest in impact pre-dates the REF (e.g. 4, 5, and 6) and is also evident in their commissioning of studies of research impact, knowledge transfer, practice-based research, and industry engagement, many of which are evaluation studies. Examples include the areas of engineering and physical sciences (7), medical research (8), arts and humanities (9), and the social sciences (10, 11, 12, and 13). There is also a wealth of commissioned impact “case studies” which showcase successful practice (14). On this basis, the Councils have produced guidelines and “toolkits” for impact – see, for example, the Economic and Social Research Council’s online “Impact Toolkit” and Impact Case Studies. Other key players in the recent impact debates have been the British Academy, which produced its own reports on the role of the humanities and social sciences in policy-making (15, 16), the Royal Society, the Universities UK action group, various learned societies, research charities such as the Wellcome Trust, and Jisc (formerly the Joint Information Systems Committee for higher education, now a charity aiming to foster “engaged research”).

The most controversial and publicly visible move towards prioritizing research impact was its introduction, following public consultation and a pilot exercise in 2010, as one of the three key components (alongside quality of research outputs and research environment) of the Research Excellence Framework, the national exercise for the assessment of higher education research in the UK, due in 2014. Impact has thus become part of the mechanism for performance-based research governance, as the REF is intended to inform the selective allocation of public funds, to function as a mechanism for accountability, and to enable higher education benchmarking. The current documentation for the REF gives impact a 20% weighting in the final grade profile awarded to a submitting institution - down, following public consultation, from an initially proposed 25%. For the purposes of the REF, impact is defined as “an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia” (17, 18). It will be assessed by academic and user reviewers on the basis of standard-format case studies and unit-level strategic statements, using the twin criteria of “reach” (or breadth) and “significance” (or depth) of impact. In preparing their submissions, universities are currently grappling with the need simultaneously to define, track and demonstrate the impacts of their research, a task for which they have been largely ill-prepared in terms of infrastructure, capacity, management and strategy. Important challenges at the moment concern the variable time lag between carrying out research, achieving impact, and documenting and reporting it; the difficulties involved in either attributing (parts of) non-academic changes and benefits to particular research projects and outcomes, or demonstrating the material and distinctive contribution of this research to such changes; and evidencing chains of action and influence that may not have been documented at the time of their occurrence.
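
To make the weighting concrete, the sketch below shows how an overall quality profile could be combined from component sub-profiles. The 65/20/15 split for outputs, impact and environment follows the published REF 2014 weightings; the sub-profile percentages themselves are invented numbers, used only to illustrate the arithmetic.

# Sketch: combine REF component sub-profiles into a weighted overall profile.
# Weights follow the REF 2014 split; the example sub-profiles are invented.
WEIGHTS = {"outputs": 0.65, "impact": 0.20, "environment": 0.15}

def overall_profile(sub_profiles):
    """sub_profiles: dict of component -> {star rating: % of activity}."""
    overall = {}
    for component, weight in WEIGHTS.items():
        for star, share in sub_profiles[component].items():
            overall[star] = overall.get(star, 0.0) + weight * share
    return overall

example = {
    "outputs":     {"4*": 30, "3*": 50, "2*": 20},
    "impact":      {"4*": 40, "3*": 40, "2*": 20},
    "environment": {"4*": 50, "3*": 40, "2*": 10},
}
print(overall_profile(example))  # -> {'4*': 35.0, '3*': 46.5, '2*': 18.5}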

As a consequence of these initiatives, UK higher education-based researchers are now subject to multiple requirements to assess and demonstrate the impact of their work, in a variety of contexts and for a range of different purposes. The impact to be “demonstrated” could be that of a project or research unit, of a program, of a funding body or strategy, of an area of research, or of the research system as a whole – each captured at different points in time, and relative to varying time horizons and to different types and methodologies of research. Additional pressure is exerted on academic research by competition from other research settings, such as private and third-sector research, both of which may have a sharper focus on non-academic benefits as part of their rationale. Increasingly, too, public expectations of higher education-based research are influenced by the fact that other areas of public service – including health, transport and urban planning, but also culture and heritage, media, and sports – face tighter requirements to account for their use of public funding in terms of outcomes and benefits.


Capturing research impacts

The current interest in research impact, spurred on by the forthcoming REF 2014, has stimulated a growing body of literature (6, 19). Together with practical experience in program evaluation and policy analysis, this literature is already underpinning a small industry around designing and using instruments for measuring and reporting the socio-economic impacts of research. It has also inspired the production of various open-access or commercially available tools for impact tracking and visualization, such as ImpactStory and Altmetric. Examples of methodological literature include the report to HEFCE and BIS on the REF impact pilot (20); the report on frameworks for assessing research impact (21); the report on knowledge transfer metrics commissioned by UNICO (22); and, internationally, the guides produced by projects such as ERiC (23) and SIAMPI (24). This technical literature is complemented by more conceptual work on higher education, research policy-making, and the relationships between research and processes of change at all levels of society.

There is also wide recognition that in the current context for research it is particularly important to reflect critically on the various strategies for increasing and demonstrating research impacts being used or promoted in different institutions and disciplines (see 25, 26, and the LSE Impact blog). In the UK, a number of centers, such as the Research Unit for Research Utilisation at the Universities of Edinburgh and St Andrews (6, 19), the Health Economics Research Group at Brunel University (27), the Public Policy Group’s HEFCE-funded Impact of Social Sciences project at the London School of Economics (11), the Science & Technology Policy Research Centre at the University of Sussex (28, 29), and, most recently, the DESCRIBE project at the University of Exeter, have made notable contributions to this process.


Concluding comment

Additional studies and evaluations based in, and commissioned by, individual universities and university mission groups have highlighted the connections between institutional contexts and impact interpretations and practices; examples include reports for the University of Oxford (25); for the University of Cambridge (9); for the Russell Group universities (30); for the 1994 Group (31); and for the Million+ group, formerly the Coalition of Modern Universities (32). These studies explore the ways in which universities have adapted the policy-driven impact agenda to their own ways of working and to their longer-term concerns with the quality, sustainability and benefits of research activity. Impact may be the buzzword of the moment, but universities had reflected on their wider mission long before impact was deemed a metaphor worth turning into a governance technology. Many have embedded their efforts to capture research impact in their wider social accountability projects and plugged them into their continued public engagement, community interaction and outreach activities (26). In order to do this, they are reinterpreting the official agenda and articulating alternatives. These reinterpretations – and their visibility and weight in the public domain – are essential if impact is not to become yet another measure rendered meaningless by reducing it to a target for performance.

For impact indicators to be an adequate proxy of research value, they need not only to be technically refined measures, but also to be pitched at the right level, so that they can function as catalysts of, rather than destabilize, higher education activity. To do this, they depend on a healthy ecology of higher education, which in turn requires intellectual autonomy, financial sustainability and insightful governance. Without these preconditions, the high-stakes assessment of impact may fail to reflect and support ongoing research value, and end up simply capturing assessment-driven hyperactivity.


References

(1) BIS (2011) Innovation and Research Strategy for Growth. London: Department for Business, Innovation and Skills.
(2) Arts Council England (2012) Measuring the Economic Benefits of Arts and Culture. BoP Consulting.
(3) Hughes, A., Kitson, M., Bullock, A. and Milner, I. (2013) The Dual Funding Structure for Research in the UK: Research Council and Funding Council Allocation Methods and the Pathways to Impact of UK Academics. BIS report.
(4) RCUK (2002) Science Delivers. Available at: http://www.rcuk.ac.uk/documents/publications/science_delivers.pdf
(5) RCUK (2006) Increasing the Economic Impact of the Research Councils. Available at: http://www.rcuk.ac.uk/documents/publications/ktactionplan.pdf
(6) Davies, H., Nutley, S. and Walter, I. (2005) Approaches to Assessing the Non-academic Impact of Social Science Research. Report of an ESRC symposium on assessing the non-academic impact of research, 12th/13th May 2005.
(7) Salter, A., Tartari, V., D’Este, P. and Neely, A. (2010) The Republic of Engagement Exploring UK Academic Attitudes to Collaborating with Industry and Entrepreneurship. Advanced Institute of Management Research.
(8) UK Evaluation Forum (2006) Medical Research: Assessing the benefits to society. London: Academy of Medical Sciences, Medical Research Council and Wellcome Trust.
(9) Levitt, R., Claire, C., Diepeveen, S., Ní Chonaill, S., Rabinovich, L. and Tiessen, J. (2010) Assessing the Impact of Arts and Humanities Research at the University of Cambridge. RAND Europe.
(10) LSE Public Policy Group (2007) The Policy and Practice Impacts of the ESRC’s ‘Responsive Mode’ Research Grants in Politics and International Studies. ESRC report.
(11) LSE Public Policy Group (2008) Maximizing the Social, Policy and Economic Impacts of Research in the Humanities and Social Sciences. British Academy report.
(12) Meagher, L.R. and Lyall, C. (2007) Policy and Practice Impact Case Study of ESRC Grants and Fellowships in Psychology. ESRC report.
(13) Oancea, A. and Furlong, J. (2007) Expressions of excellence and the assessment of applied and practice-based research, Research Papers in Education, Vol. 22, No. 2.
(14) RCUK (2012) RCUK Impact Report 2012. Available at: http://www.rcuk.ac.uk/Documents/publications/Impactreport2012.pdf
(15) British Academy (2008) Punching Our Weight: The role of the humanities and social sciences in policy-making. London: BA.
(16) British Academy (2004) ‘That Full Complement of Riches’: The contributions of the Arts, Humanities and Social Sciences to the nation’s wealth. London: BA
(17) REF (2012/11) Assessment framework and guidance on submissions.
(18) REF (2011) Decisions on assessing research impact. REF 01.2011.
(19) Nutley, S., Percy-Smith, J. and Solesbury, W. (2003) Models of Research Impact: A cross-sector review of literature and practice. London: LSDA.
(20) Technopolis Ltd (2010) REF Research Impact Pilot Exercise Lessons-Learned Project: Feedback on Pilot Submissions. HEFCE.
(21) Grant, J., Brutsher, P.-B., Kirk, S. E., Butler, L., & Wooding, S. (2009) Capturing Research Impacts: A review of international practice. HEFCE/RAND Europe.
(22) Holi, M.T., Wickramasinghe, R. and van Leeuwen, M. (2008) Metrics for the Evaluation of Knowledge Transfer Activities at Universities. UNICO report.
(23) ERiC (2010) Evaluating the Societal Relevance of Academic Research: A guide. Evaluating Research in Context, Netherlands.
(24) SIAMPI (2010) SIAMPI Approach for the Assessment of Social Impact. Report of SIAMPI Workshop 10.
(25) Oancea, A. (2011) Interpretations and Practices of Research Impact across the Range of Disciplines. HEIF/Oxford University.
(26) Ovseiko, P.V., Oancea, A., and Buchan, A.M. (2012) “Assessing research impact in academic clinical medicine: a study using Research Excellence Framework pilot impact indicators”, BMC Health Services Research, Vol. 12, 478.
(27) HERG and Rand Europe (2011) Project Retrosight. Understanding the returns from cardiovascular and stroke research. Cambridge: RAND Europe.
(28) Martin, B.R., and Tang, P. (2007) The benefits from publicly funded research. University of Sussex.
(29) Molas-Gallart, J., Tang, P., Sinclair, T., Morrow, S., and Martin, B.R. (1999) Assessing Research Impact on Non-academic Audiences. Swindon: ESRC.
(30) Russell Pioneering Research Group (2012) The Social Impact of Research Conducted in Russell Group Universities. Russell Group Papers, 3.
(31) McMillan, T., Norton, T., Jacobs, J.B., and Ker, R. (2010) Enterprising Universities: Using the research base to add value to business. 1994 Group report.
(32) Little, A. (2006) The Social and Economic Impact of Publicly Funded Research in 35 Universities. Coalition for Modern Universities.


The science that changed our lives

In this contribution Dr. Gali Halevi pays tribute to Francis Narin and his work on measuring the connection between science and innovation, and provides access to the original TRACES report, first published in 1968.

Read more >


A tribute to Francis Narin and his contribution to understanding the linkage between science and innovation

A discussion about the societal effects of science would not be complete without discussing the linkage between basic science and patents. Patents are seen as the embodiment of research because they describe unique processes, methodologies and products that result from extensive scientific work. Patents are the link between science and market, between concepts and prototypes, and they serve as a step in the process of converting ideas into economic growth.

This topic was the focus of the American Competitiveness Initiative of 2006. One of the examples given by the White House at that time was the basic science that led to the development of the iPod™ (see Figure 1). This type of linkage between basic science and innovative products is at the heart of the work of Francis Narin, the first researcher to investigate it systematically by studying the connections between basic research and innovation.

Figure 1 - Impact of basic research on innovation. Source: American Competitiveness Initiative of 2006

In what he himself described as “probably his last paper”, “Tracing the Paths from Basic Research to Economic Impact” (1), Francis Narin provides a glimpse into his pioneering work, which changed the way government and industry measure the value of basic science. In his long career, Narin published over 50 articles on this linkage, examining citation exchanges between basic research and intellectual property in numerous subject areas, such as Biotechnology (2), Agriculture (3), Human Genome Mapping (4), and Eye Care Technologies (5). Collaborating with researchers from around the world, Narin dedicated his career to studying the connections between scientific citations and patents, and to measuring the economic strength of countries, companies and even the stock market (6) through their scientific and intellectual property capabilities. Through the years, Narin and his colleagues were able to show that basic science not only strengthens a country’s academic and scientific competency, but also has a direct effect on its economic prosperity through the translation of science into products and services. One of the examples given by Narin in his last article is the connection between his own citation influence methodology and the development of Google. The citation influence methodology, developed in the 1970s, maps the citation links from a specific journal to the journals it cites most heavily and allows an influence map of sub-fields to be created. This methodology was later cited by Sergey Brin and Larry Page as a basis for their PageRank internet search algorithm, which became Google’s most distinctive feature, differentiating it from other search engines and enabling its enormous success.
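
The idea behind such influence weighting can be illustrated with a toy calculation in the spirit of the citation influence methodology and of PageRank: a journal is influential if it is cited by other influential journals. The three-journal citation matrix and the damping factor below are invented for illustration and do not reproduce Narin’s actual model.

import numpy as np

# C[i, j] = fraction of journal j's outgoing citations that go to journal i
# (invented numbers; each column sums to 1)
C = np.array([[0.0, 0.5, 0.3],
              [0.8, 0.0, 0.7],
              [0.2, 0.5, 0.0]])

def influence(C, damping=0.85, iterations=100):
    n = C.shape[0]
    w = np.full(n, 1.0 / n)                      # start with equal weights
    for _ in range(iterations):
        w = (1 - damping) / n + damping * C @ w  # redistribute influence
    return w / w.sum()

print(influence(C))  # relative influence weights of the three journals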

The original "Technology in Retrospect and Critical Events in Science" (TRACES) report (1968) is available here.

In the words of Francis Narin and his colleagues at CHI Research, a firm that pioneered the analysis of patent citations:

“Science Linkage is a measure of the extent to which a company’s technology builds upon cutting edge scientific research. It is calculated on the basis of the average number of references on a company’s patents to scientific papers, as distinct from references to previous patents. Companies whose patents cite a large number of scientific papers are assumed to be working closely with the latest scientific developments.” (7)
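
Expressed as a calculation, the Science Linkage indicator described above is simply the average number of references to scientific papers (non-patent literature) per patent in a portfolio. The sketch below assumes a toy data structure for the patent records, purely for illustration.

# Sketch of the Science Linkage indicator: average number of references to
# scientific papers per patent. The patent records are invented examples.
def science_linkage(patents):
    """patents: list of dicts, each with an 'npl_refs' list of cited papers."""
    if not patents:
        return 0.0
    return sum(len(p["npl_refs"]) for p in patents) / len(patents)

portfolio = [
    {"patent": "US-1", "npl_refs": ["paper A", "paper B", "paper C"]},
    {"patent": "US-2", "npl_refs": []},
    {"patent": "US-3", "npl_refs": ["paper D"]},
]
print(science_linkage(portfolio))  # -> 1.33 scientific references per patent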

Economic strains and government deficits make Narin’s work more important than ever. While governments look to cuts in funding budgets as a way to reduce national debt, scientific activities often face dwindling resources. Narin’s work plays a central role in demonstrating the importance of continued government support for the sciences, as they are directly linked to industrial advancement and economic growth. The article “The Increasing Linkage between U.S. Technology and Public Science” (8), published in 1997 by Narin, Hamilton and Olivastro, is one of Narin’s seminal articles and has been cited over 300 times by researchers from various disciplines (see Figures 2-3).

Figure 2 - Number of citations to “The Increasing Linkage between U.S. Technology and Public Science” over time

In this article the authors performed a systematic examination which demonstrated the direct linkage between publicly funded science and industrial technology, providing the empirical and methodological evidence needed to justify continued government support of the basic sciences. Whether carried out at universities or laboratories, publicly funded research supported by government agencies such as the NIH and NSF has been shown to be heavily cited in technological and innovative patents. The importance of such evidence for facilitating budgetary allocations to scientific endeavors is illustrated by the fact that citations to this article still grow every year.

Figure 3 - Disciplines citing “The Increasing Linkage between U.S. Technology and Public Science”

This innovative methodological investigation has led to an explosion of studies into the connection between basic science and innovation, with a marked surge in publications after the economy plunged in the 2008 financial crisis (see Figure 4).

Figure 4 - Publications focusing on basic science and innovation from 1996 to 2012

Narin’s contribution to our understanding of the connection between basic research, innovation, industry and the economy brought forth the need to demonstrate the importance of other disciplines, for example the Social Sciences, to this process. Using Narin’s methodology of tracing non-patent literature citations in patents, Moed and Halevi demonstrated in this publication (9) how basic research in Library & Information Science, including the citation influence methodology mentioned above, was used by technology companies in the development of search engines. The contribution of the Social Sciences to innovation was the subject of a 1982 article by Tornatzky et al. (10), which argued that the Social Sciences have been ignored in the general debate on national productivity and innovation mainly because their output is usually nonproprietary in nature. Yet the Social Sciences have been shown to be instrumental as a decision aid, a source of social technology and a tool for understanding innovation and productivity. An example of this can be seen in Lavoie (11), who demonstrated the vital role of social scientists and their expertise in the field of regenerative medicine by “providing a comprehensive framework to include both technology and market conditions, as well as considering social, economic, and ethical values” (p. 613).

Regardless of the discipline, tracking the connection between research and innovation is of immense importance, especially in turbulent economic times, when the need to demonstrate the economic and social value of research is most pressing. Many factors in today’s scientific landscape, budgetary constraints chief among them, make the ability to measure Return on Investment (ROI) crucial for funding decisions. Academic and other publicly funded research is being scrutinized in search of a metric or evaluative model that will enable decision makers to assess its impact on the economy and society as a whole. Francis Narin offers a sound methodology and empirical measurements to track these linkages and demonstrate the crucial role science plays in building a sustainable economy based on technological and industrial innovation. This type of study will remain important in years to come, as interest in assessing the societal impact of scientific research is rapidly increasing and the public becomes more involved in, and better informed about, funding policies that use taxpayers’ money.

References

(1) Narin, F. (2013) “Tracing the Paths from Basic to Economic Impact”, F&M Scientist, Winter 2013, http://www.fandm.edu/fandm-scientist
(2) McMillan, G.S., Narin, F., & Deeds, D.L. (2000) “An analysis of the critical role of public science in innovation: The case of biotechnology”, Research Policy, Vol. 29, No. 1, pp. 1-8.
(3) Perko, J.S., & Narin, F. (1997) “The transfer of public science to patented technology: A case study in agricultural science”, Journal of Technology Transfer, Vol. 22, No. 3, pp. 65-72.
(4) Anderson, J., Williams, N., Seemungal, D., Narin, F., & Olivastro, D. (1996) “Human genetic technology: Exploring the links between science and innovation”, Technology Analysis and Strategic Management, Vol. 8, No. 2, pp. 135-156.
(5) Ellwein, L.B., Kroll, P., & Narin, F. (1996) “Linkage between research sponsorship and patented eye-care technology”, Investigative Ophthalmology and Visual Science, Vol. 37, No. 12, pp. 2495-2503.
(6) Deng, Z., Lev, B., & Narin, F. (1999) “Science & Technology as Predictors of Stock Performance”, Financial Analysts Journal, Vol. 55, No. 3, pp. 20-32.
(7) Narin, F., Breitzman, A., & Thomas, P. (2004) “Using Patent Citation Indicators to Manage a Stock Portfolio”, in Moed, H.F. & Schmoch, U. (Eds.), Handbook of Quantitative Science and Technology Research. The Use of Publication and Patent Statistics in Studies of S&T Systems, pp. 553-568. Dordrecht, The Netherlands: Springer Netherlands.
(8) Narin, F., Hamilton, K.S., & Olivastro, D. (1997) “The increasing linkage between U.S. technology and public science”, Research Policy, Vol. 26, No. 3, pp. 317-330.
(9) Halevi, G. & Moed, H. (2012) “The Technological Impact of Library Science Research: A Patent Analysis”, 17th International Conference on Science and Technology Indicators (STI), Montreal, Canada 2012. http://sticonference.org/Proceedings/vol1/Halevi_Technological_371.pdf
(10) Tornatzky, L.G. et al. (1982) “Contributions of social science to innovation and productivity”, American Psychologist, Vol. 37, No. 7, pp. 737-746.
(11) Lavoie, M. (2011) “The Role of Social Scientists in Accelerating Innovation in Regenerative Medicine”, Review of Policy Research, Vol. 28, No. 6, pp. 613-630.

Arts & Humanities around the world

Sarah Huggett explores the geographical distribution of Arts & Humanities research. What are the most productive cities in terms of Arts & Humanities research output?

Read more >


As part of this thematic issue on the Arts & Humanities, Research Trends thought it would be interesting to look into the geographical distribution of Arts & Humanities research. Although Arts & Humanities scholarly output is published in a variety of media (1), looking at a large enough publication window for papers published in journals should still give a realistic representation of the geographical distribution of the research. In this piece, we present four pictures of the global distribution of Arts & Humanities research (2): a map of absolute output and three alternative normalized views.

 

Absolute numbers: a somewhat unsurprising map

First we map the most straightforward data: the absolute number of Arts & Humanities papers (see Figure 1) by author location as defined by institutional address for any collaborative author (whole counts). This yields a somewhat predictable picture: with a few exceptions in Asia-Pacific, the most prolific cities tend to be well-known academic and cultural hotspots in the USA and Western Europe, such as London, Paris, and New York. This confirms that in absolute numbers of papers, the Arts & Humanities behave like the Sciences, with research concentrated in institutes linked to large cities (3).

Figure 1: Absolute number of 1996-2010 Arts & Humanities scholarly journal articles by author affiliation city for cities with 100 or more Arts & Humanities articles. Sources: Scopus and GPS visualizer. Size and color of the circles depend on absolute number of Arts & Humanities papers (red >1000, orange =500-999, yellow =300-499, green =200-299, blue =100-199).
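
Hypothetically, the whole counting used for Figure 1 can be illustrated with a few lines of Python: each paper is credited once to every distinct city that appears among its author affiliations. The paper records below are invented examples, not Scopus data.

from collections import Counter

# Whole counting by author affiliation city: each paper is credited once to
# every distinct city appearing in its author affiliations. Invented examples.
papers = [
    {"title": "Paper 1", "cities": ["London", "Paris", "London"]},
    {"title": "Paper 2", "cities": ["New York"]},
    {"title": "Paper 3", "cities": ["Paris", "New York"]},
]

city_counts = Counter()
for paper in papers:
    for city in set(paper["cities"]):  # de-duplicate cities within one paper
        city_counts[city] += 1

for city, count in city_counts.most_common():
    print(city, count)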

 

Relative to overall city output: expected results with a few unforeseen twists

When we look into the Arts & Humanities output relative to overall output for each city, however, we see a slightly more interesting picture (see Figure 2). Again, these were mapped by author location as defined by institutional address for any collaborative author (whole counts). The distribution still appears concentrated in the USA and Western Europe, but reveals more unexpected cities. For instance, the two locations with the highest proportion of their output in the Arts & Humanities are La Mirada on the US West Coast (home of Biola University), with 101 of its 162 scholarly papers belonging to Arts & Humanities, and Lampeter in Wales (where the University of Wales Trinity Saint David has a campus), with 107 of its 255 scholarly papers in the Arts & Humanities.

Figure 2: Proportion of 1996-2010 Arts & Humanities scholarly journal articles relative to overall output by author affiliation city for cities with 100 or more Arts & Humanities articles. Sources: Scopus and GPS visualizer. Size of the circles represents the proportion of Arts & Humanities papers relative to total output; color of the circles depends on absolute number of Arts & Humanities papers (red >1000, orange =500-999, yellow =300-499, green =200-299, blue =100-199).

 

Relative to country Arts & Humanities output: some unanticipated results

Exploring city Arts & Humanities output relative to country Arts & Humanities output reveals a more surprising map (see Figure 3). In this analysis, city output was again calculated by author location as defined by institutional address for any collaborative author (whole counts), whilst country output was derived from author location for any collaborative author (whole counts). Major hubs appear in Latin America (e.g. San Juan, Puerto Rico; Bogotá, Colombia), Eastern Europe (e.g. Belgrade, Serbia; Sofia, Bulgaria; Vilnius, Lithuania), the Middle East (e.g. Beirut, Lebanon), and Asia (e.g. Singapore, Singapore).

Figure 3: Proportion of 1996-2010 Arts & Humanities scholarly journal articles output by author affiliation city relative to country Arts & Humanities scholarly articles output, restricted to cities with 100 or more Arts & Humanities articles. Sources: Scopus and GPS visualizer. Size of the circles represents the proportion of Arts & Humanities papers relative to country Arts & Humanities output; color of the circles depends on absolute number of Arts & Humanities papers (red >1000, orange =500-999, yellow =300-499, green =200-299, blue =100-199).

 

Relative to overall country output: same unexpected places

Delving into city Arts & Humanities output compared to country overall output again shows some less expected locations, most of them, however, consistent with the analysis relative to country Arts & Humanities output (see Figure 4). This makes sense, as for many institutes absolute overall output tends to correlate with absolute subject volume, owing to the general size and scale of the institution. In this analysis, city output was again calculated by author location as defined by institutional address for any collaborative author (whole counts), whilst country output was derived from author location for any collaborative author (whole counts). We find similar hubs in Latin America (e.g. San Juan, Puerto Rico; Bogotá, Colombia; Santiago, Chile), Eastern Europe (e.g. Vilnius, Lithuania; Tartu, Estonia; Budapest, Hungary), the Middle East (e.g. Beirut, Lebanon), and Western Europe (e.g. Nicosia, Cyprus; Reykjavik, Iceland; Dublin, Ireland).

Figure 4: Proportion of 1996-2010 Arts & Humanities scholarly journal articles output by author affiliation city relative to country overall scholarly articles output, restricted to cities with 100 or more Arts & Humanities articles. Sources: Scopus and GPS visualizer. Size of the circles represents the proportion of Arts & Humanities papers relative to country overall output; color of the circles depends on absolute number of Arts & Humanities papers (red >1000, orange =500-999, yellow =300-499, green =200-299, blue =100-199).
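
As a rough sketch in Python, the three normalizations behind Figures 2-4 are simple ratios of a city's Arts & Humanities article count to, respectively, the city's overall output, the country's Arts & Humanities output, and the country's overall output. The city figures below are taken from the Lampeter example above; the country figures are invented purely for illustration.

# Sketch of the three normalizations behind Figures 2-4. City numbers are the
# Lampeter example from the text; the country numbers are invented.
city_ah = 107          # city's Arts & Humanities articles
city_total = 255       # city's overall articles
country_ah = 5000      # country's Arts & Humanities articles (invented)
country_total = 90000  # country's overall articles (invented)

print(f"Figure 2 measure (city A&H / city total):    {city_ah / city_total:.1%}")
print(f"Figure 3 measure (city A&H / country A&H):   {city_ah / country_ah:.1%}")
print(f"Figure 4 measure (city A&H / country total): {city_ah / country_total:.2%}")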

 

Four different maps for a single field?

This analysis shows how using different filters to explore the same data can reveal some remarkably varied results. As can be expected, the most productive cities in terms of absolute numbers of Arts & Humanities scholarly journal articles are well established academic and cultural hubs. However, once the numbers are normalized relative to the cities’ overall academic output, more unanticipated locations emerge with a high proportion of their output in the Arts & Humanities. And when the data are normalized against country Arts & Humanities or overall output, we see a radical shift in the regional distribution of the major players.

 

References

(1) Hicks, D. (2004) “The four literatures of social science” in Moed, H.F. (Ed.), Handbook of quantitative science and technology research, pp. 473–496. Dordrecht: Kluwer Academic
(2) Bornmann, L. et al. (2011) “Mapping excellence in the geography of science: An approach based on Scopus data”, Journal of Informetrics, Vol. 5, No. 4, pp. 537–546.
(3) Van Raan, A.F.J. (2012) “Universities Scale Like Cities”, arXiv:1211.5124, http://arxiv.org/abs/1211.5124

Citation characteristics in the Arts & Humanities

Gali Halevi studies how frequently authors tend to cite older documents. Do Humanities scholars do this more often than their colleagues in other fields such as Physics and Astronomy?

Read more >


The main topic addressed in this article is how frequently authors cite documents that are more than 15 years old, and how Humanities journals compare to Science journals in this respect. As current scientific publications become available more quickly and readily through web-based databases, it is becoming important to test how older publications, which might not be as readily accessible online as their newer counterparts, are cited, at what rate, and which disciplines cite relatively newer research compared to others that cite older articles. In addition, the age of the cited references has the potential to reveal which factors influence such choices and whether they are “time dependent” or “field dependent” (1). Among the time-dependent factors, Bornmann and Daniel (1) point to two separate trends that emerge from the literature: citation of more current articles, mainly because there are more of them; and a tendency to cite papers that have already received a large number of citations, on the basis of their acceptance and popularity. In this article we focus on the former phenomenon, examining whether recent articles are cited more and in which disciplines this is most apparent. This study therefore hypothesizes that certain scientific disciplines such as Medicine and Engineering cite more current research articles, while others such as Arts & Humanities, Economics and Mathematics cite older research.

 

Data Collection & Methodology

We randomly selected a corpus of 63 journals from 14 disciplines (see Appendix A) and collected the following data fields for the analysis:

  1. Journal Title
  2. Year Started – the year the journal first appeared
  3. Total number of articles published up to 2011
  4. Total number of references within each journal, per year from its start year to the present
  5. Total number of references to articles dated before 1996 (per journal per year)
  6. Total number of references to articles dated post-1995 (per journal per year)
  7. The journal’s main discipline – the disciplinary assignment of each journal was derived from the Scopus database, which assigns a discipline to each source it covers. In many cases, Scopus assigns more than one discipline to a source; in such cases, we selected the first discipline as the main subject area to which the journal belongs.
  8. Topics covered in the journal – the topic lists were retrieved from the journal’s aims and scope.

 

Findings

Average number of references per article

We calculated the average number of references per article for each discipline. As can be seen from Figure 1, the journals in Social Sciences, Arts & Humanities and Physics and Astronomy have the largest average number of references per article, while the lowest average number of references per article occurs in the titles covering Health Professions and Earth and Planetary Sciences.

Figure 1: The average number of references per article per discipline

In addition, we looked at the average number of references per article in the six A&H journals in our sample. These journals are Journal of Medieval History, Lingua, Poetics, Design Studies, Journal of Phonetics, and Journal of Cultural Heritage. We found that History-related journals have the largest number of references per article, followed by journals related to Linguistics, while the smallest number of references per article occurred in Poetics.

Percentage of references dated 1996 and after

We calculated the percentage of references dated 1996 or later for each journal and then averaged these percentages per discipline, according to the journals’ disciplinary assignment (see Figure 2). We found that 60% of the references cited in the A&H journals in our study sample are new, meaning the articles referenced were published in 1996 or after. Examining the six A&H journals individually, we found that the History-related journal tends to reference older publications while Linguistics-related journals tend to reference newer ones (see the per-journal percentages in the Conclusions below). Disciplines covered by journals in this study showing 70%-80% of references to articles published post-1995 are Earth and Planetary Sciences, Medicine, Biochemistry, Genetics and Molecular Biology, Engineering, and Materials Science. However, articles in our A&H titles reference newer materials than journals in disciplines such as Mathematics (40% newer references), Social Sciences (45%), Agriculture (47%) and Computer Science (50%). This finding is quite notable considering that A&H scholarship is traditionally thought to build on material that accumulates over time, a field in which “the masters are continually discussed” (2).

Figure 2: Percentage of references dated 1996 and after
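
The calculation behind Figure 2 can be sketched in a few lines of Python: the share of references dated 1996 or later is computed for each journal (fields 5 and 6 above) and then averaged per discipline. The journal records below are invented examples, not the study data.

from collections import defaultdict

# Share of post-1995 references per journal, then averaged per discipline.
# Invented example records.
journals = [
    {"title": "Journal A", "discipline": "Arts & Humanities", "refs_pre_1996": 400, "refs_post_1995": 600},
    {"title": "Journal B", "discipline": "Arts & Humanities", "refs_pre_1996": 550, "refs_post_1995": 450},
    {"title": "Journal C", "discipline": "Medicine", "refs_pre_1996": 200, "refs_post_1995": 800},
]

shares = defaultdict(list)
for j in journals:
    total = j["refs_pre_1996"] + j["refs_post_1995"]
    shares[j["discipline"]].append(j["refs_post_1995"] / total)

for discipline, values in shares.items():
    print(discipline, f"{sum(values) / len(values):.0%} of references dated 1996 or later")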

 

Disciplinary growth and references age

The finding that 60% of cited references in the A&H journals in our study set are new, i.e. published after 1995, led us to examine the overall growth rate of the number of articles in our journal sets, arranged by discipline, and to find out whether growth rate correlates with reference age. The rationale was that in fast-growing disciplines, where articles are published more often and in larger numbers, references will tend to be younger, mainly because more recent materials are available. In slower-growing disciplines, on the other hand, references were expected to be older because fewer new materials are available. To analyze this, we compared the overall article growth in each of the disciplines from 1960 to the present. As can be seen from Figure 3, the fastest growing disciplines are Medicine and Engineering, and these are also the disciplines with the youngest references. A&H growth is much slower, yet its references are fairly new (i.e. post-1995).

Figure 3: Overall disciplinary growth over 5-year intervals
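
One way to make this growth-versus-reference-age comparison concrete is a simple correlation between a disciplinary growth measure and the share of recent references, sketched below in Python. All numbers are invented for illustration and are not the study's data.

# Pearson correlation between a simple growth measure (output in the most
# recent interval divided by output in the previous one) and the share of
# post-1995 references. All figures are invented examples.
growth_ratio = {"Medicine": 1.8, "Engineering": 1.7, "Arts & Humanities": 1.2, "Mathematics": 1.3}
recent_share = {"Medicine": 0.75, "Engineering": 0.72, "Arts & Humanities": 0.60, "Mathematics": 0.40}

disciplines = sorted(growth_ratio)
xs = [growth_ratio[d] for d in disciplines]
ys = [recent_share[d] for d in disciplines]

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
var_x = sum((x - mean_x) ** 2 for x in xs)
var_y = sum((y - mean_y) ** 2 for y in ys)
r = cov / (var_x * var_y) ** 0.5

print(f"Pearson r between growth and share of recent references: {r:.2f}")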

 

Conclusions

The Arts & Humanities journals in our sample show a relatively large number of references per article, with an average of almost 56 references. When examining the average number of references per article in the A&H journals in our sample, we found that History has the largest number of references per article, followed by journals related to Linguistics. The smallest number of references per article was seen in Poetics. In this respect the A&H are comparable with the Social Sciences and Physics and Astronomy, which show similar references-per-article ratios. Despite their fairly slow growth rate, especially when compared to Medicine or Engineering, A&H articles reference newer articles (i.e. published in 1996 or after) in approximately 60% of cases. This discipline, traditionally considered to build upon older materials, actually shows a relatively high share of references to newer materials. This finding is contrary to our initial hypothesis, which assumed that A&H articles would reference older materials, given that the ‘classics’ are assumed to be used often. For the six A&H journals in our sample, Journal of Medieval History, Lingua, Poetics, Design Studies, Journal of Phonetics, and Journal of Cultural Heritage, the percentage of post-1995 cited references is 44%, 63%, 69%, 73%, 62% and 72%, respectively. It follows that for all journals except the first this percentage is comparable with that in source titles covering Physics and Astronomy, Environmental Sciences and Business and Management, which show approximately 65% of references to materials published in 1996 or after.

 

Limitations and Further Research

This study was conducted using a small sample of 63 journals. To test these findings, a much larger sample is needed, and it will therefore be necessary to analyze more journals per discipline. In a larger study, it will be important to include at least 20 journals per discipline in order to have a sufficient number of journals and citations to analyze.

In addition, a comparison between different databases is important. Each database covers citations differently. Thus, comparing Scopus, Web of Science and Google Scholar, for example, can provide a better understanding of the differences in citation coverage and how these influence the way each discipline is perceived.

Finally, a more granular approach to the disciplinary and topical analysis is needed. In today’s research landscape, where many journals are becoming more interdisciplinary, it will be important to analyze sub-topics and their citation behavior and to differentiate between them.

 

References

(1) Bornmann L., Daniel H.D. (2008) “What do Citations Measure? A Review of Studies on Citing Behavior”, Journal of Documentation, Vol. 64, No. 1, pp. 45-80.
(2) Garfield, E. (1979) “Most Cited Authors in the Arts & Humanities, 1977-1978”, Current Contents, Vol. 32, pp. 5-10.

APPENDIX A can be downloaded here.

 

 


Publication languages in the Arts & Humanities

Dr. Daphne van Weijen investigates the role language plays in the Arts & Humanities. Do researchers in the field prefer to publish in their first language rather than in English?

Read more >


In previous issues of Research Trends, we noted that English remains the dominant language in Science (1, 2). However, this does not appear to be the case across all subject areas. Researchers who publish their work in other languages tend to do so more frequently in the ‘softer’ sciences, such as the Health Sciences, Social Sciences, Psychology, and Arts & Humanities (2). In this article we focus specifically on the role language plays in the Arts & Humanities and the extent to which researchers from different countries publish in languages other than English.

To answer this question, we first examined the extent to which other languages are used in the Arts & Humanities in general, and then distinguished trends in language use at country level (which countries favor English when publishing in the Humanities and which do not?). In addition, we analyzed different subfields within the Humanities, specifically: Archaeology, History, Language & Linguistics, Literature, and Philosophy. The analyses are based on Scopus data. For a detailed overview of the coverage of Humanities journals in Scopus, please see Dr. Wim Meester’s contribution on the Arts & Humanities citation indexes in this issue of Research Trends (3). Finally, it is important to note that Scopus only covers journals that publish articles in other languages if they include titles and abstracts in English.

 

Publication languages in the Humanities

Over the past five years, roughly 265,000 articles were indexed in the Arts & Humanities, written in 45 languages, but all with English abstracts. Results indicate that English is clearly the dominant language of publication in the Arts & Humanities (77%), although this figure is somewhat lower than the proportion of English language content in Scopus in general (88.4%). This suggests that local languages play a larger role within the Humanities than in other fields (2). Of the 23% of publications that are non-English, French (7%), German (4%), Spanish (4%) and Italian (2.5%) are the languages most frequently used (see Figure 1). This finding, which emerges from an analysis of articles, is also confirmed by an analysis of the corpus of journals, as discussed in Meester’s contribution in this issue of Research Trends (3).

Figure 1: Word cloud containing the languages other than English used in publications in the Arts & Humanities between 2008 and 2012. Source: Scopus data; word cloud generated using Wordle™.

Country level analysis

The second phase of our study focused on the use of language across different countries, in order to see whether the preference for publishing in English was the same across countries. The countries included in the analysis were the same as in an earlier Research Trends piece on the language of scientific communication (2), although in this case we also included the United Kingdom and the United States for comparison purposes. The outcome of the analysis clearly shows that the percentage of articles written in English varies strongly from country to country (see Table 1). Researchers from the Netherlands and Russia, for example, are far more likely to publish their Humanities papers in English than researchers from France or Spain. This is in line with the ratios between English and local-language papers in general, which were far lower for France and Spain than for the Netherlands and Russia (2). The interesting question, then, is whether this preference holds across specific subfields of the Humanities.

 

Country Article count English (%) French (%) German (%) Italian (%) Spanish (%) Other (%)
United Kingdom 27400 98.0 0.7 0.3 0.1 0.6 0.3
United States 67815 97.3 0.8 0.2 0.1 1.4 0.2
The Netherlands 4985 89.8 1.4 1.3 0.0 0.5 7
Russia 1015 84.5 2.7 1.3 0.0 0.5 11
China 4231 78.2 0.4 0.4 0.1 0.2 20.7
Portugal 942 76.3 4.0 0.7 1.3 4.8 12.8
Germany 9824 66.9 2.1 28.3 0.4 1.2 1.1
Italy 5718 66.0 5.1 1.4 23.3 2.9 1.3
Spain 7975 48.2 3.0 0.5 0.4 45.5 2.4
France 10900 39.4 56.2 1.0 0.6 1.7 1.1
Overall 252443 77.0 7.1 4.2 2.4 4.1 5.2

Table 1: Overview of the percentage of Arts & Humanities papers published in English versus other languages per country (in 2008 – 2012), ordered by percentage of English use from most to least. Source: Scopus
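
Hypothetically, the percentages in Table 1 can be reproduced by counting the publication language of each article per country and dividing by the country's total, as in the short Python sketch below; articles with authors from several countries would be whole-counted once per country. The records are invented examples, not Scopus data.

from collections import Counter, defaultdict

# Percentage of articles per publication language for each country.
# Invented example records.
articles = [
    {"country": "France", "language": "French"},
    {"country": "France", "language": "English"},
    {"country": "France", "language": "French"},
    {"country": "Spain", "language": "Spanish"},
    {"country": "Spain", "language": "English"},
]

counts = defaultdict(Counter)
for a in articles:
    counts[a["country"]][a["language"]] += 1

for country, langs in counts.items():
    total = sum(langs.values())
    shares = ", ".join(f"{lang} {n / total:.0%}" for lang, n in langs.most_common())
    print(f"{country} ({total} articles): {shares}")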

Subfield analysis

In order to examine whether the use of languages other than English is similar across subfields within the Humanities, we chose to compare five subfields: Archaeology, History, Language & Linguistics, Literature, and Philosophy. The main purpose of this analysis was to discover whether the percentage of English use is the same in each subfield, and in which other languages researchers publish besides English. The results indicate that English is still the dominant language of publication in the Scopus-covered output of each of these subfields, but its share varies from 65% in Archaeology to 79% in Philosophy. The large percentage of English-language publications in Philosophy reflects an Anglo-Saxon orientation of the Scopus-indexed literature in this subject field, possibly because it is uncommon for publishers and authors in Philosophy from non-English-speaking countries to add English article titles and abstracts to their publications. Furthermore, the top five languages are consistent across all fields (see Figure 2). In each case French is the second publication language of choice, followed in more or less the same order by German, Spanish and Italian.

Figure 2: Overview of the percentage of papers published in the top five languages per subfield of the Humanities (in 2008 – 2012), ordered by percentage of English use from least (left) to most (right). Source: Scopus

Overall, we can conclude that researchers vary in the extent to which they publish in languages other than English in the Arts & Humanities. Spanish and French researchers in particular appear to prefer publishing in their own language. Furthermore, researchers are somewhat more likely to publish in other languages, particularly French, in the Archaeology and Literature subfields.

References

(1) Research Trends (2008) “English as the international language of science”, Research Trends, Issue 6, July 2008.
(2) Van Weijen, D. (2012) “The Language of (Future) Scientific Communication”, Research Trends, Issue 31, November 2012.
(3) Meester, W. (2013) “Towards a comprehensive citation index for the Arts & Humanities”, Research Trends, Issue 32, March 2013.

 

VN:F [1.9.22_1171]
Rating: 0.0/10 (0 votes cast)

In previous issues of Research Trends, we noted that English remains the dominant language in Science (1, 2). However, this does not appear to be the case across all subject areas. Researchers who publish their work in other languages tend to do so more frequently in the ‘softer’ sciences, such as the Health Sciences, Social Sciences, Psychology, and Arts & Humanities (2). In this article we focus specifically on the role language plays in the Arts & Humanities and the extent to which researchers from different countries publish in languages other than English.

To answer this question, we first examined the extent to which other languages are used in the Arts & Humanities in general, and then distinguished trends in language use at country level (which countries favor English when publishing in the Humanities and which do not?). In addition we also analyzed different subfields within the Humanities, specifically: Archeology, History, Language and Linguistics, and Philosophy. The analyses are based on Scopus data. For a detailed overview of the coverage of Humanities journals in Scopus, please see Dr. Wim Meester’s contribution on the Arts & Humanities citation indexes in this issue of Research Trends (3). Finally it is important to note that Scopus only covers journals that publish articles in other languages if they include titles and abstracts in English.

 

Publication languages in the Humanities

Over the past five years, roughly 265,000 articles were indexed in the Arts & Humanities, written in 45 languages, but all with English abstracts. Results indicate that English is clearly the dominant language of publication in the Arts & Humanities (77%), although this figure is somewhat lower than the proportion of English language content in Scopus in general (88.4%). This suggests that local languages appear to play a larger role within the Humanities than in other fields (2). Of the 23% of publications that are non-English, French (7%), German (4%), Spanish (4%) and Italian (2.5%) are the languages most frequently used (see Figure 1). This finding, which emerges from an analysis of articles, is also confirmed when analyzing the corpus of journals as discussed in Meester’s contribution in this issue of Research Trends (3).

Figure 1: Word cloud containing the Languages other than English used in publications in the Arts & Humanities between 2008 and 2012. Source: Scopus data, Word cloud generated using Wordle™.

Country level analysis

The second phase of our study focused on the use of language across different countries in order to see whether the preference for publishing in English was the same across countries. The countries included in the analysis were the same as in an earlier Research Trends piece on the language of scientific communication (2). However, in this case we included the United Kingdom and the United States for comparison purposes. The outcome of the analysis clearly shows that the percentage of articles written in English varies strongly from country to country (see Table 1). Researchers from The Netherlands and Russia for example, are far more likely to publish their Humanities papers in English than researchers from France or Spain. This is in line with the ratios between English and local language papers in general, which were far lower for France and Spain than for The Netherlands and Russia (2). The interesting question then, is whether this preference holds across specific subfields of the Humanities.

 

Language (%)
Country Article count English French German Italian Spanish Other
United Kingdom 27400 98.0 0.7 0.3 0.1 0.6 0.3
United States 67815 97.3 0.8 0.2 0.1 1.4 0.2
The Netherlands 4985 89.8 1.4 1.3 0.0 0.5 7
Russia 1015 84.5 2.7 1.3 0.0 0.5 11
China 4231 78.2 0.4 0.4 0.1 0.2 20.7
Portugal 942 76.3 4.0 0.7 1.3 4.8 12.8
Germany 9824 66.9 2.1 28.3 0.4 1.2 1.1
Italy 5718 66.0 5.1 1.4 23.3 2.9 1.3
Spain 7975 48.2 3.0 0.5 0.4 45.5 2.4
France 10900 39.4 56.2 1.0 0.6 1.7 1.1
Overall 252443 77.0 7.1 4.2 2.4 4.1 5.2

Table 1: Overview of the percentage of Arts & Humanities papers published in English versus other languages per country (in 2008 – 2012), ordered by percentage of English use from most to least. Source: Scopus

Subfield analysis

In order to examine whether the use of languages other than English is similar across subfields within the Humanities, we chose to compare five subfields: Archaeology, History, Language & Linguistics, Literature, and Philosophy. The main purpose of this analysis was to discover whether or not the percentage of English use in each subfield is the same or different and in which other languages researchers publish besides English. The results of the analysis indicate that English is still the dominant language of publication in the Scopus-covered publication output in each of these subfields, but this varies from 65% in Archeology to 79% in Philosophy. The large percentage of English language publications in Philosophy reflects an Anglo-Saxon orientation of the Scopus-indexed literature in this subject field,  possibly due to the fact that it is uncommon for publishers and authors in Philosophy from non-English speaking countries to add English article titles and abstracts to their publications. Furthermore, the top five languages are consistent for all fields (see Figure 2). In each case French is the second publication language of choice, followed in more or less the same order by German, Spanish and Italian.

Figure 2: Overview of the percentage of papers published in the top five languages per subfield of the Humanities (in 2008 – 2012), ordered by percentage of English use from least (left) to most (right). Source: Scopus

Overall, we can conclude that researchers do vary in the extent to which they publish in languages other than English in the Arts & Humanities. Spanish and French researchers in particular appear to prefer publishing in their own language. Furthermore, researchers are somewhat more likely to publish in other languages, particularly French, in the Archaeology and Literature subfields.

References

(1) Research Trends (2008) “English as the international language of science”, Research Trends, Issue 6, July 2008.
(2) Van Weijen, D. (2012) “The Language of (Future) Scientific Communication”, Research Trends, Issue 31, November 2012.
(3) Meester, W. (2013) “Towards a comprehensive citation index for the Arts & Humanities”, Research Trends, Issue 32, March 2013.

 


Mapping the multidisciplinarity of the Arts & Humanities

Matthew Richardson presents visualizations of the citation links in the Arts & Humanities. How multi-disciplinary is this field?

Read more >


The Arts & Humanities include a diverse range of subjects, including many of the oldest intellectual pursuits such as Philosophy, Religion, Music, History, Art, Theatre and Literature. These disciplines, along with fields such as Language, Linguistics and the History of Science, share a common concern with humanity and culture.

This mutual interest means that we can expect much of the research in the Arts & Humanities to bridge disparate fields, in much the same way as modern scientific research increasingly links traditionally separate disciplines. One method for investigating this multidisciplinarity of research is to look at the citation links formed between journals when a paper published in one journal makes reference to a paper published in another. Citations are made to earlier work with some relevance to the current research, and so they can be used to draw together topically similar journals; see Research Trends issue 26 (1) for an earlier exploration of citation mapping using Scopus citation data. A similar approach applied to the Arts & Humanities reveals the structure of the subject area, and can also be used to highlight examples of multidisciplinarity.

The Arts & Humanities landscape

The Arts & Humanities as a subject area is only rarely the focus of bibliometric analyses, largely because the bibliometric community places a strong emphasis on citation analysis, which relies on sufficient quantities of journal articles and of citations to recent research. In the Arts & Humanities, by contrast, scholarly work is often published outside journals, for instance in monographs and books.

Our citation mapping method can be adapted to better suit these disciplines; for this map, we used ten years of publication and citation data (2001–10). This allows a greater period of time for citations to be made to previously published work, and increases the confidence we can have in the structure of the graph, at the expense of a map that does not reflect the most current trends. Otherwise, the method remains similar: citation data from Scopus are gathered at journal–journal level; these citations are first normalized, and then used as the edge data for a network graph in Gephi. Citations from a journal to any other are normalized both by the total citations from the citing journal and the total citations to the cited journal within the time period: the L index defined by Calero Medina et al. (2).
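This normalization step can be illustrated with a short sketch. The code below is not the L index of Calero Medina et al. reproduced verbatim; it simply scales each raw count by the citing journal's outgoing total and the cited journal's incoming total, as one plain reading of the description above, using made-up counts.

```python
# Hedged sketch of journal-journal citation normalization; not the exact
# L index formula from Calero Medina & van Leeuwen (2).
import numpy as np

def normalize_citations(c: np.ndarray) -> np.ndarray:
    """c[i, j] = raw citations from journal i to journal j over the time window."""
    out_totals = c.sum(axis=1, keepdims=True)  # citations given by each journal
    in_totals = c.sum(axis=0, keepdims=True)   # citations received by each journal
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where((out_totals > 0) & (in_totals > 0),
                        c / (out_totals * in_totals), 0.0)

# Toy example: three journals and their raw citation counts.
raw = np.array([[0, 12, 3],
                [5, 0, 1],
                [2, 8, 0]], dtype=float)
print(normalize_citations(raw))
```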

The map presented in Figure 1 is the result of mapping the journals classified within the Arts & Humanities in Scopus. All included journals published at least ten articles in the period of study (2001–10), and made at least ten citations to other journals in the map. After outliers and unconnected journals are removed, there remain 1570 journals which form the core of the decade’s Arts & Humanities work.

Figure 1: Journal citation map covering 1570 journals from the Arts & Humanities.

Journals are visualized using Gephi 0.8.1 beta and the ForceAtlas2 layout algorithm. Each node represents a journal, and edges the citations from one journal to the other in the years 2001–10. Each subject is assigned a color, used to highlight journals belonging to a single subject within the Arts & Humanities according to the ASJC classification used by Scopus; journals in multiple subjects are shown in white. Source: Scopus.
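For readers wishing to build a comparable map, the sketch below shows one way to assemble a weighted, directed journal network and export it as a GEXF file that Gephi can then lay out with ForceAtlas2 and color by subject. The journal names, weights and the subject tag are placeholders, not the article's data.

```python
# Sketch of preparing a Gephi-ready journal network; all values are illustrative.
import networkx as nx

edges = [
    # (citing journal, cited journal, normalized weight) - toy values
    ("Journal of Memory and Language", "Lingua", 0.012),
    ("Journal of Memory and Language", "Philosophical Quarterly", 0.004),
    ("New Literary History", "Poetics Today", 0.007),
]

G = nx.DiGraph()
for citing, cited, weight in edges:
    G.add_edge(citing, cited, weight=weight)

# Drop journals left without any connection, in the spirit of the article's
# removal of outliers and unconnected journals.
G.remove_nodes_from(list(nx.isolates(G)))

# Tag journals with a subject so Gephi can color nodes; the mapping is hypothetical.
nx.set_node_attributes(G, {"Lingua": "Language & Linguistics"}, name="subject")

nx.write_gexf(G, "ah_journal_map.gexf")  # open in Gephi for layout and coloring
```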

The map uses colors to highlight the position of the various subjects classed within the Arts & Humanities in Scopus; the major subjects have been labeled directly in Figure 1. These subjects group around Literature and the Arts in the center, with close connections to History to the right, and Religious Studies and Philosophy to the left; and looser connections to Language & Linguistics at the top, and Archaeology at the bottom.

When compared with a previous journal map of the Arts & Humanities produced by Leydesdorff et al. (3), similarities include the positioning of Literature between Music, Philosophy, Art and History; the proximity of Archaeology to History; and the connection between Linguistics and Philosophy. The maps are in broad agreement, despite differences in data source, normalization and layout algorithm; however, the visualization presented by Leydesdorff et al. shows Linguistics closer to the center of the map, and Theology as one key group of journals outside the core of the field. In the present map shown in Figure 1, Language & Linguistics are situated further from the center, while Religious Studies is shown to have strong ties not only to Philosophy but also to the Arts and History.

 

Mapping journal context

The curved lines reaching across Figure 1 represent citations from one journal to another, and it follows that the long lines reaching from one discipline to another show those citations made to journals in a different field of research. While Figure 1 shows the many interconnections between subjects, we can focus on smaller subsets of the map to see how individual journals exemplify multidisciplinarity.

Journal of Memory and Language publishes articles that “contribute to the formulation of scientific issues and theories in the areas of memory, language comprehension and production, and cognitive processes.” (4) The journal sits within the Language & Linguistics subject of the map shown in Figure 1, and, as expected, many of the journals it has citation links with are in the same field. However, the journal also has links with a broad range of journals in the field of Philosophy, as well as Literature and Music. Figure 2 shows only those journals with a direct link to the Journal of Memory and Language, and while the core of Language & Linguistics journals is immediately visible, the other related journals are shown at the right and bottom edges of the map; the links to a variety of Philosophy journals (turquoise) are particularly evident at the bottom-left.

Figure 2: Journal citation map covering 113 journals from the Arts & Humanities with a direct citation link to Journal of Memory and Language in the years 2001–10; selected journals are labeled. Source: Scopus.

Three further journal maps illustrate the wide range of interdisciplinary links across the Arts & Humanities. Figures 3–5 show maps for Journal of Archaeological Science, Journal of the History of Ideas, and New Literary History, each of which is based in a different subject within the area: respectively Archaeology, History & Philosophy of Science, and Literature & Literary Theory (see Figure 1).

Journal of Archaeological Science publishes work “advancing the development and application of scientific techniques and methodologies to all areas of archaeology.” (5) Figure 3 shows the predominantly archaeological citation links of the journal, as well as the branches to such journals as Medical History and Oregon Historical Quarterly on the right side of the map, and Philosophy of the Social Sciences and Phenomenology and the Cognitive Sciences on the left. These do not dominate the map, but the Journal of Archaeological Science displays more multidisciplinarity than many other archaeology journals.

Figure 3: Journal citation map covering 135 journals from the Arts & Humanities with a direct citation link to Journal of Archaeological Science in the years 2001–10; selected journals are labeled. Source: Scopus.

Journal of the History of Ideas describes its field of focus, intellectual history, “expansively and ecumenically, including the histories of philosophy, of literature and the arts, of the natural and social sciences, of religion, and of political thought.” (6) Figure 4 shows the journal at the center of a wide-reaching web of citations, touching upon Language and Literature at the top (Language Sciences, Poetics Today), Music and History to the right (Music Analysis, Historical Research), Religion at the bottom (Journal of Religion), and Philosophy to the left (Philosophical Quarterly). Rather than sitting within a specific field and reaching out to others, as is the case for most journals, the Journal of the History of Ideas covers a vast range of the subject area from a central location.

Figure 4: Journal citation map covering 185 journals from the Arts & Humanities with a direct citation link to Journal of the History of Ideas in the years 2001–10; selected journals are labeled. Source: Scopus.

New Literary History “focuses on questions of theory, method, interpretation, and literary history.” (7) Figure 5 shows the journal within a context of Literature & Literary Theory journals as well as those in Philosophy, History, Archaeology, and the other Arts it connects with (Philosophy East and West, Church History, Archival Science, Music and Letters).

Figure 5: Journal citation map covering 164 journals from the Arts & Humanities with a direct citation link to New Literary History in the years 2001–10; selected journals are labeled. Source: Scopus.

The four selected journals used for these maps lie in different fields of the Arts & Humanities, and so show different scopes. However, even closely related journals can show very different reach when the citation relationships are analyzed. These citation maps are one method to look into the differences in scope and influence between journals.

What links the Journal of Archaeological Science to Oceanic Linguistics?

While the Journal of Archaeological Science is firmly placed in the archaeology section of the Arts & Humanities map, it reaches out to a diversity of journals including Oceanic Linguistics. What kind of paper causes these seemingly unusual links?

A 2010 paper authored by a group of researchers from France, the UK and New Zealand brings together the fields of Archaeology, Linguistics, and even Genetics in a study of the settlement of the Solomon Islands. (8) Among its cited references are four earlier papers from Oceanic Linguistics, as well as articles published in Human Biology, the American Journal of Human Genetics, and the Journal of Forensic Sciences.

Since publication, this paper has itself been cited in the Annual Review of Genetics, Current Anthropology, and Molecular Biology and Evolution, making it a true case of multiple fields interacting in the literature.

 

Conclusion

While interdisciplinary links such as those we have discussed are not to be found in all journals – many, often smaller, journals only have direct links to journals well embedded in their own topic of interest – they are common across the Arts & Humanities, and indeed science in general. The maps shown here were limited to Arts & Humanities journals; if expanded to all scholarly journals, we would see even more dramatic examples of multidisciplinarity: citations between disparate subject areas. Ideas from one field are often relevant to those working in another, and the four journals illustrated here are some of many with an influence beyond defined subject boundaries. These visualizations of the citation links in the Arts & Humanities show that the field is a collection of interrelated topics: different facets of an investigation of culture and humanity.

References:

(1) Richardson, M. (2012) “Citography: the visualization of nineteen thousand journals through their recent citations”, Research Trends, Issue 26, January 2012.
(2) Calero Medina, C.M., van Leeuwen, T.N. (2012) “Seed Journal Citation Network Maps: A Method Based on Network Theory”, Journal of the American Society for Information Science and Technology, Vol. 63, No. 6, pp. 1226–1234.
(3) Leydesdorff, L., et al. (2011) “The structure of the Arts & Humanities Citation Index: A mapping on the basis of aggregated citations among 1,157 journals”, Journal of the American Society for Information Science and Technology, Vol. 62, No. 12, pp. 2414–2426.
(4) http://www.journals.elsevier.com/journal-of-memory-and-language/
(5) http://www.journals.elsevier.com/journal-of-archaeological-science/
(6) http://jhi.pennpress.org/strands/jhi/home.htm
(7) http://www.press.jhu.edu/journals/new_literary_history/
(8) Ricaut, F.-X., et al. (2010) “Ancient Solomon Islands mtDNA: Assessing Holocene settlement and the impact of European contact”, Journal of Archaeological Science, Vol. 37, No. 6, pp. 1161–1170.

 


Towards a comprehensive citation index for the Arts & Humanities

Dr. Wim Meester discusses the need for a comprehensive citation index that includes the Humanities. How is Scopus being adapted to meet this need?

Read more >


Introduction

The Humanities is a hugely diverse field, with output published in many regional and national journals but also in book chapters and monographs (1). In their recent publication on comprehensive coverage of the Social Sciences and Humanities, Sivertsen and Larsen pointed out that: “A well-designed and comprehensive citation index for the Social Sciences and Humanities has many potential uses, but has yet to be realized” (2). In 2008–09 the European Science Foundation (ESF) created the European Reference Index for the Humanities (ERIH). “The reference index was created and developed by European researchers both for their own purposes and in order to present their ongoing research achievements systematically to the rest of the world” (3). As a result of the second round of the ERIH project and following revisions of the initial list, the ERIH Revised Lists were published in 2011 (4). Project MUSE is a not-for-profit full-text platform for many Arts & Humanities journals of international relevance, drawn primarily from US-based university presses (5). In addition, local databases with comprehensive coverage of Social Science and Humanities journal articles exist, for example, in Flanders and in Norway (6). Multidisciplinary citation indexes like Scopus are fairly comprehensive in the Science, Technology and Medicine (STM) subject fields. However, it has remained a challenge to create an index that is comprehensive in both STM and the Humanities.

 

Coverage of Humanities journals in Scopus

In 2008, Scopus covered around 2,000 Humanities titles. To further increase the number of Humanities titles in the database, Project MUSE and the initial ERIH list were used in 2009 to identify additional relevant titles. In 2011, a similar project was executed using the revised ERIH list, the Social Science Citation Index, the Arts & Humanities Citation Index, the titles list of the French Evaluation Agency for Research and Higher Education (AERES), and the Humanities journal indexes Cairn and Francis. These journals were reviewed and added to the database, together with the Humanities titles selected for Scopus coverage via the Scopus Title Evaluation Process (STEP). Scopus coverage has now grown to almost 3,500 Humanities titles (4,200 when Humanities-related titles are also included) and includes all serial publication types, such as journals, book series and conference series.

The diverse origin of Humanities journals is reflected in the vast number of publishers from which the journals are sourced. In addition, Humanities journals are less concentrated among a minority of publishers than STM titles are (see Figure 1). The diagonal represents a situation in which each publisher contributes the same percentage of journals. The area between the curve and the diagonal is proportional to the Gini index, a measure of concentration. For STM this area is larger than for the Humanities, which means a higher Gini index value and a stronger concentration of journals among publishers.

Figure 1 – Cumulative percentage of journals versus cumulative percentage of publishers for STM and Humanities journals covered in Scopus (November 2012). Source: Scopus
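To make the concentration measure concrete, the sketch below computes a Gini index from journal-per-publisher counts. The counts are invented; the point is only that a more skewed distribution, as for STM, yields a higher value.

```python
# Illustrative Gini index of journal counts across publishers; counts are toy data.
import numpy as np

def gini(journals_per_publisher) -> float:
    """Gini index: 0 = journals spread evenly, values near 1 = highly concentrated."""
    x = np.sort(np.asarray(journals_per_publisher, dtype=float))  # ascending
    n = x.size
    cum = np.cumsum(x)
    # Discrete Lorenz-curve formula for values sorted in ascending order.
    return 1.0 + 1.0 / n - 2.0 * cum.sum() / (n * x.sum())

stm = [1200, 800, 600, 50, 40, 30, 20, 10, 5, 5]        # a few publishers hold most journals
humanities = [90, 80, 70, 60, 60, 50, 50, 40, 40, 30]   # journals spread more evenly
print(f"STM Gini: {gini(stm):.2f}, Humanities Gini: {gini(humanities):.2f}")
```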

In Scopus, the location of a journal is determined by the country in which the publisher is located. Most of the larger publishers are located in Western Europe or North America; therefore most of the Humanities titles come from these regions. However, the Humanities content is published in 63 different countries. In the top 25 countries, there are six Eastern European countries, three Scandinavian countries, two South American countries, one African country and one Asian country (see Figure 2a). Looking at the regional distribution, it is clear that Europe as a whole is best represented, while Central & South America and Asia Pacific are underrepresented with respect to Humanities content (see Figure 2b).

Figure 2a (left): The number of Humanities titles covered in Scopus for the top 25 countries with most Humanities titles covered (November 2012). Figure 2b (right): Regional distribution of the number of Humanities titles covered in Scopus per region (November 2012). Source: Scopus

Since the majority of titles come from Anglo-Saxon countries, it is to be expected that most titles have English as their primary publication language. However, 975 of the Humanities titles do not have English as their primary language, and a further 500 English-language titles have a second publication language. In total, 32 different languages are covered. French, Spanish, German and Italian are the most frequently occurring languages after English (see Figure 3). Most of the other frequently occurring languages are other European languages, with the notable exceptions of Russian and Turkish. More analysis of the publication languages of Humanities content is provided in another article in this issue of Research Trends (7).

Figure 3 – Proportion of non-English languages in Humanities journals in Scopus. Only languages with at least 10 titles are mentioned (November 2012). Source: Scopus

With respect to the subject classification of the Humanities titles, there is a fairly even distribution over the different sub-fields (see Figure 4) *). History is the largest field with more than 900 titles, followed by Literature & Literary Theory (668), Language & Linguistics (649), Philosophy (445), Visual Arts (392) and Religious Studies (356). Of the Humanities-related subject fields, Cultural Studies (678), Linguistics & Language (673) and Law (462) occur most frequently.

Figure 4 – Figure 4a (left): Distribution of the number of titles classified in Humanities subject fields. Figure 4b (right): Distribution of the number of titles classified in Humanities-related subject fields (November 2012). Source: Scopus
*) Titles can be classified in more than one subject field.


Humanities articles in Scopus (2007–11)

As of November 2012, the total number of Arts & Humanities articles in the database was a little over 1 million, just over 2% of the total database. All document types within the Scopus coverage policy are included in the article counts. From 2007 to 2011 the number of articles grew from 42 thousand to 76 thousand per year, which amounts to a compound annual growth rate (CAGR) of 16.2% (see Figure 5). Particularly since 2009, the year-on-year growth of Humanities articles has increased substantially (20.1%), in line with the increase in Humanities titles indexed in the database.

Figure 5 – The growth percentage (green line, top) and the number (blue bars, bottom) of Humanities articles covered in Scopus per year in 2007–11 (30 November 2012). Source: Scopus
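As a quick sanity check on the growth figure, the compound annual growth rate can be recomputed from the rounded article counts quoted above; the result approximates the reported 16.2%, which presumably uses the exact counts.

```python
# CAGR from the rounded article counts in the text (42k in 2007 to 76k in 2011).
def cagr(start: float, end: float, periods: int) -> float:
    """Compound annual growth rate over the given number of year-to-year steps."""
    return (end / start) ** (1 / periods) - 1

print(f"{cagr(42_000, 76_000, 4):.1%}")  # ~16%, close to the reported 16.2%
```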

The US and the UK are the countries with the most Humanities articles during 2007–11 (see Table 1). All of the countries in the top 10 experienced year-on-year growth of more than 30%. Most remarkable is Spain, with a year-on-year growth rate of 50%, even though the CAGR of Humanities articles published in the Spanish language remained in line with the overall growth rate, at 17%.

Country 2007 2008 2009 2010 2011
United States 6,363 7,125 11,025 15,358 19,259
United Kingdom 2,724 3,035 4,466 6,077 8,055
France 1,006 1,168 1,970 2,553 3,010
Germany 955 1,092 1,636 1,994 2,769
Canada 883 1,060 1,611 2,103 2,770
Spain 504 769 1,236 1,637 2,555
Australia 574 725 1,298 1,732 2,261
Italy 399 536 896 1,188 1,791
Netherlands 376 520 717 926 1,342
China 316 359 450 942 1,150

Table 1 – Number of Humanities articles published per year for the top 10 countries by article output in Humanities (30 November 2012). Source: Scopus

 

Humanities book content

As many publications in the Arts & Humanities appear not in journals but in books, comprehensive coverage of Humanities research output also requires covering books (1), (8). The Books Enhancement Program was set up to tackle this issue. It aims to index around 75,000 books in Scopus by the end of 2015. Although books in all subject areas will be covered, the focus will be on those subject fields where books matter most: the Social Sciences and the Arts & Humanities. Books will be selected at the publisher level, taking into account aspects such as the reputation of the publisher, the composition of the book list and the expected impact of the books. As part of the Books Enhancement Program, full bibliographic metadata will be indexed, as well as abstracts (where available), author and affiliation information and cited references. By capturing author and affiliation data, it will be possible to attribute a book chapter or monograph to an author, create profiles and measure output. This is also relevant for new initiatives to create user-generated author profiles, such as the Open Researcher and Contributor ID (ORCID) (9); more comprehensive coverage of Humanities output will make it easier for researchers to create their ORCID profiles. By making the cited references available and matching the citations to records in the database, it will also be possible to display citation counts and measure impact.

 

Conclusion

In conclusion, various actions have been taken to make Scopus more comprehensive with respect to Humanities content. The number of Humanities titles covered and articles published in the database has grown substantially. Humanities journal output from North America and Europe in particular appears to be well covered. The next steps will be to increase coverage of Humanities journal content from Asia, which currently seems to be underrepresented in the database, and to extend the source types to books in order to capture the relevant Humanities output that is not published in journals. A fully comprehensive citation index for both STM and the Humanities may not exist yet, but we are getting closer.

 

References:

(1) Hicks, D. (2004). “The four literatures of social science” in Moed, H.F. (Ed.), Handbook of quantitative science and technology research, pp. 473–496. Dordrecht: Kluwer Academic
(2) Sivertsen, G., Larsen, B. (2012) “Comprehensive bibliographic coverage of the social sciences and humanities in a citation index: An empirical analysis of the potential”, Scientometrics, Vol. 91, Issue 2, pp. 567-575.
(3) Martin, B., Tang, P., Morgan, M., Glänzel, W., Hornbostel, S., Lauer, G., et al. (2010) “Towards a bibliometric database for the social sciences and humanities—A European scoping project” a report produced for DFG, ESRC, AHRC, NWO, ANR and ESF. Sussex: Science and Technology Policy Research Unit
(4) European Science Foundation, “European Reference Index for the Humanities (ERIH)”. Available at: http://www.esf.org/research-areas/humanities/erih-european-reference-index-for-the-humanities.html [Accessed 13 December 2012]
(5) Project MUSE. Available at: http://muse.jhu.edu/ [Accessed 13 December 2012]
(6) Ossenblok, T.L.B., Engels, T.C.E., Sivertsen, G. (2012) “The representation of the social sciences and humanities in the Web of Science - A comparison of publication patterns and incentive structures in Flanders and Norway (2005-9)”, Research Evaluation, Vol. 21, Issue 4, pp. 280-290.
(7) Van Weijen, D. (2013) “Publication Languages in the Arts & Humanities”, Research Trends, Issue 32, March 2013.
(8) Huang, M.-H., Chang, Y.-W. (2008) “Characteristics of research output in social sciences and humanities: From a research evaluation perspective”, Journal of the American Society for Information Science and Technology, Vol. 59, Issue 11, pp. 1819-1828
(9) Taylor, M., Thorisson, G.A. (2012) “Fixing authorship – towards a practical model of contributorship”, Research Trends, Issue 31

Trends in Arts & Humanities Funding 2004-2012

Gali Halevi and Judit Bar-Ilan investigate the major trends in the funding of research projects and grants in the Arts & Humanities. Is funding of the Humanities really on the decline?

Read more >


Global economic crises and shrinking government budgets are causing funding cuts across research areas and disciplines (1, 4, 6, 8). Public as well as private funding of Arts & Humanities (A&H) research and activities is a concern and often a matter of debate, especially at times when capital is expected to be invested in life-saving research (7). This article explores some global trends in the funding of A&H over time. A few studies provide overviews of funding trends in A&H (10, 2), but most of these are localized to specific countries and do not depict the trends on a global basis. The main purposes of this paper are therefore:

  1. To sketch the general trends of funded A&H awards by:
  • Allocated capital: i.e. how much money is dedicated to A&H funding over time
  • Geographical distribution and monetary attributions: i.e. how much funding is allocated to A&H and in which countries
  • Type of funding: i.e. the comparative contributions of, for example, private and government funding

  2. To sketch the trends in the types of A&H awards granted, by the types of projects and/or research being funded

The data analyzed in this paper were retrieved from SciVal Funding™ (“the database”), an Elsevier database that covers awarded grants and open funding opportunities across disciplines. The database captures its data directly from the websites of grant-making and funding bodies, and covers organizations that fund scientific research in the United States, Canada, the United Kingdom, the European Commission, Australia, Ireland, Singapore, India, South Africa, and New Zealand. At the time this research was performed, the database included 4,500 research funding organizations, both private and public.

However, since this paper aims to cover international trends in A&H research, it should be noted that SciVal Funding mostly covers English-language grants and opportunities. Grants from the European Commission, for example, are mostly written in English and rarely in local languages. Many non-English-language grants are not covered by the database; hence this analysis focuses mostly on English-language grants. It is acknowledged that A&H, unlike other areas of research, are particularly sensitive to language, especially in the literary arts (9).

Data Collection

We collected information on all awarded grants from 2004 to 2012 that were classified as “Arts & Humanities” in SciVal Funding, which resulted in approximately 370,000 records. Each record contains 13 fields (see Table 1):

Field Name Field Content
Award type A classification, created by SciVal Funding, which describes the type of award, e.g. research, fellowship, project
Award title The actual title of the award as retrieved from the funding body's website
Amount The amount of money allocated for the award
Currency The currency in which the award was granted (note that one currency can be used in more than one country)
Awardee Type A classification, created by SciVal Funding, which indicates whether the grant was given to an institution or to a private person (e.g. a fellowship)
Awardee country The country of the receiving institution or person
Awardee name The name of the receiving person
Sponsor country The country of origin of the funding body
Sponsor name The name of the funding body
Sponsor type The type of funding body, e.g. government, private, corporate
Start date The date the project or research starts
Institution The name of the receiving institution
Abstract A summary of the awarded research or project

Table 1: Database Fields
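For illustration, one such record could be represented as a simple data structure; the Python field names below are our own shorthand for the fields in Table 1, not the database's actual schema.

```python
# Hypothetical representation of one awarded-grant record with the 13 fields of Table 1.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AwardRecord:
    award_type: str           # e.g. research, fellowship, project
    award_title: str
    amount: Optional[float]   # amount granted, in the original currency
    currency: str             # e.g. "USD", "GBP", "EUR"
    awardee_type: str         # institution or private person
    awardee_country: str
    awardee_name: str
    sponsor_country: str
    sponsor_name: str
    sponsor_type: str         # government, private, corporate, ...
    start_date: Optional[date]
    institution: str
    abstract: str
```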

Findings

Geographical distribution of A&H Awards

The geographical analysis of the awarded grants was based on the country sponsoring the award, which in the database represents the origin of the grant. Since the awarded grants covered in the database are mostly in English, it is not surprising that most awards come from Anglophone countries such as the United States, Canada, the United Kingdom, Australia and Ireland. Since the database aggregates open web sources, a breakdown for individual European countries was sometimes not available to us. An analysis of the overall number of awarded grants by country shows that some non-English-speaking countries, such as Taiwan, India and Hong Kong, are nonetheless well represented in the data.

International trends of funded A&H awards: Allocated Capital

This section describes the overall monetary amounts of awarded grants in A&H. The data we analyzed contain a different currency for each country covered, so in order to get a sense of total monetary expenditure we converted all currencies into US dollars using January 2013 exchange rates. These amounts represent the total expenditure of public and private funding of A&H across all types of grants and countries. The analysis shows a constant decline in the monetary expenditure dedicated to A&H activities since 2009 (see Figure 1).

Figure 1: Total monetary expenditure of granted awards in A&H, 2004-2012
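A minimal sketch of this conversion-and-aggregation step is shown below; the exchange rates are approximate January 2013 values, and the rows and column names are invented for illustration rather than taken from the study.

```python
# Convert award amounts to USD at fixed January 2013 rates and total them by start year.
import pandas as pd

JAN_2013_USD_RATES = {"USD": 1.00, "GBP": 1.59, "EUR": 1.33, "AUD": 1.05, "CAD": 1.01}

awards = pd.DataFrame({
    "start_date": pd.to_datetime(["2008-03-01", "2009-07-15", "2011-01-10"]),
    "amount":     [250_000, 1_000_000, 40_000],
    "currency":   ["USD", "GBP", "EUR"],
})

awards["amount_usd"] = awards["amount"] * awards["currency"].map(JAN_2013_USD_RATES)
total_by_year = awards.groupby(awards["start_date"].dt.year)["amount_usd"].sum()
print(total_by_year)
```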

There has been an evident decline in A&H funding from 2009 to 2012. Reports on A&H funding in North America have pointed to the same trend, showing declining funding for A&H activities (3). The decline was particularly sharp from 2010 onwards, with funds cut almost in half each year.

Sponsors and their expenditure

The sponsor types in the database pertain to the type of institutions that provide funding to A&H activities. These include:

  • State/ Provincial Government
  • Federal/ National Government
  • Private
  • Foundation
  • International
  • Corporate
  • Professional Associations and Societies

Figure 2 shows that state and federal government funding are still the major sources of A&H grants and awards. Interestingly, the number of state-funded awards is larger than the number of federally funded ones. These are followed by private funds and awards given by foundations.

Figure 2: Types of sponsors and the number of awarded grants

An analysis of the monetary expenditure per sponsor type (see Figure 3) reveals that federally sponsored awards include the most awards worth at least a million dollars, followed by those from foundations and state/provincial government. Private, international and academic funding at this level is scarce. Professional associations and corporations do not offer awards at this level, and mostly offer awards of under $50,000.

Figure 3: Amount of capital expenditure per sponsor type
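One way to reproduce a breakdown of this kind is to bucket the award amounts into bands, for example from under $50,000 up to $1 million and over, and cross-tabulate the bands against sponsor type; the band edges and toy rows below are purely illustrative.

```python
# Cross-tabulate sponsor type against award-size bands; data and band edges are toy values.
import pandas as pd

awards = pd.DataFrame({
    "sponsor_type": ["Federal/National Government", "Foundation",
                     "State/Provincial Government", "Private"],
    "amount_usd":   [1_500_000, 1_200_000, 45_000, 30_000],
})

bands = [0, 50_000, 250_000, 1_000_000, float("inf")]
labels = ["<$50k", "$50k-$250k", "$250k-$1M", ">$1M"]
awards["amount_band"] = pd.cut(awards["amount_usd"], bins=bands, labels=labels)

print(pd.crosstab(awards["sponsor_type"], awards["amount_band"]))
```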

Sponsors and types of awards

An analysis of the types of awards each sponsor supports (see Figure 4) shows that projects are mainly funded by state government, private funders and federal/national government. Research is mostly funded by federal government and foundation awards, while community awards are typically funded by state government. Fellowships are funded by private, foundation and academic awards. Corporate funds mostly go to community activities, research and conferences, while academic funds are, unsurprisingly, focused on research, fellowships and conferences. Several award types are indexed in the database (see Table 2).

Community Projects and programs aimed at communities and including for example local art workshops, and special performing arts events such as folk festivals and song and dance festivals at local towns or states
Project Generally one-time projects aimed at specific goals (i.e. building improvement, exhibitions)
Research Grants An amount paid to cover any funding for scientific research
Fellowship An amount paid to an individual for the purpose of research
Conference / Travel Grants A grant paid for the purpose of travel to a conference or to cover conference costs
Training An award to support costs of furthering the education of personnel, often students
Career Development An award to defray costs associated with the development of an individual’s career
Equipment An award to be used exclusively for the purchase of equipment related to a research
Prize Monetary Recognition based on competition or other criteria

Table 2: Overview of awards types indexed in the database

Figure 4: Types of awards and their contributing sponsors

Figure 5: Types of awards granted, their frequencies of occurrence and respective monetary value

Figure 5 shows that the types of awards that receive the most funding are those related to projects, followed by research and community. The proportion between the number of awards and their monetary value can also be seen in Figure 5. Projects, research and community awards account for most of the capital spent on A&H awards. There’s a striking gap between these awards and awards given to cover fellowships, conferences, training and equipment for example.

Types of awards and awardee types

In SciVal Funding there are several awardee types, including (1) Institution, (2) Principal Investigator who heads the research, project or program and (3) Co-principal Investigator. The occurrence of co-principal investigator is very rare and most of the awards are assigned to either an Institution or Principal investigator. We found 200,352 grants that were given to an Institution and 163,502 grants to Investigators, including Principal and Co-principal. An analysis of the types of award per awardee type shows that when a principal investigator is assigned to an award it is mostly for research purposes. Following that, principal investigators are heading community work or are the recipients of fellowships (see Figure 6).

Figure 6: Types of awards granted to institutes and investigators and their frequencies of occurrence

When an institution is the recepient of an award, it is mainly for either community or project related activities. Research is far less common as an award type granted to institutions (see Figure 6).

Conclusions

The funds allocated for A&H activities are declining, showing sharp decreases from 2009 to 2012. The global economic crisis which culminated in 2009 might be a major contributor to this decline. State and federal bodies are still the major funding bodies of A&H. The federal government is the main source of funding awards that are worth a million or more, followed by foundations and state /provincial government.

Most of the A&H awarded grants are those related to projects that depict specific programs, research and community programs. Projects are mainly sponsored by state government, private funding and federal/national government, while community related awards are typically sponsored by state government. Fellowships are funded by private, foundation and academic awards and academic funds are allocated to research, fellowships, and conferences.

Research-related grants are mostly received by principal investigators rather than institutions. Institutions receive A&H grants mostly for community and specific projects.

From the analysis above there seems to be a lack of funding for equipment, which is probably needed in the arts and for prizes. More grants are available for research, fellowships and community work for investigators. Awards such as training are available through institutional grants, where there seems to be more room for career-related funding.

Acknowledgement

The authors are grateful to Brie Betz, former project manager of SciVal Funding at Elsevier, for her support and Mehul Pandya, Prakash Devaraj and Hariharan Yuvaraj, from the SciVal Funding product team, for providing the data needed for this study.

References

(1) Baker, J.O., Gutheil, T.G. (2011) "‘Are you kidding?’: Effects of funding cutbacks in the mental health field on patient care and potential liability issues”, Journal of Psychiatry and Law, Vol. 39, No. 3, pp. 425-440
(2) Borgonovi, F., O'Hare, M. (2004) “The impact of the National Endowment for the Arts in the United States: institutional and sectoral effects on private funding”, Journal of Cultural Economics, Vol. 28, No. 1, pp. 21-36
(3) Brinkley, A. (2009) “The Landscape of Humanities Research and Funding”. Available from: http://www.humanitiesindicators.org/essays/brinkley.pdf
(4) Clery, D. (2007) “Research funding: U.K. cutbacks rattle physics, astronomy”, Science, Vol. 318, No. 5858, pp. 1851
(5) Jenkins, B. (2009) “Cultural spending in Ontario, Canada: trends in public and private funding”, International Journal of Cultural Policy, Vol. 15, No. 3, pp. 329-342
(6) Kapp, C. (2001) “Funding squeeze forces UNHCR cutbacks”, Lancet, Vol. 358, No. 9288, pp. 1177
(7) Katz-Gerro, T. (2012) “Do individuals who attend the arts support public funding of the arts? Evidence from England and the USA”, Journal of Policy Research in Tourism, Leisure and Events, Vol. 4, No. 1, pp. 1-27
(8) Rossberg, R.R. (2006) “Funding cutbacks THREATEN regional revival”, Railway Gazette International, Vol. 162, No. 9, pp. 561-568
(9) Van Weijen, D. (2012) “Publication Languages in the Arts & Humanities”, Research Trends, Issue 32.
(10) Zan, L., Baraldi, S., Bonini, Ferri, Paolo, et al.(2012) “Behind the scenes of public funding for performing arts in Italy: hidden phenomena beyond the rhetoric of legislation”, International Journal of Cultural Policy, Vol. 18, No. 1, pp. 76-92
VN:F [1.9.22_1171]
Rating: 0.0/10 (0 votes cast)

Global economic crises and shrinking government budgets are causing funding cuts across research areas and disciplines (1, 4, 6, 8). Public as well as private funding of Arts & Humanities (A&H) research and activities is a concern and often a matter of debate, especially at times when capital is expected to be invested in life-saving research (7). This article explores global trends in the funding of A&H over time. A few studies provide overviews of funding trends in A&H (10, 2), but most of them are localized, cover specific countries, and do not depict these trends on a global basis. The main purposes of this paper are therefore:

  1. To sketch the general trends of funded A&H awards by:
  • Allocated capital: how much money is dedicated to A&H funding over time
  • Geographical distribution and monetary attributions: how much funding is allocated to A&H and in which countries
  • Type of funding: the comparative contributions of, for example, private and government funding

  2. To sketch the trends in the types of granted A&H awards, by the types of projects and/or research being funded

The data analyzed in this paper was retrieved from SciVal Funding™ (“the database”), an Elsevier database that covers awarded grants and open funding opportunities across disciplines. The database captures its data directly from the websites of grants and funding bodies and covers organizations that fund scientific research in the United States, Canada, the United Kingdom, the European Commission, Australia, Ireland, Singapore, India, South Africa, and New Zealand. At the time this research was performed, the database included 4,500 research funding organizations, including private and public funding institutions.

However, since this paper aims to cover international trends in A&H research, it should be noted that SciVal Funding mostly covers English-language grants and opportunities. Grants derived from the European Commission, for example, are mostly written in English and rarely in local languages. There are also many non-English-language grants that are not covered by the database; hence this analysis focuses mostly on English-language grants. It is acknowledged that A&H, unlike other areas of research, are sensitive to language, especially in the literary arts (9).

Data Collection

We collected all awarded-grant records from 2004 to 2012 that were classified as “Arts & Humanities” in SciVal Funding, which resulted in approximately 370,000 records. Each record contained 13 unique fields (see Table 1); a minimal sketch of how such a record might be represented follows the table.

Field Name | Field Content
Award type | A classification, created by SciVal Funding, which describes the type of award (e.g. research, fellowship, project)
Award title | The actual title of the award as retrieved from the funding body's website
Amount | The amount of money allocated for the award
Currency | The currency the award was granted in (note that one currency can be used in several countries)
Awardee Type | A classification, created by SciVal Funding, which annotates whether the grant was given to an institution or to an individual (e.g. a fellowship)
Awardee country | The country of the receiving institution or person
Awardee name | The name of the receiving person
Sponsor country | The country of origin of the funding body
Sponsor name | The name of the funding body
Sponsor type | The type of funding body (e.g. government, private, corporate)
Start date | The date the project / research starts
Institution | The name of the receiving institution
Abstract | A summary of the awarded research or project

Table 1: Database Fields
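For illustration only, a single record of this kind might be represented as a simple key-value mapping, as in the sketch below; the key names mirror Table 1 and all values are invented, not taken from the database.

    # A hypothetical awarded-grant record mirroring the 13 fields of Table 1.
    # All values are invented for illustration; they are not taken from SciVal Funding.
    sample_record = {
        "Award type": "Fellowship",
        "Award title": "Postdoctoral Fellowship in Medieval Studies",
        "Amount": 50000,
        "Currency": "USD",
        "Awardee Type": "Principal Investigator",
        "Awardee country": "United States",
        "Awardee name": "Jane Doe",
        "Sponsor country": "United States",
        "Sponsor name": "Example Humanities Foundation",
        "Sponsor type": "Foundation",
        "Start date": "2011-09-01",
        "Institution": "Example University",
        "Abstract": "A study of manuscript circulation in medieval Europe.",
    }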

Findings

Geographical distribution of A&H Awards

The geographical analysis of the awarded grants was performed based on the country sponsoring the award; in the database, this field represents the origin of the grant. Since the awarded grants covered in the database are mostly English-language ones, it is not surprising that most awards come from Anglophone countries such as the United States, Canada, the United Kingdom, Australia and Ireland. Since the database aggregates open web sources, a breakdown by individual European countries was not always available to us. An analysis of the overall number of awarded grants by country shows that some non-English-speaking countries, such as Taiwan, India and Hong Kong, are nevertheless well represented in the data.
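A minimal sketch of this kind of count, assuming the award records have been exported to a CSV file with a “Sponsor country” column (the file name and column label are assumptions, not the actual SciVal Funding export format):

    import pandas as pd

    # Load the exported award records (file name and column label are assumptions).
    awards = pd.read_csv("ah_awards_2004_2012.csv")

    # Count awarded grants per sponsoring country, most frequent first.
    awards_by_country = awards.groupby("Sponsor country").size().sort_values(ascending=False)
    print(awards_by_country.head(10))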

International trends of funded A&H awards: Allocated Capital

This section describes the overall monetary value of awarded grants in A&H. The data we analyzed is recorded in a different currency for each country covered; in order to get a sense of the total monetary expenditure, we converted all amounts into US dollars using January 2013 exchange rates. These amounts represent the total expenditure of public and private funding of A&H across all types of grants and countries. The analysis shows a steady decline in the monetary expenditure dedicated to A&H activities since 2009 (see Figure 1).
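A minimal sketch of this conversion and of the yearly aggregation behind Figure 1 (the exchange rates shown are illustrative placeholders, not the rates used in the study, and the file name and column labels are assumptions):

    import pandas as pd

    # Illustrative January 2013 exchange rates to US dollars (placeholders only).
    USD_PER_UNIT = {"USD": 1.00, "CAD": 1.00, "GBP": 1.61, "EUR": 1.33, "AUD": 1.05}

    awards = pd.read_csv("ah_awards_2004_2012.csv", parse_dates=["Start date"])

    # Convert each award amount to US dollars and sum the expenditure per start year.
    awards["Amount (USD)"] = awards["Amount"] * awards["Currency"].map(USD_PER_UNIT)
    yearly_totals = awards.groupby(awards["Start date"].dt.year)["Amount (USD)"].sum()
    print(yearly_totals)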

Figure 1: Total monetary expenditure of granted awards in A&H, 2004-2012

There has been an evident decline in A&H funding from 2009 to 2012. Reports on A&H funding in North America have pointed to the same trend, showing a decline in funding for A&H activities (3). The decline was sharpest between 2010 and 2011, with funding almost halved year on year.

Sponsors and their expenditure

The sponsor types in the database refer to the types of institution that provide funding for A&H activities. These include:

  • State/ Provincial Government
  • Federal/ National Government
  • Private
  • Foundation
  • International
  • Corporate
  • Professional Associations and Societies

Figure 2 shows that state/provincial and federal/national government funding are still the major sources of A&H grants and awards. Interestingly, the number of state-funded awards is larger than the number of federally funded ones. These are followed by private funds and awards given by foundations.

Figure 2: Types of sponsors and the number of awarded grants

An analysis of the monetary expenditure per sponsor type (see Figure 3) reveals that federally sponsored awards include the most awards worth at least a million dollars, followed by foundations and state/provincial government. Private, international and academic funding at this level is scarce. Professional associations and corporations do not offer awards at this level of funding; their awards fall mostly in the under-$50,000 range.
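A sketch of how such a banded comparison could be produced, assuming the amounts have already been converted to US dollars in an “Amount (USD)” column (band edges follow the $50,000 and $1 million thresholds mentioned above; the file name and column labels are assumptions):

    import pandas as pd

    # Assumed export with amounts already converted to USD.
    awards = pd.read_csv("ah_awards_2004_2012_usd.csv")

    # Band each award by value and cross-tabulate the bands against the sponsor type.
    bands = pd.cut(
        awards["Amount (USD)"],
        bins=[0, 50_000, 1_000_000, float("inf")],
        labels=["under $50k", "$50k to $1M", "$1M and above"],
    )
    print(pd.crosstab(awards["Sponsor type"], bands))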

Figure 3: Amount of capital expenditure per sponsor type

Sponsors and types of awards

An analysis of the types of awards each sponsor supports (see Figure 4) shows that projects are mainly funded by state government, private funders and federal/national government. Research is mostly funded by federal government and foundation awards, while community awards are typically funded by state government. Fellowships are funded by private, foundation and academic awards. Corporate funds mostly go to community work, research and conferences, while academic funds are, unsurprisingly, focused on research, fellowships and conferences. Several award types are indexed in the database (see Table 2).

Community | Projects and programs aimed at communities, including for example local art workshops and special performing-arts events such as folk, song and dance festivals in local towns or states
Project | Generally one-time projects aimed at specific goals (e.g. building improvement, exhibitions)
Research Grants | An amount paid to cover any funding for scientific research
Fellowship | An amount paid to an individual for the purpose of research
Conference / Travel Grants | A grant paid for the purpose of travel to a conference or to cover conference costs
Training | An award to support the costs of furthering the education of personnel, often students
Career Development | An award to defray costs associated with the development of an individual's career
Equipment | An award to be used exclusively for the purchase of equipment related to a research project
Prize | Monetary recognition based on a competition or other criteria

Table 2: Overview of award types indexed in the database

Figure 4: Types of awards and their contributing sponsors

Figure 5: Types of awards granted, their frequencies of occurrence and respective monetary value

Figure 5 shows that the types of awards that receive the most funding are those related to projects, followed by research and community. The relationship between the number of awards and their monetary value can also be seen in Figure 5: projects, research and community awards account for most of the capital spent on A&H awards. There is a striking gap between these and the awards given to cover, for example, fellowships, conferences, training and equipment.

Types of awards and awardee types

In SciVal Funding there are several awardee types, including (1) Institution, (2) Principal Investigator, who heads the research, project or program, and (3) Co-Principal Investigator. Co-principal investigators occur very rarely, and most awards are assigned to either an institution or a principal investigator. We found 200,352 grants that were given to an institution and 163,502 grants given to investigators, including principal and co-principal investigators. An analysis of the types of award per awardee type shows that when a principal investigator is assigned to an award, it is mostly for research purposes; after that, principal investigators most often head community work or are the recipients of fellowships (see Figure 6).
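A sketch of the corresponding counts, assuming the same CSV export with “Awardee Type” and “Award type” columns (the file name and column labels are assumptions):

    import pandas as pd

    awards = pd.read_csv("ah_awards_2004_2012.csv")

    # Number of grants per awardee type (institution vs. investigator).
    print(awards["Awardee Type"].value_counts())

    # Share of each award type within each awardee type, similar to the breakdown shown in Figure 6.
    print(pd.crosstab(awards["Awardee Type"], awards["Award type"], normalize="index").round(2))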

Figure 6: Types of awards granted to institutes and investigators and their frequencies of occurrence

When an institution is the recipient of an award, it is mainly for community- or project-related activities. Research is far less common as an award type granted to institutions (see Figure 6).

Conclusions

The funds allocated to A&H activities are declining, with sharp decreases from 2009 to 2012. The global economic crisis, which culminated in 2009, may be a major contributor to this decline. State and federal bodies are still the major funders of A&H. The federal government is the main source of awards worth a million dollars or more, followed by foundations and state/provincial government.

Most A&H grants are awarded for projects with specific goals, for research, and for community programs. Projects are mainly sponsored by state government, private funders and federal/national government, while community-related awards are typically sponsored by state government. Fellowships are funded by private, foundation and academic awards, and academic funds are allocated to research, fellowships and conferences.

Research-related grants are mostly received by principal investigators rather than institutions. Institutions receive A&H grants mostly for community work and specific projects.

The analysis above suggests a lack of funding for equipment, which is probably needed in the arts, and for prizes. For investigators, more grants are available for research, fellowships and community work. Training awards are available mostly through institutional grants, where there also seems to be more room for career-related funding.

Acknowledgement

The authors are grateful to Brie Betz, former project manager of SciVal Funding at Elsevier, for her support and Mehul Pandya, Prakash Devaraj and Hariharan Yuvaraj, from the SciVal Funding product team, for providing the data needed for this study.

References

(1) Baker, J.O., Gutheil, T.G. (2011) “‘Are you kidding?’: Effects of funding cutbacks in the mental health field on patient care and potential liability issues”, Journal of Psychiatry and Law, Vol. 39, No. 3, pp. 425-440
(2) Borgonovi, F., O'Hare, M. (2004) “The impact of the National Endowment for the Arts in the United States: institutional and sectoral effects on private funding”, Journal of Cultural Economics, Vol. 28, No. 1, pp. 21-36
(3) Brinkley, A. (2009) “The Landscape of Humanities Research and Funding”. Available from: http://www.humanitiesindicators.org/essays/brinkley.pdf
(4) Clery, D. (2007) “Research funding: U.K. cutbacks rattle physics, astronomy”, Science, Vol. 318, No. 5858, p. 1851
(5) Jenkins, B. (2009) “Cultural spending in Ontario, Canada: trends in public and private funding”, International Journal of Cultural Policy, Vol. 15, No. 3, pp. 329-342
(6) Kapp, C. (2001) “Funding squeeze forces UNHCR cutbacks”, Lancet, Vol. 358, No. 9288, p. 1177
(7) Katz-Gerro, T. (2012) “Do individuals who attend the arts support public funding of the arts? Evidence from England and the USA”, Journal of Policy Research in Tourism, Leisure and Events, Vol. 4, No. 1, pp. 1-27
(8) Rossberg, R.R. (2006) “Funding cutbacks threaten regional revival”, Railway Gazette International, Vol. 162, No. 9, pp. 561-568
(9) Van Weijen, D. (2012) “Publication Languages in the Arts & Humanities”, Research Trends, Issue 32
(10) Zan, L., Bonini Baraldi, S., Ferri, P. et al. (2012) “Behind the scenes of public funding for performing arts in Italy: hidden phenomena beyond the rhetoric of legislation”, International Journal of Cultural Policy, Vol. 18, No. 1, pp. 76-92