
Research Trends is an online magazine providing objective insights into scientific trends based on bibliometric analyses.

Stem cell research: Trends in and perspectives on the evolving international landscape

Stem cell research is an exciting yet complex and controversial science. In this piece, Alexander van Servellen and Ikuko Oba present the most important findings of their recent study on publication trends in Stem Cell Research.



Introduction

Stem cell research is an exciting yet complex and controversial science. The field holds the potential to revolutionize the way human diseases are treated, and many nations have therefore invested heavily in stem cell research and its applications. However, human stem cell research also raises many ethical and regulatory questions that shape national policies.

Elsevier recently partnered with EuroStemCell and Kyoto University’s Institute for Integrated Cell-Material Sciences (iCeMS) to study publication trends in stem cell research. The resulting paper was published online to coincide with the World Stem Cell Summit in San Diego on December 6th, 2013. The study provides an overview of the stem cell research field as a whole, with particular focus on pluripotent stem cells.

Pluripotent stem cells are of particular interest because they are undifferentiated cells, which have the potential to differentiate into virtually any cell type in the body (1; see Figure 1). This property opens the door to clinical applications such as cell and organ replacement (2) and may accelerate drug discovery, drug screening and toxicological assessment. There are different kinds of pluripotent stem cells: embryonic stem cells (ES) are sourced from a blastocyst (an early embryo) and, when sourced from human blastocysts, are called human embryonic stem cells (hES). Induced pluripotent stem cells (iPS), first created in 2006 by Shinya Yamanaka and colleagues at Kyoto University, are sourced from body cells that are then genetically reprogrammed to become pluripotent. For more detailed information on stem cells, please refer to our study (3).

The document sets underlying our analyses were created using keyword searches, which are provided in the methodology section of our study, and were limited to articles, reviews and conference proceedings. They include primary research articles as well as other publication types, such as reviews and papers on policy, regulation and ethical considerations. In this article we briefly review some key findings of our study, and expand on them with a closer look at the clinical theme ‘drug development’ using SciVal. We will also examine the publication trends of China and the United States specifically, to see whether we can observe the impact of country-level policy decisions in the publication data.

 


Figure 1: Stem Cell types and characteristics


Publication output, growth and field-weighted citation impact

Our study found that the overall corpus of stem cell related papers shows relatively fast growth and high citation impact.

Stem cell publications show a Compound Annual Growth Rate (CAGR) of 7.0% from 2008 to 2012, more than twice the 2.9% CAGR for global publication output on all topics in the same period. Stem cell publications have a Field-Weighted Citation Impact (FWCI) of approximately 1.5 throughout the 2008-2012 period, which indicates that stem cell papers, on average, received 50% more citations than other papers published in related disciplines in that period. (Stem cells and their subtypes are custom subject areas that were created using keyword searches. Each document set therefore includes publications belonging to various disciplines of the All Science Journal Classification.)
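The two metrics used throughout the study can be made concrete with a small worked example. The sketch below uses hypothetical publication and citation counts, not data from the report:

```python
# Illustrative sketch of the two metrics used in the study: Compound
# Annual Growth Rate (CAGR) and Field-Weighted Citation Impact (FWCI).
# All numbers below are hypothetical placeholders, not figures from the report.

def cagr(start_count: float, end_count: float, years: int) -> float:
    """Compound annual growth rate over `years` annual periods."""
    return (end_count / start_count) ** (1 / years) - 1

def fwci(citations_received: float, expected_citations: float) -> float:
    """Ratio of actual citations to the field-normalized expectation.
    An FWCI of 1.5 means 50% more citations than the world average for
    comparable publications (same field, year and document type)."""
    return citations_received / expected_citations

# A corpus growing from 10,000 papers in 2008 to 13,108 in 2012
# (four annual periods) corresponds to a CAGR of about 7.0%:
print(f"CAGR: {cagr(10_000, 13_108, 2012 - 2008):.1%}")  # CAGR: 7.0%

# A paper cited 15 times, where comparable papers average 10 citations:
print(f"FWCI: {fwci(15, 10):.2f}")  # FWCI: 1.50
```

Note that because FWCI is a ratio against a field-specific baseline, it allows comparison across fields with very different citation habits, which a raw citation count would not.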

Looking at specific types of stem cell research, the emergence of the iPS cell field (first publication in 2006) stands out. iPS cell papers show explosive growth and the highest impact of all types of stem cell research papers. The FWCI of the iPS field was extremely high just after the cell type’s discovery, as might be expected of an emerging field. The FWCI calculated at the beginning of the period was based on relatively low publication counts, which are more subject to outlier effects than later data points based on larger publication volumes. The decline in FWCI (see Figure 3) should therefore not be interpreted as a decrease in the quality of research; rather, it is a natural and expected decline as publication volume increases. Nonetheless, the 3,080 iPS cell papers published between 2008 and 2012 have an FWCI of 2.93, almost three times the world level for all papers published in related disciplines. That is strong evidence of the sustained recognition and importance of the emerging field of iPS cell research.

We observed that hES cell publication output peaked in 2010, and while ES cell research overall shows a high publication volume, it is predominantly represented by non-human ES cell research (see Figure 2). The FWCI for ES cell and hES cell publications also remained relatively stable during the same period, at around 1.8 times the world average for ES cell publications and over twice the world average for hES cell publications.


Figure 2 - Global publication count (1996-2012) and compound annual growth rate (CAGR)(2008-2012) for all stem cells (Stem Cells), ES cells (all organisms; ESCs), hES cells (hESCs), and iPS cells (iPSCs). Source: Scopus.

 


Figure 3 - The FWCI of publications on stem cells overall and by cell type from 2008-2012. The pale blue line represents the global average field-weighted citation impact for all publications in the various subject areas assigned to the journals in which stem cell papers are published. Source: Scopus

 

Clinical themes: Regenerative Medicine and Drug Development

Our study also examined the extent to which stem cell publications are aligned with the societal goals of developing new treatments for diseases, by analyzing the publications for use of keywords related to two themes: regenerative medicine and drug development. The results show that more than half of all stem cell publications do not use keywords related to either theme (see Figure 4). Such publications may be related to basic research which addresses the fundamental biology of stem cells. These specific themes may also not be relevant to many clinical or translational publications, e.g., those related to hematopoietic stem cell transplantation and cancer (translational research is scientific research that helps to make findings from basic science useful for practical applications that enhance human health and well-being). It should also be noted that the search used to compile the document set for overall stem cell research was purposely broad, and can be expected to include stem cell research of all kinds, as well as research which refers to stem cells in the title, abstract, and keywords, but may not necessarily be considered “stem cell research” per se.
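The theme analysis rests on keyword matching over publication records. The sketch below illustrates that idea; the keyword lists are stand-ins chosen for illustration, not the actual search terms, which are given in the methodology section of the study:

```python
# Sketch of keyword-based theme tagging, in the spirit of the study's
# methodology. The keyword lists here are illustrative stand-ins, not
# the actual search terms used in the report.

REGEN_KEYWORDS = {"regenerative medicine", "tissue engineering", "cell therapy"}
DRUG_KEYWORDS = {"drug development", "drug screening", "toxicity testing"}

def tag_themes(title: str, abstract: str, keywords: list[str]) -> set[str]:
    """Return the clinical themes whose keywords appear in a record."""
    text = " ".join([title, abstract, *keywords]).lower()
    themes = set()
    if any(kw in text for kw in REGEN_KEYWORDS):
        themes.add("regenerative medicine")
    if any(kw in text for kw in DRUG_KEYWORDS):
        themes.add("drug development")
    # An empty set corresponds to "neither theme" -- as with more than
    # half of all stem cell publications in the study.
    return themes

print(tag_themes(
    "iPS cells as a platform for drug screening",
    "We derive induced pluripotent stem cells for toxicity testing.",
    ["stem cells", "disease models"],
))  # {'drug development'}
```

A limitation of any such scheme, acknowledged in the study, is that it counts keyword usage rather than research content: a basic-biology paper mentioning "drug screening" in passing is tagged, while a translational paper using different vocabulary is not.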

 “Today, stem cell research is more about understanding than about treating illnesses. I do think it’s most important to understand how our tissues are formed, and how they get ill. I’d go further and say that understanding stem cells means understanding where we come from. If we think of the embryonic stem cells, they tell us a lot about how our bodies develop from an embryo. They provide a window on events which we couldn’t otherwise observe.”

— Elena Cattaneo, Full Professor, Director, UniStem, Center for Stem Cell Research, University of Milan

 

Figure 4 - The percentage of stem cell papers published from 2008 to 2012 using keywords related to “drug development,” “regenerative medicine,” or neither, by cell type. Source: Scopus

 

It is not surprising to see that regenerative medicine is strongly represented within each type of stem cell research. Alongside positive developments in stem cell biology, regenerative medicine has enabled the development of new biotechnologies that promote self-repair and regeneration, such as the construction of new tissues to improve or restore the function of injured or destroyed tissues and organs (4).

Drug development is represented by a much smaller share of each type of stem cell research. Notably, a much larger share of iPS cell papers was related to drug development (11%) than of ES cell papers (4%) or stem cell papers overall (2%). This may reflect the particular potential that iPS cells hold for the development of disease models, personalized medicine, and drug toxicity testing. iPS cells can be derived from selected living individuals, including those with inherited diseases and their unaffected relatives, which could allow the screening process to account for genetic differences in response to potential new drugs.

 

Exploring Drug Development using SciVal

To expand on the analysis done in the study, we used the new generation of SciVal to examine stem cell papers related to drug development, setting up the relevant research fields using the same keywords applied in our initial study. The results are presented in Figure 5. The number of iPS cell papers related to drug development has clearly grown fast since the first iPS cell paper was published in 2006, and has since surpassed the number of ES and hES cell papers related to drug development.


Figure 5 - Number of global publications related to “drug development” in ES, hES and iPS cell research, 2006-2012. Source: SciVal.

 

 “I believe the biggest impact to date of iPS cell technology is not regenerative medicine, but in making disease models, drug discovery, and toxicology testing…”

— Shinya Yamanaka, Director, Center for iPS Cell Research and Application (CiRA), Kyoto University

 

World Publication Activity in Embryonic Stem Cell (ES), Human Embryonic Stem Cell (hES) and Induced Pluripotent Stem Cell (iPS) Research

Stem cell research has provoked debate regarding the ethics and regulation of the research and resulting therapies. Initially these discussions focused largely on the moral status of the embryo. The discovery of iPS cells raised the possibility that ES cell research would no longer be necessary, thereby circumventing the ethical issues present in embryonic research. This has not been the case, as the stem cell field continues to rely on both ES and iPS cell research to progress the understanding of pluripotency and potential applications (5). Furthermore, iPS cell research is not free of ethical considerations, in terms of how the cells may be used as well as questions of tissue ownership. Looking at the data, we see continued publication in ES and hES cell research, but observe that the global volume of iPS cell publications surpassed the volume of hES cell publications in 2010 (see Figure 6). There also seems to be an overall slowing in growth, and even a recent decrease, in world ES and hES cell publication output. These findings should be interpreted with caution, keeping in mind that our datasets represent publications which use keywords related to stem cell research, and not solely “stem cell research papers”.


Figure 6 - Global Stem Cell publications (top) and ES, hES and iPS cell publications (bottom), as a share of total world output, from 1996-2012. Source: Scopus

 

Stem Cells in China

As noted in the introduction, we also examined the publication trends of China and the United States specifically, to see whether the impact of country-level policy decisions can be observed in the publication data.

China shows steady growth in stem cell research, supported by major funding initiatives. In 2001, the Chinese Ministry of Science and Technology (MOST) launched two independent stem cell programs, followed by a number of funding initiatives intended to further promote stem cell research, applications and public awareness. At the same time, China has been working to strengthen ethical guidelines and regulations. In total, the national government’s stem cell funding commitment is estimated at more than 3 billion RMB (close to 500 million USD) for the five-year period from 2012 to 2016. Confronted with the healthcare needs of a rapidly aging population of nearly 1.4 billion, the impetus behind much stem cell research has, so far, understandably been clinical translation and development (6).

Looking at the publication data from our study, we see that stem cell publications have grown from representing 0.2% of China’s total publication output in 2001 to a peak of 0.82% in 2008, followed by a marginal decline to 0.76% by 2012 (see Figure 7). As also observed globally, China’s iPS cell publication output surpassed hES cell publication output in 2010, after which hES cell output shows fluctuation.


Figure 7 - China’s Stem Cell publications (top) and ES, hES and iPS cell publications (bottom), as a share of total country output, from 1996-2012. Source: Scopus


Science policy and Human Embryonic Stem Cell (hES) research in the USA

The United States is an interesting case study because, as reported in our study, it is the world leader in stem cell research: it produces the highest absolute publication volume, shows high relative activity levels (indicating a strong focus on stem cell research), and achieves high field-weighted citation impact. Yet it has had to grapple with the practical and ethical dilemmas inherent in this field, as well as the changing views of successive administrations.

The result is a series of policy changes, some of which limited federal funding for hES cell research, while others loosened the limitations. In Figure 8 we map these policies against hES cell publication output (relative to total country output). Despite the restrictive policies in place between 2001 and 2009, the United States shows steady output growth, which has been supported through individual, state and industry funding as well as donations. We do observe changes in hES cell publication output that coincide with changes in regulation. While such changes are probably not best explained by any single factor, they are hardly surprising, as science policy can be expected to greatly impact scientific activity. An analysis of this kind can provide insight into the degree to which science policy has indeed affected publication output.


Figure 8 – USA’s hES publications as a share of total country output, from 1996-2012 and relevant US policy decisions. Source: Scopus (and various sources for policy decisions)

 

Conclusion

In recent years, stem cell research has grown remarkably, showing a growth rate more than double that of world research publications from 2008 to 2012. However, this increase is not uniform across all stem cell research areas. Our analysis showed that both the ES and hES fields have grown more slowly than the stem cell field overall. In contrast, iPS cell publications have shown explosive growth, as would be expected of a new and promising field of research, and iPS cell publication volume surpassed that of hES cell publications in 2010. Both cell types nonetheless continue to be highly active areas.

Stem cell research has attracted considerable attention within the scientific community: stem cell publications overall are cited 50% more than all other publications in related disciplines, while ES cell publications are cited twice the world rate, and iPS cell publications nearly three times the world rate. This high-growth, high-impact field encompasses research across many cell types, with a focus ranging from the most fundamental to the clinical. Reflecting the field’s ongoing development and clinical promise, approximately half of all stem cell publications are associated with regenerative medicine or drug development, a trend that is particularly pronounced in iPS cell research.

Stem cell research is developing fast, with some experimental pluripotent stem cell treatments already in clinical trials. Active debates are underway to adapt regulatory frameworks to address the specific challenges of developing, standardizing, and distributing cell-based therapies, while advances in basic research continue to provide a fuller understanding of how stem cells can be safely and effectively used. Cell replacement or transplantation therapies are not the only application of stem cell research: the first steps are already being taken towards the use of cells derived from pluripotent stem cells in drug discovery and testing. It is with great interest and anticipation that we watch the further development of this exciting field of science.

 

References

(1)    Fakunle, E.S., Loring, J.F. (2012) “Ethnically diverse pluripotent stem cells for drug development”, Trends in Molecular Medicine, Vol. 18, No. 12, pp. 709-716.
(2)    Csete, M. (2013) “Chapter 84 - Regenerative Medicine”, in Transfusion Medicine and Hemostasis (2nd ed.), edited by Beth H. Shaz, Christopher D. Hillyer, Mikhail Roshal and Charles S. Abrams. San Diego: Elsevier, pp. 559-563.
(3)    EuroStemCell, iCeMS, Elsevier (2013) “Stem Cell Research Trends and Perspectives on the Evolving International Landscape”. Available at: http://info.scival.com/UserFiles/Stem-Cell-Report-Trends-and-Perspectives-on-the-Evolving-International-Landscape_Dec2013.pdf
(4)    Gurtner, G.C., Callaghan, M.J., Longaker, M.T. (2007) “Progress and potential for regenerative medicine”, Annual Review of Medicine, Vol. 58, pp. 299-312.
(5)    Smith, A., Blackburn, C. (2012) “Do we still need research on human embryonic stem cells?”. Available at: http://www.eurostemcell.org/commentanalysis/do-we-still-need-research-human-embryonic-stem-cells
(6)    Yuan, W., Sipp, D., Wang, Z.Z., Deng, H., Pei, D., Zhou, Q., Cheng, T. (2012) “Stem Cell Science on the Rise in China”, Cell Stem Cell, Vol. 10, No. 1, pp. 12-15.

 

VN:F [1.9.22_1171]
Rating: 0.0/10 (0 votes cast)

Introduction

Stem cell research is an exciting yet complex and controversial science. The field holds the potential to revolutionize the way human diseases are treated, and many nations have therefore invested heavily in stem cell research and its applications. However, human stem cell research is also controversial with many ethical and regulatory questions that impact a nation’s policies.

Elsevier recently partnered with EuroStemCell and Kyoto University’s Institute for Integrated Cell-Material Sciences (iCeMS) to study publication trends in stem cell research. The resulting paper was published online to coincide with the World Stem Cell Summit in San Diego on December 6th, 2013. The study provides an overview of the stem cell research field as a whole, with particular focus on pluripotent stem cells.

Pluripotent stem cells are of particular interest because they are undifferentiated cells, which have the potential to differentiate into virtually any cell type in the body (1; see Figure 1). This property opens the door to clinical applications such as cell and organ replacement (2) and may accelerate drug discovery, drug screening and toxicological assessment. There are different kinds of pluripotent stem cells: embryonic stem cells (ES) are sourced from a blastocyst (an early embryo), and when sourced from human blastocysts are called human embryonic stem cells (hES), while induced pluripotent stem cells (iPS) - which were only recently discovered in 2006 by Shinya Yamanaka and colleagues at Kyoto University - are sourced from body cells, and then genetically reprogrammed to become pluripotent. For more detailed information on stem cells, please refer to our study (3).

The document sets underlying our analyses were created using keyword searches which are provided in the methodology section of our study, and were limited to articles, reviews and conference proceedings. They include primary research articles as well as other publication types, such as reviews, papers on policy and regulation, ethical considerations, etcetera. In this article we briefly review some key findings of our study, and expand by having a closer look at the clinical theme ‘drug development’ using SciVal. We will also examine the publication trends of China and the United States specifically, to see whether we can observe the impact of country level policy decisions in the publication data.

 

SvS  (1)

Figure 1: Stem Cell types and characteristics


Publication output, growth and field-weighted citation impact

Our study found that the overall corpus of stem cell related papers shows a relatively fast growth rate and citation impact.

Stem cell publications show a Compound Annual Growth Rate (CAGR) of 7.0% from 2008 to 2012, which is more than twice as great as the 2.9% CAGR for global publication output on all topics in the same period. Stem cell publications have a Field-Weighted Citation Impact (FWCI) of approximately 1.5 throughout the 2008-2012 period, which indicates that stem cell papers, on average, received 50% more citations than all other papers published in related disciplines in that period. (Stem cells and its subtypes are custom subject areas that were created using keyword searches. Each document set therefore includes publications belonging to various disciplines of the All Science Journal Classification).

Looking at specific types of stem cell research, the emergence of the iPS cell field (first publication in 2006) stands out. iPS cell papers show explosive growth and the highest impact of all types of stem cell research papers. The FWCI of the iPS field was extremely high just after its discovery, as might be expected of an emerging field. The FWCI calculated at the beginning of the period was based on relatively low publication counts, which are more subject to outlier effects than later data-points, which are based on larger publication volumes. The decline in FWCI (see Figure 3) should not be interpreted as a decrease in quality of research, rather it should be seen as a natural and expected decline as publication volume increases. Nonetheless, the 3,080 iPS cell papers published between 2008-2012 have a FWCI of 2.93, which is almost 3 times world level for all papers published in related disciplines. That is strong evidence to support the sustained recognition and importance of the emerging field of iPS cell research.

We observed that hES cell publication output peaked in 2010, and while ES cell research overall shows a high publication volume, it is predominantly represented by non-human ES cell research (see Figure 2). The FWCI for ES cell and hES cell publications also remained relatively stable during the same period, at around 1.8 times the world average for ES cell publications, and over two times world average for hES cell publications.

SvS  (2)

Figure 2 - Global publication count (1996-2012) and compound annual growth rate (CAGR)(2008-2012) for all stem cells (Stem Cells), ES cells (all organisms; ESCs), hES cells (hESCs), and iPS cells (iPSCs). Source: Scopus.

 

SvS  (3)

Figure 3 - The FWCI of publications on stem cells overall and by cell type from 2008-2012. The pale blue line represents the global average field-weighted citation impact for all publications in the various subject areas, assigned to the journals in which stem cell papers are published. Source: Scopus

 

Clinical themes: Regenerative Medicine and Drug Development

Our study also examined the extent to which stem cell publications are aligned with the societal goals of developing new treatments for diseases, by analyzing the publications for use of keywords related to two themes: regenerative medicine and drug development. The results show that more than half of all stem cell publications do not use keywords related to either theme (see Figure 4). Such publications may be related to basic research which addresses the fundamental biology of stem cells. These specific themes may also not be relevant to many clinical or translational publications, e.g., those related to hematopoietic stem cell transplantation and cancer (translational research is scientific research that helps to make findings from basic science useful for practical applications that enhance human health and well-being). It should also be noted that the search used to compile the document set for overall stem cell research was purposely broad, and can be expected to include stem cell research of all kinds, as well as research which refers to stem cells in the title, abstract, and keywords, but may not necessarily be considered “stem cell research” per se.

 “Today, stem cell research is more about understanding than about treating illnesses. I do think it’s most important to understand how our tissues are formed, and how they get ill. I’d go further and say that understanding stem cells means understanding where we come from. If we think of the embryonic stem cells, they tell us a lot about how our bodies develop from an embryo. They provide a window on events which we couldn’t otherwise observe.”

— Elena Cattaneo, Full Professor, Director, UniStem, Center for Stem Cell Research, University of Milan

 

SvS 4
Figure 4 - The percentage of stem cell papers published from 2008 to 2012 using keywords related to “drug development,” “regenerative medicine,” or other by cell type. Source: Scopus

 

 It is not surprising to see that regenerative medicine is significantly represented within each type of stem cell research. Alongside positive developments in stem cell biology, regenerative medicine has enabled the development of new biotechnologies that promote self-repair and regeneration, such as the construction of new tissues to improve or restore the function of injured or destroyed tissues and organs (4).

Drug development is represented by a much smaller share of each type of stem cell research. The fact that many more iPS cell papers were related to drug development (11%) compared to ES cells (4%) and stem cells overall (2%) stands out. This may reflect the particular potential that iPS cells hold for the development of disease models, personalized medicine, and drug toxicity testing. iPS cells can be derived from selected living individuals, including those with inherited diseases and their unaffected relatives, which could allow the screening process to account for genetic differences in response to potential new drugs.

 

 Exploring Drug Development using SciVal

To expand on the analysis done in the study, we used the new generation of SciVal to examine stem cell papers related to drug development by setting up the relevant research fields using the same keywords applied in our initial study. The results are presented in figure 5. The number of iPS cell papers related to drug development has clearly grown fast since the first iPS cell paper was published in 2006, as it has since surpassed the numbers of ES and hES cell papers related to drug development.

SvS 5

Figure 5 Number of global publications related to “drug development” in ES, hES and iPS cell research 2006-2012. Source: SciVal.

 

 “I believe the biggest impact to date of iPS cell technology is not regenerative medicine, but in making disease models, drug discovery, and toxicology testing…”

— Shinya Yamanaka, Director Center for iPS Cell Research and Application (CiRA), Kyoto University.

 

World Publication Activity in Embryonic Stem Cell (ES), Human Embryonic Stem cell (hES) and induced Pluripotent Stem cell research (iPS)

Stem cell research has provoked debate regarding the ethics and regulation of the research and resulting therapies. Initially these discussions focused largely on the moral status of the embryo. The discovery of iPS cells raised the possibility that ES cell research would no longer be necessary, thereby circumventing the ethical issues present in embryonic research. This has not been the case, as the stem cell field continues to rely both on ES and iPS cell research to progress the understanding of pluripotency and potential applications (5). Furthermore, iPS cell research is not free of ethical considerations in terms of how they may be used as well as the question of tissue ownership. Looking at the data, we see continued publications in ES and hES, but do observe that the global volume of iPS publications has surpassed the volume of hES cell publications in 2010 (see Figure 6). There also seems to be an overall slowing in growth, and even a recent decrease in world ES and hES cell publication output. These findings should be interpreted with caution, keeping in mind that our datasets represent publications which use keywords related to stem cell research, and not solely “stem cell research papers”.

SvS 6

Figure 6 Global Stem Cell publications (top) and ES, hES and iPS cell publications (bottom), as a share of total world output, from 1996-2012. Source: Scopus

 

Stem Cells in China

We also examined the publication trends of China and the United States specifically, to see whether we can observe the impact of country level policy decisions in the publication data.

China is a country which shows steady growth in stem cell research supported by its major funding initiatives. In 2001, the Chinese Ministry of Science and Technology (MOST) launched two independent stem cell programs followed by a number of funding initiatives intended to further promote stem cell research, applications and public awareness. At the same time, China has been working to strengthen ethical guidelines and regulations. In total, the national government’s stem cell funding commitment is estimated at more than 3 billion RMB (close to 500 million USD) for the 5 year period from 2012 to 2016. Confronted with the healthcare needs of a rapidly aging population of nearly 1.4 billion, the impetus behind much stem cell research, so far, has understandably been clinical translation and development (6).

Looking at the publication data from our study, we see that stem cell publications have grown from representing 0.2% of China’s total publication output in 2001 to a peak of 0.82% in 2008, followed by a marginal decline to 0.76% by 2012 (see Figure 7). As also observed globally, China’s iPS cell publication output surpassed hES cell publication output in 2010, after which hES cell output shows fluctuation.

SvS 7

Figure 7 China’s Stem Cell publications (top) and ES, hES and iPS cell publications (bottom), as a share of total country output, from 1996-2012. Source: Scopus


Science policy and Human Embryonic Stem Cell (hES) research in the USA

The United States is an interesting case study because, as reported in our study, they are the world leader in stem cell research considering that they produce the highest absolute publication volume, as well as high relative activity levels, indicating a high focus on stem cell research, and show high field-weighted citation impact. Yet, they have had to grapple with the practical and ethical dilemmas that are inherent in this field, and changing views of different administrations, as governments changed.

The result is a series of policy changes, some of which limited federal funding for hES cell research, while others loosened the limitations. In figure 8 we map such policies along with the corresponding publication output (relative to total country output). Despite the restrictive policies between 2001 and 2009, the United States show steady output growth, which has been supported through individual, state and industry funding as well as donations. We do observe changes in hES cell publication output that coincide with changes in regulation. While such changes in publication output are probably not best explained using a one factor model, these findings are hardly surprising, as we expect science policy to greatly impact scientific activity. Such an analysis can provide insight into the degree to which science policy has indeed affected publication output.

SvS 8

Figure 8 – USA’s hES publications as a share of total country output, from 1996-2012 and relevant US policy decisions. Source: Scopus (and various sources for policy decisions)

 

Conclusion

In recent years, stem cell research has grown remarkably, at more than double the growth rate of world research publications from 2008 to 2012. However, this increase is not uniform across all stem cell research areas. Our analysis showed that both the ES and hES fields have grown more slowly than the stem cell field overall. In contrast, iPS cell publications have shown explosive growth, as would be expected of a new and promising field of research, and iPS cell publication volumes surpassed those of hES cell publications in 2010. Both cell types nonetheless continue to be highly active areas.
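The growth comparison above can be made concrete as compound annual growth rates; a sketch with invented publication counts follows:

```python
# Compare a field's compound annual growth rate (CAGR) over 2008-2012 with
# that of world publication output. All counts are invented for illustration.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate, as a fraction per year."""
    return (end / start) ** (1 / years) - 1

stem_cell_growth = cagr(start=15_000, end=24_000, years=4)
world_growth = cagr(start=1_800_000, end=2_100_000, years=4)

print(f"Stem cell CAGR: {stem_cell_growth:.1%}")
print(f"World CAGR: {world_growth:.1%}")
print("More than double the world rate:", stem_cell_growth > 2 * world_growth)
```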

Stem cell research has attracted considerable attention within the scientific community: stem cell publications overall are cited 50% more than other publications in related disciplines, ES cell publications are cited at twice the world rate, and iPS cell publications at nearly three times the world rate. This high-growth, high-impact field encompasses research across many cell types, with foci ranging from the most fundamental to the clinical. Reflecting the field’s ongoing development and clinical promise, approximately half of all stem cell publications are associated with regenerative medicine or drug development, a trend that is particularly pronounced in iPS cell research.

Stem cell research is developing fast, with some experimental pluripotent stem cell treatments already in clinical trials. Active debates are underway on adapting regulatory frameworks to address the specific challenges of developing, standardizing and distributing cell-based therapies, while advances in basic research continue to provide a fuller understanding of how stem cells can be used safely and effectively. Cell replacement and transplantation therapies are not the only application of stem cell research: the first steps are already being taken towards the use of cells derived from pluripotent stem cells in drug discovery and testing. It is with great interest and anticipation that we watch the further development of this exciting field of science.

 

References

(1)    Fakunle, E.S., Loring, J.F. (2012) "Ethnically diverse pluripotent stem cells for drug development", Trends in molecular medicine, Vol. 18, No. 12, pp. 709-716.
(2)    Csete, M. (2013) “Chapter 84 - Regenerative Medicine”, In Transfusion Medicine and Hemostasis (2nd. ed.), edited by Beth H. Shaz, Christopher D. Hillyer, Mikhail Roshal and Charles S. Abrams. San Diego: Elsevier, 2013, pp. 559-563.
(3)    EuroStemCell, iCeMS, Elsevier (2013) “Stem Cell Research Trends and Perspectives on the Evolving International Landscape”. Available at: http://info.scival.com/UserFiles/Stem-Cell-Report-Trends-and-Perspectives-on-the-Evolving-International-Landscape_Dec2013.pdf
(4)    Gurtner, G.C., Callaghan, M.J., Longaker, M.T. (2007) “Progress and potential for regenerative medicine”, Annu Rev Med, Vol. 58, pp. 299-312.
(5)    Smith, A., Blackburn, C. (2012) “Do we still need research on human embryonic stem cells?”. Available at: http://www.eurostemcell.org/commentanalysis/do-we-still-need-research-human-embryonic-stem-cells
(6)    Yuan, W., Sipp, D., Wang, Z.Z., Deng, H., Pei, D., Zhou, Q., Cheng, T. (2012) “Stem Cell Science On the Rise in China”, Cell Stem Cell, Vol. 10, No. 1, pp. 12-15.

 


Research assessment: Review of methodologies and approaches

In this article, Dr. Henk Moed and Dr. Gali Halevi discuss the different levels at which research evaluation takes place and provide an overview of the various quantitative approaches to research assessment that are currently available.

Read more >


The assessment of scientific merit and of individuals has a long and respectable history, demonstrated in numerous methods and models utilizing different data sources and approaches (1, 2). The proliferation and increasing availability of primary data have made it possible to evaluate research at many levels and degrees of complexity, but have also introduced some fundamental challenges for all involved in the process, including evaluators, administrators and researchers (3).

Evaluative methods are used on several levels within the scientific world: (1) the institutional (including departmental) level, (2) the program level, and (3) the individual level. Each of these levels has its own objectives and goals. For example:

Institutional evaluation is used to establish accreditation, define missions, establish new programs and monitor the quality of an institute’s research activities, among other goals. The types of evaluative results can be seen in university ranking systems, which at present are produced at both regional and international levels based on different criteria (4). Institutional evaluations are based on prestige measures derived from publications, citations, patents, collaborations and the levels of expertise of the individuals within the institution.

Program level evaluations are performed in order to measure the cost-benefit aspects of specific scientific programs. These are usually based on uncovering the linkage between the investment made and the potential results of the program (5). Within this realm we find, among others, measures developed for the technology transfer capabilities and commercialization potential of a program (6).

Finally, individual evaluation is performed mainly for the purposes of promotion and retention, and is done at specific times in a researcher’s career. Individual assessment methods rely mainly on counts of publications or citations (7). In the past few years, with the advent of social media, we have seen increasing use of measures based on mentions on social media sites such as blogs, Facebook, LinkedIn, Wikipedia and Twitter, which are labelled as sources of “altmetrics” and also include news outlets (8). The data used for each of these evaluation goals, whether they measure a publication’s impact in social or economic terms or both, vary with the method chosen in each case.


Evaluative Indicators

Based on the different methodologies and approaches, several indicators, aimed at quantifying and benchmarking the results of these evaluative methods, have emerged through the years. Table 1 summarizes some of the main indicators used today in research evaluation.

Type of indicator: Publications – Citations
Description: Methods involving counts of the number of publications produced by the evaluated entity (e.g. researcher, department, institution) and the citations they receive.
Main uses: Measuring the impact and intellectual influence of scientific and scholarly activities, including publication impact, author impact, institution/department impact and country impact.
Main challenges: Name variations of institutions and individuals make it difficult to count these correctly. Limited or absent coverage in the database selected for the analysis can cause fundamental errors. Some documents, such as technical reports or professional papers (“grey literature”), are usually excluded from the analysis due to lack of indexing, which in certain disciplines decreases the accuracy of the assessment. Differences between disciplines and institutional reading behaviors are difficult to account for.

Type of indicator: Usage
Description: Methods that aim to quantify the number of times a scholarly work has been downloaded or viewed.
Main uses: Indicating works that are read, viewed or shared as a measure of impact. Enabling authors to be recognized for publications that might be less cited but heavily used.
Main challenges: Incomplete usage data across providers leads to partial analysis. Differences between disciplines and institutional reading behaviors are difficult to account for. Content crawling and automated download tools allow individuals to download large amounts of content, which does not necessarily mean it was read or viewed; in general, it is difficult to ascertain whether downloaded publications were actually read or used.

Type of indicator: Social (Altmetrics)
Description: Methods that aim to capture the number of times a publication is mentioned in blogs, tweets or other social media platforms, such as shared reference management tools.
Main uses: Measuring the mentions of a publication on social media sites, which can be considered alongside citations and usage and thus indicate the impact of research, an individual or an institution.
Main challenges: A relatively new area with few providers of social media tracking. The weight given to each social media source differs from one provider to another, leading to different “impact” scores.

Type of indicator: Patents
Description: Measures the number of patents assigned to an institution or an individual, and identifies citations to basic research papers in patents as well as patents that are highly cited by recently issued patents.
Main uses: Attempting to provide a direct link between basic science and patents as an indication of economic, social and/or methodological contribution.
Main challenges: Incomplete and un-standardized references and names limit the ability to properly assign citations and patents to individuals or institutions. Patenting in countries other than the one where the institution or individual originates is problematic for impact analysis. The lack of exhaustive reference lists within patents limits the analysis.

Type of indicator: Economic
Description: Measures the strength of the links between science and its effects on industry, innovation and the economy as a whole.
Main uses: Providing technology transfer indicators. Indicating the patentability potential of a research project. Providing cost-benefit measures.
Main challenges: The statistical models used are complex and require a deep understanding of both the investment made and the program itself. Long-term programs are more difficult to measure in cost-benefit terms. Requires expertise not only in mathematics and statistics but also in the field of investigation itself.

Type of indicator: Networks
Description: Calculates collaborations between institutions and individuals on a domestic and global scale. Institutions and individuals that develop and maintain a prolific research network are not only more productive but also active, visible and established.
Main uses: Enabling the tracking of highly connected and globally active individuals and institutions. Allowing evaluators to benchmark collaborating individuals and institutions against each other.
Main challenges: Affiliation names as mentioned in published papers are not always standardized, making them difficult to trace. Education in a different country that did not result in a publication cannot be measured, making this particular aspect of expertise building impossible to trace.

Table 1 - Types of evaluative indicators
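Table 1 repeatedly flags name variations as a main challenge for publication and citation counting. A toy sketch of the normalization step such counting requires; the variant map and records below are invented for illustration:

```python
# Illustrate the name-variation challenge from Table 1: without normalization,
# one institution's output is split across several spellings. The variant map
# and the records are invented for demonstration.

from collections import Counter

VARIANTS = {
    "mit": "Massachusetts Institute of Technology",
    "massachusetts inst. of technology": "Massachusetts Institute of Technology",
    "massachusetts institute of technology": "Massachusetts Institute of Technology",
}

def normalize(affiliation: str) -> str:
    """Map a raw affiliation string to a canonical institution name."""
    return VARIANTS.get(affiliation.strip().lower(), affiliation.strip())

records = ["MIT", "Massachusetts Inst. of Technology",
           "Massachusetts Institute of Technology", "MIT"]

counts = Counter(normalize(r) for r in records)
print(counts)  # all four records collapse to one institution
```

Real bibliometric databases maintain far larger curated variant lists; an incomplete list undercounts an entity's output, which is exactly the error the table warns about.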


Big data and its effect on evaluation methods

Big data refers to collections of data sets so large and complex that they become difficult to process using on-hand database management tools or traditional data processing applications. The advent of supercomputers and cloud computing able to process, analyze and visualize these datasets also affects evaluation methods and models. While a decade ago scientific evaluation relied mainly on citation and publication counts, often compiled manually, today these data are not only available digitally but can also be triangulated with other data types (9). Table 2 depicts some examples of big datasets that can be combined in a bibliometric study to investigate different phenomena related to publications and scholarly output. Thus, for example, publication and citation counts can be triangulated with collaboration indicators, text analysis and econometric measures to produce a multi-level view of an institution, program or individual. Yet the availability and processing capabilities of these large datasets do not necessarily make evaluation simple or easy to communicate. As models become more complex, administrators and evaluators find it harder to reach consensus on which one best depicts the productivity and impact of scientific activities. These technological abilities are becoming a breeding ground for ever more indices, models and measures, and while each may be valid and grounded in research, together they present a challenge in deciding which are best to use, and in what setting.

Combined datasets: Citation indexes and usage log files of full-text publication archives
Studied phenomena: Downloads versus citations; distinct phases in the processing of scientific information
Typical research questions: What do downloads of full-text articles measure? To what extent do downloads and citations correlate?

Combined datasets: Citation indexes and patent databases
Studied phenomena: Linkages between science and technology (the science–technology interface)
Typical research questions: What is the technological impact of a scientific research finding or field?

Combined datasets: Citation indexes and scholarly book indexes
Studied phenomena: The role of books in scholarly communication; research productivity taking scholarly book output into account
Typical research questions: How important are books in the various scientific disciplines, how do journals and books interrelate, and what are the most important book publishers?

Combined datasets: Citation indexes (or publication databases) and OECD national statistics
Studied phenomena: Research input or capacity; evolution of the number of active researchers in a country and the phases of their careers
Typical research questions: How many researchers enter and/or move out of a national research system in a particular year?

Combined datasets: Citation indexes and full-text article databases
Studied phenomena: The context of citations; sentiment analysis of the scientific-scholarly literature
Typical research questions: In what ways can one objectively characterize citation contexts, and identify implicit citations to documents or concepts?

Table 2 - Compound Big Datasets and their objects of study. Source: Research Trends, Issue 30, September 2012 (9)


Conclusions

As shown by this brief review, research assessment manifests itself in different methodologies and indicators. Each methodology has its strengths and limitations, and carries a certain risk of arriving at invalid outcomes. Indicators and other science metrics are essential tools on two levels: in the assessment process itself, and at the meta level that shapes the process. Their function on these two levels differs. On the first, they are tools in the assessment of a particular unit, e.g. an individual researcher or a department, and may provide one of the foundations of evaluative statements about that unit. On the second, they provide insight into the functioning of a research system as a whole, help draw general conclusions about its state, and assist in drafting policy conclusions regarding the overall objectives and general set-up of an assessment process.

Closely defining the unit of assessment and the evaluative methodologies to be used can provide a clue as to how peer review and quantitative approaches might be combined. For instance, the complexity of finding appropriate peers to assess all research groups in a broad science discipline in a national research assessment exercise may urge the organizers of that exercise to carry out a bibliometric study first and decide on the basis of its outcomes in which specialized fields or for which groups a thorough peer assessment seems necessary.

As Ben Martin pointed out in his 1996 article (10), this is true not only for metrics but also for peer review. It is the task of members of the scholarly community and the research policy domain to decide what are acceptable “error rates” in the methodologies and indicators being used, and whether their benefits outweigh their detriments. Bibliometricians and other science and technology analysts should provide insight into the uses and limits of the various types of metrics, in order to help scholars and policy makers carry out such a delicate task.

References

(1)    Vale, R. D. (2012). “Evaluating how we evaluate”, Molecular Biology of the Cell, Vol. 23, No. 17, pp. 3285-3289.
(2)    Zare, R. N. (2012). “Editorial: Assessing academic researchers”, Angewandte Chemie - International Edition, Vol. 51, No. 30, pp. 7338-7339.
(3)    Simons, K. (2008). “The misused impact factor”, Science, Vol. 322, No. 5899, p. 165.
(4)    O'Connell, C. (2013). “Research discourses surrounding global university rankings: Exploring the relationship with policy and practice recommendations”, Higher Education, Vol. 65, No. 6, pp. 709-723.
(5)    Imbens, G.M., Wooldridge, J.M. (2008). “Recent Developments in the Econometrics of Program Evaluation”, NBER Working Paper No. 14251. Available at: http://www.nber.org/papers/w14251
(6)    Arthur, M. W., & Blitz, C. (2000). “Bridging the gap between science and practice in drug abuse prevention through needs assessment and strategic community planning”, Journal of Community Psychology, Vol. 28, No. 3, pp. 241-255.
(7)    Lee, L. S., Pusek, S. N., McCormack, W. T., Helitzer, D. L., Martina, C. A., Dozier, A. M., & Rubio, D. M. (2012). “Clinical and Translational Scientist Career Success: Metrics for Evaluation”, Clinical and translational science, Vol. 5, No. 5, pp. 400-407.
(8)    Taylor, M. (2013). “Exploring the boundaries: How altmetrics can expand our vision of scholarly communication and social impact”, Information Standards Quality, Vol. 25, No. 2, pp. 27-32.  Available at: http://www.niso.org/publications/isq/2013/v25no2/taylor/
(9)    Moed, H.F., (2012). “The Use of Big Datasets in Bibliometric Research”, Research Trends, Issue 30, September 2012. Available at: https://www.researchtrends.com/issue-30-september-2012/the-use-of-big-datasets-in-bibliometric-research/
(10) Martin, B.R., (1996). “The use of multiple indicators in the assessment of basic research”, Scientometrics, Vol. 36, No. 3, pp. 343-362.
VN:F [1.9.22_1171]
Rating: 0.0/10 (0 votes cast)

The assessment of scientific merit and individuals has a long and respectable history which has been demonstrated in numerous methods and models utilizing different data sources and approaches (1, 2). The proliferation and increasing availability of primary data has created the ability to evaluate research on many levels and degrees of complexity, but has also introduced some fundamental challenges to all who are involved in this process, including evaluators, administrators and researchers, and others (3).

Evaluative methods are used on several levels within the scientific world: (1) Institutional (including departmental) level, (2) Program level, and (3) Individual level. Each of these levels has its own objectives and goals; for example,

Institutional evaluation is being used in order to establish accreditation, define missions, establish new programs and monitor the quality of an institute’s research activities among others. The types of evaluative results can be seen in the ranking systems of universities, which at present are produced at both regional and international levels, based on different criteria (4). Institutional evaluations are performed based on prestige measures derived from publications, citations, patents, collaborations and levels of expertise of the individuals within the institution.

Program level evaluations are performed in order to measure the cost-benefit aspects of specific scientific programs. These are usually based on discovering the linkage between the investment made and the potential results of the program (5). Within this realm we find measures developed for technology transfer capabilities and commercialization potentialities of the program, among others (6).

Finally an individual evaluation is mainly performed for purposes of promotion and retention of individuals and is done at specific times in a researcher’s career. Individual assessment methods rely mainly on counts of publications or citations (7). In the past few years, with the advent of social media, we have seen an increase in the use of measures based on mentions in social media sites such as blogs, Facebook, LinkedIn, Wikipedia, Twitter, and others, which are labelled as sources of “altmetrics” and include news outlets as well (8). The data used for each of these evaluation goals, whether they measure a publication’s impact in social or economic terms or both, varies by the method chosen in each case.


Evaluative Indicators

Based on the different methodologies and approaches, several indicators, aimed at quantifying and benchmarking the results of these evaluative methods, have emerged through the years. Table 1 summarizes some of the main indicators used today in research evaluation.

Type of Indicator Description Main Uses Main Challenges
Publications – Citations Methods involving counts of the number of publications produced by the evaluated entity (e.g. researcher, department, institution) and the citations they receive. Measuring the impact and intellectual influence of scientific and scholarly activities including:Publication impact
Author impact
Institution /department impact
Country impact
Name variations of institutions and individuals make it difficult to count these correctly.Limited coverage or lack of coverage of the database selected for the analysis can cause fundamental errors.Some documents such as technical reports or professional papers (“grey literature”) are usually excluded from the analysis due to lack of indexing and thus, in certain disciplines, decrease the accuracy of the assessment.Differences between disciplines and institution reading behaviors are difficult to account for.
Usage Methods that aim to quantify the number of times a scholarly work has been downloaded or viewed. Indicating works that are read, viewed or shared as a measure of impact.Enabling authors to be recognized for publications that might be less cited but heavily used. Incomplete usage data availability across providers leads to partial analysis.Differences between disciplines and institution reading behaviors are difficult to account for.Content crawling and automated downloads software tools that allow individuals to automatically download large amounts of content, which doesn’t necessarily mean that it was read or viewed.Difficult to ascertain whether downloaded publications were actually read or used.
Social (Altmetrics) Methods that aim to capture the number of times a publication is mentioned in blogs, tweets or other social media platforms such as shared reference management tools. Measuring the mentions of a publication in social media sites, which can be considered as citations and usage and thus indicate the impact of research, an individual or institution. A relatively new area with few providers of social media tracking.The weight given to each social media source is different from one provider to the other thus leading to different “impact” scores.
Patents Measures the number of patents assigned to an institution or an individual.Identification of citations to basic research papers in patents as well as patents that are highly cited by recently issued patents. Attempting to provide a direct link between basic science and patents as an indication of economic, social and/or methodological contribution. Incomplete and un-standardized references and names limit the ability of properly assigning citations and patents to individuals or institutions.Patenting in countries other than where the institution or individual originates from is problematic for impact analysis.Lack of exhaustive reference lists within the patents limits the analysis.
Economic Measures the strengths between science and its effect on industry, innovation and the economy as a whole. Providing technology transfer indicators.Indicating patentability potentialities of a Research project.Providing cost-benefit measures The statistical models used are complex and require deep understanding of the investment made but also of the program itself.Long term programs are more difficult to measure as far as the cost-benefit is concerned.Requires expertise not only in mathematics and statistics but also in the field of investigation itself.
Networks Calculates collaborations between institutions and individuals on a domestic and global scale.Institutions and individuals that develop and maintain a prolific research network are not only more productive but also active, visible and established. Enabling the tracking of highly connected and globally active individuals and institutions.Allowing benchmarking to be performed by evaluators by comparing collaborating individuals and institutions to each other. Affiliation names as mentioned in the published papers are not always standardized, thus making them difficult to trace.Education in a different country which might not have resulted in a publication cannot be measured, thus making this particular aspect of expertise building impossible to trace.

Table 1 - Types of evaluative indicators


Big data and its effect on evaluation methods

Big data refers to a collection of data sets that is so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications. The advent of supercomputers and cloud computing which are able to process, analyze and visualize these datasets also has an effect on evaluation methods and models. While a decade ago scientific evaluation relied mainly on citations and publication counts, most of which were even done manually, today these data are not only available digitally but can also be triangulated with other data types (9). Table 2 depicts some examples of big datasets that can be combined in a bibliometric study to investigate different phenomena related to publications and scholarly output. Thus, for example, publication and citation counts can be triangulated with collaborative indicators, text analysis and econometric measures to produce a multi-level view of an institution, program or an individual. Yet, the availability and processing capabilities of these large datasets does not necessarily mean that evaluation becomes simple or easy to communicate. The fact of the matter is that as they become more complex, both administrators and evaluators find it difficult to reach consensus as to which model best depicts the productivity and impact of scientific activities. These technological abilities are becoming a breeding ground for more indices, models and measures and while each may be valid and grounded in research, they present a challenge in deciding which are best to use and in what setting.

Combined datasets Studied phenomena Typical research questions
Citation indexes and usage log files of full text publication archives Downloads versus citations; distinct phases in the process of processing scientific information What do downloads of full text articles measure? To what extent do downloads and citations correlate?
Citation indexes and patent databases Linkages between science and technology (the science–technology interface) What is the technological impact of a scientific research finding or field?
Citation indexes and scholarly book indexes The role of books in scholarly communication; research productivity taking scholarly book output into account How important are books in the various scientific disciplines, how do journals and books interrelate, and what are the most important book publishers?
Citation indexes (or publication databases) and OECD national statistics Research input or capacity; evolution of the number of active researchers in a country and the phase of their career How many researchers enter and/or move out of a national research system in a particular year?
Citation indexes and full text article databases The context of citations; sentiment analysis of the scientific-scholarly literature In what ways can one objectively characterize citation contexts? And identify implicit citations to documents or concepts?

Table 2 - Compound Big Datasets and their objects of study. Source: Research Trends, Issue 30, September 2012 (9)


Conclusions

As shown by this brief review, research assessment manifests itself in different methodologies and indicators. Each methodology has its strengths and limitations, and is associated with a certain risk of arriving at invalid outcomes. Indicators and other science metrics are essential tools on two levels: in the assessment process itself, and on the Meta level aimed to shape that process. Yet, their function on these two levels is different. On the first they are tools in the assessment of a particular unit, e.g. a particular individual researcher, or department, and may provide one of the foundations of evaluative statements about such a unit. On the second level they provide insight into the functionality of a research system as a whole, and help draw general conclusions about its state, assisting in drafting policy conclusions regarding the overall objective and general set-up of an assessment process.

Closely defining the unit of assessment and the evaluative methodologies to be used can provide a clue as to how peer review and quantitative approaches might be combined. For instance, the complexity of finding appropriate peers to assess all research groups in a broad science discipline in a national research assessment exercise may urge the organizers of that exercise to carry out a bibliometric study first and decide on the basis of its outcomes in which specialized fields or for which groups a thorough peer assessment seems necessary.

As Ben Martin pointed out in his 1996 article (10), this is true not only for metrics but also for peer review. It is the task of members from the scholarly community and the domain of research policy to decide what are acceptable “error rates” in the methodology and indicators being used, and whether its benefits prevail over its detriments. Bibliometricians and other science and technology analysts should provide insight into the uses and limits of various types of metrics, in order to help scholars and policy makers to carry out such a delicate task.

References

(1)    Vale, R. D. (2012). “Evaluating how we evaluate”, Molecular Biology of the Cell, Vol. 23, No. 17, pp. 3285-3289.
(2)    Zare, R. N. (2012). “Editorial: Assessing academic researchers”, Angewandte Chemie - International Edition, Vol. 51, No. 30, pp. 7338-7339.
(3)    Simons, K. (2008). “The misused impact factor”, Science, Vol. 322, No. 5899, pp. 165.
(4)    O'Connell, C. (2013). “Research discourses surrounding global university rankings: Exploring the relationship with policy and practice recommendations”, Higher Education, Vol. 65, No. 6, pp. 709-723.
(5)    Guido M. Imbens and Jeffrey M. Wooldridge (2008). “Recent Developments in the Econometrics of Program Evaluation”, The National Bureau of Economic Research Working Papers Series. Available at: http://www.nber.org/papers/w14251
(6)    Arthur, M. W., & Blitz, C. (2000). “Bridging the gap between science and practice in drug abuse prevention through needs assessment and strategic community planning”, Journal of Community Psychology, Vol. 28, No. 3, pp. 241-255.
(7)    Lee, L. S., Pusek, S. N., McCormack, W. T., Helitzer, D. L., Martina, C. A., Dozier, A. M., & Rubio, D. M. (2012). “Clinical and Translational Scientist Career Success: Metrics for Evaluation”, Clinical and translational science, Vol. 5, No. 5, pp. 400-407.
(8)    Taylor, M. (2013). “Exploring the boundaries: How altmetrics can expand our vision of scholarly communication and social impact”, Information Standards Quality, Vol. 25, No. 2, pp. 27-32.  Available at: http://www.niso.org/publications/isq/2013/v25no2/taylor/
(9)    Moed, H.F., (2012). “The Use of Big Datasets in Bibliometric Research”, Research Trends, Issue 30, September 2012. Available at: https://www.researchtrends.com/issue-30-september-2012/the-use-of-big-datasets-in-bibliometric-research/
(10) Martin, B.R., (1996). “The use of multiple indicators in the assessment of basic research”, Scientometrics, Vol. 36, No. 3, pp. 343-362.

Towards a common model of citation: some thoughts on merging altmetrics and bibliometrics

In this contribution, Mike Taylor stresses the need for a theoretical basis for the nascent field of alternative metrics, on which any methodological decisions need to rely.



Reporting back: This article is based on presentations that Mike Taylor gave at the PLoS article level metrics workshop in San Francisco and at the World Social Science Forum (WSSF) in Montreal, both in October 2013.

The increasing visibility of scholarly communication and discussion has led to a dramatic increase in the complexity of understanding its academic impact and social reach.

Although this communication takes many different forms, with radically different attributes, it is generally treated as a single entity: that of altmetrics.

In fact, it is arguable that the creation of altmetrics as a single entity was technocratic (driven by what is technically possible) and thus pragmatic (built from what is available), rather than rooted in a theoretical discipline; had the different sources emerged at different times, or been accessed via different technical solutions, they might have been kept discrete.

The fundamental differences are readily apparent. For example, when one tweets a reference to a paper, it can be observed that the communication is necessarily brief, and is unlikely to have taken much time or thought. Frequently it is in the form of a ‘retweet’ and can be classified as the mere repetition of a message through personal networks.

The effort taken to tweet a link or reference may be contrasted with a blog post, where the intended recipient may well be the original research team, as well as others interested in the academic area. Other forms of scholarly blog link to papers when attempting to précis the content for a non-academic audience (http://realclimate.org/), or engage with misleading and mendacious uses of research that promote commercial and political aims - a less scholarly endeavor that nonetheless still contains links and discussion.

Nevertheless, both blogs and tweets can be said to have the explicit intention of being public: this can be contrasted with anonymous data that can be harvested and interpreted from many other sites. Of course, formal citation in a peer-reviewed article is also a public act, and this serves to introduce two other important criteria: those of context and immediacy. A tweet may have virtually no context (being only a reference to a paper), whereas a blog post may be several thousand words long. Similarly, a tweet may be an immediate act of impetuosity, whereas a citation in a peer-reviewed paper will necessarily have taken longer.

Privacy, however, complicates the picture: reading or downloading an article may be considered a private act in a study room, but user activity counts (and other demographic information) aggregating such acts, provided by tools such as Mendeley, Citeulike, GitHub and DataDryad, are often included in publicly available altmetric data, as are article-level usage figures from publisher sites.

With the exception of people who are trying deliberately to distort data (for example, by repeatedly downloading an article – a practice which publishers work hard to counter), little is known of how mindful people are of the public nature or use of their activity and how this affects their behavior.

Altmetrics therefore consists of a wide variety of data with different characteristics, linked by a common set of tools. Data is typically accessed via an API (application programming interface), papers are referenced by DOIs (digital object identifiers), and the platforms from which the data is gathered are social: this defines the set of data rather than providing a theoretical foundation. It is not surprising, therefore, that little is known about users’ intentions, motivations or experiences.
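A minimal sketch of this pragmatic, DOI-keyed aggregation (the DOIs, platform names and counts below are invented for illustration, and no real altmetrics API is assumed):

```python
from collections import defaultdict

# Hypothetical records as they might arrive from different platform APIs:
# each is (DOI, platform, count). All values are invented for illustration.
records = [
    ("10.1000/example.1", "twitter", 42),
    ("10.1000/example.1", "mendeley", 310),
    ("10.1000/example.2", "twitter", 3),
]

def aggregate_by_doi(records):
    """Merge heterogeneous platform counts under a DOI key - the common
    identifier that lets disparate sources be treated as one record."""
    merged = defaultdict(dict)
    for doi, platform, count in records:
        merged[doi][platform] = merged[doi].get(platform, 0) + count
    return dict(merged)

print(aggregate_by_doi(records))
```

The DOI is the only thing the sources share; everything else (meaning, effort, audience) differs from platform to platform, which is precisely the point made above.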

When a user posts a paper on Mendeley, we can hypothesize various motives including (but not limited to) the following:

  • Other people might be interested in this paper.
  • I might read this paper in the future.
  • I have read this paper and want it to be easily findable.
  • I want other people to think I have read this paper.
  • It is my paper, and I maintain my own library.
  • It is my paper, and I want people to read it.
  • It is my paper, and I want people to see that I wrote it.
  • I might skim read this paper in the future because I suspect it might back up an argument I’m thinking about making and it looks like it would make a useful citation.

With Twitter, the poster may choose to call attention to their tweet, to direct people to their response, may address the tweet to the authors, or may add inflections by the arbitrary (or organized) use of hashtags.

Each example of altmetric data has its own set of potential underlying motives, and each example requires different research: tweets may be subject to qualitative research, but are less easily studied by user surveys, for example. It would, of course, be possible (although time-consuming) to monitor tweets and ask the tweeter to complete a survey on their motivations for the individual tweet, but the time taken to survey would probably be disproportionately longer than the time taken to compose and post the original tweet.

To date, altmetric research has focused more on correlation (Priem et al, 1) than on motivation, and has relied upon assumptions rather than empirical evidence to postulate the relative level of engagement with an article (Fenner and Lin, 2).

Fifty years of relevant research

The related field of bibliometrics has – since 1962 – produced a significant body of research into the motivation of citation. Amongst the many intellectual assets available for potential re-purposing are theoretical models, methodologies, data sets and references. Bornmann and Daniel’s 2008 article, “What do citation counts measure? A review of studies on citing behavior” (3) reviews the extensive literature and reports the conclusions of this research. However, with the exception of Priem et al.’s passing reference to this review, a search on Scopus reveals that of the 162 citations made to this paper, not one appears to be related to altmetrics.

The scholarly research into reference and citation attempted to test two potential theories of citation motivation: normative and social constructivist. Broadly speaking, the two camps may be positioned as:

1. “Scientists give credit to colleagues whose work they use by citing that work” versus

2. “Scientific knowledge is socially constructed through the manipulation of political and financial resources and the use of rhetorical devices” (reported in 3)

After fifty years of research, Cronin was able to summarize the weight of evidence in favor of the normative view:

“The weight of empirical evidence seems to suggest that scientists typically cite the works of their peers in a normatively guided manner and that these signs (citations) perform a mutually intelligible communicative function” (4)

Shortly after the inception of bibliometrics, Eugene Garfield (1962, as reported in 3) listed fifteen possible motivations to cite:

1. Paying homage to pioneers;

2. Giving credit for related work (homage to peers);

3. Identifying methodology, equipment, etc.;

4. Providing background reading;

5. Correcting one’s own work;

6. Correcting the work of others;

7. Criticizing previous work;

8. Substantiating claims;

9. Alerting to forthcoming work;

10. Providing leads to poorly disseminated, poorly indexed, or uncited work;

11. Authenticating data and classes of fact (physical constants, etc.);

12. Identifying original publications in which an idea or concept was discussed;

13. Identifying original publication or other work describing an eponymic concept or term (...);

14. Disclaiming work or ideas of others (negative claims); and

15. Disputing priority claims of others (negative homage).

All of these are as relevant to social citation in 2013 as they were to formal citation in 1962; and the added visibility and speed of activity in social networks only adds to the list, for example:

16. Building a network of related researchers;

17. Building a reputation as a good networker;

18. Paying visible homage to a senior researcher;

19. Seeking the attention of a senior researcher;

20. Demonstrating that one’s reading is up to date; and

21. Intimidating critics with the breadth of one’s reading.

There are many more motivations that can be added to this list.

That there should be general agreement on the nature of formal citation should come as little surprise: learning how to reference, or “show your reading”, is a skill that is taught from an early age. Many websites exist to support and develop best citation practice, some even going to the length of invoking the law to encourage compliance:

“If you do not include your references both in your essay and on a reference sheet at the end of your essay, you could face legal action for being in violation of plagiarism laws.” How to Add Citations in an Essay, Allison Boyer.

Various Google searches on October 22, 2013 for equivalent guidelines for tweeting scholarly references produced no relevant results, beyond guidance on structuring the actual form of the citation (http://ucanr.edu/blogs/blogcore/postdetail.cfm?postnum=11505). However, there are many resources to support the use of Twitter in the K-12 teaching environment (e.g. http://www.teachhub.com/50-ways-use-twitter-classroom). It seems like a reasonable assumption that people’s first contact with social media will be away from the support of the academic community, and that individual practice will develop in a varied social environment.

Although statistics relating to negative citation are well-known (Bornmann and Daniel report a 5% incidence) there is a distinct contrast when it comes to abusive expression of power relations in social media. Scopus has indexed 30 papers with “cyberbully” in the title or abstract, and Schenk and Fremouw (5) report that 8.6% of college students have been subjected to cyberbullying.

These two observations – that people learn to use social networks away from an academic environment, and that the expression of power relations (at least on Twitter) is common – may lead us to conclude that social citation, at least in the sense of public reference, may be less characteristic of normative citation practice.

Developing a methodology

 Altmetric data is complex and varied: in order to study it, it is necessary to simplify and normalize the data. For example, the usage figures of social networks vary across time, with networks drifting in and out of fashion, being subject to phases of organic growth and early adopter use, and with operators controlling access to data via their APIs.

Fenner and Lin (2) propose a scale of increasing engagement with the article (1 is the lowest level of engagement, 5 the highest):

1. Viewing: the activity of accessing the article online.

2. Saving: storing and referencing of articles (or references) in online tools such as Mendeley or Citeulike.

3. Discussing: Ranging from tweeting to blogging.

4. Recommending: formal endorsement of a paper, e.g. F1000Prime.

5. Citing: formal citation of an article in another article.

Developing the idea of Lin and Fenner’s taxonomy of social citation / usage behavior – albeit with some critical changes and without the idea of developing engagement with the article – it is possible to make sense of types of altmetric behavior. Rather than attributing motivation - or assuming that tweets are a deeper level of engagement than reading the article - I propose classifying activity according to the level of engagement with the behavior, as defined by the user’s choice of platform:

  • Social activity – characterized by rapid, brief engagement by users on platforms used by the general population – Twitter, Facebook, Delicious, etc.
  • Component re-use – the re-use of the constituent elements of the research product – data, figures and code.
  • Scholarly commentary – in-depth engagement by people using scholarly platforms, such as Science Blogs, F1000Prime reviews, etc.
  • Scholarly activity – indirect measurement of activity by people using scholarly platforms, e.g., Mendeley, Zotero, Citeulike.
  • Mass media coverage – coverage of research output in the mass media.
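This classification can be sketched as a simple platform-to-class lookup; the platform lists come from the examples above, and the mapping is illustrative rather than exhaustive:

```python
# A sketch of the proposed classification: platform name -> engagement class.
# The platform lists are taken from the examples in the text and are not
# intended to be complete.
PLATFORM_CLASS = {
    "twitter": "social activity",
    "facebook": "social activity",
    "delicious": "social activity",
    "github": "component re-use",
    "datadryad": "component re-use",
    "scienceblogs": "scholarly commentary",
    "f1000prime": "scholarly commentary",
    "mendeley": "scholarly activity",
    "zotero": "scholarly activity",
    "citeulike": "scholarly activity",
}

def classify(platform):
    """Classify an altmetric event by the user's choice of platform,
    rather than by any assumed motivation."""
    return PLATFORM_CLASS.get(platform.lower(), "unclassified")

print(classify("Mendeley"))
```

Classifying by platform in this way avoids ascribing motivation to the user; an unknown platform is simply left unclassified until the taxonomy is extended.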

Any well-defined and meaningful collection of data should present two characteristics:

1) the sources that comprise an instance of data (e.g., social activity) should correlate well – for example, if the data is measuring the same class of activity, we should see tight correlation of activities between Twitter, Facebook, Delicious, etc.

and

2) each class should show discrete phenomena of activity.
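The first property can be checked with an ordinary correlation measure; as a sketch, with invented per-article counts for two sources in a hypothesized class:

```python
# Hypothetical per-article counts for two sources in the proposed
# 'social activity' class; the numbers are invented for illustration.
twitter  = [12, 3, 45, 7, 30, 2]
facebook = [10, 4, 40, 6, 28, 1]

def pearson(xs, ys):
    """Pearson's r for two equal-length count series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"Pearson r = {pearson(twitter, facebook):.2f}")
```

A value of r close to 1.0 across many articles would support treating the two sources as one class; a weak correlation would suggest the class is too coarse.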

Both of these are readily testable, and as altmetrics grows to encompass more datasets, it should be able to accommodate further classes of data. For example:

  • Social activity surrounding mass media – comments, tweets, etc., linking to mass media coverage of scholarly output.
  • References in books and monographs.
  • Use of scholarly research in commercial activity, e.g., patents.
  • Use of scholarly research in legislation and governmental context.
  • Self-promotion, e.g., additional content to support use of research, press releases.

In each case, the legitimacy of the distinctness of the classes and the difference between the classes can be readily tested. In order to validate the uses of the classes to describe motivational behavior and to discover causal patterns between the different types of activity, it is necessary to engage in qualitative research – methods that have been exhaustively researched by the bibliometric researchers reported in Bornmann and Daniel. It is possible that some of this work may be aided by text-mining and entity-recognition techniques, as used in natural language processing research, but any attempt to ascribe motivation to social users will require surveys and interviews.

If the classes of altmetric activity are validated as distinct and internally consistent, then several research steps might follow:

  • Identifying statistical trends between the classes.
  • Qualitative analysis to understand causation.
  • Surveys to acquire evidence of motivation.
  • Understanding the likely consequences of ‘gaming’ behavior, e.g. buying tweets, encouraging colleagues to load papers into Mendeley, etc.
  • Understanding how behavior changes as a consequence of legitimate promotion.
  • Qualifying social citation / social network activity between disciplines, professionals and as the platforms develop.
  • Discovering how combinations of classes can contribute to the understanding of potential use cases for altmetric data.

Considering this last point, there are many different issues that might be understood via a properly formulated study of altmetrics and bibliometrics. Given the pragmatic nature of altmetrics, the potential methodologies are varied, and this list is advanced as a discussion point.

 

  1. Prediction of ultimate citation – although it has been speculated that some altmetric data might enable a prediction of future citation rates, research has not yet demonstrated a correlation between Twitter counts and citation (Haustein et al 2013) (6). However, disciplines are likely to vary in their adoption of different types of activity, so this work – which may be added to other research that attempts to predict citation rates – will continue to look for correlations in data (7).

  2. Measuring / recognizing component re-use / preparatory work / reproducibility – a distinctive strand of altmetrics research is focused on measuring re-use of scholarly materials. This is of interest to funders and institutions in its own right; however, making data, code, etc. freely available may lead to increases in reproducibility and reliability. Nevertheless, work would need to be undertaken to understand the extent to which data (etc.) is reused simply because it is available, or well curated, rather than driven by scholarly need.

  3. Hidden impact (impact without citation) – there has been speculation that some articles may have an impact that is not detected using bibliographic citation analysis. For example, ”How to choose a good scientific problem” (8) has only been cited 4 times, according to Scopus, but has been shared on Mendeley nearly 42,000 times as of October 31, 2013.

  4. Real-time filtering / real-time evaluation of important / impactful articles relies on both a qualitative and quantitative analysis of real-time data. However, it is unknown if there is sufficient data to make this work at a sufficiently fine granularity, whether this is of use to scholars and whether they would trust such a system.

  5. Platform / publisher / institution comparison – although altmetrics can be used to gauge how effective organizations and authors are at providing social sharing tools, there has been no research on what this data might mean in terms of quality of research, rather than the more obvious values of being a ‘good read’, titillation or scandal.

  6. Measuring social reach / estimating social impact – evidently a crucial part of communicating research outcomes to society is the ability to communicate, and altmetrics could be used as a starting point to understand the flow of research impact in society – if it expands its remit, if issues of privacy remain of low concern, and if citation practices outside academia improve (https://www.researchtrends.com/issue-33-june-2013/the-challenges-of-measuring-social-impact-using-altmetrics/).

 

Conclusion

The outcome of research in this area should be to align the studies of altmetrics and bibliometrics by developing a common theoretical model that allows for analysis of all forms of accessible reference to scholarly objects: in short, a model of the scholarly network.

Such an ambition would allow for the commonalities between formal citation and altmetric activity, and for understanding the differences. By accepting that different forms of citation or reference take place in environments with different attributes and motivations, we will achieve a richer view of both bibliometric activity and social citation.

Acknowledgements

The author is indebted to the many people who are passionate about understanding scholarly communication and always ready to spend time discussing the issues, most notably Dr Stefanie Haustein, Dr Henk Moed, Euan Adie of Altmetric.com, Gregg Gordon of SSRN and many others.

References

(1) Priem et al, “Altmetrics in the Wild”, Available at: http://jasonpriem.org/self-archived/PLoS-altmetrics-sigmetrics11-abstract.pdf
(2) Lin, J., Fenner, M. (2013) “Altmetrics in Evolution: Defining and Redefining the Ontology of Article-Level Metrics”, Information Standards Quarterly, Vol. 25, No. 2, pp. 20, DOI: 10.3789/isqv25no2.2013.04
(3) Bornmann, L., Daniel, H. (2008) “What do citation counts measure? A review of studies on citing behavior”, Journal of Documentation, Vol. 64, No. 1, pp. 45-80, DOI: 10.1108/00220410810844150
(4) Cronin, B. (2005) “A hundred million acts of whimsy?”, Available at: http://www.iisc.ernet.in/currsci/nov102005/1505.pdf
(5) Schenk, A.M., Fremouw, W.J. (2012) “Prevalence, Psychological Impact, and Coping of Cyberbully Victims Among College Students”, Journal of School Violence, Vol. 11, No. 1, pp. 21-37, DOI: 10.1080/15388220.2011.630310
(6) Haustein, S., Peters, I., Sugimoto, C.R., Thelwall, M., Larivière, V., “Tweeting biomedicine: an analysis of tweets and citations in the biomedical literature”, Available at: http://arxiv.org/ftp/arxiv/papers/1308/1308.1838.pdf
(7) Yan, R., Huang, C., Tang, J., Zhang, Y., & Xiaoming, L. (2012) “To Better Stand on the Shoulder of Giants: Learning to Identify Potentially Influential Literature" Available at: http://keg.cs.tsinghua.edu.cn/jietang/publications/JCDL12-Yan-et-al-To-Better-Stand-on-the-Shoulder-of-Giants.pdf
(8) Alon, U. (2009) “How To Choose a Good Scientific Problem”, Molecular Cell, Vol. 35, No. 6, pp. 726-728.

 Related presentations

Taylor, M. (2013) “140 Characters in Search of a Meaning: Incorporating Motivation into Altmetrics”, Available at: http://dx.doi.org/10.6084/m9.figshare.821283 (Retrieved 12:10, Oct 23, 2013 (GMT))
Taylor, M. (2013) “The Many Faces of Altmetrics Mapping the Social Reach of Research”, Available at: http://dx.doi.org/10.6084/m9.figshare.820136 (Retrieved 12:13, Oct 23, 2013 (GMT))
VN:F [1.9.22_1171]
Rating: 0.0/10 (0 votes cast)

Reporting back: This article is based on presentations that Mike Taylor gave at the PLoS article level metrics workshop in San Francisco and at the World Social Science Forum (WSSF) in Montreal, both in October 2013

The increasing visibility of scholarly communication and discussion has led to a dramatic increase in the complexity of understanding its academic impact and social reach.

Although the nature of the communication has many different forms, with radically different attributes, it is generally treated as a singular entity: that of altmetrics.

In fact, it is arguable that the creation of altmetrics as a singular entity was technocratic (driven by what is technically possible) and thus pragmatic (built from what is available), rather than rooted in a theoretical discipline, and, had the different sources emerged at different times, or been accessed via different technical solutions, they would have been kept discrete.

The fundamental differences are readily apparent. For example, when one tweets a reference to a paper, it can be observed that the communication is necessarily brief, and is unlikely to have taken much time or thought. Frequently it is in the form of a ‘retweet’ and can be classified as the mere repetition of a message through personal networks.

The effort taken to tweet a link or reference may be contrasted to a blog post, where the intended recipient may well be the original research team, as well as others interested in this academic area. Other forms of scholarly blogs link to papers when attempting to précis the content for a non-academic audience (http://realclimate.org/), or engage misleading and mendacious uses of research to promote commercial and political aims - a less scholarly endeavor that nonetheless still contains links and discussion.

Nevertheless, both blogs and tweets can be said to have the explicit intention of being public: this can be contrasted with anonymous data that can be harvested and interpreted from many other sites. Of course, formal citation in a peer-reviewed article is also a public act, and this serves to introduce two other important criteria: that of context and immediacy. A tweet may have virtually no context (being only a reference to a paper), whereas a blog post may be several thousand words long. Similarly, a tweet may be an immediate act of impetuosity, whereas a citation in a peer-reviewed paper will necessarily take a longer period.

However, focusing on the issue of privacy: reading or downloading of articles may be considered as a private act in a study room, but user activity counts (and other demographic information) aggregating such acts and provided by tools such as Mendeley, Citeulike, GitHub and DataDryad are often included in publicly available altmetric data, as can be article-level-usage figures from publisher sites.

With the exception of people who are trying deliberately to distort data (for example, by repeatedly downloading an article – a practice which publishers work hard to counter), little is known of how mindful people are of the public nature or use of their activity and how this affects their behavior.

Therefore altmetrics consists of a wide variety of data with different characteristics, linked by a common set of tools. Data is typically accessed via an API (application programming interface), papers referenced by DOIs (digital object identifiers), and the platforms from which the data is gathered are social: this defines the set of data, rather than provides a theoretical foundation. It is not surprising, therefore, that little is known about the intentional, motivational or experiential motives of the users.

When a user posts a paper on Mendeley, we can hypothesize various motives including (but not limited to) the following:

  • Other people might be interested in this paper.
  • I might read this paper in the future.
  • I have read this paper and want it to be easily findable.
  • I want other people to think I have read this paper.
  • It is my paper, and I maintain my own library.
  • It is my paper, and I want people to read it.
  • It is my paper, and I want people to see that I wrote it.
  • I might skim read this paper in the future because I suspect it might back up an argument I’m thinking about making and it looks like it would make a useful citation.

With Twitter, the poster may choose to call attention to their tweet, to direct people to their response, may address the tweet to the authors, or may add inflections by the arbitrary (or organized) use of hashtags.

Each example of altmetric data has its own set of potential underlying motives, and each example requires different research: tweets may be subject to qualitative research, but are less easily studied by user surveys, for example. It would, of course, be possible (although time-consuming) to monitor tweets and ask the tweeter to complete a survey on their motivations for the individual tweet, but the time taken to survey would probably be disproportionately longer than the time taken to compose and post the original tweet.

To date, altmetric research has focused more on correlation (Priem et al, 1) than on motivation, and has relied upon assumptions rather than empirical evidence to postulate the relative level of engagement with an article (Fenner and Lin, 2).

Fifty years of relevant research

 The related field of bibliometrics has – since 1962 – conducted a significant quantity of research in the field of motivation of citation. Amongst the many intellectual assets available for potential re-purpose are theoretical models, methodologies, data sets and references. Bornmann and Daniel’s 2008 article, “What do citation counts measure? A review of studies on citing behavior” (3) reviews the extensive literature and reports the conclusions of this research. However, with the exception of Priem et al’s passing reference to this review, a search on Scopus reveals that of the 162 citations made to this paper, not one of them appears to be related to altmetrics.

The scholarly research into reference and citation attempted to test two potential theories of citation motivation: normative and social constructivist. Broadly speaking, the two camps maybe positioned as:

1. “Scientists give credit to colleagues whose work they use by citing that work” versus

2. “Scientific knowledge is socially constructed through the manipulation of political and financial resources and the use of rhetorical devices” (reported in 3)

After fifty years of research, Cronin was able to summarize the weight of evidence in favor of the normative view:

“The weight of empirical evidence seems to suggest that scientists typically cite the works of their peers in a normatively guided manner and that these signs (citations) perform a mutually intelligible communicative function” (4)

Shortly after the inception of bibliometrics, Eugene Garfield (1962, as reported in 3) listed fifteen possible motivations to cite:

1. Paying homage to pioneers;

2. Giving credit for related work (homage to peers);

3. Identifying methodology, equipment, etc.;

4. Providing background reading;

5. Correcting one’s own work;

6. Correcting the work of others;

7. Criticizing previous work;

8. Substantiating claims;

9. Alerting to forthcoming work;

10. Providing leads to poorly disseminated, poorly indexed, or uncited work;

11. Authenticating data and classes of fact (physical constants, etc.);

12. Identifying original publications in which an idea or concept was discussed;

13. Identifying original publication or other work describing an eponymic concept or term (...);

14. Disclaiming work or ideas of others (negative claims); and

15. Disputing priority claims of others (negative homage).

All of these are as relevant to social citation in 2013 as they were to formal citation in 1962; and the added visibility and speed of activity in social networks only adds to the list, for example:

16. Building a network of related researchers;

17. Building a reputation as a good networker;

18. Paying visible homage to a senior researcher;

19. Seeking the attention of a senior researcher;

20. Demonstrating that one’s reading is up to date; and

21. Intimidating critics with the breadth of one’s reading.

There are many more motivations that can be added to this list.

That there should be general agreement  on the nature of formal citation should come as little surprise: learning how to reference, or “show your reading” is a skill that is taught from an early age. Many websites exist to support and develop best citation practice, even going to the length of invoking the law to encourage completion:

“If you do not include your references both in your essay and on a reference sheet at the end of your essay, you could face legal action for being in violation of plagiarism laws.” How to Add Citations in an Essay, Allison Boyer.

Various Google searches on October 22, 2013 for equivalent guidelines for tweeting scholarly references produced no relevant results, beyond guidance on structuring the actual form of the citation (http://ucanr.edu/blogs/blogcore/postdetail.cfm?postnum=11505). However, there are many resources to support the use of Twitter in the K-12 teaching environment (e.g. http://www.teachhub.com/50-ways-use-twitter-classroom). It seems like a reasonable assumption that people’s first contact with social media will be away from the support of the academic community, and that individual practice will develop in a varied social environment.

Although statistics relating to negative citation are well-known (Bornmann and Daniel report a 5% incidence) there is a distinct contrast when it comes to abusive expression of power relations in social media. Scopus has indexed 30 papers with “cyberbully” in the title or abstract, and Schenk and Fremouw (5) report that 8.6% of college students have been subjected to cyberbullying.

The two observations: that people learn to use social networks away from an academic environment, and that the expression of power relations (at least in Twitter) is common may lead us to conclude that social citation – at least in the sense of public reference - may be less characteristic of normative citation practice.

Developing a methodology

 Altmetric data is complex and varied: in order to study it, it is necessary to simplify and normalize the data. For example, the usage figures of social networks vary across time, with networks drifting in and out of fashion, being subject to phases of organic growth and early adopter use, and with operators controlling access to data via their APIs.

 Increasing engagement with the article, Fenner and Lin (1 is lowest level of engagement, 5 is maximum):

1. Viewing: the activity of accessing the article online.

2. Saving: storing and referencing of articles (or references) in online tools such as Mendeley or Citeulike.

3. Discussing: Ranging from tweeting to blogging.

4. Recommending: formal endorsement of a paper, e.g. F1000Prime.

5. Citating: formal citation of an article in another article.

Developing the idea of Lin and Fenner’s taxonomy of social citation / usage behavior – albeit with some critical changes and without the idea of developing engagement with the article – it is possible to make sense of types of altmetric behavior. Rather than attributing motivation - or assuming that tweets are a deeper level of engagement than reading the article - I propose classifying activity according to the level of engagement with the behavior, as defined by the user’s choice of platform:

  • Social activity – characterized by rapid, brief engagement by users on platforms used by the general population – Twitter, Facebook, Delicious, etc.
  • Component re-use – the re-use of the constituent elements of the research product – data, figures and code.
  • Scholarly commentary – in-depth engagement by people using scholarly platforms, such as Science Blogs, F1000Prime reviews, etc.
  • Scholarly activity – indirect measurement of activity by people using scholarly platforms, e.g., Mendeley, Zotero, Citeulike.
  • Mass media coverage – coverage of research output in the mass media.
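In practical terms, such a classification is a simple lookup from platform to class. The sketch below (with an illustrative and far from complete platform list – the mapping is mine, not a standard one) shows the shape of the idea:

```python
# Illustrative mapping of altmetric source platforms to the five classes
# proposed above. The platform list is a hypothetical sample, not exhaustive.
CLASS_BY_PLATFORM = {
    "twitter": "social activity",
    "facebook": "social activity",
    "delicious": "social activity",
    "figshare": "component re-use",
    "github": "component re-use",
    "scienceblogs": "scholarly commentary",
    "f1000prime": "scholarly commentary",
    "mendeley": "scholarly activity",
    "zotero": "scholarly activity",
    "citeulike": "scholarly activity",
    "bbc_news": "mass media coverage",
}

def classify(event: dict) -> str:
    """Return the activity class for an altmetric event, or 'unclassified'."""
    return CLASS_BY_PLATFORM.get(event.get("platform", "").lower(), "unclassified")

events = [{"platform": "Twitter"}, {"platform": "Mendeley"}, {"platform": "GitHub"}]
print([classify(e) for e in events])
# ['social activity', 'scholarly activity', 'component re-use']
```

The point of keying on platform alone is that no motivation is attributed: a tweet is “social activity” regardless of who sent it or why.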

Any well-defined and meaningful collection of data should present two characteristics:

1) the sources that comprise a class of data (e.g., social activity) should correlate well – for example, if the data is measuring the same class of activity, we should see tight correlation of activities between Twitter, Facebook, Delicious, etc.

and

2) each class should show discrete phenomena of activity.

Both of these are readily testable, and as altmetrics grows to encompass more datasets, it should be able to accommodate further classes of data. For example:

  • Social activity surrounding mass media – comments, tweets, etc., linking to mass media coverage of scholarly output.
  • References in books and monographs.
  • Use of scholarly research in commercial activity, e.g., patents.
  • Use of scholarly research in legislation and governmental context.
  • Self-promotion, e.g., additional content to support use of research, press releases.
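As a sketch of test (1) above: given per-article counts from two sources, a correlation coefficient indicates whether they measure the same class of activity, and a cross-class comparison tests distinctness. The counts below are invented for illustration only:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-article counts (invented numbers):
tweets   = [120, 4, 33, 0, 58, 210, 7]
facebook = [95, 2, 40, 1, 44, 180, 10]
mendeley = [300, 15, 12, 90, 5, 60, 400]

within_class = pearson(tweets, facebook)  # should be high if "social activity" is coherent
across_class = pearson(tweets, mendeley)  # should be lower if the classes are distinct
print(round(within_class, 2), round(across_class, 2))
```

A real test would of course use observed counts across a large article sample, and a rank correlation may be preferable given how skewed altmetric counts are.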

In each case, the distinctness of the classes and the differences between them can be readily tested. In order to validate the use of the classes to describe motivational behavior, and to discover causal patterns between the different types of activity, it is necessary to engage in qualitative research – methods that have been exhaustively explored by the bibliometric researchers surveyed in Bornmann and Daniel (3). It is possible that some of this work may be aided by text-mining and entity-recognition techniques, as used in natural language processing research, but any attempt to ascribe motivation to social users will require surveys and interviews.

If the classes of altmetric activity are validated as distinct and internally consistent, then several research steps might follow:

  • Identifying statistical trends between the classes.
  • Qualitative analysis to understand causation.
  • Surveys to acquire evidence of motivation.
  • Understanding the likely consequences of ‘gaming’ behavior, e.g. buying tweets, encouraging colleagues to load papers into Mendeley, etc.
  • Understanding how behavior changes as a consequence of legitimate promotion.
  • Qualifying social citation / social network activity between disciplines, professionals and as the platforms develop.
  • Discovering how combinations of classes can contribute to the understanding of potential use cases for altmetric data.

Considering this last point, there are many different issues that might be understood via a properly formulated study of altmetrics and bibliometrics. Given the pragmatic nature of altmetrics, the potential methodologies are varied, and this list is advanced as a discussion point.

 

  1. Prediction of ultimate citation – although it has been speculated that some altmetric data might enable prediction of future citation rates, research has not yet demonstrated a correlation between Twitter counts and citations (Haustein et al., 2013) (6). However, disciplines are likely to vary in their adoption of different types of activity, so this work – which may be added to other research that attempts to predict citation rates – will continue to look for correlations in the data (7).

  2. Measuring / recognizing component re-use / preparatory work / reproducibility – a distinctive strand of altmetrics research is focused on measuring re-use of scholarly materials. This is of interest to funders and institutions in its own right; however, making data, code, etc. freely available may lead to increases in reproducibility and reliability. Nevertheless, work would need to be undertaken to understand the extent to which data (etc.) is reused simply because it is available, or well curated, rather than driven by scholarly need.

  3. Hidden impact (impact without citation) – there has been speculation that some articles may have an impact that is not detected using bibliographic citation analysis. For example, “How to choose a good scientific problem” (8) has been cited only 4 times, according to Scopus, but had been saved on Mendeley nearly 42,000 times as of October 31, 2013.

  4. Real-time filtering / real-time evaluation of important / impactful articles relies on both a qualitative and quantitative analysis of real-time data. However, it is unknown if there is sufficient data to make this work at a sufficiently fine granularity, whether this is of use to scholars and whether they would trust such a system.

  5. Platform / publisher / institution comparison – although altmetrics can be used to gauge how effective organizations and authors are at providing social sharing tools, there has been no research on what this data might mean in terms of the quality of research, beyond the more obvious values of being a ‘good read’, titillation or scandal.

  6. Measuring social reach / estimating social impact – evidently, a crucial part of communicating research outcomes to society is the ability to communicate, and altmetrics could be used as a starting point for understanding the flow of research impact in society – provided that it expands its remit, that issues of privacy remain of low concern, and that citation practices outside academia improve (https://www.researchtrends.com/issue-33-june-2013/the-challenges-of-measuring-social-impact-using-altmetrics/).

 

Conclusion

The outcome of research in this area should be to align the studies of altmetrics and bibliometrics by developing a common theoretical model that allows for analysis of all forms of accessible reference to scholarly objects: in short, a model of the scholarly network.

Such an ambition would allow for the commonalities between formal citation and altmetric activity, and for understanding the differences. By accepting that different forms of citation or reference take place in environments with different attributes and motivations, we will achieve a richer view of both bibliometric activity and social citation.

Acknowledgements

The author is indebted to the many people who are passionate about understanding scholarly communication and always ready to spend time discussing the issues, most notably Dr Stefanie Haustein, Dr Henk Moed, Euan Adie of Altmetric.com, Gregg Gordon of SSRN and many others.

References

(1) Priem et al, “Altmetrics in the Wild”, Available at: http://jasonpriem.org/self-archived/PLoS-altmetrics-sigmetrics11-abstract.pdf
(2) Lin, J., Fenner, M. (2013) “Altmetrics in Evolution: Defining and Redefining the Ontology of Article-Level Metrics”, Information Standards Quarterly, Vol. 25, No. 2, pp. 20, Available at: 10.3789/isqv25no2.2013.04
(3) Bornmann, L., Daniel, H. (2008) “What do citation counts measure? A review of studies on citing behavior”, Journal of Documentation, Vol. 64, No. 1, pp. 45-80, Available at: 10.1108/00220410810844150
(4) Cronin, B. (2005) “A hundred million acts of whimsy?”, Available at: http://www.iisc.ernet.in/currsci/nov102005/1505.pdf
(5) Schenk, A.M., Fremouw, W.J. (2012) “Prevalence, Psychological Impact, and Coping of Cyberbully Victims Among College Students”, Journal of School Violence, Vol. 11, No. 1, pp. 21-37, Available at: 10.1080/15388220.2011.630310
(6) Haustein, S., Peters, I., Sugimoto, C.R., Thelwall, M., Larivière, V. (2013) “Tweeting biomedicine: an analysis of tweets and citations in the biomedical literature”, Available at: http://arxiv.org/ftp/arxiv/papers/1308/1308.1838.pdf
(7) Yan, R., Huang, C., Tang, J., Zhang, Y., & Xiaoming, L. (2012) “To Better Stand on the Shoulder of Giants: Learning to Identify Potentially Influential Literature" Available at: http://keg.cs.tsinghua.edu.cn/jietang/publications/JCDL12-Yan-et-al-To-Better-Stand-on-the-Shoulder-of-Giants.pdf
(8) Alon, U. (2009) “How To Choose a Good Scientific Problem”, Molecular Cell, Vol. 35, No. 6, pp. 726-728.

 Related presentations

Taylor, M. (2013) “140 Characters in Search of a Meaning: Incorporating Motivation into Altmetrics”, Available at: http://dx.doi.org/10.6084/m9.figshare.821283 (Retrieved 12:10, Oct 23, 2013 (GMT))
Taylor, M. (2013) “The Many Faces of Altmetrics: Mapping the Social Reach of Research”, Available at: http://dx.doi.org/10.6084/m9.figshare.820136 (Retrieved 12:13, Oct 23, 2013 (GMT))

How to overcome common obstacles to publishing in English

For this issue on Research in Developing countries, Dr. Daphne van Weijen takes a closer look at the challenges researchers from developing and/or non-English speaking countries face when trying to get their work published, and suggests what they can do to overcome them.

Read more >


For this issue on research in developing countries, we decided to take a closer look at the challenges researchers from developing and/or non-English speaking countries face when trying to get their work published, and determine what they can do to overcome them.

To answer this question, a literature review was carried out in Scopus using a number of keywords, including: scholarly communication, Africa, language issue, publication language, multilingual scholars, third world, developing country/countries, and scholarly publication. After de-duplication of the output of multiple searches this resulted in a list of 139 articles and reviews. The abstracts of all articles in this list were then carefully reviewed, and those that were not deemed relevant for this piece were removed. The final list included 73 articles and reviews, published between 1996 and 2013.
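The de-duplication step described above can be sketched as follows; the record fields (doi, title) and sample data are illustrative assumptions, not the actual Scopus export format:

```python
# Merge the result lists of several keyword searches and drop duplicates,
# keyed on DOI when present and on a whitespace-normalized title otherwise.
def dedupe(records):
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or " ".join(rec["title"].lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

hits = [
    {"doi": "10.1000/x1", "title": "Publishing in English"},
    {"doi": "10.1000/x1", "title": "Publishing in English"},   # same DOI, other search
    {"title": "Scholarly  communication in Africa"},
    {"title": "Scholarly communication in Africa"},            # same title, spacing differs
]
print(len(dedupe(hits)))  # 2
```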

Figure 1 – A word map based on the titles and abstracts of 73 selected articles and reviews. Source: Wordle
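A frequency count of the kind that underlies such a word map can be sketched as follows; the stop-word list and sample titles below are illustrative only:

```python
# Tokenize titles/abstracts, drop stop words and very short tokens, and count.
import re
from collections import Counter

STOP = {"the", "and", "of", "in", "to", "a", "for", "on", "by", "with", "is"}

def word_frequencies(texts):
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z]+", text.lower()):
            if word not in STOP and len(word) > 2:
                counts[word] += 1
    return counts

abstracts = [
    "Multilingual scholars and the imperative to publish in English",
    "Scholarly publishing in sub-Saharan Africa: challenges for research",
    "English for research publication purposes",
]
print(word_frequencies(abstracts).most_common(3))
```

Tools such as Wordle then scale each word’s font size in proportion to these counts.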

Figure 1 shows a Wordle (1) word map generated based on the titles and abstracts of all the articles and reviews in the list. The larger the font size of the word, the more frequently it occurs in the articles included in this literature review. Words that stand out include: research, English, publication, journal(s), publishing and language. In other words, it’s clear, and not surprising, that publishing is a strong theme in these articles. But what are the actual obstacles that researchers from developing countries and/or non-English speaking countries face when trying to get their work published?

 

Obstacles to overcome

A review of multilingual scholars’ participation in global academic communities revealed a number of problems that researchers face when trying to publish their work in international, English-language journals (2). The most obvious obstacles are related to language. For example, writing in English is cognitively more demanding for non-native speakers than for native speakers, which can make the writing process far more time consuming. Second, linguistic errors in a manuscript, or the use of a rhetorical or regional style that does not match that of the English-language research community, can negatively influence the outcome of the peer review process (3), particularly if the research being described is of mediocre rather than outstanding quality. Furthermore, non-native speakers can have difficulty paraphrasing the work of others, and so run the risk of unintentional plagiarism (4). Finally, non-native speakers are sometimes less familiar with the rhetorical styles favored by the English-language research community.

Other potential obstacles include a lack of connections to key members of the disciplinary community, potential bias towards manuscript submission by non-English speaking authors (see also 5), scarcity of funding to conduct research, and a general focus on ‘local’ research and collaboration with neighboring countries rather than on wider international collaboration (2).

Another study focusing specifically on challenges related to scholarly publishing in sub-Saharan Africa revealed that: “Scholarly publishing in sub-Saharan Africa faces numerous challenges, including technological, socio-political, and economic challenges as well as an environment that does not favor scholarly publishing” (6). Specific obstacles raised in this study include the lack of participation in scholarly conferences, brain drain and technological challenges. For example, lack of Internet access makes it very difficult for scholars in the region to submit their work electronically, to access electronic journal content online, or to act as reviewers using electronic submission systems. As a result of these, and other potential obstacles: “Very few articles published by scholars from sub-Saharan Africa may become citation classics or even find a place in the list of key papers on the emerging research fronts” (6).

Possible solutions

There are clearly many potential obstacles that researchers from developing countries and/or non-English speaking countries face when trying to get their work published. But what can they do to try to overcome them?

Recommendations for researchers to increase their chances for publication success include:

(1) Be patient and persistent: do not give up too quickly. If your paper is rejected, use the Editors’ and reviewers’ comments and feedback to further improve the quality of your paper. If at first you don’t succeed, try again (2).

(2) Collaborate with other researchers: make contact with other, more experienced, researchers whenever possible, and look for potential areas of collaboration (7).

(3) Imitate the style of others: read papers in your field of research by prominent researchers and try to mirror their rhetorical styles (2, 7). However, beware of committing plagiarism. For information on publishing ethics, see the Ethics Toolkit.

(4) Adhere to journal guidelines: make sure you read the journal’s guidelines carefully and comply with them before submitting your paper (7, 8).

(5) Linguistic editing: “The importance of good English language usage cannot be over-emphasized. […] Should one do a full language check before sending in an article? Although it is expensive and time consuming, the answer is YES” (9). So when in doubt, ask someone to review your manuscript before submitting it. But opinions differ on whether this should be a professional corrector, a local editor, a language service provider, a convenience editor (e.g. an English-speaking colleague), or a literacy broker (10, 11).

(6) Find the right outlet for your work: Some journals are more open to publishing work by non-native English researchers than others. Investing some time in finding the right journal, by checking journal websites, and reviewing work they’ve already published can also help increase your chance of publication success.

(7) Increase the visibility of your work: help increase the visibility of your research findings by maintaining a website for your research team, blogging about your results, using social media, and considering submission to a more visible (Open Access) journal.

References

(1) Wordle.net
(2) Uzuner, S. (2008) “Multilingual scholars’ participation in core/global academic communities: A literature review”, Journal of English for Academic Purposes, Vol. 7, No. 4, pp. 250 – 263.
(3) Curry, M.J., & Lillis, T. (2004) “Multilingual scholars and the imperative to publish in English: Negotiating interests, demands, and rewards”, TESOL Quarterly, Vol. 38, No. 4, pp. 663 – 688.
(4) Shi, L. (2012) “Rewriting and paraphrasing source texts in second language writing”, Journal of Second Language Writing, Vol. 21, No. 2, pp. 134 – 148.
(5) Kliewer, M.A., DeLong, D.M., Freed, K., Jenkins, C.B., Paulson, E.K., Provenzale, J.M. (2004) “Peer review at the American Journal of Roentgenology: How reviewer and manuscript characteristics affected editorial decisions on 196 major papers”, American Journal of Roentgenology, Vol. 183, No. 6, pp. 1545 – 1550.
(6) Ondari-Okemwa, E. (2007) “Scholarly publishing in sub-Saharan Africa in the twenty-first century: Challenges and opportunities”, First Monday, Vol. 12, No. 10.
(7) Liu, J. (2004) “Co-constructing academic discourse from the periphery: Chinese applied linguists’ centripetal participation in scholarly publication”, Asian Journal of English Language Teaching, Vol. 14, pp. 1 - 22.
(8) Thrower, P. (2012) “'Eight reasons I rejected your article', A journal editor reveals the top reasons so many manuscripts don’t make it to the peer review process”, Available at: http://www.elsevier.com/connect/8-reasons-i-rejected-your-article
(9) Babor, T.F., Stenius, K., Savva, S., O'Reilly, J. (eds) (2011), Publishing Addiction Science: a Guide for the Perplexed. (2nd edition) (Co-sponsored by the International Society of Addiction Journal Editors and the Society for the Study of Addiction), pp. 236. Brentwood, Essex: Multi-Science Publishing Co. Ltd.
(10) Curry, M.J. & Lillis, T. (2010) “Academic research networks: Accessing resources for English-medium publishing”, English for Specific Purposes, Vol. 29, No. 4, pp. 281–295.
(11) Burrough-Boenisch, J. (2003) “Shapers of published NNS research articles”, Journal of Second Language Writing, Vol. 12, No. 3, pp. 223–243.


Ancient medicine in modern times

Dr. Gali Halevi investigates the extent to which “Complementary” or “Alternative” medicine (CAM) has penetrated modern practices by means of a bibliometric study that looks at journals, articles and citations of alternative medicine in mainstream medical research.

Read more >


One of the interesting contributions to science made by countries in regions considered “developing” is what is labeled “Alternative” or “Complementary” medicine (CAM), which stems from practices carried through thousands of years into modern times. Methods of healing using local herbs and plants, or physical and spiritual elements, have penetrated Western medicine over the years, becoming an integral part of medical procedures in many countries around the world.

In this article, we sought to find out the extent to which CAM has penetrated modern practices by means of a bibliometric study that looks at journals, articles and citations of alternative medicine in mainstream medical research.

The diversity and sheer number of journals dedicated to Complementary or Alternative Medicine (CAM), and the countries from which they originate, are an interesting phenomenon. According to Ulrich’s Global Serials Directory (http://ulrichsweb.serialssolutions.com/) there are 358 active scientific journals categorized as CAM journals. Figure 1 visualizes the top publishing countries in the area of CAM. Perhaps not surprisingly, China is a leader in CAM journal publications. Many popular alternative medicine methods originated in China and were carried through thousands of years, making their way into Western medicine and culture. The same goes for India. It is interesting, though, to see leading Western countries such as the United States and Canada in North America, and the United Kingdom, Germany, and France in Europe, taking a leading role in the publication of CAM journals. This could be interpreted as an indicator of the high penetration rate of alternative medicine in the Western world.

Figure 1 - CAM journal publications by country. Source: Ulrich’s Global Serials Directory

 

The number of new CAM journals launched in the past few decades says a lot about the increasing scientific interest in alternative medicine. The largest growth in the number of journals published on the topic is seen from the 1960s to the 1980s, and from the 1990s to the 2000s (see Figure 2).

The growth of scientific journals focusing on CAM can be attributed to the direct funding CAM research received in the 1990s. In 1998, for example, the United States Congress established the National Center for Complementary and Alternative Medicine (NCCAM) at the National Institutes of Health. The NCCAM funds university-based centers for research on CAM. This funding stimulated researchers to apply for grants and conduct research in this area, the results of which are published in an ever-growing number of dedicated scientific journals. There is similar funding in Europe, where research funded nationally and by the European Commission is conducted by over 50 different foundations, universities and research centers (1). The availability of monetary means to support such research has a direct impact on publication growth.

Figure 2 - Growth in the number of CAM journals launched. Source: Ulrich’s Global Serials Directory

 

Based on Ulrich’s Global Serials Directory, English is the leading publication language of CAM journals, as can be seen in Figure 3. As such, CAM can be seen as part of the global scientific discourse, English being by far the most common language of scientific publication.

Figure 3 - Publication languages of CAM journals. Source: Ulrich’s Global Serials Directory

 

In addition to the trends in the growth of journals published in this area, we also examined the citation characteristics of the topic: in particular the growth of cited references to CAM journals, the top cited CAM journals, and the top citing journals from disciplines other than CAM. The purpose of this analysis was to establish how the topic has evolved in terms of scientific activity, and the manner of exchange between CAM research and other disciplines.

As the first step in our analysis, we conducted a Scopus search for all CAM journals indexed by Ulrich’s Global Serials Directory. We then retrieved all cited references to the CAM journals found. The cited references analyzed here include all publications, regardless of their coverage in Scopus. Since Scopus displays all references listed in each publication it indexes, whether or not their source is indexed in the database, it is possible to analyze them completely. It is important to note that this analysis depends on database coverage, and might yield different numbers of references if a different database were used.

As can be seen in Figure 4, there is evident and significant growth in the number of cited references to CAM journals and articles over the years. This could indicate a growing research agenda and scientific network around topics and issues related to CAM. Although these numbers do not differentiate between cited references appearing in articles in CAM-focused journals and those appearing in other disciplines, the mere growth of cited-reference exchange points to increased activity in this area of research.

Figure 4 - Cited references to CAM journals and articles. Source: Scopus
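The tally behind a trend figure like this can be sketched as follows; the record layout, journal list and sample data are assumptions for illustration, not the actual Scopus export:

```python
# Count cited references to CAM journals per citing year, given
# cited-reference records extracted from a database export.
from collections import Counter

CAM_JOURNALS = {
    "evidence-based complementary and alternative medicine",
    "journal of alternative and complementary medicine",
    "american journal of chinese medicine",
}

def cam_references_per_year(references):
    per_year = Counter()
    for ref in references:
        if ref["cited_journal"].lower() in CAM_JOURNALS:
            per_year[ref["citing_year"]] += 1
    return dict(sorted(per_year.items()))

refs = [
    {"citing_year": 2010, "cited_journal": "American Journal of Chinese Medicine"},
    {"citing_year": 2012, "cited_journal": "Journal of Alternative and Complementary Medicine"},
    {"citing_year": 2012, "cited_journal": "The Lancet"},  # not a CAM journal, excluded
]
print(cam_references_per_year(refs))  # {2010: 1, 2012: 1}
```

In practice, journal-name matching would need to handle title variants and abbreviations, which is a well-known difficulty in cited-reference analysis.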

 

In addition, we examined the most cited CAM journals. Figure 5 shows the top 10 most cited journals in this area, which include Evidence-Based Complementary and Alternative Medicine, the Journal of Alternative and Complementary Medicine and the American Journal of Chinese Medicine as the leading titles.

Figure 5 - Top cited CAM journals. Source: Scopus

 

Finally, we examined the top 50 most cited articles in CAM journals and looked at the top journals that cite them. As can be seen from Figure 6, PLOS One has the most citations to CAM articles. However, it must be noted that PLOS One is an exception, as it publishes thousands of articles every year, compared to other journals with more moderate publication rates. This by itself could account for its prominent place among the top citing journals. The inclusion of PLOS One in our analysis stems from the fact that it is a mainstream medical journal and thus fits the criteria of our analysis. CAM articles cited in PLOS One were mainly in the areas of Agricultural and Biological Sciences; Biochemistry, Genetics and Molecular Biology; and general Medicine. Other journals citing CAM articles include journals focusing on chemistry and pharmacology, such as the Journal of Ethnopharmacology and the Journal of Agricultural and Food Chemistry. In addition, journals focusing on internal medicine, pain management and cardiology also cite CAM articles.

RT35GH6

Figure 6 - Journals most frequently citing CAM periodicals. Source: Scopus

 

Summary

According to the 2009 “National Health Statistics Report” published by the the US Department of Health and Human Services, the American public spent over 33 billion dollars on CAM practitioners and purchases of CAM products, classes, and materials (2). As awareness of nutrition and general well-being as well as a holistic approach to health is gaining more momentum, more and more people seek self-care CAM therapies such as homeopathic products, yoga, and natural products. Moreover, Alternative Medicine and complimentary health services are becoming an integral part of mainstream medical practices and are in many countries sponsored by governments’ health systems. Practices that could have been considered esoteric or exotic a couple of decades ago, are now to be found almost everywhere in the world. These originated from regions and countries that are or were considered ‘developing’ and managed to penetrate and influence science, medicine and human well-being for years to come.

 

References

(1)    http://www.eiccam.eu/pdfs/eiccambrochurecomplete.pdf
(2)    http://www.cdc.gov/NCHS/data/nhsr/nhsr018.pdf

 

VN:F [1.9.22_1171]
Rating: 0.0/10 (0 votes cast)

One of the interesting contributions to science made by countries in regions considered “developing” is what is labeled “Complementary and Alternative Medicine” (CAM), which stems from practices carried through thousands of years into modern times. Methods of healing using local herbs and plants, or physical and spiritual elements, have penetrated Western medicine over the years, becoming an integral part of medical procedures in many countries around the world.

In this article, we sought to find out the extent to which CAM has penetrated modern practices by means of a bibliometric study that looks at journals, articles and citations of alternative medicine in mainstream medical research.

The diversity and sheer number of journals dedicated to Complementary and Alternative Medicine (CAM), and the range of countries from which they originate, is an interesting phenomenon. According to Ulrich’s Global Serials Directory (http://ulrichsweb.serialssolutions.com/) there are 358 active scientific journals categorized as CAM journals. Figure 1 visualizes the top publishing countries in the area of CAM. Perhaps not surprisingly, China is a leader in CAM journal publications: many popular alternative medicine methods originated in China and were carried through thousands of years, making their way into Western medicine and culture. The same goes for India. It is interesting, though, to see leading Western countries such as the United States and Canada in North America, and the United Kingdom, Germany and France in Europe, taking a leading role in the publication of CAM journals. This could be interpreted as an indicator of the high penetration rate of alternative medicine in the Western world.

RT35GH1

Figure 1 - CAM journals publication by country. Source: Ulrich’s Global Serial Directory

 

The number of new CAM journals launched in the past few decades says a lot about the increasing scientific interest in Alternative Medicine. The largest growth in the number of journals published on the topic is seen from the 1960s to the 1980s and from the 1990s to the 2000s (see Figure 2).

The growth of scientific journals focusing on CAM can be attributed to the direct funding of CAM research that became available in the 1990s. In 1998, for example, the United States Congress established the National Center for Complementary and Alternative Medicine (NCCAM) at the National Institutes of Health. The NCCAM funds university-based centers for research on CAM. This funding stimulated researchers to apply for grants and conduct research in this area, the results of which are published in an ever-growing number of dedicated scientific journals. There is similar funding in Europe, where research funded nationally and by the European Commission is conducted by over 50 different foundations, universities and research centers (1). The availability of monetary means to support such research has a direct impact on publication growth.

RT35GH2

Figure 2 - Growth in the number of CAM journals launched. Source: Ulrich’s Global Serial Directory

 

Based on Ulrich’s Global Serial Directory, English is the leading publication language of CAM journals, as can be seen in Figure 3. As such, CAM can be seen as part of the global scientific discourse, English being by far the most common language of scientific publication.

RT35GH3

Figure 3 - Publication languages of CAM journals. Source: Ulrich’s Global Serial Directory

 

In addition to the trends in the growth of journals published in this area, we also examined the citation characteristics of this topic: the growth of cited references to CAM journals, the top cited CAM journals, and the top citing journals from disciplines other than CAM. The purpose of this analysis was to establish how this topic has evolved in terms of its scientific activity and the manner of exchange between CAM research and other disciplines.

In the first step of our analysis, we conducted a Scopus search for all CAM journals indexed by Ulrich’s Global Serial Directory. We then retrieved all the cited references to the CAM journals found. The cited references analyzed here include all publications, regardless of their coverage in Scopus. Since Scopus displays all references listed in each publication it indexes, whether or not their source is itself indexed in the database, they can be analyzed in a complete manner. It is important to note that this analysis depends on database coverage and might yield different numbers of references if a different database were used.
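The counting step described above can be sketched in a few lines. The record layout and the abbreviated journal titles below are illustrative assumptions for this sketch, not the actual Scopus export schema:

```python
from collections import Counter

# Illustrative records: one entry per citing publication, with its
# publication year and the journals cited in its reference list.
records = [
    {"year": 2010, "cited_journals": ["Evid Based Complement Alternat Med"]},
    {"year": 2011, "cited_journals": ["J Altern Complement Med",
                                      "Am J Chin Med"]},
    {"year": 2011, "cited_journals": ["Evid Based Complement Alternat Med"]},
]

# CAM journal titles as categorized in Ulrich's directory (abbreviated).
cam_journals = {
    "Evid Based Complement Alternat Med",
    "J Altern Complement Med",
    "Am J Chin Med",
}

def cam_references_per_year(records, cam_journals):
    """Count, per publication year, the cited references that point
    to a CAM journal."""
    counts = Counter()
    for rec in records:
        counts[rec["year"]] += sum(
            1 for journal in rec["cited_journals"] if journal in cam_journals)
    return dict(counts)

print(cam_references_per_year(records, cam_journals))  # {2010: 1, 2011: 3}
```

Aggregating these per-year counts over the full citing corpus yields the growth curve shown in Figure 4.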

As can be seen in Figure 4, there is evident and significant growth in the number of cited references to CAM journals and articles over the years. This could indicate a growing research agenda and scientific network around topics and issues related to CAM. Although these numbers do not differentiate between cited references appearing in journals focusing on CAM and those appearing in journals from other disciplines, the growth in cited references alone points to increased activity in this area of research.

RT35GH4

Figure 4 - Cited References to CAM journals and articles. Source: Scopus

 

In addition, we examined the most cited CAM journals. Figure 5 shows the top 10 most cited journals in this area, led by Evidence-Based Complementary and Alternative Medicine, the Journal of Alternative and Complementary Medicine and the American Journal of Chinese Medicine.

RT35GH5

Figure 5 - Top cited CAM journals. Source: Scopus

 

Finally, we examined the top 50 most cited articles in CAM journals and looked at the top journals that cite them. As can be seen from Figure 6, PLOS ONE has the most citations to CAM articles. However, it must be noted that PLOS ONE is an exception, as it publishes thousands of articles every year, compared to other journals with more moderate publication rates. This by itself could be a factor in its prominent place among the top citing journals. The inclusion of PLOS ONE in our analysis stems from the fact that it is a mainstream medical journal and thus fits the criteria of our analysis. CAM articles cited in PLOS ONE were mainly in the areas of Agricultural and Biological Sciences; Biochemistry, Genetics and Molecular Biology; and general Medicine. Other journals citing CAM articles include journals focusing on chemistry and pharmacology, such as the Journal of Ethnopharmacology and the Journal of Agricultural and Food Chemistry. In addition, journals focusing on internal medicine, pain management and cardiology also cite CAM articles.

RT35GH6

Figure 6 - Journals most frequently citing CAM periodicals. Source: Scopus

 

Summary

According to the 2009 “National Health Statistics Report” published by the US Department of Health and Human Services, the American public spent over 33 billion dollars on CAM practitioners and on purchases of CAM products, classes and materials (2). As awareness of nutrition, general well-being and a holistic approach to health gains momentum, more and more people seek self-care CAM therapies such as homeopathic products, yoga and natural products. Moreover, alternative medicine and complementary health services are becoming an integral part of mainstream medical practice and are in many countries sponsored by government health systems. Practices that would have been considered esoteric or exotic a couple of decades ago are now found almost everywhere in the world. These practices originated in regions and countries that are or were considered ‘developing’, and have penetrated and influenced science, medicine and human well-being for years to come.

 

References

(1)    http://www.eiccam.eu/pdfs/eiccambrochurecomplete.pdf
(2)    http://www.cdc.gov/NCHS/data/nhsr/nhsr018.pdf

 


Africa doubles research output over past decade, moves towards a knowledge-based economy

In this contribution Ylann Schemm discusses some of the factors contributing to the increase in research output from sub-Saharan Africa, including increased funding, the rise of open access and Research4Life.



There is no doubt that economic growth across sub-Saharan Africa has been stellar in recent years, illustrated by GDP growth expected to be 5.3 per cent in 2014 (1). Coverage of Africa has increasingly focused on this growth, seeing the encouraging future prospects for many countries and the improved life chances for the younger generation as a result. However, to date, there has been little focus on the flourishing academic and scientific success on the continent as countries move beyond agriculture-dominated economies towards a research and knowledge-based future.

One of the most positive signs for Africa has been the recent increase in scientific research conducted by local African scientists. From 1996 to 2012, the number of research papers published in scientific journals with at least one African author more than quadrupled (from about 12,500 to over 52,000). Over the same period, the share of the world’s articles with African authors almost doubled, from 1.2% to around 2.3%.

Looking at Africa’s outputs overall and as a share of total articles globally, we see clearly that the continent is starting to emerge scientifically onto the world stage. Admittedly starting from a relatively low base, this still reflects more than 52,000 research outputs in 2012 featuring at least one African author. Clearly, this also has important implications for the entire developing world, given the potential for “South-South” information transfer. Through published research, what is learned in one region/country can be the basis for improvements in other developing-world regions and countries.

RT35YS1

Figure 1 - African research output and global share of articles. Source: Scopus

 

Key Findings

Group A (free access) and Group B (low-cost access) Research4Life-eligible African countries show increasing outputs since about 2002, which coincides with the early rollout of the HINARI program. Comparing Research4Life-eligible African countries with non-eligible countries (essentially Egypt and South Africa), the latter’s outputs are greater until 2005, when the output of the Research4Life-eligible countries exceeds that of the non-eligible countries for the first time. The output of the Research4Life-eligible countries has continued to increase at a greater rate since then (2).

So what are the factors contributing to this promising trend? There are many: increased funding; significant policy changes within countries; improved research infrastructure, both human and physical; ICT resources; open, free and low-cost access to peer-reviewed literature; and research capacity-building training have all contributed to the positive, upward trend in African research output.

One key contributor to increased research access is Research4Life, a public-private, UN-publisher partnership that provides scientists in developing countries with free or low-cost access to articles from leading scientific journals. Altogether, more than 35,000 peer-reviewed resources are available to researchers in the developing world, and 6,000 institutions from more than 100 developing countries have signed up to use Research4Life programs. A 2010 Research4Life user experience review revealed that more respondents (24%) cite HINARI as a source for life-science and medical research than cite any other source, while more respondents (32%) cite HINARI as the source they use most frequently. For agricultural research, AGORA similarly tops the list of resources used, with equivalent figures of 27% and 54% respectively (3).

However, as Richard Gedye, Director of Outreach Programs for the International Association of Scientific, Technical & Medical Publishers (STM), who serves as Research4Life’s key publisher representative, recently pointed out, demonstrating real research output impact is not a straightforward undertaking (4). “Through our booklet, Making a Difference, we’ve been able to build up a convincing collection of case studies: more robust research agendas, stronger grant applications, better teaching and improved health outcomes have been achieved (5). A strong bibliometric analysis, however, remains problematic due to a lack of basic information about registered institutions and other important viable, co-existing access programs such as the International Network for the Availability of Scientific Publications (INASP) and open access. Instead, we’ve decided to plan a comprehensive survey in 2014 of the impact of Research4Life access with African researchers, research funders, research administrators, physicians, trainers and librarians.”

 

But there’s no room for complacency. In the November 2010 report "Growing knowledge: Access to research in east and southern African universities" from the Association of Commonwealth Universities, Jonathan Harle, now INASP’s Senior Programme Manager, Research Access and Availability, clarifies that increased access and bandwidth alone won’t solve the problem (6). Skills development and awareness of free and low-cost access to peer-reviewed research need to receive much more attention if research output is to grow substantially. INASP is another critical contributor to African research output growth, with a focus that goes far beyond access to support the full cycle of research capacity building across 22 countries. It provides a range of services, including developing deeply discounted consortial arrangements, training librarians in managing digital resources, mentoring researchers in academic literacy and authorship skills, and supporting the process of local online journal development and hosting.

The rapid growth of open access also represents a significant contributor to African research output. David Tempest, Director of Access Relations at Elsevier, hosted an “Open Access in Africa” workshop in Kenya in April 2013 in cooperation with the African Academy of Sciences to explore the African access experience and how publishers can assist in the process of enhancing access to African research. “Africa has some outstanding researchers, and the quality of their work is improving, but there needs to be a concerted effort to work together and create excellence on a local level. […] At Elsevier, we're happy to work with these organizations to run author workshops to help researchers understand how to publish in journals, the ethical dimensions of publishing, and the emergence of new publishing possibilities, such as open access.” (7). As a follow-on, Elsevier is supporting the National Research Foundation’s National Postdoctoral Forum, which will take place in Cape Town in early December 2013. He noted, “This is an opportunity to work with young researchers in Africa to illustrate how the scholarly communication system is changing and to answer any questions on how young researchers can publish in our journals.”

While African authors have nearly doubled their article share over the past decade, the returns could be many times greater over the next decade if awareness, usage and research capacity are tackled in a collaborative and integrated manner by African institutions, access programs and publishers.

Research4Life is a public-private partnership of over 200 international scientific publishers; the International Association of Scientific, Technical and Medical Publishers (STM); and Cornell and Yale universities, in collaboration with WHO, FAO, UNEP, WIPO and technology partner Microsoft. Since 2001, the four programs — Access to Research in Health (HINARI), Access to Global Online Research in Agriculture (AGORA), Online Access to Research in the Environment (OARE) and Access to Research for Development and Innovation (ARDI) — have grown and developed to the point where they now give researchers at more than 6,000 institutions in over 100 developing-world countries and territories free or low-cost online access to over 35,000 peer-reviewed international scientific journals, books and databases provided by leading scientific publishers. In November 2012, Research4Life partners announced their commitment to the program through 2020.


About the author

Ylann Schemm (@ylannschemm), Senior Corporate Responsibility Manager, drives Elsevier’s corporate responsibility programs, which focus on research capacity building in the developing world and advancing women in science. She manages the Elsevier Foundation’s Innovative Libraries in Developing Countries Program, which provides infrastructure building, medical library needs assessments, preservation of unique research, and training to boost information literacy and research skills to enable the optimum use of Research4Life. Ylann also chairs the Research4Life partnership’s Communications and Marketing working group.

 

References

(1) Sulaiman, T. (2013) “Africa set to grow at 5.3 percent in 2014: World Bank”, Reuters, Oct 7, 2013. Available at: http://www.reuters.com/article/2013/10/07/us-africa-growth-idUSBRE9960MY20131007.
(2) Scopus data analysis by Dr. Andrew Plume, Director of Scientometrics & Market Analysis, Elsevier.
(3) Gaible, E., et al. (2011) “Research4Life: Bringing academic and professional peer-reviewed content to developing countries through public-private partnership”, IFLA 2011 Conference paper. Available at: http://conference.ifla.org/past/ifla77/164-gaible-en.pdf.
(4) Gedye, R., (2013) “Measuring the impact of research access in the developing world”, ElsevierConnect.
(5) (2011) “Making a Difference: Stories from the Field: How access to scientific literature is improving the livelihoods of communities around the world.” Research4Life, Available at: http://www.research4life.org/wp-content/uploads/promotions/R4L_Making_a_difference_final_LR.pdf.
(6) Harle, J., (2010) "Growing knowledge: Access to research in east and southern African universities", The Association of Commonwealth Universities, Available at: http://www.arcadiafund.org.uk/sites/default/files/arc_pub_africanconnectivity_theassociationofcommonwealthunis_0.pdf.
(7) Tempest, D., (2013) “Open access in Africa — changes and challenges”, Elsevier Connect, Available at: http://www.elsevier.com/connect/open-access-in-africa-changes-and-challenges.


Migration and co-authorship networks in Mexico, Turkey and India

In this article, Dr. Gali Halevi describes the co-authorship networks and researchers’ migration trends for three scientifically developing countries: Mexico, India and Turkey.



Scientific networks, collaboration and exchange have been discussed in Research Trends before (1, 2, 3). The main reason for the continuing interest in these topics is the premise that such exchanges benefit scientific progress: they foster innovation, and enhance and enable the flow of ideas between scientists in different institutions. In addition to the ability to track and sketch scientific collaborations between institutions, the availability of author profiles and their affiliation information in Scopus has also made it possible to track scientific migration from country to country. Research migration or mobility, although related to the formation of networks and collaboration, has unique characteristics and far-reaching implications that go beyond the development of collaborative scientific activities. In a migration scenario, collaboration is achieved through the physical move of a scientist from one country to another. In addition to its impact on immigration rates, economy and culture, scientific migration has professional implications as well. Potential outcomes of research migration include enhanced scientific contributions to the receiving country, the enrichment of its scientific strength, and the flow of new ideas and perspectives in different areas of research, as well as the potential to develop new products and technologies.

This article describes the co-authorship networks and researchers’ migration trends for three scientifically developing countries, presenting case studies on three countries from different world regions: Mexico from Latin America, India from Asia and Turkey from Europe. It analyzes scientific migration from each of these three source countries to a set of 17 destination countries from all over the world, including both scientifically developing countries and large, established ones such as the USA, China, the UK, Japan and Germany. A full list of destination countries is given in Table 1. In addition, it looks at co-authorship patterns between Mexico, India and Turkey and the 17 selected countries and describes the similarities and differences between these two phenomena. The result is an examination of the unique patterns that each of these lines of investigation offers and the ways in which each can be used to shed light on the development of scientific excellence in different areas of the world.

 

Developing 8 (D-8)    Europe (EU)        BRIC      Other
Egypt                 Romania            Brazil    Thailand
Iran                  Portugal           China     Australia
Malaysia              Germany            India     Japan
Pakistan              Italy                        United States
                      The Netherlands
                      United Kingdom

Table 1 – Set of 17 selected destination countries

We collected the 2000-2012 research output of 17 countries from different regions of the world, of which 10 are considered “growing” or “developing” countries and 7 are considered “established” (see Table 1). In order to trace the movement of researchers from one country to another, we used the unique Author ID offered by Scopus to identify individual authors. The affiliations associated with an Author ID, as they appear on an author’s publications, are retained as part of the unique author profile constructed within Scopus. This allows for an analysis of migration, as one can identify from which institution and country an author published research articles over the course of his or her scientific career. Moreover, because affiliation is tracked per author, international migration and co-authorship can be compared and distinguished as separate indicators of mobility and collaboration, respectively.
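The distinction between the two indicators can be sketched in a few lines. This is an illustrative Python sketch, not the Scopus data model: the record layout, author IDs and countries below are invented, and real author histories would come from Scopus author profiles.

```python
# Hypothetical per-author records: (author_id, publication_year, affiliation country),
# one entry per publication, as reconstructed from a Scopus-style author profile.
records = [
    ("A1", 2002, "Mexico"), ("A1", 2005, "Mexico"), ("A1", 2008, "Brazil"),
    ("A2", 2003, "India"),  ("A2", 2010, "India"),
]

def migrations(records):
    """A change of affiliation country within one author's history is read as a move."""
    by_author = {}
    for author, year, country in sorted(records):  # chronological per author
        by_author.setdefault(author, []).append(country)
    moves = []
    for author, countries in by_author.items():
        for prev, curr in zip(countries, countries[1:]):
            if prev != curr:
                moves.append((author, prev, curr))
    return moves

# Co-authorship, by contrast, needs no move: it is read off the set of
# countries appearing together on a single paper.
def coauthor_links(paper_countries):
    return {(a, b) for a in paper_countries for b in paper_countries if a < b}

print(migrations(records))                   # [('A1', 'Mexico', 'Brazil')]
print(coauthor_links({"Mexico", "Brazil"}))  # {('Brazil', 'Mexico')}
```

The same affiliation data thus yields two separate signals: a country change within one author's publication history (mobility), and multiple countries on one paper (collaboration).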

 

Mexico co-authorships

Over 15,000 documents were analyzed to track patterns of co-authorship between Mexico and the 17 selected countries. Figure 1 shows the countries Mexico collaborates with, as well as the strength of each link in the co-authorship network. The thickness of the lines and the numbers on the map indicate the countries with which Mexican researchers co-author most frequently: Brazil in first place, Pakistan in second and the USA in third. Other frequent collaborators are India, Portugal and Egypt.


Figure 1 - Mexico’s co-authorship network (2000-2012)

The collaborative scientific output between Mexico, Brazil, Pakistan and the United States focuses on the areas of Agriculture, Biology, Medicine and the Social Sciences.

 

Migration from Mexico

The migration of researchers from Mexico to the 17 destination countries studied shows a somewhat different pattern to co-authorship (see Figure 2). Brazil, the United States and Portugal are the leading destinations of Mexican researchers. The high co-authorship and migration exchange between Mexico and Brazil could be attributed to the fact that the two countries are not only among the most populous nations in Latin America but also its largest emerging economies, and both are considered regional powers. In addition, the two countries focus on common research issues, especially those related to climate change and the environment.


Figure 2 - Migration from Mexico (2000-2012)

India co-authorships

India’s co-authorship network, as can be seen in Figure 3, is led by Malaysia, Pakistan and Iran, followed by Thailand, the United States and Japan. Collaborative research between India, Malaysia and Iran coincides with the close economic relationships between these countries, especially in relation to oil trade. The collaborative scientific output between these countries falls in the areas of Physics, Chemistry, Medicine and Materials Science.


Figure 3 - India’s co-authorship network (2000-2012)

Migration from India

The migration patterns from India (see Figure 4) show Pakistan and Malaysia as the leading destinations, followed by the United States, which differs somewhat from India’s co-authorship patterns. Both geographic proximity and close economic ties appear to drive co-authorship and migration. Migration to the USA might be the result of specialized programs and funding, such as the Indo-US joint research programs, whose grants and fellowships grew to some $220 million in the past year (4).


Figure 4 - Migration from India (2000-2012)

Turkey co-authorships

Turkey’s co-authorship network shows strong ties to Pakistan, Iran and Romania, concentrated in the areas of Physics, Chemistry and Engineering. The strong ties between Turkey and Pakistan could result from diplomatic efforts to strengthen the economic, educational and technological relationships between the two countries through the High Level Strategic Cooperation Council (HLSCC). Among the topics promoted by the council is the “interaction between universities and academic institutions” (5).


Figure 5 - Turkey’s co-authorship network (2000-2012)

Migration from Turkey

The migration from Turkey, depicted in Figure 6, shows a somewhat different pattern. The leading destination country is the United States, followed by the Netherlands and Iran. The high migration to the United States could be attributed to the relatively high number of US National Science Foundation (NSF) funded projects with Turkey, whose purpose is “to promote inclusion of junior researchers in collaborations to stimulate long-term research partnerships” (6). Turkish migration to the Netherlands has historical roots dating back to 1964, when the Dutch government initiated a “recruitment agreement” with Turkey to attract Turkish workers, mainly low-skilled laborers, to the Netherlands. The agreement lasted ten years and created a peak in migration from Turkey to the Netherlands in that period.


Figure 6 - Migration from Turkey (2000-2012)

Summary

Co-authorship and migration patterns can differ, although both are stimulated by economic and diplomatic factors. Grants, fellowships and joint technological projects at the national level drive scientific collaborations, which in turn result in co-authored papers. Migration can display similar patterns, but is also driven by personal factors such as career opportunities. Although these patterns are visible through the affiliations on published papers, explaining them requires looking at special agreements and diplomatic developments between countries in the preceding years. Language similarity and geographic proximity probably also affect both co-authorship and migration; while this is hard to demonstrate through publications, since most are published in English, the co-authorship networks suggest it. For migration, economic stimuli and opportunities could be an additional driving cause, as researchers look to further develop their careers.

 

References

(1)    Moed, H. and Halevi, G. (2012) “International scientific migration analysis generates new insights”, Research Trends, Issue 31, https://www.researchtrends.com/issue-31-november-2012/international-scientific-migration-analysis-generates-new-insights/
(2)    Huggett, S. (2012) “The Rise of Latin America”, Research Trends, Issue 31, https://www.researchtrends.com/issue-31-november-2012/the-rise-of-latin-american-science/
(3)    Plume, A., Moed, H., Aisati, M., and Berkvens, P. (2011) “Is science in your country declining? Or is your country becoming a scientific super power, and how quickly?”, Research Trends, Issue 25, https://www.researchtrends.com/issue25-november-2011/is-science-in-your-country-declining-or-is-your-country-becoming-a-scientific-super-power-and-how-quickly/
(4)    Pulakkat, H., “Indo-US joint research gets big, funding jumps to $220 million”, The Economic Times, July 2013, http://articles.economictimes.indiatimes.com/2013-07-25/news/40795356_1_indian-institute-development-centre-collaboration
(5)    “Pakistan, Turkey agree to consolidate relations through enhanced political, economic cooperation”, The Associated Press of Pakistan, http://www.app.com.pk/en_/index.php?option=com_content&task=view&id=239632&Itemid=2
(6)    “Africa, Near East & South Asia (ANESA) Regional Opportunities”, National Science Foundation

The bibliometrics of the developing world

Sarah Huggett touches upon the difficulty of defining ‘developing countries’ and then discusses their development in bibliometric terms. For example, has research output from developing countries changed in different subject fields in recent years?



What is a “developing country”?

According to the Oxford English Dictionary, the developing world consists of “those countries of the world which are poor and not fully industrialized, but are seeking to become more economically and technologically advanced” (1). There is some controversy around the use of the term (2), as it may be perceived to imply inferiority of a “developing” versus a “developed” country, and also because it assumes a trend towards development along the traditional Western model that may not occur by choice or circumstance. Nevertheless, the term is broadly accepted, and is used in this article only to refer to those countries that have historically had fewer resources to devote to research and scholarly communication than others. For the purposes of this analysis, the list of countries was derived from the International Monetary Fund World Economic Outlook of April 2013 (3). Whole counting of publications was used, so that each co-authoring country receives a full article count. This means that co-publications between the developing world and the developed world count towards the developing world’s output, and vice versa.
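The whole-counting rule can be illustrated with a minimal sketch. The papers and country sets below are invented for illustration; only the counting rule follows the article.

```python
# Illustrative whole counting: every country on a co-authored paper receives a
# full article count, so developing-developed co-publications count for both sides.
from collections import Counter

# Hypothetical papers, each represented by the set of its authors' countries.
papers = [
    {"China", "United States"},   # a developing-developed co-publication
    {"Brazil"},
    {"India", "China"},
]

developing = {"China", "Brazil", "India"}

counts = Counter()
for countries in papers:
    for c in countries:           # whole counting: +1 per country, not fractional
        counts[c] += 1

developing_output = sum(n for c, n in counts.items() if c in developing)
print(counts["China"], developing_output)  # China: 2, developing-world total: 4
```

Note that the developing-world total (4) exceeds the number of purely developing-world papers, precisely because the China-United States paper is counted towards both worlds.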

Recent bibliometrics developments for the developing world

In 2011, the developing world published over 830,000 scholarly papers, representing just under 40% of the world’s scholarly output. These countries have indeed been developing in both absolute and relative terms, as demonstrated by their increasing share of global scholarly papers (see Figure 1). The output of the developing world grew at a 15% Compound Annual Growth Rate (CAGR) from 2002 to 2011, compared to 6% CAGR globally.
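CAGR here is the constant yearly growth rate linking the first and last values of the period; 2002 to 2011 spans nine yearly steps. A rough sketch of the calculation (the implied 2002 output is back-calculated from the stated 15% CAGR and the ~830,000 papers of 2011, not taken from the underlying data):

```python
# Compound Annual Growth Rate: the constant yearly rate that takes output
# from its start-of-period level to its end-of-period level in `years` steps.
def cagr(start_value, end_value, years):
    return (end_value / start_value) ** (1 / years) - 1

# A 15% CAGR over the 9 steps from 2002 to 2011, ending at ~830,000 papers,
# implies a 2002 level of roughly 830_000 / 1.15**9, i.e. about 236,000 papers.
implied_2002 = 830_000 / 1.15 ** 9
assert abs(cagr(implied_2002, 830_000, 9) - 0.15) < 1e-9
```

The same helper reproduces the global comparison: any series growing at 6% per year roughly doubles over the 2002-2011 window, while a 15% series more than triples.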


Figure 1 - Historical overview of developing world share of global scholarly papers. Source: Scopus

Looking at a historical overview of the developing world’s scholarly publications by region reveals that most of the growth is concentrated in Asia (see Figure 2). A large proportion of the rise is due to China, which grew from an already large 25.1% of the developing world’s scholarly papers in 2002 to a very prominent 43.9% in 2011, with an impressive 23% 2002-2011 CAGR. The next three most prolific developing countries are the other BRIC countries, but their 2011 shares of the developing world’s output are far behind China’s: 9.9% for India (a strong 14% 2002-2011 CAGR), 5.7% for Brazil (a high 13% CAGR), and 4.5% for Russia (a very low 2% CAGR). The only other developing country with more than 4% of the developing world’s 2011 output is Iran, at 4.2%, with a tremendous 33% 2002-2011 CAGR.


Figure 2 - Historical overview of developing world scholarly papers output by region. Source: Scopus

And the developing world’s scholarly output has not only been growing in quantity, but in citability as well, as demonstrated by a historical overview of its five year field weighted relative impact, a measure of citation impact relative to global citation impact. For 2011, this is calculated as the citations received in 2007-2011 by scholarly papers published in 2007-2011, divided by the number of those papers, then normalized to the expected impact worldwide, accounting for different citation patterns in different fields. Although this measure, at 0.70 in 2011, is still under the world average of 1, it has grown both absolutely and relatively from 0.52 in 2000 (see Figure 3). In a nutshell, the developing world has grown from about half as impactful as the global average to more than two thirds as impactful. This increase may be partly due to increased international collaboration between developing and developed countries over the years.
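The field normalization can be sketched as follows. The field labels and counts below are invented for illustration, and the actual Scopus calculation may differ in detail; the point is that each field's citations per paper are judged against the world's citations per paper in that same field before being averaged.

```python
# Minimal sketch of a field-weighted relative impact, under the assumption
# that it averages per-field (observed / expected) citation ratios, weighted
# by the entity's paper counts. 1.0 corresponds to the world average.
def field_weighted_impact(entity, world):
    """entity/world: field -> (citations 2007-2011, papers 2007-2011)."""
    total_papers = sum(papers for _, papers in entity.values())
    weighted = 0.0
    for field, (cites, papers) in entity.items():
        world_cites, world_papers = world[field]
        expected = world_cites / world_papers  # world citations per paper in field
        observed = cites / papers              # entity citations per paper in field
        weighted += (observed / expected) * papers
    return weighted / total_papers

entity = {"Physics": (700, 100), "Medicine": (900, 100)}
world  = {"Physics": (10_000, 1_000), "Medicine": (30_000, 1_500)}
print(field_weighted_impact(entity, world))  # 0.575, i.e. below world average
```

Normalizing per field matters because, for example, Medicine papers attract far more citations per paper than Physics papers worldwide; without it, an entity's field mix would dominate the comparison.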


Figure 3 - Historical overview of developing world five year field weighted relative impact. Source: Scopus

Looking at a historical overview of the developing world’s five year field weighted relative impact by region reveals that growth in impact is not tied to growth in output (see Figure 4). Indeed, two groups emerge: developing countries in Africa and the Americas show relatively high field weighted relative impact, at around 0.8 in 2011, while developing countries in Europe and Asia have lower values of 0.65 and 0.68 respectively in 2011. However, the trends for these two regions differ: Asia has shown faster growth, so that while its field weighted relative impact was notably below Europe’s in 2000, it caught up with Europe in 2010 and outpaced it in 2011. Developing countries in Oceania have the highest field weighted relative impact, above the world average at 1.01 in 2011, but their output is relatively small, with only 334 scholarly papers published in 2011.


Figure 4 - Historical overview of developing world five year field weighted relative impact by region. Source: Scopus


Subject trends for the developing world

The scholarly output of the developing world has been quite stable in its composition of various subject areas over time (see Figure 5). Over half (53.7%) of the papers published in 2011 by authors from developing countries are in the Physical Sciences. The rest of the papers are mostly divided between Life Sciences (21.5%) and Health Sciences (17.9%), accounting together for nearly 40% of the developing world’s 2011 scholarly output. By contrast, the developing world is not very prolific in the Social Sciences (5.3% of its 2011 scholarly papers).

A comparison of developed versus developing world scholarly output by main field reveals some interesting patterns. Overall, the developed world’s output follows a similar distribution to the developing world’s, but is less unequally spread across the main fields. Although Physical Sciences is the most prolific area for both, it represents less than half (43.9%) of the developed world’s output. A larger proportion of the developed world’s output is in the Life Sciences (28.0%) or Health Sciences (31.4%), which together account for nearly 60% of its 2011 scholarly output. And while the Social Sciences (13.6%) remain the least prolific area, their share of the developed world’s output is more than double their share of the developing world’s.


Figure 5 - Historical overview of the distribution of the developed and developing worlds’ scholarly output by main field. Source: Scopus

Given the vast differences in number of papers published by different regions, the developing world aggregate is skewed by the dominating output from Asia. Looking at the distribution of 2011 scholarly publications by main field for developing countries in each region reveals a more diverse picture (see Figure 6). Developing countries in Europe and Asia show a similar pattern to the aggregate, which they influence heavily due to the large number of papers they publish relative to total developing world scholarly output. In these regions, the scholarly output of developing countries is heavily geared towards the Physical Sciences (~60%), followed by Life Sciences (~20%) and Health Sciences (16%), with a much lower share for Social Sciences (~5%). Developing countries in other regions show a more balanced distribution: although Physical Sciences is still the most prolific field for developing countries in North America, Africa, and South America, these papers represent 32-38% of each region’s output, leaving larger shares to Life Sciences (~30%) and Health Sciences (25-29%). Developing countries in these regions also show a higher proportion of Social Sciences papers (~8%).


Figure 6 - Distribution of developing world 2011 scholarly papers by region and main field. Source: Scopus

The distribution of 2011 scholarly publications by main field for developed countries in each region again shows similar but more balanced patterns (see Figure 7). North America displays the fewest divergences between developing and developed countries among regions with enough data for robust results. A third of the scholarly output of developed countries in North America is in the Physical Sciences (5 percentage points lower than for developing countries), a quarter in the Life Sciences (4 points lower), 28% in the Health Sciences (3 points higher), and 14% in the Social Sciences (6 points higher). While the output of developed countries in Asia is still predominantly in the Physical Sciences, this represents just under half of their scholarly output (11 percentage points lower than for developing countries). The shares of Life Sciences and Social Sciences papers in this region, at 22% and 6% respectively, are similar to those of developing countries (each a couple of points higher). However, developed countries in Asia publish proportionally more in the Health Sciences (23%) than developing countries (16%). The differences are even more marked for Europe: Physical Sciences account for 38% of scholarly papers (21 percentage points fewer than for developing countries in this region), allowing higher shares in the Life Sciences (24%, 5 points higher), Health Sciences (27%, 11 points higher), and Social Sciences (11%, 5 points higher).


Figure 7 - Distribution of developed world 2011 scholarly papers by region and main field. Source: Scopus

Future development for the developing world

In bibliometric terms, the developing world has indeed seen some development over the past few years, both in absolute and relative terms. Although the developing world is still heavily dominated by the BRIC countries in terms of quantity of scholarly papers published, it achieves higher impact with research published from Africa and the Americas. Regional differences also occur in terms of the distribution of content published by main field, with Europe and Asia showing a marked prominence in the Physical Sciences.

The developing world appears to be a collection of varied entities with different profiles when it comes to scholarly output, and its recent growth trends in both quantity and impact bode well for its future, although future successes may be unequally distributed between different developing countries.

References:

(1)      http://www.oed.com/view/Entry/51432?redirectedFrom=%22developing+country%22#eid6852973
(2)      http://en.wikipedia.org/wiki/Developing_country#Criticism_of_the_term_.27developing_country.27
(3)      http://www.imf.org/external/pubs/ft/weo/2013/01/pdf/text.pdf


 

VN:F [1.9.22_1171]
Rating: 0.0/10 (0 votes cast)

What is a “developing country”?

According to the Oxford English Dictionary, the developing world consists of “those countries of the world which are poor and not fully industrialized, but are seeking to become more economically and technologically advanced” (1). There is some controversy around the use of the term (2), as it may be perceived to imply inferiority of a “developing” versus a “developed” country, and also because it assumes a trend towards development along the traditional Western model that may not occur by choice or circumstance. Nevertheless, the term is broadly accepted, and in this article used only to define those countries that may perhaps have historically had fewer resources to devote to research and scholarly communications than others. For the purpose of this analysis, the list of countries used was derived from the International Monetary Fund World Economic Outlook April 2013 (3). Whole counting of publications was used, so that each co-authorship equates to an article count. This means that co-publications between the developing world and the developed world are counted towards the developing world’s output, and vice-versa

Recent bibliometrics developments for the developing world

In 2011, the developing world published over 830,000 scholarly papers, representing just under 40% of the world’s scholarly output. These countries have indeed been developing in both absolute and relative terms, as demonstrated by their increasing share of global scholarly papers (see Figure 1). The output of the developing world grew at 15% Compound Annual Growth Rate (CAGR) from 2002 to 2011, compared to 6% CAGR globally.

RT35SH1

Figure 1 - Historical overview of developing world share of global scholarly papers. Source: Scopus

Looking at a historical overview of the developing world’s scholarly publications by region reveals that most of the growth is concentrated in Asia (see Figure 2). A large proportion of the rise is due to China, which grew from an already large 25.1% of the developing world’s scholarly papers in 2002 to a very prominent 43.9% in 2011, with an impressive 23% 2002-2011 CAGR. The three next most prolific developing countries are the other BRIC countries, but their shares of the developing world’s output in 2011 are far behind China’s with 9.9% for India with a strong 14% 2002-2011 CAGR, 5.7% for Brazil with a high 13% 2002-2011 CAGR, and 4.5% for Russia with a very low 2% 2002-2011 CAGR. The only other developing country with more than 4% of the developing world’s 2011 output is Iran at 4.2%, with a tremendous 33% 2002-2011 CAGR.


Figure 2 - Historical overview of developing world scholarly papers output by region. Source: Scopus

And the developing world’s scholarly output has grown not only in quantity but in citability as well, as demonstrated by a historical overview of its five-year field-weighted relative impact, a measure of citation impact relative to the global average. For 2011, this is calculated as the ratio of citations received in 2007-2011 by papers published in 2007-2011 to the number of those papers, normalized against the expected worldwide impact to account for different citation patterns across fields. Although this measure, at 0.70 in 2011, remains below the world average of 1, it has grown both absolutely and relatively from 0.52 in 2000 (see Figure 3). In short, the developing world has gone from being about half as impactful as the global average to more than two thirds as impactful. This increase may be partly due to increased international collaboration between developing and developed countries over the years.
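One common way to compute a field-weighted indicator of this kind is sketched below. This is a simplified illustration, not Scopus’s actual methodology (which is more involved), and the field names and citation counts are hypothetical:

```python
from collections import defaultdict

def field_weighted_impact(entity_papers, world_papers):
    """Average, over an entity's papers, of each paper's citations divided
    by the world-average citations per paper in the same field.
    A value of 1.0 means the entity performs at the world average.

    Both arguments are lists of (field, citation_count) tuples for papers
    published in the five-year window (e.g. 2007-2011).
    """
    # World totals per field: [citations, paper count]
    world = defaultdict(lambda: [0, 0])
    for field, cites in world_papers:
        world[field][0] += cites
        world[field][1] += 1

    # Ratio of each paper's citations to the world-average citations for
    # its field, averaged over the entity's papers.
    total_ratio = 0.0
    for field, cites in entity_papers:
        w_cites, w_count = world[field]
        total_ratio += cites / (w_cites / w_count)
    return total_ratio / len(entity_papers)
```

The per-field normalization is the point of the exercise: a mathematics paper with two citations can be as strong, relative to its field, as a biology paper with ten, so raw citations per paper would unfairly penalize entities active in slow-citing fields.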


Figure 3 - Historical overview of developing world five year field weighted relative impact. Source: Scopus

Looking at a historical overview of the developing world’s five-year field-weighted relative impact by region reveals that growth in impact is not tied to growth in output (see Figure 4). Indeed, two groups emerge: developing countries in Africa and the Americas show a relatively high field-weighted relative impact of around 0.8 in 2011, while developing countries in Europe and Asia have lower values of 0.65 and 0.68 respectively in 2011. However, the trends for these two regions differ: Asia has shown faster growth, so that while its field-weighted relative impact was notably lower than Europe’s in 2000, it caught up with Europe in 2010 and outpaced it in 2011. Developing countries in Oceania have the highest field-weighted relative impact, reaching above the world average at 1.01 in 2011, but their output is relatively small, with only 334 scholarly papers published in 2011.


Figure 4 - Historical overview of developing world five year field weighted relative impact by region. Source: Scopus


Subject trends for the developing world

The scholarly output of the developing world has been quite stable in its subject-area composition over time (see Figure 5). Over half (53.7%) of the papers published in 2011 by authors from developing countries are in the Physical Sciences. The remainder is mostly divided between the Life Sciences (21.5%) and Health Sciences (17.9%), which together account for nearly 40% of the developing world’s 2011 scholarly output. By contrast, the developing world is not very prolific in the Social Sciences (5.3% of its 2011 scholarly papers).

A comparison of developed versus developing world scholarly output by main field reveals some interesting patterns. Overall, the developed world’s output follows a similar distribution to the developing world’s, but is more evenly spread across main fields. Although the Physical Sciences are the most prolific area for both, they represent less than half (43.9%) of the developed world’s output. A larger proportion of the developed world’s output is in the Life Sciences (28.0%) or Health Sciences (31.4%), which together account for nearly 60% of its 2011 scholarly output. And while the Social Sciences (13.6%) remain the least prolific area, their share of the developed world’s scholarly output is more than double that of the developing world.


Figure 5 - Historical overview of the distribution of the developed and developing worlds’ scholarly output by main field. Source: Scopus

Given the vast differences in the number of papers published by different regions, the developing world aggregate is skewed by the dominant output from Asia. Looking at the distribution of 2011 scholarly publications by main field for developing countries in each region reveals a more diverse picture (see Figure 6). Developing countries in Europe and Asia show a similar pattern to the aggregate, which they influence heavily due to the large number of papers they publish relative to total developing world scholarly output. In these regions, the scholarly output of developing countries is heavily geared towards the Physical Sciences (~60%), followed by the Life Sciences (~20%) and Health Sciences (~16%), with a much lower share for the Social Sciences (~5%). Developing countries in other regions show a more balanced distribution: although the Physical Sciences are still the most prolific field for developing countries in North America, Africa, and South America, these papers represent 32-38% of each region’s output, leaving larger shares to the Life Sciences (~30%) and Health Sciences (25-29%). Developing countries in these regions also show a higher proportion of Social Sciences papers (~8%).


Figure 6 - Distribution of developing world 2011 scholarly papers by region and main field. Source: Scopus

The distribution of 2011 scholarly publications by main field for developed countries in each region again shows similar but more balanced patterns (see Figure 7). Among regions with enough data for robust results, North America displays the smallest divergences between developing and developed countries. A third of the scholarly output of developed countries in North America is in the Physical Sciences (5 percentage points lower than for developing countries in the region), a quarter in the Life Sciences (4 percentage points lower), 28% in the Health Sciences (3 percentage points higher), and 14% in the Social Sciences (6 percentage points higher). While the output of developed countries in Asia is still predominantly in the Physical Sciences, this represents just under half of their scholarly output (11 percentage points lower than for developing countries). Their shares of Life Sciences and Social Sciences papers are similar to those of developing countries in the region, at 22% and 6% respectively (2 percentage points higher in each case). However, developed countries in Asia publish proportionally more in the Health Sciences (23%) than developing countries (16%). The differences are even more pronounced in Europe: the Physical Sciences account for 38% of scholarly papers (21 percentage points fewer than for developing countries in the region), allowing for higher shares in the Life Sciences (24%, 5 percentage points higher), Health Sciences (27%, 11 percentage points higher), and Social Sciences (11%, 5 percentage points higher).


Figure 7 - Distribution of developed world 2011 scholarly papers by region and main field. Source: Scopus

Future development for the developing world

In bibliometric terms, the developing world has indeed seen some development over the past few years, both in absolute and relative terms. Although the developing world is still heavily dominated by the BRIC countries in terms of quantity of scholarly papers published, it achieves higher impact with research published from Africa and the Americas. Regional differences also occur in terms of the distribution of content published by main field, with Europe and Asia showing a marked prominence in the Physical Sciences.

The developing world appears to be a collection of varied entities with differing characteristics when it comes to scholarly output, and its recent growth trends in both quantity and impact bode well for its future, although future successes may be distributed unequally between developing countries.

References:

(1) http://www.oed.com/view/Entry/51432?redirectedFrom=%22developing+country%22#eid6852973
(2) http://en.wikipedia.org/wiki/Developing_country#Criticism_of_the_term_.27developing_country.27
(3) http://www.imf.org/external/pubs/ft/weo/2013/01/pdf/text.pdf

