Issue 36 – March 2014


Research assessment: Review of methodologies and approaches

In this article, Dr. Henk Moed and Dr. Gali Halevi discuss the different levels at which research evaluation takes place and provide an overview of the various quantitative approaches to research assessment that are currently available.


The assessment of scientific merit and of individual researchers has a long and respectable history, demonstrated in numerous methods and models utilizing different data sources and approaches (1, 2). The proliferation and increasing availability of primary data have made it possible to evaluate research at many levels and degrees of complexity, but have also introduced some fundamental challenges for all who are involved in this process, including evaluators, administrators and researchers (3).

Evaluative methods are used at several levels within the scientific world: (1) the institutional (including departmental) level, (2) the program level, and (3) the individual level. Each of these levels has its own objectives and goals; for example:

Institutional evaluation is used to establish accreditation, define missions, establish new programs and monitor the quality of an institute’s research activities, among other purposes. The results of such evaluations can be seen in university ranking systems, which at present are produced at both regional and international levels, based on different criteria (4). Institutional evaluations are based on prestige measures derived from publications, citations, patents, collaborations and the levels of expertise of the individuals within the institution.

Program-level evaluations are performed to measure the cost-benefit aspects of specific scientific programs. These are usually based on discovering the linkage between the investment made and the potential results of the program (5). Within this realm we find, among others, measures developed for a program’s technology transfer capabilities and commercialization potential (6).

Finally, individual evaluation is performed mainly for purposes of promotion and retention, at specific points in a researcher’s career. Individual assessment methods rely mainly on counts of publications or citations (7). In the past few years, with the advent of social media, we have seen an increase in the use of measures based on mentions on social media sites such as blogs, Facebook, LinkedIn, Wikipedia and Twitter, which are labelled as sources of “altmetrics” and include news outlets as well (8). The data used for each of these evaluation goals, whether they measure a publication’s impact in social or economic terms or both, vary by the method chosen in each case.

Evaluative Indicators

Based on the different methodologies and approaches, several indicators, aimed at quantifying and benchmarking the results of these evaluative methods, have emerged through the years. Table 1 summarizes some of the main indicators used today in research evaluation.

Publications – Citations
Description: Methods involving counts of the number of publications produced by the evaluated entity (e.g. researcher, department, institution) and the citations they receive.
Main uses: Measuring the impact and intellectual influence of scientific and scholarly activities, including publication impact, author impact, institution/department impact and country impact.
Main challenges: Name variations of institutions and individuals make it difficult to count these correctly. Limited coverage, or lack of coverage, of the database selected for the analysis can cause fundamental errors. Some documents, such as technical reports or professional papers (“grey literature”), are usually excluded from the analysis due to lack of indexing and thus, in certain disciplines, decrease the accuracy of the assessment. Differences between disciplines and institutions’ reading behaviors are difficult to account for.

Usage
Description: Methods that aim to quantify the number of times a scholarly work has been downloaded or viewed.
Main uses: Indicating works that are read, viewed or shared as a measure of impact. Enabling authors to be recognized for publications that might be less cited but heavily used.
Main challenges: Incomplete usage data availability across providers leads to partial analysis. Differences between disciplines and institutions’ reading behaviors are difficult to account for. Content-crawling and automated-download tools allow individuals to download large amounts of content automatically, which does not necessarily mean that it was read or viewed. It is difficult to ascertain whether downloaded publications were actually read or used.

Social (Altmetrics)
Description: Methods that aim to capture the number of times a publication is mentioned in blogs, tweets or other social media platforms, such as shared reference management tools.
Main uses: Measuring the mentions of a publication on social media sites, which can be considered as citations and usage and thus indicate the impact of research, an individual or an institution.
Main challenges: A relatively new area with few providers of social media tracking. The weight given to each social media source differs from one provider to another, leading to different “impact” scores.

Patents
Description: Measures the number of patents assigned to an institution or an individual, and identifies citations to basic research papers in patents as well as patents that are highly cited by recently issued patents.
Main uses: Attempting to provide a direct link between basic science and patents as an indication of economic, social and/or methodological contribution.
Main challenges: Incomplete and un-standardized references and names limit the ability to properly assign citations and patents to individuals or institutions. Patenting in countries other than the one where the institution or individual originates is problematic for impact analysis. The lack of exhaustive reference lists within patents limits the analysis.

Economic
Description: Measures the strength of the links between science and its effect on industry, innovation and the economy as a whole.
Main uses: Providing technology transfer indicators. Indicating the patentability potential of a research project. Providing cost-benefit measures.
Main challenges: The statistical models used are complex and require deep understanding not only of the investment made but also of the program itself. Long-term programs are more difficult to measure as far as cost-benefit is concerned. Requires expertise not only in mathematics and statistics but also in the field of investigation itself.

Networks
Description: Calculates collaborations between institutions and individuals on a domestic and global scale. Institutions and individuals that develop and maintain a prolific research network are not only more productive but also active, visible and established.
Main uses: Enabling the tracking of highly connected and globally active individuals and institutions. Allowing evaluators to benchmark collaborating individuals and institutions against each other.
Main challenges: Affiliation names as mentioned in published papers are not always standardized, making them difficult to trace. Education in a different country that did not result in a publication cannot be measured, making this particular aspect of expertise building impossible to trace.

Table 1 - Types of evaluative indicators

Big data and its effect on evaluation methods

Big data refers to collections of data sets so large and complex that they become difficult to process using on-hand database management tools or traditional data processing applications. The advent of supercomputers and cloud computing able to process, analyze and visualize these datasets also affects evaluation methods and models. While a decade ago scientific evaluation relied mainly on citation and publication counts, many of them compiled manually, today these data are not only available digitally but can also be triangulated with other data types (9). Table 2 depicts some examples of big datasets that can be combined in a bibliometric study to investigate different phenomena related to publications and scholarly output. Thus, for example, publication and citation counts can be triangulated with collaboration indicators, text analysis and econometric measures to produce a multi-level view of an institution, program or individual. Yet the availability and processing capabilities of these large datasets do not necessarily mean that evaluation becomes simple or easy to communicate. In fact, as evaluation models become more complex, administrators and evaluators find it harder to reach consensus on which model best depicts the productivity and impact of scientific activities. These technological abilities are becoming a breeding ground for more indices, models and measures, and while each may be valid and grounded in research, together they present a challenge in deciding which are best to use and in what setting.

Combined datasets: Citation indexes and usage log files of full-text publication archives
Studied phenomena: Downloads versus citations; distinct phases in the processing of scientific information
Typical research questions: What do downloads of full-text articles measure? To what extent do downloads and citations correlate?

Combined datasets: Citation indexes and patent databases
Studied phenomena: Linkages between science and technology (the science–technology interface)
Typical research questions: What is the technological impact of a scientific research finding or field?

Combined datasets: Citation indexes and scholarly book indexes
Studied phenomena: The role of books in scholarly communication; research productivity taking scholarly book output into account
Typical research questions: How important are books in the various scientific disciplines, how do journals and books interrelate, and what are the most important book publishers?

Combined datasets: Citation indexes (or publication databases) and OECD national statistics
Studied phenomena: Research input or capacity; evolution of the number of active researchers in a country and the phase of their careers
Typical research questions: How many researchers enter and/or move out of a national research system in a particular year?

Combined datasets: Citation indexes and full-text article databases
Studied phenomena: The context of citations; sentiment analysis of the scientific-scholarly literature
Typical research questions: In what ways can one objectively characterize citation contexts and identify implicit citations to documents or concepts?

Table 2 - Compound Big Datasets and their objects of study. Source: Research Trends, Issue 30, September 2012 (9)


As shown by this brief review, research assessment manifests itself in different methodologies and indicators. Each methodology has its strengths and limitations, and is associated with a certain risk of arriving at invalid outcomes. Indicators and other science metrics are essential tools on two levels: in the assessment process itself, and on the meta level aimed at shaping that process. Yet their function on these two levels differs. On the first, they are tools in the assessment of a particular unit, e.g. a particular individual researcher or department, and may provide one of the foundations of evaluative statements about such a unit. On the second level, they provide insight into the functioning of a research system as a whole and help draw general conclusions about its state, assisting in drafting policy conclusions regarding the overall objective and general set-up of an assessment process.

Closely defining the unit of assessment and the evaluative methodologies to be used can provide a clue as to how peer review and quantitative approaches might be combined. For instance, the complexity of finding appropriate peers to assess all research groups in a broad science discipline in a national research assessment exercise may urge the organizers of that exercise to carry out a bibliometric study first and decide on the basis of its outcomes in which specialized fields or for which groups a thorough peer assessment seems necessary.

As Ben Martin pointed out in his 1996 article (10), this is true not only for metrics but also for peer review. It is the task of members from the scholarly community and the domain of research policy to decide what are acceptable “error rates” in the methodology and indicators being used, and whether its benefits prevail over its detriments. Bibliometricians and other science and technology analysts should provide insight into the uses and limits of various types of metrics, in order to help scholars and policy makers to carry out such a delicate task.


(1)    Vale, R. D. (2012). “Evaluating how we evaluate”, Molecular Biology of the Cell, Vol. 23, No. 17, pp. 3285-3289.
(2)    Zare, R. N. (2012). “Editorial: Assessing academic researchers”, Angewandte Chemie - International Edition, Vol. 51, No. 30, pp. 7338-7339.
(3)    Simons, K. (2008). “The misused impact factor”, Science, Vol. 322, No. 5899, pp. 165.
(4)    O'Connell, C. (2013). “Research discourses surrounding global university rankings: Exploring the relationship with policy and practice recommendations”, Higher Education, Vol. 65, No. 6, pp. 709-723.
(5)    Imbens, G.M. & Wooldridge, J.M. (2008). “Recent Developments in the Econometrics of Program Evaluation”, The National Bureau of Economic Research Working Paper Series.
(6)    Arthur, M. W., & Blitz, C. (2000). “Bridging the gap between science and practice in drug abuse prevention through needs assessment and strategic community planning”, Journal of Community Psychology, Vol. 28, No. 3, pp. 241-255.
(7)    Lee, L. S., Pusek, S. N., McCormack, W. T., Helitzer, D. L., Martina, C. A., Dozier, A. M., & Rubio, D. M. (2012). “Clinical and Translational Scientist Career Success: Metrics for Evaluation”, Clinical and translational science, Vol. 5, No. 5, pp. 400-407.
(8)    Taylor, M. (2013). “Exploring the boundaries: How altmetrics can expand our vision of scholarly communication and social impact”, Information Standards Quarterly, Vol. 25, No. 2, pp. 27-32.
(9)    Moed, H.F. (2012). “The Use of Big Datasets in Bibliometric Research”, Research Trends, Issue 30, September 2012.
(10) Martin, B.R., (1996). “The use of multiple indicators in the assessment of basic research”, Scientometrics, Vol. 36, No. 3, pp. 343-362.

Stem cell research: Trends in and perspectives on the evolving international landscape

Stem cell research is an exciting yet complex and controversial science. In this piece, Alexander van Servellen and Ikuko Oba present the most important findings of their recent study on publication trends in Stem Cell Research.



Stem cell research is an exciting yet complex and controversial science. The field holds the potential to revolutionize the way human diseases are treated, and many nations have therefore invested heavily in stem cell research and its applications. However, human stem cell research is also controversial with many ethical and regulatory questions that impact a nation’s policies.

Elsevier recently partnered with EuroStemCell and Kyoto University’s Institute for Integrated Cell-Material Sciences (iCeMS) to study publication trends in stem cell research. The resulting paper was published online to coincide with the World Stem Cell Summit in San Diego on December 6th, 2013. The study provides an overview of the stem cell research field as a whole, with particular focus on pluripotent stem cells.

Pluripotent stem cells are of particular interest because they are undifferentiated cells with the potential to differentiate into virtually any cell type in the body (1; see Figure 1). This property opens the door to clinical applications such as cell and organ replacement (2) and may accelerate drug discovery, drug screening and toxicological assessment. There are different kinds of pluripotent stem cells. Embryonic stem cells (ES) are sourced from a blastocyst (an early embryo); when sourced from human blastocysts they are called human embryonic stem cells (hES). Induced pluripotent stem cells (iPS), discovered in 2006 by Shinya Yamanaka and colleagues at Kyoto University, are sourced from body cells, which are then genetically reprogrammed to become pluripotent. For more detailed information on stem cells, please refer to our study (3).

The document sets underlying our analyses were created using keyword searches which are provided in the methodology section of our study, and were limited to articles, reviews and conference proceedings. They include primary research articles as well as other publication types, such as reviews, papers on policy and regulation, ethical considerations, etcetera. In this article we briefly review some key findings of our study, and expand by having a closer look at the clinical theme ‘drug development’ using SciVal. We will also examine the publication trends of China and the United States specifically, to see whether we can observe the impact of country level policy decisions in the publication data.



Figure 1: Stem Cell types and characteristics

Publication output, growth and field-weighted citation impact

Our study found that the overall corpus of stem cell related papers shows a relatively fast growth rate and citation impact.

Stem cell publications show a Compound Annual Growth Rate (CAGR) of 7.0% from 2008 to 2012, more than twice the 2.9% CAGR for global publication output on all topics in the same period. Stem cell publications have a Field-Weighted Citation Impact (FWCI) of approximately 1.5 throughout the 2008-2012 period, which indicates that stem cell papers, on average, received 50% more citations than other papers published in related disciplines in that period. (The stem cell field and its subtypes are custom subject areas created using keyword searches; each document set therefore includes publications belonging to various disciplines of the All Science Journal Classification.)
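Both headline metrics are simple ratios once the counts are in hand; a minimal sketch with hypothetical values (in practice the expected-citation baseline comes from the database’s field, year and document-type averages):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over a span of `years` annual steps."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

def fwci(citations: float, expected_citations: float) -> float:
    """Field-Weighted Citation Impact: actual citations divided by the
    average citations of comparable publications (same field, year, type)."""
    return citations / expected_citations

# Hypothetical counts: output growing from 10,000 papers to ~13,100 over
# the four annual steps from 2008 to 2012 corresponds to a CAGR of ~7%
print(round(cagr(10_000, 13_108, 4), 3))  # 0.07
# A paper with 15 citations where comparable papers average 10 has FWCI 1.5
print(fwci(15, 10))  # 1.5
```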

Looking at specific types of stem cell research, the emergence of the iPS cell field (first publication in 2006) stands out. iPS cell papers show explosive growth and the highest impact of all types of stem cell research papers. The FWCI of the iPS field was extremely high just after its discovery, as might be expected of an emerging field: the FWCI calculated at the beginning of the period was based on relatively low publication counts, which are more subject to outlier effects than later data-points based on larger publication volumes. The decline in FWCI (see Figure 3) should therefore not be interpreted as a decrease in the quality of research, but as a natural and expected decline as publication volume increases. Nonetheless, the 3,080 iPS cell papers published between 2008 and 2012 have a FWCI of 2.93, almost three times the world level for all papers published in related disciplines, which is strong evidence of the sustained recognition and importance of the emerging field of iPS cell research.
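The outlier effect described above is easy to demonstrate: with a skewed citation distribution, the FWCI of a small field swings much further from its long-run value than that of a large field. A toy simulation (the citation distribution and the world-average value are invented purely for illustration):

```python
import random

random.seed(42)
EXPECTED = 5.0  # hypothetical world-average citations per paper

def fwci_spread(n_papers: int, trials: int = 2000) -> float:
    """Simulate a field of n_papers whose per-paper citations follow a
    skewed (exponential) distribution, and return the max-min spread of
    the field's FWCI across many independent trials."""
    results = []
    for _ in range(trials):
        cites = [random.expovariate(1 / EXPECTED) for _ in range(n_papers)]
        results.append(sum(cites) / n_papers / EXPECTED)
    return max(results) - min(results)

print(fwci_spread(20))    # small field: FWCI swings widely around 1.0
print(fwci_spread(2000))  # large field: FWCI stays close to 1.0
```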

We observed that hES cell publication output peaked in 2010, and while ES cell research overall shows a high publication volume, it is predominantly represented by non-human ES cell research (see Figure 2). The FWCI for ES cell and hES cell publications also remained relatively stable during the same period, at around 1.8 times the world average for ES cell publications, and over two times world average for hES cell publications.


Figure 2 - Global publication count (1996-2012) and compound annual growth rate (CAGR)(2008-2012) for all stem cells (Stem Cells), ES cells (all organisms; ESCs), hES cells (hESCs), and iPS cells (iPSCs). Source: Scopus.



Figure 3 - The FWCI of publications on stem cells overall and by cell type from 2008-2012. The pale blue line represents the global average field-weighted citation impact for all publications in the various subject areas, assigned to the journals in which stem cell papers are published. Source: Scopus


Clinical themes: Regenerative Medicine and Drug Development

Our study also examined the extent to which stem cell publications are aligned with the societal goals of developing new treatments for diseases, by analyzing the publications for use of keywords related to two themes: regenerative medicine and drug development. The results show that more than half of all stem cell publications do not use keywords related to either theme (see Figure 4). Such publications may be related to basic research which addresses the fundamental biology of stem cells. These specific themes may also not be relevant to many clinical or translational publications, e.g., those related to hematopoietic stem cell transplantation and cancer (translational research is scientific research that helps to make findings from basic science useful for practical applications that enhance human health and well-being). It should also be noted that the search used to compile the document set for overall stem cell research was purposely broad, and can be expected to include stem cell research of all kinds, as well as research which refers to stem cells in the title, abstract, and keywords, but may not necessarily be considered “stem cell research” per se.

 “Today, stem cell research is more about understanding than about treating illnesses. I do think it’s most important to understand how our tissues are formed, and how they get ill. I’d go further and say that understanding stem cells means understanding where we come from. If we think of the embryonic stem cells, they tell us a lot about how our bodies develop from an embryo. They provide a window on events which we couldn’t otherwise observe.”

— Elena Cattaneo, Full Professor, Director, UniStem, Center for Stem Cell Research, University of Milan


Figure 4 - The percentage of stem cell papers published from 2008 to 2012 using keywords related to “drug development,” “regenerative medicine,” or other by cell type. Source: Scopus


 It is not surprising to see that regenerative medicine is significantly represented within each type of stem cell research. Alongside positive developments in stem cell biology, regenerative medicine has enabled the development of new biotechnologies that promote self-repair and regeneration, such as the construction of new tissues to improve or restore the function of injured or destroyed tissues and organs (4).

Drug development is represented by a much smaller share of each type of stem cell research. The fact that many more iPS cell papers were related to drug development (11%) compared to ES cells (4%) and stem cells overall (2%) stands out. This may reflect the particular potential that iPS cells hold for the development of disease models, personalized medicine, and drug toxicity testing. iPS cells can be derived from selected living individuals, including those with inherited diseases and their unaffected relatives, which could allow the screening process to account for genetic differences in response to potential new drugs.


 Exploring Drug Development using SciVal

To expand on the analysis done in the study, we used the new generation of SciVal to examine stem cell papers related to drug development, setting up the relevant research fields using the same keywords applied in our initial study. The results are presented in Figure 5. The number of iPS cell papers related to drug development has grown fast since the first iPS cell paper was published in 2006, and has since surpassed the numbers of ES and hES cell papers related to drug development.


Figure 5 Number of global publications related to “drug development” in ES, hES and iPS cell research 2006-2012. Source: SciVal.


 “I believe the biggest impact to date of iPS cell technology is not regenerative medicine, but in making disease models, drug discovery, and toxicology testing…”

— Shinya Yamanaka, Director Center for iPS Cell Research and Application (CiRA), Kyoto University.


World publication activity in embryonic stem cell (ES), human embryonic stem cell (hES) and induced pluripotent stem cell (iPS) research

Stem cell research has provoked debate regarding the ethics and regulation of the research and resulting therapies. Initially these discussions focused largely on the moral status of the embryo. The discovery of iPS cells raised the possibility that ES cell research would no longer be necessary, thereby circumventing the ethical issues present in embryonic research. This has not been the case, as the stem cell field continues to rely on both ES and iPS cell research to progress the understanding of pluripotency and potential applications (5). Furthermore, iPS cell research is not free of ethical considerations, both in terms of how the cells may be used and the question of tissue ownership. Looking at the data, we see continued publications in ES and hES cell research, but observe that the global volume of iPS publications surpassed that of hES cell publications in 2010 (see Figure 6). There also seems to be an overall slowing in growth, and even a recent decrease, in world ES and hES cell publication output. These findings should be interpreted with caution, keeping in mind that our datasets represent publications which use keywords related to stem cell research, and not solely “stem cell research papers”.


Figure 6 Global Stem Cell publications (top) and ES, hES and iPS cell publications (bottom), as a share of total world output, from 1996-2012. Source: Scopus


Stem Cells in China

We also examined the publication trends of China and the United States specifically, to see whether we can observe the impact of country level policy decisions in the publication data.

China is a country which shows steady growth in stem cell research supported by its major funding initiatives. In 2001, the Chinese Ministry of Science and Technology (MOST) launched two independent stem cell programs followed by a number of funding initiatives intended to further promote stem cell research, applications and public awareness. At the same time, China has been working to strengthen ethical guidelines and regulations. In total, the national government’s stem cell funding commitment is estimated at more than 3 billion RMB (close to 500 million USD) for the 5 year period from 2012 to 2016. Confronted with the healthcare needs of a rapidly aging population of nearly 1.4 billion, the impetus behind much stem cell research, so far, has understandably been clinical translation and development (6).

Looking at the publication data from our study, we see that stem cell publications have grown from representing 0.2% of China’s total publication output in 2001 to a peak of 0.82% in 2008, followed by a marginal decline to 0.76% by 2012 (see Figure 7). As also observed globally, China’s iPS cell publication output surpassed hES cell publication output in 2010, after which hES cell output shows fluctuation.


Figure 7 China’s Stem Cell publications (top) and ES, hES and iPS cell publications (bottom), as a share of total country output, from 1996-2012. Source: Scopus

Science policy and Human Embryonic Stem Cell (hES) research in the USA

The United States is an interesting case study because, as reported in our study, it is the world leader in stem cell research: it produces the highest absolute publication volume, shows high relative activity levels, indicating a strong focus on stem cell research, and has a high field-weighted citation impact. Yet it has had to grapple with the practical and ethical dilemmas inherent in this field, and with the changing views of successive administrations.

The result is a series of policy changes, some of which limited federal funding for hES cell research, while others loosened the limitations. In Figure 8 we map these policies along with the corresponding publication output (relative to total country output). Despite the restrictive policies between 2001 and 2009, the United States shows steady output growth, supported through individual, state and industry funding as well as donations. We do observe changes in hES cell publication output that coincide with changes in regulation. While such changes in publication output are probably not best explained with a one-factor model, these findings are hardly surprising, as we expect science policy to greatly impact scientific activity. Such an analysis can provide insight into the degree to which science policy has indeed affected publication output.


Figure 8 – USA’s hES publications as a share of total country output, from 1996-2012 and relevant US policy decisions. Source: Scopus (and various sources for policy decisions)



In recent years, stem cell research has grown remarkably, showing a growth rate more than double the rate of world research publications from 2008 to 2012. However, this increase is not uniform across all stem cell research areas. Our analysis showed that both ES and hES fields have grown more slowly than the stem cell field overall. In contrast, iPS cell publications have shown explosive growth, as would be expected of a new and promising field of research, and iPS cell publication volumes surpassed that of hES cell publications in 2010. However, both cell types continue to be highly active areas.

Stem cell research has attracted considerable attention within the scientific community: stem cell publications overall are cited 50% more than all other publications in related disciplines, while ES cell publications are cited twice the world rate, and iPS cell publications nearly three times the world rate. This high-growth, high-impact field encompasses research across many cell types, with a focus ranging from the most fundamental to the clinical. Reflecting the field’s ongoing development and clinical promise, approximately half of all stem cell publications are associated with regenerative medicine or drug development, a trend that is particularly pronounced in iPS cell research.

Stem cell research is developing fast, with some experimental pluripotent stem cell treatments already in clinical trials. Active debates are underway to adapt regulatory frameworks to address the specific challenges of developing, standardizing, and distributing cell-based therapies, while advances in basic research continue to provide a fuller understanding of how stem cells can be safely and effectively used. Cell replacement and transplantation therapies are not the only application of stem cell research: the first steps are already being taken towards using cells derived from pluripotent stem cells in drug discovery and testing. It is with great interest and anticipation that we watch the further development of this exciting field of science.



(1)    Fakunle, E.S., Loring, J.F. (2012) "Ethnically diverse pluripotent stem cells for drug development", Trends in molecular medicine, Vol. 18, No. 12, pp. 709-716.
(2)    Csete, M. (2013) “Chapter 84 - Regenerative Medicine”, In Transfusion Medicine and Hemostasis (2nd. ed.), edited by Beth H. Shaz, Christopher D. Hillyer, Mikhail Roshal and Charles S. Abrams. San Diego: Elsevier, 2013, pp. 559-563.
(3)    EuroStemCell, iCeMS, Elsevier (2013) “Stem Cell Research Trends and Perspectives on the Evolving International Landscape”.
(4)    Gurtner, G.C., Callaghan, M.J., Longaker, M.T. (2007) “Progress and potential for regenerative medicine”, Annu Rev Med, Vol. 58, pp. 299-312.
(5)    Smith, A., Blackburn, C. (2012) “Do we still need research on human embryonic stem cells?”.
(6)    Yuan, W., Sipp, D., Wang, Z.Z., Deng, H., Pei D., Zhou, Q., Cheng, T., (2012) “Stem Cell Science On the Rise in China”, Cell Stem Cell, Vol 10, No. 1, pp 12-15.



Research evaluation in Israel: Interview with Dr. Daphne Getz

For this issue Dr. Gali Halevi interviewed Dr. Daphne Getz, to hear more about the way research evaluation is carried out in Israel, and in particular on the recent report on Israeli scientific output published by the Samuel Neaman Institute, for which she was the lead investigator.

Read more >

In 2013, the Samuel Neaman Institute published a report covering the Israeli scientific output from 1990 to 2011, identifying the country’s leading scientific disciplines and comparing them to countries around the world. With its unique geographical location and demographic composition, Israel presents an interesting case of scientific capabilities and output as well as collaborative trends. For this issue, we interviewed Daphne Getz, the lead investigator of this report.

Dr Daphne Getz

Dr. Daphne Getz is the head of the CESTIP (Center of Excellence in Science, Technology and Innovation Policies), and has been a senior research fellow at the Samuel Neaman Institute (SNI) since 1996. Dr. Getz is a specialist in R&D policy, technology and innovation, policies on new and emerging technologies, and relationships between academia, industry and government, among others. She has represented academia and the Technion (Israel Institute of Technology) in the MAGNET R&D Consortia and also represents Israeli academia in several EU and UN projects. She has a D.Sc. from the Technion in Physical Chemistry and has served in several positions related to R&D management in industry. Over the years, Dr. Getz has initiated numerous projects, including Israeli indicators for Science, Technology and Innovation, evaluation of R&D programs, and the evaluation of Israeli R&D outputs using bibliometrics.


Could you briefly describe SNI (Samuel Neaman Institute), its core activities and role in informing science policy in Israel?

Samuel Neaman Institute (SNI) is an Israeli organization established in 1978 at the Technion (the Israel Institute of Technology). Its main objective is to conduct independent multi-disciplinary research and provide insights into Israel’s Science, Technology & Innovation (STI), education, economy and industry as well as infrastructure and social development for policy makers. The institute has a key role in outlining Israel’s national policies in science, technology and higher education and serves decision makers through its research projects and surveys. The institute operates within the framework of a budget funded by Mr. Samuel Neaman and external research grants from the Ministry of Science, Technology and Space, the Office of the Chief Scientist in the Ministry of Economy, the Ministry for Environmental Protection, the European Commission’s Seventh Framework Programme grants, and more. SNI employees are highly professional analysts chosen because of their level of expertise in different disciplines. Each year, the institute conducts many projects and publishes numerous reports covering a variety of topics related to Israel’s technological, economic and social capabilities.

What types of evaluation programs does SNI develop and conduct?

The institute is often called upon to provide evaluations of specific programs or institutions in Israel. Some examples of such evaluative research are:

1. Program evaluation:
In some cases, SNI is requested to evaluate specific scientific programs, for example, the Scientific Infrastructure Program of the Ministry of Science and Technology, which was launched in 1995 in an attempt to bridge the gap between basic and applied research. SNI was called to methodologically evaluate how and to what extent this program benefitted the Israeli economy and society. In addition, the institute set out to study the effectiveness of the program, its actual successes and failures, and to help decision makers set priorities in R&D policies and investments.

2. Evaluation of R&D programs supported by the Office of the Chief Scientist (OCS):
The OCS supports several scientific programs aimed to support technology transfer between academia and research institutions and the industry. SNI was called to evaluate some of these programs and analyze their effectiveness, success and future development to ensure well-constructed processes for technology transfer to industry.

3. Evaluation of individual institutions:
From time to time, SNI is called upon to evaluate specific institutions within academia. In such cases SNI uses quantitative and qualitative methodologies to evaluate their performance in terms of output, influence and contribution to science, economy and society.

4. Evaluation of the Israeli research output:
Since 2003, the institute has used advanced bibliometric methodologies to conduct in-depth studies on the quality and quantity of Israeli research outputs (especially relating to scientific publications and patent analysis). Specific fields such as Nanoscience and Nanotechnology, Aerospace Engineering, Energy, Environment, and Stem Cells are analyzed and benchmarked against the rest of the world.


What data does the institute collect and analyze in order to produce reports on Israel’s STI capabilities?

SNI uses a variety of data sources in order to conduct its research and produce its reports, including intellectual property (such as patents and trademarks), human resources and demographics, as well as infrastructure and economic indicators. In addition, SNI established a Bibliometric department, which focuses on analyzing publication data such as number of journal articles, number of citations, conferences etc., as well as scientific collaborations with the international community.


Which indicators did the institute develop in order to be able to benchmark Israel’s STI?

SNI developed and maintains a large and diverse database of indicators relating to the monitoring and evaluation of R&D activities, scientific capabilities and technological infrastructures and to the funding of such activities in Israel. This database has become the most reliable and trusted source for STI evaluation in the country. In 2013, SNI published the fourth edition of “Indices of Science, Technology and Innovation in Israel: An International Comparison”. It contains key data on Israel's Science and Technology input and output and covers more than a decade of international comparisons, as well as many other indices, including position indicators. In the framework of patent research, SNI developed the "distinct invention" indicator. This indicator is based on patent family data and is aimed at neutralizing double counting of identical patent applications (inventions) as a result of their filing in numerous patent offices around the world.
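The deduplication logic behind a family-based "distinct invention" count can be sketched as follows. This is an illustrative sketch only: the data model (field names, family identifiers) is invented for the example and is not SNI's actual implementation.

```python
from collections import namedtuple

# A single filing of an invention at a particular patent office.
# (Illustrative schema; real patent-family data carries many more fields.)
Application = namedtuple("Application", ["family_id", "office", "priority_year"])

def distinct_inventions(applications):
    """Count distinct inventions by collapsing each patent family
    (all filings of one invention across offices) to a single record,
    neutralizing double counting of the same invention."""
    return len({app.family_id for app in applications})

filings = [
    Application("FAM-1", "USPTO", 2008),
    Application("FAM-1", "EPO", 2008),    # same invention, filed in Europe
    Application("FAM-2", "USPTO", 2009),
]
# Three applications, but only two distinct inventions.
```

Counting raw applications here would give 3; collapsing by family gives 2, which is the behaviour the indicator is designed to achieve.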


Please list some of the main findings of the latest report on Israel’s STI on the following:

1.  Leading disciplines by quality:
According to the latest report, Israel’s leading scientific disciplines are Space Science, Material Sciences, Molecular Biology & Genetics, and Biology & Biochemistry. Leading sub-disciplines are Cell & Tissue Engineering, Biomaterials, Biophysics, Biochemistry & Molecular Biology, Biomedical Engineering, Composite Materials, and Nanotechnology. A significant growth by quantity was seen in disciplines such as Economics and Social Sciences.

2. Developing disciplines:
Some of the leading trends found, based on both quantitative and qualitative measures, are Tissue Engineering, Physics (Particles & Fields), Astronomy & Astrophysics, Cell Biology, and Biochemistry & Molecular Biology. In some of the sub-disciplines within these areas of research, Israel has a leading global role.

3. Main collaboration trends worldwide:
Overall, of Israel’s scientific publications in 2011, 46% was the result of international collaboration (40% in 2007). The main countries with which Israeli scientists collaborate are the USA, Germany and France. In addition to these, we found a significant growth in collaborations with South East Asian countries such as Singapore. An analysis of USPTO patent data relating to the 1999-2008 time period revealed that 83% of the cooperation in inventive activity was conducted with American inventors (highly influenced by the scope of US multinational firms’ activities in Israel), 10% with inventors from EU-27 countries (mainly Germany, France and the UK) and 7% with inventors from the rest of the world.

4. Main challenges in the current state of Israel’s STI and your recommendations:
An appropriate distribution of funding is always a challenge for decision makers. In our report we demonstrated that although highly funded disciplines such as Neuroscience did perform well, other - less funded - areas such as Space Science and Cell & Tissue Engineering showed significant growth and development. This enabled us to highlight areas that will need policy and funding attention in the coming years.


SNI produces numerous studies on Israel’s STI; could you please mention one or two of such studies (e.g. environmental conservation, energy) and their main results?

One of the research reports we produced in 2013 was “Science & Technology Education in Israel”, which aimed to provide indicators to inform strategy makers in education, and to help prepare them for a possible shortage of Science and Technology teachers in high schools. A unique report titled “Success stories” features 78 success stories that depict ultra-orthodox individuals in Israel, both men and women, who have successfully integrated into the world of academic education, employment and the military. Another "hot" topic is Energy; we have an ongoing project named "Energy Master Plan", responsible for evaluating the environmental impacts of the different potential energy scenarios as well as defining environmental indicators for the energy market. The Energy Forum Meetings aim to provide a platform where professionals can discuss specific energy-related topics. At the same time, the forum allows multilateral discussions encouraging projects in the fields of renewable energy and energy conservation. The forum meetings serve as a platform for defining professional, applicable positions, to be used by relevant decision makers. Other reports and findings can be found on our website:


Given the variable delays and uncertain linkages between R&D inputs and outputs (and ultimately, economic development), how do you draw conclusions (if indeed you do) on the impact of STI activities on the Israeli economy?

The question of causation or causality between R&D inputs and economic outputs is a well-known and researched problem in the R&D economic literature. The main criticism is that a large number of models dealing with the relationship between technological change and economic growth probe the linkage directly by simply looking at the inputs (e.g. scientific publications, patents) and outputs (e.g. firm sales, GDP), without analyzing or understanding the process binding them.

In the process of our work in SNI, we place great emphasis on qualitative methodologies (interviews, surveys and unstructured questionnaires using open-ended questions) that to our best knowledge are better suited to understanding and probing the mechanism (the "black box") linking scientific inputs and economic performance.

A number of quantitative studies dealing with the relationship between R&D investments and economic growth were conducted at SNI (see “R&D Outputs in Israel – A Comparative Analysis of PCT Applications and Distinct Israeli Inventions”; “Investments in Higher Education and the Economic Performance of OECD Countries: Israel in an International Perspective”). In both of these studies we addressed the question of causality by developing a two-stage model of scientific and technological innovation. In this model, R&D investments generate scientific and technological outputs (e.g. patents), and these technological outputs in turn become inputs which explain economic performance. In the process of this work much emphasis was placed on the quality of the R&D indicators. For example, we extracted patent application data by priority date (the earliest filing date of the patent application anywhere in the world), as opposed to application or grant date, in order to more accurately represent the time of invention. In addition, introducing a time lag between R&D inputs and economic outputs is essential to correctly represent the real-world relationship and sequence between stimulus and response.
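The two-stage structure described here can be sketched as two chained linear fits, with patents as the intermediate variable that is first an output and then an input. The sketch below is an illustration only: the synthetic data, the two-year lag, and the use of simple least-squares fits are assumptions for the example, not SNI's actual model specification.

```python
import numpy as np

def two_stage_fit(rd_investment, patents, gdp_growth, lag=2):
    """Stage 1: regress patent output on R&D investment lagged by `lag` years.
    Stage 2: regress economic performance on the patent output predicted by
    stage 1, so patents act first as an output and then as an input."""
    x1 = rd_investment[:-lag]            # R&D spending, lagged
    y1 = patents[lag:]                   # patent output `lag` years later
    b1 = np.polyfit(x1, y1, 1)           # stage 1: slope and intercept
    patents_hat = np.polyval(b1, x1)     # predicted patent output
    b2 = np.polyfit(patents_hat, gdp_growth[lag:], 1)  # stage 2
    return b1, b2

# Perfectly linear synthetic data: patents[t] = 2*rd[t-2] + 1, gdp = 3*patents + 5.
rd = np.arange(12.0)
patents = np.concatenate([np.zeros(2), 2 * rd[:-2] + 1])
gdp = 3 * patents + 5
b1, b2 = two_stage_fit(rd, patents, gdp, lag=2)
# b1[0] is close to 2 (stage 1 slope); b2[0] is close to 3 (stage 2 slope).
```

On real data the two stages would of course not be deterministic, and the choice of lag length is itself an empirical question, as the interview notes.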
Currently, the institute’s investigators are working on several reports focusing on technology transfer and collaboration between industry and academia, international scientific collaborations, and energy sources.

For more information please visit


Article downloads: An alternative indicator of national research impact and cross-sector knowledge exchange

Dr. Andrew Plume and Dr. Judith Kamalski demonstrate how download data can be used in research assessment to offer a different perspective on national research impact, and to give a unique view of knowledge exchange between authors and readers in the academic and corporate sectors.

Read more >

To date, the rise of alternative metrics as supplementary indicators for assessing the value and impact of research articles has focussed primarily on the article and/or author level. However, such metrics – which may include social media mentions, coverage in traditional print and online media, and full-text download counts – have seldom been applied to higher levels of aggregation such as research topics, journals, institutions, or countries. In particular, article download counts (also known as article usage statistics) have not been used in this way owing to the difficulty of aggregating download counts for articles across multiple publisher platforms to derive a holistic view. While the meaning of a download, defined as the event where a user views the full-text HTML of an article or downloads the full-text PDF of an article from a full-text journal article platform, remains a matter of debate, it is generally considered to represent an indication of reader interest and/or research impact (1, 2, 3).


As part of the report ‘International Comparative Performance of the UK Research Base: 2013’, commissioned by the UK’s Department for Business, Innovation and Skills (BIS), download data were used in two different ways to unlock insights not otherwise possible from more traditional, citation-based indicators. In the report, published in December 2013, download data were used alongside citation data in international comparisons to offer a different perspective on national research impact, and were also used to give a unique view of knowledge exchange between authors and readers in two distinct but entwined segments of the research-performing and research-consuming landscape: the academic and corporate sectors.


Comparing national research impact using a novel indicator derived from article download counts

Citation impact is by definition a lagging indicator: newly-published articles need to be read, after which they might influence studies that will be, are being, or have been carried out, which are then written up in manuscript form, peer-reviewed, published and finally included in a citation index such as Scopus. Only after these steps are completed can citations to earlier articles be systematically counted. Typically, a citation window of three to five years following the year of publication has been shown to provide reliable results (4). For this reason, investigating downloads has become an appealing alternative, since it is possible to start counting downloads of full-text articles immediately upon online publication and to derive robust indicators over windows of months rather than years.

While there is a considerable body of literature on the meaning of citations and indicators derived from them (5, 6), the relatively recent advent of download-derived indicators means that there is no clear consensus on the nature of the phenomenon that is measured by download counts (7). A small body of research has concluded however that download counts may be a weak predictor of subsequent citation counts at the article level (8).

To gain a different perspective on national research impact, a novel indicator called field-weighted download impact (FWDI) has been developed according to the same principles applied to the calculation of field-weighted citation impact (FWCI; a Snowball metric). The impact of a publication, whether measured through citations or downloads, is normalised for discipline-specific behaviours. Since full-text journal articles reside on a variety of publisher and aggregator websites, there is no central database of download statistics available for comparative analysis; instead, Elsevier’s full-text journal article platform ScienceDirect (representing some 16% of the articles indexed in Scopus) was used under the assumption that downloading behaviour across countries does not systematically differ between online platforms. However, there is an important difference between FWCI and FWDI in this respect: the calculation of FWCI relates to all target articles published in Scopus-covered journals, whereas FWDI relates to target articles published in Elsevier journals only. The effect of such differences will be tested in upcoming research. In the current approach, a download is defined as the event where a user views the full-text HTML of an article or downloads the full-text PDF of an article from ScienceDirect; views of an article abstract alone, and multiple full-text HTML views or PDF downloads of the same article during the same user session, are not included, in accordance with the COUNTER Code of Practice.
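The field-weighting principle can be sketched as: divide each article's count (downloads or citations) by the average count of comparable articles, so that a value of 1.0 equals the world average for that group. A minimal sketch follows; the grouping keys (field and year) and the data are invented for illustration, and the published FWCI/FWDI recipes involve further refinements not shown here.

```python
from collections import defaultdict

def field_weighted_impact(articles):
    """articles: list of dicts with 'field', 'year' and 'count' (downloads
    or citations). Returns, per article, the count divided by the average
    count of all articles in the same field-year group (the 'expected'
    count), so 1.0 means exactly the group average."""
    totals = defaultdict(lambda: [0, 0])      # (field, year) -> [sum, n]
    for a in articles:
        key = (a["field"], a["year"])
        totals[key][0] += a["count"]
        totals[key][1] += 1
    impacts = []
    for a in articles:
        s, n = totals[(a["field"], a["year"])]
        expected = s / n
        impacts.append(a["count"] / expected if expected else 0.0)
    return impacts

data = [
    {"field": "Physics", "year": 2012, "count": 30},
    {"field": "Physics", "year": 2012, "count": 10},
]
# Group average is 20, so the impacts are 1.5 and 0.5.
```

The same function applies unchanged whether `count` holds downloads (giving an FWDI-style value) or citations (giving an FWCI-style value), which is the symmetry the indicator design exploits.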

A comparison of the FWCI (derived from Scopus data) and FWDI in 2012 across 10 major research fields for selected countries is shown in Figure 1. The first point of note is that FWDI is typically more consistent across fields and between countries. This observation may reflect an underlying convergence of FWDI between fields and across countries owing to a greater degree of universality in download behaviour (i.e. reader interest or an intention to read an article as expressed by article downloads) than in citation behaviour, but this cannot be discerned from analysis of these indicators themselves and remains untested.

Nonetheless, FWDI does appear to offer an interesting supplementary view of a country’s research impact; for example, the relatively rounded and consistent FWCI and FWDI values across fields for established research powerhouses such as the UK, USA, Japan, Italy, France, Germany and Canada contrast with the much less uniform patterns for the emergent research nations of Brazil, Russia, India and China, for which field-weighted citation impact is typically lower and more variable across research fields than field-weighted download impact. This observation suggests that for these countries reader interest expressed through article downloads is not converted at a very high rate to citations. Again, this points to the idea that users download (and by implication, read) widely across the literature but cite more selectively, and may reflect differences in the ease (and meaning) of downloading versus citing. Another possible explanation lies in the fact that, depending on the country, there may be weaker or stronger overlap between the reading and the citing communities. A third aspect that may be relevant here is regional coverage: publications from these countries with a weak link between downloads and citations may be preferentially downloaded by authors from these same countries, only to be cited afterwards in local journals that are not as extensively covered in Scopus as English-language journals.

[Figure 1: country panels for the UK, USA, Russia, Japan, Italy, India, France, Germany, China, Canada and Brazil]

 Figure 1 — Field-weighted citation impact (FWCI) and field-weighted download impact (FWDI) for selected countries across ten research fields in 2012. For all research fields, a field-weighted citation or download impact of 1.0 equals world average in that particular research field. Note that the axis maximum is increased for Italy (to 2.5). Source: Scopus and ScienceDirect.


Examining authorship and article download activity by corporate and academic authors and users as a novel indicator of cross-sector knowledge exchange

Knowledge exchange is a two-way transfer of ideas and information; in research policy terms the focus is typically on academic-industry knowledge exchange as a conduit between public sector investment in research and its private sector commercialisation, ultimately leading to economic growth. Knowledge exchange is a complex and multi-dimensional phenomenon, the essence of which cannot be wholly captured with indicator-based approaches, and since knowledge resides with people and not in documents, much knowledge is tacit or difficult to articulate. Despite this, meaningful indicators of knowledge exchange are still required to inform evidence-led policy. To that end, a unique view of knowledge exchange between authors and readers in academia and corporate affiliations can be derived by analysis of the downloading sector of articles with at least one corporate author, and the authorship sector of the articles downloaded by corporate users.

Given the context of the ‘International Comparative Performance of the UK Research Base: 2013’ report, this was done on the basis of UK corporate-authored articles and UK-based corporate users. Again, ScienceDirect data was used under the assumption that downloading behaviour across sectors (academic and corporate in this analysis) does not systematically differ between online platforms.

A view of the share of downloads of articles with at least one author with a corporate affiliation (derived from Scopus) by downloading sector (as defined within ScienceDirect) in two consecutive and non-overlapping time periods is shown in Figure 2. Downloading of UK articles with one or more corporate-affiliated authors by users in other UK sectors indicates strong cross-sector knowledge flows within the country. 61.7% of all downloads of corporate-authored articles in the period 2008-12 came from users in the academic sector (see Figure 2), an increase of 1.1 percentage points over the equivalent share of 60.6% for the period 2003-07. Users in the corporate sector themselves accounted for 35.2% of downloads of corporate-authored articles in the period 2008-12, a decrease of 1.0 percentage point from the 36.2% share in the period 2003-07. Taken together, these results indicate high and increasing usage of corporate-authored research by the academic sector.
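The figures quoted above reduce to simple proportions, with period-over-period changes expressed in percentage points rather than relative percentages. A sketch using the reported shares (the raw download totals are not given in the report excerpt, so `share` is shown with invented counts):

```python
def share(count, total):
    """A sector's share of downloads, as a percentage of all downloads."""
    return 100.0 * count / total

def pp_change(share_new, share_old):
    """Change between two periods, in percentage points (a difference of
    shares, not a relative percentage change)."""
    return share_new - share_old

# Sector shares of downloads of corporate-authored UK articles, as reported.
academic_2003_07, academic_2008_12 = 60.6, 61.7
corporate_2003_07, corporate_2008_12 = 36.2, 35.2

# e.g. share(617, 1000) == 61.7 for hypothetical counts of 617 out of 1,000.
```

The distinction matters: a move from 60.6% to 61.7% is a 1.1 percentage-point increase but roughly a 1.8% relative increase, and conflating the two is a common source of error in share comparisons.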

[Figure 2]

Figure 2 — Share of downloads of articles with at least one corporate author by downloading sector, 2003-07 and 2008-12. Source: Scopus and ScienceDirect.


A view of the share of downloads of articles by users in the corporate sector (as defined within ScienceDirect) by author affiliation (derived from Scopus) in the same two time periods is shown in Figure 3. Downloading of UK articles by users in the UK corporate sector also suggests increasing cross-sector knowledge flows within the country. Some 52.6% of all downloads by corporate users in the period 2008-12 were of articles with one or more academically-affiliated authors, and 32.5% were of articles with one or more corporate authors (see Figure 3). Both of these shares have increased (by 1.3 and 2.1 percentage points, respectively) over the equivalent shares for the period 2003-07, while the share of articles with at least one medically-affiliated author downloaded by corporate users has decreased from one period to the next. Taken together, these results indicate high and increasing usage of UK academic-authored research by the UK corporate sector.

[Figure 3]

Figure 3 — Share of article downloads by corporate sector, 2003-07 and 2008-12. Shares add to 100% despite co-authorship of some articles between sectors owing to the derivation of shares from the duplicated total download count across all sectors. Source: Scopus and ScienceDirect.


Article downloads as a novel indicator: conclusion

In the ‘International Comparative Performance of the UK Research Base: 2013’ report, download data were used alongside citation data in international comparisons to help uncover fresh insights into the performance of the UK as a national research system in an international context.

Nevertheless, some methodological questions remain to be answered. Clearly, the assumption that download behaviours do not differ across platforms needs to be put to the test in future research. The analysis of the relationship between FWCI and FWDI showed how this relationship differs from one country to another. The examples provided for cross-sector downloading focus solely on the UK, and should be complemented with views of other countries.

We envisage that the approaches outlined in this article, now quite novel, will one day become commonplace in the toolkits of those involved in research performance assessments globally, to the benefit of research, researchers, and society.



(1)    Moed, H.F. (2005a) “Statistical relationships between downloads and citations at the level of individual documents within a single journal”, Journal of the American Society for Information Science and Technology, Vol. 56, No. 10, pp. 1088-1097.
(2)    Schloegl, C. & Gorraiz, J. (2010) “Comparison of citation and usage indicators: The case of oncology journals”, Scientometrics, Vol. 82, No. 3, pp. 567-580.
(3)    Schloegl, C. & Gorraiz, J. (2011) “Global usage versus global citation metrics: The case of pharmacology journals”, Journal of the American Society for Information Science and Technology, Vol. 62, No. 1, pp. 161-170.
(4)    Moed, H.F. (2005b) Citation Analysis in Research Evaluation. Dordrecht: Springer, p. 81.
(5)    Cronin, B. (2005) “A hundred million acts of whimsy?”, Current Science, Vol. 89, No. 9, pp. 1505-1509.
(6)    Bornmann, L., Daniel, H. (2008) “What do citation counts measure? A review of studies on citing behavior”, Journal of Documentation, Vol. 64, No. 1, pp. 45-80.
(7)    Kurtz, M.J. & Bollen, J. (2010) “Usage Bibliometrics”, Annual Review of Information Science and Technology, Vol. 44, No. 1, pp. 3-64.
(8)    See



Party papers or policy discussions: an examination of highly shared papers using altmetric data

Which scientific stories are most shared on social media networks such as Twitter and Facebook? And do articles attracting social media attention also get the attention of scholars and the mass media? In this article, Dr. Andrew Plume and Mike Taylor provide some answers, but also urge for caution when characterizing altmetric indicators.

Read more >

Which scientific stories are most shared on social media networks such as Twitter and Facebook?

Research on human health and social issues is often perceived as yielding the most shared scientific stories on social media networks such as Twitter and Facebook and – given its mainstream appeal – is often suggested to dominate the popular discussion around scholarly research online. Skeptics such as David Colquhoun, however, argue for its irrelevance: “Scientific works get tweeted about mostly because they have titles that contain buzzwords, not because they represent great science” (1).

So which is it to be? And do articles attracting social media attention also get the attention of scholars and the mass media? In this article, we seek to provide an approach to answering these questions.

With the rise of online scholarly publishing and the concomitant rise in the desire to create indicators of online attention to research articles and related outputs have come a number of providers of article-level data, collectively known as ‘altmetrics’. One leading commercial provider of such data tracks a variety of different indicators in four broad groups: Social Activity (e.g. Tweets and Facebook mentions), Mass Media (e.g. mentions on news sites such as BBC and CNN), Scholarly Commentary (e.g. mentions in scientific blogs), and Scholarly Activity (e.g. articles in reader libraries such as Mendeley).

In terms of the volume of online mentions of scholarly articles, Twitter and other social networks provide by far the largest number of data points. However, given Twitter’s broad user base (the majority being non-academics) and limited information content (being restricted to 140 characters per tweet), other indicators may be more significant in terms of understanding scholarly usage (2). For example, Mendeley and CiteULike are sharing and collaboration platforms used predominantly by researchers, while the mass media items and scientific blogs tracked by this provider are written by professional science journalists or by researchers themselves.



Data were collected from the provider’s API over the four months ending January 17th, 2014. On this date, the latest altmetric indicator data for all papers published in a selection of journals in 2013 with any captured online mentions were downloaded for analysis; in total, 13,793 articles with at least one altmetric indicator datapoint were included in this study. The actual journals monitored are detailed in the raw dataset, which is published on Figshare.

The data includes counts of online attention at article level from across a variety of different data sources. In order to simplify data analysis, we aggregated data counts into the four classes as defined above: Social Activity, Mass Media, Scholarly Commentary, and Scholarly Activity. For each class, articles were assigned to predefined percentile ranges (cohorts) based on the frequency of online mentions (see Table 1).
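The cohort assignment can be sketched as: rank articles within a class by mention count in descending order, then label each article with the smallest percentile boundary its rank falls inside. This is an illustrative sketch only; the boundary values and tie-breaking below are assumptions, not the study's exact procedure.

```python
def assign_cohorts(counts, boundaries=(0.5, 1.0, 5.0, 10.0, 25.0, 50.0, 100.0)):
    """Rank articles by mention count (descending) and label each with the
    smallest percentile boundary that its rank percentile falls within.
    counts: {article_id: mentions}; returns {article_id: cohort_label}."""
    ranked = sorted(counts, key=counts.get, reverse=True)
    n = len(ranked)
    cohorts = {}
    for rank, article in enumerate(ranked, start=1):
        pct = 100.0 * rank / n            # this article's rank percentile
        for b in boundaries:
            if pct <= b:
                cohorts[article] = f"top {b}%"
                break
    return cohorts

mentions = {"a": 500, "b": 40, "c": 12, "d": 3}
# "a" is rank 1 of 4 (the 25th percentile), so it lands in the "top 25.0%" cohort.
```

With the study's 13,793 articles, the top 0.5% boundary yields a cohort of 69 articles (13,793 × 0.005 ≈ 69), matching the cohort sizes discussed below Table 1.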

Cohorts | Number of articles included
Table 1 - Cohorts of articles based on the frequency of online attention within each class.

For example, the 69 papers comprising the top 0.5% of social activity account for 91,470 social actions, 445 mass media mentions, 540 scholarly comments and 1,571 scholarly actions, whereas the 69 papers comprising the top 0.5% of mass media activity account for 2,638 mass media mentions, 16,221 social actions, 779 scholarly comments and 4,856 scholarly actions.



 Headline-grabbers: Which articles got most social media attention in 2013?

Of the 69 articles belonging to the top 0.5% cohort in the Social Activity class (i.e. those articles most frequently mentioned in social media such as Twitter and Facebook), just 8 are full-length articles reporting the results of original research. The remainder are typically editorial features or news items from leading weekly journals such as The Lancet, BMJ and Nature; see Table 2 for the complete list. The original research articles cover topics in the popular consciousness including climate change, human health and diet, and online information and privacy: intuitively, the sort of articles one might expect to see attracting broad popular attention online. However, one article appears to have a less obvious popular slant (the Nature letter “Attractive photons in a quantum nonlinear medium”), but closer examination shows that it describes a novel technique for forcing photons to interact in a quantum nonlinear medium which may have applications in quantum processing, where the ability to have photons ‘see’ each other could overcome present technological limitations.

The remaining 61 articles (almost exclusively news and editorial features about original research reported elsewhere) cover a variety of topics including several on topics close to the heart of the academy: research careers, science funding, the future of higher education and scholarly publishing. The preponderance of items in this group from Nature (primarily the Nature News and Nature News Feature sections of the publication) suggest that Social activity may be more likely to reflect attention to short journalistic versions of current research results rather than the original research articles themselves; a worthy follow-up to this study would be to track the variation in performance across altmetric indicator classes of an original research article and the current awareness ‘news-worthy’ version of the same research.


Journal Article title DOI
Nature Cerebral organoids model human brain development and microcephaly 10.1038/nature12517
Nature Comment Climate science: Vast costs of Arctic change 10.1038/499401a
Nature Comment Neuroscience: My life with Parkinson's 10.1038/503029a
Nature Editorial Nuclear error 10.1038/501005b
Nature Editorial Science for all 10.1038/495005a
Nature Letter No increase in global temperature variability despite changing regional patterns 10.1038/nature12310
Nature Letter Attractive photons in a quantum nonlinear medium 10.1038/nature12512
Nature News Brazilian citation scheme outed 10.1038/500510a
Nature News Half of 2011 papers now free to read 10.1038/500386a
Nature News World's slowest-moving drop caught on camera at last 10.1038/nature.2013.13418
Nature News Genetically modified crops pass benefits to weeds 10.1038/nature.2013.13517
Nature News NSF cancels political-science grant cycle 10.1038/nature.2013.13501
Nature News Deal done over HeLa cell line 10.1038/500132a
Nature News Antibiotic resistance: The last resort 10.1038/499394a
Nature News Cosmologist claims Universe may not be expanding 10.1038/nature.2013.13379
Nature News Zapped malaria parasite raises vaccine hopes 10.1038/nature.2013.13536
Nature News See-through brains clarify connections 10.1038/496151a
Nature News Dolphins remember each other for decades 10.1038/nature.2013.13519
Nature News Researchers turn off Down’s syndrome genes 10.1038/nature.2013.13406
Nature News Astrophysics: Fire in the hole! 10.1038/496020a
Nature News Giant viruses open Pandora's box 10.1038/nature.2013.13410
Nature News Quantum gas goes below absolute zero 10.1038/nature.2013.12146
Nature News Stem cells reprogrammed using chemicals alone 10.1038/nature.2013.13416
Nature News Whole human brain mapped in 3D 10.1038/nature.2013.13245
Nature News Father’s genetic quest pays off 10.1038/498418a
Nature News Tracking whole colonies shows ants make career moves 10.1038/nature.2013.12833
Nature News Pesticides spark broad biodiversity loss 10.1038/nature.2013.13214
Nature News Animal-rights activists wreak havoc in Milan laboratory 10.1038/nature.2013.12847
Nature News Silver makes antibiotics thousands of times more effective 10.1038/nature.2013.13232
Nature News Methane leaks erode green credentials of natural gas 10.1038/493012a
Nature News When Google got flu wrong 10.1038/494155a
Nature News First proof that prime numbers pair up into infinity 10.1038/nature.2013.12989
Nature News Global carbon dioxide levels near worrisome milestone 10.1038/497013a
Nature News Underwater volcano is Earth's biggest 10.1038/nature.2013.13680
Nature News Did a hyper-black hole spawn the Universe? 10.1038/nature.2013.13743
PNAS Private traits and attributes are predictable from digital records of human behavior 10.1073/pnas.1218772110
Nature News How to turn living cells into computers 10.1038/nature.2013.12406
Nature News Small-molecule drug drives cancer cells to suicide 10.1038/nature.2013.12385
Nature News Brain-simulation and graphene projects win billion-euro competition 10.1038/nature.2013.12291
Nature News Rewired nerves control robotic leg 10.1038/nature.2013.13818
Nature News US government shuts down 10.1038/502013a
Lancet Letter Open letter: let us treat patients in Syria 10.1016/s0140-6736(13)61938-8
Nature News Blood engorged mosquito is a fossil first 10.1038/nature.2013.13946
BMJ Cancer risk in 680 000 people exposed to computed tomography scans in childhood or adolescence: data linkage study of 11 million Australians 10.1136/bmj.f2360
Nature News NIH mulls rules for validating key results 10.1038/500014a
PNAS Impact of insufficient sleep on total daily energy expenditure, food intake, and weight gain 10.1073/pnas.1216951110
Nature News Red meat + wrong bacteria = bad news for hearts 10.1038/nature.2013.12746
Nature News Who is the best scientist of them all? 10.1038/nature.2013.14108
Nature News Four-strand DNA structure found in cells 10.1038/nature.2013.12253
Nature News Weak statistical standards implicated in scientific irreproducibility 10.1038/nature.2013.14131
Nature News Mathematicians aim to take publishers out of publishing 10.1038/nature.2013.12243
BMJ Bicycle helmets and the law 10.1136/bmj.f3817
Nature News Barbaric Ostrich: 27th June 2013 10.1038/nature.2013.12487
American J of M The Autopsy of Chicken Nuggets Reads “Chicken Little” 10.1016/j.amjmed.2013.05.005
Nature News Stem cells mimic human brain 10.1038/nature.2013.13617
Nature News Mystery humans spiced up ancients’ sex lives 10.1038/nature.2013.14196
BMJ The future of the NHS--irreversible privatisation? 10.1136/bmj.f1848
Nature News Feature Archaeology: The milk revolution 10.1038/500020a
Nature News Feature Neuroscience: Solving the brain 10.1038/499272a
Nature News Feature Tissue engineering: How to build a heart 10.1038/499020a
Nature News Feature Theoretical physics: The origins of space and time 10.1038/500516a
Nature News Feature Online learning: Campus 2.0 10.1038/495160a
Nature News Feature Open access: The true cost of science publishing 10.1038/495426a
Nature News Feature Inequality quantified: Mind the gender gap 10.1038/495022a
Nature News Feature Voyager: Outward bound 10.1038/497424a
Nature News Feature Mental health: On the spectrum 10.1038/496416a
Nature News Feature Brain decoding: Reading minds 10.1038/502428a
Nature News Feature Fukushima: Fallout of fear 10.1038/493290a
Nature News Feature The big fat truth 10.1038/497428a

Table 2 - Full list of the 69 articles belonging to the 0.5% cohort in the Social Activity class, including journal, article title, and DOI. Articles highlighted in orange are those representing full-length articles reporting the results of original research.


Social media attention: An indicator of scholarly impact or simply newsworthiness?

The articles which appear in the top 0.5% cohort in each of the four classes defined in this study are typically not the same ones: just 2 articles appear in all 4 lists. This suggests that the correlation between these 4 classes of altmetric indicators may not be very high. These two articles are both original research articles, one reporting the development of a method for creating human brain-like structures (called “cerebral organoids”) in cell culture and using these to study the basis of brain development and disease (Nature article “Cerebral organoids model human brain development and microcephaly”); the other correlating online behaviour (in this case, Facebook ‘likes’) with personal information such as sexual orientation, ethnicity and political views, to create a model to predict such traits based solely on Facebook activity (PNAS article “Private traits and attributes are predictable from digital records of human behavior”).

Further analysis of the overlap between the top 0.5% cohorts in each altmetric class is shown in Table 3: by far the greatest overlap occurs between the Mass media and Scholarly commentary classes, the lowest between Social activity and either Mass media or Scholarly activity, with a moderate degree of overlap for the remaining pairwise combinations. Taken together, this suggests that - at least amongst this handful of articles receiving the most online attention - articles attracting a high degree of Social activity attract relatively little attention from the Mass media or from Scholarly activity, and only a moderate degree of Scholarly commentary. Conversely, there is a very high co-occurrence of articles receiving Mass media attention and Scholarly commentary. These observations suggest that Social activity in particular is an indicator of a very different kind of online attention than the other three classes.
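The overlap analysis described above amounts to simple set intersection over the top-0.5% cohorts. A minimal sketch is given below; the article IDs are made-up placeholders standing in for the real cohort membership, which is not reproduced here:

```python
from itertools import combinations

# Hypothetical top-0.5% cohorts per altmetric class (placeholder IDs).
top_cohorts = {
    "mass_media":           {"a1", "a2", "a3", "a4"},
    "scholarly_activity":   {"a3", "a5", "a6", "a7"},
    "scholarly_commentary": {"a1", "a2", "a3", "a8"},
    "social_activity":      {"a3", "a9", "a10", "a11"},
}

def pairwise_overlap(cohorts):
    """Co-occurrence count for every pair of classes (cf. Table 3)."""
    return {(a, b): len(cohorts[a] & cohorts[b])
            for a, b in combinations(cohorts, 2)}

overlaps = pairwise_overlap(top_cohorts)

# Articles appearing in all four top cohorts at once
# (two such articles in the actual study):
in_all = set.intersection(*top_cohorts.values())
```

With the placeholder data above, the Mass media / Scholarly commentary pair shows the largest overlap and a single article sits in all four cohorts, mirroring the pattern the study reports.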

Rows/columns: Mass media | Scholarly activity | Scholarly commentary | Social activity. [Co-occurrence counts were not preserved in this version of the table.]

Table 3 - Co-occurrence counts of articles comprising the top 0.5% of articles in each class, where n varies between 69 and 76 across classes owing to tied rankings at the 0.5% cutoff.

Figure 1 shows how this correlation varies across all percentile cohorts for articles with Social activity. Note that approximately 90% of social activity is constrained to 15% of articles, which is a significantly more skewed distribution than that of citations across articles within a journal (where some 90% of citations are to 50% of the articles; (3)). This implies a scarce attention economy in the Social activity spectrum, with many articles competing for a rare resource (reader attention). The only altmetric class with a distribution of attention across articles similar to that of citations is Scholarly activity (which correlates very poorly with Social activity), where approximately 90% of Scholarly activity is represented by some 30-40% of articles (data not shown). The convergence of the curves in Figure 1 around the 15% cohort implies that at this point attention in all 4 classes is equally scarce, while in the cohorts above this point the only class showing a considerable degree of co-occurrence with Social activity is Scholarly commentary (also borne out by Table 3 for the 0.5% cohort).
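The skewness comparison above can be made concrete by asking what fraction of articles, taken in descending order of activity, captures 90% of the total. A sketch with an invented toy distribution (the real per-article counts are not reproduced here):

```python
def share_of_articles_for(activity_counts, target_share=0.90):
    """Smallest fraction of articles (most active first) whose activity
    sums to at least target_share of the total activity."""
    counts = sorted(activity_counts, reverse=True)
    total = sum(counts)
    running = 0
    for i, c in enumerate(counts, start=1):
        running += c
        if running >= target_share * total:
            return i / len(counts)
    return 1.0

# Toy, highly skewed distribution: a few articles dominate,
# as the text describes for Social activity.
social = [600, 250, 80, 30, 20, 5, 5, 5, 3, 2]
frac = share_of_articles_for(social)
```

Here the top three articles (30% of the set) already account for over 90% of the activity; a flatter, citation-like distribution would need a much larger share of the articles to reach the same threshold.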


Figure 1 - Proportion of total activity per article across predefined percentile ranges (cohorts) for social activity.



It is clear from this exploratory work that altmetrics hold great promise as a source of data, indicators and insights about online attention, usage and impact of published research outputs. What is currently less certain is the underlying nature of what is being measured by current indicators represented within the four broad classes analysed here, and what can (and cannot) be read into them for the purposes of assigning credit or assessing research impact at the level of individual researchers, journals, institutions or countries.

What is strikingly clear from the qualitative analysis of the top 0.5% of papers for Social Activity is the absence of titles with particularly titillating or eye-catching keywords: although most of the links are to summaries of research rather than to the primary research articles themselves, they all contain serious scientific material.

On the basis of this preliminary study, we urge caution in characterizing all altmetric indicators in a similar way, as it is likely that different indicators may measure different types of online attention from different types of readers. This finding is similar to that reported by Priem, Piwowar and Hemminger in 2012 (4). We also suggest that careful delineation of document types (as long used for citation-based indicators) must be applied to correctly evaluate (for example) the relative social activity attracted by a news or editorial item versus an original research article; these values are likely to be the inverse of their usual relationship in citation terms. In short, in the excitement and promise of this burgeoning new field of Informetrics, we must be sure to ask ourselves: what is it that we are measuring, and why?



This paper would not have been possible without the kind support of Euan Adie in providing access to these data for research purposes.



(3) Seglen, P.O. (1992) The skewness of science. Journal of the American Society for Information Science, 43 (9) pp. 628–638.
(4) Priem, J., Piwowar, H. & Hemminger, B. (2012) Altmetrics in the wild: Using social media to explore scholarly impact. arXiv preprint.

The data set this paper was based on is available online:

Taylor, Michael (2014): Data set for "Party papers or policy discussions: an examination of highly shared papers using altmetric data". figshare.



Celebrating Rare Disease Day – A look into Rare Disease research

In honour of Rare Disease Day, an international advocacy day to help raise awareness for rare diseases, Iris Kisjes investigated publication trends in the field using SciVal. In this piece she highlights some of the key institutions that are active in research on rare diseases.

Read more >

28th February is Rare Disease Day, which is an international advocacy day to help raise awareness with the public about rare diseases, the challenges encountered by those affected, the importance of research to develop diagnostics and treatments, and the impact of these diseases on patients' lives.

Rare Disease Day was first observed in Europe in 2008. It was established by EURORDIS, the European Rare Disease Organization. In 2009, NORD partnered with EURORDIS in this initiative and sponsored Rare Disease Day in the United States. Since then, the concept has continued to expand beyond the US and Europe. In 2013, more than 70 countries participated.

Rare diseases collectively affect millions of people of all ages globally, of whom approximately 18-25 million are Americans. They are often serious and life altering; many are life threatening or fatal. In Europe a disease is considered rare when it affects no more than 5 in 10,000 people, whereas the US considers a disease rare when it affects fewer than 200,000 Americans. Since each of the roughly 7,000 rare diseases affects only a relatively small population, it can be challenging to develop drugs and medical devices to prevent, diagnose, and treat these conditions. In general there is a lack of understanding of the underlying molecular mechanisms, or even the cause, of many rare diseases. Hence, countries across the globe should share experiences and work together to address these challenges successfully.
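The two regulatory definitions quoted above can be written down directly. The thresholds come from the text; any prevalence figures fed in are hypothetical examples:

```python
# Thresholds as quoted in the text.
EU_THRESHOLD_PER_10000 = 5        # EU: no more than 5 per 10,000 people
US_THRESHOLD_PATIENTS = 200_000   # US: fewer than 200,000 Americans

def is_rare_eu(cases, population):
    """EU definition: rare when prevalence is no more than 5 per 10,000."""
    return cases / population * 10_000 <= EU_THRESHOLD_PER_10000

def is_rare_us(affected_americans):
    """US definition: rare when fewer than 200,000 Americans are affected."""
    return affected_americans < US_THRESHOLD_PATIENTS

# Hypothetical example: 30,000 cases in a population of 500 million
# gives a prevalence of 0.6 per 10,000, well under the EU threshold.
example_eu = is_rare_eu(30_000, 500_000_000)
```

Note that the two definitions are not equivalent: the EU uses a prevalence rate, the US an absolute patient count, so a condition can qualify under one and not the other.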

Research into these diseases faces many challenges, as the conditions are often not well defined or characterized. Rarity also means that recruitment for trials is usually difficult, study populations are widely dispersed, and there are few expert centers for diagnosis, management and research. This is often accompanied by a lack of high-quality evidence to guide treatment.

At Research Trends we were curious to learn more about the subject and to highlight some of the key institutions and authors contributing to the field of rare disease. To do this we examined publication trends using SciVal, creating a research area based on the keyword search ‘rare disease’. This search covers the titles, abstracts and keywords of research papers within Scopus.
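A minimal sketch of the kind of phrase filter such a keyword search performs over titles, abstracts and keywords is shown below; the record structure is an assumption made for illustration, not the actual Scopus schema or SciVal API:

```python
def matches_research_area(record, phrase="rare disease"):
    """True if the phrase occurs in the title, abstract or keywords."""
    fields = [record.get("title", ""),
              record.get("abstract", ""),
              " ".join(record.get("keywords", []))]
    return any(phrase in field.lower() for field in fields)

# Hypothetical records for illustration.
papers = [
    {"title": "Orphan drugs and rare disease policy", "keywords": []},
    {"title": "Pompe disease therapy",
     "abstract": "Enzyme replacement in a lysosomal storage disorder",
     "keywords": []},
]
hits = [p for p in papers if matches_research_area(p)]
# The Pompe paper is missed: papers on a specific rare disease do not
# necessarily contain the phrase 'rare disease' anywhere.
```

This also illustrates the sampling limitation discussed later in the article: a phrase match on ‘rare disease’ alone excludes much disease-specific research.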

Setting up a research area in SciVal is fairly simple and can be completed in three easy steps. Figure 1 shows that this search query returned a set of 12,818 publications published from 1996 to the present, with the US, Germany, Japan and France the most prolific countries.


Figure 1 - Defining a research area in SciVal for 'Rare Disease' research

SciVal is a ready-to-use tool to analyze the world of research. It is based on Scopus data and primarily developed for research organizations to help establish, execute and evaluate their strategies within the context of their peers (through benchmarking) and collaborators (through collaboration networks). The solution also allows users to set up research areas to analyze contributors within the field and their corresponding publication and citation statistics.

The SciVal research area focuses its analysis on the last five years of publications, for which we found the pool of research papers to be fairly small. Over the course of five years (2009-2013), close to 5,900 publications containing ‘rare disease’ were published around the globe, of which less than 20% originated from the US; see Table 1 (data date stamp 21 January 2014). One reason why this set of research papers is small can be attributed to the simplicity of the search terms used: relevant articles were selected on the basis of the occurrence of the term ‘rare disease’, without including the names of the roughly 7,000 rare diseases themselves. We can therefore assume that papers on many individual rare diseases are not included in this set, as not all papers on a specific disease mention ‘rare disease’ in their abstract, keywords or title. A follow-up study could include search terms for particular rare diseases to provide a more complete picture of this research field.

Despite this small sample set, we took a look at the institutions and authors that were most prolific in this area. The most prolific institutions around the globe all originate from either France or the US, bar one from Germany. This article therefore focuses on the most prolific institution from each of those two countries, namely Université Paris 5 and Harvard University; see Table 1 and Table 2.

Country/Region/Institute | Publications. Rows: United States; Université Paris 5; Harvard University; University of Munich. [Publication counts were not preserved in this version of the table.]


Table 1 – Scholarly Output for ‘Rare Disease’ research in 2009-2013 at different levels

Institution (Country) | Publications in this Research Area | Publications in this Research Area (growth %) | Citations per Publication. Rows: Université Paris 5 (France); Harvard University (United States); Université Paris 6 (France); University of Munich (Germany). [Metric values were not preserved in this version of the table.]

Table 2 – Most Prolific Institutions for ‘Rare Disease’ research in 2009-2013


Institutional Collaboration maps for Harvard, Paris 5 and Munich

Using SciVal we took a closer look at the collaboration patterns in ‘rare disease’ research of three individual institutions: Harvard University, Université Paris 5 and the University of Munich. SciVal allows you to drill down from a worldwide view, through a regional view, right down to a country-level view of institutional collaborations.


Figure 2 - Worldwide collaboration by Harvard University in ‘Rare Disease’ research

As can be seen from Figure 2, Harvard University collaborated with 171 institutions worldwide on 39 co-authored publications out of its total of 61, with the majority of collaborations taking place with authors within the US and Europe; the average number of institutions per paper is 4.38. In fact, Harvard’s top 20 collaborating institutions in ‘rare disease’ research are mainly from the US; only three were international cross-border collaborations, namely with Canada, the UK and Switzerland. Harvard’s general collaboration trends seem in line with those it exhibits within ‘rare disease’ research, where the top collaborators are all national institutions, with few international institutions from abroad featuring in the top 40.
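The collaboration arithmetic quoted for these institutions can be reproduced from the aggregate figures. One plausible reading, which matches the Harvard numbers, is that the ‘institutions per paper’ average divides the count of distinct collaborating institutions by the number of co-authored papers (171 / 39 ≈ 4.38). A sketch under that assumption:

```python
def collaboration_summary(total_pubs, coauthored_pubs,
                          collaborating_institutions):
    """Share of co-authored output and mean collaborating institutions
    per co-authored paper, from aggregate counts."""
    return {
        "coauthored_share": coauthored_pubs / total_pubs,
        "institutions_per_paper":
            collaborating_institutions / coauthored_pubs,
    }

# Aggregate figures as quoted in the text.
harvard = collaboration_summary(61, 39, 171)   # ~64% co-authored, ~4.38
paris5 = collaboration_summary(77, 62, 215)    # ~81% co-authored, ~3.47
```

Note that this reading gives roughly 3.47 institutions per paper for Paris 5 against the 3.48 quoted in the text, so the exact derivation SciVal uses may differ slightly.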


Figure 3 - Worldwide collaboration by Université Paris 5 in ‘Rare Disease’ research

You can see from Figure 3 that Université Paris 5 collaborated with 215 institutions worldwide on 62 co-authored publications out of its 77 total publications, an average of 3.48 institutions involved in each paper. From Figure 4 you can see that the majority of collaborating institutions are from France (72). In fact, only three of its top 20 ‘rare disease’ collaborators are from outside France, namely institutions in Canada, the UK and Germany.


Figure 4 - European collaboration by Université Paris 5 in ‘Rare Disease’ research



Figure 5 - Worldwide collaboration by University of Munich in ‘Rare Disease’ research

Here again, in Figure 5, we find a large number of institutions working on a small number of papers: 73 institutions collaborated on 18 articles, an average of 4 institutions per paper. The University of Munich enters into the most cross-border collaborations, with half of its top 20 collaborating institutions coming from abroad: three from the US, two from the UK, two from the Netherlands, one from Canada and one from the Czech Republic. It seems logical for collaboration to play a central role in ‘rare disease’ research; indeed, cross-border collaborations may be expected to be particularly important for propelling the research forward, mainly due to the small patient numbers in each country.

The overall results show a large number of institutions collaborating on each paper, and Munich appears to be the most internationally focused research organization of the three universities investigated; see Table 3 for their general collaboration trends.


Columns: Single author publications | Institutional collaboration | National collaboration | International collaboration. Rows: Harvard University; Université Paris 5; University of Munich. [Values were not preserved in this version of the table.]

Table 3: General collaboration trends for the three universities investigated


Highlighting the Most Prolific Authors in SciVal

By looking at the most prolific authors in the set, and by clicking through from SciVal to the abstracts in Scopus, we were able to get a better idea of the set of research papers we were looking at.

The most prolific authors in the dataset for ‘rare disease’ between 2009-2013 are: Dr. Domenica Taruscio, MD, Director of the National Centre for Rare Diseases, Istituto Superiore di Sanità, Rome, Italy; Dr. Steven Simoens, Katholieke Universiteit Leuven, Department of Pharmaceutical and Pharmacological Sciences, Leuven, Belgium; and Dr. Stephen Groft, Director of the Office of Rare Diseases Research, National Institutes of Health (see Table 4).


Columns: Publications in this Research Area | Citations in this Research Area | Citations per Publication. Rows: Taruscio, D.; Simoens, S.; Groft, S.C. [Values were not preserved in this version of the table.]

Table 4 – Most Productive Authors in ‘Rare Disease’ research in 2009-2013

Dr. Domenica Taruscio MD. focuses her research on setting systems in place that can, firstly, help train and inform clinicians to make the right diagnosis and secondly, improve the dissemination of information around symptomatic treatments. She has just spent the last 30 months on a feasibility study funded by the European Commission (DG Sanco) addressing regulatory, ethical and technical issues associated with the registration of rare disease patients and with the creation of an EU platform for the collection of data on rare disease patients and their communication among qualified users.

Dr. Steven Simoens, on the other hand, does not focus his research on rare diseases themselves. He works within the pharmaceutical sciences with a keen interest in pharmaco-economics and ethics, which is where rare disease comes up in his research. His publication rate is very impressive, at about 15 papers per year. ‘Rare disease’ research is a topic of concern for this field, as there are debates on the economic rationale behind society supporting any part of the rare disease value chain. In 2012, for example, there was a debate in the Netherlands about the cost of providing medication to patients with Pompe’s disease, which costs between 400,000 and 700,000 Euros per patient per year. The economist Dr. Marc Pomp drew on the concept of the quality-adjusted life year (QALY), in which the cost of a medical treatment is weighed against the quality of life it yields in the years gained. The level considered acceptable was set at 50,000 Euros per year, far below the cost of treating a Pompe patient, while stopping their treatment would most likely cause them to die (1).
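The cost-effectiveness comparison described here reduces to checking the cost per QALY gained against the €50,000 threshold. A sketch, where the QALY gain per year is a hypothetical input (the text does not quantify it):

```python
# Threshold from the Dutch debate described in the text.
THRESHOLD_EUR_PER_QALY = 50_000

def within_threshold(annual_cost_eur, qalys_gained_per_year):
    """True if the cost per QALY gained is at or below the threshold."""
    return annual_cost_eur / qalys_gained_per_year <= THRESHOLD_EUR_PER_QALY

# Pompe treatment at the lower end of the quoted cost range, assuming
# (hypothetically) one full quality-adjusted life year gained per year:
affordable = within_threshold(400_000, 1.0)
```

Even under that generous assumption of a full QALY gained per year of treatment, the cost is eight times the threshold, which is exactly the tension the Dutch debate turned on.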

The third most prolific researcher within our sample, Pharm.D. Stephen Groft, received a lifetime achievement award for his nearly 30 years of service and commitment to advancing research and treatments for the millions of people afflicted by rare and genetic diseases. He is one of the original pioneers in the rare disease arena and is recognized globally as a leader in building collaborative relationships to improve patient treatment and care. Dr. Groft retired on February 8th this year, praised for giving thousands of rare disease patients and their families renewed hope and a collective voice. One of the organizations he helped set up was the National Center for Advancing Translational Sciences (NCATS). You can read more about his work at:

Each of these three individuals looks at rare disease in a very different way, though all are in some way interested in the management of the research field, suggesting that our keyword search did indeed omit much of the actual research on rare diseases from the sample set. It also shows how much attention needs to be placed on attracting the public’s attention to rare diseases, and on building global awareness and collective solidarity to support the patients and families affected by these rare and often severe diseases.



(1) NOS, ‘Advies: Stop met dure medicijnen’, R. van den Brink and H. van de Parre (in Dutch), 29 July 2012,


Related links

ORDR - Office of Rare Diseases Research -
NORD - The National Organization for Rare Disorders (NORD) -
Eurordis – The voice of rare diseases Europe -
JPA Japanese patients association -
European Platform for Rare Diseases Registries -

The next International Conference for Rare Diseases and Orphan Drugs (ICORD) Annual Meeting takes place on October 8-10, 2014 in the Netherlands. More information will follow:


Did you know

…That study design influences the top 100 list of papers most cited in 2013?

Sarah Huggett, MPhil

To mark the end of 2013 and the beginning of 2014, Research Trends thought that it would be interesting to take a look at the list of papers that were cited the most in 2013 in Scopus.

The list of top 100 papers most cited in 2013 contains publications in various fields and journals, and spanning many publication years. Hot areas such as cancer, genetics, and graphene all feature prominently in the titles of these papers. Interestingly, study design also appears to have an influence on citations: several of the top cited papers are of very specific types. As expected, the top 100 list contains several reviews, but also rapid communications. Furthermore, several of the top-cited papers are focused on statistics, methods, or protocols.

Click here to see a Wordle containing the most frequent terms in these top 100 cited papers.

  • Elsevier has recently launched the International Center for the Study of Research (ICSR) to help create a more transparent approach to research assessment. Its mission is to encourage the examination of research using an array of metrics and a variety of qualitative and quantitative methods.