Issue 15 – January 2010

Articles

New perspectives on journal performance

Bibliometric indicators have brought great efficiency to research assessment, but not without controversy. Bibliometricians themselves have long warned against relying on a single measure to assess influence, while researchers have been crying out for transparency and choice. The incorporation of additional metrics into databases offers more options to everyone.

Research has long played an important role in human culture, yet its evaluation remains heterogeneous as well as controversial. For several centuries, review by peers has been the method of choice to evaluate research publications; however, the use of bibliometrics has become more prominent in recent years.

Bibliometric indicators are not without their own controversies (1, 2) and recently there has been an explosion of new metrics, accompanying a shift in the mindset of the scientific community towards a multidimensional view of journal evaluation. These metrics have different properties and, as such, can provide new insights on various aspects of research.

Measuring prestige

SCImago is a research group led by Prof. Félix de Moya at the Consejo Superior de Investigaciones Científicas. The group is dedicated to information analysis, representation and retrieval by means of visualization techniques, and has recently developed SCImago Journal Rank (SJR) (3). This takes three years of publication data into account to assign relative scores to all of the sources (journal articles, conference proceedings and review articles) in a citation network, in this case journals in the Scopus database.

Inspired by the Google PageRank™ algorithm, SJR weights citations by the SJR of the citing journal; a citation from a source with a relatively high SJR is worth more than a citation from a source with a relatively low SJR. The results and methodology of this analysis are publicly available and allow comparison of journals over a period of time, and against each other.
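The prestige-weighting idea can be sketched as a simple power iteration over a citation network. This is only an illustration of the eigenvector principle behind SJR, not the published formula (which also includes a damping factor and normalization by journal size); the journals and citation counts below are invented.

```python
# Toy power iteration in the spirit of SJR: each journal passes its
# current prestige to the journals it cites, in proportion to where
# its citations go. NOTE: the real SJR formula adds a damping factor
# and size normalization; this is a simplified sketch.

def prestige_scores(citations, iterations=50):
    """citations[a][b] = number of citations journal a gives to journal b."""
    journals = list(citations)
    score = {j: 1.0 / len(journals) for j in journals}
    for _ in range(iterations):
        new = {j: 0.0 for j in journals}
        for citing, targets in citations.items():
            total = sum(targets.values())
            if total == 0:  # journal with no outgoing citations
                continue
            for cited, count in targets.items():
                # A citation from a prestigious journal is worth more.
                new[cited] += score[citing] * count / total
        score = new
    return score

# Invented three-journal citation network.
citations = {
    "A": {"B": 10, "C": 2},
    "B": {"A": 4, "C": 1},
    "C": {"A": 3, "B": 3},
}
scores = prestige_scores(citations)
print(sorted(scores, key=scores.get, reverse=True))  # most prestigious first
```

Because prestige flows along citations, a journal cited a few times by highly ranked journals can outrank one cited many times by peripheral journals.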

Accounting for context

Another new metric based on the Scopus database is Source Normalized Impact per Paper (SNIP) (4), the brainchild of Prof. Henk Moed at the Centre for Science and Technology Studies (CWTS) at Leiden University. SNIP takes into account characteristics of the source’s subject field, especially the frequency at which authors cite other papers in their reference lists, the speed at which citation impact matures, and the extent to which the database used in the assessment covers the field’s literature.

SNIP is the ratio of a source’s average citation count per paper in a three-year citation window over the “citation potential” of its subject field. Citation potential is an estimate of the average number of citations a paper can be expected to receive in a given subject field. Citation potential is important because it accounts for the fact that typical citation counts vary widely between research disciplines, tending to be higher in life sciences than in mathematics or social sciences, for example.

Citation potential can also vary between subject fields within a discipline. For instance, basic research journals tend to show higher citation potentials than applied research or clinical journals, and journals covering emerging topics often have higher citation potentials than periodicals in well-established areas.
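As a schematic illustration of the ratio, consider two hypothetical journals. All figures below are invented, and in the actual SNIP methodology citation potential is derived from the reference lists of citing papers rather than supplied as a ready-made number.

```python
# Schematic SNIP: raw citations per paper divided by the citation
# potential of the journal's subject field. All figures are invented.

def snip(citations_received, papers_published, field_citation_potential):
    raw_impact = citations_received / papers_published  # citations per paper
    return raw_impact / field_citation_potential

# A mathematics journal: modest raw impact, but a sparsely citing field.
math_snip = snip(citations_received=120, papers_published=100,
                 field_citation_potential=0.5)

# A cell-biology journal: higher raw impact, but a heavily citing field.
bio_snip = snip(citations_received=400, papers_published=100,
                field_citation_potential=4.0)

print(math_snip, bio_snip)  # 2.4 vs 1.0: the mathematics journal ranks higher
```

After normalization, the mathematics journal's 1.2 citations per paper outweighs the biology journal's 4.0, because each is judged against what is typical for its own field.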

More choices

SNIP and SJR, using the same data source and publication window, can be seen as complementary to each other: SJR can be primarily perceived as a measure of prestige and SNIP as a measure of impact that corrects for context, although there is some overlap between the two.

Both metrics offer several new benefits. For a start, they are transparent: their respective methodologies have been published and made publicly available. These methodologies are community driven, answering the express needs of the people using the metrics. The indicators also account for the differences in citation behavior between different fields and subfields of science. Moreover, the metrics will be updated twice a year, giving users early indication of changes in citation patterns. Furthermore, they are dynamic indicators: additions to Scopus, including historical data, will be taken into account in the biannual releases of the metrics. And lastly, both metrics are freely available, and apply to all content in Scopus.

It should be emphasized that although the impact or quality of journals is an aspect of research performance in its own right, journal indicators should not replace the actual citation impact of individual papers or sets of research group publications. This is true for both existing and new journal metrics.

The fact that SJR and SNIP are relatively new additions to the existing suite of bibliometric indicators is part of their strength. Both build upon earlier metrics, taking the latest thinking on measuring impact into account without being hindered by a legacy that ignores modern publication and citation practices. Their unique properties – including transparency, public availability, dynamism, field normalization and a three-year publication window – mean they offer a step forward in citation analysis and thus provide new insights into the research landscape.

References:

(1) Corbyn, Z. (June 2009) “Hefce backs off citations in favour of peer review in REF”, Times Higher Education Supplement
(2) Corbyn, Z. (August 2009) “A threat to scientific communication”, Times Higher Education Supplement
(3) de Moya, F. (December 2009) “The SJR indicator: A new indicator of journals' scientific prestige”, arXiv
(4) Moed, H. (November 2009) “Measuring contextual citation impact of scientific journals”, arXiv

A question of prestige

Prestige measured by quantity of citations is one thing, but when it is based on the quality of those citations, you get a better sense of the real value of research to a community. Research Trends talks to Prof. Félix de Moya about SCImago Journal Rank (SJR), which ranks journals based on where their citations originate.

Félix de Moya

Research Trends (RT): SCImago Journal Rank (SJR) has been described as a prestige metric. Can you explain what this means and what its advantages are?
Félix de Moya (FdM): In a social context, prestige can be understood as an actor’s ability or power to influence the remaining actors, which, within the research-evaluation domain, translates as a journal’s ability to place itself at the center of scholarly discussion; that is, to achieve a commanding position in researchers’ minds.

Prestige metrics aim to highlight journals whose standing depends not exclusively on the number of endorsements (citations) they receive from other journals, but on a combination of the number of endorsements and the importance of each one. Considered in this way, the prestige of a journal is distributed among the journals it is connected to through citations.

SCImago

The SCImago Journal & Country Rank is a portal that includes the journals and country scientific indicators developed from information in the Scopus database. These indicators can be used to assess and analyze scientific domains.

This platform takes its name from the SCImago Journal Rank (SJR) indicator, developed by SCImago from the widely known Google PageRank algorithm. This indicator shows the visibility of the journals contained in the Scopus database from 1996 onwards.

SCImago is a research group drawing on the Consejo Superior de Investigaciones Científicas (CSIC) and the Universities of Granada, Extremadura, Carlos III (Madrid) and Alcalá de Henares. The group conducts research into information analysis, representation and retrieval using visualization techniques.

RT: I understand that SJR is based on the premise that not all citations are equal (much like Google’s PageRank algorithm treats links as more or less valuable). Can you explain why it is so important to consider the value of each citation and what benefits this brings to your final ranking?
FdM: When assigning a value to a journal, the source of its citations is not the only important consideration. It is also essential to control for the effects of self-citation and other practices that have nothing to do with scientific impact. This method achieves that because citation “quality” cannot be manipulated unless one has control over the whole citation network, which is, of course, impossible.

RT: Why did you decide to develop a new evaluation metric? Were you meeting a perceived market gap or seeking to improve on current methods?
FdM: As researchers working in bibliometric and scientometric fields, we have studied research-evaluation metrics for many years and we are aware of their weaknesses and limitations. The success of the PageRank algorithm and other Eigenvector-based methods to assign importance ranges to linked information resources inspired us to develop a methodology that could be applied to journal citation networks. It is not only a matter of translating previous prestige metrics to citation networks; deep knowledge of citation dynamics is needed in order to find centrality values that characterize the influence or importance that a journal may have for the scientific community.

RT: Since you launched your portal in November 2007, what level of interest have you seen from users?
FdM: The SJR indicator is provided at the SCImago Journal & Country Rank website, which has 50,000 unique visits per month. We attribute this traffic in large part to the open availability of the metrics. And, what is more important to us in terms of value is the increasing number of research papers that use SJR to analyze journal influence.

RT: I understand that SJR is calculated for many journals that currently have no rank under other metrics, such as those published in languages other than English, from developing countries, or those representing small communities or niche topics of research. The advantages to these journals – and to the researchers who publish in them – are obvious, but what about the advantages to science in general?
FdM: In our opinion, SJR is encouraging scientific discussion about how citation analysis methods can be used in journal evaluation. And this is really happening: in fact, all the new methodological developments in Eigenvector-based performance indicators are encouraging this healthy debate.

However, unlike many other Eigenvector-based performance indicators, SJR is built on the entire Scopus database rather than across a sample set of journals. This has important methodological implications. In addition, SJR’s results are openly available through the SCImago Journal & Country Rank evaluation platform, which makes SJR a global framework to analyze and compare the findings, and allows researchers and users worldwide to reach their own conclusions.


Félix de Moya

Félix de Moya has been professor of the Library and Information Science Department at the University of Granada since 2000. He obtained a Ph.D. degree in data structures and library management at the University of Granada in 1992. He has been active in numerous research areas, including information analysis, representation and retrieval by means of visualization techniques. He has been involved in innovative teaching projects, such as “Developing information systems for practice and experimentation in a controlled environment” in 2004 and 2006, and “Self-teaching modules for virtual teaching applications” in 2003.

RT: What particular benefits does SJR bring to the academic community? How can researchers use SJR to support their publishing career?
FdM: Following the reasoning above, SJR is already being used in real research evaluations. Researchers and research groups are using SJR to measure research achievements for tenure and career advancement, and research managers are paying increasing attention to it because it offers a comprehensive and widely available resource that helps them design methods for evaluating research. Universities worldwide are, for example, using SJR as a criterion for journal assessment in their evaluation processes.

RT: One of the main criticisms leveled at ranking metrics is that their simplicity and supposed objectivity are so seductive that more traditional methods of evaluation, such as speaking to researchers and reading their papers, are in danger of being completely superseded by ranking metrics. What is your position?
FdM: Ideally, whenever a quantitative measure is involved in research-performance assessment, it should always be supported by expert opinion. Unfortunately, this is not always possible due to the nature of some specific evaluation processes and the resources allocated to them. In cases where the application of quantitative metrics is the only way, efforts should be made to design the assessment criteria carefully and to reach consensus among all stakeholders on a combination of indicators and sources that constitutes fair assessment parameters.

RT: Finally, why do you think the world needs another ranking metric?
FdM: The scientific community is becoming accustomed to the availability of several journal indices and rankings, and to the idea that no single indicator can be used in every situation. Many new metrics have been released in recent years, and it is necessary to analyze the strengths and weaknesses of each of these. When a new methodology solves some of the well-known problems of prior metrics, it is certainly needed.

In addition, the research community is moving forward from traditional binary assessment methods for journals. My research group believes that new metrics should be oriented toward identifying levels or grades of journal importance, especially considering the rapid increase in scientific sources, which means metrics are frequently calculated on universals rather than samples. In this scenario, we need a measure that can discern journal “quality” in a source where a huge number of publications coexist.

Useful links

SJR
SCImago
Free journal-ranking tool enters citation market: Nature news and comment
Wikipedia – SJR

Sparking debate

Bibliometrics has become the standard way to rank journals, but each metric is calibrated to favor specific features, meaning that any single metric can provide only a limited perspective. Research Trends speaks to Prof. Henk Moed about how his new metric offers a context-based ranking of journals.

A major drawback of bibliometric journal ranking is that in the search for simplicity, important details can be missed. As with all quantitative approaches to complex issues, it is vital to take the source data, methodology and original assumptions into account when analyzing the results.

Across a subject field as broad as scholarly communication, assessing journal impact by citations to a journal in a two-year time frame is obviously going to favor those subjects that cite heavily, and rapidly. Some fields, particularly those in the life sciences, tend to conform to this citation pattern better than others, leading to some widely recognized distortions. This becomes a problem when research assessment is based solely on one global ranking without taking its intrinsic limitations into account.

Henk Moed

Context matters
In response to the gap in the available bibliometric toolkit, Prof. Henk Moed of the Centre for Science and Technology Studies (CWTS), in the Department (Faculty) of Social Sciences at Leiden University, has developed a context-based metric called source-normalized impact per paper (SNIP).

He explains that SNIP takes context into account in five ways. First, it takes a research field’s citation frequency into account to correct for the fact that researchers in some fields cite each other more than in other fields. Second, it considers the immediacy of a field, or how quickly a paper is likely to have an impact. In some fields it can take a long time for a paper to start being cited, while other fields continue to cite old papers for longer. Third, it accounts for how well the field is covered by the underlying database; basically, whether enough of a given subject’s literature is actually in the database. Fourth, the delimitation of a journal’s subfield is not based on a fixed classification of journals but is tailor-made to take a journal’s focus into account, so that each journal has its proper surrounding subject field. And fifth, to counter any potential for editorial manipulation, SNIP is applied only to peer-reviewed papers in journals.

CWTS

The Centre for Science and Technology Studies (CWTS), based at Leiden University, conducts cutting-edge basic and applied research in the field of bibliometrics, research assessment and mapping. The results of this research are made available to science-policy professionals through CWTS B.V.

However, Moed was not simply filling a market gap: “I thought that this would be a useful addition to the bibliometric toolbox, but I also wanted to stimulate debate about bibliometric tools and journal ranking in general.” Moed is at pains to explain that SNIP is not a replacement for any other ranking tool because: “there can be no single perfect measure of anything as multidimensional as journal ranking – the concept is so complex that no single index could ever represent it properly.” He continues: “SNIP is not the solution for anyone who wants a single number for journal ranking, but it does offer a number of strong points that can help shed yet another light on journal analysis.”

He adds, however, that contextual weighting means SNIP is offering a particular view, and it is important to take this into account when using it. He strongly believes that no metric, including SNIP, is useful alone: “it only really makes sense if you use it in conjunction with other metrics.”

Use the right tool

This leads to Moed’s wider aim: by providing a new option and adding to the range of tools available for bibliometricians, he hopes to stimulate debate on journal ranking and assessment in general. He explains: “All indicators are weighted differently, and thus produce different results. This is why I believe that we can never have just one ranking system: we must have as wide a choice of indicators as possible.” Like many in the bibliometric community, Moed has serious concerns about how ranking systems are being used.

It is also very important to combine all quantitative assessment with qualitative indicators and peer review. “Rankings are very useful in guiding opinions, but they cannot replace them,” he says. “You first have to decide what you want to measure, and then find out which indicator is right in your circumstances. No single metric can do justice to all fields and deliver one perfect ranking system. You may even need several indicators to help you assess academic performance, and you certainly need to be ready to call on expert opinions.”

In fact, a European Commission Expert Group on Assessment of University-based Research is working from the same assumption: that research assessment must take a multidimensional view.

Henk Moed

Henk F. Moed has been a senior staff member at the Centre for Science and Technology Studies (CWTS), in the Department (Faculty) of Social Sciences at Leiden University, since 1986. He obtained a PhD in Science Studies at the University of Leiden in 1989. He has been active in numerous research areas, including bibliometric databases and bibliometric indicators. He has published over 50 research articles and is editor of several journals in his field. He won the Derek de Solla Price Award in 1999. In 2005, he published a monograph, Citation Analysis in Research Evaluation (Springer), which is one of the very few textbooks in the field.

Moed believes that what is really required is an assessment framework in which bibliometric tools sit alongside qualitative indicators in order to give a balanced picture. He expects that adoption of a long-term perspective in research policy will become increasingly important, alongside development of quantitative tools that facilitate this. SNIP fits well into this development. “But we must keep in mind that journal-impact metrics should not be used as surrogates for the actual citation impact of individual papers or research group publication œuvres. This is also true for SNIP.”

More information means better judgment
Moed welcomes debate and criticism of SNIP, and hopes to further stimulate debate on assessment of scholarly communication in general. “I realize that having more insight into the journal communication system is beneficial for researchers because they can make well-informed decisions on their publication strategy. I believe that more knowledge of journal evaluation, and more tools and more options, can only help researchers make better judgments.”

His focus on context is also intended to both encourage and guide debate. “Under current evaluation systems, many researchers in fields that have low citation rates, slow maturation rates or partial database coverage – such as mathematics, engineering, the social sciences and humanities – find it hard to advance in their careers and obtain funding, as they are not scoring well against highly and quickly citing, well covered fields, simply because citation and database characteristics in their fields are different. I hope SNIP will help in illuminating this, and that a metric that takes context into account will be useful for researchers in slower citing fields, as they can now really see which journals are having the most impact within their area and under their behavioral patterns.”

Useful links

SNIP
CWTS
“Measuring contextual citation impact of scientific journals”, Henk Moed

Bibliometrics comes of age

Almost 40 years ago, when bibliometrics emerged as a field in its own right, no one could have anticipated how developments in technology and research administration would push bibliometrics to center stage in research assessment. Research Trends asks Wolfgang Glänzel, of the Expertisecentrum O&O Monitoring (Centre for R&D Monitoring, ECOOM) in Leuven, how he sees this remarkable “Perspective Shift”.

Wolfgang Glänzel

Research Trends (RT): Once a sub-discipline of information science, bibliometrics has developed into a prominent research field that provides instruments for evaluating and benchmarking research performance. You call this “the Perspective Shift”. Has this Perspective Shift changed the approach of bibliometric research within the community itself; i.e. has it changed the starting points for research projects, shifted the focus of research topics and literature, and so on?
Wolfgang Glänzel (WG): Such a shift can indeed be observed. One must of course distinguish between genuine research projects and projects commissioned, for instance, by national research foundations, ministries or European Framework programs.

Most commissioned work in our field is policy-related and focused on research evaluation. Since this has become one of the main funding pillars of bibliometric centers and, in turn, requires an appropriate methodological foundation, the shift has had measurable effect on the research profile of the field.

The change is also mirrored by the research literature. In a paper with Schoepflin (2001), we found a specific change in the profile of the journal Scientometrics that supports this statement: 20 years after the journal was launched, case studies and methodological papers had become dominant.

RT: Does the currently available range of bibliometric indicators, including the Impact Factor (IF), h-index, g-index and Eigenfactor, accommodate the new reality of bibliometrics and its applications?
WG: Improvements and adjustments within the bibliometric toolkit are certainly necessary to meet new challenges. This also implies development of new measures and “indicators” for evaluating and benchmarking research performance.

Without a doubt, the quantity and quality of bibliometric tools have increased and improved considerably during the last three decades. However, many of the new metrics, most of which are designed to substitute for or supplement the h-index and the IF, are not always suited to this purpose. Further methodological and mathematical research is needed to distinguish useful tools from “rank shoots”. Time will show which of these approaches survive and become established as standard tools in our field.

In general, though, I am positive that a proper selection of indicators and methods is sufficient to solve most of today’s bibliometric tasks. And, as these tasks become increasingly complex, each level of aggregation will need specific approaches and standards as well. There will not be any single measure, no single “best” indicator, that could accommodate all facets of the new reality of bibliometrics and its applications.

RT: What do you consider the challenges ahead for bibliometrics and how do you think this will or should be reflected by bibliometric indicators?
WG: There are certainly some major obstacles in bibliometrics, and I will limit my comments to three of them.

First, scientometrics was originally developed to model and measure quantitative aspects of scholarly communication in basic research. The success of scientometrics has led to its extension across the applied and technical sciences, and then to the social sciences, humanities and the arts, despite communication behavior differing considerably between these subject fields. Researchers in the social sciences and humanities use different publication channels and have different citation practices. This requires a completely different approach, not simply an adjustment of indicators.

Of course, this is not a challenge for bibliometrics alone. The development of new methods goes along with the creation of bibliographic databases that meet the requirements of bibliometric use. This implies an important opportunity for both new investments and intensive interaction with information professionals.

The second challenge is brought by electronic communication, the internet and open-access publishing. Electronic communication has dramatically changed scholarly communication in the last two decades. However, the development of web-based tools has not always kept pace with the changes. The demand for proper documentation, compatibility and “cleanness” of data, as well as for reproducibility of results, still remain challenges.

Thirdly, scholarly communication – that is, communication among researchers – is not the only form of scientific communication. Modeling and measuring communication outside research communities, in order to capture the social impact of research and scientific work, can be considered the third important task that bibliometricians will face in the near future.

RT: Inappropriate or uninformed use of bibliometric indicators by laymen, such as science policymakers or research managers, can have serious consequences for institutions or individuals. Do you think bibliometricians have any responsibility in this respect?
WG: In most respects, I could repeat my opinion published 15 years ago in a paper with Schoepflin entitled “Little Scientometrics – Big Scientometrics ... and Beyond”. Rapid technological advances and the worldwide availability of preprocessed data have resulted in the phenomenon of “desktop scientometrics” proclaimed by Katz and Hicks in 1997. Today, even a “pocket bibliometrician” is not an absurd nightmare anymore; such tools are already available on the internet.

Obviously, the temptation to use cheap or even free bibliometric tools that do not require grounded knowledge or skills is difficult to resist. Uninformed use of bibliometric indicators has brought our field into discredit, and has consequences for the evaluated scientists and institutions as well. Of course, this concerns us. Bibliometricians must not pass over in silence the inappropriate use of their research results in science policy and research management.

Bibliometricians should perhaps focus more on communicating with scientists and end-users. It is certainly important to stress that bibliometrics is not just a service but, first and foremost, a research field that develops, provides and uses methods for the evaluation of research. Moreover, it should be professionals, not clients or end-users, who select the appropriate methodology to underpin evaluation studies.

Despite some negative experiences, the growing number of students and successful PhDs in our field gives me hope that the uninformed use of bibliometric indicators will soon become a thing of the past.

RT: In your opinion, what has been the most exciting bibliometric development of the last decade and why was it so important?
WG: There were many exciting bibliometric developments in the last decade. If I had to name only one, I would probably choose the h-index. Not because it was such a big breakthrough – it is actually very simple, yet ingenious – but because its effects have been so far-reaching. The h-index has brought back our original spirit of the pioneering days by stimulating research and communication on this topic. In fact, scientists from various fields are returning to scientometrics as an attractive research field.

RT: What do you see as the most promising topic, metric or method for the bibliometric future? What makes your heart beat faster?
WG: There’s no particular topic I think is “most promising”, but the fact that scientometrics has become an established discipline certainly makes my heart beat faster. Now and into the future, the necessity of doing research in our field and of teaching professional skills in bibliometrics is becoming more widely recognized.


Did you know

…that the world’s favorite search engine wouldn’t exist without bibliometrics?

The SCImago Journal Rank (SJR) metric and the related Eigenfactor trace their origins back to pioneering work by Gabriel Pinski and Francis Narin in the 1970s (1), and are, in this way, also related to Google’s PageRank algorithm, which powers the famous search engine. Using data on cross-citations between journals, they developed an iterative ranking method for measuring the influence of a journal based on who is citing it and how influential they are.

Although Pinski and Narin were able to apply this method to a small database of physics journals, technological limitations meant the method could not be easily used on larger sets of journals, and it was neglected by bibliometricians.

All this changed in the 1990s with the rapid growth of computing power and the internet. Users needed an effective way of navigating through the sea of online content to find the information they wanted. In developing the Google search engine to address this, Larry Page drew on Pinski and Narin’s research to design the PageRank algorithm that ranks the importance of a webpage based on how many links it receives and who these links come from (2).
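The heart of that design fits in a few lines. The link graph below is invented, and this sketch ignores complications such as pages with no outgoing links; the damping value of 0.85 is the one reported in Brin and Page's paper.

```python
# Minimal PageRank iteration: a page's rank depends on how many links
# it receives and on the rank of the pages those links come from.

def pagerank(links, damping=0.85, iterations=50):
    """links[p] = list of pages that page p links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                # Each page shares its rank equally among its outlinks.
                new[target] += damping * rank[page] / len(outlinks)
        rank = new
    return rank

# Invented three-page site: "home" is linked from every other page.
links = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # "home": it attracts the most valuable links
```

Substituting journals for pages and citations for hyperlinks turns this same iteration into the kind of prestige metric Pinski and Narin proposed.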

The popularity of Google triggered a renewed interest in Pinski and Narin’s work in the bibliometrics field that led to the development of metrics such as SJR.

References:
(1) Pinski, G. and Narin, F. (1976) “Citation influence for journal aggregates of scientific publications: Theory, with application to the literature of physics”, Information Processing and Management, 12:297-312.
(2) Brin, S. and Page, L. (1998) “The anatomy of a large-scale hypertextual web search engine”, Computer Networks and ISDN Systems, 30:107-117.

  • Elsevier has recently launched the International Center for the Study of Research (ICSR) to help create a more transparent approach to research assessment. Its mission is to encourage the examination of research using an array of metrics and a variety of qualitative and quantitative methods.