A major drawback of bibliometric journal ranking is that in the search for simplicity, important details can be missed. As with all quantitative approaches to complex issues, it is vital to take the source data, methodology and original assumptions into account when analyzing the results.

Across a subject field as broad as scholarly communication, assessing journal impact by counting citations received within a two-year window is obviously going to favor subjects that cite heavily, and rapidly. Some fields, particularly in the life sciences, conform to this citation pattern better than others, leading to some widely recognized distortions. This becomes a problem when research assessment is based solely on one global ranking without taking its intrinsic limitations into account.
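To make the distortion concrete, here is a minimal sketch with purely invented citation-age profiles: two hypothetical journals whose papers attract identical lifetime citations, one in a fast-citing field and one in a slow-maturing field, score very differently under a two-year window. The profiles and numbers are illustrative assumptions, not real data.

```python
# Hypothetical citation-age profiles: fraction of a paper's lifetime
# citations arriving 1, 2, 3, 4, 5 years after publication.
FAST_FIELD = [0.45, 0.35, 0.10, 0.05, 0.05]   # e.g. a rapidly citing life-science field
SLOW_FIELD = [0.05, 0.10, 0.20, 0.30, 0.35]   # e.g. a slow-maturing field such as mathematics

def two_year_score(lifetime_citations_per_paper, age_profile):
    """Citations per paper captured by a two-year window: only citations
    arriving in the first two years after publication are counted."""
    return lifetime_citations_per_paper * sum(age_profile[:2])

# Both hypothetical journals earn 10 lifetime citations per paper...
print(two_year_score(10, FAST_FIELD))   # 8.0 -- the window sees most of the impact
print(two_year_score(10, SLOW_FIELD))   # 1.5 -- the same impact, mostly invisible to the window
```

The long-run impact is identical; only the timing differs, yet the windowed score differs by a factor of five.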

Context matters
In response to this gap in the available bibliometric toolkit, Prof. Henk Moed of the Centre for Science and Technology Studies (CWTS), in the Faculty of Social Sciences at Leiden University, has developed a context-based metric called source-normalized impact per paper (SNIP).

He explains that SNIP takes context into account in five ways. First, it takes a research field’s citation frequency into account, correcting for the fact that researchers in some fields cite each other more often than in others. Second, it considers the immediacy of a field: how quickly a paper is likely to have an impact. In some fields it can take years for a paper to start being cited, while in others older papers continue to attract citations for longer. Third, it accounts for how well the field is covered by the underlying database; essentially, whether enough of a given subject’s literature is actually indexed there. Fourth, the delimitation of a journal’s subfield is not based on a fixed classification of journals but is tailor-made to reflect the journal’s focus, so that each journal is assessed within its proper surrounding subject field. And fifth, to counter any potential for editorial manipulation, SNIP is applied only to peer-reviewed papers in journals.
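For readers who want to see how these corrections combine, the following is a minimal sketch based on the published SNIP definition (see the paper under Useful links): SNIP divides a journal’s raw impact per paper (RIP, a three-year citation window over peer-reviewed items) by the relative database citation potential (RDCP) of its tailor-made subject field. The function names and toy numbers are illustrative assumptions, not CWTS’s production code.

```python
from statistics import median

def raw_impact_per_paper(citations, papers):
    """RIP: citations in the analysis year to the journal's peer-reviewed
    papers of the three preceding years, divided by the number of those
    papers (point five: non-peer-reviewed items are excluded)."""
    return citations / papers

def citation_potential(recent_refs_per_citing_paper):
    """DCP of the journal's tailor-made subject field, i.e. the set of
    papers that cite the journal (point four). Each entry is one citing
    paper's count of 1-3 year old references (point two, immediacy) to
    journals covered by the database (point three, coverage); the mean
    reflects how densely the field cites (point one)."""
    return sum(recent_refs_per_citing_paper) / len(recent_refs_per_citing_paper)

def snip(rip, field_dcp, all_journal_dcps):
    """SNIP = RIP / RDCP, where RDCP is the field's DCP relative to the
    median DCP over all journals in the database."""
    return rip / (field_dcp / median(all_journal_dcps))

# Toy inputs (invented): two journals with identical raw impact.
rip = raw_impact_per_paper(citations=600, papers=300)   # RIP = 2.0
dense_field = citation_potential([10, 12, 14])          # DCP = 12.0
sparse_field = citation_potential([2, 3, 4])            # DCP = 3.0
all_dcps = [3.0, 6.0, 12.0]                             # median = 6.0

print(snip(rip, dense_field, all_dcps))    # 2.0 / (12.0 / 6.0) = 1.0
print(snip(rip, sparse_field, all_dcps))   # 2.0 / (3.0 / 6.0)  = 4.0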

CWTS

The Centre for Science and Technology Studies (CWTS), based at Leiden University, conducts cutting-edge basic and applied research in the field of bibliometrics, research assessment and mapping. The results of this research are made available to science-policy professionals through CWTS B.V.

However, Moed was not simply filling a market gap: “I thought that this would be a useful addition to the bibliometric toolbox, but I also wanted to stimulate debate about bibliometric tools and journal ranking in general.” Moed is at pains to explain that SNIP is not a replacement for any other ranking tool because “there can be no single perfect measure of anything as multidimensional as journal ranking – the concept is so complex that no single index could ever represent it properly.” He continues: “SNIP is not the solution for anyone who wants a single number for journal ranking, but it does offer a number of strong points that can help shed yet another light on journal analysis.”

He adds, however, that contextual weighting means SNIP is offering a particular view, and it is important to take this into account when using it. He strongly believes that no metric, including SNIP, is useful alone: “it only really makes sense if you use it in conjunction with other metrics.”

Use the right tool

This leads to Moed’s wider aim: by providing a new option and adding to the range of tools available for bibliometricians, he hopes to stimulate debate on journal ranking and assessment in general. He explains: “All indicators are weighted differently, and thus produce different results. This is why I believe that we can never have just one ranking system: we must have as wide a choice of indicators as possible.” Like many in the bibliometric community, Moed has serious concerns about how ranking systems are being used.

It is also very important to combine all quantitative assessment with qualitative indicators and peer review. “Rankings are very useful in guiding opinions, but they cannot replace them,” he says. “You first have to decide what you want to measure, and then find out which indicator is right in your circumstances. No single metric can do justice to all fields and deliver one perfect ranking system. You may even need several indicators to help you assess academic performance, and you certainly need to be ready to call on expert opinions.”

In fact, a European Commission Expert Group on Assessment of University-based Research is working from the same assumption: that research assessment must take a multidimensional view.

Henk Moed

Henk F. Moed has been a senior staff member at the Centre for Science and Technology Studies (CWTS), in the Faculty of Social Sciences at Leiden University, since 1986. He obtained a PhD in Science Studies at Leiden University in 1989. He has been active in numerous research areas, including bibliometric databases and bibliometric indicators. He has published over 50 research articles and is an editor of several journals in his field. He won the Derek de Solla Price Award in 1999. In 2005, he published a monograph, Citation Analysis in Research Evaluation (Springer), one of the very few textbooks in the field.

Moed believes that what is really required is an assessment framework in which bibliometric tools sit alongside qualitative indicators in order to give a balanced picture. He expects that adopting a long-term perspective in research policy will become increasingly important, alongside the development of quantitative tools that facilitate it; SNIP fits well into this development. “But we must keep in mind that journal-impact metrics should not be used as surrogates for the actual citation impact of individual papers or the publication œuvres of research groups. This is also true for SNIP.”

More information means better judgment
Moed welcomes debate and criticism of SNIP, and hopes to further stimulate debate on assessment of scholarly communication in general. “I realize that having more insight into the journal communication system is beneficial for researchers because they can make well-informed decisions on their publication strategy. I believe that more knowledge of journal evaluation, and more tools and more options, can only help researchers make better judgments.”

His focus on context is also intended to both encourage and guide debate. “Under current evaluation systems, many researchers in fields that have low citation rates, slow maturation rates or partial database coverage – such as mathematics, engineering, the social sciences and humanities – find it hard to advance in their careers and obtain funding, because they do not score well against heavily and quickly citing, well-covered fields, simply because citation and database characteristics in their fields are different. I hope SNIP will help to illuminate this, and that a metric that takes context into account will be useful for researchers in slower-citing fields, as they can now really see which journals are having the most impact within their area and under their behavioral patterns.”

Useful links

SNIP
CWTS
“Measuring contextual citation impact of scientific journals”, Henk Moed