Research has long played an important role in human culture, yet its evaluation remains as heterogeneous as it is controversial. For several centuries, peer review has been the method of choice for evaluating research publications; in recent years, however, bibliometrics have become increasingly prominent.

Bibliometric indicators are not without their own controversies (1, 2), and recently there has been an explosion of new metrics, accompanying a shift in the scientific community's mindset towards a multidimensional view of journal evaluation. These metrics have different properties and, as such, can provide new insights into various aspects of research.

Measuring prestige

SCImago is a research group led by Prof. Félix de Moya at the Consejo Superior de Investigaciones Científicas. The group is dedicated to information analysis, representation and retrieval by means of visualization techniques, and has recently developed the SCImago Journal Rank (SJR) (3). SJR takes three years of publication data into account to assign relative scores to all of the sources (journal articles, conference proceedings and review articles) in a citation network, in this case the journals in the Scopus database.

Inspired by the Google PageRank™ algorithm, SJR weights citations by the SJR of the citing journal: a citation from a source with a relatively high SJR is worth more than a citation from a source with a relatively low SJR. The results and methodology of this analysis are publicly available, allowing journals to be compared over time and against one another.
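To illustrate the idea, here is a minimal sketch in Python of prestige-weighted scoring over a tiny citation network. This is not SCImago's actual formula, which adds refinements such as a damping factor and normalization by article counts, and the journal names and citation counts below are invented.

```python
# Sketch of prestige-weighted journal scoring in the spirit of SJR.
# NOT SCImago's actual formula; all names and counts are invented.

# citations[src][dst] = number of citations from journal src to journal dst
citations = {
    "Journal A": {"Journal B": 30, "Journal C": 10},
    "Journal B": {"Journal A": 20, "Journal C": 40},
    "Journal C": {"Journal A": 5, "Journal B": 15},
}

journals = list(citations)
score = {j: 1.0 / len(journals) for j in journals}  # start with equal prestige

for _ in range(50):  # iterate until the scores stabilize
    new_score = {}
    for j in journals:
        # Each incoming citation is weighted by the citing journal's current
        # score, spread over that journal's total outgoing citations.
        new_score[j] = sum(
            score[src] * out.get(j, 0) / sum(out.values())
            for src, out in citations.items()
        )
    score = new_score

for journal, s in sorted(score.items(), key=lambda kv: -kv[1]):
    print(f"{journal}: {s:.3f}")
```

In this toy network, a citation from the highest-scoring journal lifts the scores of the journals it cites more than a citation from a lower-scoring journal would, which is the essence of the prestige-weighting approach.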

Accounting for context

Another new metric based on the Scopus database is Source Normalized Impact per Paper (SNIP) (4), the brainchild of Prof. Henk Moed at the Centre for Science and Technology Studies (CWTS) at Leiden University. SNIP takes into account characteristics of the source's subject field: in particular, the frequency at which authors cite other papers in their reference lists, the speed at which citation impact matures, and the extent to which the database used in the assessment covers the field's literature.

SNIP is the ratio of a source's average citation count per paper in a three-year citation window to the “citation potential” of its subject field. Citation potential is an estimate of the average number of citations a paper in that field can be expected to receive. It matters because typical citation counts vary widely between research disciplines, tending to be higher in the life sciences than in mathematics or the social sciences, for example.
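A minimal sketch of this ratio, with invented figures (the actual CWTS procedure for estimating citation potential is considerably more elaborate):

```python
# Sketch of the SNIP idea: citations per paper normalized by the field's
# citation potential. All figures below are invented for illustration.

def snip(citations_in_window, papers, citation_potential):
    """Average citations per paper divided by the field's citation potential."""
    raw_impact_per_paper = citations_in_window / papers
    return raw_impact_per_paper / citation_potential

# Two journals with identical raw impact (4 citations per paper) in fields
# with very different citation habits:
print(snip(400, 100, citation_potential=4.0))  # life-sciences journal -> 1.0
print(snip(400, 100, citation_potential=1.0))  # mathematics journal   -> 4.0
```

With the same raw citation rate, the mathematics journal scores four times higher, because citations are far scarcer in its field.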

Citation potential can also vary between subject fields within a discipline. For instance, basic research journals tend to show higher citation potentials than applied research or clinical journals, and journals covering emerging topics often have higher citation potentials than periodicals in well-established areas.

More choices

Because SNIP and SJR use the same data source and publication window, they can be seen as complementary: SJR is primarily a measure of prestige, while SNIP is a measure of impact that corrects for context, although there is some overlap between the two.

Both metrics offer several new benefits. For a start, they are transparent: their respective methodologies have been published and made publicly available. These methodologies are community driven, responding to the expressed needs of the people who use the metrics. The indicators also account for differences in citation behavior between fields and subfields of science. Moreover, the metrics will be updated twice a year, giving users an early indication of changes in citation patterns. Furthermore, they are dynamic indicators: additions to Scopus, including historical data, will be taken into account in each biannual release. And lastly, both metrics are freely available and apply to all content in Scopus.

It should be emphasized that although the impact or quality of a journal is an aspect of research performance in its own right, journal indicators should not stand in for the actual citation impact of individual papers or of research groups' publication sets. This is true for both existing and new journal metrics.

The fact that SJR and SNIP are relatively new additions to the existing suite of bibliometric indicators is part of their strength. Both build upon earlier metrics, taking the latest thinking on measuring impact into account without being hindered by a legacy that ignores modern publication and citation practices. Their unique properties – including transparency, public availability, dynamism, field normalization and a three-year publication window – mean they offer a step forward in citation analysis and thus provide new insights into the research landscape.

References:

(1) Corbyn, Z. (June 2009) “Hefce backs off citations in favour of peer review in REF”, Times Higher Education Supplement
(2) Corbyn, Z. (August 2009) “A threat to scientific communication”, Times Higher Education Supplement
(3) de Moya, F. (December 2009) “The SJR indicator: A new indicator of journals' scientific prestige”, arXiv
(4) Moed, H. (November 2009) “Measuring contextual citation impact of scientific journals”, arXiv