Author Archive

Scientometrics from past to present

The origins of scientometric research can be traced back to the beginning of the 19th century. In the 21st century, the field is growing at an enormous pace and attracts interest far beyond the walls of universities and institutions. This two-part article explores not only scientometrics’ past but also its impact on and relevance in the present.



Scientometric research, the quantitative mathematical study of science and technology, encompassing both bibliometric and economic analysis (1), is expanding at an enormous pace. This is evidenced by increasing attendance at important industry conferences and by the recent launch of the dedicated Journal of Informetrics. Indeed, if one were to pick up an issue of any of the leading journals in the field today, one would find research covering article output, citation relationships between disciplines and geographical analysis linked to these. In a two-part article, we explore not only scientometrics’ past but also its impact on and relevance in the present.

The origins of bibliometric research can be traced back to the beginning of the 19th century within areas such as law. Shapiro (1992) (2) indicates that many aspects of bibliometrics were “practiced in the legal field long before being introduced into scientific literature”. Early research in the 1880s was reported by Delmas (1992), who describes documentation in France, but initial studies on qualitative and quantitative analysis of science seem to originate within psychological fields (Godin 2006) (3). Godin cites the work of Buchner in describing the notion of “scientific” psychology as “factual, inductive, measurable and experimental”, and in 1920 Boring presented research on subject and geographical analysis of psychologists.

Timeline

Laying down the Law

Probably the earliest, most definable research within the scientometric field was the work that gave rise to the laws of bibliometrics. The first, which came to be known as Lotka’s Law after Alfred Lotka, can be traced back to 1926 and suggested that within a defined area over a specific period, a small number of authors accounts for a large percentage of publications in the area. This was followed in 1935 by the work of George Kingsley Zipf, which describes the frequency of words in a text and became known as Zipf’s Law. Zipf’s research was refined into two main laws looking at high- and low-frequency words within a text. In 1948, Samuel Clement Bradford’s analysis indicated that within a given area over a specific time, a few journals publish a high percentage of the articles in that area while many journals publish only a few articles each: Bradford’s Law. These laws continue to be studied and form the basis of the modern-day scientometric literature.
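For readers who want the laws themselves rather than their history, the formulations below are the textbook versions commonly quoted in the bibliometric literature; they are a paraphrase for orientation, not the notation of the original 1926, 1935 and 1948 papers.

```latex
% Commonly quoted textbook formulations (not the original authors' notation):
\begin{align*}
  \text{Lotka:}    \quad & A(n) \approx \frac{A(1)}{n^{2}}
    && \text{the number of authors with } n \text{ papers falls off roughly as } 1/n^{2},\\
  \text{Zipf:}     \quad & f(r) \propto \frac{1}{r}
    && \text{a word's frequency is roughly inversely proportional to its rank},\\
  \text{Bradford:} \quad & 1 : k : k^{2}
    && \text{successive journal zones yielding equal numbers of articles grow geometrically}.
\end{align*}
```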

In 1944, Lehman described the relationship between quantity and quality within scientific writing and this was followed in 1952 by Dennis, who analysed the effect of scientists’ age on these two elements. Again these types of analyses continue to be described in the current literature, and began to direct thinking towards averaged metrics within bibliometrics.

Journal metrics

One of the most recognized accomplishments in the field of scientometrics is the development of the Impact Factor and the work of Eugene Garfield (4). Garfield first described the Impact Factor in 1955 as a method of selecting journals for inclusion in a genetics citation index he had been developing. This eventually resulted in the publication of the Science Citation Index in 1961 as a means of linking articles together via their references. Since it was first described, journal Impact Factor has developed into a widely used bibliometric indicator.

Around the same time, another key figure, Derek De Solla Price, was working on the study of the exponential growth of science and the citation activity of scientific literature. He published several papers describing the key elements of scientometric analysis, including work on patterns of communication between scientists and the overall history and study of science itself.

There was tremendous growth in the scientometric literature in the 1960s (Herubel 1999) (5), and from this point forward the field of scientometrics developed and differentiated into several specializations. These were brought together by the launch of the first journal devoted to the field, Scientometrics, founded and edited by Tibor Braun of the Hungarian Academy of Sciences. One of the most notable developments was citation analysis: once a laborious manual job that few scholars would engage in, it was transformed by the emergence of (print) citation databases, which allowed citation patterns to be studied with relative speed and ease.

In the next issue, we explore scientometrics’ transition into the 21st century: the proliferation of databases, new author-focused indices and the impact of the Web.

References:

(1) Diodato, V. (1994) Dictionary of Bibliometrics (1st ed.), New York: Haworth Press.
(2) Shapiro, F.R. (1992) “Origins of bibliometrics, citation indexing and citation analysis: The neglected legal literature”, Journal of the American Society for Information Science, Vol. 43, No. 5, pp. 337–339.
(3) Godin, B. (2006) “On the origin of bibliometrics”, Scientometrics, Vol. 68, No. 1, pp. 109–133.
(4) Garfield, E. (2006) “The history and meaning of the journal impact factor”, Journal of the American Medical Association, Vol. 295, No. 1, pp. 90–93.
(5) Herubel, J-P.V.M. (1999) “Historical Bibliometrics: Its purpose and significance to the history of disciplines”, Libraries & Culture, Vol. 34, No. 4, pp. 380–388.

Country analysis: examining the numbers

Research evaluation at country or national level is moving increasingly towards a metric-based system. The data produced can give interesting insights into the results within and between countries. We examine some of the numbers and what they can tell us.



Research evaluation at country or national level is moving increasingly towards a metric-based system. The most obvious examples are Australia, with its Research Quality Framework, and the United Kingdom, with its Research Assessment Exercise, where policymakers and administrators are being called upon to submit metrics for national evaluation.

It is interesting to examine two of the indicators on which researchers, policymakers and administrators focus, namely article count and citations received at country level. The differences in the number of articles published by each country may not be unexpected, but the top 1% and 5% citation thresholds certainly warrant further attention.

Methodology

An analysis was performed in Scopus to extract the top 1% and 5% of cited papers for ten randomly selected countries within 27 subject categories. The results of this analysis can be found in Table 1 (downloadable below).

The table shows the number of papers published by each country in each of the five years from 2002 to 2006. These counts are then separated into 27 subject categories (as specified in Scopus.com). For each of these years and for each subject category, the number of papers that forms part of the top 1% of highly cited papers was derived.

For the purpose of this analysis, it is important to note that the cut-off date for the data extraction was set at September 18, 2007, which favors ‘older’ papers in most instances. To illustrate, take the example of Australia in 2002, where the following result was obtained within the subject category Engineering:

  • There were 2595 papers published in 2002;
  • The top 1% is thus a total of 26 papers (rounded up);
  • The citation threshold equals 36 citations (up to September 18, 2007).

For an Australian researcher this means that if s/he has published a paper in Engineering and that paper has obtained more than 36 citations (allowing for the citation cut-off date), the paper belongs to the top 1% of Australian research output in that year. The table also shows these figures for the top 5% citation threshold for all ten countries spread over the 27 subject categories.
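The threshold calculation described above is simple to reproduce. The sketch below is illustrative only: the citation counts are invented, and real figures would come from Scopus for one country, subject category and publication year (as in the Australia/Engineering example).

```python
import math

# Hypothetical per-paper citation counts for one country, subject and year.
citation_counts = [88, 54, 41, 37, 36, 30, 22, 14, 9, 5, 3, 1, 0, 0]

def top_share_threshold(citations, share=0.01):
    """Return (papers in the top share, citation threshold for that share).

    The paper count is rounded up, as in the Australia example above
    (1% of 2,595 papers -> 26 papers).
    """
    ranked = sorted(citations, reverse=True)
    n_top = math.ceil(len(ranked) * share)
    threshold = ranked[n_top - 1]  # citations of the last paper inside the top share
    return n_top, threshold

n_top, threshold = top_share_threshold(citation_counts, share=0.05)
print(f"Top 5%: {n_top} paper(s), citation threshold {threshold} citations")
```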

Data of this kind can also be used to analyze differences in results between countries, and Research Trends will be providing this data for more countries in the future.

There are many interesting directions in which this research can develop and we welcome your feedback. This will help us to deliver the content you are most interested in.

Table 1 – Snapshots from full table (downloadable below)
Source: Scopus

Click here to download the complete table for all countries


Focus on Turkey: the influence of policy on research output

Scientific research is becoming increasingly global. In the first of a series exploring research trends across borders, we focus on Turkey and the policy changes that have affected article output in recent years.



With a special contribution from Professor Cem Saraç

When describing research, the American astronomer Dr. Carl Sagan was quoted as saying, “Somewhere, something incredible is waiting to be known.” This inspiring quotation reflects the fact that research exists in all parts of the world (and indeed outside of the world, as in the case of Astronomy) and that researchers collaborate to produce incredible breakthroughs in every country. This is the first in a series of articles that reflect the global nature of research. The series covers research trends across countries, and investigates the proliferation of research communication throughout the world.

We are focusing our first analysis on Turkey, a country that has shown strong growth in article output in recent years (see Figure 1 below).

Figure 1 – Output of journal research articles in Turkey 2000–2006 (Source: Scopus)

Research article output grew at an average rate of 17% per annum over the period 2000–2006, compared with overall growth of 3.5% per annum in the same period. But how can we explain this increase? The OECD Main Science and Technology Indicators Vol. 2007 (1) identify trends that match this increase in research articles. Figure 2 illustrates the growth in the number of researchers based in Turkey. Comparing the data in the two graphs suggests that the more researchers a country has, the more articles are written and published from institutions within that country.
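As a purely illustrative aside, an average annual growth rate of the kind quoted above can be derived from start- and end-of-period article counts; the counts in this sketch are hypothetical, not the Scopus figures behind Figure 1.

```python
# Hypothetical yearly article counts for the start and end of the period.
articles = {2000: 6_000, 2006: 15_400}

years = 2006 - 2000
average_annual_growth = (articles[2006] / articles[2000]) ** (1 / years) - 1
print(f"Average annual growth: {average_annual_growth:.1%}")  # roughly 17% per annum
```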

Figure 2 – Researchers active in R&D in Turkey 2000–2004 (Source: OSYM 2007 (2))

This increase in research articles and number of researchers is also matched by the increase in funding of higher education (HE) within Turkey; Figure 3 illustrates the growth in HE funding across the same period.

Figure 3 – Higher education funding in Turkey 2000–2004 (Source: OECD)

While these indicators continue to increase, differences between subject fields are also evident. Figure 4 illustrates the subject breakdown of Turkish research in 2006 in Scopus and demonstrates that medical and life science research is currently leading the way in terms of published output, but that significant contributions are also being made to the physical and mathematical sciences.

Figure 4 – Subject focus of Turkish research articles 2006 (Source: Scopus)

A clear relationship exists between research funding, researcher population and article outputs at a national level, and Turkey is no exception. Data like this can inform and guide policymakers at all levels to leverage the infrastructure of the national science system and cultivate a knowledge economy.

As an ‘insider’, so to speak, Cem Saraç, Professor of Engineering at Hacettepe University, Ankara, says there are two principal reasons that could explain the figures above (3). Both relate to policy changes. “The first one can be linked to the Turkish Ministry of Health’s strategy,” he says. Indeed, OECD figures (4) show that health spending per capita in Turkey grew, in real terms, by an average of 5.8% per year between 2000 and 2005. This was one of the fastest growth rates in OECD countries and significantly higher than the OECD average of 4.3% per year. In addition, as part of a nationwide performance-based contribution payment system (5), implemented in training and research hospitals in 2004, clinic and deputy chiefs, chief interns and specialists receive additional scores provided they publish a specified number of papers.

“The second reason for the significant growth is the prerequisites, generally initiated after 2000, for applying for university degrees at Lecturer, Assistant Professor, Associate Professor and Full Professor levels,” he continues. “My university stipulates that one has to write at least three international papers in order to apply for an Associate Professor Degree and another four international papers for a Full Professor Degree. While each university has its own requirements, prerequisites like these could also have affected article growth.”

References:

(1) OECD Main Science and Technology Indicators, Vol. 2007
(2) OSYM (2007), Student Selection and Placement Center, Research and Publishing, from the World Wide Web
(3) Demirel, I.H., Sarac, C. and Ozgen T. (2007) “Science in Turkey, 1973-2006”. Science Magazine, AAAS.
(4) OECD (2007) “OECD Health Data 2007, How Does Turkey Compare”, Retrieved September 21, 2007 from the World Wide Web
(5) The Ministry of Health of Turkey “Performance-based payment system in the Ministry of Health Practices”, Retrieved September 21, 2007 from the World Wide Web

The politics of bibliometrics

As bibliometric indicators gain ground in the measurement of research performance and quality, and researchers and editors understand the importance of citations in these indicators, the potential for citation manipulation is bringing a political dimension to the world of bibliometrics. Research Trends explores the effect of excessive self-citation and spurious co-authorship on citation patterns.



Academic performance indicators are relied upon for their objective insight into prestige. However, the data that they draw upon can be affected by practices such as self-citation and spurious co-authorship.

Mayur Amin and Michael Mabe have discussed and identified several issues with the Impact Factor (IF), probably the most widely used (and abused) of all indicators (1), and more recently its derivation and transparency have been criticized in several articles. Such criticisms apply to many bibliometric indicators, but most have focused on the IF because of its prominence.

Self-citation poses a political and ethical dilemma for bibliometrics: although it plays a vital role in research for both journals and authors, it can also be seen as a way to artificially increase the ranking of a journal or an individual. While most self-citation is justified and necessary, the potential for abuse has become a political issue for bibliometrics.

Dropping your own name

In 1974, Eugene Garfield wrote of self-citation rates: “It says something about your field – its newness, size, isolation; it tells us about the universe in which a journal operates.” (2) While this continues to be true, some researchers have indicated that there is a link between self-citation and the overall citation levels an author receives. James Fowler and Dag Aksnes claim that “a self-cite may yield more citations to a particular author, without yielding more citations to the paper in question” (3).

When self-citation is overused or blatant, it can be detected through bibliometric analysis and tends to attract significant attention. Several key researchers have called for bibliometric indicators to be calculated both with and without self-citations, to identify their effects or to understand the reasons for the self-citation (4,5,6,7).
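As a rough illustration of what ‘with and without self-citations’ means in practice, the sketch below recomputes a simple citations-per-paper indicator after stripping self-citations; the paper records are invented, and real data would come from a citation database.

```python
# Hypothetical paper records: total citations received, and how many of those
# are self-citations (e.g. citations from the same journal or the same author).
papers = [
    {"id": "p1", "citations": 12, "self_citations": 4},
    {"id": "p2", "citations": 3,  "self_citations": 0},
    {"id": "p3", "citations": 25, "self_citations": 10},
    {"id": "p4", "citations": 0,  "self_citations": 0},
]

def citations_per_paper(papers, include_self=True):
    """Average citations per paper, optionally excluding self-citations."""
    totals = [
        p["citations"] if include_self else p["citations"] - p["self_citations"]
        for p in papers
    ]
    return sum(totals) / len(papers)

with_self = citations_per_paper(papers, include_self=True)
without_self = citations_per_paper(papers, include_self=False)

print(f"With self-citations:    {with_self:.2f} citations per paper")
print(f"Without self-citations: {without_self:.2f} citations per paper")
print(f"Share of citations that are self-citations: {1 - without_self / with_self:.0%}")
```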

Another political consideration concerns the replication of reference lists between articles. This occurs where a set of references is deemed to be important enough to be included in almost every article in the field. It sometimes happens even if there is only a tenuous link to the article in question, and when the author may not even have read the paper. This adds numerous “extra” citations to the pool each year. In fact, after analyzing the references in five issues of different medical journals, Gerald de Lacey, Christopher Record and James Wade found that errors were proliferating through medical literature into other articles at an alarming rate (8).

Do we need a watchdog?

The potential for abuse suggests that we may need to regulate citations. At present, we rely on authors to self-regulate. But this is a sensitive issue.

As John Maddox has described it, “the widespread practice of spurious co-authorship” (9) is another political aspect of research. In some extreme cases, as Murrie Burgan indicates, articles list more than 100 authors (10). How can it be possible for each of those authors to have actively contributed to the article? Moreover, John Ioannidis has shown that the average number of authors per paper is increasing, indicating that the problem is growing (11). And, according to research carried out by Richard Slone into authors listed on papers published in the American Journal of Roentgenology, the number of so-called “undeserved authors” rises as the list gets longer: 9% of authors on a three-author paper were undeserved, rising to 30% on papers with more than six authors (12).

Part of the problem is that a researcher’s personal success is intimately intertwined with his or her publication record. And as long as measures such as the h-index fail to distinguish between the first and the 30th author on a paper, undeserved co-authorship will continue. Some believe that the peer-review process should act as the governing body for research, asking journal editors and referees to act as bibliometric police. However, it can be very difficult to spot incidents of overactive self-citation, unrelated or incorrect references and erroneous authors while attempting to assess whether the quality of research warrants publication.

There is also the potential to introduce a regulatory body, but the question remains: who should this be? Potentially publishers or associations, but it is far from clear whether there is a need for an independent organization to regulate the system.

As explained above, some researchers have suggested that metrics should be developed that account for excessive self-citation or that cleaner data are used. In the former case, self-citations can be taken out and weighted averages introduced but this can make the metric extremely complex. Meanwhile, publishers are working towards providing increasingly clean data, which makes processes easier.

In the end, is it worth all the effort? As long as the community as a whole can bring thoughtful analysis and interpretation, as well as a healthy dose of common sense, to bear on citations, such political considerations should be mitigated. As Winston Churchill once said: “If you have ten thousand regulations, you destroy all respect for the law.”

References:

(1) Amin, M., Mabe, M. (2000) “Impact Factors: use & abuse”, Perspectives in Publishing, No. 1.
(2) Garfield, E. (1974) “Journal self citation rates – there’s a difference”, Current Contents, No. 52, pp. 5–7.
(3) Fowler, J.H. and Aksnes, D.W. (2007) “Does self-citation pay?”, Scientometrics, Vol. 72, No. 3, pp. 427–37.
(4) Schubert, A., Glanzel, W. and Thijs, B. (2006) “The weight of author self-citations. A fractional approach to self-citation counting”, Scientometrics, Vol. 67, No. 3, pp. 503–14.
(5) Hyland, K. (2003) “Self citation and self reference: credibility and promotion in academic publication”, JASIST, Vol. 54, No. 3, pp. 251–59.
(6) Aksnes, D.W. (2003) “A macro study of self citation”, Scientometrics, Vol. 56, No. 2, pp. 235–46.
(7) Glanzel, W., Thijs, B. and Schlemmer, B. (2004) “A bibliometric approach to the role of author self-citations in scientific communication”, Scientometrics, Vol. 59, No. 1, pp. 63–77.
(8) de Lacey, G., Record, C. and Wade, J. (1985) “How accurate are quotations and references in medical journals?”, British Medical Journal, Vol. 291, September, pp. 884–86.
(9) Maddox, J. (1994) “Making publication more respectable”, Nature, Vol. 369, No. 6479, p. 353.
(10) Burgan, M. (1995) “Who is the author?”, STC Proceedings, pp. 419–20.
(11) Ioannidis, J.P.A. (2008) “Measuring co-authorship and networking adjusted scientific impact”, PLoS ONE, Vol. 3, No. 7, Art. No. e2778.
(12) Slone, R.M. (1996) “Coauthors’ contributions to major papers published in the AJR: frequency of undeserved coauthorship”, American Journal of Roentgenology, Vol. 167, No.3, pp. 571–79.

Pleased to cite you: the social side of citations

The advent of robust data sources for citation analysis and computational tools for social network analysis in recent years has reawakened an old question in the sociometrics of science: how socially connected are citers to those that they cite? Research Trends talks to those in the know.



Charles Oppenheim

While social connectedness correlates with citation counts, science is still more about what you know than who you know. A recent investigation of the social and citation networks of three individual researchers concluded that while a positive correlation exists between social closeness and citation counts, these individuals nevertheless cited widely beyond their immediate social circle (1).

Professor Charles Oppenheim comments on the motivation for this study and its main findings: “Our research started from the hypothesis that people were more likely to cite those close to them, forming so-called ‘citation clubs’ of colleagues in the same department or research unit. There is an allegation that such citation clubs distort citation counts. We took as our primary target the Centre for Information Behaviour and the Evaluation of Research (CIBER) group of researchers based at University College London, well known for their work in deep log analysis.

“The research was quite novel because it used social network analysis (SNA) techniques and UCINET SNA software to analyze the results from questionnaires we sent to CIBER group members and people they had cited. We found no evidence of a citation club – CIBER researchers aren't necessarily socially close to the researchers they cite. However, it must be stressed that this was a small-scale experiment and cannot be generalized to all subject areas, or indeed to anyone apart from the CIBER group.”
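For readers unfamiliar with how such a correlation is tested, the sketch below is a minimal, invented example in the spirit of the study: it correlates a social-closeness score with the number of citations given to each contact. The actual study used questionnaires and UCINET social network analysis software, not this code.

```python
from statistics import mean

# Hypothetical pairs: (social closeness score 0-5, citations given to that person).
pairs = [(5, 9), (4, 6), (4, 7), (3, 3), (2, 4), (1, 2), (0, 3), (0, 1)]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed directly from its definition."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

closeness, citations = zip(*pairs)
print(f"Correlation between closeness and citations: {pearson(closeness, citations):.2f}")
```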

A circle of friends and colleagues

Blaise Cronin

Blaise Cronin, Dean and Rudy Professor of Information Science at Indiana University, US, and newly appointed Editor-in-Chief of the Journal of the American Society for Information Science and Technology, agrees that both social and intellectual connections affect citation. “We certainly don’t cite authors just because they are colleagues or friends, but all things being equal, most of us would probably give the nod to those whom we know personally.

“Our colleagues, co-workers, trusted assessors and friends are often to be found nearby – in the lab, along the faculty corridor. Even in an age of hyper-networking, place and physical proximity play a part in determining professional ties and loyalties. And those bonds, in turn, can shape our citation practices.

“Co-citation maps do not merely depict intellectual connections between authors; inscribed in them, in invisible ink as it were, are webs of social ties. A number of bio-bibliometric studies (2) have attempted to combine sociometric and scientometric data to reveal these ties. As the digital infrastructure evolves, we may soon see the emergence of a new sub-field, bio-bibliometrics, and the first generation of socio-cognitive maps of science.”

Paying an intellectual debt

Howard D. White

Howard D. White, Professor Emeritus at the College of Information Science & Technology at Philadelphia’s Drexel University, US, has been interested in the social dimension of citation for some time. His work on the social and citation structure of an interdisciplinary group established to study human development concluded that citations are driven more by intellectual than social ties (3).

White explains: “There is no doubt that citation networks and social networks often overlap. Given the specialization of research fields, how could this not be the case? But no scientist or scholar would fail to cite a useful work simply because it was by a contemporary they had not met or a dead predecessor they could not have met. Citations are made to buttress intellectual points, and perceived relevance toward that end is far more important than social ties in determining who and what gets cited.”

As the nascent field of bio-bibliometrics continues to grow, we will come to a better understanding of the motivations underlying the practice of citation. Yet it is already clear that, in the main, citations mark the acknowledgement of intellectual debt to those who have gone before, rather than mere whimsy: it really is all about what you know, not who you know.

References:

(1) Johnson, B. and Oppenheim, C. (2007) “How socially connected are citers to those that they cite?”, Journal of Documentation, Vol. 63, No. 5, pp. 609–37.
(2) Cronin, B. (2005) “A hundred million acts of whimsy?”, Current Science, Vol. 89, No. 9, pp. 1505–09.
(3) White, H. D. (2004) “Does citation reflect social structure? Longitudinal evidence from the ‘Globenet’ interdisciplinary research group”, Journal of the American Society for Information Science and Technology, Vol. 55, No. 2, pp. 111–26.

Obama’s “Dream Team”

Obama’s new senior science advisory team brings together some of the most successful and influential scientists in the US, ushering in a new era where science is at the centre of policy. We look at the track records of five of the appointees.



US President Barack Obama’s choices for senior science advisory posts in his new administration include some of the most prolific and high-impact scientists working in the US today, earning the group the nickname of Obama’s “Dream Team”.

In his weekly radio address in December 2008, Obama vowed to “put science at the top of our agenda [because] science holds the key to our survival as a planet and our security and prosperity as a nation”.

Environment on the agenda

As Assistant to the President for Science and Technology and Director of the White House Office of Science and Technology Policy, John P. Holdren is Obama’s top science advisor. Based at the Kennedy School of Government at Harvard University, Holdren is a physicist whose publications on sustainable energy technology and energy policy have featured frequently in Science; his seminal 1971 article (with population biologist Paul Ehrlich) entitled “Impact of population growth” (1) continues to be cited strongly (with more than 30 citations during 2007).

Holdren was recently president of the American Association for the Advancement of Science (AAAS) and then chairman of its Board of Directors. In a statement on the AAAS website, the Association’s Chief Executive Officer Alan Leshner noted: “John Holdren’s expertise spans so many issues of great concern at this point in history – climate change, energy and energy technology, nuclear proliferation.”

Another past president of the AAAS, Jane Lubchenco, assumes the role of National Oceanic and Atmospheric Administration (NOAA) Administrator. The first woman to head the agency, Lubchenco has an impressive list of publications in marine ecology, and co-authored a 1997 article, cited more than 1,400 times to date, warning of the impacts of human activity on the global ecosystem and the immediate need for action (2). Like Holdren, Lubchenco has a Harvard connection, having taken her Ph.D. there in 1975 and holding a teaching post before relocating to Oregon State University in 1978.

Stocking up on Nobel laureates

President Obama’s Secretary of Energy, Steven Chu, Professor of Physics and Molecular & Cellular Biology and Director of the Lawrence Berkeley National Laboratory at the University of California, Berkeley, shared the 1997 Nobel Prize in Physics for his research in cooling and trapping of atoms with laser light. Chu, the first Laureate to be appointed to the Cabinet, has research interests in single-molecule biology that are reflected in his list of more than 140 journal publications since 1996, with more than 7,000 citations to date.

Rounding out President Obama’s “Dream Team” are two biologists, Eric Lander and Harold Varmus, co-chairs of the President’s Council of Advisers on Science and Technology (PCAST) with Holdren. PCAST is a panel of private sector and academic representatives established in 2001 to advise on issues related to technology, research priorities and science education.

Lander, founding Director of the Broad Institute of Massachusetts Institute of Technology and Harvard, was instrumental in the Human Genome Project; his more than 350 journal publications have collectively been cited more than 75,000 times since 1996.

Varmus, former director of the National Institutes of Health and President and CEO of Memorial Sloan-Kettering Cancer Center since 2000, is the second Nobel Prize winner (Physiology or Medicine, 1989) appointed to Obama’s team. His prize-winning research on the cellular origin of retroviral oncogenes published in Nature in 1976 (3) continues to be cited (21 times in 2007).

Towards a well-informed future

President Obama has collected some of the finest scientific talent in the US to advise him, with a particular focus on environmental issues. In fact, the team has also been dubbed the “Green Team”. These five individuals were together cited more than 12,000 times in 2007 and their experience spans the breadth of the physical sciences.

Incidentally, Obama himself is a published author, with a dozen journal publications: his 2006 article (4) with erstwhile presidential rival and now Secretary of State Hillary Clinton on healthcare reform has been cited 28 times to date.

President Obama outlined the key role that science policy will play in the US’s economic recovery in his inauguration speech in January: “The state of the economy calls for action, bold and swift, and we will act […] We will restore science to its rightful place”.

References:

(1) Ehrlich P.R. and Holdren J.P. (1971) “Impact of population growth”, Science, Vol. 171, pp. 1212–17.
(2) Vitousek P.M., Mooney H.A., Lubchenco J. and Melillo J.M. (1997) “Human domination of Earth's ecosystems”, Science, Vol. 277, pp. 494–99.
(3) Stehelin D., Varmus H.E., Bishop J.M. and Vogt P.K. (1976) “DNA related to the transforming gene(s) of avian sarcoma viruses is present in normal avian DNA”, Nature, Vol. 260, pp. 170–73.
(4) Clinton H.R. and Obama B. (2006) “Making patient safety the centerpiece of medical liability reform”, New England Journal of Medicine, Vol. 354, pp. 2205–08.



Inspired by bibliometrics

For many editors, the bibliometrics underlying journal rankings are a fuzzy area. But for those already using similar techniques in their research, bibliometrics is another tool to help increase journal quality. Brian Fath tells us how the work of Derek de Solla Price has opened his eyes to the world of citation analysis.

Read more >


Brian Fath

Brian Fath is an Associate Professor in the Department of Biological Sciences at Towson University, USA, and Editor-in-Chief of the journal Ecological Modelling. Like all journal editors, he wants his journal to continue improving. However, unlike many editors, he has a passion for network analysis, giving him a unique insight into the way ranking metrics are calculated and an enhanced understanding of how scholarly literature is cited within communities.

Fath uses ecological network analysis to identify relationships between non-connected elements in food webs. He says: “Network analysis is a very powerful tool to identify hidden relationships. We can now integrate the networks of different systems and identify indirect pathways, making it possible for us to see the unexpected consequences of our actions. For example, CFCs looked good in the lab, but it took 40 years to understand their effect on the planet. Through network analysis, we can potentially gauge those effects before we cause them.”

In October 2007, he was invited to give a presentation on “Assessing Journal Quality Using Bibliometrics” at the Elsevier Editors’ Conference in Miami. While carrying out background research, he came across Derek de Solla Price. “His 1965 paper (1) was a revelation, and I literally just stumbled upon it,” he recalls.

Eye opener

“I thought this paper was fascinating. For instance, de Solla Price identifies research fronts, marked by review papers. This is important, because he also shows that the frequency of review papers is not linked to time, but to the number of papers published in the field. Hot topics, where a lot of papers are published, prompt review papers more frequently than slower-paced areas. This changed my mind on the frequency of publishing review papers,” says Fath.

He was also interested in de Solla Price’s discussion of non-cited papers. Around 35% of papers in a given year are never cited. Editors obviously want to publish the best research, but how can they recognize the outliers? “Our journal is quite avant-garde. We publish some novel papers, and naturally some don’t get cited. But on the other hand, if we could find a way to reduce the number of non-cited papers, our Impact Factor would go up,” he remarks.

Improving quality

Fath believes that bibliometrics can help editors improve the quality of their journals. “We can improve the field by knowing when to call for a review paper and by promoting timely special issues, and these actions are reflected in our bibliometrics,” he says. For instance, he recently discovered that special issues of his journal were actually less frequently cited than regular issues. “We’ve decided to try doing themed issues next year to see if that serves the community better than traditional conference-based special issues,” he says.

He is also paying more attention to keywords in papers, and especially in abstracts. He believes that “people are really starting to use search engines to find papers, and it seems logical to use keywords. Abstracts are also very important: well-written, clear English is very attractive.”

He does have one concern, however. “We are going through a period of rapid journal growth, which I don’t think is sustainable. It’s possible to get almost anything published somewhere these days – in fact, it can get quite hard to follow the literature. And all these papers are citing other papers, which means everyone’s Impact Factor is increasing. But I wonder if it’s sustainable; can all these new journals also expect their Impact Factors to rise?”

Yet overall, despite some resistance, Fath is convinced that citation analysis is very valuable: “Communities should be citing each other – this is what marks them out as a community; and if you’re not being cited by your own community, you should want to know this and do something about it.”

References:

(1) de Solla Price, D.J. (1965) “Networks of scientific papers”, Science, Vol. 149, pp. 510–15.

From h to g: the evolution of citation indices

The ‘g-index’ was developed by Professor Leo Egghe in 2006 in response to the ‘h-index’. Both indices measure the output (quantity) and impact (quality or visibility) of an individual author. Egghe explains why he thinks the g-index is an improvement.



The h-index has become a familiar term among bibliometricians since its inception in 2005, and is being increasingly adopted by non-bibliometricians. The letter h is often thought to stand for the h in Hirsch, the name of the physicist who developed it, although it is actually short for ‘highly cited’. An author’s h-index is the largest number h of his or her papers that have each received h or more citations. For example, Professor X has an h-index of 39 if 39 of his 185 papers have at least 39 citations each and the other 146 (185-39) papers have no more than 39 citations each.
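A minimal sketch of the calculation, using invented citation counts; it simply restates the definition above in code and is not code from Hirsch or Egghe.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

paper_citations = [48, 33, 30, 24, 19, 12, 9, 7, 4, 2, 1, 0]  # hypothetical author
print(h_index(paper_citations))  # 7: seven papers have at least 7 citations, but not eight with at least 8
```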

Previous indices have tended only to focus on the impact of individual journals, using the average number of times published papers are cited up to two years after publication. This means that one paper in the journal might have been highly cited and another hardly at all but the authors of both are judged equally on the Impact Factor of their journal. While the h-index can measure individual authors, thereby overcoming the shortcomings of journal Impact Factor, it has limitations of its own. “It is insensitive to the tail of infrequently cited papers, which is a good property,” says Professor Leo Egghe, Chief Librarian at Hasselt University, Belgium and Editor-in-Chief of the Journal of Informetrics, “but it’s not sufficiently sensitive to the level of highly cited papers. Once an article belongs to the h top class, the index does not take into account whether that article continues to be cited and, if so, whether it receives 10, 100 or 1000 more citations.”

What’s in a name?

The g-index is so called for two reasons: Egghe rejected the name ‘e-index’ on the grounds that it has a different connotation in mathematics. He therefore looked at the two g’s in his surname instead. G also falls immediately before h in the alphabet, reinforcing its link to the h-index.

Lotka’s Law

This is where the g-index has evolved from its predecessor. It has all the advantages and simplicity of the h-index, but also takes into account the performance of the top articles. It was in direct response to his criticisms of the h-index that Egghe developed the g-index. Egghe is no newcomer to bibliometrics: his main area of expertise is Lotka’s Law, whose premise is that as the number of articles published increases, the number of authors producing that many publications decreases. This principle forms the basis of both the h- and the g-indices, the formulae for which Egghe was the first to prove. The difference between them is that while the top h papers can have many more citations than the h-index would suggest, the g-index is the highest number g of papers that together received g² or more citations. This means that the g-index score will be at least as high as that of the h-index, and it makes the differences between two authors’ respective impacts more apparent. “The only disadvantage I’ve found so far with the g-index is that you need a longer table of numbers to reach your conclusions!” says Egghe.
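A companion sketch for the g-index, using the same invented citation counts as the h-index example above; again, this only illustrates the definition.

```python
def g_index(citations):
    """Largest g such that the g most-cited papers together have at least g**2 citations."""
    ranked = sorted(citations, reverse=True)
    running_total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        running_total += cites
        if running_total >= rank ** 2:
            g = rank
    return g

paper_citations = [48, 33, 30, 24, 19, 12, 9, 7, 4, 2, 1, 0]  # same hypothetical author
print(g_index(paper_citations))  # 10, compared with an h-index of 7 for the same list
```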

Access to funds

For many scientists, there is a direct correlation between where they are ranked in their field and the amount of funding they can attract. “Everything is measured these days, which explains the growth of bibliometrics as a whole,” says Egghe. “The g-index enables easy analysis of the highest cited papers; but the reality is that as time passes, it’s not going to be possible to measure an author’s performance using just one tool. A range of indices is needed that together will produce a highly accurate evaluation of an author’s impact.”


…a Top-Cited marketing paper?

Stephen Vargo and Robert Lusch’s 2004 paper “Evolving to a new dominant logic for marketing” is the Top-Cited in its category. We ask Vargo and one of its many citers why they think this article is so successful.



In the subject area Economics, Econometrics and Finance, the paper “Evolving to a new dominant logic for marketing”, published by Stephen Vargo and Robert Lusch in the Journal of Marketing, was the Top-Cited article between 2004 and 2008. This article has been cited 282 times.

Relevance and timing count

Professor Vargo from the Shidler College of Business at the University of Hawaii, US, explains: “While we did not fully anticipate the impact the article would have, I think there are several reasons for it. First, it was intended to capture and extend a general evolution in thought about economic exchange, both within and outside of marketing. The most common comment we receive is something like ‘you said what I have been trying to say’ in part or in whole. Thus, although it was published in a marketing journal, it seems to have resonated with a much larger audience.

“We have also said from the outset that what has now become known as service-dominant (S-D) logic is a work in process and have tried to make its development inclusive. As we have interacted with other scholars, we have modified our original views – and the original foundational premises – and expanded the scope of S-D logic. This approach seems to have been well received.”

Professor Vargo also acknowledges an element of “fortuitous timing” in the article’s success: “The role of service in the economy is becoming increasingly recognized and firms such as IBM and GE – and many others – are shifting from thinking about themselves as manufacturing firms to primarily service firms. Similar shifts are taking place in academic and governmental thinking. S-D logic provides a service-based, conceptual foundation for these changes.”

Busting paradigms

Professor Eric Arnould from the Department of Management and Marketing at the University of Wyoming, US, has cited this paper. He explains: “this article is a paradigm buster; it is as simple as that. The paper took under-systematized currents of thought that have been circulating in the marketing discipline for a number of years and codified them. The paper proposes that marketing is about the exchange of services or resources, not things; and that value is always co-created in the exchange of resources both immaterial (operand) and material (operant) between parties. If widely adopted, their detailed proposals will change marketing theory and practice forever. The paper is widely cited because of the ongoing interest in their recommendations both in practice, such as for IBM, and in the academic world. We cited the paper both for its content and its authority as a paradigm buster”.

References:

(1) Vargo, S.L. and Lusch, R.F. (2004) “Evolving to a new dominant logic for marketing”, Journal of Marketing, Vol. 68, No. 1, pp. 1–17.

Breaking boundaries: patterns in interdisciplinary citation

Collaboration has always been an essential aspect of scientific research. Today, technology is making it easier for researchers in one field to access and identify useful research in other subjects. We take a look at citations made to other subjects to see whether collaboration is increasing and in which areas.

Read more >


Science today is separated into many areas that relate to each other in different ways. But are there any areas of research that cross the boundaries of science? Which are the most interdisciplinary areas of research?

This article investigates the major subject areas identified in Scopus that are cited by other subject areas, and attempts to identify those that show the most interdisciplinary citation patterns. We have taken articles published in each subject area in the periods 1996–2000 and 2003–2007 and measured citations to these articles from other subject areas within the same two periods. We can then compare the percentage of citations received from each other subject across both time periods to determine which areas showed the biggest shift in citation patterns.
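In outline, the comparison works as sketched below: for one cited field, compute the share of its citations contributed by each citing subject in the two periods and look at how those shares shift. The counts are invented for illustration and are not the Scopus data behind the figures.

```python
# Hypothetical citation counts to one cited field (here labelled mathematics),
# broken down by citing subject area and period.
citations_received = {
    "1996-2000": {"Mathematics": 5200, "Physics": 2100, "Engineering": 1400, "Computer Science": 900},
    "2003-2007": {"Mathematics": 6100, "Physics": 2500, "Engineering": 2600, "Computer Science": 2200},
}

def shares(counts):
    """Percentage of citations contributed by each citing subject."""
    total = sum(counts.values())
    return {subject: 100 * n / total for subject, n in counts.items()}

early = shares(citations_received["1996-2000"])
late = shares(citations_received["2003-2007"])

for subject in early:
    shift = late[subject] - early[subject]
    print(f"{subject}: {early[subject]:.1f}% -> {late[subject]:.1f}% ({shift:+.1f} points)")
```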

The results were mixed. For instance, medicine showed very little variation in citation patterns between the two periods, with the majority of citations coming from other medical fields and those in associated life sciences (see Figure 1).

A similar pattern was seen in other medical and life science areas, including biochemistry, neuroscience, nursing, and pharmacology and toxicology. Areas such as arts and humanities, social sciences or psychology also indicated no significant shift in the citation patterns of these fields, although it is worth mentioning that some of these subjects are already diverse by nature.

Figure 1: Differences in citations to medicine from other subject fields.

Branching out…

In contrast, fields such as computer science, engineering, energy and mathematics all showed a great deal of change in the subjects that cite them. Figure 2 illustrates the pattern for mathematics and Figure 3 for computer science.

Figure 2: Differences in citations to mathematics from other subject fields.

Figure 3: Differences in citations to computer science from other subject fields.

These results indicate a shift in citation patterns, with a changing mix of subject areas citing the literature of these fields, and point to changes in the nature of their citation relationships. Indeed, within computer science, shifts of up to 6% are seen in citation activity from other areas, with the main shifts evident in citations from engineering and mathematics.

To investigate these shifts more closely, we compared the top ten most-citing subjects for the two fields that seem to show the most interdisciplinary origin of their citing articles – energy and engineering. Figures 4 to 7 illustrate the percentage breakdown of citations to these areas.

Both energy and engineering have a diverse citation spread and have shown an increase in the “other” areas that have cited them between the two time periods. Energy has shown a 2% shift in citations from “other” fields, while engineering has shown a 6% shift.

Figure 4: Comparison of top ten subjects citing the field of energy, 1996–2000.

Figure 5: Comparison of top ten subjects citing the field of energy, 2003–2007.

Figure 6: Comparison of top ten subjects citing the field of engineering, 1996–2000.

Figure 7: Comparison of top ten subjects citing the field of engineering, 2003–2007.

…or converging?

Moshe Kam, Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and Professor at Drexel University, US, is not surprised by these findings. He says that many research areas that were relatively “isolated” in the past have been developing a stronger interface with disciplines within engineering and computing.

Kam explains: “Rather than interpreting the data as showing increased cross-disciplinary activity, the data may actually indicate that some disciplines and sub-disciplines are converging, or even merging. One example is the increase in the volume of work at the interface of life sciences, computer science, computer engineering and electrical engineering. It is clear from reading papers at this intersection of subjects that many scientists and engineers who were educated in a traditional ‘standalone’ discipline have educated themselves quite well in other areas. At times it is hard to distinguish between the pattern-recognition specialist, the biological-computation expert and the software engineer. There is much less compartmentalization and much more sharing – not only in the results of tasks divided between researchers, but in actually doing the detailed research work together.”

It thus appears that for researchers in certain subjects, the results of research in certain other, complementary fields are not only of added value; they are becoming essential. If Kam is correct, the trend is towards convergence rather than cross-disciplinarity for fields that share common research questions and approaches. It remains to be seen whether this will lead to new areas of study at the intersections of complementary fields or to greater collaboration between experts within those fields.

Useful links:

IEEE

  • Elsevier has recently launched the International Center for the Study of Research (ICSR) to help create a more transparent approach to research assessment. Its mission is to encourage the examination of research using an array of metrics and a variety of qualitative and quantitative methods.