Articles

Research Trends is an online magazine providing objective insights into scientific trends based on bibliometric analyses.

Turning the ranking tables on their head: how to improve your standing

Rankings are a useful way for outsiders to assess the relative value of different universities, and administrators are quickly learning that a strong ranking is a valuable mark of quality. But what are the rankers looking at, and how can a university show its best side? We speak to one university that is making the rankings work in its favor.

In February 2009, the third International Symposium on University Rankings was held in Leiden, the Netherlands. University rankings were discussed from several perspectives: from the position of the researcher or organization developing the rankings to that of the university dean or provost using the rankings to improve their university’s position.

Professor Anthony F.J. van Raan from the Centre for Science and Technology Studies, Leiden University, gave a presentation on the methods used by the various university-ranking systems around the world. For instance, where The Times Higher Education Supplement (THES) bases its analysis on 20% bibliometric input, Shanghai uses 80% and Leiden 100%.

National rankings often also take external inputs, such as average rents for student accommodation in the relevant city, into account. Gero Federkeil, from the Centre for Higher Education Development, explained that some rankings are even bringing their successful alumni into the picture in much the same way that the research community looks at Nobel Prize Laureates. Having a high number of graduates go on to become CEOs at major companies can also be an indicator of quality.

What do these rankings mean to a university?

In many of the discussions, the speakers said that rankings should not be used for resource allocation. It would be wonderful if they could be used to predict, navigate and forecast, but this is not yet possible. This is an area where further research and development are needed.

Professor Luke Georghiou, University of Manchester, explained that while universities do try to improve their ranking, it is less clear how the rankings actually influence behavior.

Climbing up the rankings

One country that has steadily increased its output and quality of papers in recent years is Finland (see Figures 1 and 2). University administrators are very interested to learn how this remarkable success has been achieved.

Figure 1: Article output in Finland has been rising steadily for some years.

Figure 2: The average h-index of authors in the country went up by 60% in just five years.

Jarmo Saarti, Library Director at Kuopio University, Finland, says his university has improved its ranking by focusing on strategic research and supporting this with funding. “Kuopio University has made publishing papers in international and high-quality journals a clear priority, and we have been using bibliometric tools to find out where to publish.”

Indeed, analysis of recent articles from the university shows that well-cited papers have been published in journals such as Annals of Internal Medicine, Cell, Nature, Nature Genetics, The Lancet and Proceedings of the National Academy of Sciences of the United States of America.

“The management at Kuopio University has used ranking lists as tools in evaluation and we in the library have been very active in acquiring the best possible e-journal collections and promoting the use of these to our researchers,” explains Saarti.

He believes that this focus on high-quality publications, coupled with the international collaboration adopted throughout the university, particularly in the natural and (bio)health sciences, has been key to its success. Figure 3 supports this view, showing that citation levels for the university have been growing steadily.

Figure 3: Kuopio University is succeeding in its goal to increase citations.

Looking at the rate of citations per subject further supports this approach. Kuopio University’s extra focus on fields such as biological sciences and medicine has paid off, as these were among the university’s top-cited subjects in 2006 and 2007 (see Figure 4).

Figure 4: Kuopio University’s focus on sciences has pushed its citations in these areas to new highs. Data is field-weighted to eliminate differences in underlying citation activity between disciplines.

Tried and tested

The combination of the university’s strategy, research focus, collaboration with library services and use of metrics to track progress provides a very sensible approach to institutional management, and one that is likely to reap benefits. Indeed, many of the efforts described by Saarti are recognized as key strategies for universities to push forward their research productivity and quality.

Useful links:

International Symposium on University Rankings

Measuring up: how does the h-index correlate with peer assessments?

There are two broad approaches to assessing research performance: peer review and the numerous indices based on bibliometric data and analysis. But do they provide comparable results, and how should they be used? We ask Lutz Bornmann and Hans-Dieter Daniel how the h-index performs against peer review.

Since it was first proposed in 2005, Hirsch’s h-index (1) has made a considerable impact on both bibliometricians and the wider scientific community by offering an additional yardstick for assessing individual researchers’ scholarly output and influence. Hirsch’s original paper has been cited more than 280 times in journals, conference proceedings and book series in 14 languages, in fields ranging from medicine and mathematics to engineering and economics (data from Scopus).

The h-index of an individual researcher is defined as the largest number h of their articles that have each received at least h citations since publication. It is easily derived from any comprehensive list of an author’s papers by ranking them in descending order of citations received and then finding the highest rank position at which the number of citations is still at least equal to the rank itself. Since it combines a measure of productivity (the upper limit of the h-index for a given author is the total number of papers published) with a proxy for quality (citations received), it has become an attractive all-in-one metric for comparing researchers.
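
As a concrete illustration of that derivation, here is a minimal sketch in Python, using invented citation counts purely for illustration:

```python
def h_index(citations):
    """Compute the h-index from a list of per-paper citation counts."""
    # Rank papers in descending order of citations received.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        # The h-index is the highest rank at which the citation count
        # is still at least equal to the rank itself.
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented example: five papers with made-up citation counts.
print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
```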

The h-index, and the numerous variants that have proliferated since 2005, can only be used to compare researchers within the same research field; this is true of all metrics that do not account for the publication and citation practices of the various research fields.

Is the h-index a match for peer assessment?

Lutz Bornmann

An important and interesting question when evaluating individuals is how well the results of bibliometric assessment compare with peer assessment.

For many years, Lutz Bornmann and Hans-Dieter Daniel, at the Swiss Federal Institute of Technology in Zurich and the University of Zurich respectively, have been investigating the review processes used by funding institutions.

Hans-Dieter Daniel

Explaining their findings, Bornmann says: “In two investigations (3, 4), we have shown that for individual scientists the h-index correlates well with the number of publications and the number of citations that these publications have attracted. This is hardly surprising given that the h-index was proposed to do exactly that.”

In three studies (2, 3, 4), they also examined the relationship between the h-index and peer judgments of research performance. “In these studies, we have shown that the average h-index values of accepted applicants for biomedicine research grants are statistically significantly higher than for rejected applicants.”

Impact versus quantity

Best-practice: getting the most out of the h-index and variants

Use several indicators to measure research performance: the publication set of a scientist, journal, research group or scientific facility should always be described using a multitude of indicators, such as the numbers of publications with zero citations, highly-cited papers and papers for which the scientist is first or last author. Non-publication indicators, such as awards, grant funding and speaking engagements could also be used.

To measure the quality of scientific output using h-index variants, it is sufficient to use just two variants: one that measures productivity and one that measures impact (e.g. the h-index and a-index) (5).

If the h-index is used to evaluate research performance, the fact that it is dependent upon the length of an academic career and the field of study in which the papers are published and cited should always be taken into account. The index should only be used to compare researchers of a similar age and within the same field of study.

However, the h-index has certain disadvantages, including a bias towards older researchers and a failure to place emphasis on highly cited papers. This has led to the development of numerous variants of the h-index. The m-quotient, for example, is computed by dividing the h-index by the number of years that the scientist has been active since the first published paper. Unlike the h-index, the m-quotient avoids a bias towards more senior scientists with longer careers and more publications.

Another variant, the a-index, indicates the average number of citations of the publications in the Hirsch core (the h most-cited papers, each with at least h citations). Whereas the h-index ignores how many citations the papers in the Hirsch core receive beyond the h threshold, the a-index is meant to give more weight to highly cited papers.
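
A minimal sketch of these two variants in Python; the citation counts and career length below are invented purely for illustration:

```python
def h_index(citations):
    """h-index: largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def m_quotient(citations, years_since_first_paper):
    """m-quotient: the h-index divided by the length of the publishing career."""
    return h_index(citations) / years_since_first_paper

def a_index(citations):
    """a-index: average citations of the papers in the Hirsch core (the top h papers)."""
    ranked = sorted(citations, reverse=True)
    h = h_index(citations)
    return sum(ranked[:h]) / h if h else 0.0

cites = [10, 8, 5, 4, 3]                               # invented citation counts
print(m_quotient(cites, years_since_first_paper=10))   # 4 / 10 = 0.4
print(a_index(cites))                                  # (10 + 8 + 5 + 4) / 4 = 6.75
```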

Bornmann says: “The results of our study (5) show that the h-index and its variants are, in effect, two types of indices: one type describes the most productive core of a scientist’s output and the number of papers in that core; the other type depicts the impact of those papers in the core.”

Using indices wisely

Bornmann and Daniel believe that while their studies (2, 3, 4) provide an initial confirmation of the h-index’s validity, more time and research is required before it can be used in practice to assess scientific work.

“As a basic principle, it is always prudent to use several indicators to measure research performance,” says Bornmann. “The publication set of a scientist, journal, research group or scientific facility should always be described using a multitude of indicators, such as the numbers of publications with zero citations, highly-cited papers and papers for which the scientist is first or last author.”

Bibliometric indicators can and should be used to support peer review, especially where efficiencies are sought. Current research clearly supports the hypothesis that such indicators can approximate the results of peer review, and many research institutes and research councils are already using indices to support their assessments. Informed peer review is currently the state of the art in research evaluation.


Scientometrics from past to present: part two

The first part of this article covered the early interests of scholars in law and psychology at the beginning of the 19th century. Since that time, scientometrics has matured and developed into a respected and recognized field in its own right. In the 1980s, new technology was applied to bibliometric research, including citation mapping techniques […]

The first part of this article covered the early interests of scholars in law and psychology at the beginning of the 19th century. Since that time, scientometrics has matured and developed into a respected and recognized field in its own right.

In the 1980s, new technology was applied to bibliometric research, including citation mapping techniques from CWTS at Leiden University, and specialist research solutions from the Institute for Scientific Information, led by Eugene Garfield and Henry Small. The first major award in the field, the Derek John de Solla Price Award of the journal Scientometrics, went to Eugene Garfield in 1984. Several of the key contributors to Scientometrics received the award during the 1980s: Michael Moravcsik, Tibor Braun, Vasily Nalimov, Henry Small, Francis Narin, Bertram Brookes and Jan Vlachý. The 1990s saw the birth of the first society for the scientometric community: the International Society for Scientometrics and Informetrics (ISSI). The de Solla Price Award is now presented every two years at the ISSI conference.

The Web and web-based tools

The Impact Factor continued to grow in significance within the scientific world. Many researchers started to use the metric in grant, funding and tenure applications. In the late 1990s, Thomson Scientific launched a web-based version of its citation indices, allowing users to search across citation databases on the Internet. Indeed, the Internet has become a vital tool for investigation and has given rise to several new citation measures that were previously impossible. These include article download counts and Google’s PageRank, a numerical value that represents the importance of a page on the Web. New areas such as webometrics have also developed to look at the quality of Web pages and the links between them. Web usage and weblog analysis are newer techniques that allow researchers to understand how the Web itself is used.

In 2004, Scopus was released as a new tool to search and navigate the literature and link between references and citations. This abstract and citation database of peer-reviewed literature, patents and Web sources has also introduced additional tools that increase the speed and accuracy of research evaluation. One of these is the Author Identifier, which automatically matches and de-duplicates author names with a 99% accuracy rate. Attention is increasingly turning from rating the performance of journals to also rating individual authors. The h-index, a simple metric developed in 2005 by Professor Jorge Hirsch and adopted by Scopus and Web of Science, is one way to do this, while the Scopus Citation Tracker allows users to track who is being cited, how often and by whom. This can also help identify research trends. Other key indicators that have been developed include the Eigenfactor, the Y factor and the g-index.

Wider relevance of scientometrics

A new journal for the scientometrics field was launched in 2006. The Journal of Informetrics, edited by Professor Leo Egghe, is an additional forum to disseminate scientometric research findings, alongside established journals such as Scientometrics, Journal of the American Society for Information Science & Technology and the Journal of Information Science.

While the development of a new journal in the field illustrates the growth and proliferation of research within the scientometrics community, it is also important to recognize the science’s wider relevance and application. Scientometrics has been used in creating thesauri and exploring the grammatical and syntactical structures of texts. Governments and policymakers are also increasingly adopting scientometrics, for example in the UK Research Assessment Exercise and the Australian Research Quality Framework, as a means of allocating research funds or to ensure the decisions they make are based on unbiased, credible research.


Breaking boundaries: patterns in interdisciplinary citation

Collaboration has always been an essential aspect of scientific research. Today, technology is making it easier for researchers in one field to access and identify useful research in other subjects. We take a look at citations crossing subject boundaries to see whether collaboration is increasing and in which areas.

Science today is separated into many areas that relate to each other in different ways. But are there any areas of research that cross the boundaries of science? Which are the most interdisciplinary areas of research?

This article investigates the major subject areas identified in Scopus that are cited by other subject areas, and attempts to identify those that show the most interdisciplinary citation patterns. We took articles published in each subject area in the periods 1996–2000 and 2003–2007 and measured citations to these articles from other subject areas within the same two periods. We can then compare the percentage of citations received from each other subject across the two time periods to determine which areas showed the biggest shift in citation patterns.
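
In other words, the analysis converts raw citation counts into percentage shares per citing subject and compares those shares across the two windows. A minimal sketch of that calculation in Python, using entirely invented counts for a single cited field:

```python
# Invented citation counts to one cited field, broken down by citing subject,
# for the two windows compared in this article.
citations_1996_2000 = {"Engineering": 1200, "Mathematics": 800, "Physics": 500, "Other": 300}
citations_2003_2007 = {"Engineering": 2100, "Mathematics": 900, "Physics": 700, "Other": 800}

def shares(counts):
    """Convert raw citation counts into percentage shares per citing subject."""
    total = sum(counts.values())
    return {subject: 100.0 * n / total for subject, n in counts.items()}

early, late = shares(citations_1996_2000), shares(citations_2003_2007)

# The shift reported for each citing subject is the change in its share of all
# citations received, in percentage points, between the two periods.
for subject in citations_1996_2000:
    print(f"{subject}: {late[subject] - early[subject]:+.1f} percentage points")
```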

The results were mixed. For instance, medicine showed very little variation in citation patterns between the two periods, with the majority of citations coming from other medical fields and those in associated life sciences (see Figure 1).

A similar pattern was seen in other medical and life science areas, including biochemistry, neuroscience, nursing, and pharmacology and toxicology. Areas such as arts and humanities, social sciences or psychology also indicated no significant shift in the citation patterns of these fields, although it is worth mentioning that some of these subjects are already diverse by nature.

Figure 1: Differences in citations to medicine from other subject fields.

Branching out…

In contrast, fields such as computer science, engineering, energy and mathematics all showed a great deal of change in the subjects that cite them. Figure 2 illustrates the pattern for mathematics and Figure 3 for computer science.

Figure 2: Differences in citations to mathematics from other subject fields.

Figure 3: Differences in citations to computer science from other subject fields.

These results indicate a shift in citation patterns, with a broader range of subject areas citing the literature of these fields, and point to changes in the nature of these fields’ citation relationships. Indeed, within computer science, shifts of up to 6% are seen in the citation activity it receives from other areas, with the main shifts evident in citations from engineering and mathematics.

To investigate these shifts more closely, we compared the top ten subjects citing the two fields whose citing articles appear to have the most interdisciplinary origin: energy and engineering. Figures 4–7 illustrate the percentage breakdown of citations to these areas.

Both energy and engineering have a diverse citation spread and have shown an increase in the “other” areas that have cited them between the two time periods. Energy has shown a 2% shift in citations from “other” fields, while engineering has shown a 6% shift.

Figure 4: Comparison of top ten subjects citing the field of energy, 1996–2000.

Figure 5: Comparison of top ten subjects citing the field of energy, 2003–2007.

Figure 6: Comparison of top ten subjects citing the field of engineering, 1996–2000.

Figure 7: Comparison of top ten subjects citing the field of engineering, 2003–2007.

…or converging?

Moshe Kam, Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and Professor at Drexel University in the US, is not surprised by these findings. He says that many research areas that were relatively “isolated” in the past have been developing a stronger interface with disciplines within engineering and computing.

Kam explains: “Rather than interpreting the data as showing increased cross-disciplinary activity, the data may actually indicate that some disciplines and sub-disciplines are converging, or even merging. One example is the increase in the volume of work at the interface of life sciences, computer science, computer engineering and electrical engineering. It is clear from reading papers at this intersection of subjects that many scientists and engineers who were educated in a traditional ‘standalone’ discipline have educated themselves quite well in other areas. At times it is hard to distinguish between the pattern-recognition specialist, the biological-computation expert and the software engineer. There is much less compartmentalization and much more sharing – not only in the results of tasks divided between researchers, but in actually doing the detailed research work together.”

It thus appears that for researchers in certain subjects, the results of research in other, complementary fields are not only of added value; they are becoming essential. If Kam is correct, the trend is towards convergence rather than cross-disciplinarity for fields that share common research questions and approaches. It remains to be seen whether this will lead to new areas of study at the intersections of complementary fields or to greater collaboration between experts within those fields.

Useful links:

IEEE

…a Top-Cited marketing paper?

Stephen Vargo and Robert Lusch’s 2004 paper “Evolving to a new dominant logic for marketing” is the top-cited paper in its category. We ask Vargo and one of its many citers why they think this article is so successful.

In the subject area Economics, Econometrics and Finance, the paper “Evolving to a new dominant logic for marketing”, published by Stephen Vargo and Robert Lusch in the Journal of Marketing, was the top-cited article between 2004 and 2008; it has been cited 282 times.

Relevance and timing count

Professor Vargo from the Shidler College of Business at the University of Hawaii, US, explains: “While we did not fully anticipate the impact the article would have, I think there are several reasons for it. First, it was intended to capture and extend a general evolution in thought about economic exchange, both within and outside of marketing. The most common comment we receive is something like ‘you said what I have been trying to say’ in part or in whole. Thus, although it was published in a marketing journal, it seems to have resonated with a much larger audience.

“We have also said from the outset that what has now become known as service-dominant (S-D) logic is a work in process and have tried to make its development inclusive. As we have interacted with other scholars, we have modified our original views – and the original foundational premises – and expanded the scope of S-D logic. This approach seems to have been well received.”

Professor Vargo also acknowledges an element of “fortuitous timing” in the article’s success: “The role of service in the economy is becoming increasingly recognized and firms such as IBM and GE – and many others – are shifting from thinking about themselves as manufacturing firms to primarily service firms. Similar shifts are taking place in academic and governmental thinking. S-D logic provides a service-based, conceptual foundation for these changes.”

Busting paradigms

Professor Eric Arnould from the Department of Management and Marketing at the University of Wyoming, US, has cited this paper. He explains: “This article is a paradigm buster; it is as simple as that. The paper took under-systematized currents of thought that have been circulating in the marketing discipline for a number of years and codified them. The paper proposes that marketing is about the exchange of services or resources, not things; and that value is always co-created in the exchange of resources both material (operand) and immaterial (operant) between parties. If widely adopted, their detailed proposals will change marketing theory and practice forever. The paper is widely cited because of the ongoing interest in their recommendations both in practice, such as for IBM, and in the academic world. We cited the paper both for its content and its authority as a paradigm buster.”

References:

(1) Vargo, S.L. and Lusch, R.F. (2004) “Evolving to a new dominant logic for marketing”, Journal of Marketing, Vol. 68, issue 1, pp. 1–17.

From h to g: the evolution of citation indices

The ‘g-index’ was developed by Professor Leo Egghe in 2006 in response to the ‘h-index’. Both indices measure the output (quantity) and impact (quality or visibility) of an individual author. Egghe explains why he thinks the g-index is an improvement.

The h-index has become a familiar term among bibliometricians since its inception in 2005, and is being increasingly adopted by non-bibliometricians. The letter h is often thought to stand for the h in Hirsch, the name of the physicist who developed it, although it is actually short for ‘highly cited’. The h-index is therefore the number of papers that receive h or more citations. For example: Professor X has an h-index of 39 if 39 of his 185 papers have at least 39 citations each and the other 146 (185-39) papers have not more than 39 citations each.

Previous indices have tended to focus only on the impact of individual journals, using the average number of times published papers are cited up to two years after publication. This means that one paper in a journal might have been highly cited and another hardly at all, but the authors of both are judged equally on the Impact Factor of their journal. While the h-index can measure individual authors, thereby overcoming this shortcoming of the journal Impact Factor, it has limitations of its own. “It is insensitive to the tail of infrequently cited papers, which is a good property,” says Professor Leo Egghe, Chief Librarian at Hasselt University, Belgium, and Editor-in-Chief of the Journal of Informetrics, “but it’s not sufficiently sensitive to the level of highly cited papers. Once an article belongs to the h top class, the index does not take into account whether that article continues to be cited and, if so, whether it receives 10, 100 or 1000 more citations.”

What’s in a name?

The g-index is so called for two reasons: Egghe rejected the name ‘e-index’ on the grounds that it has a different connotation in mathematics. He therefore looked at the two g’s in his surname instead. G also falls immediately before h in the alphabet, reinforcing its link to the h-index.

Lotka’s Law

This is where the g-index has evolved from its predecessor: it has all the advantages and simplicity of the h-index, but also takes into account the performance of the top articles. It was in direct response to his criticisms of the h-index that Egghe developed the g-index. Egghe is no newcomer to bibliometrics; his main area of expertise is Lotka’s Law, whose premise is that as the number of articles published increases, the number of authors producing that many publications decreases. This principle underlies both the h- and the g-indices, the formulae for which Egghe was the first to prove. The difference between them is that while the top h papers can have many more citations than the h-index would suggest, the g-index is the highest number g of papers that together received g² or more citations. This means that the g-index score will be at least as high as the h-index, and it makes the differences between two authors’ respective impacts more apparent. “The only disadvantage I’ve found so far with the g-index is that you need a longer table of numbers to reach your conclusions!” says Egghe.
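
A minimal sketch of the two indices side by side, in Python with invented citation counts, shows how the extra citations received by the most-cited papers lift g above h:

```python
def h_index(citations):
    """h-index: largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def g_index(citations):
    """g-index: largest g such that the top g papers together have at least g*g citations."""
    ranked = sorted(citations, reverse=True)
    running_total, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        running_total += c
        if running_total >= rank * rank:
            g = rank
    return g

cites = [50, 30, 10, 5, 4, 3, 1]       # invented citation counts
print(h_index(cites), g_index(cites))  # 4 7 -- g credits the heavily cited top papers
```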

Access to funds

For many scientists, there is a direct correlation between where they are ranked in their field and the amount of funding they can attract. “Everything is measured these days, which explains the growth of bibliometrics as a whole,” says Egghe. “The g-index enables easy analysis of the highest cited papers; but the reality is that as time passes, it’s not going to be possible to measure an author’s performance using just one tool. A range of indices is needed that together will produce a highly accurate evaluation of an author’s impact.”

Inspired by bibliometrics

For many editors, the bibliometrics underlying journal rankings are a fuzzy area. But for those already using similar techniques in their research, bibliometrics is another tool to help increase journal quality. Brian Fath tells us how the work of Derek de Solla Price has opened his eyes to the world of citation analysis.

Brian Fath

Brian Fath is an Associate Professor in the Department of Biological Sciences at Towson University, USA, and Editor-in-Chief of the journal Ecological Modelling. Like all journal editors, he wants his journal to continue improving. However, unlike many editors, he has a passion for network analysis, giving him a unique insight into the way ranking metrics are calculated and an enhanced understanding of how scholarly literature is cited within communities.

Fath uses ecological network analysis to identify relationships between non-connected elements in food webs. He says: “Network analysis is a very powerful tool to identify hidden relationships. We can now integrate the networks of different systems and identify indirect pathways, making it possible for us to see the unexpected consequences of our actions. For example, CFCs looked good in the lab, but it took 40 years to understand their effect on the planet. Through network analysis, we can potentially gauge those effects before we cause them.”

In October 2007, he was invited to give a presentation on “Assessing Journal Quality Using Bibliometrics” at the Elsevier Editors’ Conference in Miami. While carrying out background research, he came across Derek de Solla Price. “His 1965 paper (1) was a revelation, and I literally just stumbled upon it,” he recalls.

Eye opener

“I thought this paper was fascinating. For instance, de Solla Price identifies research fronts, marked by review papers. This is important, because he also shows that the frequency of review papers is not linked to time, but to the number of papers published in the field. Hot topics, where a lot of papers are published, prompt review papers more frequently than slower-paced areas. This changed my mind on the frequency of publishing review papers,” says Fath.

He was also interested in de Solla Price’s discussion of non-cited papers. Around 35% of papers in a given year are never cited. Editors obviously want to publish the best research, but how can they recognize the outliers? “Our journal is quite avant-garde. We publish some novel papers, and naturally some don’t get cited. But on the other hand, if we could find a way to reduce the number of non-cited papers, our Impact Factor would go up,” he remarks.

Improving quality

Fath believes that bibliometrics can help editors improve the quality of their journals. “We can improve the field by knowing when to call for a review paper and by promoting timely special issues, and these actions are reflected in our bibliometrics,” he says. For instance, he recently discovered that special issues of his journal were actually less frequently cited than regular issues. “We’ve decided to try doing themed issues next year to see if that serves the community better than traditional conference-based special issues,” he says.
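One simple way to run the comparison Fath mentions is to group a journal's articles by issue type and compare average citations per article. The sketch below does this on invented records; it is not data from Ecological Modelling.

```python
from collections import defaultdict

# Each record: the kind of issue an article appeared in and its citation count (invented).
articles = [
    {"issue_type": "regular", "citations": 14},
    {"issue_type": "regular", "citations": 9},
    {"issue_type": "regular", "citations": 6},
    {"issue_type": "special", "citations": 7},
    {"issue_type": "special", "citations": 4},
    {"issue_type": "special", "citations": 2},
]

sums = defaultdict(int)
counts = defaultdict(int)
for article in articles:
    sums[article["issue_type"]] += article["citations"]
    counts[article["issue_type"]] += 1

for issue_type in sums:
    mean = sums[issue_type] / counts[issue_type]
    print(f"{issue_type} issues: {mean:.1f} citations per article")
```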

He is also paying more attention to keywords in papers, especially in abstracts. He believes that “people are really starting to use search engines to find papers, and it seems logical to use keywords. Abstracts are also very important: well-written, clear English is very attractive.”

He does have one concern, however. “We are going through a period of rapid journal growth, which I don’t think is sustainable. It’s possible to get almost anything published somewhere these days – in fact, it can get quite hard to follow the literature. And all these papers are citing other papers, which means everyone’s Impact Factor is increasing. But I wonder if it’s sustainable; can all these new journals also expect their Impact Factors to rise?”

Yet overall, despite some resistance, Fath is convinced that citation analysis is very valuable: “Communities should be citing each other – this is what marks them out as a community; and if you’re not being cited by your own community, you should want to know this and do something about it.”

References:

(1) de Solla Price, D.J. (1965) “Networks of scientific papers”, Science, Vol. 149, pp. 510–15.

Obama’s “Dream Team”

Obama’s new senior science advisory team brings together some of the most successful and influential scientists in the US, ushering in a new era where science is at the centre of policy. We look at the track records of five of the appointees.

Read more >


New US President Barack Obama’s choices for senior science advisory posts in his new government include some of the most prolific and high-impact scientists working in the US today, earning the nickname of Obama’s “Dream Team”.

In his weekly radio address in December 2008, Obama vowed to “put science at the top of our agenda [because] science holds the key to our survival as a planet and our security and prosperity as a nation”.

Environment on the agenda

As Assistant to the President for Science and Technology and Director of the White House Office of Science and Technology Policy, John P. Holdren is Obama’s top science advisor. Based at the Kennedy School of Government at Harvard University, Holdren is a physicist whose publications on sustainable energy technology and energy policy have featured frequently in Science; his seminal 1971 article (with population biologist Paul Ehrlich) entitled “Impact of population growth” (1) continues to be cited strongly (with more than 30 citations during 2007).

Holdren was recently president of the American Association for the Advancement of Science (AAAS) and then chairman of its Board of Directors. In a statement on the AAAS website, the Association’s Chief Executive Officer Alan Leshner noted: “John Holdren’s expertise spans so many issues of great concern at this point in history – climate change, energy and energy technology, nuclear proliferation.”

Another past president of the AAAS, Jane Lubchenco, assumes the role of National Oceanic and Atmospheric Administration (NOAA) Administrator. The first woman to head the agency, Lubchenco has an impressive list of publications in marine ecology, and co-authored a 1997 article warning of the impacts of human activity on the global ecosystem and the immediate need for action that has been cited more than 1,400 times to date (2). Like Holdren, Lubchenco has a Harvard connection, having taken her Ph.D. there in 1975 and holding a teaching post before relocating to Oregon State University in 1978.

Stocking up on Nobel laureates

President Obama’s Secretary of Energy, Steven Chu, Professor of Physics and Molecular & Cellular Biology and Director of the Lawrence Berkeley National Laboratory at the University of California, Berkeley, shared the 1997 Nobel Prize in Physics for his research in cooling and trapping of atoms with laser light. Chu is the first Nobel Laureate to be appointed to the Cabinet; his research interests in single-molecule biology are reflected in more than 140 journal publications since 1996, which have attracted more than 7,000 citations to date.

Rounding out President Obama’s “Dream Team” are two biologists, Eric Lander and Harold Varmus, co-chairs of the President’s Council of Advisers on Science and Technology (PCAST) with Holdren. PCAST is a panel of private sector and academic representatives established in 2001 to advise on issues related to technology, research priorities and science education.

Lander, founding Director of the Broad Institute of Massachusetts Institute of Technology and Harvard, was instrumental in the Human Genome Project; his more than 350 journal publications have collectively been cited more than 75,000 times since 1996.

Varmus, former director of the National Institutes of Health and President and CEO of Memorial Sloan-Kettering Cancer Center since 2000, is the second Nobel Prize winner (Physiology or Medicine, 1989) appointed to Obama’s team. His prize-winning research on the cellular origin of retroviral oncogenes published in Nature in 1976 (3) continues to be cited (21 times in 2007).

Towards a well-informed future

President Obama has collected some of the finest scientific talent in the US to advise him, with a particular focus on environmental issues. In fact, the team has also been dubbed the “Green Team”. These five individuals were together cited more than 12,000 times in 2007, and their experience spans the breadth of the physical and life sciences.

Incidentally, Obama himself is a published author, with a dozen journal publications: his 2006 article (4) with erstwhile presidential rival and now Secretary of State Hillary Clinton on healthcare reform has been cited 28 times to date.

President Obama outlined the key role that science policy will play in the US’s economic recovery in his inauguration speech in January: “The state of the economy calls for action, bold and swift, and we will act […] We will restore science to its rightful place”.

References:

(1) Ehrlich P.R. and Holdren J.P. (1971) “Impact of population growth”, Science, Vol. 171, pp. 1212–17.
(2) Vitousek P.M., Mooney H.A., Lubchenco J. and Melillo J.M. (1997) “Human domination of Earth's ecosystems”, Science, Vol. 277, pp. 494–99.
(3) Stehelin D., Varmus H.E., Bishop J.M. and Vogt P.K. (1976) “DNA related to the transforming gene(s) of avian sarcoma viruses is present in normal avian DNA”, Nature, Vol. 260, pp. 170–73.
(4) Clinton H.R. and Obama B. (2006) “Making patient safety the centerpiece of medical liability reform”, New England Journal of Medicine, Vol. 354, pp. 2205–08.



Pleased to cite you: the social side of citations

The advent of robust data sources for citation analysis and computational tools for social network analysis in recent years has reawakened an old question in the sociometrics of science: how socially connected are citers to those that they cite? Research Trends talks to those in the know.

Read more >


Charles Oppenheim

While social connectedness correlates with citation counts, science is still more about what you know than who you know. A recent investigation of the social and citation networks of three individual researchers concluded that while a positive correlation exists between social closeness and citation counts, these individuals nevertheless cited widely beyond their immediate social circle (1).

Professor Charles Oppenheim comments on the motivation for this study and its main findings: “Our research started from the hypothesis that people were more likely to cite those close to them, forming so-called ‘citation clubs’ of colleagues in the same department or research unit. There is an allegation that such citation clubs distort citation counts. We took as our primary target the Centre for Information Behaviour and the Evaluation of Research (CIBER) group of researchers based at University College London, well known for their work in deep log analysis.

“The research was quite novel because it used social network analysis (SNA) techniques and UCINET SNA software to analyze the results from questionnaires we sent to CIBER group members and people they had cited. We found no evidence of a citation club – CIBER researchers aren't necessarily socially close to the researchers they cite. However, it must be stressed that this was a small-scale experiment and cannot be generalized to all subject areas, or indeed to anyone apart from the CIBER group.”
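The calculation at the heart of such a study can be illustrated with a few invented numbers: pair each cited researcher's social closeness to the citing group with the number of citations they received, and measure the correlation. This is only a schematic stand-in for the questionnaire-based UCINET analysis described above.

```python
from statistics import correlation  # Python 3.10+

# Invented data: social closeness of the citing group to each cited researcher
# (higher = closer) and the number of times that researcher was cited.
closeness = [5, 4, 3, 3, 2, 1, 1]
citations = [9, 7, 6, 2, 3, 1, 2]

r = correlation(closeness, citations)
print(f"Pearson correlation: {r:.2f}")
# A positive r is consistent with the reported correlation, but, as Oppenheim stresses,
# it does not show that researchers cite only their immediate social circle.
```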

A circle of friends and colleagues

Blaise Cronin

Blaise Cronin, Dean and Rudy Professor of Information Science at Indiana University, US, and newly appointed Editor-in-Chief of the Journal of the American Society for Information Science and Technology, agrees that both social and intellectual connections affect citation. “We certainly don’t cite authors just because they are colleagues or friends, but all things being equal, most of us would probably give the nod to those whom we know personally.

“Our colleagues, co-workers, trusted assessors and friends are often to be found nearby – in the lab, along the faculty corridor. Even in an age of hyper-networking, place and physical proximity play a part in determining professional ties and loyalties. And those bonds, in turn, can shape our citation practices.

“Co-citation maps do not merely depict intellectual connections between authors; inscribed in them, in invisible ink as it were, are webs of social ties. A number of bio-bibliometric studies (2) have attempted to combine sociometric and scientometric data to reveal these ties. As the digital infrastructure evolves, we may soon see the emergence of a new sub-field, bio-bibliometrics, and the first generation of socio-cognitive maps of science.”
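The raw material of such a map can be assembled with a simple count: two authors are co-cited whenever the same reference list contains them both. The sketch below uses invented reference lists purely to show the mechanics; the author names are labels only.

```python
from collections import Counter
from itertools import combinations

# Each set holds the authors cited by one (invented) paper.
reference_lists = [
    {"Garfield", "Egghe", "Cronin"},
    {"Garfield", "Cronin"},
    {"Egghe", "White"},
    {"Garfield", "Egghe"},
]

co_citations = Counter()
for cited_authors in reference_lists:
    for pair in combinations(sorted(cited_authors), 2):
        co_citations[pair] += 1

# The most frequently co-cited pairs would sit closest together on a co-citation map.
for pair, count in co_citations.most_common(3):
    print(pair, count)
```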

Paying an intellectual debt

Howard D. White

Howard D. White, Professor Emeritus at the College of Information Science & Technology at Philadelphia’s Drexel University, US, has been interested in the social dimension of citation for some time. His work on the social and citation structure of an interdisciplinary group established to study human development concluded that citations are driven more by intellectual than social ties (3).

White explains: “There is no doubt that citation networks and social networks often overlap. Given the specialization of research fields, how could this not be the case? But no scientist or scholar would fail to cite a useful work simply because it was by a contemporary they had not met or a dead predecessor they could not have met. Citations are made to buttress intellectual points, and perceived relevance toward that end is far more important than social ties in determining who and what gets cited.”

As the nascent field of bio-bibliometrics continues to grow, we will come to a better understanding of the motivations underlying the practice of citation. Yet it is already clear that, in the main, citations mark the acknowledgement of intellectual debt to those who have gone before, rather than mere whimsy: it really is all about what you know, not who you know.

References:

(1) Johnson, B. and Oppenheim, C. (2007) “How socially connected are citers to those that they cite?”, Journal of Documentation, Vol. 63, No. 5, pp. 609–37.
(2) Cronin, B. (2005) “A hundred million acts of whimsy?”, Current Science, Vol. 89, No. 9, pp. 1505–09.
(3) White, H. D. (2004) “Does citation reflect social structure? Longitudinal evidence from the ‘Globenet’ interdisciplinary research group”, Journal of the American Society for Information Science and Technology, Vol. 55, No. 2, pp. 111–26.

The politics of bibliometrics

As bibliometric indicators gain ground in the measurement of research performance and quality, and researchers and editors understand the importance of citations in these indicators, the potential for citation manipulation is bringing a political dimension to the world of bibliometrics. Research Trends explores the effect of excessive self-citation and spurious co-authorship on citation patterns.

Read more >


Academic performance indicators are relied upon for their objective insight into prestige. However, the data that they draw upon can be affected by practices such as self-citation and spurious co-authorship.

Mayur Amin and Michael Mabe have identified several issues with the Impact Factor (IF), probably the most widely used (and abused) of all indicators (1), and more recently its derivation and transparency have been criticized in several articles. Such criticisms apply to many bibliometric indicators, but most have focused on the IF because of its prominence.

Self-citation poses a political and ethical dilemma for bibliometrics: although it plays a vital role in research for both journals and authors, it can also be seen as a way to artificially increase the ranking of a journal or an individual. While most self-citation is justified and necessary, the potential for abuse has become a political issue for bibliometrics.

Dropping your own name

In 1974, Eugene Garfield wrote of self-citation rates: “It says something about your field – its newness, size, isolation; it tells us about the universe in which a journal operates.” (2) While this continues to be true, some researchers have indicated that there is a link between self-citation and the overall citation levels an author receives. James Fowler and Dag Aksnes claim that “a self-cite may yield more citations to a particular author, without yielding more citations to the paper in question” (3).

When self-citation is overused or blatant it can be detected by bibliometric indicators, generating significant attention. Several key researchers have called for bibliometric indicators to be calculated both with and without self-citations to identify their effects or to understand the reason for the self-citation (4,5,6,7).
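In practice, that recommendation amounts to reporting an indicator twice, once including and once excluding self-citations. The sketch below does this for a handful of invented paper records.

```python
# Invented records: total citations to each paper and how many of those are self-citations.
papers = [
    {"citations": 30, "self_citations": 6},
    {"citations": 12, "self_citations": 5},
    {"citations": 8,  "self_citations": 4},
    {"citations": 5,  "self_citations": 3},
    {"citations": 2,  "self_citations": 2},
]

total = sum(p["citations"] for p in papers)
total_excl = sum(p["citations"] - p["self_citations"] for p in papers)
self_share = 1 - total_excl / total

print(f"Citations including self-citations: {total}")       # 57
print(f"Citations excluding self-citations: {total_excl}")  # 37
print(f"Self-citation share: {self_share:.0%}")             # 35%, which would invite closer inspection
```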

Another political consideration concerns the replication of reference lists between articles. This occurs where a set of references is deemed to be important enough to be included in almost every article in the field. It sometimes happens even if there is only a tenuous link to the article in question, and when the author may not even have read the paper. This adds numerous “extra” citations to the pool each year. In fact, after analyzing the references in five issues of different medical journals, Gerald de Lacey, Christopher Record and James Wade found that errors were proliferating through medical literature into other articles at an alarming rate (8).

Do we need a watchdog?

The potential for abuse suggests that we may need to regulate citations. At present, we rely on authors to self-regulate. But this is a sensitive issue.

What John Maddox has described as “the widespread practice of spurious co-authorship” (9) is another political aspect of research. In some extreme cases, as Murrie Burgan indicates, articles list more than 100 authors (10). How can it be possible for each of those authors to have actively contributed to the article? Moreover, John Ioannidis has shown that the average number of authors per paper is increasing, indicating that the problem is growing (11). And, according to research carried out by Richard Slone into authors listed on papers published in the American Journal of Roentgenology, the proportion of so-called “undeserved authors” rises as the list gets longer: 9% of authors on a three-author paper were undeserved, rising to 30% on papers with more than six authors (12).

Part of the problem is that a researcher’s personal success is intimately intertwined with his or her publication record. And as long as measures such as the h-index fail to distinguish between the first and the 30th author on a paper, undeserved co-authorship will continue. Some believe that the peer-review process should act as the governing body for research, asking journal editors and referees to act as bibliometric police. However, it can be very difficult to spot overactive self-citation, unrelated or incorrect references and erroneous authors while also assessing whether the quality of research warrants publication.

There is also the potential to introduce a regulatory body, but the question remains: who should this be? Publishers or associations could take on the role, but it is far from clear whether an independent organization is needed to regulate the system.

As explained above, some researchers have suggested that metrics should be developed that account for excessive self-citation, or that cleaner data should be used. In the former case, self-citations can be removed and weighted averages introduced, but this can make the metric extremely complex. Meanwhile, publishers are working towards providing increasingly clean data, which makes such corrections easier.

In the end, is it worth all the effort? As long as the community as a whole can bring thoughtful analysis and interpretation, as well as a healthy dose of common sense, to bear on citations, such political considerations should be mitigated. As Winston Churchill once said: “If you have ten thousand regulations, you destroy all respect for the law.”

References:

(1) Amin, M. and Mabe, M. (2000) “Impact Factors: use & abuse”, Perspectives in Publishing, No. 1.
(2) Garfield, E. (1974) “Journal self citation rates – there’s a difference”, Current Contents, No. 52, pp. 5–7.
(3) Fowler, J.H. and Aksnes, D.W. (2007) “Does self-citation pay?”, Scientometrics, Vol. 72, No. 3, pp. 427–37.
(4) Schubert, A., Glanzel, W. and Thijs, B. (2006) “The weight of author self-citations. A fractional approach to self-citation counting”, Scientometrics, Vol. 67, No. 3, pp. 503–14.
(5) Hyland, K. (2003) “Self citation and self reference: credibility and promotion in academic publication”, JASIST, Vol. 54, No. 3, pp. 251–59.
(6) Aksnes, D.W. (2003) “A macro study of self citation”, Scientometrics, Vol. 56, No. 2, pp. 235–46.
(7) Glanzel, W., Thijs, B. and Schlemmer, B. (2004) “A bibliometric approach to the role of author self-citations in scientific communication”, Scientometrics, Vol. 59, No. 1, pp. 63–77.
(8) de Lacey, G., Record, C. and Wade, J. (1985) “How accurate are quotations and references in medical journals?”, British Medical Journal, Vol. 291, September, pp. 884–86.
(9) Maddox, J. (1994) “Making publication more respectable”, Nature, Vol. 369, No. 6479, p. 353.
(10) Burgan, M. (1995) “Who is the author?”, STC Proceedings, pp. 419–20.
(11) Ioannidis, J.P.A. (2008) “Measuring co-authorship and networking adjusted scientific impact”, PLoS ONE, Vol. 3, No. 7, Art. No. e2778.
(12) Slone, R.M. (1996) “Coauthors’ contributions to major papers published in the AJR: frequency of undeserved coauthorship”, American Journal of Roentgenology, Vol. 167, No.3, pp. 571–79.
  • Elsevier has recently launched the International Center for the Study of Research (ICSR) to help create a more transparent approach to research assessment. Its mission is to encourage the examination of research using an array of metrics and a variety of qualitative and quantitative methods.