Issue 12 – July 2009

Articles

Learning from our mistakes

Critics have long held that only positive results (where the outcome fits the hypothesis) are published in journals. However, science has always progressed by learning from its mistakes as well as its successes. Research Trends investigates the impact of negative findings.



Human discovery, scientific and otherwise, has always been moved forward by both the positive and negative outcomes of our experiences. The experimental nature of scientific research, based on the testing of hypotheses, carries a distinct possibility that our experiments will yield negative results. The very essence of science lies in using both positive and negative results as steps along the same continuum.

Medical and scientific theories are developed over time as new research challenges and builds upon received wisdom. For instance, medical research has overturned the assumption that conditions such as scurvy and beri-beri are caused by infection, establishing that they are in fact symptoms of vitamin deficiency due to malnutrition.

However, there is a growing feeling in the research community that publishing negative results, despite their scientific value, can be damaging, and many are choosing not to submit such findings to journals.

Publishing negative results

Much research does result in negative findings, and these are rarely published. However, prior knowledge that a particular hypothesis or experiment leads to a negative result could help other researchers modify their experiments or avoid wasting time reproducing the same outcome. In an article in Nature, Jonathan Knight asked whether scientific progress is being hampered in some areas by this practice (1).

William F. Balistreri, MD, Editor-in-Chief of The Journal of Pediatrics, says: “We agree with the International Committee of Medical Journal Editors (ICMJE). They have made a clear statement regarding the obligation to publish negative studies: ‘Editors should consider seriously for publication any carefully done study of an important question, relevant to their readers, whether the results for the primary or any additional outcome are statistically significant. Failure to submit or publish findings because of lack of statistical significance is an important cause of publication bias.’

The Journal of Pediatrics serves as a practical guide for the continuing education of physicians who diagnose and treat disorders in infants, children and adolescents. We seek original work, which undergoes peer-reviewed scrutiny overseen by the Editorial Board, and have accepted articles that clearly documented a lack of efficacy of therapeutic agents or procedures. We believe that evidence-based medicine must be based on the best evidence.”

In an attempt to encourage researchers to publish negative results, BMC launched the Journal of Negative Results in BioMedicine in 2002. This journal publishes research that covers: “aspects of unexpected, controversial, provocative and/or negative results/conclusions in the context of current tenets, providing scientists and physicians with responsible and balanced information to support informed experimental and clinical decisions.”

Spectacular blunder
Polywater was initially described in 1962 as a new form of water generated from regular water inside glass capillaries. Polywater was believed to have different properties from normal water, including a significantly higher boiling point (three times that of water) and a higher viscosity. This led to considerable research for several years until it was eventually confirmed that Polywater was actually normal water containing impurities so concentrated that they significantly affected the properties of their solvent – i.e. water. Polywater is a rather large negative result, and World Records in Chemistry has described it as a “spectacular blunder” (5).

The polywater effect

The effects of negative results and wide-scale research failures have also caught the attention of the scientometric community. The polywater (see box) research front has been analyzed both bibliometrically and econometrically to assess its impacts on citation activity and economics.

In two papers published in Scientometrics, Eric Ackermann followed the progression of polywater research, demonstrating that seminal papers published in 1962 led to an “information epidemic” that proliferated through the literature and peaked in 1970 with over 100 articles (2, 3). Ackermann found 445 papers on polywater between 1962 and 1974. The research penetrated numerous disciplines, with 85% of papers appearing in five subject fields: nuclear science and technology, physics, multidisciplinary science, electro-chemistry and analytical chemistry.

Ackermann’s findings show how rapidly a new research front can spread and how readily researchers alter their own direction in the light of seminal papers, regardless of whether the research in question ultimately proves sound.

References:
(1) Knight, J. (2003) ‘Null and Void’, Nature, 422 (6932), pp. 554–555
(2) Ackermann, E. (2005) ‘Bibliometrics of a controversial scientific literature: Polywater research, 1962–1974’, Scientometrics, 63 (2), pp. 189–208
(3) Ackermann, E. (2006) ‘Indicators of failed information epidemics in the scientific journal literature: A publication analysis of Polywater and Cold Nuclear Fusion’, Scientometrics, 66 (3), pp. 451–466
(4) Diamond, A.M. (2009) ‘The career consequences of a mistaken research project – the case of polywater’, American Journal of Economics & Sociology, 68 (2), pp. 387–411
(5) Quadbeck-Seeger, H-J. (Ed.); Faust, R.; Knaus, G.; Siemeling, U. (1999) World Records in Chemistry. New York: Wiley-VCH


Analyzing the multidisciplinary landscape

Many of our most urgent scientific challenges require multidisciplinary approaches; however, research performance is typically measured on a unidisciplinary basis. Research Trends learns about a new study seeking to measure output in alternative-energy research in a novel way.



Many of today’s most pressing scientific challenges, such as identifying alternative energy sources, require a multidisciplinary approach. However, traditional methods for assessing research output cannot adequately measure multidisciplinary research output.

Current methods of organizing, and thus analyzing, science are based on journal categories. Yet, since journals are based on single disciplines, this classification system cannot capture the changing landscape. This means it is impossible for research executives and government policymakers to gain insight into which institutions, countries and regions are leading in such fields as alternative energy.

Leaders in alternative-energy research
Alternative-energy research is, by its very nature, multidisciplinary, and any attempt to identify leaders in this field must take this into account. In order to rank leaders in alternative-energy research, Boyack and Klavans first identified alternative-energy-related paradigms using search terms from relevant websites. They discovered that 1,100 paradigms contained alternative-energy research, and divided these into three equally distributed topic groups:

1. Solar/PV
2. Fuel cells
3. Environmentally related (efficiency + renewable + biomass + biodiesel + biofuel + nuclear + wind + cogeneration + clean coal + carbon + bioenergy + security + hydroelectric + geothermal)

They then counted the alternative-energy papers for over 3,000 major academic and government players within the global research community, ranked them according to output and calculated distinctive competencies for each of the top-50 institutions on the list.

To rank the research leaders in this field (see Figure 1), they determined which of the 1,100 paradigms from the three topic groups belonged to a distinctive competency, and counted the number of alternative-energy papers falling within distinctive competencies for each university or laboratory.

This information was aggregated to identify country (see Figures 2, 3 and 4) and regional leaders in alternative-energy research.
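The counting logic described above can be sketched in a few lines. This is an illustrative reconstruction only, with hypothetical institution and paradigm names; the actual analysis covered roughly 1,100 paradigms and over 3,000 institutions.

```python
from collections import defaultdict

# paradigm -> institutions for which that paradigm lies in a distinctive
# competency (hypothetical data)
dc_membership = {
    "solar_pv_1": {"Inst A", "Inst B"},
    "fuel_cell_1": {"Inst A"},
    "biomass_1": {"Inst C"},
}

# (institution, paradigm, number of alternative-energy papers) records
papers = [
    ("Inst A", "solar_pv_1", 40),
    ("Inst A", "fuel_cell_1", 25),
    ("Inst B", "solar_pv_1", 30),
    ("Inst C", "biomass_1", 12),
    ("Inst C", "solar_pv_1", 8),  # not in a DC for Inst C, so not counted
]

# count only the papers that fall inside each institution's DCs
dc_paper_counts = defaultdict(int)
for inst, paradigm, n in papers:
    if inst in dc_membership.get(paradigm, set()):
        dc_paper_counts[inst] += n

# rank institutions by papers in distinctive competencies
ranking = sorted(dc_paper_counts.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)  # [('Inst A', 65), ('Inst B', 30), ('Inst C', 12)]
```

Aggregating the same counts by country rather than institution yields the country-level figures discussed below.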

Research executives need accurate research performance information to identify areas of research strength and make strategic decisions. Developing an accurate picture of how universities and countries are performing is critical to advancing the frontiers of science.

A new way to measure multidisciplinary impact

Senior Development Advisors Kevin Boyack and Dick Klavans, together with Elsevier, have developed a new method of measuring output in multidisciplinary research. Based on co-citation analysis, SciVal Spotlight displays research performance from an interdisciplinary perspective.

Using Scopus as its underlying data source, SciVal Spotlight draws upon 5.6 million research papers published between 2003 and 2007, along with another two million reference papers that these publications cite heavily. This content was divided into about 80,000 paradigms, each of which is centered on a separate topic (e.g. alternative energy) in science.
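The co-citation analysis underlying the paradigms can be illustrated simply: two references are co-cited whenever a single paper cites both, and pairs that are co-cited frequently end up clustered into the same paradigm. The sketch below uses hypothetical paper and reference names; SciVal Spotlight's actual pipeline is, of course, far more involved.

```python
from itertools import combinations
from collections import Counter

# each citing paper -> the references in its bibliography (hypothetical data)
bibliographies = {
    "paper1": ["refA", "refB", "refC"],
    "paper2": ["refA", "refB"],
    "paper3": ["refB", "refC"],
}

# count how often each pair of references is cited together
cocitations = Counter()
for refs in bibliographies.values():
    for pair in combinations(sorted(set(refs)), 2):
        cocitations[pair] += 1

print(sorted(cocitations.items()))
# [(('refA', 'refB'), 2), (('refA', 'refC'), 1), (('refB', 'refC'), 2)]
```

Clustering algorithms applied to such co-citation counts group strongly linked references, and the papers citing them, into topic-centered paradigms.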

These paradigms were used to identify an institution’s distinctive competencies. Researchers tend to focus within a unique set of related paradigms, which form natural clusters based on the research networks at their institution. These clusters can be seen as the institution’s distinctive competencies, and are the areas in which the institution is a research leader. One of the unique features of this method is that it can identify those distinctive competencies that link multiple disciplines within an institution, indicating that research within the university is not being done in isolated silos. If work does not appear as part of a distinctive competency, this does not mean that it is not good work, but rather that it is isolated, and not part of a larger network.

An institute is identified as a research leader if it displays substantial activity and impact in the topics associated with the paradigm.

True leadership is in distinctive competencies

Using this new methodology to measure which institutions, countries and regions are research leaders in alternative energy-related science gave some surprising and insightful results (see box for method).

At the institute level, the top-10 world institutes are almost all in the United States, with Germany a close second (see Figure 1). In fact, the United States is ahead in all of the topic groups on a single-country basis; however, the only area in which it has overwhelming leadership is environmentally related research. In fuel cells and solar energy, leadership is more diffuse, and Germany and China are significant players in these two fields.

In fact, while Germany’s total number of papers remains lower than the United States’, its percentage of papers in distinctive competencies in both solar-energy and fuel-cells research is higher. This indicates that Germany is a formidable competitor in these areas, particularly in solar energy, where it has 335 papers in distinctive competencies compared with 454 for the United States.

Identifying distinctive competencies rather than simply relying on citation counts shows where competition could come from in the future. While Germany may not yet be leading the United States in alternative-energy research, it is certainly developing deep expertise in a wide range of disciplines, which could result in breakthroughs in the near future.

If our most urgent scientific challenges, such as alternative energy, require a multidisciplinary approach, then we urgently need to find ways of measuring output in these areas. Future breakthroughs in such areas are expected to emerge from the institutes and countries drawing on the widest range of their research capabilities to answer specific questions. And this methodology helps us see where those breakthroughs are likely to emerge.

Rank Institution Country Total
1 NASA Goddard Space Flight Center US 309
2 National Renewable Energy Laboratory US 271
3 Hahn-Meitner-Institut DE 240
4 Forschungszentrum Julich DE 234
5 Pennsylvania State University US 168
6 National Oceanic and Atmospheric Administration US 121
7 University of California at Irvine US 101
8 Osaka University JP 97
9 California Institute of Technology US 97
10 Harvard University US 84

Figure 1 – Top-10 institutions for alternative-energy research

Country Total papers Papers in DCs % in DCs
United States 893 454 51%
Japan 455 149 33%
Germany 370 335 91%

Figure 2 – Top-three countries for solar/photovoltaic research

Country Total papers Papers in DCs % in DCs
United States 1006 377 38%
China 574 157 27%
Japan 531 94 18%

Figure 3 – Top-three countries for fuel-cells research

Country Total papers Papers in DCs % in DCs
United States 1997 797 40%
China 425 75 18%
Japan 216 0 0%

Figure 4 – Top-three countries for environmentally related energy research
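The “% in DCs” column in Figures 2–4 is simply papers in distinctive competencies divided by total papers, rounded to the nearest percent. Recomputing the solar/photovoltaic figures from Figure 2 as a check:

```python
# country -> (total papers, papers in distinctive competencies), from Figure 2
solar = {
    "United States": (893, 454),
    "Japan": (455, 149),
    "Germany": (370, 335),
}

# share of each country's output that falls inside distinctive competencies
computed = {
    country: round(100 * in_dcs / total)
    for country, (total, in_dcs) in solar.items()
}
print(computed)  # {'United States': 51, 'Japan': 33, 'Germany': 91}
```

The recomputed shares match the published column, including Germany's striking 91%.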

Useful links:

SciVal Spotlight
Research leadership redefined – measuring performance in a multidisciplinary landscape (webinar)
USA Today, ‘US institutes lead in environmental research expertise’


What’s leading the curve: research or policy?

Stem cell research often attracts headlines due to the controversial nature of human embryonic stem cell research, and most countries have strict rules governing what can and cannot be done with public funding in this field. Research Trends investigates the relationship between policy changes and publication rates in recent years.



Stem cells are characterized by the ability to renew themselves and differentiate into a diverse range of specialized cell types. Therefore, stem cell research opens the possibility of new technologies with great therapeutic potential for many medical conditions. The two main types of mammalian stem cells are adult stem cells, which are found in adult tissues, and embryonic stem cells, which are isolated from the inner cell mass of early embryos. The latter are at the center of a moral and ethical debate on research involving the creation, use and destruction of human embryonic stem cells.

Stem cell research is an exciting but controversial field and has been the subject of intense debate in recent years. Despite the controversy, the field has sustained strong growth over the past decade (~11%). National policies on stem cell research have been evolving with the debate and influencing research outputs, as can be seen in the following overview of the most prolific countries during the past decade.

Stem cell research in the USA, Japan, Germany, the UK and France may be affected by national policy.


France: ranked 5

Since the 1994 bioethics laws, embryo research has in principle been forbidden in France. However, these laws are re-evaluated every five years, and the bioethics law of August 6, 2004, allowed research with a therapeutic aim to be carried out under strictly controlled conditions. Research is only permitted on embryos created as a by-product of IVF, and only with the parents’ agreement that the surplus embryos may be used for research purposes. Given that research manuscripts can take up to a year to appear in a journal, this relaxation of the law may be reflected in a slight increase in the number of articles on this subject published by French authors in 2005. The law is due to be re-examined at the end of the year, so the future of stem cell research in France remains uncertain.

UK: ranked 4

The UK has a well-established system for regulating the creation and use of embryos: the Human Fertilisation and Embryology Act of 1990, administered by the Human Fertilisation and Embryology Authority (HFEA). This act allows the creation and use of embryos for research, provided that the research is for one of five specified purposes and has been granted a license by the HFEA. The UK Stem Cell Initiative, launched in March 2005, aims to define a 10-year vision for UK stem cell research and to coordinate its public and private funding. This initiative coincides with a modest increase in the number of articles published by UK authors in the following years.

Germany: ranked 3

Any creation of human stem cells that involves the use of embryos is prohibited in Germany by the Embryo Protection Law of January 1, 1991. However, this law does not cover the importation of stem cell lines produced in other countries from human embryos, or their use in Germany for research purposes. In response, the Bundestag adopted the Stem Cell Law on July 1, 2002. In principle, this law prohibits the importation and use of human embryonic stem cells, except for research under exceptional conditions. Germany’s peak in output in 2001 could be interpreted as an anticipatory reaction to the impending 2002 Stem Cell Law. In May 2007, the Stem Cell Law was discussed by the Bundestag’s Committee on Education, Research and Technology Assessment, which could suggest a need for reform.

Japan: ranked 2

Since June 6, 2001, the law on Human Cloning Techniques and Other Similar Techniques has been enforced in Japan. This law specifically bans reproductive cloning and recommends the elaboration of national guidelines for the creation of “Specified Embryos” for research purposes. On July 23, 2004, Japan’s Council for Science and Technology Policy, the Japanese government’s highest science and technology policy body, approved the final report of its Bioethics Expert Panel on human embryo and stem cell research. The report recommended allowing the creation of human embryos for stem cell research. Japan’s relatively steady increase in output could reflect the absence of any drastic new legislation in recent years.

USA: ranked 1

On August 9, 2001, President George Bush allowed taxpayer-funded stem cell research, but only within strict limits. The USA’s peak in output in 2001 could correspond to an accelerated publication of publicly funded research at the time of President Bush’s restrictive statement; surprisingly, commentary on the legislation is scarce in the scientific literature of the time. On May 24, 2005, the House of Representatives passed a bill to expand federal financing for embryonic stem cell research, defying a veto threat from then President Bush. On March 9, 2009, President Barack Obama issued Executive Order 13505, Removing Barriers to Responsible Scientific Research Involving Human Stem Cells, revoking President Bush’s statement of August 9, 2001, as well as the supplemental executive order of June 20, 2007, and opening the door to new horizons for the future of stem cell research in the US. The continued upswing in research output on stem cells may reflect the fact that many researchers have sought non-federal funding for their research or diversified their efforts into permissible techniques for acquiring human stem cells. Recently, two leading bioethicists have even argued that Bush’s restrictive policy may have inadvertently pushed stem cell research, and thinking about the underlying ethical dilemmas, much further forward (1).

Open future

As the debate on stem cell research evolves, national policies follow: advances in the field raise new ethical issues, entailing an evolution of the controversy, and a resulting need for new regulations. The ethics of stem cell research are still controversial and, despite a recent tendency towards an increase in legislative permissiveness, the future of this exciting field of research is still to be written.

Reference:
(1) ‘Benefits of the stem cell ban’, The Scientist, available at:
www.the-scientist.com/news/display/55752/

(registration, which is free, is required to read this article)


Busting the open access myth

Open access has been touted as the future of scientific publishing, claiming benefits such as wider readership and, crucially, significantly higher citation rates. However, research carried out by Phil Davis at Cornell University suggests that the manner of publication may have very little to do with citations. He discusses his latest research.




Research Trends (RT): Your methodology is highly unusual for citation analysis. How did you decide on a randomized controlled trial?

Phil Davis (PD): Previous studies that measured the citation advantage were all based on observational methodologies. Essentially, researchers counted citations to open-access articles and compared them to subscription-access articles. This is a very weak methodology, as it ignores factors other than access that lead to a citation. It also ignores the direction of causality.

The only way to adequately control for confounding explanations and to rule out the possibility of reverse directionality was to set up a proper scientific trial. By randomizing which articles were given the open-access “treatment” we could effectively control for other possible causes and focus entirely on the effect of access on readership and citations. This methodology makes our study much more rigorous than other observational studies that were done in the past.
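The randomization step Davis describes can be sketched as follows. This is purely illustrative: the function name, treatment fraction and article identifiers are hypothetical, not details of the actual study. The point is that because access status is assigned at random, it cannot be correlated with article quality, which is what breaks the confounding that plagued earlier observational comparisons.

```python
import random

def assign_treatment(article_ids, treatment_fraction=0.25, seed=42):
    """Randomly split articles into an open-access 'treatment' group
    and a subscription-access control group."""
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    shuffled = list(article_ids)
    rng.shuffle(shuffled)
    cutoff = int(len(shuffled) * treatment_fraction)
    return set(shuffled[:cutoff]), set(shuffled[cutoff:])

open_access, control = assign_treatment([f"art{i}" for i in range(100)])
print(len(open_access), len(control))  # 25 75
```

Citations and downloads for the two groups can then be compared directly, since any systematic difference between them must be due to the access condition.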

RT: How did you get publishers to participate in your study?

PD: It was much easier than I expected! I focused on recruiting scientific societies, since I knew they had an interest in the outcome of the study. Ultimately, their participation depended on trust: they trusted that I would conduct a rigorous, scientific study and that I was going to be fair and objective in reporting the results. All but one publisher gave me access to their online publishing system so that I could manipulate the access conditions without their involvement, thus minimizing potential publisher influence and bias. Every publisher gave me full access to their statistical reporting systems. This says a lot about the integrity of these people and their dedication to the scientific process.

RT: You found evidence that open access increases readership but not citations. What does this mean?

PD: A large open access “citation advantage” would suggest that the subscription model is doing a very poor job of disseminating information to the research community. The fact that we were unable to detect a citation difference suggests that the subscription model is operating efficiently, at least for authors. Yet, the research community is not the only group that reads the scientific literature. We were able to document a large increase in full-text article downloads and a smaller, but significant, increase in PDF downloads and unique visitors to the journal websites. This suggests that open-access publishing may reach a wider readership community, although this may not translate into more citations.

RT: Who are these additional readers of the scientific literature?

PD: It is difficult to say from our data. We know that they are accessing the literature from outside subscriber IP addresses. But we don’t know who they are, nor do we know their intention. They could be people like my dad – who had triple bypass heart surgery – typing a search query into Google and landing on an article published by the American Heart Association. They could be teachers, students, physicians or journalists, or just interested people trying to learn from the primary literature. The research field is wide open on answering this question.

RT: Why is measuring a citation difference so important in making the case for open access?

PD: Most scientists view citations as a form of reward, and thus an incentive, for where and how they publish. The potential of getting a 50–250% return in expected citations by publishing in an open-access journal or by making your articles freely available from an institutional archive has been used repeatedly as an argument to change the behavior of scientists. There are many other good reasons for making one’s results widely available – a citation advantage, however, does not appear to be one of them.

RT: You take issue with the phrase “open access”. Why?

PD: “Open access” assumes a dissemination model in which information only flows from the publisher to the reader. It’s a model that completely ignores the high degree of sharing of articles that takes place within informal networks of authors, readers and libraries. I’m very privileged to belong to an institution with such rich access to the literature, and yet I still depend on my peers for copies of research articles and manuscripts. Secondly, “open access” implies a right to information; I much prefer “free access”, which implies a privilege.

RT: Some have criticized you for reporting too early on your study. What is your response?

PD: Our first article, reporting initial results within the first year after publication (1), was indeed published early in the study. We felt confident that the main results wouldn’t change over time, and they haven’t. After two years, we have yet to detect a difference in citations to the open-access articles compared with the control articles. Remember that other studies had reported huge differences after very short periods of time, some within the first few months after publication. I was confident that if we didn’t see a difference within the first year, we were unlikely to see a difference in the future. I’m glad we made the decision to publish early. Similar findings from other journals in the sciences, medicine, social sciences and humanities will be coming out in the next few years.

RT: The scholarly publishing field is changing very rapidly. How relevant will your study be, in say, five years?

PD: I imagine that the main results of our study will largely be moot in another five years. The information landscape is changing very rapidly right now, with new granting and institutional policies and new publishing business models. Bibliometrics is a very powerful tool, although it requires theory from other disciplines to give it meaning. This is why my professors have pushed me to read into the history of science, economics, communication, law and sociology. When this study runs its course, I hope to be ready for the next big question.

Reference:

(1) Davis, P. (2008) ‘Open access publishing, article downloads, and citations: randomised controlled trial’, BMJ 2008;337:a568; doi:10.1136/bmj.a568


…a classic paper?

Why do researchers continue to cite classic papers for many decades? Is it to formally acknowledge an intellectual debt or is it the ‘done thing’ in the field? We ask two researchers why they cited a classic paper.



Citation classics (articles receiving many more than the expected number of citations for their area) exist in all fields of research. Often marking technological or theoretical advances, they can stimulate a generation of researchers to make significant advances that might otherwise not have been possible. As such, they become highly cited and may persist for many years.


Why do researchers continue to return to these classic papers long after the initial wave of excitement has passed? Are they cited to formally acknowledge an intellectual debt or is it just the ‘done thing’ in the field?

Conceptual shorthand

Michael Jensen and William Meckling’s landmark 1976 paper, ‘Theory of the firm: Managerial behavior, agency costs and ownership structure’ (1), offered a unique synthesis of three existing theories (of agency, of property rights and of finance) to formulate a theory of the ownership structure of a firm. The article has been cited over 3,900 times since 1996 in journals on topics as diverse as business, management, accounting, economics, econometrics, finance, decision sciences and psychology, and its broad impact can be attributed to its inclusive scope and broad theoretical framework.

Associate Professor Tyge Payne at Rawls College of Business, Texas Tech University, cited this classic paper in two of his recent papers on management decision-making and performance (2, 3). Dr Payne notes that in doing so: “I was drawing on classic agency theory and, therefore, used Jensen and Meckling as a means of establishing that line of thinking.”


He continues: “For me, citation classics are a way of communicating certain ideas to the reader. It positions the reader in an established research stream and develops a measure of credibility without having to extensively develop a particular line of thinking.”

Acknowledging past achievements

Citation classics may offer methodological advances that subsequently become the standard or reference procedure for work in a given field – and sometimes beyond. Despite running to just six pages in length, James Murphy and John Riley’s 1962 article, ‘A modified single solution method for the determination of phosphate in natural waters’ (4), has been cited more than 4,200 times since 1996. As one of the authors recalled in a 1986 column: “I suppose that this paper has been so extensively cited [because] it provides a simple, highly reproducible technique for the determination of microgram amounts of phosphate. Almost 25 years later, the method is still the recommended standard procedure for the analysis of fresh and potable waters, as well as seawater. Although it was originally developed for the analysis of phosphate in natural waters, it has been widely adopted in many other fields, including, for example, botany, zoology, biochemistry, geochemistry, metallurgy, and clinical medicine. Indeed, kits for the determination are available commercially for use in physiological investigations and water analysis.”

Professor David L. Jones at the School of the Environment and Natural Resources, Bangor University, Wales, has cited this paper three times this year (5, 6, 7). He says: “This is the definitive paper on measuring phosphorus in soil solution. Murphy and Riley published the method first, and it only seems fitting to acknowledge that achievement.”

Citation classics punctuate the scholarly landscape and mark waypoints in our development as a scientific society. But they are more than that: in the words of Eugene Garfield, who coined the term “citation classic” in 1977 (8), “this is the human side of science”.


Did you know

Don’t panic! – The Hitchhiker’s Guide to the Galaxy makes a scholarly impact

With its publication in 1979, The Hitchhiker’s Guide to the Galaxy (1) (originally a radio series) single-handedly invented the genre of science-fiction comedy. It went on to spawn four more books, stage and film adaptations, and a video game. Selling more than 14 million copies and translated into more than 30 languages, the novel has also had considerable scholarly impact, with almost 100 citations to date in Scopus. A recent book (2) shows how author Douglas Adams successfully predicted (and may even have stimulated research towards) such advances as space tourism, parallel universes, instant-translation devices and sentient computers. But, as Slartibartfast noted to Arthur Dent: “Science has achieved some wonderful things of course, but I’d far rather be happy than right any day.”

References
(1) Adams, D. (1979) The Hitchhiker’s Guide to the Galaxy. London: Pan Macmillan.
(2) Hanlon, M. (2005) The Science of The Hitchhiker’s Guide to the Galaxy. New York: Palgrave-Macmillan.
