Articles

Research Trends is an online magazine providing objective insights into scientific trends, based on bibliometric analyses.

The RAE: measuring academic research

Higher education research assessment in the UK is changing. Research quality has been measured for over 20 years via a method known as the Research Assessment Exercise (RAE), but 2008 will be the last time the RAE takes place in its present form. What is it and why is it changing?

The Research Assessment Exercise (RAE) is the principal method used in the UK to establish the quality of the research undertaken in the higher education sector. Conducted jointly by the UK’s four principal funding bodies - the Higher Education Funding Council for England, the Scottish Funding Council, the Higher Education Funding Council for Wales and the Department for Employment and Learning, Northern Ireland - the exercise is based on peer review and aims to produce quality profiles for each submission of research activity made by an institution. Any higher education institution eligible to receive research funding from one of these bodies may participate. Exercises have taken place in 1986, 1992, 1996 and 2001, the last of which was the most rigorous to date.

Traditionally, the RAE’s main objective has been to allow funding bodies to assess the quality of research arising from the investment of public funds. It is used as a means for the academic sector to assess its success and prepare its future strategy. As such, the RAE introduces an incentive to individuals and institutions to improve their research performance and, unlike other forms of review and assessment in higher education, it has retained a good degree of support among academic staff.

The 2008 RAE will be based on the same principles of peer review as previous exercises. For the purpose of this year’s assessment, each academic discipline is assigned to one of 67 units of assessment (UOAs). Submitted work is then assessed against the published criteria by an expert panel drawn from higher education institutions and the wider research community. The results will be published as a graded quality profile for each submission in each UOA.

RAE versus REF

After 2008, the UK intends to change the way research quality is evaluated to a more statistics-based system. The main reasons put forward by the Higher Education Funding Council for England for this change are that the RAE is expensive and burdensome for the participating institutions. Bibliometrics are expected to be central to judging research quality in the new system. The proposed method, known as the Research Excellence Framework (REF), will therefore rely more heavily on statistics than on the extensive - but arguably costly - peer review of the current RAE.

The REF will consist of a framework for the assessment and funding of academic research that takes into account the key differences between disciplines. Research income, research student data and a new bibliometric indicator of research quality will drive assessment and funding decisions for the science-based disciplines. A form of peer review will remain in place for the arts, humanities and social sciences.

The REF is expected to be phased in gradually. It will inform funding for science-based disciplines from 2010, with all other disciplines following from 2014.

Research assessment elsewhere

The UK is not the only country with fresh plans for the allocation of research funding. Australia recently announced the final specifications of its first research evaluation method, the Research Quality Framework (RQF). While the RAE places a stronger emphasis on research environment and esteem indicators, the RQF grades both research quality and its impact. However, the two also share common characteristics: similarities in the proportion of work to be examined, a minimum size of research grouping, and rules for eligibility for inclusion in the volume count for funding.

In recognition of peer reviewers

With increasing pressure on researchers’ time, many are finding it hard to devote the necessary hours to peer review. Journal editors are therefore having trouble finding expert reviewers for an increasing number of manuscript submissions. Research Trends looks at requests from the scientific community for the essential task of peer review to be recognized.

Peer review, the assessment of a scholarly manuscript by external experts prior to publication, is an essential part of scholarly communication. It has recently been described as the cornerstone without which “the whole edifice of scientific research and publication would have no foundation”. (1) However crucial, peer review mostly goes unrewarded.

Researchers are always struggling for time between conducting and documenting their research, obtaining funding through grant applications, and keeping pace with the literature in their field. A large proportion of researchers also have to deal with teaching and mentoring students, managing labs, and travelling to present their findings. It seems paradoxical, therefore, that a fundamental yet time-consuming task such as peer review is not formally incentivized, especially in these times of budgetary restrictions for science, growing competition for grants, and increasing emphasis on productivity.

Prof. Philippe Baveye

The reviewing crisis

For Prof. Philippe Baveye of the SIMBIOS Centre, Abertay University, this very real problem is nonetheless only the tip of the iceberg: “Now more than ever, many more manuscripts are submitted to journals than really deserve to be. A huge amount of them are junk, submitted for reasons other than the sharing of new knowledge, which understandably nobody wants to review. It is in this context that the peer-review crisis has to be interpreted.”

Prof. Bernard Grabot

Although there have been ideas for penalising late reviewers (2) as an incentive for prompt reviews, the majority of suggestions focus on positive reinforcement. (3) Prof. Bernard Grabot, of the Ecole Nationale d'Ingénieurs de Tarbes, France, agrees that this is the right approach: “In my opinion, the idea is to encourage people to review; we should therefore avoid any penalty, even for ‘poor’ reviewers, as people would prefer not to respond than risk a bad evaluation.”

Peer-review metrics

While some journals do reward reviewers, whether with access to e-content or to Abstracting & Indexing services such as Scopus, by publishing lists of reviewers and/or frequent reviewers, or even by paying a token sum for each completed review, most peer reviewing goes unrewarded. The most recent proposals to change this have advocated applying scientometrics to peer review itself. (4)

Dr Elena Paoletti

In November 2009, Dr Elena Paoletti of the National Council of Research, Italy, proposed the Reviewer Factor: a simple indicator based on the number of reviews multiplied by the citation influence of the journal, which would be “a concrete way to provide public recognition of [reviewers’] attitude to evaluation and importance in the field, and a succinct measure of [their] experience in peer review.” (5) Late reviews may or may not be taken into account.
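
On one possible reading of this proposal, the Reviewer Factor could be computed as the number of reviews completed for each journal multiplied by that journal’s citation influence, summed across journals. The following sketch is illustrative only; the function name, data layout and example numbers are our own, not part of Paoletti’s proposal:

```python
def reviewer_factor(review_counts):
    """Hypothetical sketch of a Reviewer Factor.

    `review_counts` is a list of (reviews_completed, citation_influence)
    pairs, one per journal reviewed for. Each count of completed reviews
    is weighted by the journal's citation influence, then summed.
    """
    return sum(n_reviews * influence for n_reviews, influence in review_counts)

# Example: 4 reviews for a journal with citation influence 2.5,
# plus 2 reviews for a journal with citation influence 1.0:
print(reviewer_factor([(4, 2.5), (2, 1.0)]))  # 12.0
```

Whether late reviews are discounted, and which citation-influence measure is used, are open choices in such a scheme.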

Dr Pedro Cintas

Meanwhile, Dr Pedro Cintas of the University of Extremadura, Spain, suggested a Peer Review Index: a metric or “peer review capability [which] would be the quotient between the number of papers evaluated (q) and the number of papers published (p) within a given period.” (6) This could be made to incorporate the quality of the reviews in terms of relevance and usefulness, as evaluated by the editors.
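
The quotient q/p at the heart of this proposal is straightforward to compute; the sketch below (function name and example figures are our own) simply makes the definition concrete:

```python
def peer_review_index(papers_evaluated, papers_published):
    """Hypothetical sketch of a Peer Review Index: the quotient q/p between
    the number of papers evaluated (q) and the number of papers published (p)
    by a researcher within a given period."""
    if papers_published == 0:
        raise ValueError("index is undefined for a researcher with no publications")
    return papers_evaluated / papers_published

# Example: a researcher who reviewed 12 manuscripts and published 6 papers
# in the same period has an index of 2.0:
print(peer_review_index(12, 6))  # 2.0
```

An index above 1 would indicate a researcher who reviews more than they publish; weighting by editor-assessed review quality, as Cintas suggests, would require an extra factor not shown here.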

Prof. Bernard Grabot comments: “Concerning what would make a ‘good’ index, the discussion is open […] The important thing would be – if possible – to get a single index for a reviewer, summarising his/her activities for most of the journals [...] but I suppose it is quite difficult. It would be useful to get similar indices for all the journals, which could then be computed at reviewer level.”

While Prof. Philippe Baveye does not deny the usefulness of these types of indicators, he believes that they are only part of the solution: “Certainly, peer-reviewing effectiveness indices like those that are being proposed would help, […] but that would not be enough. The solution to the problem has to be sought by attacking the ‘publish or perish’ mentality directly, wherever it manifests, and by reducing drastically the number of articles published in most disciplines.” (7)

Although there is a clear need for the academic community to incentivize peer review, in order to preserve a fast and efficient quality check on manuscripts submitted for publication, no uniformly established method yet exists. With the recent emergence of reviewer metrics, the issue has the potential to become a hotly debated topic.

Useful links

Rewarding reviewers – could a Reviewer Factor be a solution?
Increasing visibility and recognition of reviewers – is a Peer Review Index a possible solution?
Sticker shock and looming tsunami: the high cost of academic serials in perspective

References:

(1) (2008) “The pitfalls and rewards of peer review”, The Lancet, vol. 371, issue 9611, p. 447.
(2) Hauser, M & Fehr, E (2007) “An incentive solution to the peer review problem”, PLoS Biology, vol. 5, issue 4, p. e107.
(3) Baveye, PC, Charlet, L, Georgakakos, KP & Syme, G (2009) “Ensuring that reviewers’ time and effort are used efficiently”, Journal of Hydrology, vol. 365, issue 1–2, pp. 1–3.
(4) Baveye, PC & Trevors, JT (2010) “How can we encourage peer-reviewing?”, Water, Air, & Soil Pollution, pp. 1–3, article in press, DOI 10.1007/s11270-010-0355-7.
(5) Paoletti, E (2010) “Rewarding reviewers – could a Reviewer Factor be a solution?”, Elsevier Reviewers’ Update, issue 4, March 2010.
(6) Cintas, P (2010) “Increasing visibility and recognition of reviewers – is a Peer Review Index a possible solution?”, Elsevier Reviewers’ Update, issue 4, March 2010.
(7) Baveye, PC (2010) “Sticker shock and looming tsunami: the high cost of academic serials in perspective”, Journal of Scholarly Publishing, vol. 41, issue 2, pp. 191–215.

Letters from the past

While scholars have always built networks to disseminate their ideas, their media have changed over time. During the Enlightenment, academics exchanged letters within a forum known as the Republic of Letters. For the first time, this “Republic” has now been mapped. Research Trends meets the “cartographers”.

Today, we take scholarly communication so much for granted that we rarely consider how we would share ideas and meet like-minded researchers if there were no journals or research institutes. Yet these are relatively recent developments. The first journals did not appear until the 17th century and universities did not become widespread until the 16th century. Before (and during) these developments, scholars exchanged opinions, hypotheses and conclusions within a forum they called the Republic of Letters.

The Republic of Letters was a forerunner of our modern scholarly communications, incorporating the activities of today’s journals, societies and research institutes. Starting in the mid-15th century and reaching its peak during the Enlightenment period of the late 17th and 18th centuries, this was both a real and an imagined community. Ideas were exchanged via handwritten letters and cultural-intellectual gatherings in salons.

According to Paula Findlen, Ubaldo Pierotti Professor of Italian History and Chair of the History Department at Stanford University: “It was a scholars’ Utopia; a kind of transnational, global community of minds.”

Left to right: Dan Edelstein, Nicole Coleman, Paula Findlen. Photo: Linda Cicero

Mapping the Republic

Findlen, along with her colleagues at Stanford University, Dan Edelstein, Assistant Professor of French, and Academic Technology Specialist Nicole Coleman, is working on a major collaborative project to map the exchanges within the Republic of Letters.

Paula Findlen is Ubaldo Pierotti Professor of Italian History and Chair of the History Department at Stanford University. Her research focuses on the scientific culture of early modern Italy, the role of the Jesuits in early modern science, the history of collecting, and the Republic of Letters as seen from an Italian perspective.

Nicole Coleman is Academic Technology Specialist for the Stanford Humanities Center. She works on large-scale international collaborative research projects, currently focusing on information visualization for humanities research.

Dan Edelstein is Assistant Professor of French at Stanford University. He works primarily on 18th-century France, which also serves as a launching pad for forays into the 19th and 20th centuries, as well as the early modern period.

Producing the maps, however, is only a starting point for the team. They are using them to test theories and gain an overview of the landscape. The maps make it possible to view each writer in context, and to search and compare different thinkers. It is also much easier to see how a correspondent’s career developed along with his network.

They have long-term plans to allow researchers to annotate the data and test hypotheses. “Humanities projects can face the challenge of presenting disputed and/or incomplete data in a way that offers most clarity to researchers, so we want to create space for interpretations when we create visualizations,” says Coleman. However, simply gathering the data was the team’s first obstacle. “We’re working with incomplete data. And many gaps will never be filled in because the documents are lost,” she explains. “It’s a bit like trying to do modern bibliometrics, but you only have Nature left,” says Edelstein.

While it would be feasible to explore the content of the letters, the team chose to look only at metadata. “The discovery of new knowledge in the humanities relies on rich context, which can be obscured when the objective of visualizing this data is primarily about managing complexity or quantity. When gathering these remnants of the past, our big challenge is to present it in a way that gives context. Context helps us make sense of it rather than numerical analysis,” Coleman adds.

Exploring the periphery

Findlen is particularly interested in the outliers: people in far-flung locations or those forgotten by history. “We can see how they fit in with and contributed to the flow of ideas. Everyone knows that London and Paris were important, and the maps confirm this. But we can now see how the Republic appeared to its members living outside the capitals, such as Benjamin Franklin in Philadelphia,” she says.

At the same time, some people were highly prolific but did not have a big impact, while others wrote few letters but had a massive impact. In fact, if history has shown us anything, it is that sheer quantity of output is only a small part of the story. Important figures, like Isaac Newton, actually refused to accept correspondence, while others, like Thomas Hobbes and René Descartes, had a relatively small output compared with their impact.

Establishing past impact

While the output – maps of the Republic of Letters – echoes modern bibliometric attempts to map science, the team’s starting point is very different. One significant distinction is that where modern bibliometrics aims to establish the impact of living authors, Findlen, Edelstein and Coleman already know who was important.

“What we’re really doing,” says Edelstein, “is comparing reality with imagination. For instance, many French Enlightenment thinkers believed that England was a haven of liberal, progressive thinking and hoped to emulate this free society. However, the reality is that key French Enlightenment figures, like Voltaire, weren’t really corresponding with England. In fact, less than 1% of his output went to, or came from, England.”

Gossip will always be with us

When drawing parallels between the Republic of Letters and current scholarly communications, it is important to remember that letter writing was a quite different activity from what it is today. While some letters were personal, many were written with a wider audience in mind. Correspondents in the Republic assumed that their letters would be shared.

According to Edelstein, “these letters were essentially gossip: gossip about ideas, books, publications and other members of the Republic.” And this background chatter whereby scholars bounce ideas, vent steam and make private comments has never really stopped, continuing today in emails, blogs and university corridors the world over.

Edelstein adds: “Everyone is part of a community. While we celebrate individual genius, most ideas emerge from debate, and this has never changed. We have always constructed virtual communities, whether by writing letters or joining today’s global online networks.” Debate is a cornerstone of all academic pursuits, and while our media may change, we will always need to discuss our ideas within a community.

Useful links

Mapping the Republic of Letters (project website)
Mapping the Republic of Letters (visualizations and explanations)
VN:F [1.9.22_1171]
Rating: 0.0/10 (0 votes cast)

Today, we take scholarly communication so much for granted that we rarely consider how we would share ideas and meet like-minded researchers if there were no journals or research institutes. Yet these are relatively recent developments. The first journals did not appear until the 17th century and universities did not become widespread until the 16th century. Before (and during) these developments, scholars exchanged opinions, hypotheses and conclusions within a forum they called the Republic of Letters.

The Republic of Letters was a forerunner of our modern scholarly communications, incorporating the activities of today’s journals, societies and research institutes. Starting in the mid-15th century and reaching its peak during the Enlightenment period of the late 17th and 18th centuries, this was both a real and an imagined community. Ideas were exchanged via handwritten letters and cultural-intellectual gatherings in salons.

According to Paula Findlen, Ubaldo Pierotti Professor of Italian History and Chair of the History Department at Stanford University: “It was a scholars’ Utopia; a kind of transnational, global community of minds.”

Left to right: Dan Edelstein, Nicole Coleman, Paula Findlen

Left to right: Dan Edelstein, Nicole Coleman, Paula Findlen Photo: Linda Cicero

Mapping the Republic

Findlen, along with her colleagues at Stanford University, Dan Edelstein, Assistant Professor of French, and Academic Technology Specialist Nicole Coleman, is working on a major collaborative project to map the exchanges within the Republic of Letters.

Paula Findlen is Ubaldo Pierotti Professor of Italian History and Chair of the History Department at Stanford University. Her research focuses on the scientific culture of early modern Italy, the role of the Jesuits in early modern science, the history of collecting, and the Republic of Letters as seen from an Italian perspective.

Nicole Coleman is Academic Technology Specialist for the Stanford Humanities Center. She works on large-scale international collaborative research projects, currently focusing on information visualization for humanities research.

Dan Edelstein is Assistant Professor of French at Stanford University. He works primarily on 18th-century France, which also serves as a launching pad for forays into the 19th and 20th centuries, as well as the early modern period.

Producing the maps, however, is only a starting point for the team. They are using them to test theories and gain an overview of the landscape. The maps make it possible to view each writer in context, and to search and compare different thinkers. It is also much easier to see how a correspondent’s career developed along with his network.

They have long-term plans to allow researchers to annotate the data and test hypotheses. “Humanities projects can face the challenge of presenting disputed and/or incomplete data in a way that offers most clarity to researchers, so we want to create space for interpretations when we create visualizations,” says Coleman. However, simply gathering the data was the team’s first obstacle. “We’re working with incomplete data. And many gaps will never be filled in because the documents are lost,” she explains. “It’s a bit like trying to do modern bibliometrics, but you only have Nature left,” says Edelstein.

While it is feasible to explore the content of the letters, the team chose only to look at metadata. “The discovery of new knowledge in the humanities relies on rich context, which can be obscured when the objective of visualizing this data is primarily about managing complexity or quantity. When gathering these remnants of the past, our big challenge is to present it in a way that gives context. Context helps us make sense of it rather than numerical analysis,” she adds.

Exploring the periphery

Findlen is particularly interested in the outliers: people in far-flung locations or those forgotten by history. “We can see how they fit in with and contributed to the flow of ideas. Everyone knows that London and Paris were important, and the maps confirm this. But we can now see how the Republic appeared to its members living outside the capitals, such as Benjamin Franklin in Philadelphia,” she says.

At the same time, some people were highly prolific, but did not have a big impact, while others wrote few letters, but had a massive impact. In fact, if history has shown us anything, it is that sheer quantity of output is only a small part of the story. Important figures, like Isaac Newton, actually refused to accept correspondence, while others, like Thomas Hobbes and René Descartes, have a relatively small output when compared with their impact.

Establishing past impact

While the output – maps of the Republic of Letters – echo modern bibliometric attempts to map science, the team’s starting point is very different. One significant distinction is that where modern bibliometrics aims to establish the impact of living authors, Findlen, Edelstein and Coleman already know who was important.

“What we’re really doing,” says Edelstein, “is comparing reality with imagination. For instance, many French Enlightenment thinkers believed that England was a haven of liberal, progressive thinking and hoped to emulate this free society. However, the reality is that key French Enlightenment figures, like Voltaire, weren’t really corresponding with England. In fact, less than 1% of his output went to, or came from, England.”

Gossip will always be with us

When drawing parallels between the Republic of Letters and current scholarly communications, it is important to remember that letter writing was then quite a different activity from correspondence today. While some letters were personal, many were written with a wider audience in mind. Correspondents in the Republic assumed that their letters would be shared.

According to Edelstein, “these letters were essentially gossip: gossip about ideas, books, publications and other members of the Republic.” And this background chatter whereby scholars bounce ideas, vent steam and make private comments has never really stopped, continuing today in emails, blogs and university corridors the world over.

Edelstein adds: “Everyone is part of a community. While we celebrate individual genius, most ideas emerge from debate, and this has never changed. We have always constructed virtual communities, whether by writing letters or joining today’s global online networks.” Debate is a cornerstone of all academic pursuits, and while our media may change, we will always need to discuss our ideas within a community.

Useful links

Mapping the Republic of Letters (project website)
Mapping the Republic of Letters (visualizations and explanations)

Metric mad: the future of scholarly evaluation indicators

Hosted by the National Science Foundation (NSF), the “Scholarly Evaluation Metrics: Opportunities and Challenges” workshop took place in late 2009 in Washington D.C. Ashley Higgs, on behalf of Research Trends, looks back at what the participants discussed, agreed and disagreed on.



In mid-December 2009, around 50 colleagues from across the sciences assembled for what was tipped to be a veritable bibliometric wonderland. Attended by Jorge Hirsch and Henry Small among others, the event offered a practical workshop rather than one-way theoretical presentations.

New usage metrics: recurring themes, fresh challenges

Small beginnings: it took centuries for citation structure to develop; technologies are only now available to make new metrics possible.

Incentives work both ways: people need incentives to adopt new metrics, while metrics incentivize both positive and negative behavior.

Availability of raw data: usage data can be proprietary, fragmented, and not overtly displayed.

Metrics are only part of the answer: peer review continues to play a role.

Jumping on the interdisciplinary bandwagon, the speakers and attendees represented many differing points of view: government vs. academic vs. corporate; evaluator vs. proposer; funding vs. policy vs. scientist; metric theorists vs. practitioners. But while debates were spirited, discussions were collegial and focused on advancing work on new metrics.

Two particular questions occupied participants, to which all discussions of new metrics circled back. Herbert van de Sompel of Los Alamos National Laboratory, the first speaker and one of the event organizers, asked attendees: “What are the qualities which make a metric acceptable to all stakeholders? And how do we move from conception to acceptance?” The workshop centered on projects investigating or proposing new metrics, including the MESUR project, Eigenfactor, h-bar index, and PLoS ONE’s article-level metrics. Many of these new metrics center on usage data.

Usage-based versus article-level metrics
Metrics based on usage data are central to the MESUR (MEtrics from Scholarly Usage of Resources) project. Johan Bollen of Indiana University, principal investigator of the MESUR project, presented his findings to date. When comparing traditional citation-based metrics with usage-based metrics, he observed that usage data are very good indicators of prestige, but that evaluating scholars solely on rate metrics and total citations is “like saying Britney Spears is the most important artist who ever existed because she’s sold 50 million records.”

In contrast, Peter Binfield of PLoS ONE presented the journal’s work on article-level metrics. In PLoS ONE, article views, downloads, star ratings, bookmarks and comments join the traditional citation counts. There are, however, downsides to article-level metrics like the star-rating system: Peter cautions that it is not yet widely used, and there is a propensity to give articles a five-star rating. The full scale is rarely used, meaning that it can be hard to infer much from these ratings.

Missing from the metrics currently available, claims Peter, are those that predict an article’s impact from day one; ratings by reviewers, editors and other experts in an article’s particular field; mainstream media coverage; publicly available usage metrics that track article downloads, views of abstracts, re-posts of articles online and so on; tracking of “conversations” (comments, forum discussions and so on) outside the original place of publication; and the reputation of metrics among commentators.

Moving with the times
Citation analysis has a history hundreds of years in the making; discussion of new usage indicators, by contrast, has only been possible in the last decade or two. It will take a long time before scholarship catches up with these new (technological) metrics, and we are only just beginning to understand what their impact will be.

Will Jorge Hirsch’s h-bar index take hold with the speed of the h-index? Will collaboration between the MESUR and Eigenfactor projects deliver MESUR-able results? Which approaches to network analysis will become mainstream in identifying influence, prestige and trust? When will measuring re-use of data sets become commonplace? Will metrics ever replace peer review? Whatever the answers, we look forward to the next workshop to carry the debate forward.
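The “power of simplicity” argument made at the workshop is easy to appreciate: unlike network-based indicators, the h-index (the largest h such that an author has h papers with at least h citations each) can be computed in a few lines. A minimal sketch in Python, with illustrative citation counts rather than any real author’s record:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break  # sorted descending, so no later paper can qualify
    return h

print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3: three papers with >= 3 citations
```

The single pass over a sorted list is the whole calculation, which goes some way to explaining the index’s rapid adoption.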

The burning questions

Cart before the horse: new usage-based metrics require the collection of new data for future analysis. But what data with what standards and for which metrics?

Variances in usage-tracking systems: without a central repository, how to measure usage across databases for the same article?

Power of simplicity: simple calculations, such as the h-index and impact factor, have high adoption rates; will relatively complex, computer-dependent network analyses ever achieve the same rate?

Scholarly vs. public attention: when analyzing usage data of publicly available articles, can scholarly attention be distinguished from general curiosity? Does it need to be?

What’s “new”: do existing systems for scholarly attention and funding decisions drive attention to the norm, to the detriment of breakthrough research that pushes the boundaries of science?

No single metric: while one metric will never suffice, which set of metrics will serve as a standard group?

Metrics for non-article research output: how can re-purposing mathematical formulas or re-using data sets be tracked?

Useful links

Scholarly Evaluation Metrics: Opportunities and Challenges
Scholars Seek Better Metrics for Assessing Research Productivity
MESUR
PLoS ONE
Visualizations on PLoS


Buckyballs, nanotubes and graphene: On the hunt for the next big thing

Known for many years only as a theoretical structure, graphene (a one-atom-thick, two-dimensional sheet of carbon atoms) has recently been isolated, and breakthroughs in its isolation have led to an explosion of literature on its unique properties and potential applications. Research Trends investigates the massive growth in publications on graphene and how the field’s most prolific researchers migrated to it from closely related topics.



The current focus on graphene builds on the foundations of nanoscience laid down with the discovery of buckminsterfullerene (named in homage to the geodesic domes of architect Richard Buckminster Fuller) in 1985. (1) This sparked the search for other fullerenes, complex carbon nanostructures typically occurring as spheres (similar in appearance to a soccer ball, and colloquially known as “buckyballs”) or cylinders. The first cylindrical structures, quickly dubbed nanotubes, were isolated in 1991. (2) Graphene can be considered as an unzipped and flattened-out nanotube, and has been shown to have unique electronic properties under certain conditions. (3)

Explosive growth

The growth of the peer-reviewed journal literature on nanotubes and graphene is nothing short of remarkable. While articles on fullerenes have appeared in steadily increasing numbers annually since 1985 (see Figure 1), massive (and so far sustained) growth has been observed for both nanotubes and graphene. Early response to the “discovery” of each of these materials shows very different trends (see Figure 2). While fullerene and nanotube research expanded rapidly, graphene research has grown exponentially (at a rate of 58% per year) since the publication of Novoselov et al. (4), a landmark paper describing a new method for isolating stable graphene sheets. The citation impact of this paper is visualized in Figure 3, giving a clear sense of the citation ripples emanating from this paper out into the literature, like those from a brick dropped in a pond.
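The 58% annual growth figure quoted above corresponds to fitting an exponential to yearly article counts, which is most easily done as a least-squares fit on the logarithm of the counts. A minimal sketch of such an estimate, using hypothetical counts rather than the actual Scopus data:

```python
import math

def annual_growth_rate(counts):
    """Fit ln(count) = a + b*year by least squares; e**b - 1 is the yearly rate."""
    n = len(counts)
    xs = range(n)  # years as 0, 1, 2, ...
    ys = [math.log(c) for c in counts]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    return math.exp(slope) - 1

# Hypothetical yearly article counts growing at roughly 58% per year:
print(round(annual_growth_rate([130, 210, 330, 520, 820]), 2))  # -> 0.58
```

Fitting on the log scale keeps the later, larger counts from dominating the fit, which is why it is the standard way to quote a single compound growth rate for series like these.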

Figure 1. English-language research articles published in journals in the period 1985–2009. Keyword searches were conducted for fullerenes (*fullerene), nanotubes (nanotube*) and graphene (graphene*). Source: Scopus.

Figure 2. English-language research articles published in journals from the year indicated (i.e. for fullerenes, Y1 is 1985). Keyword searches were conducted for fullerenes (*fullerene), nanotubes (nanotube*) and graphene (graphene*). Source: Scopus.

Figure 3. All documents citing Novoselov et al. (2004; shown at the centre of the figure). The concentric rings contain documents published in 2005 through 2009, respectively, identified by their first author – note how their number increases with each year, just like the broadening of ripples in a pond. Source: Scopus.

This paper effectively opened up research on the characterization and exploitation of the unique properties of graphene to a new field of scientists, many of whom had previously been working on carbon nanotubes. Indeed, the 100 most prolific authors on graphene to date have shown a recent decline in their share of publication output on nanotubes in favor of graphene, with the latter exceeding the former since 2008. These top 100 authors appear to have a low and decreasing output on fullerenes, perhaps a carryover from the origins of the nanotube and graphene research fields.

Figure 4. Percentage shares of total article output of most prolific 100 graphene researchers on fullerenes, nanotubes or graphene. Keyword searches were conducted for fullerenes (*fullerene), nanotubes (nanotube*) and graphene (graphene*). Source: Scopus.

Graphene research boom
How does the graphene revolution feel to those working in the field? Dr Jamie Warner, Glasstone Research Fellow in Science at the Department of Materials, Brasenose College, University of Oxford comments: “The main thing I see when visiting other research groups is the massive uptake of graphene-focused research. Everyone wants to get on board the graphene revolution. Laboratories that have facilities for examining carbon nanotubes are suitable for graphene as well. So there is no real investment cost required to expand the research into graphene. […] When combined with the ease with which graphene can be obtained from scotch (sticky) tape, it is evident why output in graphene research has boomed in such a short time.

“It’s clear that many researchers are riding the graphene wave in the hope of high-impact papers. The quest for all scientists is to be among those leading the field. But there are few who are setting the trend for others to follow. In such a fast-moving field, it may be hard to stay ahead.”

Contribution to the carbon community
How has this fundamental shift in research direction affected the communities of physicists (interested in graphene’s electronic properties), materials scientists (seeking potential applications in new carbon materials) and chemists and surface scientists working on its large-scale synthesis?

Dr Warner continues: “The coalescence of nano-carbon communities hasn’t really changed that much. Groups have always collaborated worldwide; that is the nature of science. More interesting is how established groups have shifted focus or expanded. Research groups that were previously working on nanotubes are now entering the graphene field.

“Groups with established expertise in examining carbon nanotubes with high-resolution transmission electron microscopy – such as Kazu Suenaga and Sumio Iijima at the National Institute of Advanced Industrial Science and Technology (AIST) in Japan, and Alex Zettl at UC Berkeley – were able to translate their expertise directly to graphene. The large-scale growth of graphene using chemical vapor deposition (CVD) was a similar case: groups with experience and apparatus set up for CVD of nanotubes – such as Rodney Ruoff at the University of Texas at Austin – were able to modify the catalyst structure to grow graphene. Surprisingly, it was two scientists with no background in carbon nanotubes or fullerenes, Kostya Novoselov and Andre Geim, who made the biggest contribution to the field of graphene. This highlights how people from outside the immediate field can make a massive impact.”

References:

(1) Kroto, H.W., Heath, J.R., O'Brien, S.C., Curl, R.F., Smalley, R.E. (1985) “C60: Buckminsterfullerene”, Nature, vol. 318, issue 6042, pp. 162-163.
(2) Iijima, S. (1991) “Helical microtubules of graphitic carbon”, Nature, vol. 354, issue 6348, pp. 56-58.
(3) Soldano, C., Mahmood, A., Dujardin, E. (2010) “Production, properties and potential of graphene”, Carbon, vol. 48, issue 8, pp. 2127-2150.
(4) Novoselov, K.S., Geim, A.K., Morozov, S.V., Jiang, D., Zhang, Y., Dubonos, S.V., Grigorieva, I.V., Firsov, A.A. (2004) “Electric field in atomically thin carbon films”, Science, vol. 306, issue 5696, pp. 666-669.


Speech is silver, silence is golden: The challenges of scientific communication

The relationship between scientists and science journalists can sometimes be uncomfortable. Much of this stems not only from what is said, but, more importantly, what is left unsaid. Research Trends examines three cases in which scientists have taken different approaches to the challenges of communicating via the media.



Scientists face some difficult choices. They can offer complete transparency by opening their debates to the general public via the Internet, but run the risk that normal academic criticisms could lead to libel cases. Alternatively, they could refuse to discuss anything openly, with the risk of alienating the general public. Finally, they could try working closely with journalists and other communicators, allowing them to disseminate their ideas, even though this can lead to misrepresentation of ideas and results. Three recent cases have highlighted the difficulties associated with each of these approaches.

I’ll see you in court

When Simon Singh, the physicist turned science writer, published an opinion in the Guardian newspaper criticizing chiropractic therapy (1), the British Chiropractic Association (BCA) attempted to sue him for libel. Eventually, the court decided in Singh’s favor. (2)

This case highlighted risks that scientists face when the robust criticism typical of academic debate is published in the mainstream media. Within the world of academic journals, opponents have no recourse but to reasoned debate; in the public eye, however, when you run out of arguments, you can fall back on libel law. The BCA could have published their own response, providing the evidence Singh claimed was non-existent; instead, they chose to sue. For many academics, this is an unexpected response.

Storm in a teacup

In November 2009, hackers leaked internal emails belonging to members of the University of East Anglia’s Climate Research Unit. According to climate-change skeptics, the emails contained evidence of data manipulation and of attempts to suppress skeptics’ work. The skeptics and the media also claimed that the content of these emails was in the public interest.

While a subsequent Parliamentary Enquiry cleared the researchers of manipulating data to show certain results (3), public trust in climate-change science specifically, and the wider scientific community in general, has suffered.

The enquiry was, however, critical of the culture of withholding information (3), which raises an important question for scientists: to what degree should they expect their communications and information sources, which might be private, informal and/or works in progress, to be subject to public scrutiny?

Darwin award

Few theories are as widely debated in the mainstream media as Darwinism. (4) In the pursuit of “balanced” reporting, many alternative theories have been given wide coverage, including intelligent design and Lamarckism. A predecessor of Darwin, Jean-Baptiste Lamarck proposed a theory of evolution by inheritance of advantageous survival traits acquired during the parent’s lifetime. Darwinism superseded Lamarckism, specifically with respect to the acquisition of inherited traits.

Building on Darwinism, modern evolutionary theory suggests that evolution is a result of changes to the DNA sequence. When these changes help an organism to survive and reproduce, they pass into the next generation.

However, a recent study showing that chickens could pass on behavioral changes caused by stressful environmental conditions to their offspring, even though there were no changes to their DNA sequence, has been cited as confirmation of Lamarckism. (5) To anybody with a reasonable understanding of evolutionary theory, this result is completely compatible with Darwinism.

In fact, while the argument in the body of the article does not question current evolutionary theory, the headline and the introduction are rather sensationalist. Such treatment may lead many scientists to question whether they can trust journalists to treat their work responsibly, or whether they need to actively engage with the media to promote their findings in a balanced, rational and accurate manner.

Commenting on the article, Alice Tuff, from Sense About Science, a charity concerned with promoting good science and evidence for the public, said: “Science is a slow, continuous process based on uncertainty, while in contrast, the media demands quick, entertaining stories with clear answers and certainty. These different demands can seem difficult to reconcile, but if scientists’ voices are missing from the debate, they risk being replaced by others who do not have the same regard for evidence.”

Balanced voice

Scientists need to work towards resolving this uncomfortable relationship with the media; openness is required to maintain trust, and the public appreciates lively debate. For this to be effective, however, scientists need to be able to express themselves freely and without risk of libel – a threat that could cause scientists to self-censor some of their most progressive ideas.

At the same time, scientists must balance reported articles with their own communications, through interviews and opinion pieces. After all, those who actually develop and test new ideas are best placed to understand the logic and subtleties of a scientific argument and thus communicate their work accurately.

Useful links

Sense About Science

References:

(1) Singh, S. (2008) “Beware the spinal trap”, Guardian.
(2) Editorial (2010) “Time for libel-law reform”, Nature, vol. 464, issue 1104.
(3) Pearce, F. (2010) “Climategate inquiry points finger at university”, New Scientist.
(4) Darwin, C. (1859) The Origin of Species.
(5) Burkeman, O. (2010) “Why everything you've been told about evolution is wrong”, Guardian.
VN:F [1.9.22_1171]
Rating: 0.0/10 (0 votes cast)

Scientists face some difficult choices. They can offer complete transparency by opening their debates to the general public via the Internet, but run the risk that normal academic criticisms could lead to libel cases. Alternatively, they could refuse to discuss anything openly, with the risk of alienating the general public. Finally, they could try working closely with journalists and other communicators, allowing them to disseminate their ideas, even though this can lead to misrepresentation of ideas and results. Three recent cases have highlighted the difficulties associated with each of these approaches.

I’ll see you in court

When Simon Singh, the physicist turned science writer, published an opinion piece in the Guardian criticizing chiropractic therapy (1), the British Chiropractic Association (BCA) sued him for libel. Eventually, the court decided in Singh’s favor. (2)

This case highlighted risks that scientists face when the robust criticism typical of academic debate is published in the mainstream media. Within the world of academic journals, opponents have no recourse but to reasoned debate; in the public eye, however, when you run out of arguments, you can fall back on libel law. The BCA could have published their own response, providing the evidence Singh claimed was non-existent; instead, they chose to sue. For many academics, this is an unexpected response.

Storm in a teacup

In November 2009, hackers leaked internal emails belonging to members of the University of East Anglia’s Climatic Research Unit. According to climate-change skeptics, these emails contained evidence of data manipulation and of attempts to suppress dissenting work. The skeptics and the media also claimed that the content of these emails was in the public interest.

While a subsequent parliamentary inquiry cleared the researchers of manipulating data to show certain results (3), public trust in climate-change science specifically, and in the wider scientific community generally, has suffered.

The inquiry was, however, critical of the culture of withholding information (3), which raises an important question for scientists: to what degree should they expect their communications and information sources, which might be private, informal and/or works in progress, to be subject to public scrutiny?

Darwin award

Few theories are as widely debated in the mainstream media as Darwinism. (4) In the pursuit of “balanced” reporting, many alternative theories have been given wide coverage, including intelligent design and Lamarckism. A predecessor of Darwin, Jean-Baptiste Lamarck proposed a theory of evolution through the inheritance of advantageous traits acquired during a parent’s lifetime. Darwinism superseded Lamarckism, specifically rejecting the inheritance of acquired traits.

Building on Darwinism, modern evolutionary theory suggests that evolution is a result of changes to the DNA sequence. When these changes help an organism to survive and reproduce, they pass into the next generation.

However, a recent study showing that chickens could pass on behavioral changes caused by stressful environmental conditions to their offspring, even though there were no changes to their DNA sequence, has been cited as confirmation of Lamarckism. (5) To anybody with a reasonable understanding of evolutionary theory, this result is completely compatible with Darwinism.

In fact, while the argument in the body of the article does not question current evolutionary theory, the headline and the introduction are rather sensationalist. Such treatment may lead many scientists to question whether they can trust journalists to treat their work responsibly, or whether they need to actively engage with the media to promote their findings in a balanced, rational and accurate manner.

Commenting on the article, Alice Tuff, from Sense About Science, a charity concerned with promoting good science and evidence for the public, said: “Science is a slow, continuous process based on uncertainty, while in contrast, the media demands quick, entertaining stories with clear answers and certainty. These different demands can seem difficult to reconcile, but if scientists’ voices are missing from the debate, they risk being replaced by others who do not have the same regard for evidence.”

Balanced voice

Scientists need to work towards resolving this uncomfortable relationship with the media; openness is required to maintain trust, and the public appreciates lively debate. For this to be effective, however, scientists need to be able to express themselves freely and without risk of libel – a threat that could cause scientists to self-censor some of their most progressive ideas.

At the same time, scientists must balance reported articles with their own communications, through interviews and opinion pieces. After all, those who actually develop and test new ideas are best placed to understand the logic and subtleties of a scientific argument and thus communicate their work accurately.

Useful links

Sense About Science

References:

(1) Singh, S. (2008) “Beware the spinal trap”, Guardian.
(2) Editorial (2010) “Time for libel-law reform”, Nature, vol. 464, p. 1104.
(3) Pearce, F. (2010) “Climategate inquiry points finger at university”, New Scientist.
(4) Darwin, C. (1859) The Origin of Species.
(5) Burkeman, O. (2010) “Why everything you've been told about evolution is wrong”, Guardian.

15 minutes of fame

There are many ways to raise your profile, and a television appearance, reaching potentially many thousands, will certainly get your name out there. However, is this audience likely to contain researchers who might cite your work? Research Trends investigates the effect of broadcasting your research on citations.

Read more >


Every scientist believes in the importance of their own research. And, when that long desired breakthrough finally arrives, they believe the whole world will want to hear about it. What happens when the popular media actually agree and feature your research? Do other researchers pick up on it? Does it mean you will get more recognition from your peers for that breakthrough? In other words, can media coverage increase citations to your work?

Case Study: Does fame affect citations?

Every month, the President of the Royal Netherlands Academy of Sciences, Robbert Dijkgraaf, appears on a Dutch talk show to introduce up-and-coming scientists. Research Trends looked at the Scopus records for three young scientists who appeared on the show in early 2009. Previous research has shown that the effect of media coverage on citations is strongest in the first year after the media attention, when publicized research received 72.8% more citations (3).

In early 2009, Martin Jurna discussed a new microscope, Martine Veldhuizen talked about swearing in the Middle Ages and Appy Sluijs addressed climate changes in history.

A year on, Veldhuizen is not listed in Scopus. Scopus lists eight documents for Jurna, with 10 citations in 2008 (before his TV appearance) and 17 after. For this to be a direct result of his TV appearance, the increase would have to be mainly from Dutch citations, but his only Dutch citations are three self-citations; the rest are international. Sluijs has 25 articles in Scopus, which attracted 222 citations in 2008 but only 220 in 2009 – a slight fall after the show aired.
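The before-and-after percentages implied by these Scopus counts can be worked out directly. A minimal sketch in Python, using only the figures quoted above:

```python
# Signed percentage change in yearly citation counts, using the
# Scopus figures quoted in the case study (2008 vs. 2009).
def percent_change(before, after):
    """Percentage change from the pre-appearance year to the post-appearance year."""
    return (after - before) / before * 100

jurna = percent_change(10, 17)     # Jurna: 10 citations in 2008, 17 in 2009
sluijs = percent_change(222, 220)  # Sluijs: 222 citations in 2008, 220 in 2009
print(f"Jurna: {jurna:+.0f}%, Sluijs: {sluijs:+.1f}%")  # Jurna: +70%, Sluijs: -0.9%
```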

Researchers certainly do use other sources of information, aside from the traditional scholarly journals. For instance, a 1991 survey found that 57% of Dutch biologists said they use national newspapers as sources of information for their work, and 30% said they relied on Dutch television (1). Therefore, if the media cover your finding, other researchers are likely to pick up on it.

However, does this exposure also lead to more citations? Vincent Kiernan has shown that breaking news coverage by daily newspapers was associated with more frequent citations, but coverage by network television was not (2). One of his possible explanations is that people remember things better when they have seen them in writing. The results of our investigation into the effects of a television appearance on citations appear to confirm Kiernan’s finding (see Case Study), but it is of course difficult to tease out the direction of causality – does exposure bring about additional citations as a by-product of increased attention, or are inherently more citable breakthroughs selected for media coverage?

Knowledge should be shared

Citation impact aside, it is very important for scientists to share their findings with a wider audience. The results of academic research are relevant to many more people than those in the same academic subject field, and should be shared with anyone with an interest in the area. At the same time, scientists can do their bit to promote science by speaking with enthusiasm about their results on TV.

There is a risk involved, however. When the media report on scientific findings, they can misinterpret or oversimplify. Meaningful results are edited to the point that they fail to communicate the original idea, or complex findings are interpreted differently by different journalists (4). Ben Goldacre, a writer, broadcaster and medical doctor, gives this example:

“Prostate cancer screening could cut deaths by 20%” said the Guardian, and “Prostate cancer screening may not reduce deaths” said the Washington Post. About exactly the same study. (4)

So some caution is certainly warranted when interpreting popular-press articles about science.

Ultimately, scientific work that is novel or important deserves to be broadcast to the widest possible scientific and lay audiences. The question of additional citation impact might, perhaps, be seen as an optional bonus for the researchers involved.

Useful links

‘How science became cool’, Guardian

References:

(1) Willems, J. and Woustra, E. (1993) “The use by biologists and engineers of non-specialist information sources and the relation to their social involvement”, Scientometrics, vol. 28, pp. 205–216.
(2) Kiernan, V. (2003) “Diffusion of News about Research”, Science Communication, vol. 25, issue 3, pp. 3–13.
(3) Phillips, D.P., Kanter, E.J., Bednarczyk, B. and Tastad, P.L. (1991) “Importance of the lay press in the transmission of medical knowledge to the scientific community”, New England Journal of Medicine, vol. 325, pp. 1180–1183.
(4) Goldacre, B. (March 2009) “Science journalists? Don’t make me laugh”, Guardian.

Creating your own destiny

To some people, success seems to come easily, and Professor Dennis Weber is one of those people. Through playing to his strengths and making sure he enjoys everything he does, Weber has made his own opportunities and success.

Read more >


At the age of 40, Dennis Weber is already a professor of European Corporate Tax Law at the University of Amsterdam, head of the European Tax Law desk at Loyens & Loeff and a deputy judge in ’s-Hertogenbosch, the Netherlands.

He is also a regular speaker on European tax law at seminars and institutions worldwide, holds several directorships, and edits and contributes to various publications.

However, he only entered tax law by a process of elimination: he knew he did not want to work with languages, so he chose law, which he discovered he was very good at. “When you’re good at something, it’s often more fun.” He is keen to point out, though, that success is no accident. “Once you discover your strengths, you need to study hard and set new limits every day.”

Yet as a student, he did not set himself long-term career goals. He had a vague idea that he would like to be a top European tax lawyer and maybe get an article published in one good journal. Now he is a professor with 43 journal publications and three books, not to mention countless newsletters and short articles, to his name. “I had no clear vision. You just work hard at things you enjoy, and suddenly you look around and realize that you have succeeded,” he says.

He sees himself primarily as an academic, but in practice there is little difference between his academic and consulting work because the two feed into each other. As a legal consultant, he advises from an academic perspective, so the academic feeds the practical. He then uses case examples in his research, allowing the practical to feed the theoretical.

Opportunity knocks

Weber actively seeks out opportunities for interesting and useful research. For example, there was a lot of discussion on the most-favored nation principle in EU direct taxation, but no clarity and no answers. So, he set up a test case and took it to the European Court. “I also thought it would be a nice academic project,” he adds. He not only got an answer, he was also able to write a paper on the case.

He says: “Sometimes you hear people complaining that they need an organization to research a particular subject. I believe you have two choices: wait for someone else to start an organization or start one yourself. I always say that anything is possible if you try. And, this is what I did. I helped set up the Group for European and International Taxation and the EU Tax Law group. I’m the general editor of Highlights & Insights on European Tax Law because everyone was saying we needed a journal like that. And I organize seminars on hot topics and winter courses on international and European taxation.”

He has always been an initiator. “When I was a student, I got bored of the parties in Amsterdam, so I started my own. I even had my own magazine. I’m good at organizing things.” He was on holiday in Sri Lanka when the tsunami hit, so he raised money to help. “It seemed the obvious thing to do,” he says.

Academics also need to work on boosting their visibility. “If you do research but nobody knows about it, it is useless,” Weber says. “Build your network and make sure people receive your research, even if you have to send it to them.”

Making time for success

According to Weber, to achieve success in European tax law, you must be a critical thinker; have independent and new ideas, or at least be open to them; do excellent research; and always be one step ahead of your peers. You must also have passion for your subject and manage your time carefully. Seizing opportunities is one thing, but you must have the time to take on dream projects when they do come along.

He says: “Don’t waste your time and talent on less important research projects. If you are busy with unimportant work, you won’t have time for that big project. I always make time for that.”

Weber believes that quality is far more important than quantity in research, and much more likely to lead to success. This is why he deliberately sets time aside for the important questions. “You get more attention if you write about important topics, because you will initiate debate.”

Packed social life

Perhaps not surprisingly, Weber approaches his social life with the same energy he gives to his professional work. There is some overlap: he travels a lot for work, which is also a hobby. “When I travel for work, I always go out – to bars, restaurants. Tell me a city and I will tell you a good restaurant; the last one was Caprice in Hong Kong – amazing. I am also lucky to have a strong social network with my family and friends.”

But he is too busy living his life for one pastime: television. “Why would I want to watch other people’s lives? It is better to live your own life, isn’t it? To create your own life and your own opportunities.”

Tending the GM garden: does public interest fertilize or poison the field?

Public interest in a scientific field can be a double-edged sword, attracting both good and bad publicity. Research Trends investigates the effects of positive and negative public opinion on the highly controversial, and often polarizing, issue of genetically modified crops.

Read more >


Genetic modification (GM), which involves altering the genome of an organism, typically by introducing genes taken from a distantly related species, has become a highly controversial technology. Both hailed as a solution to world hunger and vilified as a potentially devastating attempt to subvert nature, its development and applications have become a polarizing and emotional issue.

GM technologies are an effective way of introducing novel traits to organisms and, with the launch of the FlavrSavr tomato (a tomato carrying a gene that prevents ripe fruit from going soft) in the mid-1990s, GM crops became a commercial reality.

Initially, advocates promoted GM technologies as the great ‘Green’ hope – with benefits for our health, productivity and economies. There was rapid uptake in a number of countries, including Canada, the USA and Japan.

However, crises linked to industrial agriculture (such as the bovine spongiform encephalopathy (BSE) epidemic in the UK) fueled concern about the potential risks of GM in Europe, and public attention rapidly became focused on the negative aspects of GM crops, including impacts on biodiversity, health issues for consumers, and consolidation of control of the food chain.

The UK, for instance, has shown continued public resistance to GM crop technologies (1): surveys suggest that only 2% of British people would be happy to eat GM food, and 50% are against it being publicly available.

Broader shifts in the developed world have also seen the increasing popularity of organic and locally sourced food, small-scale production (an approach that is in opposition to GM agriculture), and strict legislation and control of GM material in the EU.

Additional focus

The storm of negative media attention and public opinion does not seem to have had a direct effect on publication output on the development and applications of GM crops, which has grown steadily since the launch of the FlavrSavr tomato. However, these public concerns may be helping to boost research into the environmental impact of GM crops, an issue that has attracted considerable public attention and has also seen a significant rise in research output (see Figure 1).

Figure 1: While there was a steady increase in research output into the development and applications of GM crops between 1995 and 2009 (keyword search: gm and crop* and develop*), this was matched by growth in research into the environmental impacts of growing GM crops (keyword search: gm and crop* and environment*). Source: Scopus

Where GM blooms

Not everyone shares these environmental and health concerns, and many developing countries have moved quickly to build up their GM farming sectors. Brazil, for example, has significantly stepped up its GM soybean production. A major concern for developing economies, however, is that by growing GM crops they will harm their prospects of exporting food to wealthy countries with stringent restrictions and labeling rules on GM in the food chain (2).

For developing countries, GM crops are also a food-security issue, and for those with rising wealth and growing populations, GM crops offer great promise. In China, for instance, where famine is within living memory, public attention is naturally concerned with food security and this has helped fuel a huge expansion in research into the development and applications of GM crops. In 1998–1999, China was the 20th most prolific producer of research on this topic; in 2008–2009, it had jumped to fourth place (see Table 1).

2008–2009
1998–1999
Country
Number of articles
Country
Number of articles
USA
1,586
USA
1,232
Germany
752
Spain
301
Spain
668
France
281
China
513
UK
219
Italy
417
Japan
205
Japan
407
Germany
180
UK
399
Canada
140
France
390
Italy
140
Canada
317
Netherlands
81
Belgium
174
Switzerland
66
Netherlands
172
Belgium
63
Switzerland
166
Taiwan
54
Taiwan
164
Australia
48
Korea, Republic of
146
Denmark
47
India
142
India
39
Brazil
138
Sweden
34
Australia
131
Israel
33
Sweden
114
Brazil
30
Denmark
108
Korea, Republic of
30
Austria
94
China
28

Table 1 – Developing countries are steadily overtaking their developed counterparts in research output on the development of GM crops (keyword search: gm and crop* and develop*).
Source: Scopus

Because these lists can be distorted by factors such as national wealth or the size of the historical research base, a better alternative is to look at relative research growth in different countries. Even here, developing countries with increasing wealth and populations, coupled with food-security concerns, are outstripping their developed counterparts. Between 1998–1999 and 2007–2008 China’s output rose by 1,700%, India’s by 264% and Brazil’s by 360%, compared with growth of 82%, 28.7% and 39% in the UK, the USA and France, respectively, all of which were early leaders in GM research.

It seems that media interest is not only fuelling research into the effects of GM crops, it is boosting research output in regions where GM is seen as a potential answer to food-security concerns and suppressing output in countries where public opinion is more skeptical of its potential.

References:

(1) Franks, J.R. (1999) “The status and prospects for genetically modified crops in Europe”, Food Policy, issue 24, pp. 565–584.
(2) Azadi, H. and Ho, P. (2010) “Genetically modified and organic crops in developing countries: A review of options for food security”, Biotechnology Advances, issue 28, pp. 160–168.
VN:F [1.9.22_1171]
Rating: 0.0/10 (0 votes cast)

Genetic modification (GM), which involves altering the genome of an organism, typically by introducing genes taken from a distantly related species, has become a highly controversial technology. Both hailed as a solution to world hunger and vilified as a potentially devastating attempt to subvert nature, its development and applications have become a polarizing and emotional issue.

GM technologies are an effective way of introducing novel traits to organisms and, with the launch of the FlavrSavr tomato (a tomato with a gene to prevent ripe fruit from going soft) in the mid 1990s, GM crops have become a commercial reality.

Initially, advocates promoted GM technologies as the great ‘Green’ hope – with benefits for our health, productivity and economies. There was rapid uptake in a number of countries, including Canada, the USA and Japan.

However, crises linked to industrial agriculture, such as the bovine spongiform encephalopathy (BSE) epidemic in the UK, fueled concern about the potential risks of GM in Europe, and public attention rapidly became focused on the negative aspects of GM crops, including impacts on biodiversity, health issues for consumers, and consolidation of control of the food chain.

The UK, for instance, has shown continued public resistance to GM crop technologies (1): surveys suggest that only 2% of British people would be happy to eat GM food, and 50% are against it being publicly available.

Broader shifts in the developed world have also seen the increasing popularity of organic and locally sourced food, small-scale production (an approach that is in opposition to GM agriculture), and strict legislation and control of GM material in the EU.

Additional focus

The storm of negative media attention and public opinion does not seem to have had a direct effect on publication output on the development and applications of GM crops, which has grown steadily since the launch of the FlavrSavr tomato. However, these public concerns may be helping to boost research into the environmental impact of GM crops, an issue that has attracted considerable public attention and has also seen a significant rise in research output (see Figure 1).

Figure 1: While there was a steady increase in research output into the development and applications of GM crops between 1995 and 2009 (keyword search: gm and crop* and develop*), this was matched by growth in research into the environmental impacts of growing GM crops (keyword search: gm and crop* and environment*). Source: Scopus

Where GM blooms

Not everyone shares these environmental and health concerns, and developing countries have been quick to develop their GM farming sector. Brazil, for example, has significantly stepped up its GM soybean production. A major concern for developing economies, however, is that by growing GM crops they will harm their prospects of exporting food to wealthy countries with stringent restrictions and labeling rules on GM in the food chain (2).

For developing countries, GM crops are also a food-security issue, and for those with rising wealth and growing populations, GM crops offer great promise. In China, for instance, where famine is within living memory, public attention is naturally concerned with food security and this has helped fuel a huge expansion in research into the development and applications of GM crops. In 1998–1999, China was the 20th most prolific producer of research on this topic; in 2007–2008, it had jumped to fourth place (see Table 1).

2007–2008                          1998–1999
Country               Articles     Country               Articles
USA                   1,586        USA                   1,232
Germany               752          Spain                 301
Spain                 668          France                281
China                 513          UK                    219
Italy                 417          Japan                 205
Japan                 407          Germany               180
UK                    399          Canada                140
France                390          Italy                 140
Canada                317          Netherlands           81
Belgium               174          Switzerland           66
Netherlands           172          Belgium               63
Switzerland           166          Taiwan                54
Taiwan                164          Australia             48
Korea, Republic of    146          Denmark               47
India                 142          India                 39
Brazil                138          Sweden                34
Australia             131          Israel                33
Sweden                114          Brazil                30
Denmark               108          Korea, Republic of    30
Austria               94           China                 28

Table 1 – Developing countries are steadily overtaking their developed counterparts in research output on the development of GM crops (keyword search: gm and crop* and develop*).
Source: Scopus

Because these lists can be distorted by factors such as national wealth or the size of the historical research base, a better alternative is to look at relative research growth in different countries. Even here, developing countries with increasing wealth and populations, coupled with food-security concerns, are outstripping their developed counterparts. Between 1998–1999 and 2007–2008, China’s output rose by 1,700%, India’s by 264% and Brazil’s by 360%, compared with growth of 82%, 28.7% and 39% in the UK, the USA and France, respectively, all of which were early leaders in GM research.
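These relative-growth figures follow directly from the article counts in Table 1. A minimal sketch of the calculation (note that the article rounds China’s 1,732% to 1,700%):

```python
# Article counts from Table 1 (development of GM crops) for the two
# periods compared in the text: 1998-1999 and 2007-2008.
counts_1998_99 = {"China": 28, "India": 39, "Brazil": 30,
                  "UK": 219, "USA": 1232, "France": 281}
counts_2007_08 = {"China": 513, "India": 142, "Brazil": 138,
                  "UK": 399, "USA": 1586, "France": 390}

def growth_pct(old, new):
    """Percentage growth from the earlier count to the later one."""
    return (new - old) / old * 100

for country, old in counts_1998_99.items():
    print(f"{country}: {growth_pct(old, counts_2007_08[country]):.0f}%")
```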

It seems that media interest is not only fueling research into the effects of GM crops; it is also boosting research output in regions where GM is seen as a potential answer to food-security concerns, and suppressing output in countries where public opinion is more skeptical of its potential.

References:

(1) Franks, J.R. (1999) “The status and prospects for genetically modified crops in Europe”, Food Policy, Vol. 24, pp. 565–584.
(2) Azadi, H. and Ho, P. (2010) “Genetically modified and organic crops in developing countries: A review of options for food security”, Biotechnology Advances, Vol. 28, pp. 160–168.

The mobile lab

Worldwide uptake of mobile technology has been phenomenal and now sophisticated mobile apps are bringing all kinds of information to users’ fingertips. The academic community has emerged as an early adopter of mobile apps, Research Trends reports.

Read more >


The growing prevalence of mobile devices cannot be ignored: cell phone subscriptions worldwide have reached 4.6 billion and this figure is expected to increase to five billion this year (1). With a global population of around 6.8 billion (2), this means that approximately two-thirds of people now own a cell phone (3).

Between 2000 and 2008, the cell phone industry boomed, recording average year-on-year subscriber growth of 24%. Scholarly publications on the subject kept pace during this period, rising 18% per annum (see Figure 1, reflecting data 1996-2008).

In more recent years, the industry has expanded to include smart phones and other mobile devices, and many of us now expect to be able to access the information or services we need anytime, anywhere.

Figure 1: From 1996 to 2008, scientific literature (articles, reviews and conference papers) with variants of “cell/mobile/smart phone” in their titles, abstracts or keywords shows an annual growth of 17%. Research output was relatively stable in the late 1990s but started climbing steadily after 2000, with a jump of more than 30% between 2007 and 2008. Source: Scopus
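The average year-on-year rates quoted in this piece are compound growth rates. A minimal sketch of how such a rate is computed; the counts below are illustrative placeholders, not Scopus figures:

```python
def cagr(start, end, years):
    """Average (compound) year-on-year growth rate, as a percentage."""
    return ((end / start) ** (1 / years) - 1) * 100

# Illustrative placeholder: a publication series growing 17% per year
# over the 12 years from 1996 to 2008.
start_count = 1000
end_count = start_count * 1.17 ** 12
print(f"{cagr(start_count, end_count, 12):.1f}%")  # prints "17.0%"
```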

Academic apps

The mobile boom has led a growing number of actors in the Science, Technology and Medicine (STM) community – from academics and universities to publishers and database providers – to ensure their services are easily accessible from handheld devices. Academia’s uptake has been swift, and applications designed specifically for researchers and health professionals (see box below) have mushroomed. This is significant, especially as the market is still in its infancy – online app stores only began to emerge about two years ago.

As more apps are released, we are in danger of seeing an “app overload” hit the market, and busy academics will need help choosing the best apps for their needs. User reviews, visibility, popularity and usefulness will, therefore, play a part in determining the success or failure of various science apps. Among these early apps, some will prove successful and others will fade away, but it is certain that scientists’ interest in mobile apps is set to continue.

APPlied science: mobile apps for researchers on the go

Atom in a Box: an iPhone app that aids in visualizing hydrogenic atomic orbitals in quantum mechanics.
Chemical Touch: an iPhone app for detailed periodic and amino acid tables.
Epocrates: the Rx version is a free comprehensive handheld drug guide for Palm, Windows Mobile, iPhone and BlackBerry.
iCut DNA: this iPhone app allows users to search the Restriction Enzyme Database for enzymes and DNA nucleotide sequences.
MD Consult Mobile: designed for use with the iPhone, BlackBerry and other smartphones, MD Consult Mobile gives access to an extensive library of medical content.
Molecules: an application for the iPhone and iPod touch that allows users to view and manipulate three-dimensional renderings of molecules.
Netter’s Anatomy Flash Cards: an iPhone app to navigate more than 300 anatomical flash cards. Neuroscience and other versions are also available.
PubSearch Plus: a free iPhone app that allows users to navigate and search the biomedical literature database PubMed.
Papers: an iPhone app dubbed the “iTunes for literature” allowing purchase and storage of journal articles.
Scopus Alerts: an iPhone app enabling users to search and save searches in the Scopus literature database, set up and view alerts, and annotate and share documents.
Starmap: a sophisticated interactive iPhone app claiming to be a “portable planetarium”.
