Issue 34 – September 2013

Articles

Military medicine and its impact on civilian life

Dr. Gali Halevi investigates how military medical inventions have benefited civilian health care. How have they developed through the years?



It has long been said that “necessity is the mother of invention”: when the need for something becomes imperative, you are forced to find ways of achieving it (1). What better example of this saying than inventions conceived at times when human lives were at stake and circumstances were at their most challenging?

In a comprehensive article published by the Phoenix Patriot (2), the authors Keely Grasser and Teresa Bitler examined some of the medical innovations that grew out of military medical research and were later widely adopted in civilian medicine. These innovations were born from the need for quick and efficient medical care for soldiers on the battlefield, as well as for ongoing rehabilitation care afterwards. Examples given by the authors include advanced prosthetics and reconstructive surgery methods. In an overview, Teresa Bitler lists six medical innovations that were conceived in times of war and, several decades later, had become widely adopted in civilian medicine (see Image 1).

Image 1 - A History of Military Contributions. Source: The Phoenix Patriot, Winter 2012

In this article we examine these innovations from the perspective of scientific output in order to trace their development through the years. Each of these products and methods was searched for in Scopus, and the results were analyzed and discussed from several perspectives:

  1. Growth over time: tracing the number of articles discussing these applications through the years.
  2. Regional contributions: in which countries these articles are published.
  3. Affiliations: what types of affiliations publish articles on these topics.
  4. Collaborations: mapping the collaborations between army or government institutions and civilian institutions such as universities or hospitals.

In order to create the maps, we limited the data to publications that have at least one instance of collaboration with an army institution, and presented the main links between them.
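As an illustration of this filtering step, the sketch below keeps only publications with at least one military affiliation and weights each inter-institutional link by its number of co-occurrences. It is illustrative only: the sample records are invented rather than actual Scopus data, and the name-based test for a military affiliation is an assumption made for the example.

```python
from collections import Counter
from itertools import combinations

# Hypothetical sample records; real data would come from a Scopus export.
publications = [
    {"affiliations": ["US Army Institute of Surgical Research",
                      "University of Miami"]},
    {"affiliations": ["University of Miami",
                      "Medical College of Georgia"]},
    {"affiliations": ["US Army Institute of Surgical Research",
                      "Medical College of Georgia",
                      "University of Miami"]},
]

# Crude name-based heuristic for this example; a real analysis would use
# curated affiliation identifiers.
MILITARY_MARKERS = ("Army", "Navy", "Walter Reed")

def is_military(affiliation):
    return any(marker in affiliation for marker in MILITARY_MARKERS)

# Keep only publications with at least one military affiliation.
military_pubs = [p for p in publications
                 if any(is_military(a) for a in p["affiliations"])]

# Weight each link by the number of publications the two affiliations share.
links = Counter()
for pub in military_pubs:
    for a, b in combinations(sorted(set(pub["affiliations"])), 2):
        links[(a, b)] += 1
```

The resulting `links` counter maps each pair of affiliations to a collaboration weight, which corresponds to the line widths in the network maps.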

The topics in this article are traced backwards in time by counting the number of publications indexed in Scopus that contain a concept term in the article title or abstract. Scopus contains about 24 million records with references back to 1996, and 20 million records pre-1996 which go back as far as 1823. It must be noted that Scopus has full coverage of source journals from the publication year 1996 onwards; prior to 1996 Scopus’ journal coverage is more limited, but large enough to obtain an indication of the trend in the number of articles containing a particular term.  Since a significant expansion of coverage took place in 1996, the best way to read the trend figures presented in this paper is by splitting the curves into a pre-1996 part and a post-1996 part, and to identify trends within each part rather than across parts.
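To make the pre-/post-1996 reading concrete, the following sketch splits a year-to-count series at 1996 and computes an average annual growth rate within each part separately, rather than across the break. The yearly counts are invented for illustration; they are not the Scopus figures.

```python
# Hypothetical yearly publication counts for one search term.
counts = {1990: 40, 1992: 55, 1994: 80, 1998: 300, 2000: 360, 2002: 450}

def segment(counts, start, end):
    """Restrict a year -> count mapping to the half-open range [start, end)."""
    return {y: c for y, c in counts.items() if start <= y < end}

def average_annual_growth(series):
    """Mean of year-on-year relative changes between consecutive observed years."""
    years = sorted(series)
    changes = [
        (series[b] - series[a]) / series[a] / (b - a)
        for a, b in zip(years, years[1:])
    ]
    return sum(changes) / len(changes)

# Trends are read within each coverage regime, never across the 1996 break.
pre_1996 = segment(counts, 1823, 1996)
post_1996 = segment(counts, 1996, 2013)
growth_pre = average_annual_growth(pre_1996)
growth_post = average_annual_growth(post_1996)
```

Comparing `growth_pre` and `growth_post` avoids mistaking the 1996 coverage expansion for a genuine surge in research activity.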

Triage

As mentioned in Bitler’s overview (see Image 1), triage has been used since World War I as a process of determining the priority of patients' treatment based on the severity of their condition. It rations treatment efficiently when there are insufficient resources to treat everyone immediately (3). As can be seen in Figure 1, there is steady growth in articles discussing triage from the 1970s onward. This can be explained by the fact that the method was not only widely adopted in civilian medicine internationally, but was also developed further into several sub-methods and specific applications, with each country adjusting it to fit its own workflows. In addition, advanced technology has introduced new ways to prioritize patient injury levels (4).

Figure 1 - Publications on triage over the years (1947 – 2012)

An overview of countries with at least 100 scientific articles on this topic shows high output from North America, China, India, Australia and Brazil, and, in Europe, from the UK, Germany, France, Italy, Sweden and the Netherlands. Israel should also be noted here, with over 100 articles on the subject (see Image 2).


Image 2 - Overview of scientific publications on triage by author country
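Producing a country overview like the one above is essentially a matter of aggregating article counts per author country and applying the 100-article threshold. A minimal sketch, using made-up counts rather than the actual Scopus figures:

```python
from collections import Counter

# Hypothetical article counts per author country (not the real Scopus figures).
country_counts = Counter({
    "United States": 2400,
    "United Kingdom": 450,
    "Israel": 120,
    "Iceland": 12,
})

THRESHOLD = 100  # only countries with at least 100 articles are mapped

# Countries below the threshold are simply omitted from the map.
mapped = {country: n for country, n in country_counts.items() if n >= THRESHOLD}
```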

Finally, an examination of affiliations that published at least 80 articles on the subject unsurprisingly shows that both military institutions and hospitals are among the top publishers of articles on this topic, followed by medical schools at universities (see Figure 2).

Figure 2 - Top affiliations publishing on triage

The collaboration patterns between military and civilian institutions coincide with the patterns found in the affiliation analysis overall (see Image 3). In this map, military affiliations are highlighted in red and universities in blue. The size of the circles and the width of the lines indicate prolific publication rates and strong collaborations, respectively. The US Army Institute of Surgical Research has a central role in the network: it collaborates with medical universities and hospitals mainly in the USA, but also in Canada. Other military institutions seen on the map include the Walter Reed Medical Center, the National Naval Medical Center, the US Navy and the Brooke Army Medical Center. As can be seen in the network, the role of universities as research collaborators is fundamental. The Uniformed Services University of the Health Sciences, the University of Miami and the Medical College of Georgia are among the main co-publishers of triage-related research collaborating with military research institutions.

Image 3 - Collaborative network of triage related publications

Penicillin

The discovery of penicillin dates back to 1928, with mass production and use following in the 1940s during World War II. A search for scientific articles on the topic produced almost 150,000 results, with the first publication in 1927 followed by a varied pattern over the years (see Figure 3). It should be noted that the years with peaks in publications on penicillin are also wartime years.

Figure 3 - Publications on penicillin over the years (1927 – 2012)

The regional output for countries with at least 100 publications on this topic shows the United States, Canada, the United Kingdom, Germany, France, Spain, Italy, Japan and India as the top publishers of scientific research in this area (see Image 4). The development, production and effects of penicillin, and of antibiotics in general, on both humans and animals remain an ongoing research topic in many countries; this is due to the increasing number of penicillin- (and other antibiotic-) resistant diseases, and to the increased use of antibiotics on animals and in agriculture (5).



Image 4 - Overview of scientific publications on penicillin by author country

Due to the large number of articles, we chose to display an overview of affiliations with at least 300 publications on this topic. It shows the VA Medical Center and the Centers for Disease Control, both government institutions, at the top of the list, followed by the pharmaceutical company GlaxoSmithKline. Universities’ medical schools and departments play a major role in this research arena (see Figure 4).

Figure 4 - Top affiliations publishing on penicillin

The collaborative network for penicillin-related research is enormous (see Image 5) due to the sheer number of publications. For the network depiction we highlighted the main military institutions in red and the universities and hospitals in blue. What is evident from the map is that the vast majority of the research is academic and hospital-based, depicted by the blue areas. Yet, despite the sheer size of the network, the VA Medical Center emerges as one of the major affiliations publishing on the subject, with a complex network of collaborations and co-authorships, both domestic and international.

Image 5 - The overall collaborative network of penicillin related publications

Blood Banking

As mentioned in the Phoenix Patriot article, blood banking was conceived in the 1940s. A look at the first publications on this topic shows that the first two articles appeared in 1939 (see Figure 5).

Figure 5 - Publications on blood banking over the years (1939 – 2012)

Several discoveries and developments in this field are visible on the publication curve. The first peak in publications occurs in the 1950s, when plastic bags for containing collected blood were developed. This was followed in the 1960s by the development of cryoprecipitate, a product obtained from slowly thawed frozen plasma that is rich in Factor VIII and was found to have greater clotting capacity. AIDS research in the 1980s shows an increase in publications, followed by nucleic acid testing (NAT), which allowed for better detection of HIV and the hepatitis B virus in donated blood (6).

Nowadays, blood banks are an integral part of many countries’ healthcare systems. An overview of countries where at least 100 articles were published shows certain global concentrations of research on the topic, mainly in North America, Europe, India, Australia and Brazil (see Image 6).

Image 6 - Overview of scientific publications on blood banking by author country


An affiliation examination of the top institutions publishing research on this topic shows that a large portion of the research is produced by blood service institutions such as the Red Cross, and other national blood institutes (see Figure 6).

Figure 6 - Top institutions publishing on blood banking

The collaborative network in Image 7 shows the Walter Reed Army Institute, the US Army Institute of Surgical Research and the VA Medical Center as the main military institutions publishing on this subject. The network also demonstrates the major role of the global Red Cross and its branches as researchers in this field: this international organization collaborates with universities and hospitals around the world.

Image 7 - Collaborative network of blood banking related publications


Wound Adhesives

Wound adhesives are a type of glue that can be used as first-aid wound treatment. The timeline of research output on wound adhesives, or forms of cyanoacrylate, shows the first article published in 1962. In the 1970s, N-butyl-2-cyanoacrylate was developed: the first adhesive to be less toxic while still having strong bonding qualities. However, this material was still fragile and prone to cracking. In the late 1990s a new bonding agent, 2-octyl-cyanoacrylate, was invented. This compound causes less skin irritation and has improved flexibility and strength; as a result, the FDA approved 2-octyl-cyanoacrylate for use on patients (see Figure 7).

Figure 7 - Publications on wound adhesives over the years (1926 – 2012)

An overview of the country output shows that research on this topic is led by the USA, with double the number of articles compared to any other country. Research is also conducted in Japan, China and India, as well as in several European countries such as the UK, Germany and France. It should be noted that Turkey is also one of the main contributors to this research topic, with over 100 articles on the subject (see Image 8).


Image 8 - Overview of scientific publications on wound adhesives by author country

Because of the large amount of data, we chose to display institutions with at least 15 articles on the subject. From Figure 8 it can be seen that the top institutions publishing on this subject are universities rather than any other type of institution.

Figure 8 - Top institutions publishing on wound adhesives

The collaborative network in the area of wound adhesives shows that publications on this subject are produced by the VA Medical Center, the Brooke Army Medical Center and the US Army Institute of Surgical Research (see Image 9). The University of Texas Health Science Center plays an interesting role, as it appears to connect the research carried out by the VA with that of the US Army Institute of Surgical Research, which forms a cluster with the Brooke Army Medical Center.

Image 9 - Collaborative network of wound adhesives related publications


Conclusions

Our analysis of the publication trends in triage, penicillin, blood banking and wound adhesives shows that these developments did indeed begin in times of war. Military events can be seen as triggers for the development of these medical inventions, which is clearly illustrated by the trend figures: they tend to show peaks in years that can be linked to particular wars, such as World War II, the war in Vietnam, and even those in Afghanistan and Iraq.

It can also be seen how, in later years, these inventions entered the medical research arena, and how major discoveries create peaks in publications through the years. We can therefore trace a close connection between military medicine and civilian medical research. Many of the researching institutions are medical universities and hospitals, which developed the initial findings into medicines and systems adopted at national levels. Military medical innovations continue to develop over time, thus advancing science and human health care. This is seen clearly in the cases of wound adhesives and penicillin.

Military methods, especially those related to trauma care, are being adopted by civilian hospitals and becoming a part of mainstream emergency health care. This phenomenon is seen in the development of triage systems and blood banking all over the world.

References

(1) Cambridge Dictionary, “Necessity is the mother of invention”. Available at: http://dictionary.cambridge.org/dictionary/british/necessity-is-the-mother-of-invention [Accessed 23 July 2013]
(2) Grasser, K. and Bitler, T., “Military Medicine: A History of Innovation”, The Phoenix Patriot, Winter 2012. Available at: http://phoenixpatriotmagazine.com/article/winter12/a-history-of-innovation/
(3) Wikipedia, the Free Encyclopedia, “Triage”. Available at: http://en.wikipedia.org/wiki/Triage [Accessed 23 July 2013]
(4) Mackersie, R.C. (2006) “History of trauma field triage development and the American College of Surgeons criteria”, Prehospital Emergency Care, Vol. 10, No. 3, pp. 287-294.
(5) Todar, K., “Bacterial Resistance to Antibiotics”, Todar’s Online Textbook of Bacteriology. Available at: http://textbookofbacteriology.net/resantimicrobial.html [Accessed 23 July 2013]
(6) The Community Blood Center, “The History of Blood Banking”. Available at: http://givingblood.org/about-blood/history-of-blood-banking.aspx [Accessed 23 July 2013]


The Becker Medical Library Model for assessment of research impact – an Interview with Cathy C. Sarli and Kristi L. Holmes

Dr. Gali Halevi interviewed Cathy C. Sarli and Kristi L. Holmes about the Becker Model, a framework for tracking the diffusion of research outputs and activities to locate indicators that demonstrate evidence of biomedical research impact.




The Becker Medical Library Model for assessment of research impact is a framework for tracking diffusion of research outputs and activities to locate indicators that demonstrate evidence of biomedical research impact. It is intended to be used as a supplement to publication analysis. Using the Becker Model in tandem with publication analysis provides a more robust and comprehensive perspective of biomedical research impact. The Becker Model also includes guidance for quantifying and documenting research impact as well as resources for locating evidence of impact.

Could you share some of the background or challenges that enticed you to develop this model?

The project resulted from an ex post study, completed in 2007, of the research outputs and activities of a large clinical trial. We went beyond citation analysis and located many examples of research outcomes that were not discernible from publication data. Citation analysis alone does not reveal whether research findings result in new diagnostic applications, a new standard of care, changes in health care policy, or improvements in public health. We discovered that the diffusion of research outcomes transcends publication data: one must go beyond it to provide a full narrative of meaningful health outcomes.

After the project was completed we had approximately 100 examples of indicators of research impact, including some not related to the study. We decided to create a listing of these examples for others to use, and called this listing the Becker Model. Since then, we have added many more examples and continue to do so, updating the model about every six months. We have also included examples of various research outputs and activities to make it easier for people to apply the model.

For a full description of the genesis of the Becker Model, see Sarli, C.C., Dubinsky, E.K. and Holmes, K.L., “Beyond citation analysis: a model for assessment of research impact”, J Med Libr Assoc, 2010 Jan; 98(1):17-23.

Was the model built to be used more by researchers or by evaluators?

The Becker Model (and related information on the website) is intended for any audience to use as needed for their purposes, and carries a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. We encourage anyone to use, modify, and/or adapt the model as they see fit, as long as there is no commercial use, and we ask that people notify us of their use so that we can better understand how the model is being applied. The model has been used by researchers examining their own work; by evaluators trying to better understand the impact of research efforts at the individual and group level; by agencies that wish to see the return on investment of funding awards; and by librarians supporting their faculty and evaluation groups on campus, as well as those beginning to provide services and consultations in this area.

We welcome suggestions for new indicators of impact. It has been our experience that new indicators tend to reveal themselves organically during the process of applying the model to an individual or group. When the model was first launched in March 2009, there were approximately 100 indicators of impact examples. To date, there are over 350 examples and the list is updated at least twice a year. Our insider joke is that the Becker Model is in “perpetual beta.”

How do the different aspects of the model work together as indicators of quality?

We do not assign any type of quality measurement to the indicators as noted in the Becker Model. The indicators of research impact examples are simply examples of biomedical impact with no differentiation as to ranking or significance. The indicators are grouped under various stages, or pathways, based on the research cycle with some overlap between the stages. These pathways will vary based on the discipline, but there are some commonalities across all disciplines.

Is the model modular? Can parts of it be used or does it have to be used as a whole in order to capture quality in an accurate manner?

Users are welcome to use any part, or all, of the Becker Model for their purposes. The definition of quality is at the discretion of the user, as is the assignment of any ranking or significance to specific indicators of research impact.

How do you balance between the quantitative and qualitative parts of the model? Are they given equal weight? Can an institution/researcher decide the appropriate weights for their evaluation?

Absolutely. Users are welcome to assign their own ranking system for specific indicators of research impact based on the outcomes that match their research or program goals. Some organizations may prefer to assign a specific indicator a higher level of significance than others. We recommend that any report using the Becker Model include both quantitative and qualitative indicators of research impact, with multiple examples of each. No single example or metric should be used to demonstrate research impact.

The model has been running since 2009: could you tell us about some of the successes you observed in its use?

We are just tickled by the response and feedback to the model. It has far surpassed our expectations. We find that the list of indicators of impact can be quite useful as a checklist for scholars and investigators as they review their project and prepare for grant progress reporting, tenure/promotion, or for departmental reports. The checklist helps jog memories and identify outcomes from their own research, as well as ideas for using publication data to tell a story about their research. Other institutions and agencies have reported using the model for their evaluation projects, and others have adapted it for different disciplines such as Anthropology, Archaeology, Nanotechnology, Agriculture, and more. But most of all, the Becker Model has been helpful as a means of engaging users to think about ways to report on research outcomes beyond publication data.

Can you see the model adapted to other disciplines? If yes, to which?

Yes, to a point. The Becker Model was developed with an emphasis on indicators of outcomes specific to biomedical research. However, some indicators based on publication data are universal among disciplines. A number of the Strategies for Assessing Research Impact are applicable to a wide variety of disciplines, as well.

10 Strategies for enhancing research impact

Consider these strategies for enhancing the visibility and impact of your research from the authors of the Becker Model. The strategies are divided into three categories: Preparing for Publication, Dissemination, and Keeping Track of Your Research. A full listing of the strategies can be found at https://becker.wustl.edu/impact-assessment/strategies.

  1. Authors should use the same variation of their name consistently throughout their academic careers. If the name is a common one, consider adding your full middle name to distinguish it from other authors. Authors should also use a standardized institutional affiliation and address, using no abbreviations. Consistency enhances retrieval. See Establishing Your Author Name for more information.
  2. Present preliminary research findings at a meeting or conference, and consider making your figures available through FigShare and your presentation materials available in your institutional repository or on a sharing site such as SlideShare, so that others may discover and share your materials post-event.
  3. Consider the desired audience when choosing a journal for publication. Topic-specific journals, or journals published by a specialized society, may disseminate research results more efficiently to a desired audience than general science journals. More specialized journals, even with a potentially smaller readership, may offer an author broader dissemination of relevant research results to their peers in their specific field of research. For more information on selection of a journal for publication, see Preparing for Publication: Factors to Consider in Selecting a Journal for Publication.
  4. Submit the manuscript to a digital subject repository such as arXiv, or to your institutional repository.
  5. Enrich your visibility through press releases and an established online presence. Issue press releases for significant findings and partner with the organizational media office to deliver findings to local media outlets. Set up a website devoted to the research project, and post manuscripts of publications, conference abstracts, and supplemental materials such as images, illustrations, slides, specimens, and progress reports on the site.
  6. Share the data generated by the research and deposit it in appropriate repositories. One study, “Sharing detailed research data is associated with increased citation rate”, demonstrated a correlation between shared research data and increased citation impact. Consult data management guidelines for suggestions on organizing, managing, and sharing your data. The University of California Curation Center of the California Digital Library provides a comprehensive set of guidelines in their DMP Tool.
  7. Leverage social media: start a blog devoted to the research project, communicate information about your research via Twitter, and contribute to a wiki in your area of work or research.
  8. Keep your profile data up to date on social networking sites aimed at scientists, researchers and/or physicians, and inquire about these tools at your institution or within your organization. Some highly adopted enterprise-level platforms providing verifiable data about scholars include VIVO, Profiles, and SciVal Experts. These institutional efforts leverage structured data about researchers to provide current and validated data which can be used to visualize your efforts and identify new resources and collaborators.
  9. Register for an ORCID iD and curate your ORCID record with your scholarly contributions. ORCID identifiers provide you with a way to differentiate yourself and highlight your professional activities.
  10. Become acquainted with how your work is being used in the online world via bookmarks and links to the article or data, conversations on Twitter and in blogs about the work, and various methods of sharing and storing content. Some great tools that provide this type of information for articles and individuals include Altmetric and ImpactStory.

For the model’s website please visit: https://becker.wustl.edu/impact-assessment/model

Contacts:

Cathy Sarli
Scholarly Communications Specialist
sarlic@wusm.wustl.edu

Kristi Holmes
Bioinformaticist
holmeskr@wusm.wustl.edu


Charlatans and copy-cats: Research fraud in the medical sector

In this article, Steven Scheerooren discusses the different types of research misconduct that can occur in the medical sector. Why does misconduct occur, what problems does it cause, and what can be done to prevent it?



Although fraudulent practices may occur in many areas of research, fraud is arguably most problematic in the field of medicine, where the outcomes of studies often directly influence health-care policies and, by extension, the wellbeing of many. For the purpose of this article, fraud is defined as the intentional deception of others for personal gain. Some authors have pointed out that when it comes to research it can be difficult to distinguish fraud from simple errors, misunderstandings or incompetence (1, 2). After all, errors can sneak in through a faulty study design, improper conduct of research or a biased interpretation of the outcome (3, 2). Subsequent publications of such studies might be heavily flawed, but it would be rash to accuse researchers of fraud on these grounds. An equally appropriate term for unacceptable behavior is ‘research misconduct’, defined by the US Federal Research Misconduct Policy as “fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results” (4). This article first provides a brief overview of the various types of research misconduct and why such misconduct takes place; the second half focuses on the consequences of misconduct and what can be done to counter it.


Misconduct’s many faces

There are many forms of what one might consider unethical conduct, ranging from seemingly harmless ‘honorary authorship’ to faking entire studies. One way to categorize misconduct is to distinguish between ‘publishing misconduct’ and ‘research misconduct’ (see quote), whereby the latter can then be split into fabrication, falsification and plagiarism. However, I here suggest dividing it into three categories. Firstly, there is data-related misconduct. In its minor form, this is the misuse of data through inappropriate statistical analyses, or the manipulation thereof. One could deliberately present one’s findings in a misleading way, or draw conclusions that are not supported by the test results. Researchers might also choose to present only a selection of their data to make it support their hypothesis. This falls into the category of falsification, which besides the omission of data also includes the ‘adjusting’ of data and the manipulation of research equipment or processes (3). Some researchers may even go so far as to repeat an experiment until the desired outcome is reached (1). It should be noted, however, that falsification can be a grey area. Baerlocher et al. mention that some alterations to data - e.g. the deletion or minimization of outliers - may not be considered appropriate by a statistician, but would not be deemed ‘fraudulent’ (5). One severe form of unethical creativity with data is fabrication: producing a study without conducting any actual research, e.g. inventing patients and test subjects that never existed, and describing tests that did not take place.

“In essence there are two ways of misconduct. One has to do with misconduct in the publishing process, the other has to do with misconduct in the research process. The difference between the two is, broadly speaking, that when we talk about publishing misconduct, then the data as such can be ok, but the way they are published is unethical. When it comes to research misconduct then there is great doubt about the validity of the data as such, and that can be very misleading. [I]n general therefore research misconduct is much more serious than publishing misconduct.”

Dr. Jaap van Harten, Executive Publisher, Elsevier (17)


Secondly, we have misconduct related to authorship. From a few lines or a paragraph to stealing entire articles and publishing them under one’s own name, plagiarism – i.e. copying ideas or data without giving due credit – comes in all sizes. This includes self-plagiarism: reusing parts of previous research in subsequent publications. Plagiarism is possibly the most common form of research misconduct, but it is certainly not the only misconduct when it comes to authorship. Another practice in academic writing is false (a.k.a. ‘gift’ or ‘honorary’) authorship: listing someone who did not in fact contribute to the research as a co-author. Although not as grave as plagiarism, a recent survey showed that false authorship was considered unacceptable behavior by most respondents (6). On the other side we have what is known as ‘ghost authorship’ (also ‘ignored’ or ‘neglected’ authorship): not listing someone who did contribute to the research.

Finally, there is misconduct related to publications. Fairly common is the practice of ‘shot-gunning’, a.k.a. producing so-called dual, multiple, replicate, repetitive, or secondary publications. Some researchers choose to submit manuscripts to more than one journal (often within the same period, to avoid detection), or to publish the same article in more than one language (7). It should be clear that the latter is as inexcusable as the former when done with the intent of merely increasing one’s publication output. Translating an article into English to reach a wider audience can be perfectly acceptable if the author clearly indicates this in both papers by means of a reference (8). However, in a study by Schein and Paladugu, none of the examined duplicate publications contained such a reference (7). The argument of ‘trying to reach a different audience’ can also be heard in cases where both articles are written in the same language. For example, an author might publish his or her article in both a surgical and a non-surgical specialty journal. Here Schein and Paladugu raise the valid question of whether this still makes sense at all in an age where publications can readily be found in online databases such as Scopus. Another form of publication-related misconduct is ‘templating’ or ‘salami-slicing’: publishing a study in several parts, sometimes containing a substantial part of a previous publication (1, 7).

 

Pressure all around

So why do people commit research fraud? One factor often mentioned in connection with scientific misconduct is the ‘publish or perish’ atmosphere that appears to have become the norm at research institutes the world over (7, 9, 10). Institutes as well as individual researchers may feel they are judged by the quantity rather than the quality of their publications. Although one could argue this is no excuse for dishonest behavior, such pressure can increase the likelihood of researchers taking unethical steps to boost their output and reputation (1, 9). In addition, the desire to be published in a well-known journal may be so strong that a researcher is prepared to manipulate research data (9, 10).

“Promotion, appointments, and academic careers are really relying on publication and while that is in some ways good for the publishers and opens up some opportunities, I think there is always a concern that if the pressure is too high it will create an atmosphere in which the temptation to commit research or publication misconduct is increased.”

Dr. Elizabeth Wager, Council member of the Committee on Publication Ethics (COPE) (18)

 

Another commonly cited factor underlying fraudulent behavior is a conflict of interest, usually of a financial nature. This is especially true for clinical trials sponsored by pharmaceutical companies (3, 9). The former editor of the BMJ, Richard Smith, even went so far as to say that “medical journals are an extension of the marketing arm of the pharmaceutical industry” (10). In this light, the temptation to falsify or fabricate data in order to make a sponsor look good is always present. A related cause of misconduct is the publishing bias of journals or editors (3, 8, 10). It has been suggested that studies with neutral or negative outcomes may be bypassed in favor of articles that ‘advance’ science rather than confirm the status quo (3). Plagiarism, on the other hand, is blamed on a variety of factors. Besides a desire for fame, some of the suggested reasons are “cultural differences, lack of good command of the English language, or (…) unawareness, misconception or misunderstanding of plagiarism” (11).

Finally, the fairly low chance of getting caught may also play a role. The quality of science lies in the reproducibility of its results. Unfortunately, when faced with difficult or costly studies, institutes and researchers alike might see no reason to replicate tests once they have a publication with a favorable outcome. In this way, fraudsters can get away with all kinds of misconduct: “Once successful fraud has been performed the temptation to repeat it may well be very strong” (9).

 

Ripple effects

When fraudulent data is published, its effects can last for years after the fraud has been discovered. The slow process of replicating tests or validating data means that an article might not be retracted before other researchers have started citing it in their own publications, thus compounding the errors (10, 12). This can lead to situations where people have to spend more time “trying to confirm the work of others rather than building on it” (10). For the original author, the consequences can be severe: once a single publication has been discredited, doubt is cast on the scientific and ethical validity of all of that researcher’s previous and subsequent publications (8, 12, 13). This is also damaging to the institution that employed the researcher. One serious concern here is that institutions may wish to avoid publicity when misconduct occurs (9), or may even refuse to investigate alleged misconduct (12). A scandal thus avoided will not tarnish their name, but it comes at a price: good research may be tainted by fraudulent data. The problem extends well beyond the offending author and the affiliated institution. Co-authors may also find their publications questioned, suffering irreparable damage to their reputations (10). The same goes for the reviewers and editors of a journal, whose credibility will be doubted. Patients may already have been treated on the basis of falsified or fabricated test results, their health put at risk by unethical research. Last but certainly not least is the loss of public trust in science. When a case of misconduct becomes known to the public, it can lead to distrust not only of a particular researcher, but of science in general (3, 10).

 

The antidote

What can we do against scientific misconduct? Generally speaking, the choice is between punishment and prevention, though a third possibility is also given below.

a)      Punishment. If the misconduct is considered only a minor transgression, a warning may suffice. For example, earlier this year a PhD candidate from Leiden University was given a second chance after it was discovered that his thesis contained plagiarized material. The same candidate was later expelled, however, when plagiarism was again found in the revised version (14). For a researcher, similar measures may include suspension from his or her current position, revocation of licenses (in the case of medical researchers), or publishing bans (7, 10). Other options are imposing fines or, when a case of fraud is deemed severe enough to warrant intervention by the law, even prosecution or imprisonment (10). Of course, one measure that should always be taken is the retraction of suspect publications (12). Regulatory organizations such as the Committee on Publication Ethics (COPE) and the Office of Research Integrity (ORI) have been set up to monitor, investigate and remedy scientific misconduct (3, 10).

b)      Prevention. Since a fair portion of misconduct is blamed on a lack of awareness of ethics in science, better education or guidance may be the most effective prevention tool (11). To this end many institutions have adopted guidelines such as the now internationally recognized Good Laboratory Practice (GLP): a set of principles that provides a framework within which laboratory studies are planned, performed, monitored, recorded, reported and archived (15). Similar frameworks are Good Clinical Practice and Good Scientific Practice. Whereas these focus on the prevention of research misconduct, there are of course also guidelines to prevent misconduct related to authorship and the publishing process, such as the Vancouver guidelines or the Publishing Ethics Research Kit (PERK) designed by Elsevier (16).

c)       Correction. There may also be a third option, a middle way between prevention and punishment: besides responding after misconduct has occurred, or attempting to ensure it does not occur, one can intervene the moment potential misconduct is suspected or found. Since it can be very difficult for reviewers or editors to detect fraud in manuscripts, most research misconduct is brought to light by whistleblowers (10). Ideally, when a collaborating researcher or research assistant disagrees with a colleague’s conduct – in whichever phase of the research or publication – they should be able to voice their concerns openly. Unfortunately, this is not yet a reality. Many collaborators keep silent or simply withdraw from a study, instead of reporting (suspected) misconduct to the proper authorities (5). The problem is that whistleblowers simply do not feel safe. If the scientific literature is to remain “a record of the search for truth” (12), this will have to change.

“Usually, whistleblowers in any arena have a bad time, often being disregarded, suspended, shamed or threatened by peers or managers. Furthermore, whistleblowers face economic and emotional deprivation, victimization, and personal abuse and they receive little help from statutory authorities.” (10)

 

References

(1)    Lohsiriwat, V., Lohsiriwat, S. (2007) “Fraud and Deceit in Published Medical Research”, Journal of the Medical Association of Thailand, Vol. 90, No. 10, pp. 2238-2243
(2)    DeMets, D.L. (1997) “Distinctions between Fraud, Bias, Errors, Misunderstanding, and Incompetence”, Controlled Clinical Trials, Vol. 18, pp. 637-650
(3)    Purchase, I.F.H. (2004) “Fraud, errors and gamesmanship in experimental toxicology”, Toxicology, Vol. 202, No.1-2, pp. 1-20
(4)    http://ori.dhhs.gov/definition-misconduct
(5)    Baerlocher, M.O., O’Brien, J., Newton, M., Gautam, T., Noble, J. (2010) “Data integrity, reliability and fraud in medical research”, European Journal of Internal Medicine, Vol. 21, pp. 40-45
(6)    Vuckovic-Dekic, Lj., Gavrilovic, D., Kezic, I., Bogdanovic, G., Brkic, S. (2011) “Science Ethics Education. Part I. Perception and attitude toward scientific fraud among medical researchers”, Journal of BUON, Vol. 16, pp. 771-777
(7)    Schein, M., Paladugu, R. (2001) “Redundant surgical publications: Tip of the iceberg?”, Surgery, Vol. 129, No. 6, pp. 655-661
(8)    Van der Heyden, M.A.G., Van de Ven, T.D., Opthof, T. (2009) “Fraud and misconduct in science: the stem cell seduction”, Netherlands Heart Journal, Vol. 17, No. 1, pp. 25-29
(9)    Illingworth, R. (2005) “Fraud and other misconduct in biomedical research”, Neurocirugía, Vol. 16, pp. 297-300
(10) Karcz, M., Papadakos, P.J. (2011) “The Consequences of Fraud and Deceit in Medical Research”, Canadian Journal of Respiratory Therapy, Vol. 47, No.1, pp. 18-27
(11) Brkic, S., Bogdanovic, G., Vuckovic-Dekic, Lj., Gavrilovic, D., Kezic, I. (2012) “Science ethics education: Effects of a short lecture on plagiarism on the knowledge of young medical researchers”, Journal of BUON, Vol. 17, pp. 570-574
(12) Sox, H.C., Rennie, D. (2006) “Research misconduct, retraction, and cleansing the medical literature: Lessons from the Poehlman case”, Annals of Internal Medicine, Vol. 144, No.8, pp. 609-613
(13) Myburgh, J. (2011) “CHEST and the impact of fraud in fluid resuscitation research”, Critical Care and Resuscitation, Vol. 13, No. 2, pp. 69-70
(14) http://www.mareonline.nl/archive/2013/03/13/plagiarist-exposed
(15) http://www.mhra.gov.uk/Howweregulate/Medicines/Inspectionandstandards/GoodLaboratoryPractice/Structure/index.htm
(16) http://www.ethics.elsevier.com/toolsOfTheTrade.asp
(17) http://www.ethics.elsevier.com/ethicsToolkit.asp#video1
(18) http://ethics.elsevier.com/pdf/ithenticate-PressureToPublish.pdf

 

 


A funding profile of the NIH

In this contribution Matthew Richardson uses data publicly reported by the National Institutes of Health (NIH) to examine the type of research it funds and how funding is affecting research in specific areas.

Read more >


As the largest source of funding for medical research globally (1), the National Institutes of Health (NIH) in the United States is responsible for distributing more than $30 billion per year to best support biomedical researchers. According to the NIH, “[m]ore than 83 percent [of this budget] goes to more than 300,000 research personnel at over 3,000 universities, medical schools, and other research institutions in every state and throughout the world.” (2)

In 2012, the NIH awarded 12,303 Research Grants, including the main Research Project Grants as well as other extramural awards such as those specifically supporting research centers or small businesses; in addition, funding was awarded for training, R&D contracts, and intramural research. In this article we use the NIH’s publicly reported data (3, 4) to look in more detail at the types of awards it provides, the typical recipient of an award, and how funding is affecting research in specific areas.

 

Types of NIH award

The most common type of award provided by the NIH is the R01, one of a number of so-called Research Project Grants (RPGs). The R01, the oldest grant offered by the NIH, is awarded “to support a discrete, specified, circumscribed project” in an area of the investigator’s interest (5). It is offered alongside other RPGs such as the R15, aimed at “educational institutions that have not been major recipients of NIH research grant funds” (6), and the R21, which “is intended to encourage exploratory/developmental research by providing support for the early and conceptual stages of project development” (7).

In addition to RPGs, the NIH offers a variety of awards aimed at research centers, small businesses—including the Small Business Innovation Research (SBIR) grant and the Small Business Technology Transfer (STTR) grant—Research Career Awards (the so-called K grants), and individual and institutional training awards.

Major categories of NIH award are shown in Figure 1. Award types towards the right of the chart are those most commonly awarded in 2012, with the R01 by far the most numerous. The y-axis shows the success rate of applicants for each award in 2012; this varies from almost 50% for institutional training awards to 14% for R21 grants. Finally, the bubble size is proportional to the average cost of each award in 2012; Research Center Awards have the highest cost per award, followed by the Research Project Grants.

Figure 1 - Number of awards granted and success rate per grant type in 2012. Source: NIH Data Book

 

Profile of NIH awardees

Using the data available at the NIH Data Book (3), we can answer many questions about how the available budget is distributed among investigators, and indeed students, in biomedical fields. Here we answer three questions concerning the profile of award recipients: what is the representation of women, in which fields are PhD students supported, and how successful are first-time investigators when applying?

What is the representation of women among NIH-funded investigators? (see Figure 2)

Mirroring the gradual dismantling of cultural and institutional barriers that prevented women from advancing in scientific careers, NIH grants have been increasingly awarded to women; however, the number of female investigators is still far from parity with men, particularly for some types of grant.

Looking at trends from 2000 to 2012, we see an increase in the representation of women in every type of research grant. Research Project Grants (RPGs) were awarded to female investigators in only 30% of cases in 2012, although this represents an increase from 22% in 2000. The rate is also much higher than for Small Business (SBIR/STTR) and Research Center Awards (20% in 2012).

Research Career Awards stand above the other types, with 45% of investigators being female in 2012; however, this rate has not increased since 2010.

Figure 2 - Representation of women among research grant investigators, by type of grant.
Note: 2009 and 2010 data points exclude awards made under
the 2009 American Recovery and Reinvestment Act. Source: NIH Data Book

In which fields are PhD recipients supported by the NIH? (see Figure 3)

The NIH supports PhD students across numerous fields with fellowships, traineeships, and research assistantships. While the fields in which NIH support is granted are those we would expect (with Biochemistry, Health Sciences, Immunology, Molecular Biology, and Neuroscience featuring prominently), long-term trends over the past 30 years show that growth in some areas has been much stronger than in others. Neuroscience in particular has seen a dramatic increase, becoming the most common area by a wide margin, followed by Health Sciences; another field with strong growth, particularly from 2005 to 2010, is Engineering. This makes the current picture very different from the years 1985 to 1995, when the dominant fields (as reported by the PhD recipients themselves) were Biochemistry, Psychology, and Molecular Biology.

Figure 3 - Number of PhDs per field of study with NIH support prior to their PhD. Source: NIH Data Book

How successful are first-time investigators vs. established investigators when applying for NIH grants? (see Figure 4)

The overall success rate for Research Grants has declined from 33% in 2000 to 19% in 2011 and 2012. (In the same period, applications for grants increased by 72%, while the number of grants awarded, which rose steadily until 2004, then declined until it returned to its 2000 level.) For R01 grants, success rates tend to be slightly lower. But how do the success rates of first-time and established investigators differ?

Until 2007, there was a clear (though narrowing) gap between the success rates of established and first-time investigators: in 2000, 29% of established investigators applying for R01-equivalent grants were successful, against 22% of first-time investigators. In recent years success rates have generally been decreasing, but first-time investigators received a boost that led to parity between the two groups in 2011. In 2012, we see signs that established investigators may once more have an advantage, with a success rate of 16% vs. 13% for first-time investigators.

Figure 4 - Success rates of applicants for R01-equivalent awards, by career stage of investigator.
Note: 2009 and 2010 data points exclude awards made under
the 2009 American Recovery and Reinvestment Act. Source: NIH Data Book

Research areas

The Research, Condition, and Disease Categories (RCDC) are reported by the NIH to show funding in different areas of research (4). We can use this information to see the areas in which the most money is spent (see Table 1). However, trends over time show that the areas with the highest spending have seen only modest increases in funding since 2009: Cancer, for instance, has been flat at 0% growth, while Brain Disorders has seen the highest growth among these top 8 areas, at 3.9% per year.

 

Research/Disease Area     FY 2012 Actual    CAGR 2009-12
Clinical Research         10,951            1.9%
Genetics                   7,632            1.6%
Biotechnology              6,089            2.7%
Prevention                 5,924            3.6%
Cancer                     5,621            0.0%
Neurosciences              5,618            1.8%
Brain Disorders            3,968            3.9%
Infectious Diseases        3,867            2.2%

Table 1 - Funding in 2012 per research/disease area, with Compound Annual Growth Rate (CAGR) 2009-12. All funding values in million US$. Source: NIH Categorical Spending
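The CAGR column can be reproduced from a start and end funding value using the standard compound-growth formula. A minimal Python sketch follows; note that the FY2009 figure in the example is hypothetical, back-computed purely for illustration (only the FY2012 values appear in the table above):

```python
def cagr(start_value, end_value, years):
    """Compound Annual Growth Rate: the constant yearly growth rate that
    would take start_value to end_value over the given number of years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Illustrative only: a hypothetical area growing from $3,533M in FY2009
# to $3,968M in FY2012 (3 years) yields a CAGR of about 3.9% per year.
growth = cagr(3533, 3968, 3)
```

One caveat worth remembering when reading Table 1: CAGR smooths out year-on-year variation, so an area whose funding dipped and then recovered within the period can show the same CAGR as one that grew steadily.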

Of course, when some areas see increases while the overall budget remains stable, other areas must lose out. Hypertension is an example of an area in which funding has been reduced since 2009. The NIH Categorical Spending data show steady year-on-year declines, which can be expected to have a serious effect on research in this area (see Figure 5).

Figure 5 - NIH funding per year in hypertension. Note: ‘ARRA’ in 2009 and 2010 represents funding from the American Recovery and Reinvestment Act. Source: NIH Categorical Spending

Using Scopus we can see that over the same period, the overall rate of publication on hypertension has been growing at 5.7% per year; PubMed can be used to identify NIH-funded papers in the area, and these have been growing even more rapidly, at 6.8% per year (see Figure 6). However, this rate of growth is very unlikely to be sustained given reduced funding each year from such a major medical research funding body.

Figure 6 - Article output per year in hypertension,
overall (source: Scopus) and with NIH funding (source: PubMed).

 

Conclusion

The data made available by the NIH allow us to look in some detail at the types of funding provided each year and where they are assigned. While this is of interest in its own right, it also complements the wider effort to track the impact of funded research: see, for instance, FundRef (8), which enables funded research to be tracked after papers are submitted. Alongside the growing ability to track funded work, ORCID (9) has emerged as a unique identifier for tracking authors with confidence. As these systems are adopted more widely, we are approaching a time when analysis of funding, and of the resulting impact of the work, can become more rigorous and extensive.

 

References

  1. http://www.nih.gov/about/
  2. http://www.nih.gov/about/almanac
  3. http://report.nih.gov/nihdatabook/
  4. http://report.nih.gov/categorical_spending.aspx
  5. http://grants.nih.gov/grants/funding/r01.htm
  6. http://grants.nih.gov/grants/funding/area.htm
  7. http://grants.nih.gov/grants/funding/r21.htm
  8. http://www.crossref.org/fundref/
  9. http://orcid.org/
All accessed 22nd July 2013.

 


The peculiar persistence of medical myths: how to counter and discourage misinformation

Mike Taylor investigates the publishing history of three medical memes and suggests methods that the scientific community could use to improve the quality and robustness of medical research publishing.

Read more >


Medical misinformation is unusually persistent in society. Despite the withdrawal of the paper that provoked the measles-mumps-rubella (MMR) vaccine scandal, countless studies rebutting its findings, and the professional disgrace of its principal author, vaccination rates have not yet returned to pre-publication levels. Scientific and pseudo-scientific communication carries with it a certain weight of authority and responsibility. As access to research grows, and with it the potential for widespread social reach, the scholarly community needs to maintain and develop the caliber of its publishing, and to develop more robust and authoritative methods of countering misinformation and overturned findings.

The industry surrounding the communication of medical facts to the lay community is substantial: there are dozens of magazines and hundreds of thousands of websites devoted to communicating health facts, and once accepted into society, medical ‘facts’ appear to have a particular resilience, whether based on medical research in good standing or not. Snopes.com – a database of urban legends, rumors and myths – lists many such medical stories, and roughly one-quarter of its top 25 are health-related. Scientific research is also poorly served by the popular media. “Vinegar: Secret to Fast Weight Loss” (1), for example, contains approximately 21 claims regarding the weight-loss and health-promoting properties of vinegar, of which only six carry even a partial reference to the literature – typically a journal title and year of publication only. Furthermore, one of the key references is a review rather than original research (2); it in turn references a 2005 study (3) that may be considered flawed, as it partially relied on a subjective scale, appears not to have been conducted double-blind, and had a sample size of 12.

In this article, I investigate the publishing history of three medical memes and detail their current status in literature and society. In addition, I use my findings to suggest methods that the scientific community could use to improve the quality and robustness of medical research publishing, in particular when the social impact is likely to be high. My investigation is supported by an informal and anonymous survey of 80 associates, most of whom work in science or an allied industry (see Box below).

Respondents were asked to indicate their agreement with six medical sentences (see Table 1 in Appendix 1). They could respond with one of four statements: “I agree with the sentence”, “I disagree with the sentence”, “I used to agree with the sentence, but have changed my mind”, or “I used to disagree with the sentence, but have changed my mind”. Respondents who had changed their mind were invited to give their reasons. Three of the sentences (“Spinach contains loads of iron and is particularly good for you”, “Some people are made ill by Wi-Fi and mobile phone radiation” and “Some routine childhood vaccinations are sufficiently risky to make me not want to give them to my children”) had been selected to feature in this article and are known to be untrue. The other three were chosen to provide comparative figures and are mostly true: “A diet containing a lot of fat is unlikely to be very healthy” may reasonably be regarded as true (aside from some particular biological requirements); “Male circumcision is unnecessary” describes an emergent issue with some research in its favor, although the practice has considerable religious importance and there are medical conditions that can be ameliorated by circumcision; and “cancer can be caused by a virus” is demonstrably true for at least one virus (cervical cancer is caused by HPV), though this is probably not common knowledge. The language was deliberately non-clinical, which drew comment from some respondents, but was aimed at encouraging a populist mode of response – i.e., respondents would hopefully answer instinctively rather than engage in a literature search. To reinforce this, the survey was billed as taking “two minutes”.

Free-text responses were classified into three classes: those that provided no evidence, those that mentioned some formal evidence (research, professional opinion, citable evidence, review of research, etc.) and those that referred to informal evidence (general reading, friends, mass media, etc.). Of the six statements, ‘spinach’, ‘vaccine’, ‘fat’ and ‘Wi-Fi’ drew a majority of informal citations, while ‘circumcision’ and ‘cancer-virus’ drew a majority of formal evidence (see Table 2 in Appendix 1).

Three medical memes without foundation that persist in popular belief

Failure to provide citations: the case of Popeye and spinach

The idea that spinach contains a disproportionate quantity of iron is a long-standing – but entirely false – belief. In fact, the true proportion of iron in spinach was well understood in the nineteenth century. For the last forty years, the belief that spinach is peculiarly rich in iron has been attributed to two factors: (a) that the cartoon character Popeye made the claim as an explanation for his considerable spinach consumption, and (b) that there had been, at some stage, a typographical error (a misplaced decimal point) in an influential German publication of the early twentieth century. Extensive research by Dr Mike Sutton (4, 5) disproved both theories. Dr Sutton conducted an exhaustive review of the ‘Popeye and spinach’ literature, concluding that – since accurate figures were known at the dawn of serious food science – the error is the consequence of credulous re-reporting, lack of citation and lack of fact-checking, and possibly a swiftly corrected error in a US textbook of the 1930s. In particular, he cites the failure of Professor Hamblin (1981) to undertake any research that would have provided a citation for the decimal-point error claimed in his BMJ article, ‘Fake’, and, before that, of Professor Bender (1977), who made the claim both in a speech and in a letter to the Spectator magazine, again without providing a resolvable citation. In correspondence with Dr Sutton, Professor Hamblin is reported to have said that he “may have read it in an unknown copy of the Reader's Digest”.

Despite this, 68% of my survey’s respondents continue to agree with the sentence “Spinach contains loads of iron and is particularly good for you”, and 29% of those who added an explanation cited the Popeye / decimal-point story as the basis for their belief. Notably, Dr Sutton’s literature review also concluded that Popeye’s dietary preference was attributed not to iron but to vitamin A: “Spinach is full of Vitamin 'A' an' tha's what makes hoomans strong an' hefty” (Segar, 1932, in Popeye, sic, all errors) (4, page 13).

 

Lack of evidence leads to a research dead-end: Electromagnetic hypersensitivity (EHS)

The idea that some individuals have a particular hypersensitivity to wireless or mobile electromagnetic radiation is necessarily a recent one. Clearly the reported symptoms are distressing, and a sizeable number of preventative and diagnostic services and products are available for purchase (http://www.emfields.org/shielding/overview.asp). Successive studies, meta-studies and reviews (e.g., 6) have found that people who self-report electromagnetic hypersensitivity are unable to detect electromagnetic radiation under double-blind conditions, although researchers note that these individuals tend to score higher for physiological discomfort in any condition (7). The consistent failure to find any evidence for electromagnetic hypersensitivity has resulted in a low volume of papers indexed in Scopus, with little or no growth (the last five years have produced an average of 11 papers per year). The World Health Organization has concluded that it is not a diagnosable condition (8).

Despite this, 17.5% of people surveyed in the UK in 2007 reported a belief that they are – to some extent – sensitive to electromagnetic radiation (9). Although the majority (approximately 2:1) of respondents to my own survey disagreed with the statement “Some people are made ill by Wi-Fi and mobile phone radiation”, a sizeable proportion (31.4%) agreed with it. All comments that referred to an information source cited non-professional channels.

Despite the profound implications for society, technology and human health if such a large proportion of people really were sensitive to EMR, and despite the widespread belief in the syndrome, it appears that few people take any action – for example, avoiding Wi-Fi, buying EMR shields or seeking “quiet zones”.

In the case of electromagnetic hypersensitivity, a widely held belief has emerged despite the lack of any supporting evidence. Without any medical or economic motivation, it seems likely that research in this field – which has consistently failed to produce any positive biomedical results in support of an effect – will continue to drop off, allowing the belief to persist.

Fraud and malpractice: Vaccination and MMR

In 1998, the former doctor Andrew Wakefield and others published a fraudulent paper in the Lancet providing now-discredited evidence linking the MMR vaccine to autism and bowel disease. Despite the action taken – (a) the withdrawal of the original paper, (b) subsequent studies and meta-studies that failed to replicate the original findings or find any other relationship, (c) Wakefield being struck off the Medical Register, (d) the many investigations that found ethical and methodological malpractice, and (e) evidence that Wakefield had an undisclosed financial interest in discrediting the MMR vaccine – vaccination rates in the UK have not risen to their former, pre-publication highs (see Figure 1). As a consequence of low vaccination rates, parts of the UK suffered a measles epidemic in 2013 that resulted in at least one fatality. The UK health service ran a very high-profile campaign, operating vaccination clinics in schools and workplaces and keeping the story in the headlines during the course of the epidemic, in order to reach an effective percentage of vaccination.

Figure 1 - Completed primary courses: percentage of UK children immunized by their second birthday,
1997-98 to 2008-09. Source: NHS Health and Social Care Centre

Despite the overwhelming evidence, some media outlets in the UK continue to publish stories referring to MMR as a ‘controversial’ vaccination (see box). Although only two respondents to my survey expressed a belief that some vaccinations are significantly risky, considerable distrust clearly persists among British parents, as evidenced by the failure of the MMR vaccination rate to recover after the Wakefield scandal.

The “controversial vaccine”: MMR stories in the Daily Mail since 2009

• MMR: A mother's victory. The vast majority of doctors say there is no link between the triple jab and autism, but could an Italian court case reignite this controversial debate? (2012) http://www.dailymail.co.uk/news/article-2160054/MMR-A-mothers-victory-The-vastmajority-doctors-say-link-triple-jab-autism-Italian-court-case-reignite-controversial-debate.html

• Six months after the MMR jab... a bubbly little girl now struggles to speak, walk and feed herself (2009) http://www.dailymail.co.uk/health/article-1126035/Six-months-MMR-jab--bubbly-little-girl-struggles-speak-walk-feed-herself.html

• American parents awarded £600,000 in compensation after their son developed autism as a result of MMR vaccine (2013) http://www.dailymail.co.uk/news/article-2262534/American-parents-awarded-600-000-compensation-son-developed-autism-result-MMR-vaccine.html

The recent epidemic of measles has resulted in sufficient publicity to change opinion about the relative risks, and the NHS has launched a campaign to vaccinate 1,000,000 children, in order to return to the pre-Wakefield levels of immunization.

Changing minds: why misinformation is so persistent

These medical myths – and many others – have much in common with urban myths:

“A story, generally untrue but sometimes one that is merely exaggerated or sensationalized, that gains the status of folklore by continual retelling” (10).

However, these medical myths have a peculiar characteristic: not only are they demonstrably untrue, but they appear to defy logic by persisting in society, long after the evidence of their falsehood has been available.

Lewandowsky et al (11) explore a number of dimensions that can be applied to understand the persistence of misinformation: internal coherence, personal experience or knowledge, credibility, and how widespread a belief is. Where medical doctors or scientists make an assertion, the source will be assumed to be credible, while the nature of the claim is likely to place it in the realm of expert knowledge: a lay consumer will therefore lack the experience or knowledge needed to rebut or refute it. Furthermore, Heath et al (12) observed that the greater the level of disgust associated with an urban legend, the more likely it was to be disseminated. (This intriguing observation allows us to conclude that had it been strawberries, not spinach, that were credited with superior iron content, the myth would not have lasted as long, nor had the same impact.) The observation tallies with Berger's 2011 finding that arousal increases social transmission of information (13).

Constructing a rebuttal that has a high probability of acceptance is complex. Lewandowsky et al (14) demonstrated the importance of perceived scientific consensus, researching the relationship between that perception and non-expert acceptance of scientific theories. They further demonstrate that providing information about the consensus (“nine out of ten cats agree”, “95% of dentists use”) increases acceptance, and that without this information people frequently underestimate the degree of consensus. Additionally, they report studies showing that people accept consensus from trusted information sources (scientists), but not from authority figures.

Ecker et al (15) demonstrated that a belief will persist, and that its level of influence will continue to increase, in the absence of strong rebuttal, and that rebuttals require full attention in order to have maximal effect. Lewandowsky et al (11) report that over-complexity of rebuttal and dogmatic assertions of correctness may reduce acceptance of the corrected information, and stress the need to offer a replacement narrative.

Thus, if we were to construct a rebuttal to the MMR vaccination issue, it might be characterized thus:

  • The message would come from a trusted figure, rather than an authority.
  • It would reference the degree of consensus (“97% of doctors...”).
  • The story would be simple.
  • It would construct a replacement narrative, referencing personal experience and a new narrative (“Just as vaccines for polio, typhoid and diphtheria have kept generations safe...”).
  • There might be an attempt to elicit arousal (“Wakefield was personally paid £435,000 to conduct research on children, including unnecessary and invasive procedures”) - http://www.bmj.com/content/342/bmj.c7001#ref-16
  • And rather than adding complexity to the message, further information should be made available to anyone who is interested.

If this sounds like advertising, we should reflect on the amount of investment and research undertaken by both industrial organizations and academics into the best strategies for changing people's minds. In one toothpaste advert, the authority figure is a “representation of a nurse”; a set of highly effective informational adverts, designed to decrease the time taken for middle-aged stroke victims to seek medical attention, was voiced by an actress famous for playing a doctor in a UK TV series. Both examples are constructed with a view to changing or replacing a narrative, whether it is that Toothpaste A is better than Toothpaste B, or that strokes only affect older people or involve dramatic symptoms. The adverts construct narratives (“When stroke strikes, act FAST”), using a judicious mix of authority and evidence, but at all times maintaining a clear message.

Trustworthy communication

Despite the bizarre omission of a category for 'scientist, researcher or academic', professions with a scientific background are highly trusted: five of them (nurses, pharmacists, medical doctors, engineers and dentists) appear at the top of Gallup's Honesty/Ethics in Professions ranking. Furthermore, scientific publishing is seen as being of a different caliber from other forms of publishing, with the peer-review process often used as a hallmark of quality. Entwistle reported that “Journalists relied heavily on the peer review processes of the journals in ensuring accuracy” (16).

Within the scholarly community, however, we have a more sophisticated view of the meaning of peer review, and are able to take other signals into account: low citation rates, a journal's impact factor, the construction of the title and abstract, the reputation of the authors within their community, all without necessarily engaging our subject-level expertise in an in-depth analysis of the methodology, analysis and conclusions. The process of peer review is not a “gold standard” with a fixed methodological process; rather, it is a term that encompasses many different forms of practice. Journals are revisiting the process (e.g. Virology), start-ups are proposing peer review as a commercial service, and new publishers are experimenting with open, non-anonymous peer review.

As scholarly communication becomes more freely available with the growth of open access, and we become more aware of concepts like “citizen science”, it is worth considering how scholarly articles will be consumed in the wider community, especially when research has the potential to be highly impactful: a paper on the dangers of childhood vaccination will always carry more weight than an article on bibliometrics, particularly when surrounded with the paraphernalia of press releases, press conferences and media appearances that are calculated to give a story added impetus.

There are many emergent approaches to positioning research in society, to keeping open the channel through which researchers, publishers and readers communicate, and to providing more information about the likely reliability of research outcomes.

  • Crossref's Crossmark service provides a mechanism by which publishers can communicate errata and corrections in a standardized format.
  • The Reproducibility Initiative – an initiative supported by Mendeley – aims to increase the rigor of scientific work, by reproducing experimental work using a blind, independent team.
  • The Amsterdam Manifesto on Data Citation proposes a set of best practices to ensure that data is openly available, and that researchers can get credit for making their data available for error checking, re-use and re-analysis.

The problems surrounding withdrawn articles are likely to increase. The authors of the blog “Retraction Watch” have published a detailed article on the phenomenon of increasing retractions. “Why Has the Number of Scientific Retractions Increased?” (17) points to a variety of causes: editors act faster and more frequently; the retraction of one paper leads to a re-evaluation of a researcher's other papers; and greater scrutiny of higher-impact journals has a 'modest' effect on retraction rates.

Increasing openness is likely to increase the rate of retractions, corrections and errata. Given how hard (and expensive) it is to dislodge misinformation once it has spread, it seems reasonable to conclude that:

  1. papers with a high degree of likely social interest and impact should merit a higher standard of review, and those standards should be open and readily understood by all readers; and
  2. when high-impact papers are retracted, retraction alone is insufficient: the “withdrawal” of the findings from the social melee should recognize the long-standing nature of scientific belief, and the likely cost to society of mistaken beliefs.

 

Appendix 1

Statement | I agree | I disagree | I used to agree, but have changed my mind | I used to disagree, but have changed my mind | Notes
Spinach contains loads of iron and is particularly good for you | 68.4% (54) | 2.5% (2) | 26.6% (21) | 2.5% (2) | See this article for more information; however spinach does not contain more iron than other green vegetables.
A diet containing a lot of fat is unlikely to be very healthy | 62.5% (50) | 12.5% (10) | 21.3% (17) | 3.8% (3) | Aside from the biological need for some lipids, this may reasonably be said to be true.
Some routine childhood vaccinations are sufficiently risky to make me not want to give them to my children | 2.5% (2) | 88.8% (71) | 6.3% (5) | 2.5% (2) | See this article for more information. Although there are various rumors regarding vaccination (“immune system overload” and “mercury” amongst them), this article focuses on the UK MMR scandal.
Some people are made ill from Wi-Fi and mobile phone radiation | 29.1% (23) | 59.5% (47) | 6.3% (5) | 6.3% (4) | Subject of this article; however there is no evidence to support this statement.
Male circumcision is unnecessary | 79.9% (63) | 13.9% (11) | 3.8% (3) | 2.5% (2) | Current medical research supports this statement.
Cancer can be caused by a virus | 48.6% (36) | 35.1% (26) | 1.4% (1) | 14.9% (11) | Cervical cancer is caused by HPV, so this statement is true.

Table 1 – Overview of responses to an informal and anonymous survey of 80 associates, most of whom work in science or an allied industry

 

Statement | Comments citing mass media / rumor / friend-of-a-friend etc. | Comments citing professional opinion / research / review / evidence | Total comments (not all respondents cited media)
Spinach contains loads of iron and is particularly good for you | 61% (17)* | 25% (7) | 28
A diet containing a lot of fat is unlikely to be very healthy | 27% (7) | 15% (4) | 26
Some routine childhood vaccinations are sufficiently risky to make me not want to give them to my children | 50% (5) | 0% (0) | 10
Some people are made ill from Wi-Fi and mobile phone radiation | 92% (11) | 0% (0) | 12
Male circumcision is unnecessary | 8% (1) | 62% (8) | 13
Cancer can be caused by a virus | 21% (4) | 26% (5) | 19

* 8 respondents specifically refer to Popeye or “decimal point error”

Table 2 - Types of communication mentioned by respondents as influencing opinion


Footnote

Michael Taylor and the editorial board of Research Trends are much obliged to Dr Scott Granter for having corrected the author's (unintentionally ironic) false claim that the Tasmanian Devil population is widely affected by a virus that causes cancer. Dr Granter points out that it is the tumour that behaves like an infection, despite not actually having an infectious agent. For more information, and to quash any belief in this myth, the author would direct interested readers to Welsh, JS, 'Contagious Cancer', DOI: http://dx.doi.org/10.1634/theoncologist.2010-0301.

References

(1)           Hubbard, S.B. (2013) “Vinegar: Secret to Fast Weight Loss,” Available at: http://www.newsmaxhealth.com/newswidget/vinegar-apple-cider-vinegar-folk-remedies-weight-loss/2013/07/07/id/513664
(2)           O’Keefe, J.H., Gheewala, N.M., O’Keefe, J.O. (2008) “Dietary Strategies for Improving Post-Prandial Glucose, Lipids, Inflammation, and Cardiovascular Health”, Journal of the American College of Cardiology, Vol. 52, No. 3, pp. 249-55 (10.1016/j.jacc.2007.10.016)
(3)           Ostman, E., Granfeldt, Y., Persson, L., Björck, I. (2005) “Vinegar supplementation lowers glucose and insulin responses and increases satiety after a bread meal in healthy subjects”, European Journal of Clinical Nutrition, Vol. 59, No. 9, pp. 983-8 (10.1038/sj.ejcn.1602197)
(4)           Sutton, M. (2010) “Spinach, Iron and Popeye: Ironic lessons from biochemistry and history on the importance of healthy eating, healthy skepticism and adequate citation”, Internet Journal of Criminology, Available at: http://www.internetjournalofcriminology.com/sutton_Spinach_Iron_and_Popeye_March_2010.pdf
(5)           Sutton, M. (2012) “The Spinach, Popeye, Iron, Decimal Error Myth is Finally Busted”, Available at: http://www.bestthinking.com/articles/science/chemistry/biochemistry/the-spinach-popeye-iron-decimal-error-myth-is-finally-busted
(6)           Rubin, G.J., Das-Munshi, J., Wessely, S. (2005) “Electromagnetic Hypersensitivity: A Systematic Review of Provocation Studies”, Psychosomatic Medicine, Vol. 67, No. 2, pp. 224-232 (10.1097/01.psy.0000155664.13300.64)
(7)           Eltiti, S. et al (2007) “Does Short-Term Exposure to Mobile Phone Base Station Signals Increase Symptoms in Individuals Who Report Sensitivity to Electromagnetic Fields? A Double-Blind Randomized Provocation Study”, Environmental Health Perspectives, Vol. 115, No. 11, pp. 1603-1608 (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2072835/)
(8)           World Health Organization, Available at: http://www.who.int/peh-emf/publications/facts/fs296/en/
(9)           Eltiti, S. et al (2007) “Development and evaluation of the electromagnetic hypersensitivity questionnaire”, Bioelectromagnetics, Vol. 28, No. 2, pp. 137-151 (10.1002/bem.20279)
(10)       The Phrase Finder, Available at: http://www.phrases.org.uk/meanings/urban-myth.html
(11)       Lewandowsky, S. et al (2012) “Misinformation and Its Correction: Continued Influence and Successful Debiasing”, Psychological Science in the Public Interest, Vol. 13, No. 3, pp. 106-131 (10.1177/1529100612451018)
(12)       Heath, C., Bell, C., Sternberg, E. (2001) “Emotional selection in memes: The case of urban legends”, Journal of Personality and Social Psychology, Vol. 81, No. 6, pp. 1028-1041.
(13)       Berger, J. (2011) “Arousal Increases Social Transmission of Information”, Psychological Science, Vol. 22, No. 7, pp. 891-893 (10.1177/0956797611413294)
(14)       Lewandowsky, S., Gignac, G.E., Vaughan, S., (2012) “The pivotal role of perceived scientific consensus in acceptance of science”, Nature Climate Change, Vol. 3, pp. 399-404 (10.1038/NCLIMATE1720)
(15)       Ecker, U.K.H., Lewandowsky, S., Swire, B., Chang, D. (2011) “Correcting false information in memory: Manipulating the strength of misinformation encoding and its retraction”, Psychonomic Bulletin & Review, Vol. 18, No. 3, pp. 570-578 (10.3758/s13423-011-0065-1)
(16)       Entwistle, V. (1995), “Reporting research in medical journals and newspapers”, BMJ, Vol. 310, No. 6984, pp. 920-923.
(17)       Steen, R.G., Casadevall, A., Fang, F.C. (2013) “Why Has the Number of Scientific Retractions Increased?”, PLoS ONE, Vol. 8, No. 7, e68397 (10.1371/journal.pone.0068397). Available at: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0068397
 
 
 

Trends in pediatrics: Overview of research trends from 2007 – 2011

Dr. Daphne van Weijen explores the trends in Pediatric research, using the term mapping technique, and determines which topics have shown active growth in research output in recent years.

Read more >


Introduction

For this issue on trends in Medical research, we took a closer look at the field of Pediatrics. What are the recent ‘hot’ topics in Pediatric research? Or, more specifically, what topics have shown active growth in research output (published articles) over the past five years? To answer this question, we used a new visualization tool, developed in collaboration with CWTS, a research group at Leiden University that specializes in bibliometrics. The tool enables us to explore the topics addressed in different journals and the citation impact of those topics, and can generate term maps based on all journals and conference proceedings indexed in Scopus. These term maps reveal the relationships between terms used in the titles and abstracts of articles published in one or more selected journals, and visualize them using the VOSviewer software developed at CWTS for the visualization of journal impact maps (1).

Largest English language journals in Pediatrics

First, we used Scopus to delineate the Pediatrics field and determine the journals to be entered into the term mapping tool. An initial subject term search in Scopus revealed two relevant categories: Pediatrics, and Pediatrics, Perinatology & Child Health. A subsequent search for all output in those categories revealed that almost 39,000 articles were published in Pediatrics in 2012. Based on this output, we determined the 10 largest English language journals in Pediatrics, in terms of number of papers published in the period 2007–11:

  • Pediatrics
  • Journal of Pediatric Surgery
  • Archives of Disease in Childhood
  • Journal of Pediatrics
  • Acta Paediatrica, International Journal of Paediatrics
  • Pediatric Infectious Disease Journal
  • Pediatric Blood and Cancer
  • Pediatric Radiology
  • Journal of Child Neurology
  • Journal of Pediatric Gastroenterology and Nutrition

Term mapping

Once this list of titles had been compiled, the term mapping tool was used to generate a term map in order to determine relatively highly cited topics in the field. The tool analyzes the keywords found in the titles and abstracts of articles published in the selected journals over a specified period of time. In this case, the analysis was based on Scopus data (version June 2012) and included 14,878 articles published in the 10 largest English language Pediatrics journals in 2007–2011, covering around 38% of the total output in the field. A five-year overlapping publication and citation window (2007–2011) was used. Reviews and conference papers were excluded, as reviews in particular tend to refer to older content rather than emerging topics. Publications without an abstract were also excluded.

Clusters of co-occurring terms

After the first version of the map was generated, it was checked for uninformative terms, such as publisher or society names, and generic terms such as literature, complication, parent, presentation and feature. These were removed and the tool was rerun to generate a new version of the map. This version, shown in Figure 1, is a co-occurrence cluster map. Every term that occurs at least 5 times in the titles and abstracts of articles in the selected journals is represented by an individual node on the map. The size of a node indicates its frequency of occurrence: the larger the node, the more articles contain the term. The nodes are positioned in a 2D plot in which their relative positions are determined by their co-occurrence in the titles and abstracts included in the analysis: the closer two terms, the more often they tend to co-occur. However, it is important to note that this is a 2D representation of a multi-dimensional network, so the distances between terms are only an approximation of their co-occurrence relationships.
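The frequency and co-occurrence counting that underlies such a map can be sketched in a few lines. The snippet below is a toy illustration rather than the actual CWTS tool: it treats each document as a set of already-extracted terms (the real tool performs linguistic term extraction on titles and abstracts), counts how many documents each term appears in for node size, and counts pairwise co-occurrences for edge weights. The corpus and the lowered occurrence threshold are invented for the example.

```python
from collections import Counter
from itertools import combinations

# Toy corpus: each entry stands for the terms extracted from one
# article's title + abstract.
documents = [
    {"rotavirus", "vaccine", "gastroenteritis"},
    {"vaccine", "serotype", "pneumococcal"},
    {"obesity", "bmi", "childhood obesity"},
    {"rotavirus", "vaccine", "serotype"},
    {"obesity", "bmi"},
]

MIN_OCCURRENCES = 2  # the article uses 5; lowered for this toy corpus

# Node sizes: how many documents each term occurs in.
term_freq = Counter(t for doc in documents for t in doc)
kept = {t for t, n in term_freq.items() if n >= MIN_OCCURRENCES}

# Edge weights: how often two kept terms occur in the same document.
cooccurrence = Counter()
for doc in documents:
    for a, b in combinations(sorted(doc & kept), 2):
        cooccurrence[(a, b)] += 1

print(sorted(cooccurrence.items(), key=lambda kv: -kv[1]))
```

A layout engine such as VOSviewer would then place frequently co-occurring pairs (high edge weights) close together in the 2D plot.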

Finally, the terms are colored into clusters of terms that tend to co-occur. The map in Figure 1 clearly contains 6 main clusters of co-occurring terms. The blue cluster (middle and top left), for example, appears related to Vaccination, the red cluster (top right) to Surgery, and the green cluster (left) to Pediatric health and education. Field expertise can help check and appropriately name the clusters, as well as predict which clusters are likely to contain the most highly cited content, and why.

Figure 1 - Journal term co-occurrence cluster similarity map for
the 10 largest English Language Pediatrics journals (2007–2011)

Highly cited terms

The next step in determining hot topics in the field is then to check which terms are relatively well cited in comparison to the rest of the content published in the journals. This can be done by changing the coloring in the cluster map to reflect the relative citation impact of each of the terms. This journal term co-occurrence citation impact map, shown in Figure 2, is colored as a heat map. The color of each item represents the average citation impact of the articles containing that term, relative to the average citation impact (1.00) of all articles included in the map. As older publications have had more time to be cited, the citations are normalized by year of publication to make a fair comparison possible. In the color scheme, terms with an above average citation impact are colored in red, terms with average citation impact are green and terms with below average citation impact are shown in blue.
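The year normalization described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the tool's actual code: each paper's citation count is divided by the average for its publication year, and a term's score is the mean of these normalized values over the papers containing it, so that 1.00 is the corpus average. The four-paper dataset is invented.

```python
from collections import defaultdict
from statistics import mean

# Toy data: (publication year, citation count, terms in title/abstract).
papers = [
    (2007, 40, {"rotavirus"}),
    (2007, 10, {"parent education"}),
    (2010, 12, {"rotavirus"}),
    (2010, 4, {"parent education"}),
]

# Average citations per publication year (older papers accrue more).
by_year = defaultdict(list)
for year, cites, _ in papers:
    by_year[year].append(cites)
year_avg = {y: mean(c) for y, c in by_year.items()}

# Each paper's citation impact relative to its year's average.
norm = [(terms, cites / year_avg[year]) for year, cites, terms in papers]

# A term's score: mean normalized impact of the papers containing it.
def term_impact(term):
    return mean(score for terms, score in norm if term in terms)

print(term_impact("rotavirus"))        # above 1.0: red on the heat map
print(term_impact("parent education")) # below 1.0: blue on the heat map
```

Note that "rotavirus" scores above average in both years even though its raw 2010 count is lower than its 2007 count, which is exactly the distortion the year normalization removes.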

Figure 2 - Journal term co-occurrence citation impact map for
the 10 largest English Pediatrics journals (2007–2011)

The map in Figure 2 clearly shows that the relatively highly cited terms tend to occur on the left side of the map. These are terms found in the blue, green and purple clusters shown in the cluster coloring version of the map in Figure 1, related to vaccines (blue cluster, top left), pediatric health & education (green cluster, left) and BMI & obesity (purple cluster, bottom left), as well as pediatric preclinical testing (yellow cluster, bottom right). Highly cited terms in these areas include:

  • rotavirus (rotavirus vaccine/vaccination, rotavirus gastroenteritis, rotavirus disease)
  • hydroxyvitamin D
  • nutrition examination survey
  • national health
  • childhood obesity
  • food allergy
  • severe intraventricular hemorrhage
  • pneumococcal conjugate vaccine (pcv7, heptavalent pcv, invasive pneumococcal disease, valent pneumococcal conjugate vaccine)
  • vaccine serotype
  • vaccine effectiveness
  • S. pneumoniae
  • H1N1
  • Late preterm

Hot topics?

As a final step in determining which topics have shown active growth in research output (published articles) over the past five years, a Scopus keyword search was performed for a few of the terms in the map with the highest relative citation impact, to determine whether these were isolated occurrences or part of growing areas of research focus within Pediatrics. The outcome of this keyword search confirmed that there were at least 5 areas in Pediatrics with a Compound Annual Growth Rate (CAGR) of more than 5%, indicating an above-average increase in the number of papers published in these areas over the past five years, as the average CAGR is 3–5% (see Table 1).

 

Relatively highly cited term | Relatively highly cited terms that co-occur with the main term | No. of papers published (‘07–‘11) | CAGR (‘07–‘11)
Influenza | H1N1, influenza infection, influenza virus, haemophilus influenzae/type B/hib, streptococcus pneumoniae, pandemic influenza, influenza vaccination/vaccine | 852 | 22.7%
Vitamin D | Hydroxyvitamin D, vitamin D deficiency | 645 | 20.1%
Obesity | Childhood obesity, obese, overweight, bmi, body mass index | 4619 | 15.9%
Late preterm | Infant death, neonatal mortality/death, neonatal intensive care, neonatal outcome, healthy neonate | 168 | 6.3%
Vaccine | Immunization, vaccination, vaccine, serotype | 2638 | 5.3%

Table 1 - Overview of number of papers containing relatively highly cited terms from the term map and their compound annual growth rates. Source: Scopus
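CAGR over a window of n years is (end/start)^(1/n) − 1. The short sketch below illustrates the calculation; the yearly counts are hypothetical, since the article reports only the five-year totals behind Table 1.

```python
def cagr(start, end, years):
    """Compound annual growth rate between two counts `years` apart."""
    return (end / start) ** (1 / years) - 1

# Hypothetical yearly publication counts for a topic, 2007-2011.
counts = {2007: 100, 2008: 123, 2009: 151, 2010: 185, 2011: 227}

# Four annual growth steps separate the 2007 and 2011 counts.
growth = cagr(counts[2007], counts[2011], years=4)
print(f"CAGR 2007-2011: {growth:.1%}")
```

With these invented counts the output is roughly 22.7%, i.e. output growing by more than a fifth every year, which is the scale of growth Table 1 reports for Influenza research.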

In conclusion, the main topics in Pediatrics that generated most interest in the past five years were related to research on Influenza, Vitamin D and Childhood Obesity. This is not surprising, given the increased real-world interest in swine flu (H1N1) and vaccinations against it since the 2009 pandemic, and the global rise in childhood obesity, especially in developed countries (2). The Scopus keyword search confirmed that the topics suggested by the map have indeed been attracting attention in the field. Although this field-level map is somewhat generic, it does provide a general idea of where to look for hot topics in more detail. Generating term maps based on Scopus data is therefore clearly a useful way to determine areas of growth in specific fields of scientific research.

References

1. Van Eck, N.J., & Waltman, L. (2010) “Software survey: VOSviewer, a computer program for bibliometric mapping”, Scientometrics, Vol 84, No. 2, pp. 523–538.
2. Ogden, C.L., Carroll, M.D., Kit, B.K., Flegal, K.M. (2012) “Prevalence of obesity and trends in body mass index among US children and adolescents, 1999-2010”, Journal of the American Medical Association, Vol. 307, No. 5, pp. 483-490. http://jama.jamanetwork.com/article.aspx?articleid=1104932

Did you know

about physicians on Twitter?

Dr. Gali Halevi, Elsevier

Did you know that there are over 1000 medical doctors actively participating in TwitterDoctors.net, a directory of physicians on Twitter?

Some of the participating physicians tweet their professional comments on health and medical issues, report breaking news and research, and inspire patients, in some cases reaching thousands of followers. However, this phenomenon has its issues. A recent analysis of physicians on Twitter (1) studied 260 self-identified physicians, each with at least 500 followers, and found that 144 of the tweets sampled (3%) could be categorized as unprofessional: 38 tweets (0.7%) represented potential patient privacy violations, 33 (0.6%) contained profanity, 14 (0.3%) included sexually explicit material, and 4 (0.1%) included discriminatory statements.
This study shows a relatively small proportion of unprofessional behavior overall, which, thanks to social media, can be quantitatively monitored. However, it is impossible to tell whether and how the social networking element increased the frequency and reach of these behaviors compared with a previous era. Social network monitoring represents an opportunity to identify unprofessional behaviors and address them, perhaps through educating physicians on what is and is not acceptable. This could help the Twitter directory better stimulate the patient-doctor discourse.

References
(1) Chretien, K.C., Azar, J., Kind, T. (2011) “Physicians on Twitter”, JAMA, Vol. 305, No. 6, pp. 566-568. Available at: http://jama.jamanetwork.com/article.aspx?articleid=893850
