“No one can read everything. We rely on filters to make sense of the scholarly literature, but the narrow, traditional filters are being swamped. However, the growth of new, online scholarly tools allows us to make new filters; these altmetrics reflect the broad, rapid impact of scholarship in this burgeoning ecosystem. We call for more tools and research based on altmetrics.” (1)

The above manifesto signaled the birth of altmetrics. It grew from the recognition that the social web provided opportunities to create new metrics for the impact or use of scholarly publications. These metrics could help scholars find important articles and perhaps also evaluate the impact of their own articles. At the time there was already a field with similar goals, webometrics, which had created a number of web-based indicators for scholars (e.g., 2) and scholarly publications (e.g., 3), including genre-specific indicators such as syllabus mentions (4). Article download indicators (e.g., 5) had also been investigated previously. Nevertheless, altmetrics have been far more successful, both because of the wide range of social web services that could be exploited, from Twitter to Mendeley, and because of the ease with which large-scale data could be harvested automatically from the social web through Application Programming Interfaces (APIs). However, academic research drawing on multiple different approaches is still needed to evaluate their value (6).
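To make the API point concrete, the following minimal Python sketch shows the general harvesting pattern: query a REST endpoint for a DOI and count the events it returns. The URL and JSON fields here are illustrative placeholders, not any real service's API.

import requests

API_URL = "https://api.example.org/v1/events"  # hypothetical endpoint

def fetch_event_count(doi, source):
    """Return the number of recorded events (e.g., tweets) for one DOI."""
    response = requests.get(API_URL, params={"doi": doi, "source": source}, timeout=30)
    response.raise_for_status()
    # Assumed response shape: {"events": [...]}.
    return len(response.json().get("events", []))

print(fetch_event_count("10.1371/journal.pone.0064841", "twitter"))

In practice each service (Mendeley, Twitter and so on) has its own endpoints, authentication and rate limits, but the pattern of automated per-article lookups is what made large-scale altmetric data collection feasible.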


1 Scholarly use of the social web

Some research has investigated how scholars use social web services, giving insights into the kinds of activities that altmetrics might reflect. In some cases the answers seem straightforward; for example, Mendeley is presumably used to store the academic references that users are interested in – perhaps articles that they have previously read or that they plan to read. Counts of an article’s Mendeley “readers” might therefore resemble citation counts in the sense that they could reflect the impact of an article. Mendeley has the advantage that its metrics can be available sooner than traditional citations, since there is no publication delay, and its user base is presumably wider than just publishing scientists. Nevertheless, there are biases, such as a skew towards more junior researchers (7).

In comparison to Mendeley, Twitter has a wider user base and a wider range of potential uses. Nevertheless, only a minority of articles seem to get tweeted – perhaps as few as 10% of the 2010-2012 PubMed articles indexed in the Web of Science (8). Scholars seem to use Twitter to cite articles, but sometimes indirectly (9), which complicates the automatic harvesting of these citations, as the sketch below illustrates. Moreover, most tweet (link) citations seem to be relatively trivial, echoing an article’s title or a brief summary rather than critically engaging with it (10). There are also disciplinary differences in the extent to which Twitter is used and what it is used for (11), so, as with citations, Twitter altmetrics should not be used to compare between fields. A further problem is that users may indicate awareness of others’ work by tweeting to them or about their ideas without citing specific publications (12).
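The harvesting difficulty is visible in a small Python sketch: a direct citation leaves a DOI or URL that a pattern can match, whereas an indirect mention leaves nothing machine-identifiable. The patterns below are common heuristics, not the method of any particular study.

import re

DOI_PATTERN = re.compile(r"10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")  # typical modern DOI shape
URL_PATTERN = re.compile(r"https?://\S+")

def extract_article_links(tweet):
    """Pull candidate article identifiers out of one tweet's text."""
    return {"dois": DOI_PATTERN.findall(tweet), "urls": URL_PATTERN.findall(tweet)}

print(extract_article_links("New study out! doi:10.1371/journal.pone.0064841"))
print(extract_article_links("Great talk on tweeting biomedicine by @author"))  # nothing to match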


2 Evidence for the value of altmetrics

If article-level altmetrics are to help direct potential readers to the more important articles in their field, then evidence is needed that articles with higher altmetric scores tend, in general, to be more useful to read. Direct empirical verification would be difficult, however, since it would require feedback from readers about many articles to cross-reference with altmetric scores. Perhaps the most practical way to demonstrate the value of an altmetric is instead to show that it can be used to predict the number of future citations to articles, since citations are an established indicator of article impact, at least at the statistical level (more cited articles within a field tend to be more highly regarded by scholars, e.g., 13), even though there are many individual articles for which citations are a poor guide to value. This has been done for tweets to one online medical journal (14) and for citations in research blogs (15). This approach has double value: it shows that altmetric scores are not random but associate with an established (albeit controversial) impact measure, and it shows that altmetrics can give earlier evidence of impact than citation counts can.
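A minimal sketch of this kind of prediction test follows, assuming a set of articles with early tweet counts and much later citation counts (the numbers below are invented): split the articles at the median early score and test whether the highly tweeted half gathers more citations.

import numpy as np
from scipy.stats import mannwhitneyu

early_tweets = np.array([0, 1, 0, 12, 3, 0, 25, 2, 0, 7])  # first-week tweets
later_cites = np.array([1, 3, 0, 18, 5, 2, 30, 4, 1, 9])   # citations after two years

# Compare the later citations of highly tweeted vs. lightly tweeted articles.
high = later_cites[early_tweets > np.median(early_tweets)]
low = later_cites[early_tweets <= np.median(early_tweets)]
statistic, p_value = mannwhitneyu(high, low, alternative="greater")
print(statistic, p_value)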

A second way of gathering evidence for the value of altmetrics is to show that their values correlate with citation counts, without demonstrating that the former preceded the latter. Of course, correlation does not imply causation and a lack of correlation does not imply worthlessness, but a correlation does imply a relationship with citation impact, or at least with some of the factors that cause citation impact. This gives some evidence of the validity of altmetrics as impact indicators but not of their value as early impact indicators. For example, one study showed that the number of Mendeley readers of articles in the journals Science and Nature correlated with their citations, but did not establish that the Mendeley reader data were available before the citation counts (16).
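The corresponding correlation test is straightforward; a sketch with invented reader and citation counts follows. Spearman’s rank correlation is the usual choice here because citation and readership counts are highly skewed.

from scipy.stats import spearmanr

mendeley_readers = [5, 40, 12, 0, 88, 23, 7, 51]
citation_counts = [2, 35, 10, 1, 60, 30, 4, 44]

rho, p_value = spearmanr(mendeley_readers, citation_counts)
print(rho, p_value)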

Although the above studies provide good evidence that some altmetrics could have value as impact indicators for a small number of journals, larger-scale studies covering additional indicators and a wider range of journals are needed for more general evidence. In response, one large-scale study investigated 11 different altmetrics and up to 208,739 PubMed articles for evidence of a relationship between citations and altmetric scores gathered over the 18 months from July 2011 (17). Most altmetrics had a statistically significant positive (Spearman) correlation with citations, but one too small (below 0.1) to be of practical significance. The exceptions were blogs (0.201), research highlights (0.373) and Twitter (-0.190). The negative correlation for Twitter, and perhaps also the low correlations in many other cases, could reflect the rapid increase in the citing of academic articles in social media, which makes more recent articles more mentioned even though they are less cited. This suggests that, in most cases, altmetrics have little value for comparing articles published at different points in time, even within the same year.

To assess the ability of altmetrics to differentiate between articles published at the same time and in the same journal, the study then ran a probabilistic test on up to 1,891 journals per metric to see whether more cited articles tended to have higher altmetric scores, benchmarking against approximately contemporary articles from the same journal (a simplified sketch of this kind of test follows below). The results gave statistical evidence of an association between higher altmetric scores and citations for most metrics with sufficient data (Twitter, Facebook, research highlights, blogs, mainstream media, forums) (17). In summary, although many altmetrics may have value as impact indicators, differences over time are critical: altmetrics need to be normalized in some way to allow valid comparisons over time, or they should only be used to compare articles published at the same time (with blogs and research highlights as exceptions).
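The within-journal test can be sketched as follows (a simplification of the published method, with invented data): for each article with a nonzero altmetric score, check whether it is cited more than the median of roughly contemporary articles from the same journal, then apply a sign test to the win counts.

from statistics import median
from scipy.stats import binomtest

# (journal, altmetric score, citations) for approximately contemporary articles.
articles = [
    ("J1", 9, 14), ("J1", 0, 3), ("J1", 0, 6), ("J1", 4, 11),
    ("J2", 7, 20), ("J2", 0, 8), ("J2", 0, 5), ("J2", 2, 9),
]

wins = trials = 0
for journal in {a[0] for a in articles}:
    group = [a for a in articles if a[0] == journal]
    benchmark = median(cites for _, _, cites in group)
    for _, score, cites in group:
        if score > 0 and cites != benchmark:
            trials += 1
            wins += cites > benchmark

# Under the null of no association, wins follow Binomial(trials, 0.5).
print(binomtest(wins, trials, p=0.5, alternative="greater"))

Benchmarking within journal and time window sidesteps the temporal bias noted above, because each article is only ever compared with its approximate contemporaries.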


3 Other uses for altmetrics

Altmetrics also have the potential to serve as impact indicators for individual researchers, based upon their web presences, although such information should not be a primary source of impact evidence because the extent to which academics possess or exploit social web profiles varies greatly (e.g., 18; 19; 20). More widely, altmetrics should not be used to help evaluate academics for anything important, except perhaps as complementary measures, because of the ease with which they can be manipulated. In particular, since social websites tend to have no quality control and no formal process linking users to offline identities, it would be easy to systematically generate high altmetric scores for any given researcher or set of articles.

A promising future direction for research is to harness altmetrics in new ways to gain insights into aspects of research that were previously difficult to get data about, such as the extent to which articles from a field attract readerships from other fields (21; see the sketch below) or the value of social media publicity for articles (22). Future research also needs to investigate disciplinary differences in the validity and value of different types of altmetrics. Currently it seems that most articles are not mentioned in the social web in a way that can easily be identified for use in altmetrics (e.g., 23), but this may change in the future.
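As one illustration, cross-field readership can be estimated when per-article reader counts are broken down by the readers’ declared disciplines, as Mendeley provides; the sketch below aggregates invented counts for articles from one field.

from collections import Counter

# Readers by declared discipline for two articles from one field.
reader_disciplines = [
    Counter({"Social Sciences": 12, "Medicine": 3, "Computer Science": 1}),
    Counter({"Social Sciences": 7, "Psychology": 5}),
]

totals = sum(reader_disciplines, Counter())
grand_total = sum(totals.values())
for discipline, count in totals.most_common():
    print(discipline, round(count / grand_total, 2))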

4 References

(1) Priem, J., Taraborelli, D., Groth, P. & Neylon, C. (2010) “Altmetrics: A manifesto”, http://altmetrics.org/manifesto/
(2) Cronin, B., Snyder, H.W., Rosenbaum, H., Martinson, A. & Callahan, E. (1998) “Invoked on the Web”, Journal of the American Society for Information Science, Vol. 49, No. 14, pp. 1319-1328.
(3) Vaughan, L. & Shaw, D. (2003) “Bibliographic and web citations: What is the difference?”, Journal of the American Society for Information Science and Technology, Vol. 54, No. 14, pp. 1313-1322.
(4) Kousha, K. & Thelwall, M. (2008) “Assessing the impact of disciplinary research on teaching: An automatic analysis of online syllabuses”, Journal of the American Society for Information Science and Technology, Vol. 59, No. 13, pp. 2060-2069.
(5) Shuai, X., Pepe, A., & Bollen, J. (2012) “How the scientific community reacts to newly submitted preprints: Article downloads, Twitter mentions, and citations”, PLOS ONE, Vol. 7 No. 11, e47523.
(6) Sud, P. & Thelwall, M. (2014) “Evaluating altmetrics”, Scientometrics, Vol. 98, No. 2, pp. 1131-1143.
(7) Mohammadi, E., Thelwall, M., Haustein, S. & Larivière, V. (in press) “Who reads research articles? An altmetrics analysis of Mendeley user categories”, Journal of the Association for Information Science and Technology.
(8) Haustein, S., Peters, I., Sugimoto, C.R., Thelwall, M. & Larivière, V. (in press) “Tweeting biomedicine: An analysis of tweets and citations in the biomedical literature”, Journal of the Association for Information Science and Technology.
(9) Priem, J., & Costello, K.L. (2010) “How and why scholars cite on Twitter”, Proceedings of the American Society for Information Science and Technology, Vol. 47, pp. 1-4.
(10) Thelwall, M., Tsou, A., Weingart, S., Holmberg, K. & Haustein, S. (2013) “Tweeting links to academic articles”, Cybermetrics: International Journal of Scientometrics, Informetrics and Bibliometrics, Vol. 17, No. 1, paper 1.
(11) Holmberg, K. & Thelwall, M. (in press) “Disciplinary differences in Twitter scholarly communication”, Scientometrics.
(12) Weller, K., Dröge, E. & Puschmann, C. (2011) “Citation analysis in Twitter: Approaches for defining and measuring information flows within tweets during scientific conferences”, In Proceedings of Making Sense of Microposts Workshop (# MSM2011).
(13) Franceschet, M. & Costantini, A. (2011) “The first Italian research assessment exercise: A bibliometric perspective”, Journal of Informetrics, Vol. 5, No. 2, pp. 275-291.
(14) Eysenbach, G. (2011) “Can tweets predict citations? Metrics of social impact based on Twitter and correlation with traditional metrics of scientific impact”, Journal of Medical Internet Research, Vol. 13, No. 4, e123.
(15) Shema, H., Bar-Ilan, J. & Thelwall, M. (2014) “Do blog citations correlate with a higher number of future citations? Research blogs as a potential source for alternative metrics”, Journal of the Association for Information Science and Technology, Vol. 65, No. 5, pp. 1018–1027.
(16) Li, X., Thelwall, M. & Giustini, D. (2012) “Validating online reference managers for scholarly impact measurement”, Scientometrics, Vol. 91, No. 2, pp. 461-471.
(17) Thelwall, M., Haustein, S., Larivière, V. & Sugimoto, C. (2013) “Do altmetrics work? Twitter and ten other candidates”, PLOS ONE, Vol. 8, No. 5, e64841. doi:10.1371/journal.pone.0064841
(18) Bar-Ilan, J., Haustein, S., Peters, I., Priem, J., Shema, H. & Terliesner, J. (2012) “Beyond citations: Scholars' visibility on the social Web”, Proceedings of 17th International Conference on Science and Technology Indicators (pp. 98-109), Montréal: Science-Metrix and OST.
(19) Haustein, S., Peters, I., Bar-Ilan, J., Priem, J., Shema, H. & Terliesner, J. (in press) “Coverage and adoption of altmetrics sources in the bibliometric community”, Scientometrics.
(20) Mas Bleda, A., Thelwall, M., Kousha, K. & Aguillo, I. (2013) “European highly cited scientists’ presence in the social web”, In 14th International Society of Scientometrics and Informetrics Conference (ISSI 2013) (pp. 98-109).
(21) Mohammadi, E. & Thelwall, M. (in press) “Mendeley readership altmetrics for the social sciences and humanities: Research evaluation and knowledge flows”, Journal of the Association for Information Science and Technology.
(22) Allen, H.G., Stanton, T.R., Di Pietro, F. & Moseley, G.L. (2013) “Social media release increases dissemination of original articles in the clinical pain sciences”, PLOS ONE, Vol. 8, No. 7, e68914.
(23) Zahedi, Z., Costas, R. & Wouters, P. (in press) “How well developed are Altmetrics? Cross-disciplinary analysis of the presence of “alternative metrics” in scientific publications”, Scientometrics.