Issue 24 – September 2011

Articles

On the assessment of institutional research performance

This article distinguishes between top-down and bottom-up approaches to assessing institutional performance, and between ‘article downloads’ and citation data.



Introduction

A standard way of bibliometrically analyzing the performance of an institution is to select all of its publications and then calculate publication- and citation-based indicators for the institution as a whole. But there are other ways of assessing performance, and these come in top-down and bottom-up varieties. In general, bottom-up approaches tend to produce more reliable results than top-down ones, and also make it possible to look at performance at the level of groups and departments within an institution. Next, we illustrate a new set of indicators based on “usage”.

Top-down and bottom-up approaches 

One of the most challenging tasks in bibliometric studies is to correctly identify and assign scientific publications to the institutions and research departments in which the authors of the paper work. Over the years, two principal approaches have been developed to tackle this task.

 The first is the top-down approach, which is used in many, if not all, ranking studies of universities. In a top-down assessment, one typically notes the institutional affiliations of authors on scientific publications, and then selects all publications with a specific institutional affiliation. Even though this process is very simple, difficulties can arise. These can be conceptual issues (e.g., are academic hospitals always a part of a university?) or problems of a more technical nature (e.g., an institution’s name may appear in numerous variations). A bibliometric analyst must therefore be aware of these potential problems, and address them properly.

The second, bottom-up approach begins with a list of researchers who are active in a particular institution. The next step is to create a preliminary list of all articles published by each researcher, which is then sent to these individuals for verification, producing a verified publication database. This approach allows authors to be grouped into research groups, departments, research fields or networks, or the entire institution to be analyzed.

 While top-down approaches can be conducted more easily than bottom-up studies, mainly because they do not directly involve the researchers themselves, they are often less informative than bottom-up ones. For example, top-down approaches cannot inform managers about which particular researchers or groups are responsible for a certain outcome, nor can they identify collaborations between departments. So despite the ease of use of top-down approaches, there is a need to supplement them with bottom-up analyses to create a comprehensive view of an institution’s performance.

The analysis of usage data

A different method of assessing an institute’s performance is to analyze the ‘usage’ of articles, as opposed to citations of articles. Usage, in our analysis, is measured and quantified in terms of the number of clicks on links to the full text of articles in Scopus.com, which indicates the intention of a Scopus.com user to view a full-text article. Here we use a case study of an anonymous “Institute X” in the United Kingdom as an example of what usage data analysis has to offer.

In this case study, we analyze papers from 2003–2009 and usage data from 2009. We first identified the countries that click through to the full text of articles with at least one author based at Institute X. Next, we determined the total number of full-text UK articles accessed by each country, and calculated the proportion of these that were linked to Institute X (that is, articles with at least one author based at the Institute). Finally, we identified the 30 countries with the highest proportion of downloads of articles affiliated with Institute X. The results are shown in Figure 1.
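
The country-level proportion described above is simple to compute once click-throughs have been aggregated. The following is a minimal sketch, not the actual analysis; the per-country counts are hypothetical stand-ins for Scopus click-through data.

```python
# Hypothetical per-country download counts, used only to illustrate the calculation:
# (all full-text UK articles accessed, of which articles with an Institute X author)
downloads = {
    "United States": (120_000, 9_800),
    "Australia":     (30_000,  2_900),
    "Canada":        (28_000,  2_500),
    "Germany":       (45_000,  2_700),
    "China":         (60_000,  3_100),
}

# Proportion of each country's UK downloads that involve Institute X
share = {country: inst_x / uk_total
         for country, (uk_total, inst_x) in downloads.items()}

# Rank countries by that proportion (Figure 1 reports the top 30)
for country, p in sorted(share.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{country}: {p:.1%}")
```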

Figure 1 – For the Top 30 countries viewing UK articles, the percentage of downloads of articles with at least one author from Institute X compared to downloads of all articles with at least one author from the UK. Source: Scopus.

Figure 1 shows that of the 30 countries clicking through to the greatest number of full-text UK articles, the English-speaking countries of Australia, Canada and the US view the greatest proportion of articles originating from Institute X. This is shown geographically in the map in Figure 2.

Figure 2 – Who is viewing articles from Institute X? Source: Scopus.

Similarly, one can look at downloads per discipline to assess the relative strengths of an institute.

Figure 3 – Relative usage of Institute X’s papers per academic discipline compared with UK papers in the discipline. Relative usage is calculated as follows: (Downloads of Institute X / Papers from Institute X) / (Downloads UK / Papers UK). For Mathematics, Neuroscience, Nursing, Psychology and Health Professions, Institute X’s publications have a higher relative usage than those of the UK as a whole.
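
As a worked illustration of the ratio in the caption above (with hypothetical counts, not Institute X’s actual figures), a value greater than 1 indicates that the institute’s papers in a discipline attract more downloads per paper than UK papers in that discipline overall:

```python
# Hypothetical counts for a single discipline, for illustration only
downloads_x, papers_x = 5_400, 120        # Institute X: downloads and papers
downloads_uk, papers_uk = 310_000, 9_500  # whole UK: downloads and papers

# Relative usage = (downloads per Institute X paper) / (downloads per UK paper)
relative_usage = (downloads_x / papers_x) / (downloads_uk / papers_uk)
print(f"Relative usage: {relative_usage:.2f}")  # 1.38 here, i.e. above the UK average
```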

We can also look at downloads over time. Figure 4 shows the increasing contribution of Institute X’s downloads to all UK downloads, suggesting that Institute X is playing an increasingly important role in UK research.

Figure 4 – Downloads of Institute X’s papers as a percentage of downloads of all UK papers, per year.

As these examples demonstrate, usage data can be used for a number of different types of analyses. One major advantage they have over citation analyses is that citations only accrue in the months and years following publication, as new papers cite the article under analysis. Usage statistics, by contrast, begin to accumulate as soon as an article is available for download, and so can give a more immediate view of how researchers, and the groups and institutes to which they belong, are performing. And while the full meaning and value of usage data remain up for debate, usage analysis nonetheless represents a useful addition to more conventional bibliometric analyses based on citations.


Emerging scientific networks

In countries with an emerging science base, how much do key institutions participate in “outward” versus “inward” collaboration networks? This article shows different patterns in these networks for different countries.



Examining the scientific output of countries around the world is an effective way to identify emerging scientific competencies on national, institutional and topical levels. Various studies in this area have identified up-and-coming countries in South America, Africa, Asia and Europe1–5. These studies not only describe how particular countries actually invest in their science by mapping publications by research topics and disciplines, but can also help identify the factors that stimulate or harm scientific development by pointing to over- or under-investment in particular fields and/or research groups. 

Inward or outward?

In this piece we focus not only on the scientific output as seen in publications, but also on the formation of emerging scientific networks in a number of countries from different geographical regions. These networks were defined in terms of “Inward” and “Outward” connections. “Inward” connections denote scientific collaborations mostly conducted between institutions in the same country; “Outward” connections are those between institutions in different countries. Looking at the Inward/Outward characteristics of these emerging scientific networks reveals the differences between institutions and countries at the level of international versus domestic scientific participation, and also helps identify the specific disciplines and topics that foster such scientific network exchanges.
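
In code, the classification rule reduces to checking whether all collaborating institutions are in the same country. The sketch below is a simplified illustration, assuming affiliations have already been resolved to country names (the example data is hypothetical):

```python
# Classify a paper's collaboration as Inward or Outward from the countries
# of its authors' institutions (assumed to be already resolved from affiliations).

def classify_collaboration(countries: set[str], home_country: str) -> str:
    """Inward if every collaborating institution is in the home country,
    Outward if at least one institution is abroad."""
    return "Inward" if countries <= {home_country} else "Outward"

print(classify_collaboration({"Singapore"}, "Singapore"))                            # Inward
print(classify_collaboration({"Saudi Arabia", "India", "Pakistan"}, "Saudi Arabia"))  # Outward
```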

Countries of interest

This study focuses on the analysis and identification of institutions in selected countries in Africa, Central America, Eastern Europe, Arab nations and South Asia, all of which have shown a surge in scientific output in the past 5 years. The analysis was conducted in four steps. In the first step, a list of selected countries per region was compiled (see Table 1). In the second step, these countries were searched in the Scopus database for 2005–2010 publications. In the third step, the country in each region with the highest number of publications was searched individually in order to identify the institution with the highest scientific output. Finally, using the Scopus Affiliation Profile, further analysis of each institution’s topics and collaborations was carried out. The results are presented in Table 2.

Africa | South Africa, Nigeria, Egypt, Kenya, Tunisia, Algeria, Morocco, Uganda, Namibia, Ghana, Cameroon
Eastern Europe | Estonia, Latvia, Lithuania, Poland, Czech Republic, Slovakia, Hungary, Romania, Bulgaria, Slovenia, Croatia, Bosnia-Herzegovina, Serbia, Kosovo, Albania, Montenegro, Macedonia
Arab Countries | Iran, Iraq, Jordan, Lebanon, Qatar, Saudi Arabia, Syria, Turkey, United Arab Emirates, Yemen
Central & South America | Panama, Costa Rica, El Salvador, Nicaragua, Honduras, Guatemala, Belize, Argentina, Brazil, Bolivia, Chile, Colombia, Ecuador, Paraguay, Peru, Uruguay, Venezuela
South Asia | Indonesia, Laos, Malaysia, Philippines, Singapore, Thailand, Vietnam

Table 1 – List of selected regions and countries.

Table 2 shows the five countries displaying the highest number of publications in the different regions, and the institutions with the largest number of publications, for 2005–2010. These results suggest that the nature of scientific collaborations — Outward or Inward — is not determined by the scientific field under study. For example, both Singapore and Saudi Arabia published significantly in Engineering, yet the two countries show different collaborative characteristics: the former’s collaborations tend to be Inward, while the latter’s are Outward. Similarly, while both South Africa and the Czech Republic are strong in Medicine, they typically engage in Inward and Outward collaborations, respectively.

Close to sight, close to heart

In fact, the distinction between Outward and Inward collaborations obscures the fact that both kinds of collaboration tend to occur between geographically close countries. For example, Saudi Arabia’s Outward collaborations are largely carried out with groups based in India and Pakistan, countries that are relatively close to Saudi Arabia compared with, say, the US or Western Europe. Likewise, while the Czech Republic collaborates with institutions outside its borders, they are typically geographically close (that is, other European countries).

This examination of disciplinary foci and collaborative formations shows that despite the differences in research activities and collaborative trends, collaborations are typically formed between institutions that show relative geographical proximity. This trend could be a result of many factors. For example, researchers may be more likely to form personal connections with colleagues from nearby countries, perhaps because they encounter each other at regional talks and conferences more often than colleagues from countries further afield. In addition, researchers may find it easier to work with colleagues who share the same language, or other cultural characteristics. 

Region | Country | Most productive institution | Dominant disciplines in most productive institution | Collaborative orientation | Most productive institution’s major collaborators
Africa | South Africa | University of Cape Town | Medicine and Agricultural & Biological Sciences | Inward | Univ Stellenbosch; Univ Witwatersrand; Groote Schuur Hospital; South African Medical Research Council
Central America | Costa Rica | Universidad de Costa Rica | Agricultural and Biological Sciences | Outward | Texas A and M Univ; Smithsonian Tropical Research Institute; Univ Nacional Autónoma de México; Univ Sao Paulo
Eastern Europe | Czech Republic | Univerzita Karlova v Praze (Charles University, Prague) | Medicine and Biochemistry | Outward | Institutions in Russia, France and the UK
Arab | Saudi Arabia | King Fahd Univ Petroleum and Minerals | Engineering | Outward | Institutions in IEEE, India and Pakistan
South Asia | Singapore | National University of Singapore | Engineering, Physics and Astronomy | Inward | Inst. Materials Research and Engineering, A-Star; Inst. Infocomm Research, A-Star; Yong Loo Lin School of Medicine

Table 2 – Most productive institutions and their collaborators in five countries.

References

  1. Toivanen, H. & Ponomariov, B. (2011) African regional innovation systems: Bibliometric analysis of research collaboration patterns 2005–2009. Scientometrics, Vol. 88, No. 2, pp. 471–493.
  2. Zhou, P. & Glänzel, W. (2010) In-depth analysis on China's international cooperation in science. Scientometrics, Vol. 82, No. 3, pp. 597–612.
  3. De Castro, L.A.B. (2005) Strategies to assure adequate scientific outputs by developing countries – A Scientometrics evaluation of Brazilian PADCT as a case study. Cybermetrics, Vol. 9, No. 1.
  4. García-Carpintero, E., Granadino, B. & Plaza, L.M. (2010) The representation of nationalities on the editorial boards of international journals and the promotion of the scientific output of the same countries. Scientometrics, Vol. 84, No. 3, pp. 799–811.
  5. Nguyen, T.V. & Pham, L.T. (2011) Scientific output and its relationship to knowledge economy: An analysis of ASEAN countries. Scientometrics, (1 July 2011), pp. 1–11.

Heading for success: or how not to title your paper

Does the choice of a certain title influence citation impact? Research Trends investigates the relevant properties of a scientific title, from length to punctuation marks.



The title of a paper acts as a gateway to its content. It’s the first thing potential readers see, before deciding to move on to the abstract or full text. As academic authors want to maximize the readership of their papers, it is unsurprising that they usually take great care in choosing an appropriate title. But what makes a title draw in citations?

Is longer better?

Bibliometric analyses can be used to illuminate the influence of titles on citations. Jamali and Nikzad, for example, found differences between the citation rates of articles with different types of titles. In particular, they found that articles with a question mark or colon in their title tend to be cited less1. The authors noted that “no significant correlation was found between title length and citations”, a result that conflicts with a study by Habibzadeh and Yadollahie, which found that “longer titles seem to be associated with higher citation rates”2.

Research Trends investigates

Faced with inconsistent evidence, Research Trends decided to conduct its own case study of scholarly papers published in Cell between 2006 and 2010, and their citations within the same window. Overall, there was no direct correlation between title length (measured in number of characters) and total citations. However, comparing the citation rates of articles with titles of different lengths revealed that papers with titles between 31 and 40 characters were cited the most (see Figure 1). There were also differences in the average number of citations per paper depending on the punctuation used in the titles: for instance, the few papers with question marks in their titles were clearly cited less, but titles containing a comma or colon were cited more (see Figure 2). There were no papers with a semicolon in their title, and only one (uncited) paper with an exclamation mark in its title. It is interesting to note that the ten most cited papers in Research Trends’ case study did not contain any punctuation at all in their titles.
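
The statistics behind Figures 1 and 2 amount to grouping papers by a title feature and averaging their citation counts. A minimal sketch of the title-length grouping is shown below; the records are hypothetical stand-ins for the Cell corpus, not the actual data.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (title, citation count) records standing in for the Cell 2006-2010 corpus
papers = [
    ("A short title", 42),
    ("A somewhat longer descriptive title", 75),
    ("Does punctuation matter?", 12),
    ("Title with a colon: and a subtopic", 55),
]

# Bin papers by title length in characters (1-10, 11-20, 21-30, ...)
by_length_bin = defaultdict(list)
for title, citations in papers:
    bin_start = (len(title) - 1) // 10 * 10 + 1
    by_length_bin[f"{bin_start}-{bin_start + 9} chars"].append(citations)

for length_bin, cites in sorted(by_length_bin.items()):
    print(f"{length_bin}: {mean(cites):.1f} citations on average ({len(cites)} papers)")

# The same grouping by punctuation mark ('?', ':', ',') would yield Figure 2.
```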

The authors explain

Research Trends contacted authors from highly cited papers in its corpus for their take on the influence of titles on citations. For some authors, such as Professor Deepak Srivastava — who published a paper in Cell with a title that included three commas3 — the main emphasis when choosing a title is semantics: "We chose a title that would reflect the major findings of the paper and the conclusion we would like the field to derive from the contribution. I don't pay too much attention to the title's effect on citations." Interestingly, different criteria are used for title assignment depending on the type of paper, as explained by Professor Ben Blencowe: "For research articles, I try to use titles that are concise while conveying the most interesting and surprising new results from the study. For review titles, I generally start with the main overall subject followed by a colon and then one or more subtopics that best describe the contents of the review. My 2006 Cell review on alternative splicing4 followed this format. It is not clear to me that this format increases citation impact — I would hope that the overall information content, timeliness and quality of writing in a review are directly related to citation impact! — but using punctuation in this way helps to convey at a glance what the review is about."

 

Are you having a laugh?

Given that straightforwardly descriptive paper titles run the risk of being dull, some authors are tempted to spice them up with a touch of humour, which may be a pun, a play on words, or an amusing metaphor. This, however, is a risky strategy. An analysis5 of papers published in two psychology journals, carried out by Sagi and Yechiam, found that “articles with highly amusing titles […] received fewer citations”, suggesting that academic authors should leave being funny to comedians.

In sum, the citation analysis of papers according to title characteristics is better at telling authors what to avoid than what to include. Our results, combined with others, suggest that a high-impact paper should be neither too short nor too long (somewhere between 30 and 40 characters appears to be the sweet spot for papers published in Cell). It may also be advisable to avoid question marks and exclamation marks (though colons and commas do not seem to have a negative impact on subsequent citation). And even if you think you have a clever joke to work into a title, it probably won’t help you gain citations. Finally, while a catchy title can help get readers to look at your paper, it’s not going to turn a bad paper into a good one.

Figures

 

Figure 1 – Average number of citations per paper by title length for papers published in Cell 2006–2010, and their citations within the same window. Data labels show number of papers. Source: Scopus.

 

Figure 2 – Average number of citations per paper by punctuation mark for papers published in Cell 2006–2010, and their citations within the same window. Data labels show number of papers. Source: Scopus.

References

  1. Jamali, H.R. & Nikzad, M. (2011) Article title type and its relation with the number of downloads and citations. Scientometrics, online first. DOI: 10.1007/s11192-011-0412-z
  2. Habibzadeh, F. & Yadollahie, M. (2010) Are Shorter Article Titles More Attractive for Citations? Cross-sectional study of 22 scientific journals. Croatian Medical Journal, Vol. 51, No. 2, pp. 165–170.
  3. Zhao Y., Srivastava D., Ransom J.F., Li A., Vedantham V., von Drehle M., Muth A.N., Tsuchihashi T., McManus M.T. & Schwartz R.J. (2007) Dysregulation of cardiogenesis, cardiac conduction, and cell cycle in mice lacking miRNA-1–2. Cell, Vol. 129, No. 2, pp. 303–317.
  4. Blencowe, B.J. (2006) Alternative splicing: new insights from global analyses. Cell, Vol. 126, No. 1, pp. 848–858.
  5. Sagi, I. & Yechiam, E. (2008) Amusing titles in scientific journals and article citation. Journal of Information Science, Vol. 34, No. 5, pp. 680–687.

A “democratization” of university rankings: U-Multirank

Research Trends reports on a new multi-dimensional approach to ranking universities. Its unique property is that it does not collapse different scores into one overall score, thereby increasing transparency.



The rise of university ranking systems has engendered status anxiety among many institutions, and created a “reputation race” in which they strive to place higher up the university charts year on year. Concerns have been aired that this is leading to a homogenization of the university sector, as aspiring institutions imitate the model of more successful research-intensive institutions. And while ranking scores do capture an important aspect of each university’s overall quality, they don’t speak to a diverse range of other issues, such as student satisfaction within these institutions.

U-Multirank is a new initiative to change this. The system — designed and tested by the Consortium for Higher Education and Research Performance Assessment, and supported by the European Commission — aims to increase transparency in the information available to stakeholders about universities, and encourage functional diversity of the institutions1. Unlike traditional university rankings such as the ARWU2, QS3 and THE4 rankings, U-Multirank features separate indicators that are not collapsed into an overall score. In this article Frans van Vught, project leader of U-Multirank, discusses development of the system and his hopes for it.

Traditional university ranking systems encourage institutions to focus on areas that carry the greatest ranking weight, such as scientific research performance. One benefit of these rankings is that they publicize the achievements of universities that perform well, albeit in this specific range of activities. Will U-Multirank move away from a culture of looking for success stories?

U-Multirank is a multi-dimensional, user-driven ranking tool, addressing the functions of higher education and research institutions across five dimensions: research, education, knowledge exchange, regional engagement and international orientation. In each dimension it offers indicators to compare institutions. In this sense it certainly focuses on the goals institutions set themselves. But unlike most current rankings, U-Multirank does not limit itself to one dimension only (research). It allows institutions to show whether they are winners or improvers over a range of dimensions.

As it is impossible to directly measure the quality of an institute, proxy measures, such as graduation rates and publication output, have to be used instead. Yet as Geoffrey Boulton argues, “[i]f ranking proxies are poor measures of the underlying value to society of universities, rankings will at best be irrelevant to the achievement of those values, at worst, they will undermine it.”5 What criteria have you considered when selecting indicators, and are there indicators you would like to include but cannot at present?

When ranking in higher education and research we need to work with proxy indicators, since a comprehensive and generally acceptable set of indicators for ‘quality’ does not exist. Quality and excellence are relative concepts and can only be judged in the context of the purposes stakeholders relate to these concepts. Quality in this sense is ‘fitness for purpose’, and purposes are different for different stakeholders.

For the selection of U-Multirank’s indicators we made use of a long and intensive process of stakeholder consultation, which included a broad variety of stakeholders, including the higher education and research institutions themselves. This stakeholder consultation reflected the criterion of ‘relevance’ in the process of indicator selection. In addition we used the criteria of validity, reliability, comparability and feasibility. For ‘feasibility’ we focused on the availability of data and the effort required to collect extra data. We tried to ensure that data availability would not become the most important factor in the selection process. However, the empirical pilot test of the feasibility of U-Multirank indicators showed that particularly in the dimensions of ‘knowledge exchange’ and ‘regional engagement’ data availability is limited.

A recent report drew attention to U-Multirank’s ‘traffic light’ rating system, commenting that “institutions should not be ranked on aspects that they explicitly choose not to pursue within their mission.”6 Is this a valid criticism? Could it lead — as the authors suggest — to a decrease in functional diversity as “institutions compet[e] to avoid being awarded a poor ranking against any of the criteria”?

I think this argument is invalid. U-Multirank is user-driven. This is based on the fundamental epistemological position that any description of reality is conceptually driven; rankings imply a selection of reality aspects that are assumed to be relevant. Any ranking reflects the conceptual framework of its creator, who should therefore be a user of the ranking. U-Multirank is a ‘democratization’ of rankings.

We designed a tool that allows users to select the institutions or programs they are interested in. This is U-Map7, a mapping instrument that allows the selection of institutional activity profiles. In U-Multirank only comparable institutions are compared: apples are compared with apples, not oranges. Institutions that do not pursue certain mission aspects should not be compared on these aspects. U-Multirank is designed to avoid this, so as not to encourage imitation or discourage functional diversity. On the contrary, U-Multirank shows and supports the rich diversity in higher education systems.

However harmful to the goal of encouraging diversity, traditional ranking systems have the advantage that people know how to read them: the simplest comparison between universities is seeing which has the higher rank. Will U-Multirank’s users need guidance to compare institutions?

We hope to address both the wish to have a general picture of institutional performances and the wish to go into detail. U-Multirank offers a set of presentation modes that allow both a quick and general overview of multidimensional performance on the one hand, and a more detailed comparison per dimension on the other. Testing these presentation modes with different groups of stakeholders showed that our approach was highly appreciated and additional guidance was not needed. The general overview is presented in the so-called ‘sunburst charts’ that show a multidimensional performance profile per institution (see Figure 1). The detailed presentations are offered as tables in which performance categories are shown per indicator. 

Figure 1 – U-Multirank’s ‘sunburst’ charts “[give] an impression ‘at a glance’ of the performance of an institution”.8 The charts show the performance of each institution across a number of indicators, with one ‘ray’ per indicator: where an institution ranks highly in an indicator, the ‘ray’ is larger. These indicators are grouped into categories around the chart. These two charts show the performance of two institutions: a large Scandinavian university (top) and a large southern European university (bottom).

Curriculum Vitae: Frans van Vught

Frans van Vught (1950) is a high-level expert and advisor at the European Commission. In addition he is president of the European Center for Strategic Management of Universities (ESMU), president of the Netherlands House for Education and Research (NETHER), and member of the board of the European Institute of Technology Foundation (EITF), all based in Brussels. He was president and Rector of the University of Twente, the Netherlands (1997–2005). He has been a higher education researcher for most of his life and published widely in this field. His many international functions include membership of the University Grants Committee of Hong Kong, the board of the European University Association (EUA) (2005–2009), the German ‘Akkreditierungsrat’ (2005–2009), and the L.H. Martin Institute for higher education leadership and management in Australia. Van Vught is a sought-after international speaker and has been a consultant to many international organizations, national governments and higher education institutions all over the world. He is honorary professor at the University of Twente and at the University of Melbourne, and holds several honorary doctorates.

References:

  1. http://www.u-multirank.eu/
  2. http://www.arwu.org/index.jsp#
  3. http://www.topuniversities.com/university-rankings/world-university-rankings
  4. http://www.timeshighereducation.co.uk/world-university-rankings
  5. Boulton, G. (2010). University rankings: Diversity, excellence and the European initiative. League of European Research Universities Advice Paper. No. 3, June.
  6. Beer, J. et al. (2011). Let variety flourish. Times Higher Education, 2 June.
  7. http://www.u-map.eu/
  8. U-Multirank (2011). The design and testing the feasibility of a multi-dimensional global university ranking. Draft version for distribution at the U-Multirank conference, Brussels, Thursday 9 June 2011.
VN:F [1.9.22_1171]
Rating: 0.0/10 (0 votes cast)

Mapping & Measuring Scientific Output

This event focused on scientific output measurements, methodologies and mapping techniques. Research Trends summarizes and reports back.



Hundreds of delegates participated in a day-long symposium on scientific evaluation metrics, held in Santa Fe, New Mexico, on May 10th, 2011, the result of a collaboration between Elsevier and Miriam Blake, Director of the Los Alamos National Laboratory (LANL) Research Library. The symposium focused on scientific output measurements, methodologies and mapping techniques, and was broadcast globally from the conference ballroom, where world-renowned speakers presented and discussed existing and emerging metrics used to evaluate the value and impact of research publications.

Measuring the impact and value of scientific publications is critical as governments increasingly seek to distribute research funds in ways that support high-quality research in strategically important fields1. Over the years, several methodologies and metrics have been developed to address some of the many factors that can be taken into consideration when assessing scientific output. New evaluative metrics have emerged to capitalize on the wealth of bibliometric data in analyzing citation counts, article usage, and the emergence and significance of collaborative scientific networks. In addition, ever-increasing computational power enables rigorous relative and comparative analyses of journal citations and publication relationships to be calculated and used in unique ways, including a variety of visualization solutions.

Symposium themes generated from registrants’ feedback and comments

New developments

The symposium offered insight into the topic of research evaluation metrics as a whole. The discussion was headed by Dr. Eugene Garfield and Dr. Henk Moed, who addressed established and emerging trends in bibliometric research, both stressing the necessity of using more than one method to accurately capture the impact of research publications and authors. Emerging and innovative approaches using journal and scientific networks, weighted reference analysis and article usage data were also presented. Dr. Jevin West discussed the latest developments in the EigenFactor (http://www.eigenfactor.org); Dr. Henry Small unveiled a new method for text mining citations in emerging scientific communities; Dr. Johan Bollen presented the MESUR project (http://www.mesur.org/MESUR.html); and Dr. Kevin Boyack (http://mapofscience.com) demonstrated how co-citation analysis can be used to identify emerging and established scientific competencies within institutions as well as countries. The methodological discussion was accompanied by a demonstration of visualization solutions that capture these relationships and enable a broad view of scientific trends, networks and research foci.

More than a thousand words

The power of visualizing such networks was demonstrated by Dr. Katy Börner, who headed the discussion on the variety and diversity of scientific mapping tools. Dr. Börner brought a wealth of examples through the “Places & Spaces” (http://scimaps.org/) exhibition, which was displayed in the conference room and via her presentation. Maps for scientific policy and economic decision makers, along with maps for forecasting and research references, were among the examples displayed and discussed by Dr. Börner. This was followed by Mr. Bradford Paley, who discussed visual and cognitive engineering techniques that support the analysis of scientometric networks (http://wbpaley.com/brad/Elsevier.html).

Multidimensionality

With the evident paradigm shift from print and paper to official and unofficial online networks, and the wealth of usage data they offer, the main discussion point during the symposium was the need for multidimensional measurements that capture and represent the complex arena we call “Scientific Impact”. Today, using a single method, value or score to determine whether a researcher, research group or institution is indeed impactful seems invalid. If there is a lesson to be learned from this research event, it is that the scientific community has to find the correct and fair balance between a variety of computational metrics and qualitative peer-review processes.

Presentations & Audio recordings of this event are available:
http://www.elsevier.com/wps/find/librarianshome.librarians/LCPresentations

References:

  1. OECD (2010), Performance-based Funding for Public Research in Tertiary Education Institutions: Workshop Proceedings, OECD Publishing. http://dx.doi.org/10.1787/9789264094611-en

Did you know

…That there are now more than 37 h-index variants?

Since it was proposed by physicist Jorge Hirsch in 20051, the h-index has become a popular bibliometric measure for evaluating scientists, and has featured regularly in Research Trends2–4. The simplicity and intuitiveness of the h-index have contributed to its popularity, but also to criticism from a community wishing for more precise and unbiased measures, as the h-index tends to favor late-career scientists. As a consequence, several corrections to the metric have been put forward: a recent paper5 has identified no fewer than 37 h-index variants that have emerged in the past 6 years. Interestingly, the study found high levels of correlation between the h-index and most variants, suggesting that many of them tend to measure the same aspect.
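
For reference, the h-index itself is easy to compute: a researcher has index h if h of his or her papers have each received at least h citations. A minimal sketch (with hypothetical citation counts):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each (Hirsch, 2005)."""
    cites = sorted(citations, reverse=True)
    h = 0
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3
```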

1. Hirsch, J.E. (2005) An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America, Vol. 102, No. 46, pp. 16569–16572.
2. Egghe, L. (2007) From h to g: the evolution of citation indices. Research Trends, September.
3. Bornmann, L. (2008). The h-index and its variants: which works best? Research Trends, May
4. Plume, A. (2009). Measuring up: how does the h-index correlate with peer assessments? Research Trends, May.
5. Bornmann, L., Mutz, R., Hug, S.E. & Daniel, H.D. (2011) A multilevel meta-analysis of studies reporting correlations between the h index and 37 different h index variants. Journal of Informetrics, Vol. 5, No. 3, pp. 346–359.
