Issue 22 – March 2011

Articles

The Research Excellence Framework: revisiting the RAE

In 2014, the Research Excellence Framework will finally replace the Research Assessment Exercise. Research Trends provides an interim report on how the REF is shaping up, and the role of bibliometrics in the new framework.

Read more >


In January 2008, Research Trends brought you a detailed overview of the UK’s Research Assessment Exercise (RAE) — then entering its final iteration — along with a look ahead at the RAE’s successor, the Research Excellence Framework (REF)1. To supplement this, Henk Moed described the behavior-changing effect research evaluation — and by extension any bibliometric indicators used in such evaluation — can have on institutions; and Bahram Bekhradnia provided a cautionary take on emphasizing bibliometrics in the REF2–3. In the three years since, the Higher Education Funding Council for England (HEFCE) has carried out consultations and pilot exercises, including one focusing on the use and value of bibliometrics. So halfway to its expected completion in 2014, how does the REF look?

A firm presumption

Bibliometric indicators were intended to play a large part in the REF; in fact, in the early stages of its development, “the Government [had] a firm presumption that after the 2008 RAE the system for assessing research quality … [would] be mainly metrics-based.” The suggested reason: to reduce “some of the burdens imposed on universities”4.

In 2009 HEFCE conducted a pilot exercise of bibliometrics for the REF, which showed “general but cautious support” for using citation data to complement — but not replace — peer review. Ironically, one stated concern was “the cost and burden involved”; but unease extended to the value such information provides5. Confronted with such concerns, HEFCE concluded that citation information should inform expert review, rather than act as a primary indicator of quality; and further that the use of citation data should be an option available to sub-panels, rather than an imposed requirement. As Bahram Bekhradnia stated in his 2009 critique of the REF: “The process now proposed is radically different [from the initial, metrics-based proposals], and will recognizably be a development of the previous Research Assessment Exercises.”6

The recession of metrics

The widespread economic recession of the past few years has affected governmental policies in practically every area, and the REF has been no exception. With purse strings tightening, and financial concerns at the heart of current political debate, people naturally asked whether university research should be more accountable to the economy. And so a new component of the REF was placed alongside research environment and quality: impact.

The impact metric covers the economic and social impact of research, and accounts for 25% of the overall score in the REF. The inclusion of this measure rapidly shifted the focus of discussions about the REF onto impact, with some academics speaking out about the incompatibility between such a metric and curiosity-driven research7. As attention shifted, bibliometrics faded from view; this was compounded when David Willetts, Minister for Universities and Science, announced a year-long delay to the REF to allow a comprehensive discussion of the impact component, and a pilot exercise was conducted to develop a practical method of assessing impact8–9.


Building the Framework

Over the last few months, chairs have been appointed to the main and sub-panels of the REF, and these panels will likely carry more weight than under the original, metrics-based plans. The next step will be the appointment of panel members this year, along with more detailed guidance on submissions and assessment criteria10; alongside this, HEFCE recently put out a call for tenders for the provision of bibliometric data. Under its current timetable, the REF will inform funding from 2015 onwards; and with public-sector budget cuts affecting higher education institutions, there will be pressure on HEFCE to get the new system right first time.

One question that deserves renewed attention, after the abrupt shift of focus to impact, is whether bibliometrics should have a greater role in the assessment of research. The Framework has returned to its RAE roots with a confidence in expert review to the near-exclusion of statistics; but as one report following the bibliometrics pilot says, “[t]he trust in the infallibility of peer review is striking, and feels rather contrived … in light of the numerous studies carried out over the years that note it has limitations as well as great strengths”. Bibliometric indicators have their own limitations, and caveats must be applied to conclusions drawn from them; but where peer review is deemed critical, “it seems incontrovertible that such judgments will be more robust for having considered multiple streams of data and intelligence, both subjective and objective.”11

Assessing research around the world

• In Australia, the Research Quality Framework (RQF) was replaced by the Excellence in Research for Australia (ERA) initiative. The first full evaluation under this system was conducted in 2010, with another round to follow in 2012.

• Funding for Flemish institutions in Belgium is determined in part by the so-called BOF-key, a funding parameter that has incorporated bibliometric indicators since 200312.

• Set up in 2007, AERES evaluates France’s higher education institutes. The agency carries out on-site inspections, and its findings are used to direct funding.

• Italy’s Valutazione Triennale della Ricerca (VTR) ran in 2003; six years later, outcomes of the assessment were used for the first time to allocate funds. The system was fully based on peer review13. Its successor exercise, the Valutazione Quinquennale della Ricerca (VQR), brings citation analysis alongside peer review as an option for assessment.

• The Performance-Based Research Fund (PBRF) is New Zealand’s tertiary education funding model, of which 60% is determined by Quality Evaluation of research. This is assessed by expert peer review; the next assessment will take place in 2012.

References:

1. Research Trends (2008) “The RAE: measuring academic research”, Issue 3.

2. Moed, H.F. (2008) “The effects of bibliometric indicators on research evaluation”, Research Trends, Issue 3.

3. Bekhradnia, B. (2008) “What is the best way to assess academic research?”, Research Trends, Issue 3.

4. HM Treasury (2006) “Science and innovation investment framework 2004–2014: next steps”.

5. HEFCE (2010) “Research Excellence Framework: Second consultation on the assessment and funding of research: Summary of responses”.

6. Bekhradnia, B. (2009) “Proposals for the Research Excellence Framework — a critique”, HEPI Reports.

7. Connor, S. (2010) “Nobel laureates: don’t put money before science”, The Independent (7 January).

8. Department for Business, Innovation and Skills. (2010) “David Willetts announces review of the impact requirement in the Research Excellence Framework”, News Distribution Service (9 July).

9. Gilbert, N. (2010) “Experts will assess UK research ‘impact’ to award funding”, Nature News (11 November).

10. HEFCE (2010) “Research Excellence Framework timetable”. http://www.hefce.ac.uk/research/ref/timetable.pdf

11. Technopolis (2009) “Identification and dissemination of lessons learned by institutions participating in the Research Excellence Framework (REF) bibliometrics pilot”, Report to HEFCE.

12. Debackere, K. & Glänzel, W. (2004) “Using a bibliometric approach to support research policy making: The case of the Flemish BOF-key”, Scientometrics, Vol. 59, No. 2, pp. 253–276.

13. Franceschet, M. & Costantini, A. (2011) “The first Italian research assessment exercise: A bibliometric perspective”, Journal of Informetrics, Vol. 5, No. 2, pp. 275–291.

Useful links:

HEFCE's bibliometrics pages


University rankings – what do they measure?

Universities have their own “Top 40” charts, in the form of worldwide rankings of university excellence. Research Trends explores what determines the ranking of institutions, and whether regional differences in higher education make meaningful international comparisons problematic.

Read more >


In Issue 9 Research Trends examined the Times Higher Education-QS World University Rankings, and we explored how the rankings of institutions in different countries have changed over the years. In this article we revisit university rankings from a country and regional perspective.  Two of the most widely known world university rankings — the THE-QS World University Rankings* and the Academic Ranking of World Universities (ARWU) produced by Shanghai Jiaotong University — measure the performance of universities using a range of indicators, which are combined into an overall score that is used to determine the university’s rank. But how do different countries and regions perform for the various indicators on which their overall scores, and therefore rankings, are based? We investigate this question using data from the 2009 ranking exercises.

*In 2010 the THE-QS ranking split into two new ranking schemes: the first, produced by QS, continued with the same methodology; the second, produced by THE in collaboration with the information company Thomson Reuters, used modified methodologies, indicators, and data sources.

International flavour

The THE-QS ranking uses six indicators: academic peer review; employer review; faculty to student ratio; citations per faculty member; proportion of faculty that are international; and proportion of students that are international. An overall analysis of the average values for these indicators by country reveals an interesting pattern in the two ‘internationality’ indicators (see Figure 1): these measures tend to be higher in small, wealthy countries with a strong research base, such as Singapore, Ireland, Switzerland, and Hong Kong. Because these countries have a small domestic pool of students and researchers relative to the global pool, international students and researchers tend to be strongly represented at their universities. We also see high values of these measures for countries with a global research reputation, an English-language culture, and strong international links (see Figure 1), such as the UK (popular with students globally, due to its historic research culture and reputation) and Australia, a popular higher education centre for the South Asia region. Interestingly, the US, whose institutions dominate the THE-QS top 200, scores relatively low on measures of ‘internationality’; this could be attributable to a number of factors, including regional isolation, high costs, and the large pool of domestic students and researchers available to populate its universities.

Figure 1 – Average scores for ‘internationality’ indicators in the 2009 THE-QS World University rankings, selected countries.
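Mechanically, both schemes work the same way: each indicator is scored on a common scale, and a weighted sum produces the overall score used for ranking. A minimal sketch, using the THE-QS indicator names from the list above; the weights shown are illustrative placeholders rather than the scheme’s published weightings:

```python
# Weighted-sum scoring as used by ranking schemes. The weights below are
# ILLUSTRATIVE placeholders, not the actual THE-QS weightings.
WEIGHTS = {
    "academic_peer_review": 0.40,
    "employer_review": 0.10,
    "faculty_student_ratio": 0.20,
    "citations_per_faculty": 0.20,
    "international_faculty": 0.05,
    "international_students": 0.05,
}

def overall_score(indicators: dict) -> float:
    """Combine per-indicator scores (each on a 0-100 scale) into one score."""
    return sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS)

# A hypothetical university scoring 80 on every indicator.
example = {k: 80.0 for k in WEIGHTS}
print(overall_score(example))  # → 80.0
```

Because country-level averages of individual indicators differ systematically, as the figures in this article show, the choice of weights can materially reorder institutions across regions.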

Language bias

The ARWU scheme also uses six indicators: major academic awards to alumni; major academic awards to staff; researchers in Thomson’s Highly Cited lists; publications in Nature and Science; volume of publications in Thomson’s Science Citation Index; and per capita performance (a score based on the first five indicators that is weighted by the number of staff at the university). A country-by-country breakdown of indicator scores (see Figure 2) reveals that countries with a strong English language culture perform well for the Highly Cited indicator (a measure based on data from the Web of Science database, which has a strong English language emphasis).

Figure 2 – Average scores for each indicator in the 2009 ARWU ranking, selected countries. Alumni = Weighted count of alumni winning Nobel Prizes or Fields Medals; Award = Weighted count of staff winning Nobel Prizes or Fields Medals; HiCi = Count of highly cited researchers in 21 subject categories; N&S = Count of articles published in Nature and Science in the most recent 5 years; PUB = Count of articles indexed in the Science Citation Index-Expanded plus double the count of articles indexed in the Social Science Citation Index in the most recent year; PCP = Per Capita Performance: weighted scores of the above five indicators divided by the number of FTE academic staff. Asterisks (*) indicate countries with a notably high HiCi score.

Scaling up to regional level in the ARWU rankings, it emerges that institutions based in North America (the US and Canada) outperform institutions in other regions on average, according to the Highly Cited and Nature/Science publication indicators, both of which are measures of high impact research (see Figure 3). In contrast, we see that institutions in the Asia-Pacific region perform poorly for the two indicators that measure major awards to alumni and staff.

Figure 3 – Average scores by region, for indicators in the 2009 ARWU ranking. Indicators as per caption for Figure 2.

In 2009, both the THE-QS and ARWU university ranking schemes showed substantial country-level and regional-level differences in indicator scores. This raises the question of whether the indicator scores effectively measure the quality of individual universities, or whether they are too strongly influenced by global variation in the higher education/research system to allow meaningful comparisons between institutions that are located in different geographical zones.

Useful links:

THE-QS World University Rankings

Academic Ranking of World Universities


An update on Obama and American science: Uncovering US competencies

President Obama has emerged as a staunch supporter of science, and an enthusiastic advocate of its power for social good. Yet does the US have what it takes to deliver on this potential? Research Trends finds out.

Read more >


During months of presidential campaigning, Barack Obama spoke with great energy and enthusiasm about the power and promise of science. Soon after his election in November 2008, Obama took a bold stand for making decisions based on science and announced that he had assembled a scientific ‘Dream Team’, bringing together the highest-impact scientists working in the US today to provide policy advice1. In his inauguration speech at the start of 2009, Obama’s priorities in science became evident when he spoke of “…wield[ing] technology's wonders to raise health care's quality and lower its costs; harness the sun, winds and the soil to fuel our cars and run our factories; transform[ing] our schools and colleges and universities to meet the demands of a new age”2. In February 2009, Obama signed into law the American Recovery and Reinvestment Act (ARRA), a stimulus package setting aside US$21.5 billion for federal research and development funding — one of the largest increases in research funding in decades3,4.

President Barack Obama. Image from www.whitehouse.gov.

American science in the spotlight

Obama’s belief in the power of science and technology is not in doubt. But how likely is he to succeed in achieving breakthroughs in his target fields of clean energy, biomedical research and information technology? More specifically, within these broad, interdisciplinary fields of science, where do the US’s (hidden) competencies lie? Research Trends uses SciVal Spotlight to find out.

SciVal Spotlight is a web-based strategic analysis tool, based on Scopus data, which offers an interdisciplinary perspective on research performance, helping institutional and government leaders identify their institution’s and/or country’s academic strengths. Rather than evaluating research performance via the traditional broad classification of journals into main subject areas, SciVal Spotlight follows a bottom-up aggregation of research activity, classifying all articles published within a given institution or country based on co-citation analysis. At the country level, it creates ‘country maps’ that illustrate academic performance across scientific fields, as well as in relation to other countries, providing a much more detailed view of clustered research output per country5,6.
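The co-citation idea behind this bottom-up aggregation is simple: two articles are co-cited whenever a third article cites both, and high co-citation counts signal that they belong to the same research niche. A toy illustration with hypothetical data follows; Spotlight’s actual clustering algorithm is proprietary and considerably more involved:

```python
# Toy co-citation count. Each inner list is the reference list of one
# citing paper; "A", "B", "C" are hypothetical cited articles.
from collections import Counter
from itertools import combinations

reference_lists = [
    ["A", "B", "C"],
    ["A", "B"],
    ["B", "C"],
]

cocitations = Counter()
for refs in reference_lists:
    # Every unordered pair in one reference list is a co-citation event.
    for pair in combinations(sorted(refs), 2):
        cocitations[pair] += 1

print(cocitations[("A", "B")])  # A and B are cited together by 2 papers
```

Clustering articles by such counts, rather than by the subject category of the journal they appeared in, is what lets the tool surface interdisciplinary niches that journal-level classifications would split apart.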

Overall, Spotlight distinguishes 13 main research areas, with 554 underlying scientific disciplines spread between them. A Spotlight view of all US academic papers published over the five years ending 2009 reveals 1,707 distinctive competencies (DCs) within the US (that is, niches of excellence for which the US has a relatively large article market share; see Figure 1). These DCs become most informative when one drills down to see the main keywords, journals, top disciplines, top authors and number of articles associated with them.

Figure 1 - SciVal Spotlight map for the five years ending 2009, showing 1,707 distinctive competencies (DCs). For more information on the Spotlight approach, see www.scival.com/spotlight.

US distinctive competencies

So which — and how many — of these DCs relate to disciplines that are central to Obama’s vision of the promise of science? Given the breadth of these key fields, we chose to search on three disciplines that are sure to underlie them: “biotechnology”, “energy fuel” and “computer networks”.

Our search on “biotechnology” revealed 24 DCs — that is, 24 areas within the broad field of biotechnology that the US excels in — which encompass studies varying from analysis of gene expression, metabolic acids, and the plasma membrane to enzymatic hydrolysis and ethanol production, with the percentage of articles by authors from US institutions (“article market share”) ranging between 30% and 54%. In the “energy fuel” discipline the US has six DCs, one of which (for example) relates to studies of carbon dioxide and supercritical carbon dioxide. Just three institutions — the University of Texas at Austin, the University of North Carolina at Chapel Hill and North Carolina State University — account between them for a 30% article market share on these topics. The “computer networks” discipline revealed five DCs, of which one, with an article market share of nearly 50%, includes studies on such topics as energy consumption and energy efficiency — a perfect example of one of Obama’s key fields overlapping that of another.

Although a much more thorough and in-depth analysis is required to get a complete answer to our leading question — where do the US’s underlying competencies lie? — with 1,707 DCs related to the research fields he aims to invest heavily in, Obama can rest assured: the US has great potential to meet the scientific and technological goals his administration has in its sights.

References:

1. Research Trends (2009) “Obama’s Dream Team”, Issue 3.

2. BBC News (January 2009) “Barack Obama's inaugural address in full”.

3. Reisch, M.S. (2009) “Equipping the science agenda”, Chemical and Engineering News, Vol. 87, No. 33, pp. 13–16.

4. SciVal White Paper (2009) “Navigating the Research Funding Environment”.

5. SciVal Spotlight Prospectus “Establish, Evaluate and Execute Informed Strategies”.

6. ElsevierNews (2009) “Elsevier Launches SciVal Spotlight: New Tool Provides Multidisciplinary View Of Research Performance”.

Useful links:

SciVal Spotlight

US Government Recovery Act


Science, music, literature and the one-hit wonder connection

Most scientists dream of getting their work into top journals such as Science and Nature. But does achieving this goal reflect the brilliance of researchers, or simply scientific good luck? Professor Isaiah T. Arkin at The Hebrew University of Jerusalem explores the issues in this guest feature.

Read more >


It is a well-known fact that publishing in Science or Nature, the scientific world’s top journals, is incredibly difficult. Though the two journals are near-compulsory reading for any scientist, most researchers never get a chance to air findings in their pages. Yet in the event of success, one’s career may take a turn for the better, with doors opening to lucrative academic positions, conference invitations, funding possibilities, and more.

Chance favors the prepared mind

What then does it take for a scientist to publish in Science or Nature? Is it that those who publish in the “top two” are simply better scientists, in terms of skill, funding, infrastructure, co-worker availability and so on? Or is publication simply a matter of chance that depends on researchers stumbling upon an interesting finding? Clearly, both factors are important for success, as eloquently stated by Pasteur1, yet their relative contributions remain unknown.

In an attempt to address this question, an analysis was undertaken to estimate the repeat probability of publication in Science or Nature. The rationale: if most publications in the top journals are by authors who publish in them repeatedly, then sheer chance cannot be a major contributing factor to publication.

Yet if a publication in Science or Nature is a singular event, then one might conclude that the success might have been fortuitous, in a sense that the same individual is unlikely to publish there ever again. The results of such an analysis on 37,181 Science and 28,004 Nature publications are presented in Figure 1. Of these, 71% are by authors who have just one Science or Nature paper to their credit, with 15% of papers by researchers with two, and 6% with three. Interestingly, a slightly more polarized distribution is obtained when analyzing repeat publications by “last authors”, taken to represent the principal scientist of a particular study. Here, 74% of last authors in Science or Nature are unlikely to be last authors again in the same venue.

Figure 1 - Percentage of all publications in Science and Nature as a function of the number of publications per individual researcher (all authors or last authors). In order to focus on scientific publications, rather than editorials and commentaries, the following limiting criteria for a publication’s “eligibility” were used: the presence of an abstract; no review qualifier in the PubMed database; and article length of at least three pages. Finally, in an attempt to minimize grouping publications from different individuals, only publications in which the author has at least two initials were selected for analysis. The bibliographic database used was the PubMed portal of the United States National Library of Medicine. Also shown is an analysis of authors whose books reached the top of the New York Times’ bestsellers list (according to the data assembled by Hawes Publications). A similar analysis is also presented of the probability of musical artists (both groups and individuals) repeatedly placing their songs in the top 40 chart based on data compiled by the MBG top 40.
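The eligibility criteria listed in the caption above amount to a simple record filter. A sketch of that filter follows; the record format here (a plain dict) is a hypothetical stand-in for parsed PubMed records, not an actual PubMed schema:

```python
# Eligibility filter per the caption's criteria: has an abstract, is not
# tagged as a review, is at least three pages long, and the author has at
# least two initials (to reduce grouping different people under one name).
def eligible(record: dict) -> bool:
    has_abstract = bool(record.get("abstract"))
    not_review = "Review" not in record.get("publication_types", [])
    long_enough = record.get("n_pages", 0) >= 3
    initials_ok = len(record.get("author_initials", "")) >= 2
    return has_abstract and not_review and long_enough and initials_ok

paper = {"abstract": "…", "publication_types": ["Journal Article"],
         "n_pages": 5, "author_initials": "IT"}
print(eligible(paper))  # → True
```

Filters like this trade recall for precision: genuine research articles with single-initial authors are discarded, which the caption accepts as the price of cleaner author grouping.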

These findings suggest the following conclusions:

• There is less than a 30% chance of repeat publication in Science or Nature. Moreover, the odds are slightly worse for repeating last authors. Thus a scientist who has published in the top two journals is unlikely to repeat the endeavor, by odds of well over 2:1.

• The chances of publishing repeatedly in Science or Nature are slightly smaller for the principal authors of the work in comparison with the other authors.

• The above potential success rate of repeat publication in Science or Nature is much higher than that of an “average” scientist, whose probability of publishing in Science or Nature is vanishingly small. Thus a publication in Science or Nature is an indication that the scientist is far more likely to publish there again compared with one who has not done so.

• Despite being in the minority, there is a definitive proportion of articles in Science or Nature that are published by authors that do so repeatedly. This list includes, not surprisingly, some of the most famous and influential scientists of our times.

Taken together, since most articles in the top two scientific journals are written by authors who are unlikely ever to publish there again, these articles may be vernacularly classified as “one-hit wonders”.
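The repeat-chance figures quoted above follow directly from the distribution in Figure 1; a back-of-the-envelope check, using the paper-level percentages reported in the text:

```python
# Share of Science/Nature papers whose author has exactly one paper there,
# as reported in the text (paper-level shares, per Figure 1).
all_authors_one_hit = 0.71
last_authors_one_hit = 0.74

for label, p in [("all authors", all_authors_one_hit),
                 ("last authors", last_authors_one_hit)]:
    # The complement of the one-hit share is the repeat chance.
    print(f"{label}: repeat chance {1 - p:.0%}")
```

This yields 29% for all authors and 26% for last authors, matching the “less than a 30% chance” above and the slightly worse odds for last authors.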

The one-hit wonder phenomenon

In line with the above, it is intriguing to repeat this analysis on other human creative endeavors, such as literature and music. Thus one can compare the sporadic nature of scientific productivity (as manifested in publications in the top two scientific journals) with other human vocations. Specifically, the analysis was repeated, searching for singers (or groups) whose songs reached the “top 40” charts, and for authors of books that topped the New York Times’ bestsellers list. As seen in Figure 1 there is a similarity between the repeat probability of success between singers, authors and scientists.

Once more, nearly two-thirds of all the songs at the top of the charts, or books that make it to the top of the bestsellers list, are by individuals that will never repeat this feat. Thus, one finds that the sporadic nature of scientific creativity is mirrored to an extent in other human activities, such as literature and music. Finally it is notable that music, the field from which the term one-hit wonder arose, is the one in which the probability of repeat success is comparatively the highest.

One final analysis was undertaken with the aim of examining the predictive power of publication in the best journals by potential academic recruits: at some of the world’s best academic institutions, candidates for tenure-track positions are normally expected to have published in the top two journals prior to appointment. It is therefore interesting to examine whether researchers who published in Science or Nature during their post-doctoral fellowships or Ph.D. studentships are likely to publish in the top journals as independent group leaders. This question may be answered by examining the likelihood that an individual who has published in Science or Nature as a first author (as is common for post-docs and students) will later appear in these journals as a last author (as is common for principal investigators/corresponding authors).

As seen in Figure 2, more than 87% of all scientists that have published in Science or Nature as first authors are unlikely to publish in the same venue later on as last authors. Furthermore, less than 7% of middle authors in Science or Nature will ever become last authors. Thus candidates who successfully published papers in the world’s top journals during the course of their studies are highly unlikely to repeat this feat as independent researchers.

Figure 2 - Probability that an individual who has published in Science or Nature as a first author (light blue) or middle author (dark blue) will publish there later on as a last author. The same qualifying limitations were applied as in the analysis of Figure 1.

In conclusion, it is possible to state that for the significant majority of Science or Nature authors publication represented a one-hit wonder, and the transition from a first (or middle) author to an article’s principal investigator is highly unlikely. Thus, chance seems to be of paramount importance in relation to preparedness1 for the majority of scientists.

References

1. Dans les champs de l’observation le hasard ne favorise que les esprits préparés. “In the fields of observation chance favors only the prepared mind”. Louis Pasteur, Lecture, University of Lille (7 December 1854).

Useful links:

The Arkin Lab Home Page

Author’s note: The author wishes to thank Joshua Manor, Hadas Leonov and Prof. Joseph S.B. Mitchell for helpful discussions. This work was supported in part by grants from the Israel Science Foundation (784/01, 1249/05, 1581/08). ITA is the Arthur Lejwa Professor of Structural Biochemistry at the Hebrew University of Jerusalem.

Editor's note: Professor Arkin's address is The Alexander Silberman Institute of Life Sciences, Department of Biological Chemistry, The Hebrew University of Jerusalem, Edmund J. Safra Campus, Givat-Ram, Jerusalem 91904, Israel, and he can be contacted at arkin (at) huji.ac.il.


Tipping the balance: The rise of China as a science superpower

The balance of global economic and political power is set to shift from West to East in the coming decades. Might the West also lose its scientific preeminence? To explore these questions, Research Trends looks at the rise, fall, and rise again of China as a scientific powerhouse.

Read more >


The Chinese mainland is home to one of the oldest civilizations in the world, and has seen massive technological and social change since it emerged as the modern state we know today as the People’s Republic of China. In the past 30 years, under reforming leaders such as Deng Xiaoping, China has undergone the fastest Industrial Revolution in history, and is set to be the dominant global economic force within decades. Yet as China catches up with and overtakes the West as an economic and political powerhouse, will its scientific achievements keep pace? China once led the West technologically. Could it do so again — or does it already?

Early innovation, stagnation and re-emergence

Ancient China saw numerous technological innovations including paper and papermaking, woodblock and movable type printing, the invention of matches, the magnetic compass, cast iron and the iron plough, chain and belt drives, the propeller, and machines of war such as the crossbow, gunpowder, the cannon, and the rocket. However, while Europe underwent a scientific revolution starting in the 16th century, science and technology in China stagnated, a trend that accelerated with the creation of the People’s Republic of China in 1949 under the Communist rule of Mao Zedong. This was a period during which science in many industrialized nations was undergoing a post-war transition from “Little Science” characterized by individual or small-group efforts into the present era of “Big Science” typified by large-scale projects usually funded by national governments or inter-governmental groups. It was not until the establishment of a technocracy in the 1980s under the leadership of Deng Xiaoping that China started on its current path of scientific and technological advancement. The rise of China as a scientific nation can be documented on both the input and output sides, with a clear causal relationship between the two.

Returns on investment in Chinese science

In 2006, China embarked upon an ambitious 15-year plan1 to raise R&D expenditure to 2.5% of GDP, identifying energy, water resources, and environmental protection as research priorities. The plan also emphasizes investment in human resources. China has become a higher education powerhouse, turning out more Ph.D. graduates than the UK annually since 2002 and closing in on the US (see Figure 1). Perhaps a more immediate indicator of China’s scientific might is the size of its R&D workforce, which by 2007 already stood at 1.2 million people, exceeding that of the entire EU-27 grouping and poised to overtake the US (see Figure 2). In the absence of more recent figures, China may well already have surpassed the US on both of these key input metrics, particularly in number of researchers.

Figure 1 - Number of PhD graduates per country or country group in the period 2000-2006. Data from Appendix table 2-40 of Science and Engineering Indicators 2010 (National Science Foundation).

 

Figure 2 - Number of researchers per country or country group in the period 1995-2007. Data from Figure 3-48 of Science and Engineering Indicators 2010 (National Science Foundation).

 

The future of Chinese science

In terms of research output, China has shown remarkable growth. The number of articles appearing in the international literature — the most commonly-used indicator of research productivity — has risen exponentially in recent years. To see how this is perturbing the global balance of scientific output, each nation’s share of global article output can be determined. In 2008, China stood second only to the US by this metric, with 11.6% versus 20% of the global output, respectively. However, forecasting shares suggests that the dominance of the US is almost a thing of the past: based on a linear trend, China’s article share will surpass that of the US in 2013. Of course, this projection assumes that the observed trends to date will be maintained in the coming years, but it remains in keeping with another recent estimate of 2014 for China to surpass the US2. Indeed, the recent growth trends in the Chinese research workforce highlighted above are likely to manifest in key output metrics with a delay of just a few years.

Figure 3 - Share of global articles per country or country group in the period 1996-2016 (actual 1996-2009, projected 2010-2016). Data from Scopus.
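The crossover estimate above comes from fitting a straight line to each country’s share series and solving for the intersection. A sketch of that calculation follows; the annual series below are hypothetical placeholders chosen only to end at the 2008 values quoted in the text (US 20%, China 11.6%), not the actual Scopus data behind Figure 3:

```python
# Linear-trend crossover estimate. Share series are ILLUSTRATIVE; only the
# 2008 endpoints (US 20%, China 11.6%) come from the text.
def fit_line(years, shares):
    """Ordinary least squares: share = a + b * year; returns (a, b)."""
    n = len(years)
    my = sum(years) / n
    ms = sum(shares) / n
    b = (sum((y - my) * (s - ms) for y, s in zip(years, shares))
         / sum((y - my) ** 2 for y in years))
    return ms - b * my, b

years = [2004, 2005, 2006, 2007, 2008]
us    = [22.4, 21.8, 21.2, 20.6, 20.0]   # hypothetical, declining to 20%
china = [ 7.6,  8.6,  9.6, 10.6, 11.6]   # hypothetical, rising to 11.6%

(a_us, b_us), (a_cn, b_cn) = fit_line(years, us), fit_line(years, china)
# Crossover where the two fitted lines meet: a_us + b_us*t = a_cn + b_cn*t.
t = (a_cn - a_us) / (b_us - b_cn)
print(f"projected crossover year: {t:.0f}")  # → projected crossover year: 2013
```

With these illustrative inputs the projected crossover lands in 2013, in line with the article’s linear-trend estimate; a real projection would of course use the full Scopus series shown in Figure 3, and inherits that method’s assumption that current trends persist.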

China lost its scientific and technological edge from the 15th century onwards by becoming culturally insular, shunning exploration of the wider world and remaining suspicious of importing outside ideas and influences. Just as business depends on trading goods and services, so science depends on exchanging ideas and data, and self-imposed isolation is disastrous in either case. Now, in the 21st century, as China opens itself up to global markets — both of commerce and ideas — it again looks set to lead the world.

References:

1. Medium- and Long-term National Plan for Science and Technology Development 2006-2020

2. Leydesdorff, L. & Wagner, C.S. (2009) “Macro-level indicators of the relations between research funding and research output”, Journal of Informetrics, Vol. 3, No. 4, pp. 353–362.


Did you know

Drowning in the publication deluge?

Andrew Plume
In 2009, almost 1.39 million research articles alone (i.e. not including reviews, conference papers and other document types) were indexed in Scopus, the world’s largest abstracting and indexing database covering journals across all fields of science, social science and arts & humanities. That works out as one every 23 seconds, and yet each one typically takes 30 minutes to read1. Lifeboat, anyone?

[1] Tenopir, C. et al. (2009) “Variations in article seeking and reading patterns of academics: What makes a difference?”, Library & Information Science Research, Vol. 31, pp. 139–148.
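The per-article rate is simple arithmetic worth making explicit:

```python
# Sanity check of the "one every 23 seconds" figure quoted above.
articles_2009 = 1_390_000            # ~1.39 million research articles in Scopus
seconds_per_year = 365 * 24 * 3600   # 31,536,000
gap = seconds_per_year / articles_2009
print(f"one article every {gap:.0f} seconds")  # → one article every 23 seconds
```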
