Issue 28 – May 2012

Articles

Does open access publishing increase citation or download rates?



The effect of "Open Access" (OA) on the visibility or impact of scientific publications is one of the most important issues in the fields of bibliometrics and information science. During the past 10 years numerous empirical studies have been published that examine this issue using various methodologies and viewpoints. Comprehensive reviews and bibliographies are given, amongst others, by OPCIT (1), Davis and Walters (2) and Craig et al. (3). The aim of this article is not to replicate or update these thorough reviews. Rather, it presents the two main methodologies applied in these OA-related studies and discusses their potential and limitations. The first method is based on citation analyses; the second on usage analyses.

The debate surrounding the effect of OA started with the publication by Steve Lawrence (4) in Nature, entitled "Free online availability substantially increases a paper's impact", analyzing conference proceedings in the field of computer science. Here, "open access" is not used to indicate the publisher business model based on the "authors pay" principle but, more generally, in the sense of articles being freely available online. From a methodological point of view, the debate focuses on biases, control groups, sampling, and the degree to which conclusions from case studies can be generalized. This article does not aim to give a complete overview of studies published during the past decade but instead highlights key events.

In 2004 Stevan Harnad and Tim Brody (5) claimed that physics articles submitted as preprints to ArXiv (a preprint server covering mainly physics, hosted by Cornell University) and later published in peer-reviewed journals generated a citation impact up to 400 per cent higher than papers in the same journals that had not been posted in ArXiv. Michael Kurtz and his colleagues (6), in a study on astronomy, found evidence of a selection bias (authors post their best articles freely on the web) and an early view effect (articles deposited as preprints are published earlier and are therefore cited more often). Henk Moed (7) found that for articles in solid state physics these two effects may explain a large part, if not all, of the differences in citation impact between journal articles posted as preprints in ArXiv and papers that were not.

In a randomized controlled trial of open versus subscription-based access to articles in physiology journals published by one publisher, Phil Davis et al. (8) did not find a significant effect of open access on citations. In order to correct for selection bias, a new study by Harnad and his team (9) compared self-selective self-archiving with mandatory self-archiving in four particular research institutions. They argued that, although the first type may be subject to a quality bias, the second can be assumed to occur regardless of the quality of the papers. They found that the OA advantage proved just as high for both, and concluded that it is real, independent and causal. It is greater for more citable articles than it is for less significant ones, resulting from users self-selecting what to use and cite. [1]

Two general limitations of the various approaches described above must be underlined.

Firstly, all citation-based studies mentioned above appear to share the following bias: they were based on citation analyses carried out in a citation index with selective coverage of the good, international journals in each field. Analyzing citation impact in such a database is somewhat like measuring the extent to which people are willing to leave their car unused during the weekend by interviewing, mainly, people at the parking lot of a large out-of-town store on a Saturday. These people have quite obviously decided to use their car; had they not, they would not be there. Similarly, authors who publish in the selected set of good, international journals – a necessary condition for citations to be recorded in the OA advantage studies mentioned above – will tend to have access to these journals anyway. In other words, there may be a positive effect of OA upon citation impact, but it is not visible in the database used. The use of a citation index with more comprehensive coverage would enable one to examine the effect of the citation impact of covered journals upon the OA citation advantage. For instance, is such an advantage more visible in lower impact or more nationally oriented journals than it is in international top journals?

Secondly, analyzing article downloads ("usage") is a complementary and in principle valuable method for studying the effects of OA. In fact, the study by Phil Davis and colleagues mentioned above applied this method and reported that OA articles were downloaded more often than papers with subscription-based access. However, significant limitations of this method are that not all publication archives provide reliable download statistics, and that archives that do generate such statistics may record and/or count downloads in different ways, meaning that results are not always directly comparable across archives. The implication seems to be that usage studies comparing OA with non-OA articles can be applied only in "hybrid" environments in which publishers offer authors both an "authors pay" and a "readers pay" option upon submitting a manuscript. This type of OA may, however, not be representative of OA in general, as it disregards self-archiving in the OA repositories being created in research institutions all over the world.

Future research must take these two general limitations into account, as they limit the degree to which outcomes from case studies can be generalized into a simple, unambiguous answer to the question of whether Open Access does, or does not, lead to higher citation or download rates.

References

1. OPCIT (2012) The Open Citation Project. The effect of open access and downloads ('hits') on citation impact: a bibliography of studies. http://opcit.eprints.org/oacitation-biblio.html.
2. Davis, P.M. and Walters, W.H. (2011) “The impact of free access to the scientific literature: A review of recent research”, Journal of the Medical Library Association, 99, 208-217.
3. Craig, I.D., Plume, A.M., McVeigh, M.E., Pringle, J., Amin, M. (2007) "Do open access articles have greater citation impact? A critical review of the literature", Journal of Informetrics, 1, 239-248.
4. Lawrence, S. (2001) "Free online availability substantially increases a paper's impact", Nature, 411 (6837), p. 521.
5. Harnad, S., Brody, T. (2004) “Comparing the impact of open access (OA) vs. non-OA articles in the same journals” D-Lib Magazine, 10(6).
6. Kurtz, M.J., Eichhorn, G., Accomazzi, A., Grant, C., Demleitner, M., Henneken, E., Murray, S.S. (2005) “The effect of use and access on citations”, Information Processing & Management, 41, 1395–1402.
7. Moed, H.F. (2007) “The effect of “Open Access” upon citation impact: An analysis of ArXiv’s Condensed Matter Section” Journal of the American Society for Information Science and Technology, 58, 2047-2054.
8. Davis, P.M., Lewenstein, B.V., Simon, D.H., Booth, J.G., Connolly, M.J.L. (2008) "Open access publishing, article downloads, and citations: Randomised controlled trial", BMJ, 337 (7665), 343-345.
9. Gargouri, Y., Hajjem, C., Larivière, V., Gingras, Y., Carr, L., Brody, T., Harnad, S. (2010) "Self-selected or mandated, open access increases citation impact for higher quality research", PLoS ONE, 5 (10), art. no. e13636.

 

Footnote

[1] In an earlier version of this piece, published on the Bulletin Board of Elsevier's Editors Update, I included a paragraph about the Gargouri et al. study that appears to be based on a misinterpretation of Table 4 in their paper. I wrote: "But they also found for the four institutions that the percentage of their publication output actually self-archived was at most 60 per cent, and that for some it did not increase when their OA regime was transformed from non-mandatory into mandatory. Therefore, what the authors labeled as 'mandated OA' is in reality to a large extent subject to the same type of self-selection bias as non-mandated OA." As Stevan Harnad has pointed out in a reply, Table 4 relates to the date articles were published, not when they were archived. Self-archiving rates are flat over time because they include retrospective self-archiving.


Bibliometrics and Urban Research: Part I



Global urban development was one of the significant innovations of the 20th century, changing both human and natural environments in the process. Approximately 40 scholarly journals are dedicated solely to urban studies, but with over 3 billion people now living in cities worldwide, it is inevitable that topics with an urban dimension are published across the science spectrum, in journals covering fields ranging from anthropology to zoology.

This breadth of material presents challenges to those with urban interests, and we are collaborating on the production of the first meta-journal in the field, designed to pull together what we know about recent scholarship on cities in order to keep researchers up to date. As part of the development of Current Research on Cities (CRoC, 1), we investigated the diversity of publications in urban affairs using keyword analysis, and found three distinct spheres of 'urban knowledge' that show some overlap but also significant differences.

What we did

We explored the relationship between the different branches of urban research in the following manner. First, we identified three distinct clusters of published material:

  1. research published in the 38 journals of the Thomson Reuters “Urban Studies” cluster; 
  2. research with urban content in the social sciences and humanities;
  3. research with urban content published in the applied sciences.

In the SciVerse Scopus database of journal articles published in 2010, which contains 991,000 entries, we identified research papers containing the keyword 'urban' plus one of the following keywords that had emerged from a pilot project: planning, renewal, development, politics, population, transport or housing. We limited the search to the Social Science subject areas and to the relevant subject areas in the applied Sciences (ignoring medicine, engineering and so forth). This yielded the numbers of articles and reviews shown in Table 1; a sketch of the selection logic follows the table.

Cluster                   Reviews and articles    Keywords
Urban Studies cluster     590                     5,109
Social Sciences           3,719                   32,121
Sciences                  2,429                   57,629

Table 1 - Data on urban publications in the three different clusters. Source: Scopus, February 2012
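
As an illustration of the selection step described above, the sketch below keeps records whose indexer keywords contain 'urban' together with at least one of the secondary terms from the pilot project. The record structure and sample data are invented for illustration; the actual selection was run as a Scopus search limited to 2010 and to the relevant subject areas.

```python
# Minimal sketch of the selection logic: 'urban' plus at least one secondary keyword.
# Record format and sample data are hypothetical, not the authors' actual Scopus query.

SECONDARY = {"planning", "renewal", "development", "politics",
             "population", "transport", "housing"}

def is_urban_record(keywords):
    """keywords: iterable of indexer-assigned keyword strings for one article."""
    kws = {k.lower() for k in keywords}
    return "urban" in kws and bool(kws & SECONDARY)

records = [
    {"title": "Water use in growing cities", "keywords": ["Urban", "Planning", "Water"]},
    {"title": "Rural land tenure",           "keywords": ["Rural", "Land Use"]},
]
selected = [r for r in records if is_urban_record(r["keywords"])]
print(len(selected))   # 1
```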

As a second step of our analysis, we looked at the frequencies of the keywords attributed by indexers such as MEDLINE and Embase. Redundancies were eliminated and minor categories collapsed: e.g. water use and water planning are aggregated to 'water'. The three data sets were then rearranged according to keyword frequency, scaled against the grand totals for each column (per 10,000 keywords) to make them comparable: e.g. the 502 occurrences of 'Urban Planning', as a proportion of the 32,121 Social Sciences keywords, scale to 156, the first entry in the Social Sciences column of Table 2.
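
For concreteness, here is a minimal sketch of that scaling, assuming the raw count and column total quoted above; the per-10,000 factor is inferred from the worked example.

```python
# Scale a raw keyword count against its column total, per 10,000 keywords,
# so that the three clusters in Table 2 become comparable.
def scale_per_10k(raw_count, column_total):
    return round(raw_count / column_total * 10_000)

print(scale_per_10k(502, 32121))   # 156 -> the 'Urban Planning' entry in the Social Sciences column
```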

Rank  SCIENCES               SOCIAL SCIENCES         URBAN STUDIES
1     Water 254              Urban Planning 156      Housing 286
2     Environment 144        US 129                  US 244
3     Urban Area 143         Urban Area 127          Urban Planning 240
4     Air 93                 Urban Population 126    Urban Development 221
5     Land Use 73            Human 109               Policy 215
6     Atmosphere 71          Urban Development 106   Urban Area 176
7     Human 69               History 91              Neighborhood 148
8     US 68                  Female 78               Urban Population 119
9     China 63               Housing 69              Urban Economy 90
10    Urban Planning 61      China 64                Metropolitan Area 88
11    Pollution 60           Urban Policy 64         Governance 74
12    Urbanization 54        Male 61                 UK 74
13    Urban Population 51    Neighborhood 61         China 68
14    Urban Development 47   Urbanization 59         Social Change 62
15    Sustainability 40      Land Use 58             Urban Renewal 60
16    Climate 38             Rural 58                Urban Society 58
17    GIS 34                 Policy 56               Urban Politics 54
18    Transport 34           Planning 54             Education 48
19    Female 32              Adult 51                Urbanization 48
20    Agriculture 29         Metropolitan Area 45    Strategic Approach 48

Table 2 - Appearances of keywords in the three clusters: keywords in red are unique to one cluster, those in blue are common to all three columns, and shaded keywords are discussed in the interpretation below. Source: Scopus, February 2012

How we interpret these results

From this preliminary analysis, we can make a number of inferences. First, we can see that there is relatively little overlap between the three columns, with 22 of the 60 entries being unique (10 in the Sciences cluster, 9 in the Urban cluster). The variation is systematic: in Sciences, research focuses on water, air and climate, whereas in the other columns it emphasizes housing, governance and planning. Surprisingly, the points of potential convergence—such as ‘sustainability’—appear only in the Sciences column.   

When we examine the origins of the research we begin to understand the lack of integration between the three areas of specialty.  Half of the Urban Studies research emanates from the Anglophone countries; in contrast, Chinese authors contribute relatively more to the Sciences cluster (see Figure 1). 

Figure 1 - The percentage of papers within a category that have at least one author with an affiliation in the countries displayed: e.g. 33% of all Urban Studies papers have an author with an American affiliation. 

A second issue of importance is that research undertaken in both the Social Sciences and Urban clusters is attentive to scale; note the appearance of both 'neighborhood' and 'metropolitan' in these columns. In contrast, Science research considers broader categories, such as urban versus rural. This reflects the tendency of applied science to address broad processes such as climate change, and the much narrower concern of social scientists with phenomena such as gated communities.

Why these results are of relevance

The above data suggest that there may be only limited integration of the research efforts undertaken by those who work explicitly in urban studies, social scientists who work on cities, and scientists who are concerned with the environmental impacts of urban development. Some part of this may be driven by geography and will disappear as more Chinese, Korean and Japanese scholars publish in international journals (2). What remains, however, is an astonishingly small commitment to pressing environmental issues such as climate change, sustainability and adaptation outside the Sciences cluster.

When asked for his view on the reasons why these different disciplines influence the field, Professor C. Y. Jim from the Department of Geography at the University of Hong Kong comments, “Cities are the most complex, changeable, multidimensional and enigmatic artifacts ever contrived by humankind. Proper deciphering of this apparently unfathomable riddle demands synergistic confluence of wisdom from different quarters. A transdisciplinary, interdisciplinary and multidisciplinary (TIM) approach is more likely to bring a fruitful union of otherwise disparate concepts and methods and generate innovative ideas to fulfill this quest.”

It is to address this problem that CRoC has been developed.  As a meta-journal, the aim is to publish solicited material that can assist in bridging these silos, while building on the points of integration that do exist within the different communities of urban scholarship.

In the next issue of Research Trends, we will look at author distributions in finer detail: rather than assigning all authors with a UK affiliation to the nation as a whole, we can view the specific locations of each affiliation on a map.

References


1. Kirby, A. (2012) "Current Research on Cities and its contribution to urban studies", Cities, 29, Supplement 1, S3–S8. http://dx.doi.org/10.1016/j.cities.2011.12.004
2. Wang, H. et al. (2012) "Global urbanization research from 1991 to 2009: A systematic research review", Landscape and Urban Planning, 104, 299–309.

Usage: an alternative way to evaluate research



From the darkness to the light

Librarians have long struggled to measure how library resources were being used: for decades, reshelving and circulation lists were the main methods available to them. Publishers had no idea how much their journals were used; all they had was subscription data (e.g. number and location of subscribers, contact details, etc.). With the advent of electronic content in the late 1990s this changed: publishers could see how often articles from a certain journal were downloaded, and by which customers. Librarians could now see whether and how the resources they purchased were being used. Both groups gained a wealth of information that could help them manage their publications and collections.

Joining efforts towards common standards

It wasn't long before the need for standardization emerged. Every publisher had its own reporting format, meaning that combining data and comparing definitions from various publishers took librarians a lot of time and effort. In March 2002, Project COUNTER (Counting Online Usage of Networked Electronic Resources) (1) was launched. In this international initiative, librarians, publishers and intermediaries collaborated to set standards for recording and reporting usage statistics in a consistent, credible and compatible way. The first Code of Practice was published in 2003. This year, COUNTER celebrates its 10th anniversary and has published the fourth release of its integrated Code of Practice, which covers journals, databases, books and multimedia content. This release contains a number of new features, including a requirement to report the usage of gold open access articles separately, as well as new reports on usage on mobile devices. The COUNTER Code of Practice specifies what can be measured as a full text request, when a request must be ignored in the reports, and the layout and delivery method of the reports. It also requires an annual audit of the reports, with an independent party confirming that the requirements are met.

What usage can tell us

What, in fact, is a full text article request? A full text article is defined as the complete text of an article, including tables, figures and references. When a user requests the same article in the same format within a certain time limit, only one of the requests is counted. There is a lot of value in usage information: a librarian can see which titles are used most, and the cost per article use can be calculated, which gives an indication of the relative value of a journal. In times of tight budgets, it might be considered the most important measure in determining cancellations.
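
As a rough illustration of this counting rule, the sketch below filters repeated requests for the same article and format within a short window and derives a cost-per-use figure. The 30-second window, record layout and prices are assumptions for illustration, not the Code of Practice's exact parameters.

```python
from datetime import datetime, timedelta

# Illustrative "double-click" filtering: repeated requests for the same article in the
# same format within a short window are counted once. Window length is an assumption.
WINDOW = timedelta(seconds=30)

def count_full_text_requests(requests):
    """requests: list of (user_id, article_id, fmt, timestamp) tuples, in any order."""
    counted = 0
    last_seen = {}  # (user_id, article_id, fmt) -> timestamp of last request
    for user, article, fmt, ts in sorted(requests, key=lambda r: r[3]):
        key = (user, article, fmt)
        if key in last_seen and ts - last_seen[key] <= WINDOW:
            last_seen[key] = ts      # within the window: refresh, but do not count again
            continue
        last_seen[key] = ts
        counted += 1
    return counted

# Cost per use, as mentioned above: subscription price divided by counted requests.
def cost_per_use(annual_price, counted_requests):
    return annual_price / counted_requests if counted_requests else float("inf")

requests = [
    ("u1", "doi:10.1016/x", "PDF",  datetime(2012, 5, 1, 9, 0, 0)),
    ("u1", "doi:10.1016/x", "PDF",  datetime(2012, 5, 1, 9, 0, 10)),  # double click: ignored
    ("u2", "doi:10.1016/x", "HTML", datetime(2012, 5, 1, 10, 0, 0)),
]
print(count_full_text_requests(requests))   # 2
print(cost_per_use(3000.0, 1200))           # 2.5 per use (hypothetical figures)
```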

What usage does not tell us

While requests for full text give an indication of user interest, they do not tell you how an article is being used. In a way, requests are like orders in a webshop: they tell you an item has been ordered, but not whether the user received it or whether it was lost during shipping. They do not tell you what the user does with the item once it is received: do they give it away, put it on their shelves or actually use it – and if so, how? The usage data certainly do not tell you why the article was requested: did the professor tell the students to download it, is it vital for research, does the user want it "just in case", or is the title so funny that someone wants to hang it near the coffee machine?

Using usage data

Information on the actual articles being used can give an indication of the direction in which a field is growing. Usage data can reveal an interest in a particular subject if relevant articles are used more than those on other subjects. It can also provide geographical information on the regional spread of that interest. Usage data is by no means the only indicator, but it can provide insight into trends sooner after article publication than citations do. Two initiatives are at the forefront of usage data implementation: the MESUR project in the USA, and the Journal Usage Factor in the UK.

The Journal Usage Factor

The Journal Usage Factor (UFJ) project, a joint initiative between UKSG and COUNTER, has recently released "The COUNTER Code of Practice for Usage Factors: Draft Release 1". In this document, the publication period and the usage period used for the calculation are defined as the same two consecutive years: this means that the 2009-2010 UFJ will focus on 2009-2010 usage of articles published in 2009-2010. The UFJ will be the "median value of a set of ordered full-text article usage data" (1). It will be reported annually as an integer, will integrate articles-in-press from the accepted manuscript stage, and will incorporate usage from multiple platforms (a sketch of this median calculation follows the list below). At this stage it is proposed that there will be two versions of the UFJ:

  • One based on usage of all paper types except editorial board lists, subscription information and permission details.
  • One based on scholarly content only (short communications, full research articles, review articles).
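
A minimal sketch of the median calculation defined above, assuming a set of per-article full-text request counts for the two-year window; the data are invented, and the rounding simply reflects the draft's intention to report the UFJ as an integer.

```python
from statistics import median

def usage_factor(download_counts):
    """download_counts: full-text request totals for each qualifying article."""
    if not download_counts:
        return 0
    # Median of the ordered article-level usage data, reported as an integer.
    return round(median(sorted(download_counts)))

downloads_2009_2010 = [3, 12, 47, 0, 9, 151, 23]   # hypothetical article-level counts
print(usage_factor(downloads_2009_2010))            # 12
```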

The draft document is available for public consultation until 30 September 2012, in the form of comments to the COUNTER Project Director, Peter Shepherd. Based on the feedback received, the Code of Practice will be refined prior to implementation in 2013. Research Trends will keep an eye on the project and report any further developments online through www.researchtrends.com. Peter Shepherd commented that "one of the main benefits of a statistically robust Usage Factor will be to offer alternative insights into the status and impact of journals, which should complement those provided by Impact Factors and give researchers, their institutes and their funding agencies a more complete, balanced picture".

How does usage compare to citations?

COUNTER and UKSG (UK Serials Group) commissioned extensive analyses from the CIBER research group into the proposed UFJ. In 2011 they published their findings in a report that included correlation analyses between the UFJ and two bibliometric indicators (SNIP and the Impact Factor). For both analyses they found low correlations, results which they did not find surprising as they "did not expect to see a clear correlation between them. They are measuring different things ('votes' by authors and readers) and the two populations may or may not be co-extensive" (2). Although highly cited papers tend to be highly downloaded, the relationship is not necessarily reciprocal (particularly in practitioner-led fields). Indeed, while users encompass citers, they are a much wider and more diverse population (academics, but also students, practitioners, non-publishing scientists, laypersons with an interest, science journalists, etc.). There have been several bibliometric studies comparing usage to citations, and their findings vary in degree of correlation depending on the scope and subject areas of the studies (3).

A 2005 study by our Editor-in-Chief Dr. Henk Moed (4) found that downloads and citations have different age distributions (see Figure 1), with downloads peaking and then tailing off promptly after publication, but citations showing a more even (though still irregular) distribution over a much longer time after publication. The research also found that citations seem to lead to downloads: when an article is published citing a previous article, a spike is observed in the usage of the earlier article. These interesting results may not be surprising; as Dr. Henk Moed comments, "Downloads and citations relate to distinct phases in scientific information processing." He has since performed more analyses correlating early usage with later citations, and found that in certain fields usage could help predict citations (e.g. Materials Chemistry), but in others the correlation was too weak to allow this (e.g. Management).
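
For readers who want to reproduce this kind of comparison, the sketch below computes Pearson and Spearman correlations between invented article-level download and citation counts; CIBER's actual analyses were at the journal level, against SNIP and the Impact Factor.

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical article-level data, for illustration only.
downloads = [120, 45, 300, 8, 60, 220, 15, 90]
citations = [10, 2, 4, 0, 5, 30, 1, 3]

r, _ = pearsonr(downloads, citations)      # linear correlation
rho, _ = spearmanr(downloads, citations)   # rank correlation
print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
```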

Where will usage go?

Usage data's increasing availability has been matched by rising interest, not only in the field of bibliometrics but also in the broader academic community. Although there is still a strong focus on citation metrics, the advent of COUNTER and of other projects such as MESUR demonstrates the growing attention given to usage data. Yet these are still early days for usage: although a lot is happening in this relatively new field, it will take time to reach the levels of expertise and familiarity attained with the more traditional citation data. The Usage Factor is one of the first and most visible initiatives: it will be fascinating to monitor its deployment in the coming years, and to see what other exciting and perhaps unexpected indicators will emerge from usage data in the future.

Figure 1 - Age distribution of downloads versus citations. Source: Moed, H.F. (4)


References

1. COUNTER project (2003) “COUNTER code of practice”, retrieved 27 March 2012 from the World Wide Web: http://www.projectcounter.org/documents/Draft_UF_R1.pdf
2. CIBER Research Ltd (2011) “The journal usage factor: exploratory data analysis”, retrieved 8 August 2011 from the World Wide Web: http://ciber-research.eu/CIBER_news-201103.html
3. Schloegl, C. and Gorraiz, J. (2010) “Comparison of citation and usage indicators: the case of oncology journals”, Scientometrics, Volume 82, Number 3, 567-580, DOI: 10.1007/s11192-010-0172-1
3. Brody, T., Harnad, S., and Carr, L. (2006) “Earlier Web usage statistics as predictors of later citation impact”, Journal of the American Society for Information Science and Technology, Volume 57, Issue 8, DOI: 10.1002/asi.20373
3. McDonald, J. D. (2007) “Understanding journal usage: A statistical analysis of citation and use”. Journal of the American Society for Information Science and Technology, volume 58, issue 1, DOI: 10.1002/asi.20420
3. Duy, J. and Vaughan, L. (2006) "Can electronic journal usage data replace citation data as a measure of journal use? An empirical examination", Journal of Academic Librarianship, Volume 32, Issue 5, DOI: 10.1016/j.acalib.2006.05.005
4. Moed, H.F. (2005) "Statistical relationships between downloads and citations at the level of individual documents within a single journal", Journal of the American Society for Information Science and Technology, Volume 56, Issue 10, DOI: 10.1002/asi.20200

Powerful Numbers – an Interview with Diana Hicks



Research Trends recently spoke to Diana Hicks, Professor and Chair of the School of Public Policy, Georgia Institute of Technology, Atlanta GA, USA, on "Powerful numbers", or the question of how bibliometrics are used to inform science policy.

Diana Hicks

You recently gave a talk at the Institute of Scientific and Technical Information of China (ISTIC), where you gave examples of how bibliometric data were used by government officials to inform science funding decisions. Could you tell us how you discovered these numbers, and about the work you did in investigating their use?

In the talk there were five famous examples of science policy analyses that have influenced policy, plus one of my own. Two I learned about because I was in the unit that produced the analyses at about the same time – Martin & Irvine's influential analysis of the effect of declining science funding on UK output, and Francis Narin's discovery of the strong reliance of US patents on scientific papers in areas such as biotechnology. Two are very famous in science policy – Edwin Mansfield's calculation of the rate of return to basic research (1), and the NSF's flawed demographic prediction of a decline in US engineers and scientists. Another I had seen presented at conferences and was able to follow in real time over the subsequent years – Linda Butler's identification of the declining influence of Australian publications. And the final one was my own: President Obama using a number related to one of my analyses in a speech. I communicated with the authors to gather inside information on the influence the analyses exerted, and when that was not possible I searched the internet. The grey literature in which policy influence is recorded is all indexed these days, making it possible to go back (not too far in time) to put together these kinds of stories.

Do you think bibliometric indicators make a difference and raise the level of policy debates, or are they only used when they are in agreement with notions or objectives that policy makers had anyway, and ignored if they point in a different direction?

I think policy making is influenced by a complex mix of information including anecdotes, news coverage, lobbying as well as academic analyses.  And while it would be naïve to expect a few numbers to eliminate all debate and determine policy in a technocratic way, if we don’t bother to develop methods of producing numbers to inform the debates, only anecdotes will be available, and that would be a worse situation.

In a recent article you published in Research Policy (2) you gave an overview of country-based research evaluation systems, demonstrating the different approaches and metrics used. In your view, do you think there should or could be a way to merge these systems and create a comprehensive evaluative model, or do different countries indeed need different systems?

The bulk of university funding comes from the national level in most countries, and so systems to inform the distribution of that funding should be designed to meet the needs of the national decision makers and their universities. On the other hand, national leaders also want a university system that is internationally competitive: therefore international evaluation systems, such as global university rankings, would also be relevant, and high rankings could be rewarded with more resources.

Do you think these rankings have had an impact upon university research managers?

In the United States the domestic rankings of universities and of departments, especially business schools, have certainly influenced university management. I think that going forward the global rankings will be very influential. They allow universities to demonstrate achievement in a globally competitive environment. Universities can also use a lower-than-ideal ranking as an argument for more money to improve their ranking.

Based on your experience studying research networks and collaborations, do you think that there’s a way to direct these by strategic funding or other external methods, or are these organic processes that are led by research interest?

I consulted with my colleague Juan Rogers, who has conducted studies of centers and various evaluation projects which have shown that the US Federal research funding agencies have consistently tried to direct research networks and collaborations. He informs me that, arguably, the research center programs, especially those aiming at interdisciplinary research, are thought of as either facilitating collaborations by reducing transaction costs or capturing existing distributed networks and putting them under one roof to manage them as concentrated human capital. The results have been mixed vis-à-vis the management question. Networks with other shapes emerged, in which the centers are now nodes rather than informal teams of individual researchers (this is one of the points of distinguishing broad informal networks, which we labeled "knowledge value collectives", from networks that have more explicit, agreed-upon goals and procedures, which we labeled "knowledge value alliances").

The agencies have also attempted to broker collaborations by taking a set of individual proposals that were submitted independently and asking the PIs to get together and submit joint proposals that are bigger than each individual proposal (but maybe not as large as the sum of the individual proposals), with the intent not only to save some money and "spread the wealth around" but also in the hope of improving the science through the expanded collaborative arrangement. Again, results are mixed. It depends on whether those involved in the "shotgun marriage" can get along. We've seen cases that brought huge qualitative changes to a field (plant molecular biology, for example), others that fell apart, mainly due to personality clashes (according to our informants), and others that continued to coordinate their work to simulate the collaboration and satisfy the funding agency but didn't do anything very differently than they would have if they'd worked on their original proposals (areas of earth science were examples here).

So to my mind the answer is that networks are de facto managed and manipulated, but gaining control of them to set common goals and measure success in achieving them against invested resources, as an organization would do, is futile. If the networks are big enough, they'll adapt, and many of the cliques will figure out how to game the manipulators and self-appointed managers. At the same time, more modest goals, such as getting attention for problems that seem to be under-researched, may be reasonable for the agencies that intervene in the networks.

From your international experience working with science policy authorities, what are the main differences and/or similarities that you see between Western and Asian approaches to science funding, encouraging innovation and strengthening the ties between research and industry? (i.e. do you think the Western world pushes more for innovation that will translate into business outcomes, or vice versa?)

My experience, along with that of my colleague John Walsh, suggests that at least the US and Japan may look different, but they end up achieving much the same thing.  For example, people used to think there was very little collaboration between industry and universities in Japan because of restrictive rules concerning civil service employment.  But the data showed collaboration rates were similar in Japan and western countries.  Further investigation revealed that the Japanese had developed informal mechanisms that “flew below the radar” but were just as effective as the high profile, big money deals that pharmaceutical companies were signing with US universities in those days.

The push for applied research has been said to be very strong in Japan and China, to the extent that the governments are not interested in basic research.  However, with so much research these days in Pasteur’s Quadrant (3) where contributions to both knowledge and innovation result, it is not clear that carefully constructed data would support the existence of big differences between east and west in this dimension.

What do you think are essential elements to creating a balanced and sustainable evaluative infrastructure for science? (e.g. diversified datasets, international collaborations)

There are several challenges in creating such an infrastructure, including private ownership of key resources, long-term continuity, and great expense. An evaluative infrastructure must bring together disparate data resources and add value to them by federating different databases and identifying actors – people, institutions, agencies. It must do this in real time. And it must somehow provide access to resources that are at present accessed individually, in small chunks, because database owners are wary of losing their intellectual property. This will cost a lot of money, so it doesn't make sense for one agency, or maybe even one country, to do it. Also, once you have set up the infrastructure, you want to keep it going. All this suggests that the best solution is a non-profit institute, jointly funded by several governments, to engage in curating, federating, ensuring quality control and mounting the databases so that they are available to the global community. The institute would need to be able to hire high-level systems engineers as well as draw on cheap but skilled manual labor for data cleaning. This project would cost a lot of money, more than funders are typically willing to spend on the social sciences. This means we would need to get maximum value by using the infrastructure for more than just evaluation.

This vision is analogous to the way economic statistics are produced. Governments spend a great deal of money administering the surveys that underpin standard economic measures such as GDP and employment. Government departments do this year after year, so there is continuity in the time series. Economists can gain access to the data, under specified conditions, to use for their research. Unfortunately, one-off research grants are not going to get us to this end point. Nor are resources designed for search and retrieval ever going to be enough without that extra added value that makes them analytically useful.

References

1. Mansfield, E. (1980) "Basic Research and Productivity Increase in Manufacturing", The American Economic Review, 70(5), 863-873.
2. Hicks, D. (2012) “Performance-based university research funding systems”, Research Policy, 41(2), 251-261.
3. Stokes, D.E. (1997) Pasteur's Quadrant – Basic Science and Technological Innovation, Washington: Brookings Institution Press

Evidence-based Science Policy Research Seminar



On March 28th in Beijing, Elsevier and the Institute of Scientific and Technical Information of China (ISTIC) hosted a half-day seminar attended by over 100 people. The seminar focused on the importance of using evidence-based approaches to scientific performance analysis, especially when such analysis is used to inform science policy decisions. Evidence-based research relies on the inclusion of diverse datasets in the analysis in order to obtain an in-depth and accurate understanding of scientific progression, competencies and potentialities.

Image 1 – Hosts of the Evidence-based Science Policy Seminar, Beijing, March 28, 2012.

The seminar was hosted by Mr. Wu Yishan, Deputy Director of ISTIC, and featured speakers including Dr. Zhao Zhiyun, Deputy Director at ISTIC; Prof. Dr. Diana Hicks, Chair of the School of Public Policy, Georgia Institute of Technology; Prof. Peter Haddawy, Director of the International Institute for Software Technology at the United Nations University in Macau; and Dr. Henk Moed, Elsevier Scientific Director.

In her opening presentation, Dr. Zhao discussed ISTIC's approach to evidence-based research, which includes analyzing internal and external bibliographic databases, patent repositories and technical literature. To that end, ISTIC looks to include reliable and comprehensive scientific datasets from around the world and to apply diverse bibliometric methodologies in order to position China in the science world and better understand China's international scientific collaborations.

Image 2 – Mr. Wu Yishan, Deputy Director of ISTIC, opening the seminar.

Dr. Zhao's presentation opened up the discussion about bibliometrics as a methodology and whether or not it has an actual impact on science policy. To answer this question, Prof. Dr. Diana Hicks presented a series of case studies named "Powerful Numbers", in which she demonstrated how absolute figures, taken from different bibliometric studies, were molded and used by several national administrations in the USA, UK and Australia to make decisions regarding science funding. After presenting examples of such use of bibliometric figures, Dr. Hicks concluded that "policy makers over the past few decades have drawn upon analytical scholarly work, and so scholars have produced useful analyses. However, the relationship between policy and scholarship contains tensions. Policy users need a clear number. Scholars seem afraid to draw a strong conclusion, and do not encapsulate their discoveries in simple numbers."

In the same context, Dr. Henk Moed discussed the use and misuse of the Journal Impact Factor indicator and the ways by which it can be manipulated to achieve certain results, reinforcing the notion that there is no one figure or absolute numeric value that can represent productivity, impact or competency. He presented a new journal metric, SNIP (Source-Normalized Impact per Paper), and discussed its strong points and limitations. Dr. Moed stressed the fact that any conclusion or decision regarding scientific analysis must be preceded by a careful consideration of the purpose of analysis, the appropriate metric and the unit under consideration.

The seminar was concluded by Prof. Peter Haddawy, who presented the Global Research Benchmarking System (GRBS), which provides multi-faceted data and analysis to benchmark research performance in disciplinary and interdisciplinary subject areas. The system demonstrates how using SNIP, publication, citation and h-index figures, among other data points, enables a comprehensive ranking of universities' research.

In conclusion, the seminar underlined for the audience the importance of opening up analytical work on productivity, impact and competency in science to include as many relevant datasets as possible, and of using more than one metric or a single number. Evaluation must be multi-faceted and comprehensive, much like the research it is trying to capture, which is collaborative, international and multi-disciplinary.


Did you know

… that a 10-year-old girl recently discovered a molecule and co-authored her first scientific publication?

Dr Judith Kamalski

During a chemistry lesson at her school, Clara Lazen from Kansas City was given a box of model atoms, a short explanation of the basic rules of atomic bonding, and free rein to play with them as she liked. The molecule she produced intrigued her teacher, Kenneth M. Boehr. He had never seen it before, so he sent a picture to his friend Robert W. Zoellner, Professor of Computational Chemistry at Humboldt State University in California.
Professor Zoellner, too, had never seen a compound of this kind. His calculations showed that it could, theoretically, exist, and that it would be a potential high-energy-density molecule – an explosive compound. He wrote up his findings, adding Clara and her teacher Kenneth Boehr as co-authors with the affiliation "Border Star Montessori School". Clara must be one of the youngest co-authors ever on a scientific paper!

Zoellner, R.W., Lazen, C.L., Boehr, K.M. (2012) "A computational study of novel nitratoxycarbon, nitritocarbonyl, and nitrate compounds and their potential as high energy materials", Computational and Theoretical Chemistry, Vol. 979, pp. 33-37.
