Issue 23 – May 2011

Articles

Research Assessment 101: An introduction

What is research assessment, what purposes does it serve, and how is it carried out? Research Trends introduces the basic components of this complex process.



Over this and upcoming issues, Research Trends is dedicating attention to research assessment. Here we explain why we have chosen this subject, how it is defined, its historical background, how the article series is structured, and which topics will be addressed. We also highlight a few fundamental principles that underlie the subsequent articles in the series.

Measuring returns on investment

Research assessment is a broad endeavour. At root it is an attempt to measure the return on investment in scientific-scholarly research. Research assessment includes the evaluation of research quality and measurements of research inputs, outputs and impacts, and embraces both qualitative and quantitative methodologies, including the application of bibliometric indicators and mapping, and peer review.

Research performance is increasingly regarded as a key factor in economic performance and societal welfare. As such, research assessment has become a major issue for a wide range of stakeholders, and there is consequently an increasing focus on research quality and excellence, transparency, accountability, comparability and competition.

 

Institutions compete for students, staff and funding through international rankings.

This focus means that government funding of scientific research – especially in universities – tends to be based more and more on performance criteria. Such a policy requires the organization of large-scale research assessment exercises by national governmental agencies. The articles in this issue are intended to provide a concise overview of the various approaches towards performance-based funding in a number of OECD member states.

The institutional view

Today, research institutions and universities operate in a global market. International comparisons or rankings of institutions are published on a regular basis, with the aim of informing students and other knowledge-seeking groups about the quality of those institutions. Research managers also use this information to benchmark their own institutions against their competitors.

In light of these developments, institutions are increasingly setting up internal research assessment processes, and building research management information systems. These are based on a variety of relevant input and output measures of the performance of individual research units within an institution, enabling managers to allocate funds within the institution according to the past performance of the research groups.

At the same time, trends in publishing have had a crucial impact on the assessment of research output. Major publishers now make all their content available electronically online, and researchers consistently report that their access to the literature has never been better. In addition, disciplinary and institutional publication repositories are being built, and institutional research management systems are being implemented that include metadata on an institution’s publication output. Currently, three large multidisciplinary citation indexes are available: Elsevier’s Scopus, Thomson Reuters’ Web of Science, and Google Scholar.

In conjunction with the increasing access to journals and literature databases, more indicators of research quality and impact are becoming available. Many bibliographic databases implement bibliometric features such as author h-indexes, as well as publication and citation charts. More specialized institutes produce other indicators, often based on raw data from the large, multidisciplinary citation indexes. Today, the calculation of indicators is no longer the sole province of experts in bibliometrics, and the concept of “desktop bibliometrics” is increasingly becoming a reality.

An overview of the various topics covered in this series is presented in Table 1 below. These topics will be presented in short review articles, illustrative case studies, and interviews with research assessment experts and research managers. The main principle underlying the various articles in this series is that the future of research assessment exercises lies in the intelligent combination of metrics and peer review. A necessary condition is a thorough awareness of the potentialities and limitations of each of these two broad methodologies. This article series aims to increase such awareness.

Table 1 — Overview of topics addressed


The multi-dimensional research assessment matrix

Research output can be measured in many different ways, and no single measure is sufficient to capture the diverse aspects of research performance. Research Trends invites you to explore the options…



Research performance can be assessed along a number of different dimensions. In this article, we explore the notion of the multi-dimensional research assessment matrix, which was introduced in a 2010 report by the Expert Group on the Assessment of University-Based Research (AUBR), set up by the European Commission. Table 1 presents a part of this matrix.

Research assessment is a complicated business. To design a practical, informative process requires making decisions about which methodology should be used, which indicators calculated, and which data collected. These decisions in turn reflect answers to a number of questions about the scope and purpose of the research assessment process in hand. A thorough exploration of many of these questions has been presented in Moed (2005).

Table 1 — The multi-dimensional research assessment matrix. This table presents a core part of the matrix, not the entire matrix. It aims to illustrate what the matrix looks like. It should be read column-wise: each column represents a different dimension. See AUBR (2010) for more information.

What, how, and why?

A fundamental question is the unit of the assessment: is it a country, institution, research group, individual, research field or an international network? Another basic question revolves around the purpose of the assessment: is it to inform the allocation of research funding, to improve performance, or to increase regional engagement? Then there are questions about which output dimensions should be considered: scholarly impact, innovation and social benefit, or sustainability?

The matrix distinguishes four assessment methodologies: i) peer review, which provides a judgment based on expert knowledge; ii) end-user reviews, such as customer satisfaction; iii) quantitative indicators, including bibliometric and other types of measures; and iv) self-evaluation. These four methodologies can be — and often are — combined into a multi-dimensional assessment.

Bibliometric indicators have a central role in research assessment systems, and the main types are listed in Table 1. Table 2 distinguishes three generations of such indicators. Typical examples from each generation are: the Thomson Reuters journal impact factor; relative or field-normalized citation rates; and citation impact indicators giving citations from ‘top’ journals a higher weight than citations from more peripheral publications. These examples and others are explored in the boxed text.

Table 2 — Types of bibliometric indicators.

 

Table 1 also lists typical examples of non-bibliometric indicators. These include knowledge transfer activities reflected in the number of patents, licenses and spin-offs; invited lectures at international conferences; the amount of external funding; Ph.D. completion rates; and the share of research-active staff relative to total staff.

The unit of assessment, the purpose of the assessment, and the output dimension considered determine the type of indicators to be used in the assessment. One indicator can be highly useful within one assessment context, but less so in another. This is illustrated by the three examples presented in Figure 1.

Figure 1 – Three examples from the multi-dimensional research assessment matrix (MD-RAM) showing how the unit of assessment, purpose of the assessment, and output dimension determine the type of indicators to be used.
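As a purely illustrative sketch, the logic of the matrix can be thought of as a lookup from the combination (unit of assessment, purpose, output dimension) to a set of suitable indicators. The entries below are hypothetical examples in the spirit of Figure 1, not the actual AUBR matrix:

```python
# Hypothetical illustration of the matrix logic: the unit of assessment,
# the purpose, and the output dimension together determine which indicators
# are appropriate. Example entries only; see AUBR (2010) for the real matrix.
assessment_matrix = {
    ("research group", "allocate funding", "scholarly impact"):
        ["field-normalised citation rate", "external funding", "peer review"],
    ("institution", "improve performance", "innovation and social benefit"):
        ["patents and licences", "spin-offs", "end-user reviews"],
    ("country", "inform policy", "sustainability"):
        ["Ph.D. completion rates", "share of research-active staff"],
}

def suitable_indicators(unit, purpose, dimension):
    # Fall back to expert peer review when no tailored indicator set is defined.
    return assessment_matrix.get((unit, purpose, dimension), ["peer review"])

print(suitable_indicators("research group", "allocate funding", "scholarly impact"))
```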

 

Entering the Matrix

The concept of the multi-dimensionality of research performance, and the notion that the choice of which indicators to apply is determined by the questions to be addressed and the aspects to be assessed, are also clearly expressed in the recent “Knowledge, Networks and Nations” report from the Royal Society (Royal Society, 2011, pp. 24–25):

“In the UK, the impact and excellence agenda has developed rapidly in recent years. The Research Assessment Exercise, a peer review based benchmarking exercise which measured the relative research strengths of university departments, is due to be replaced with a new Research Excellence Framework, which will be completed in 2014. The UK Research Councils now (somewhat controversially) ask all applicants to describe the potential economic and societal impacts of their research. The Excellence in Research for Australia (ERA) initiative assesses research quality within Australia’s higher education institutions using a combination of indicators and expert review by committees comprising experienced, internationally recognised experts.

“The impact agenda is increasingly important for national and international science (in Europe, the Commissioner for Research, Innovation and Science has spoken about the need for a Europe-wide ‘innovation indicator’). The challenge of measuring the value of science in a number of ways faces all of the scientific community. Achieving this will offer new insights into how we appraise the quality of science, and the impacts of its globalisation.”

  

Exploring the indicators

  • Journal Impact Factors. The Thomson Reuters Journal Impact Factor was originally devised by Eugene Garfield to help select the most useful journals for coverage in his Science Citation Index, but is nowadays used in many types of research assessment processes. It is defined as the average number of citations received in a particular year by documents published in a journal in the two preceding years.
  • Relative citation rates. The relative, field-normalised citation rate is based on the observation that citation frequencies differ significantly between subject fields. For instance, authors in molecular biology publish more frequently and cite each other more often than do authors in mathematics. In its simplest form the indicator is defined as the average citation rate of a unit’s papers divided by the world citation average in the subject fields in which the unit is active. (Both measures are written out schematically in the formula sketch following this list.)
  • Influence weights. Pinski and Narin (1976) developed an important methodology for determining citation-based influence measures of scientific journals and (sub-)disciplines. A key element of their methodology is that it assigns a higher weight to citations from a prestigious journal than to citations from a less prestigious or peripheral journal.
  • Google PageRank. Pinski and Narin’s ideas also underlie Google’s PageRank measure. The “value” of a web page is measured by the number of other web pages linking to it, but links from pages that are themselves frequently linked to carry more weight than links from pages with few incoming links. (A toy computation in this spirit is sketched after this list.)
  • Other studies. Similar notions may play an important role in the further development of citation impact measures. Good examples are the work by Bollen et al. (2006) on journal status, and the SCImago Journal Rank (SJR) developed by the SCImago group (González-Pereira et al., 2010), one of the two journal metrics included in Scopus.
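As a rough illustration, the first two indicators in this box can be written schematically as follows. This is a sketch based on the definitions given above, not the exact computational conventions of any particular database:

```latex
% Journal Impact Factor of a journal for year y (schematic): citations
% received in year y by the journal's documents published in the two
% preceding years, divided by the number of those documents.
\mathrm{IF}_{y} = \frac{C_{y}(P_{y-1}) + C_{y}(P_{y-2})}{|P_{y-1}| + |P_{y-2}|}

% Relative (field-normalised) citation rate of a unit, in its simplest
% form: the unit's average citations per paper divided by the world
% average citation rate in the subject fields in which the unit is active.
\mathrm{RCR} = \frac{\bar{c}_{\mathrm{unit}}}{\bar{c}_{\mathrm{world}}}
```

Here C_y(P_x) denotes the citations received in year y by the set P_x of documents the journal published in year x, and |P_x| is the number of those documents.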

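To make the idea behind such influence weights concrete, the following is a minimal Python sketch of a PageRank-style iteration over a small, hypothetical journal citation matrix. It illustrates the general principle that citations from influential sources count for more; it is not the exact Pinski–Narin or Google formulation, and the journals and numbers are invented.

```python
# PageRank-style influence weights over a journal citation matrix.
# Hypothetical data and a simplified formulation, for illustration only.

def influence_weights(citations, damping=0.85, iterations=50):
    """citations[i][j] = number of citations journal i gives to journal j."""
    n = len(citations)
    weights = [1.0 / n] * n
    out_totals = [sum(row) or 1 for row in citations]  # guard against empty rows
    for _ in range(iterations):
        new_weights = []
        for j in range(n):
            # Each citing journal i passes on a share of its own weight,
            # proportional to the fraction of its citations that go to j.
            inflow = sum(weights[i] * citations[i][j] / out_totals[i] for i in range(n))
            new_weights.append((1 - damping) / n + damping * inflow)
        weights = new_weights
    return weights

# Three hypothetical journals: A cites B heavily; B and C mainly cite each other.
citation_matrix = [
    [0, 8, 1],  # citations given by journal A to A, B, C
    [1, 0, 5],  # citations given by journal B
    [0, 6, 0],  # citations given by journal C
]
print(influence_weights(citation_matrix))  # B and C end up weighted higher than A
```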
 

References:

AUBR (2010). Expert Group on the Assessment of University-Based Research. Assessing Europe’s University-Based Research. European Commission – DG Research. http://ec.europa.eu/research/era/docs/en/areas-of-actions-universities-assessing-europe-university-based-research-2010-en.pdf

Bollen, J., Rodriguez, M.A., Van de Sompel, H. (2006). Journal status. Scientometrics, Vol. 69, pp. 669–687.

González-Pereira, B., Guerrero-Bote, V.P., Moya-Anegón, F. (2010). A new approach to the metric of journals’ scientific prestige: The SJR indicator. Journal of Informetrics, Vol. 4, pp. 379–391.

Moed, H.F. (2005). Citation Analysis in Research Evaluation. Dordrecht (Netherlands): Springer. ISBN 1-4020-3713-9, 346 pp.

Pinski, G., Narin, F. (1976). Citation influence for journal aggregates of scientific publications: theory, with application to the literature of physics. Information Processing and Management, Vol. 12, pp. 297–312.

Royal Society (2011). “Knowledge, Networks and Nations: Global scientific collaboration in the 21st century”. http://royalsociety.org/policy/reports/knowledge-networks-nations


Performance-Based Research Funding Systems: Rewarding (only) quality research?

Governments increasingly tie research funding to performance metrics. Research Trends talks to Diana Hicks, Professor and Chair of the School of Public Policy at the Georgia Institute of Technology, about the promise and pitfalls of Performance-Based Research Funding Systems (PRFS).



In this era of fiscal austerity, government funding of scholarly research has become a pressing issue for most countries. While in the past universities and research institutions could rely on a continuous, steady flow of financial support, today governments increasingly tie funding to metrics, and the assessment of tertiary education institutions has turned into a competition over performance indicators. As such, the development and use of Performance-Based Research Funding Systems (PRFS) has become a prominent topic of discussion among academics and government officials alike. December 2010 saw the publication of the proceedings of the OECD-Norway workshop “Performance-based Funding for Public Research in Tertiary Education Institutions”. Diana Hicks, Professor and Chair of the School of Public Policy, Georgia Institute of Technology, Atlanta, contributed the opening chapter to this volume, a comprehensive literature review of PRFS across thirteen countries. In discussion with Research Trends, she shares her views on some of the hot topics in the debate on PRFS.

 

Critics of PRFS sometimes argue that such systems reinforce the influence of conservative scientific elders by punishing novel research and inhibiting the emergence of new and interdisciplinary research fields and departments, raising questions about conservatism and innovation in science.

Diana stresses that PRFS do not disadvantage researchers by distinguishing between young and old researchers as such, but by differentiating between established and new research departments. “One universal element of PRFS used amongst governments is that they reward demonstrated research excellence rather than potential or promise. Any department or institution new to research will go unrecognized in PRFS in the form they currently stand.” Similar concerns apply to the assessment of research in interdisciplinary scientific fields. Although such research is not necessarily seen as ‘new’ or ‘novel’, current evaluation systems disadvantage interdisciplinary research because evaluations tend to be based on publications in the core of a field. “What we see here is a cycle of accumulation: people on evaluation committees often form a representation of the best academics within a (core) field, coming from the best (core) departments, who are at the same time editors of the best (core) journals, with all focus on the core of the field. If, however, the aim of research is to link core A with core B, it would typically be evaluated as not belonging to either core, and thus not good.”

 

It seems a simple solution would be to build into the system a way of evaluating research relative to its discipline, though this would add an extra level of complexity. But how complex can a system be and still remain practically useful?

Diana recalls that systems of research evaluation typically start out relatively simple and become more complex over time as committees deal with criticisms from academics. “Because the stakeholders of the system are in fact academics, there will always be ongoing research on how systems can be improved in parts where they are unfair. Governments, striving for fairness and objectivity within their system, in turn answer by implementing elaborations to the system.” Levels of complexity now vary among the PRFS used by different governments. “What we do not know, though, is what the actual costs and benefits are of increased complexity in any PRFS.” How much complexity and cost a system can bear while remaining manageable and workable is a question that current PRFS have not answered. “In doing specific cost-benefit calculations, governments may decide how much complexity is actually worthwhile.”

 

PRFS are inherently competitive, and so there are incentives to manipulate the system for self-serving ends. How significant is this issue in current PRFS, and how should agencies assessing research respond?

“This is an unfortunate side effect in most PRFS,” says Diana. The solution, she believes, lies in making small, incremental adaptations to the system in every round of evaluation. “Governments can tweak the rules of assessment so that universities and institutions are hampered in manipulations aimed solely at improving their score. The aim of doing this is to minimize the focus on the metrics used for evaluation, while maximizing the focus on performance.”

Professor Diana Hicks

1985-1999: Faculty member at SPRU - Science and Technology Policy Research, University of Sussex, UK

1998-2003: Senior Policy Analyst at CHI Research, US

2003- : Professor and Chair of the School of Public Policy, Georgia Tech, US

Professor Hicks has taught at the Haas School of Business at the University of California, Berkeley, and worked at the National Institute of Science and Technology Policy (NISTEP) in Tokyo. She is an honorary fellow of the Science Policy Research Unit (SPRU), University of Sussex, UK, and serves on the Academic Advisory Board of the Center for Science, Policy and Outcomes, Washington, D.C.

Further reading:

OECD (2010). Performance-based Funding for Public Research in Tertiary Education Institutions: Workshop Proceedings, OECD Publishing.  

http://dx.doi.org/10.1787/9789264094611-en


Excellence in Research for Australia — a new ERA

Research Trends talks to Margaret Sheil, Chief Executive Officer of the Australian Research Council, about Australia’s evolving research assessment framework.



As Chief Executive Officer of the Australian Research Council, Professor Margaret Sheil is responsible for the Excellence in Research for Australia (ERA) initiative. ERA aims to assess research quality in the Australian higher education context by combining indicators with expert review. In this article Research Trends talks with Professor Sheil about ERA and its consequences.

Broadly speaking, Australian universities receive funding through two routes: competitive research grants, which are awarded to academics based on the strength of their proposals; and ‘block funding’, which is given to universities to cover the indirect costs of doing research in a university, such as maintaining libraries and IT systems, hiring support staff, and so on. In general, universities get more block funding if they pull in more competitive grants, and generate more publications and graduates. “This can lead to a mindset of ‘doing better by doing more’,” says Sheil. “Research excellence assessments ask whether we’re driving research excellence rather than just research quantity.”

How does the Excellence in Research for Australia framework achieve this goal?

The Excellence in Research for Australia (ERA) framework came into existence in 2009, replacing the Research Quality Framework established under the previous government (but never actually implemented). Whereas the RQF tried to develop a one-size-fits-all model across the university sector, “we know that different disciplines judge quality differently,” says Sheil. So ERA takes a ‘matrix approach’ that draws on a range of indicators that collectively can be applied to the whole sector, even though some components carry more weight in certain disciplines than in others. “We decided to look at indicators of quality that are accepted within disciplines, clustered like-minded disciplines together, and said ‘If there are robust metrics, we’ll use them’.” ERA breaks the universe of research down into 157 disciplines, and for 101 of these, particularly in the physical sciences, citation analysis was a key indicator. “Where there wasn’t confidence that metrics would work, or in areas such as the humanities where books are more important than journal publications, we used expert peer review as an indicator of quality,” says Sheil. “There’s a lot of confidence about citation analysis in many disciplines, but we believe we need experts to judge whether this makes sense for a particular discipline, and what it means in the context of other indicators.” These include ‘esteem’, such as how many members of a department belong to learned academies; ‘applied indicators’, such as the number of patents produced and income generated through commercialization of research; and success in gaining competitive grants, which have an in-built quality control component. Finally, ERA has produced a list of journals ranked by quality, which has been used to look at how many publications from a discipline get into the higher-ranking journals. “All of these indicators are grouped by discipline and by university, and then expert committees look at the total of the indicators and derive an overall assessment score,” says Sheil.

The first ERA report was released in 2010, and another will be published in 2012. What has been learned in these early days?

“There has been a misunderstanding that ERA is about ranking the quality of whole universities, rather than individual disciplines,” says Sheil. “We don’t think university rankings are meaningful, but we could have been clearer about this.” For instance, The Australian, a leading national newspaper, took the assessment scores of disciplines within universities, combined these scores and then averaged them to create an overall ‘university ranking’ score. “That doesn’t make any sense,” says Sheil. A small university specializing in, for example, theology may be world-class in this discipline and would thus be placed high in such a ranking, above larger universities that score highly in some disciplines but lower in others. This can mask pockets of excellence in institutions with a broad disciplinary remit.

The past couple of years have also seen controversy over ERA’s journal ranking, a core element of the system. “We learned that we haven’t managed to stop the obsession with journal rankings, which is the most commonly misunderstood aspect of ERA.”

Some commentators have claimed that the journal rankings are not fair, and that some disciplines suffer as a result of the rankings settled on in ERA. How do you respond?

Some observers, says Sheil, seem to think that ERA is solely about the journal rankings (each journal is given a single quality rating of A*, A, B or C), which is why they’ve received such attention. “There’s a view that if you don’t have a high number of A and A* journals in your discipline, it’s disadvantaged. But if you look at zoology, which is a strong focus of Australian biology, there are hardly any A* journals but the discipline still performed very well because the work done in this area is highly cited and scores well on other indicators in the assessment matrix,” says Sheil. That’s not to suggest that the journal ranking system is perfect. “Did we get some journals wrong? For sure — there are 22,000 journals to rank, after all! But because people have got a bit obsessive about this, especially journal editors, we’re currently looking at what impact the journal ranking element had on overall assessments.”

Is there an inherent danger in performance-based research assessment that it can discourage exploratory, novel research?

“If we used the assessment outcomes to decide every allocation of research funding, this would be a real concern,” says Sheil. “But we’re introducing other things into the grant side of the business to counterbalance that.” In addition, while ERA “looks backwards” to enable the government to assess whether direct block funding pumped into universities has been well spent, grants are essentially “forward looking”, and therefore based on different criteria. “It’s really important that when it comes to grant giving — where we’re assessing potential — we recognise these issues, and continue to invest in and take risks on the next generation of research.”

Professor Margaret Sheil (FTSE FRACI C Chem)

1990–2000: Lecturer in the Department of Chemistry, University of Wollongong (UoW), Australia

2001: Dean of Science at UoW

2002–2007: Deputy Vice-Chancellor (Research) at UoW

2007– : Chief Executive Officer of the Australian Research Council

Professor Sheil is a member of the Cooperative Research Centres Committee, the Prime Minister’s Science Innovation and Engineering Council and the National Research Infrastructure Council. She is also a member of the Board of the Australia-India Council, the Advisory Council of the Science Industry Endowment Fund and the National Research Foundation of Korea. She is a Fellow of the Academy of Technological Sciences and Engineering (FTSE) and the Royal Australian Chemical Institute (FRACI).


Layered assessment: Using SciVal Strata to examine research performance

Tracking the publication output of a single researcher is straightforward, but what about tracking the contributions of individuals to research groups whose composition changes dynamically over time? SciVal Strata provides the answer.



Strata is the latest addition to Elsevier’s SciVal suite of research performance and planning tools, and provides methods of assessing individuals and groups through their publication histories. In the last issue, we showed how SciVal Spotlight could visualize the research landscape of the United States; in this article we look at the ways SciVal Strata can chart the research performance of an individual or wider group, either alone or in comparison.

 

Reading between the lines

Science is an inherently progressive, cumulative enterprise. Each year brings more qualified scientists and researchers, more papers and ever more citations to those papers. So the standard view of citations over time in SciVal Strata might come as a surprise. Figure 1 shows the average number of citations per year of papers published in the field of Ecology, Evolution, Behavior and Systematics. At a glance it appears that citations in the field are plummeting, perhaps signaling the implosion of these scientific disciplines.

Figure 1 – Average citations per paper published in the field of Ecology, Evolution, Behavior and Systematics: the three benchmarks show UK papers (purple), European papers (green), and all world papers (blue). Source: SciVal Strata.

Of course, that isn’t the case. The declining curves reflect a drop neither in scientific quality nor in citation quantity: the default view shows the average citations per document received by papers from each publication year, rather than counting citations cumulatively over time. Since recent publications will typically have received fewer citations to date on average – as they have had less time to accumulate those citations – the shape of the curves makes sense.
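To make the computation behind this default view concrete, the following minimal sketch groups a set of papers by publication year and reports the average citations per paper for each year. The input data are invented, and this is an illustration of the metric rather than SciVal Strata’s own implementation:

```python
# Average citations per paper, grouped by publication year.
# Hypothetical input data; illustrates the metric behind the default view.
from collections import defaultdict

papers = [
    {"year": 2006, "citations": 34},
    {"year": 2006, "citations": 12},
    {"year": 2008, "citations": 9},
    {"year": 2010, "citations": 2},  # recent papers have had less time to be cited
]

totals = defaultdict(lambda: [0, 0])  # year -> [citation sum, paper count]
for paper in papers:
    totals[paper["year"]][0] += paper["citations"]
    totals[paper["year"]][1] += 1

for year in sorted(totals):
    citations, count = totals[year]
    print(year, round(citations / count, 2))
```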

Benchmarks can be quickly set up in Strata by the user, and researchers can then be compared with average citation rates in their field. Figure 2 shows a researcher compared with the world average. The downward slope noted above is still visible, but an individual’s performance shows highs and lows that tend to be absent from averages. This researcher clearly had success with papers published in 2000, shown by the sharp rise in the line at that year. Smaller rises can also signify success: the value for 2008 is low relative to earlier years, but once the shorter time that 2008 papers have had to accumulate citations is taken into account, it was clearly a successful year.

Figure 2 – Average citations per document by publication year of a researcher in the field of Ecology, Evolution, Behavior and Systematics (red) compared with the world average (blue). Source: SciVal Strata.

  

Team players

SciVal Strata enables the analysis of citation patterns in entire research fields, as well as by individual researcher. SciVal Strata also has various ways of showing the contribution an individual makes to a research group; Figures 3 and 4 show two methods. In Figure 3, a researcher is directly compared with his or her team: the two lines weave in and out, as the individual or group outperforms the other, and it is easy to see some disparity in the years 2001 and 2003.

Figure 3 – A researcher (red) compared with their research group (brown) and the world average (blue), showing average citations per document by publication year. Source: SciVal Strata.

Another way of examining the contribution this researcher makes to the research group is to compare two versions of the research group: one with the researcher included, and one without (see Figure 4).

Figure 4 – A comparison of the same research group either with (brown) or without (red) one of its researchers, showing average citations per document by publication year. World average is also shown (blue). Source: SciVal Strata.

 

Open to other views

Bibliometricians commonly warn against the use of a single indicator to make assessments of research output and quality: different measures must often be used to evaluate different aspects of performance. For instance, in an assessment of an individual, the number of invited lectures at international conferences is a useful non-bibliometric indicator. In SciVal Strata, any comparison — whether between individuals, groups, or any other ‘cluster’ of researchers — can be made looking not only at average citations per paper, but also at h-index, citation and publication counts, or the ratio of cited to uncited papers. Figure 5 shows two researchers compared using their h-index values, and Figure 6 their cited and uncited papers from each year. This range of indicators, and the flexibility they allow, means that a comprehensive view of a researcher or group can be used to aid important decisions about promotion, recruitment, and collaboration.

Figure 5 – Comparison of two researchers’ h-index values. The curves show the citations received by each researcher’s papers when arranged in descending order of citations. Dropping a line to either axis from the intersection of each curve with the black line (at 45 degrees from the origin) gives the h-index: here one researcher (green) shows a higher h-index than the other (red). Source: SciVal Strata.
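For reference, this is a minimal sketch of the standard h-index calculation that Figure 5 visualizes: with a researcher’s papers sorted by citation count in descending order, h is the largest rank at which a paper still has at least that many citations. The citation counts below are invented:

```python
# Standard h-index: sort citation counts in descending order; h is the
# largest rank r such that the r-th paper has at least r citations.
def h_index(citation_counts):
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(ranked, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 17, 9, 6, 3, 1]))  # hypothetical researcher: h-index = 4
```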

 

Figure 6 – Comparison of the outputs of two researchers per year. The bars show total number of documents, and each is split into solid and faded sections showing the documents that are, respectively, cited and uncited to date. Comparison of the faded and solid areas shows the uncited rate of documents published each year: as expected, this is higher in more recent years as recent documents have had less time to become cited. Source: SciVal Strata.

However, while bibliometric indicators offer a clear view of an individual’s performance — particularly when a wide range is available — it is important to note that they may not tell the whole story. For example, if each co-author of an article is assigned one full credit for the publication, this can mask differences in their actual contributions to the article: one author may have done the majority of the work. Rather than ignore such difficulties, bibliometricians and others involved in research assessment need either to use more sophisticated approaches, such as the comparisons available in SciVal Strata, or to combine bibliometric assessment with other indicators of research performance and researcher prestige.
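One standard way of addressing the co-authorship problem is fractional counting, in which each of an article’s n authors receives 1/n of the credit rather than a full count. The sketch below, with invented data, contrasts the two counting schemes; it is offered as an illustration of the issue, not as the approach taken in SciVal Strata:

```python
# Full counting gives every co-author one credit per paper; fractional
# counting splits the credit among the n co-authors. Hypothetical data.
from collections import defaultdict

papers = [
    {"authors": ["Smith", "Jones"]},
    {"authors": ["Smith", "Jones", "Lee", "Chen"]},
    {"authors": ["Jones"]},
]

full = defaultdict(float)
fractional = defaultdict(float)
for paper in papers:
    n = len(paper["authors"])
    for author in paper["authors"]:
        full[author] += 1.0
        fractional[author] += 1.0 / n

print(dict(full))        # Smith: 2.0, Jones: 3.0, Lee: 1.0, Chen: 1.0
print(dict(fractional))  # Smith: 0.75, Jones: 1.75, Lee: 0.25, Chen: 0.25
```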


Did you know

Mentor citation records can influence grant application success?

Sarah Huggett
A recent study has found a positive correlation between female postdoctoral fellows’ grant application success and the citation records (citations and h-index) of their mentors. “Females whose mentor’s h-index was in the top quartile were nearly 3 times as likely to receive a major grant as those whose mentors were in the bottom quartile” [1]. Interestingly, the correlation did not apply to male postdoctoral fellows.

The author interpreted these findings as a sign that residual sex discrimination can be counteracted by association with a prestigious mentor, while acknowledging that this interpretation would benefit from further analysis.

[1] Levitt, D.G. (2010) Careers of an elite cohort of U.S. basic life science postdoctoral fellows and the influence of their mentor’s citation record. BMC Medical Education 10:80.
