Journal ranking biased against interdisciplinary research

The widespread use of journal rankings in research institutes and universities creates a disadvantage for interdisciplinary research in assessment exercises such as the British Research Excellence Framework. This is the conclusion of a paper presented at the 2011 Annual Conference of the Society for the Social Studies of Science in Cleveland (US) by Ismael Rafols (SPRU, Sussex University), Loet Leydesdorff (University of Amsterdam) and Alice O’Hare, Paul Nightingale and Andy Stirling (all SPRU, Sussex University). The study provides the first quantitative evidence that researchers working at the boundaries between different research fields may be disadvantaged compared with monodisciplinary colleagues. The study argues that citation analysis, if properly applied, is a better measurement instrument than a ranked journal list.

 

The study is highly relevant for research management at universities and research institutes. Journal lists have become a very popular management tool. In many departments, researchers are obliged to publish in a limited set of journals. Some departments, for example in economics, have even been reorganized on the basis of their staff’s publications in the listed journals. How these lists are composed varies: sometimes a group of experts decides whether a journal belongs on the list, sometimes the Journal Impact Factor published by ISI/Thomson Reuters is the determining factor.

 

The study by Rafols et al. analyzed one such list: the ranked journal list used by the British Association of Business Schools. This list is based on a mix of citation statistics and peer review. It ranks scholarly journals in business and management studies in five categories, from “modest standard journals” in category 1 up to “world elite journals” in category 4*. This scheme mirrors the categories researchers know from the Research Assessment Exercise. The ranked journal list is meant to be used widely, for a variety of management goals: it serves as advice to researchers about the best venue for their manuscripts, libraries are supposed to use it in their acquisition policies, and, last but not least, it is used in research assessments and personnel evaluations. Although the actual use of the list is an interesting research topic in itself, we can safely assume that it has had a serious impact on researchers in the British business school community.

 

The study shows, first of all, that the position of a journal in the ranked list correlates negatively with the extent of the journal’s interdisciplinarity. In other words, the higher a journal is ranked, the narrower its disciplinary focus. (The study used a number of indicators of interdisciplinarity, capturing different aspects of what it means to be interdisciplinary.) Rewarding researchers for publishing primarily in journals on the ranked list may therefore discourage interdisciplinary work.
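
For readers who want to see what such a check looks like in practice, here is a rough Python sketch with invented toy data (not the study’s actual journals, categories or indicators): it computes a rank correlation between a journal’s category on the list and a hypothetical interdisciplinarity score.

    # Toy illustration with invented data (not the study's): does a journal's
    # category in the ranked list correlate negatively with how
    # interdisciplinary the journal is?

    # category coded 1 (modest standard) to 5 (world elite, i.e. 4*), plus a
    # hypothetical interdisciplinarity score between 0 and 1 for five journals
    category = [1, 2, 3, 4, 5]
    interdisciplinarity = [0.62, 0.48, 0.55, 0.35, 0.28]

    def spearman(x, y):
        """Spearman rank correlation (toy version, assumes no tied values)."""
        def ranks(values):
            order = sorted(range(len(values)), key=lambda i: values[i])
            r = [0] * len(values)
            for rank, i in enumerate(order, start=1):
                r[i] = rank
            return r
        rx, ry = ranks(x), ranks(y)
        n = len(x)
        d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
        return 1 - 6 * d2 / (n * (n ** 2 - 1))

    print(spearman(category, interdisciplinarity))  # -0.9: higher rank, narrower focus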

 

The study confirms this effect by comparing business and management studies with innovation studies. Both fields are subject to the same evaluation regime in the Research Excellence Framework, and intellectually they are very close. However, they differ markedly in their interdisciplinary nature. Researchers in business schools have a more traditional publishing behaviour than their innovation studies colleagues, and the research units in innovation studies are consistently more interdisciplinary than the business and management schools.

 

Of course, publication behaviour is shaped by a variety of influences. Peer review may be biased against interdisciplinary work because its quality is more difficult to assess, and many top journals are not eager to publish interdisciplinary work. This study is the first to show convincingly that these existing biases tend to be reinforced by the use of ranked journal lists as a tool in research management. The study confirms this effect by comparing performance based on the ranked journal list with a citation analysis. In the latter, innovation studies research is not punished for its more interdisciplinary character, as it is in an assessment based on the journal list. The paper concludes with a discussion of the negative implications in terms of funding and acquiring resources for research groups working at the boundaries of different fields.

 

The paper will be published in a forthcoming issue of Research Policy and was awarded best paper at the Atlanta Conference on Science and Innovation Policy in September 2011.

 

Reference: Ismael Rafols, Loet Leydesdorff, Alice O’Hare, Paul Nightingale, & Andy Stirling, “How journal rankings can suppress interdisciplinary research: A comparison between innovation studies and business & management,” paper presented at the Annual Meeting of the Society for the Social Studies of Science (4S), Cleveland, OH, Nov. 2011; available at http://arxiv.org/abs/1105.1227.

Harvard no longer number 1 in ranking

Recently, the new Times Higher Education World University Rankings 2011-2012 was released. The ranking revealed that Harvard University is no longer number one on the list, although the difference with Caltech, now in first place, is minimal. The main reason for Caltech’s rise is the extra revenue it drew from industry: Caltech’s income increased by 16%, outclassing most other universities. Harvard scored a bit better on the educational environment. Other universities also rose on the list as a result of successful campaigns to obtain (more) external financing. The London School of Economics, for example, moved from 86 to 47. The top of the ranking did not change that drastically, though. Rich US-based universities still dominate the list: seven of the ten highest-placed universities, and one third of the top 200, are located in the US.

This illustrates the THE ranking’s sensitivity to slight differences between the indicators that, taken together, shape the order of the ranking. The ranking is based on a mix of many different indicators. There is no standardized way to combine these indicators, so the process inevitably involves a certain arbitrariness. In addition, the THE ranking is partly based on the results of a global survey in which researchers and professors are invited to assess the reputation of universities. One unwanted effect of this method is that well-known universities are more likely to be assessed positively than less well-known ones. Highly visible abuses and scandals may also influence survey results.
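
As a toy illustration of this sensitivity, the following Python sketch uses invented scores and weights (not THE’s actual indicators or methodology) to show how two plausible weighting schemes can put the same two universities in opposite orders.

    # Toy illustration with invented numbers (not THE's indicators or weights):
    # the same two indicator scores yield opposite orderings under two
    # plausible weighting schemes.

    universities = {
        "University A": {"research": 90, "industry_income": 40},
        "University B": {"research": 80, "industry_income": 95},
    }

    def composite(scores, weights):
        """Weighted sum of indicator scores."""
        return sum(weights[k] * scores[k] for k in weights)

    scheme_1 = {"research": 0.9, "industry_income": 0.1}  # research-heavy weighting
    scheme_2 = {"research": 0.7, "industry_income": 0.3}  # more weight on industry income

    for name, weights in [("scheme 1", scheme_1), ("scheme 2", scheme_2)]:
        order = sorted(universities,
                       key=lambda u: composite(universities[u], weights),
                       reverse=True)
        print(name, order)  # scheme 1 puts A first, scheme 2 puts B first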

This year, the ranking’s sensitivity to the way different indicators are combined is aptly illustrated by the position of the Dutch universities. The Netherlands ranks third, with 12 universities in the top 200 and 4 in the world’s top 100. Given the size of the country, this is a remarkable achievement. The result is partly due to the strong international orientation of the Dutch universities, and partly to earlier investments in research and education. But just as important is the weight given to the performance of the social sciences and humanities in a number of indicators. Compared with last year, the overall performance of the Dutch universities most likely did not increase that much; a more plausible explanation is that their profile of activities and impact is now better covered by the THE ranking.

The latest THE ranking does make clear that size is not the most important determinant of a university’s position. Small, specialized universities can end up quite high on the list.

Still using the Hirsch index? Don’t!

“My research: > 185 papers, h-index 40.” A random quote from a curriculum vitae on the web. Researchers sometimes love their Hirsch index, better known as the h-index. But what does this measure actually mean? Is it a reliable indicator of scientific impact?

 

Our colleagues Ludo Waltman and Nees Jan van Eck have studied the mathematical and statistical properties of the h-index. Their conclusion: the h-index can produce inconsistent results. For this reason, it is not the reliable measure of scientific impact that most users think it is. As a leading scientometric institute, we have therefore published the advice to all universities, funders, and academies of science to abandon the use of the h-index as a measure of the overall scientific impact of researchers or research groups. There are better alternatives. The paper by Waltman and Van Eck is now available as a preprint and will soon be published in the Journal of the American Society for Information Science and Technology (JASIST).

 

The h-index is a measure that combines productivity and citation impact. It is calculated by ranking a researcher’s publications by the number of citations each has received. For example, someone with an h-index of 40 has published at least 40 articles that have each been cited at least 40 times, while the remaining articles have each been cited no more than 40 times. The higher the h-index, the better.
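
For the technically inclined, here is a minimal Python sketch of this calculation (the citation counts in the example are made up):

    # A minimal sketch of the h-index calculation described above.

    def h_index(citations):
        """Largest h such that h publications have at least h citations each."""
        ranked = sorted(citations, reverse=True)  # most cited paper first
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4:
    print(h_index([10, 8, 5, 4, 3]))  # -> 4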

 

The h-index was proposed by the physicist Jorge Hirsch in 2005. It was an immediate hit. Nowadays, there are about 40 variants of the h-index. About one quarter of all articles published in the main scientometric journals cite Hirsch’s article in which he describes the h-index. Even more important has been the response of scientific researchers using the h-index. The h-index has many fans, especially in fields that exchange many citations, such as the biomedical sciences. The h-index is almost irresistible because it seems to enable a simple comparison of the scientific impact of different researchers. Many institutions have been seduced by its siren call. For example, the Royal Netherlands Academy of Arts and Sciences (KNAW) asks for the value of the h-index in its recent forms for new members. Individual researchers can look up their h-index based on Google Scholar documents via Harzing’s Publish or Perish website. Both economists and computer scientists have produced rankings of their fields based on the h-index.

 

Our colleagues Waltman and Van Eck have now shown that the h-index has some fatal shortcomings. For example, if two researchers with different h-indices co-author a number of papers together, their positions in an h-index-based ranking may be reversed. The same may happen when we compare research groups. Suppose we have two groups and each member of group A has a higher h-index than a paired researcher in group B. We would expect the h-index of group A as a group to be higher than that of group B as well. That does not have to be the case. Note that we are speaking here of an h-index calculated from a complete and reliable record of documents and citations; the problematic nature of the data when Google Scholar is used as the source is a different matter. So even with complete and accurate data, the h-index may produce inconsistent results. Surely this is not what one wants when using the index for evaluation purposes!
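
To make this concrete, here is a small Python sketch with invented numbers (not taken from the Waltman and Van Eck paper) showing both effects: a rank reversal after two researchers co-author a few joint papers, and a pair of groups in which every member of group A outscores their counterpart in group B while group B still ends up with the higher group h-index.

    # Toy numbers, invented for illustration only.

    def h_index(citations):
        """Largest h such that h publications have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        return max([0] + [rank for rank, c in enumerate(ranked, start=1) if c >= rank])

    # 1. Co-authoring joint papers can reverse the ranking of two researchers.
    a = [5, 5, 5, 5, 5]          # h = 5
    b = [9, 9, 9, 9]             # h = 4, so A ranks above B
    joint = [9, 9, 9]            # three joint papers, each cited 9 times
    print(h_index(a), h_index(b))                  # 5 4
    print(h_index(a + joint), h_index(b + joint))  # 5 7 -> B now ranks above A

    # 2. Member-by-member dominance need not carry over to the group level.
    group_a = [[6] * 6, [6] * 6]     # every member of A has h = 6
    group_b = [[20] * 5, [20] * 5]   # every member of B has h = 5
    pool = lambda group: [c for member in group for c in member]
    print(h_index(pool(group_a)), h_index(pool(group_b)))  # 6 10 -> group B wins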

 

At CWTS, we have therefore drawn the conclusion that the h-index should not be used as a measure of scientific impact in the context of research evaluation.

Rankings under fire from Groningen

Rafael Wittek, director of the Interuniversity Center for Social Science Theory and Methodology (ICS), based at the University of Groningen, recently attacked Dutch university policies on the occasion of the 25th anniversary of his famous graduate school. One of his targets was “the hype around rankings”. Accredited in 1986, the ICS was the first national social science graduate school in the Netherlands. The school emerged from Dutch networks of PhD students funded by the Ministry of Education and Science. According to Wittek, the universities are now trying to score high in the global rankings (such as the Times Higher Education ranking, the Shanghai ranking and, of course, the Leiden ranking), and he argued that this is a wrongheaded approach: “Rankings as an indicator of quality are a hype. To adopt them is merely a policy reflex.”

 

I think the sociologist puts his finger on a sore spot in Dutch science policy and management. This is particularly true of his critique of the policies around PhD training and the national Graduate Schools. According to Wittek, “The Hague” has been too eager to follow new European guidelines and has promoted competition, rather than cooperation, among universities: “In the last couple of years, many national Graduate Schools have been dismantled and new local Graduate Schools have been created in their stead. Dutch universities increasingly claim the results of ‘their’ researchers and give them fewer opportunities to collaborate with colleagues from other universities.” His remarks will strike a chord with everyone (myself included) who was trained in the national research schools in which all, or almost all, universities worked together. It is indeed a loss that the Dutch ministry discouraged national Graduate Schools and switched completely towards stimulating local ones, although happily a few nation-wide schools are still alive and kicking (such as the Graduate School Science, Technology and Modern Culture).

 

Still, although his remarks are to the point, I do not think he is completely right. For example, it is simply not true that the Dutch universities are engaged in ruthless competition with each other. On the contrary, the new trend is the emergence of regional clusters of universities, a new form of close collaboration aimed at competing globally with American and Asian universities. Increasing collaboration is, moreover, the trend in scientific publishing, as demonstrated recently by a study by my colleagues at CWTS and by the recent Royal Society report on scientific networks. The share of multi-authored, multi-institutional, international publications is still rising, in all fields of research, and their average citation impact is greater than that of single-author or national publications. We should not overestimate the power of university boards to limit the scale of scientific collaboration.

 

Nonetheless, Wittek’s criticism of rankings should certainly be taken very seriously. The sociologist sees the “policy reflex” as a danger to the quality of research, particularly in areas of high-risk fundamental research. He thinks that researchers who are forced to score high in the rankings will be reluctant to take on big, important questions and will tend to develop a more limited and less risky research agenda. I agree. This is indeed the most important risk of rankings running wild, disconnected from the context of fundamental or applied research. But I think there may be a bit more at play than just policy reflexes. The universities are confronted with an accelerating process of global competition in which new scientific centres are emerging, among them China, India, Brazil, Turkey and Iran. In these countries, researchers tend to have to meet much stricter performance criteria than is usual in the Netherlands, which makes it difficult, perhaps even impossible, for Dutch university boards to ignore the rankings. In the Netherlands this problem is particularly acute, since the recent xenophobic hype around immigration is already making it difficult enough to attract talented young researchers from non-European countries. Does this mean that an obsession with rankings is inevitable? I think not. I can imagine a number of alternative, more imaginative strategies to counter this race for the highest position in the rankings.

 

I do think Wittek is right that recognition by peers is the strongest motivator for researchers. He even thinks that scientists do not need any other stimulus. That last idea may be a bit over the top, but he has a good point. Rankings can and should therefore be used in direct connection with this peer stimulus. Policies that focus only on climbing the global university rankings indeed do not make much sense. But that does not mean it makes no sense at all to rank. Rankings can very well be used to gain a better understanding of one’s strong and weak points (at the level of individual researchers, groups and institutes, as well as universities and countries). This can be done while taking into account the specific characteristics of the relevant disciplines. (For different disciplines, different databases may be needed to construct the rankings.) Ranking in context: that should be possible, shouldn’t it?

Anxiety about quality may hinder open access

Anxiety about the quality of open access journals hinders the further spread of open access publishing. This conclusion was voiced many times during the recent Co-ordinating Workshop on Open Access to Scientific Information, held in Brussels on 4 May this year. The workshop was attended by about 70 key players in open access and was organized by two EU directorates: Research, and Information Society & Media.

The critical role of quality control came to the fore in various ways.

Salvatore Mele (CERN), coordinator of the SOAP project, presented the results of their study (based on a web survey) of researchers’ attitudes towards open access. The results reveal a remarkable gap between strong support for open access on the one hand and a lack of actual open access publishing on the other: 89% of researchers say they are in favour of open access publishing, yet only 8 to 10% of published articles are open access. According to the SOAP study, two factors are mainly responsible for this gap: the problem of financing open access publications and the perceived lack of quality of many open access journals. The Journal Impact Factor was also mentioned as a reason not to publish in existing open access journals.

The weight of these factors varies by field. For example, in chemistry 60% of researchers mention financial reasons as a barrier to open access, whereas only 16% of astronomers see finance as problematic. In astronomy, worries about the quality of journals are mentioned most often (by more than half of the astronomers), whereas this is seen as a problem by only about one fifth of the chemists. This result points, by the way, to the need to develop specific open access policies for different scientific and scholarly fields. In the humanities, for example, open access books will be an important issue.

Quality of the journals was also central in a new initiative made public at the workshop by the delegation of SURF, the ICT organization of the Dutch universities: Clearing the Gate. This initiative is aimed at funding organizations such as the Dutch research council NWO. It calls upon them to develop a preference for open access publications for the research they fund, giving priority to publications in high-quality open access journals as a condition for funding. SURF is convinced that once this priority is in place, we will witness strong growth in the number of available open access journals of high to very high quality. The representative of NWO joined this initiative and made clear that his organization already supports new open access journals in the social sciences and humanities. This spring, NWO will publish a call aimed at the other disciplines. NWO also supports the OAPEN initiative for open access books in the humanities. An important motivation for the organization is financial: “we do not want to pay twice for the same research”.

For evaluators and scientometricians, this development is an interesting challenge as well. How to evaluate open access activities in research?

Note:

My Dutch language report of the EU Open Access workshop meeting was published in the journal Onderzoek Nederland, nr. 277, 7 May 2011, p. 8.

My presentation at the EU workshop is available here.

Evaluating e-research

We had a very interesting discussion last week at the e-Humanities Group of the Royal Netherlands Academy of Arts and Sciences. The problem I presented is how to evaluate e-research, the newly emerging style of scientific and scholarly research that makes heavy use of, and contributes to, web-based resources and analytical methods. The puzzle is that current evaluation practices are strongly biased towards one particular mode of scientific output: peer-reviewed journal articles, and within that set particularly articles published in journals used as source material for the Web of Science, published by ISI/Thomson Reuters. If scholars in the sciences, social sciences and humanities are expected to contribute to e-science and e-research, it is vital that the reward and accounting systems of the universities honour work in this area. Here is the link to the presentation "Evaluating e-Research".

International networks start to drive research

Networks of collaborating scientists spanning the globe are increasingly shaping the research landscape. The share of papers co-authored by researchers from different countries is steadily growing: more than one third of all papers are now based on an international collaboration, up from one quarter fifteen years ago. On top of this, these internationally co-authored papers have a higher citation impact. Each additional foreign partner on a paper increases its potential to be cited, up to a tipping point of approximately 10 countries. The dynamics of these international networks, together with sustained investments in scientific research by an increasing number of countries, are producing a much more multipolar world. Not surprisingly, China is rising fast. Ranking countries by the number of scientific papers produced, China is now number 2, with a 10% share of international scientific production, and it is expected to become number 1 within a few decades. Brazil and India are also emerging as powerful players on the international scene. But the rise of new scientific centres is not restricted to the BRICS countries. In the Middle East, both Turkey and Iran are investing heavily, with an enormous growth in authors and papers as a result. While Iran published a little over 700 papers in 1993, by 2008 this had risen to more than 13 thousand. Turkey published four times as many papers in 2008 as in 1996, and its number of researchers has grown by 43%. Still, the current heavyweights dominate the citation-based rankings. With a decreasing share of total publications (down from 26% to 21%), the United States still attracts the largest share of citations: more than 30% of all publications cite work originating in the United States. Chinese papers have significantly less impact: with a 10% share of papers, the Chinese collect only 3% of the citations.

 

These are some of the highlights of the recent report of the Royal Society (UK), "Knowledge, Networks and Nations: Global scientific collaboration in the 21st century". The report is based on an analysis of all papers in the Scopus database (Elsevier) published between 2004 and 2008, compared with the production between 1993 and 2003. The report combines these findings with five case studies of prominent international research initiatives in health research, physics, and climate research. I think this report is a goldmine of interesting facts and sometimes surprising developments, and a must-read for all science policy actors.

 

For European science policy makers, the report should moreover give pause for reflection. The fast rise of international networks is particularly relevant for Europe because of the rise of anti-immigration parties, which currently have a big impact on policy in general and thereby also on science policy. The share of internationally co-authored papers in the European countries is rising, which means that researchers in Europe need to be supported in creating more international collaborations. This simply cannot be combined with an anti-immigration policy focused on blocking the international exchange of scientific personnel. In Europe, unlike in Asia, the general political climate therefore seems to be out of step with developments in the world of science and scholarship. A creative science policy requires an open attitude, eager for the international exchange of ideas and people, not least with colleagues in Turkey and Iran. And Turkey should become a member of the European Union as soon as possible.

 

The report also shows nicely that internationalization is not a simple process. Overall, the number of internationally co-authored papers is on the rise, and in the current scientific centres this goes together with an increase in the share of international papers in total national scientific production. But in China and Brazil, the share of international papers is decreasing, even while the absolute number of internationally co-authored papers is rising. Turkey and Iran show comparable, albeit less pronounced, trends. The explanation is that in these countries national research capacity is building up faster than their international collaborations are growing.