Journal rankings biased against interdisciplinary research

The widespread use of journal rankings at research institutes and universities puts interdisciplinary research at a disadvantage in assessment exercises such as the British Research Excellence Framework. This is the conclusion of a paper presented at the 2011 Annual Conference of the Society for the Social Studies of Science in Cleveland (US) by Ismael Rafols (SPRU, Sussex University), Loet Leydesdorff (University of Amsterdam) and Alice O’Hare, Paul Nightingale and Andy Stirling (all SPRU, Sussex University). The study is the first quantitative evidence that researchers working at the boundaries between different research fields may be disadvantaged compared with monodisciplinary colleagues. The study argues that citation analysis, if properly applied, is a better measurement instrument than a ranked journal list.


The study is highly relevant for research management at universities and research institutes. Journal lists have become a very popular management tool. In many departments, researchers are required to publish in a limited set of journals. Some departments, for example in economics, have even been reorganized on the basis of who had published in journals on such a list. How these lists are composed varies: sometimes a group of experts decides whether a journal belongs on the list, sometimes the Journal Impact Factor published by ISI/Thomson Reuters is the determining factor.


The study by Rafols et al. analyzed one such list: the ranked journal list used by the British Association of Business Schools. This list is based on a mix of citation statistics and peer review. It ranks scholarly journals in business and management studies in five categories, from category 1 (“modest standard journals”) to category 4* (“world elite journals”). This scheme mirrors the categories researchers know from the Research Assessment Exercise. The ranked journal list is meant to be used widely for a variety of management goals. It serves as advice to researchers about the best venue for their manuscripts. Libraries are supposed to use it in their acquisition policies. And last but not least, it is used in research assessments and personnel evaluations. Although the actual use of the list is an interesting research topic in itself, we can safely assume that it has had a serious impact on researchers in the British business school community.


The study shows first of all that the position of a journal in the ranked list correlates negatively with the journal's degree of interdisciplinarity. In other words, the higher a journal is ranked, the narrower its disciplinary focus. (The study used a number of indicators of interdisciplinarity, capturing different aspects of what it means to be interdisciplinary.) Rewarding researchers for publishing primarily in journals on the ranked list may therefore discourage interdisciplinary work.
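To make the idea of an interdisciplinarity indicator concrete: one widely used family of measures looks at how a journal's cited references are spread over disciplines, weighting pairs of disciplines by how dissimilar they are (a Rao-Stirling-style diversity score). The sketch below only illustrates that logic and is not the paper's actual implementation; the discipline names, shares and similarity values are invented.

```python
# Hedged sketch of a Rao-Stirling-style diversity score for a journal,
# assuming we already know the share of its references going to each
# discipline and a similarity matrix between disciplines.
# All names and numbers below are illustrative, not real data.

def rao_stirling_diversity(proportions, similarity):
    """Sum over discipline pairs of p_i * p_j * (1 - similarity_ij).

    proportions: dict mapping discipline -> share of the journal's cited
                 references (shares should sum to 1).
    similarity:  dict mapping (discipline_i, discipline_j) -> similarity
                 in [0, 1]; missing pairs are treated as unrelated (0).
    """
    disciplines = list(proportions)
    diversity = 0.0
    for i, a in enumerate(disciplines):
        for b in disciplines[i + 1:]:
            sim = similarity.get((a, b), similarity.get((b, a), 0.0))
            # each unordered pair counts twice in the usual double sum
            diversity += 2 * proportions[a] * proportions[b] * (1.0 - sim)
    return diversity


# Illustrative example: a narrowly focused management journal versus one
# that also draws on psychology and computer science.
narrow = {"management": 0.9, "economics": 0.1}
broad = {"management": 0.4, "economics": 0.2,
         "psychology": 0.2, "computer science": 0.2}
similarity = {("management", "economics"): 0.7,
              ("management", "psychology"): 0.3,
              ("economics", "psychology"): 0.2}

print(rao_stirling_diversity(narrow, similarity))   # ~0.05, low diversity
print(rao_stirling_diversity(broad, similarity))    # ~0.54, higher diversity
```

A journal whose references are concentrated in closely related categories scores low, while one spreading its references over distant categories scores high; this is the sense in which such indicators capture interdisciplinarity.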


The study confirms this effect by comparing business and management studies with innovation studies. Both fields are subject to the same evaluation regime in the Research Excellence Framework, and intellectually they are very close. However, they differ markedly in their interdisciplinary nature. Researchers in business schools have a more traditional publishing behaviour than their innovation studies colleagues, and the research units in innovation studies are consistently more interdisciplinary than the business and management schools.


Of course, publication behaviour is shaped by a variety of influences. Peer review may be biased against interdisciplinary work because its quality is more difficult to assess, and many top journals are not eager to publish interdisciplinary work. This study is the first to show convincingly that these existing biases are reinforced by the use of ranked journal lists as a research management tool. The study demonstrates this by comparing performance measured against the ranked journal list with performance measured by citation analysis. In the latter, innovation studies research is not penalized for its more interdisciplinary character, as it is in an assessment based on the journal list. The paper concludes with a discussion of the negative implications for funding and acquiring resources for research groups working at the boundaries of different fields.


The paper will be published in a forthcoming issue of Research Policy and was awarded best paper at the Atlanta Conference on Science and Innovation Policy in September 2011.


Reference: Ismael Rafols, Loet Leydesdorff, Alice O’Hare, Paul Nightingale, & Andy Stirling, “How journal rankings can suppress interdisciplinary research: A comparison between innovation studies and business & management,” paper presented at the Annual Meeting of the Society for the Social Studies of Science (4S), Cleveland, OH, Nov. 2011; available at http://arxiv.org/abs/1105.1227.

Anxiety about quality may hinder open access

Anxiety about the quality of open access journals hinders the further spread of open access publications. This conclusion was voiced many times during the recent Co-ordinating workshop on Open Access to Scientific Information, held in Brussels on 4 May this year. The workshop was attended by about 70 key players in open access and was organized by two EU directorates: Research, and Information Society & Media.

The critical role of quality control came to the fore in various ways.

Salvatore Mele (CERN), coordinator of the SOAP project, presented the results of the project's web survey of researchers' attitudes towards open access. The results reveal a remarkable gap between strong support for open access on the one hand and a lack of actual open access publishing on the other: 89% of researchers say they are in favour of open access publishing, yet only 8 to 10% of published articles are open access. According to the SOAP study, two factors are mainly responsible for this gap: the problem of financing open access publications and the perceived lack of quality of many open access journals. The Journal Impact Factor was also mentioned as a reason not to publish in existing open access journals.

The weight of these factors varies by field. For example, 60% of chemists mention financial reasons as a barrier to open access, whereas only 16% of astronomers see finance as problematic. In astronomy, worries about journal quality are mentioned most often (by more than half of the astronomers), whereas only about one-fifth of the chemists see this as a problem. This points to the need for open access policies tailored to specific scientific and scholarly fields; in the humanities, for example, open access books will be an important issue.

Journal quality was also central to a new initiative, Clearing the Gate, announced at the workshop by the delegation of SURF, the ICT organization of the Dutch universities. The initiative is aimed at funding organizations such as the Dutch research council NWO and calls upon them to develop a preference for open access publications for the research they fund: they should give priority to publications in high-quality open access journals as a condition for funding. SURF is convinced that once this priority is in place, we will witness strong growth in the number of high- to very-high-quality open access journals. The representative of NWO endorsed this initiative and made clear that his organization already supports new open access journals in the social sciences and humanities. This spring, NWO will publish a call aimed at the other disciplines. NWO also supports the OAPEN initiative for open access books in the humanities. An important motivation for the organization is financial: “we do not want to pay twice for the same research”.

For evaluators and scientometricians, this development poses an interesting challenge as well: how should open access activities in research be evaluated?

Note:

My Dutch-language report of the EU Open Access workshop was published in the journal Onderzoek Nederland, nr. 277, 7 May 2011, p. 8.

My presentation at the EU workshop is available here.

Limitations of citation analysis

An observation at the CWTS Graduate Course Measuring Science: in most lectures, the presenters emphasize not only how indicators can be constructed, measured, and used, but also under what circumstances they should not be applied. Thed van Leeuwen, for example, showed on the basis of the coverage data of the Web of Science that citation analysis should not be applied in many fields in the humanities and social sciences, and certainly not for evaluation purposes. If the references in scientific articles in the Web of Science are analyzed, there are strong field differences in the extent to which they cite articles that are themselves covered by the Web of Science. In biochemistry this internal coverage is very high (92%), whereas in the humanities it drops to below 17%. Since citation analysis is almost always based on Web of Science data, most of the relevant scholarly communication in the humanities is missed by citation analysis. Of course, this is well known and is the usual argument in the humanities and social sciences against the application of citation analysis. However, it has also meant that most scholars associate CWTS principally with the use of citation analysis. CWTS currently does not have a strong reputation as a source of critique of citation analysis, although it has systematically criticized the Impact Factor since at least 1995 and has also been very critical of the very popular and equally problematic h-index. An interesting mismatch between practice and reputation?
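To make the coverage argument concrete, the figures Thed van Leeuwen refers to can be read as a field's internal coverage of the database: take the reference lists of the field's articles and count which share of the cited items is itself indexed there. The sketch below only illustrates that calculation on invented toy data; the identifiers and numbers are not Web of Science figures.

```python
# Hedged sketch of a field's "internal coverage" of a citation index:
# the share of cited references that are themselves indexed in that
# database. Identifiers and numbers below are illustrative only.

def internal_coverage(reference_lists, indexed_ids):
    """Return the fraction of cited references found in the index.

    reference_lists: iterable of reference lists, one per source article,
                     each a list of cited-publication identifiers.
    indexed_ids:     set of identifiers covered by the citation index.
    """
    cited = [ref for refs in reference_lists for ref in refs]
    if not cited:
        return 0.0
    covered = sum(1 for ref in cited if ref in indexed_ids)
    return covered / len(cited)


# Toy example: a biochemistry-like field whose references are almost all
# indexed versus a humanities-like field citing many books and archival
# sources outside the index.
indexed = {"doi:A", "doi:B", "doi:C", "doi:D"}
biochem_refs = [["doi:A", "doi:B"], ["doi:C", "doi:D", "doi:A"]]
humanities_refs = [["doi:A", "book:X", "archive:Y"], ["book:Z", "doi:B"]]

print(internal_coverage(biochem_refs, indexed))      # 1.0  (fully covered)
print(internal_coverage(humanities_refs, indexed))   # 0.4  (mostly missed)
```

A field scoring around 0.92 on this measure is reasonably well served by citation analysis, while a field scoring below 0.17 is not; that is exactly the contrast between biochemistry and the humanities described above.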

“Idiocy of impact factors”

Ron de Kloet, professor of medical pharmacology in Leiden and famous for his research on stress, on the journal impact factor in the university weekly Mare (my translation): “In the past, we did not have this complete idiocy around impact numbers.” He thinks that those who have to judge scientists on their performance rely too easily on the journal impact factor. “In this way, the journal rather than the researcher is being assessed. And young researchers know that it is not their individual creativity that counts but the visibility of the journal. This can make people obsessed and take away the pleasure in science.” Wise words!