Rankings under Groninger fire

Rafael Wittek, director of the Interuniversity Center for Social Science Theory and Methodology (ICS), based at the University of Groningen, recently attacked Dutch university policies on the occasion of the 25th anniversary of his famous graduate school. One of his targets was “the hype around rankings”. Accredited in 1986, the ICS was the first national social science graduate school in the Netherlands. The school emerged from Dutch networks of PhD students funded by the Ministry of Education and Science. According to Wittek, the universities are now trying to score high in the global rankings (such as the Times Higher Education ranking, the Shanghai ranking and, of course, the Leiden ranking), an approach he argued is wrongheaded. “Rankings as an indicator of quality are a hype. To adopt them is merely a policy reflex.”

I think the sociologist puts his finger on a sore spot in Dutch science policy and management. This is particularly true of his critique of the policies around PhD training and the national Graduate Schools. According to Wittek, “The Hague” has been too eager to follow new European guidelines and has promoted competition, rather than cooperation, among universities. “In the last couple of years, many national Graduate Schools have been dismantled and new local Graduate Schools have been created in their stead. Dutch universities increasingly claim the results of ‘their’ researchers and give them fewer opportunities to collaborate with colleagues from other universities.” His remarks will strike a chord with everybody (myself included) who was trained in the national research schools in which all, or almost all, universities worked together. It is indeed a loss that the Dutch ministry discouraged national Graduate Schools and switched entirely to stimulating local ones, although happily a few nationwide schools are still alive and kicking (such as the Graduate School Science, Technology and Modern Culture).

Still, although his remarks are to the point, I do not think he is completely right. For example, it is simply not true that Dutch universities are locked in ruthless competition with each other. On the contrary, the new trend is the emergence of regional clusters of universities, a new form of close collaboration meant to enable them to compete globally with American and Asian universities. Increasing collaboration is, moreover, the trend in scientific publications, as demonstrated recently by a study by my colleagues at CWTS and by the recent Royal Society report on scientific networks. The share of multi-authored, multi-institutional, international publications is still rising, in all fields of research. And their average citation impact is greater than that of single-author or purely national publications. I don’t think we should overestimate the power of university boards to limit the scale of scientific collaboration.

Nonetheless, Wittek’s criticism of rankings should certainly be taken very seriously. The sociologist sees the “policy reflex” as a danger to the quality of research, in particular in areas of high-risk fundamental research. He thinks that researchers who are forced to score high in the rankings will be reluctant to take on big, important questions and will tend to develop a more limited and less risky research agenda. I agree. This is indeed the most important risk of rankings running wild, disconnected from the context of fundamental or applied research. But I think there may be a bit more at play than just policy reflexes. The universities are confronted with an accelerating process of global competition in which new scientific centres are emerging, among others in China, India, Brazil, Turkey and Iran. In those countries, researchers tend to have to meet much stricter performance criteria than is usual in the Netherlands, which makes it difficult, perhaps even impossible, for Dutch university boards to ignore the rankings. In the Netherlands the problem is particularly acute because the recent xenophobic hype around immigration already makes it difficult enough to attract talented young researchers from non-European countries. Does this mean that an obsession with rankings is inevitable? I think not. I can imagine a number of alternative, more imaginative strategies to counter this race for the highest position in the rankings.

I do think Wittek is right that recognition by peers is the strongest motivator for researchers. He even thinks that scientists do not need any other stimulus. That last idea may be a bit over the top, but he has a good point. Rankings can and should therefore be used in direct connection with this peer stimulus. Policies focused solely on climbing the global university rankings indeed do not make much sense. But this does not mean that it makes no sense at all to rank. Rankings can very well be used to get a better understanding of one’s strong and weak points (at the level of individual researchers, of groups and institutes, and of universities and countries). This can be done while taking into account the specific characteristics of the relevant disciplines (different disciplines may require different databases to measure the rankings). Ranking in context, that should be possible, shouldn’t it?
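
To make this idea of ranking in context a bit more concrete, here is a minimal sketch (in Python, with invented numbers) of what a field-normalised comparison could look like: each publication is compared with the average citation rate of its own field rather than with a single global yardstick. The field baselines and the scoring function are illustrative assumptions of mine, not the actual methodology of the Leiden ranking or any other ranking.

```python
# Toy illustration of "ranking in context": publications are scored against the
# average citation rate of their own field, so that units active in low-citation
# fields are not automatically penalised. All numbers below are invented.

from dataclasses import dataclass
from statistics import mean


@dataclass
class Publication:
    field: str       # discipline the publication is assigned to
    citations: int   # citations received within a fixed citation window


# Hypothetical field baselines: average citations per paper in each field.
FIELD_BASELINE = {
    "sociology": 4.0,
    "information science": 6.5,
    "physics": 12.0,
}


def normalised_impact(pub: Publication) -> float:
    """Citations relative to the average of the publication's own field."""
    return pub.citations / FIELD_BASELINE[pub.field]


def unit_score(pubs: list[Publication]) -> float:
    """Mean field-normalised score for a group, institute or university."""
    return mean(normalised_impact(p) for p in pubs)


if __name__ == "__main__":
    group = [
        Publication("sociology", 8),            # twice the field average
        Publication("information science", 3),  # below the field average
        Publication("physics", 12),             # exactly at the field average
    ]
    print(f"Field-normalised score: {unit_score(group):.2f}")  # prints ~1.15
```

The point of the sketch is only that the context enters through the field baselines: change those baselines, or the delineation of the fields, and the resulting ranking changes with them.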


2 thoughts on “Rankings under Groninger fire”

  1. Ranking in context, that should be possible, shouldn’t it?

    I doubt it; there are several problems. First, the units to be ranked are specific arrangements of disciplines and interdisciplines (such as universities or research schools). Such institutional arrangements have a set of disciplinary contexts. Thus, one would have to rank with reference to different and potentially orthogonal contexts.

    For example, my own department (the Amsterdam School of Communications Research) contains at least the following different disciplinary orientations:
    1. interpersonal communication (social psychology);
    2. mass communication (political science);
    3. information science;
    4. social network analysis.

    The reference sets would be very different for these four directions, and partially overlapping. Furthermore, one would need to attribute percentages of the output to these different streams, while there may be between-group synergies at the local level. Should my own work, for example, count 0.8 for information science, 0.1 for network analysis, and 0.1 for science and technology studies? How would one work this out into a ranking? Would not too many arbitrary decisions be needed?

    Furthermore, and more generally, these rankings (such as the Leiden ranking) make all kinds of assumptions which remain hidden. If one were to rank using publications per dollar, Harvard would end up at the bottom. A ranking of this kind would also explain why Dutch universities do not score so differently from one another: they are all funded on the same basis and they compete among themselves.

    Actually, in a study of universities, Willem Halffman and I found that the differences among universities are decreasing. The explanation is simple: institutional incentives are increasingly similar and there is competition.

    Thus, I tend to agree with Wittek’s critique.

    Loet Leydesdorff
    University of Amsterdam

  2. And whom or what might we trust to predetermine what would be ‘strong’ and ‘weak’ points, and to exercise measures for them, Paul? I suspect many would suggest statistics, such as citation analysis — the tail wagging the dog. It might also be wise to place collaboration and competition on separate continua: it is perfectly possible to collaborate and be in competition, with all sorts of (un)intended effects on quality and its guarantees — indeed, competition is arguably a standard attribute of any labour (including scientific labour), as you of course know very well.
