Evaluating e-research

We had a very interesting discussion last week at the e-Humanities Group of the Royal Netherlands Academy of Arts and Sciences. The problem I presented is how to evaluate e-research, the newly emerging style of scientific and scholarly research that makes heavy use of, and contributes to, web-based resources and analytical methods. The puzzle is that current evaluation practices are strongly biased towards one particular mode of scientific output: peer-reviewed journal articles, and within that set in particular those articles published in journals that are used as source material for the Web of Science, published by ISI/Thomson Reuters. If scholars in the sciences, social sciences, and humanities are expected to contribute to e-science and e-research, it is vital that the reward and accounting system in the universities honours work in this area. Here is the link to the presentation "Evaluating e-Research".

International networks start to drive research

Networks of collaborating scientists spanning the globe are increasingly shaping the research landscape. The share of papers co-authored by researchers from different countries is steadily growing: more than one third of all papers are now based on an international collaboration, up from one quarter fifteen years ago. On top of this, internationally co-authored papers have a higher citation impact. Each foreign partner on a paper increases its potential to be cited, up to a tipping point of approximately 10 countries. The dynamics of these international networks, together with sustained investments in scientific research by an increasing number of countries, produce a much more multipolar world. Not surprisingly, China is rising fast. Ranked by the number of scientific papers produced, China is now number 2, with a share of 10 % of international scientific production, and is expected to become number 1 within a few decades. Brazil and India are also emerging as powerful players on the international scene. But the rise of new scientific centres is not restricted to the BRICS countries. In the Middle East, both Turkey and Iran are investing strongly, with an enormous growth in authors and papers as a result. Iran published a bit more than 700 papers in 1993; by 2008 this had grown to more than 13 thousand. Turkey published four times as many papers in 2008 as in 1996, and its number of researchers has grown by 43 %. Still, the current heavyweights dominate the rankings based on citation numbers. With a decreasing share of total publications (down from 26 to 21 %), the United States still attracts the majority of citations: more than 30 % of all publications cite work originating in the United States. Chinese papers have significantly less impact: with 10 % of the share of papers, the Chinese collect only 3 % of the citations.


These are some of the highlights of the recent Royal Society (UK) report, Networks and Nations: Global scientific collaboration in the 21st century. The report is based on an analysis of all papers in the Scopus database (Elsevier) published between 2004 and 2008, compared with the production between 1993 and 2003. It combines these findings with five case studies of prominent international research initiatives in health research, physics, and climate research. I think this report is a goldmine of interesting facts and sometimes surprising developments, and a must-read for all science policy actors.


For European science policy makers, the report should moreover give pause for reflection. The fast rise of international networks is particularly relevant for Europe because of the rise of anti-immigration parties that currently have a big impact on policy in general, and thereby also on science policy. The share of internationally co-authored papers in the European countries is rising, which means that researchers in Europe need to be supported in creating more international collaborations. This simply cannot be combined with an anti-immigration policy focused on blocking the international exchange of scientific personnel. In Europe, in contrast to Asia, the general political climate therefore seems to be out of step with developments in the world of science and scholarship. A creative science policy requires an open attitude, eager for the international exchange of ideas and people, not least with colleagues in Turkey and Iran. And Turkey should become a member of the European Union as soon as possible.


The report also shows nicely that internationalization is not a simple process. Overall, the number of internationally co-authored papers is on the rise, and in the current scientific centres this goes together with an increase in the share of international papers in the total national scientific production. But in China and Brazil, the share of international papers is decreasing while the absolute number of internationally co-authored papers is rising. Turkey and Iran show comparable, though less pronounced, trends. The explanation is that in these countries the national research capacity is building up faster than the international collaborations are growing.

Does ranking drive reputation?

The recent Times Higher Education Reputation Ranking also raises a number of more fundamental questions about the production of reputation. If we compare the reputation ranking with the overall THE World Universities Ranking, it is striking that the reputation ranking is much more skewed. The top 6 universities eat almost the whole reputation pie. University number 50 (Osaka) has only 6 % of the "amount of reputation" that number 1 (Harvard) has, whereas number 50 in the overall THE ranking (Vanderbilt University) still has 69 % of the rating of number 1 (again Harvard). The reputation scores are based on a survey (whose validity is unclear), but how do the respondents determine the reputation of universities of which they have no direct knowledge (for example because they do not work there)?

A recent issue of the New Yorker has an interesting analysis by Malcolm Gladwell about ranking American colleges (The order of things. What college rankings really tell us, The New Yorker, February 14 & 21, 2011, pp. 68-75). His topic is another ranking, perhaps even more famous than the THE Ranking: the Best Colleges Guide published by U.S. News & World Report. This is also based on a survey in which university teachers are asked to rank the American colleges. When a university president is asked to assess the performance of a college, "he relies on the only source of detailed information at his disposal that assesses the relative merits of dozens of institutions he knows nothing about: U.S. News." According to Michael Bastedo, an educational sociologist at the University of Michigan, "rankings drive reputation". Gladwell concludes therefore that the U.S. News ratings are "a self-fulfilling prophecy".

The extremely skewed distribution of reputation is in itself an indication that this might also be true for the THE ranking. Performance ratings are usually skewed because of network and scaling effects. A big research institute can mobilize more resources to produce top-quality research, will therefore attract more external funding, and so on: this sustains a positive feedback loop. But if the resulting rankings also strongly influence the data that feed into the next ranking, the skewness of the ranking becomes even stronger.
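This feedback mechanism can be illustrated with a toy simulation. The sketch below is purely illustrative and makes an assumed modelling choice that is not in the report or the rankings themselves: in each survey round, the attention a university receives grows superlinearly with its last published score (exponent 1.5, an arbitrary value standing in for "respondents consult the previous ranking"). Under that assumption, an initially mild inequality concentrates rapidly at the top:

```python
# Toy model of the ranking -> reputation feedback loop.
# Assumption (illustrative, not from the source): survey attention grows
# superlinearly with the previously published score, so each round the
# published ranking feeds back into the next round's "votes".

def survey_round(scores, exponent=1.5):
    """One survey round: weight each university superlinearly by its
    current score, then rescale so the scores sum to 100 again."""
    weights = [s ** exponent for s in scores]
    total = sum(weights)
    return [100 * w / total for w in weights]

def top6_share(scores):
    """Fraction of the total 'reputation pie' held by the top 6."""
    return sum(sorted(scores, reverse=True)[:6]) / sum(scores)

# 50 universities with a mildly unequal starting distribution (100, 99, ... 51).
scores = [100 - i for i in range(50)]

before = top6_share(scores)
for _ in range(5):          # five survey rounds
    scores = survey_round(scores)
after = top6_share(scores)

print(f"top-6 share before: {before:.0%}, after: {after:.0%}")
```

With a linear exponent (1.0) the shares would stay fixed; any superlinear attention is enough to make the top-6 share grow round after round, which is the qualitative pattern the reputation ranking shows.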

This would mean that the THE Reputation Ranking not only shows that, in the perception of the respondents, a few American universities plus Oxford dominate the world; it also indicates that these respondents use the THE ranking, and comparable rankings, to fill in the forms that subsequently determine the next ranking.

 Thus, this type of ranking creates its own reality and truthfulness.

Dutch reputation anxiety

The recent Times Higher Education Top Universities by Reputation, published on 10 March 2011, has created some anxiety among Dutch universities. Some press releases suggested that this was a new ranking and that it showed a much lower position for the universities than they had in the World Universities Ranking published in September 2010. To what extent should these universities worry?

The recent reputation ranking is actually not a new ranking but the separate publication of part of the research underlying the September THE ranking. The reputation indicator that contributed to that ranking has now been published on its own, which of course results in a different listing.

Comparing the two rankings, the reputation of the Dutch universities seems to be lower than their performance would justify. Delft University of Technology is highest, at position 49. Among the top hundred, only Utrecht University, Leiden University, and the University of Amsterdam are present besides Delft. This contrasts clearly with the overall THE World Universities Ranking, which is based not only on reputation but also on a mix of performance indicators. In that list, no fewer than ten Dutch universities are present among the best 200 universities of the world, with scores between 50 and 55 (Harvard scores 100). So this contrast might mean that the (relatively small) Dutch universities could improve their reputation management, especially at the international level.

On the other hand, it is not clear how important this reputation ranking actually is. The results are based on an invitation-only survey. THE sent out "tens of thousands" of requests to participate and received 13 thousand usable responses. It is unclear to what extent this sample is representative of the international academic community. There does appear to be some relation between the ranking results and effort in reputation management. The list is dominated by a small group of American universities together with Oxford University, so we see the usual suspects. All have invested in focused reputation management, including the innovative use of new media. It would be interesting to analyze the determining factors for this reputation ranking. Perhaps THE can publish the underlying data?

La habitación de Fermat

Recently, we saw a somewhat crazy Spanish movie, "La habitación de Fermat" (Fermat's Room). In the story, a couple of mathematicians and an inventor are invited to a mystery play on the basis of their capacity to solve puzzles. In the end they are locked up in a room that becomes smaller each time a puzzle is not solved in time. As a result, they risk being crushed and need to mobilize all their considerable brainpower to survive. I will not reveal who did it, but the motivation is interesting. It has everything to do with the extreme competition in some fields of science. It all revolves around the solution to Goldbach's Conjecture. A young mathematician claims that he has discovered the proof, and one of the older guys, who has been working on this problem for over thirty years, feels very threatened. This is exacerbated by the arrogance of the upstart and the brazenness with which he gives interviews. The movie is full of dialogues that dwell on what it is like to live in research. In the closing part of the movie the group is boating back home. One of the mathematicians has gotten hold of the proof, not written by himself, and agonizes over whether or not he should publish it as his own. One of the others solves the problem by throwing the proof into the river. "What?", the guy shouts, "this is a world disaster!" His companion rows on, looks around, and points out that nothing seems to have changed. We see the proof drifting away, and the world is oblivious.
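Part of what makes Goldbach's Conjecture such good movie material is the gap between how easy it is to check and how hard it is to prove: every even integer greater than 2 appears to be the sum of two primes, and anyone can verify instances by brute force, yet the general statement remains unproven. A minimal sketch in Python (the function names are mine, just for illustration):

```python
# Brute-force check of instances of Goldbach's Conjecture: every even
# integer greater than 2 is the sum of two primes. This verifies small
# cases only; it says nothing about the conjecture in general.

def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(n):
    """Return a pair of primes (p, n - p) for even n > 2, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even number from 4 up to 2000 has at least one decomposition.
assert all(goldbach_pair(n) is not None for n in range(4, 2001, 2))
print(goldbach_pair(100))  # -> (3, 97)
```

Checks like this have been pushed by computer far beyond anything reachable with trial division, without ever finding a counterexample, which is exactly why a claimed proof carries so much weight in the plot.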