Does ranking drive reputation?

The recent Times Higher Reputation Ranking also raises a number of more fundamental questions about the production of reputation. If we compare the reputation ranking with the overall THE World Universities ranking, it is striking that the reputation ranking is much more skewed. The top 6 universities eat almost the whole reputation pie. University number 50 (Osaka) has only 6% of the "amount of reputation" that number 1 (Harvard) has, whereas number 50 in the overall THE ranking (Vanderbilt University) still has 69% of the rating of number 1 (again Harvard). The reputation scores are based on a survey (the validity of which is unclear), but how do the respondents determine the reputation of universities of which they have no direct knowledge (for example because they do not work there)?

A recent issue of the New Yorker has an interesting analysis by Malcolm Gladwell of the ranking of American colleges ("The order of things: What college rankings really tell us", The New Yorker, February 14 & 21, 2011, pp. 68-75). His topic is another ranking, perhaps even more famous than the THE Ranking: the Best Colleges Guide published by U.S. News & World Report. This, too, is based on a survey, in which university presidents and other academic leaders are asked to rank American colleges. When a university president is asked to assess the performance of a college, "he relies on the only source of detailed information at his disposal that assesses the relative merits of dozens of institutions he knows nothing about: U.S. News." According to Michael Bastedo, an educational sociologist at the University of Michigan, "rankings drive reputation". Gladwell therefore concludes that the U.S. News ratings are "a self-fulfilling prophecy".

The extremely skewed distribution of reputation is in itself an indication that this might also be true for the THE ranking. Performance ratings are usually skewed because of network and scaling effects. A big research institute can mobilize more resources to produce top-quality research, will therefore attract more external funding, and so on: this sustains a positive feedback loop. But if the resulting rankings also strongly influence the data that feed into the next ranking, the skewness of the ranking becomes even stronger.

This would mean that the THE Reputation Ranking not only shows that, in the perception of the respondents, a few American universities plus Oxford dominate the world; it also indicates that these respondents use the THE ranking, and comparable rankings, to fill in the forms that subsequently determine the next ranking.
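To make this feedback argument concrete, here is a toy simulation (a rough sketch with invented numbers; it is not the actual THE survey methodology): a fraction of respondents is assumed to have no direct knowledge of most universities and simply names a university from the top of the previous ranking, while the rest vote roughly in proportion to underlying performance. Even with a fairly flat performance distribution, the resulting reputation scores concentrate heavily on the top handful of universities.

```python
import random

# Toy simulation (all numbers invented; NOT the THE survey methodology) of the
# feedback loop described above: uninformed respondents simply name one of the
# top universities from the previous ranking; informed respondents vote roughly
# in proportion to underlying performance.
random.seed(1)
N, VOTES, ROUNDS = 50, 10_000, 4
UNINFORMED = 0.6     # fraction of respondents who just consult the last ranking
TOP_K = 6            # how far down that ranking they tend to look

performance = [100 * 0.992 ** i for i in range(N)]   # mildly skewed "true" quality
reputation = performance[:]                          # round 0: reputation tracks performance

def top_share(scores, k=TOP_K):
    """Share of the total score held by the k highest-scoring universities."""
    return sum(sorted(scores, reverse=True)[:k]) / sum(scores)

for r in range(ROUNDS):
    top_ids = sorted(range(N), key=lambda i: reputation[i], reverse=True)[:TOP_K]
    votes = [0] * N
    for _ in range(VOTES):
        if random.random() < UNINFORMED:
            votes[random.choice(top_ids)] += 1                        # name a "famous" university
        else:
            votes[random.choices(range(N), weights=performance)[0]] += 1
    reputation = votes
    print(f"round {r + 1}: top-{TOP_K} reputation share = {top_share(reputation):.2f}, "
          f"performance share = {top_share(performance):.2f}")
```

With these invented parameters, the top six end up with roughly two-thirds of all "reputation", even though they account for only about 14% of the underlying performance.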

 Thus, this type of ranking creates its own reality and truthfulness.

Dutch reputation anxiety

The recent Times Higher Education Top Universities by Reputation, published on 10 March 2011, has created some anxiety among Dutch universities. Some press releases suggested that this was a new ranking and that it showed a much lower position for the Dutch universities than they had in the World Universities Ranking published in September 2010. To what extent should these universities worry?

The recent reputation ranking is actually not a new ranking but a separate publication of part of the research underlying the September THE ranking. The reputation indicator that contributed to that ranking has now been published on its own, which of course results in a different list.

Comparing the two rankings, the reputation of the Dutch universities seems to be lower than their performance would justify. The Technical University Delft is the highest placed, at position 49. The only other Dutch universities in the top hundred are Utrecht University, Leiden University, and the University of Amsterdam. This contrasts clearly with the overall THE World Universities Ranking, which is based not only on reputation but also on a mix of performance indicators. In that list, no fewer than ten Dutch universities are present among the best 200 universities of the world, with scores between 50 and 55 (Harvard scores 100). This contrast might mean that the (relatively small) Dutch universities could improve their reputation management, especially at the international level.

On the other hand, it is not clear how important this reputation ranking actually is. The results are based on an invitation-only survey: THE sent out "tens of thousands" of requests to participate and received 13,000 usable responses. It is unclear to what extent this sample is representative of the international academic community. There does appear to be some relation between the ranking results and effort put into reputation management. The list is dominated by a small group of American universities together with Oxford University, so we see the usual suspects. All have invested in focused reputation management, including the innovative use of new media. It would be interesting to analyze the factors that determine this reputation ranking. Perhaps THE can publish the underlying data?

La habitación de Fermat

Recently, we saw a somewhat crazy Spanish movie, "La habitación de Fermat". In the story, a couple of mathematicians and an inventor are invited to a mystery play on the basis of their capacity to solve puzzles. In the end they are locked up in a room that becomes smaller each time a puzzle is not solved in time. As a result, they risk being crushed and need to mobilize all their considerable brainpower to survive. I will not reveal who did it, but the motivation is interesting. It has everything to do with the extreme competition in some fields of science. It all revolves around the solution to Goldbach's Conjecture. A young mathematician claims that he has discovered the proof, and one of the older guys, who has been working on this problem for over thirty years, feels very threatened. This is exacerbated by the arrogance of the upstart and the brazenness with which he gives interviews. The movie is full of dialogues that dwell on what it is like to live a life in research. In the closing part of the movie the group is boating back home. One of the mathematicians has gotten hold of the proof, not written by himself, and agonizes over whether or not he should publish it as his own. One of the others solves the problem by throwing the proof into the river. "What?", the guy shouts, "this is a world disaster!" His companion rows on, looks around, and points out that nothing seems to have changed. We see the proof drifting away, and the world is oblivious.

Ranking as an instrument in the competition

The different lists of university rankings have attracted increasing attention because of their potential as a weapon in the increasingly fierce global competition between universities. A university that is confronted with a lower position in the rankings has to provide a plausible explanation. And universities that are placed higher in the list naturally celebrate this.

Let us take a look at the Netherlands. A few weeks ago, the Leiden Ranking produced by CWTS was good news for the Erasmus University (EUR) in Rotterdam. They were placed as the 6th university in Europe. The university immediately published an advertisement in the national newspapers to congratulate its researchers on this leading position in the Netherlands. The advertisement had the facts right, but it emphasized the criterion that puts the EUR highest (number 6 in the list of the 100 largest European universities): the number of citations per publication. This indicator is favorable for universities with large medical faculties and hospitals, because these cover large research fields with, on average, many more references and citations than, for example, the technical sciences or philosophy.

And it matters which universities are used as the relevant group to rank. Using the same indicator of citations per paper puts the EUR at number 9 among the 250 largest European universities, because three smaller universities then enter the list above it, even ahead of Oxford and Cambridge. Still a very good score, and still number 1 in the Netherlands in this ranking. But how does it look when we use other indicators? CWTS now uses two different indicators to take field differences into account. How does the EUR score in these lists? The traditional CWTS "crown indicator" puts the EUR at number 8 among the 100 largest and number 14 among the 250 largest European universities. The improved CWTS indicator gives the EUR position 11 among the 100 largest and 15 among the 250 largest universities in Europe. In all these cases, the EUR is highest among the Dutch universities. If size is taken into account in combination with quality, however, the University of Utrecht has the highest score in the Netherlands (no. 8) and the EUR ends up at position 20, after Utrecht and the University of Amsterdam.
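To see schematically why citations per publication favours fields like biomedicine, and how field-normalized indicators change the picture, here is a small illustration with invented numbers (the real CWTS indicators use far more refined field definitions and citation windows). The two normalized variants below mirror, in simplified form, the old "crown indicator" (a ratio of sums) and the newer indicator (a mean of per-paper ratios).

```python
# Invented toy data: a handful of papers from fields with very different
# world-average citation rates. The point is only to show how raw citations
# per paper reward citation-dense fields, and how two simplified
# field-normalized variants behave (ratio of sums vs. mean of ratios).
papers = [
    ("medicine", 30), ("medicine", 12), ("medicine", 25),
    ("engineering", 6), ("engineering", 2),
    ("philosophy", 6),
]
# Assumed world-average citations per paper in each field (made-up values).
field_average = {"medicine": 20.0, "engineering": 5.0, "philosophy": 2.0}

citations = [c for _, c in papers]
expected = [field_average[f] for f, _ in papers]

cpp = sum(citations) / len(papers)                       # raw citations per paper
ratio_of_sums = sum(citations) / sum(expected)           # old crown-indicator style
mean_of_ratios = sum(c / e for c, e in zip(citations, expected)) / len(papers)  # newer style

print(f"citations per paper: {cpp:.2f}")                 # dominated by the medical papers
print(f"ratio of sums:       {ratio_of_sums:.2f}")
print(f"mean of ratios:      {mean_of_ratios:.2f}")      # gives each paper equal weight
```

In this made-up example, the raw citations per paper are driven almost entirely by the medical papers, while the mean-of-ratios variant lets the well-cited philosophy paper count just as heavily; which variant is chosen can therefore shift a university's position, as the EUR case above illustrates.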

So what is the lesson here? First, ranking is a pretty complicated affair because there are many ways to rank universities. Rankings simplify the comparison of many different dimensions, and universities are forced to build on this and reduce the complexity even further. This is facilitated by the fact that the different rankings produce different results. It enables universities to choose the most favorable ranking. It also enables them to debunk a ranking by pointing to other results in other rankings, or even to debunk ranking as such by showing contradictions among ranking results. However, this does not disempower these rankings. As Richard Griffiths (professor of social and economic history in Leiden) stated two weeks ago in the university weekly Mare: "Such a list can be a pile of junk, but it is best not to be in the bottom of the pile." Universities are therefore also discussing to what extent mergers can help to improve their ranking scores. For example, it might be profitable for a technical university to be coupled to a large academic hospital.

Not only individual universities are actively engaged in the debate about rankings; the same holds for associations of universities. The Dutch university association VSNU concluded from the Times Higher Education Supplement (THES) ranking that the Netherlands is the fifth-best academic country in the world. As science journalist Martijn van Calmthout wrote in De Volkskrant, this requires some creativity, because the Netherlands as a whole no longer belongs to the world top (which does not mean that there are no fields where Dutch researchers belong to the best performers in the world). No Dutch university belongs to the 100 best universities in this ranking (which uses a very different set of indicators from the Leiden Ranking; see the next blog post). In fact, the Dutch universities cluster pretty closely together, and their relative position depends on the indicator used. Leiden scores highest when external funding is the main criterion in the THES ranking, while the Shanghai ranking puts Utrecht highest (number 50 in the world list), followed by Leiden (at 70). How significant are the differences among the Dutch universities, actually?

The differences between the rankings create a drive to keep producing new indicators to capture aspects and dimensions of quality that are not measured satisfactorily by the existing ones. This cannot go on endlessly. It may be time to take the perverse effects of this one-dimensional ranking more seriously. One way is to further develop truly multi-dimensional indicators, another is to investigate the underlying properties of indicators more thoroughly, and a third is to take the limits of indicators more seriously, especially in science policy. Will it be possible to combine these three strategies?

Ranking universities

In the last two weeks, several new university rankings were published. Since universities are facing ever tougher competition, their placement in university rankings becomes increasingly important. So I'll spend a couple of blogs on rankings: how the lists are constructed, and what one needs to take into consideration when interpreting them. It struck me that the business of ranking has become more sophisticated over the years. Now that rankings are an instrument for universities in the competition for resources, researchers and students, the competition between the rankings themselves is also increasing. This can work to increase their quality; on the other hand, it might also promote an overly simple interpretation.

Ranking is a complicated business, because it means that a complex phenomenon such as quality, which is by definition composed of many independent dimensions, is reduced to a one-dimensional list. The attraction of rankings is exactly this reduction of reality to an ordered list in which one's position is unambiguous. This also means that ranking is an inherently problematic business. For example, a university may have high-quality teaching as its core mission; such a university may not score high in a ranking that does not really take teaching into account. In other words, if one wants to evaluate the performance of an institution, one should take its mission into account. Even then it would be a difficult task to squeeze the complex network of performances of institutions into a simple ordered list. And perhaps we should abstain from ordered lists as such, and develop a completely new form of presentation of performance data. The importance of university missions, and the fact that quality is a complex phenomenon with many different aspects, is central in a European research project led by CHEPS in which CWTS also participates. This project may produce a new way of monitoring university performance. But for now, we are stuck with one-dimensional rankings.

There are five different university rankings that are commonly used, and I will spend a blog on each of them in the course of this week: the Times Higher Education Supplement ranking, the QS ranking (a spin-off of the THES ranking), the Leiden ranking produced by CWTS, the Shanghai ranking, and the somewhat lesser-known Web of World Universities ranking. In the next blog, I'll discuss how rankings are being used by universities; then I will discuss each ranking in more detail, to conclude with some ideas about the future of rankings.
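As a small illustration of why this reduction to one dimension is tricky, the sketch below uses entirely fictitious universities and made-up scores: the same three institutions are ranked under two different weightings of teaching, research, and funding, and the order changes even though the underlying data do not. No real ranking uses exactly these dimensions or weights.

```python
# Fictitious universities and made-up scores: the same multi-dimensional data
# produces different orderings depending on how the dimensions are weighted.
scores = {
    "Univ A": {"teaching": 90, "research": 60, "funding": 55},
    "Univ B": {"teaching": 60, "research": 85, "funding": 80},
    "Univ C": {"teaching": 75, "research": 75, "funding": 70},
}

def rank(weights):
    """Order the universities by a weighted sum of their dimension scores."""
    total = {u: sum(weights[d] * v for d, v in dims.items())
             for u, dims in scores.items()}
    return sorted(total, key=total.get, reverse=True)

research_heavy = {"teaching": 0.2, "research": 0.6, "funding": 0.2}
teaching_heavy = {"teaching": 0.6, "research": 0.2, "funding": 0.2}

print("research-heavy weights:", rank(research_heavy))   # ['Univ B', 'Univ C', 'Univ A']
print("teaching-heavy weights:", rank(teaching_heavy))   # ['Univ A', 'Univ C', 'Univ B']
```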