Harvard no longer number 1 in ranking

Recently, the new Times Higher Education World University Rankings 2011-2012 were published. The ranking revealed that Harvard University is no longer number one on the list. That said, the difference between Harvard and Caltech, now at the top, is minimal. The main reason for Caltech’s rise is the extra revenue it drew from industry: Caltech’s income increased by 16%, outclassing most other universities. Harvard scored slightly better when it comes to the educational environment. Other universities also rose on the list as a result of successful campaigns to obtain (more) external financing. The London School of Economics, for example, moved from 86 to 47. The top of the ranking did not change that drastically, though. Rich US-based universities still dominate the list: seven of the ten highest-ranked universities, and one third of the top 200, are located in the US.

This illustrates the THE ranking’s sensitivity to slight differences between indicators that, taken together, determine the order of the ranking. The ranking is based on a mix of many different indicators. There is no standardized way to combine these indicators, so there is inevitably a certain arbitrariness to the process. In addition, the THE ranking is partly based on the results of a global survey, which invites researchers and professors to assess the reputation of universities. One unwanted effect of this method is that well-known universities are more likely to be assessed positively than less well-known ones. Highly visible cases of misconduct and scandals may also influence survey results.
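
To make the point about combining indicators concrete, below is a minimal sketch of how two different but equally defensible weighting schemes can reorder the same universities. The names, scores and weights are entirely invented for illustration and do not reflect the actual THE methodology or data.

    # Minimal sketch: how different indicator weightings can reorder a ranking.
    # All scores and weights below are invented; this is not the THE methodology.
    universities = {
        "University A": {"teaching": 95, "industry_income": 60, "reputation": 99},
        "University B": {"teaching": 90, "industry_income": 95, "reputation": 92},
    }

    def composite(scores, weights):
        # Weighted average of indicator scores; weights are assumed to sum to 1.
        return sum(scores[name] * weight for name, weight in weights.items())

    # Two equally defensible weighting schemes...
    scheme_1 = {"teaching": 0.6, "industry_income": 0.1, "reputation": 0.3}
    scheme_2 = {"teaching": 0.3, "industry_income": 0.4, "reputation": 0.3}

    for label, weights in [("scheme 1", scheme_1), ("scheme 2", scheme_2)]:
        order = sorted(universities, key=lambda u: composite(universities[u], weights), reverse=True)
        print(label, order)
    # ...yield different orderings of exactly the same underlying scores.

Under the first scheme the hypothetical University A comes out on top; under the second, University B does, even though the underlying scores have not changed.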

This year, the ranking’s sensitivity to the ways in which different indicators are combined is aptly illustrated by the position of the Dutch universities. The Netherlands ranks third, with 12 universities in the top 200 and four in the top 100 worldwide. Given the size of the country, this is a remarkable achievement. The result is partly due to the strong international orientation of the Dutch universities, and partly to earlier investments in research and education. But just as important is the weight given to the performance of the social sciences and humanities in a number of indicators. Compared to last year, the overall performance of the Dutch universities probably did not increase that much; a more plausible explanation is that their profile of activities and impact is now better covered by the THE ranking.

The latest THE ranking does make clear that size is not the most important determinant of a university’s position. Small, specialized universities can end up quite high on the list.

Perspectives on computer simulation and data visualization

When it comes to critical analysis of the role of computers,
data visualization, simulations and modeling in the sciences, there’s a lot to
be learned from humanities scholars. I’m currently teaching a course on the
role of computer-generated images in contemporary science and visual culture at
Utrecht University. Yesterday I learned that the New Media department is hosting two very interesting events. Today, Tuesday October 18, there is a workshop on software applications as active agents in shaping knowledge. The two keynote speakers are Dr Eckhart Arnold (University of Stuttgart), an expert in the field of simulation technologies, and Dr Bernhard Rieder (University of Amsterdam), who researches how computers and software organize knowledge.

A week later, on October 25, Setup will host an event on data visualization at the Wolff Cinema in Utrecht. Some of the most striking recent data visualization projects will be shown on screen, and the following questions will be addressed: what makes data visualizations so appealing? Do they convey the same message as the ‘raw’ data they originate from? Ann-Sophie Lehmann (associate professor of New Media and Art History, UU) will discuss the visualizations and shed light on some of the effects they have on viewers. One question that came to my mind is what this particular context (a movie theater) does to the (reception of the) visualizations, compared to web-based interaction on a laptop or PC, for instance.

Understanding Academic Careers

On November 16, 2011, the Rathenau Institute and the VU University Amsterdam are organizing a symposium on Dynamics of Academic Leadership. The symposium addresses the conditions necessary for high-level performance and creativity in research, and the implications for research management and policy. Paul is one of the invited speakers. He will discuss some of the programmatic aspects and preliminary results of a large European FP-7 project: Academic Careers Understood through Measurement and Norms (ACUMEN). ACUMEN aims to understand the ways in which researchers are evaluated by their peers and institutions, and to assess how the science system can be improved and enhanced. The project is a collaboration among several European research institutes, with Paul as the principal investigator and CWTS’s Clifford Tatum as project manager.

Science mapping: do we know what we visualize?

What often gets glossed over in these endeavors, for example, is that visualizations of scientific developments also prescribe how these developments should be known in the first place.
Science maps are produced by particular statistical algorithms that might have been chosen otherwise: calculations performed on large amounts of ‘raw’ data stored in databases. For this reason, they are not simply ‘statistical information presented visually’. The choice of a particular kind of visualization is often connected to the specificities and meaning of the underlying dataset and the software used to process the data.
Several software packages have been specifically
designed for this purpose (the VOSViewer supported by CWTS being one of them).
These packages prescribe how the data should be handled. Different choices in
selection and processing of the data will lead to sometimes strikingly
different maps. Therefore, we will increasingly need systematic experiments and
studies with different forms of visual presentation (Tufte, 2006).
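
As a rough sketch of how such choices play out, the toy example below builds a small, invented term co-occurrence network and processes it with two different layout algorithms and two different community-detection routines from the networkx library. The data and parameters are hypothetical, and the snippet only hints at the kinds of decisions that packages such as the VOSViewer make on far larger datasets.

    # Sketch: the same 'raw' co-occurrence data can produce rather different maps,
    # depending on which layout and clustering algorithms are chosen.
    # The network below is invented purely for illustration.
    import networkx as nx
    from networkx.algorithms import community

    edges = [
        ("citation", "impact", 8), ("citation", "journal", 5),
        ("impact", "journal", 4), ("visualization", "mapping", 7),
        ("mapping", "clustering", 6), ("clustering", "citation", 2),
        ("visualization", "clustering", 3),
    ]
    G = nx.Graph()
    G.add_weighted_edges_from(edges)

    # Two layout algorithms: the same nodes end up at different coordinates.
    layout_spring = nx.spring_layout(G, weight="weight", seed=42)
    layout_kk = nx.kamada_kawai_layout(G, weight="weight")

    # Two clustering routines: the same nodes may be grouped differently.
    clusters_modularity = community.greedy_modularity_communities(G, weight="weight")
    clusters_labelprop = list(community.label_propagation_communities(G))

    print("spring layout:", {n: [round(float(c), 2) for c in xy] for n, xy in layout_spring.items()})
    print("kamada-kawai layout:", {n: [round(float(c), 2) for c in xy] for n, xy in layout_kk.items()})
    print("modularity clusters:", [sorted(c) for c in clusters_modularity])
    print("label propagation clusters:", [sorted(c) for c in clusters_labelprop])

Even on this toy network the two layouts place the same nodes quite differently; on real bibliometric data, with thousands of nodes and many more parameter choices, such differences become far more consequential.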

At the same time, a number of interfaces are built into the mapping
process, where an encounter takes place with a user who approaches these
visualizations as evidence.

But how do these users actually behave?
To our knowledge, hardly any systematic research has been done on how users (bibliometricians, computer scientists, institute directors, policy makers and their staff, etc.) engage with these visualizations, and which skills and strategies are needed to work with them.
More critical scrutiny is needed of the degree of ‘visual literacy’ (Pauwels, 2008) demanded of users who want to critically work with and examine these visualizations. The visualizations embody technical or formal choices that determine what can be visualized and what will remain hidden. Furthermore, they are also shaped by the broader cultural and historical context in which they are produced.

There is a tendency to downplay the visuality of science maps in favor of the integrity of the underlying data and the sophistication of transformation algorithms.
However, visualizations are “becoming increasingly dependent upon technology,
while technology is increasingly becoming imaging and visualization technology”
(Pauwels 2008, 83). We expect that this interconnection between data selection,
data processing and data visualization will become much stronger in the near
future. These connections should therefore be systematically analyzed, while
the field develops and experiments with different forms of visual presentation.

As noted, science mapping projects do not simply measure and describe scientific developments; they also have a normative potential.
Suppose, in a hypothetical example, that the director of a research institute wants to
map the institute’s research landscape in terms of research topics and possible
applications, and wants to see how the landscape develops over the next five
years. This kind of mapping project, like any other description of reality, is
not only descriptive but also performative. In other words, the map that gets
created in response to this director’s question also shapes the reality it
attempts to represent. One possible consequence of this hypothetical mapping
project could be that the director decides on the basis of this visual analysis
to focus more on certain underdeveloped research strands, at the expense of or
in addition to others. The map that was meant to chart the terrain now becomes
embedded in management decision processes. As a result, it plays an active part
in a shift in the institute’s research agenda, an agenda that will be mapped in
five years’ time with the same analytical means that were originally merely
intended to describe the landscape.

A comparable example can actually be found in
Börner’s book: a map that shows all National Institutes of Health (NIH) grant
awards from a single funding year.
The project comes with a website, giving access to a database and a web-based interface. The clusters on the map correspond to broader scientific topics covered in the grants, while the dots correspond to individual grants, clustered together by a shared topical focus.

Here, too, it
would be informative to analyze the potential role these maps play as policy instruments
(for instance, in accountability studies). This type of analysis will be all
the more urgent as bibliometric maps are increasingly used for research
evaluation. The maps created on the basis of bibliometric data do not simply ‘visualize
what we know’. They actively shape bibliometric knowledge production, use and
dissemination in ways that require careful scrutiny.