A search for the most objective global university ranking system was one of the main themes of the conference, which was organized by the Russian Association of Higher Education Researchers, the HSE, and some other research institutions. Larisa Taradina, Deputy Head of the HSE Office for International Development, in her paper in the journal Otechestvennye Zapiski (No. 4, 2013), gave a detailed analysis of the most authoritative global university rankings: Britain’s QS and THE, China’s ARWU, and the pan-European project, U-Multirank.
Expert discussions related to these rankings revolve mainly around three issues: whether the rankings can be trusted as ‘calculators’ of a university’s success; who is behind the rankings (who determines ranking principles – independent experts or stakeholders); and whether each country’s many cultural and economic nuances are taken into account. For the moment, U-Multirank’s methodology shows a trend towards a new approach to these decisions, Larisa Taradina believes.
The advantages of this ‘youngest’ ranking system are that it is multidimensional, offers extensive search capabilities, is focused on the user (it serves as a university navigator), and takes into account the specifics of each country.
The most renowned international rankings, QS and THE, have a single origin: in 2004, a global university ranking developed by the British consulting company Quacquarelli Symonds (QS) together with the Times Higher Education (THE) weekly was first published. But at the end of 2009, the alliance fell apart due to disagreements over methodology. THE started to cooperate with Thomson Reuters, while QS held onto the old methodology and started collaborating with Elsevier. The two rankings differ considerably today.
THE selects only those universities that are active in research. QS also takes this criterion into account, but intensive publication activity is not a necessary requirement for participation. Both systems consider the citation of research publications to be indicative of a university’s expertise in various scientific fields and its role in disseminating new knowledge. But in THE, citation is a separate category, while in QS it is an indicator within the ‘Quality of Research’ category. The weight of this indicator also differs: in QS it is 20%, and in THE, 30%. In addition, data is collected from different databases: QS relies on Scopus, and THE on Web of Science.
QS’s ranking system is based on academic contacts accumulated over many years, to which suggestions from universities participating in the rankings are added annually. Rectors, vice rectors, deans, department heads, and others are invited as experts. All the experts included in the database are invited to participate in a survey, which becomes the basis of the ‘Academic Reputation’ indicator (40% of the total ranking weight).
THE’s expert database is compiled automatically from the list of authors who have published their work in journals included in the Web of Science. Annually, 150,000 randomly selected authors are invited to participate in the survey.
As a result of the survey, THE publishes a reputation ranking that is separate from the main ranking. The reputation ranking is the one most questioned by the academic community. ‘Things like Moscow State University’s changing position in the rankings are unexplainable: in 2011 it was 33rd in the world; in 2012, it was out of the rankings; and in 2013, it appeared in 50th place’, Larisa Taradina offers as an example.
The best-known Asian ranking system is the Academic Ranking of World Universities (ARWU). It was developed by Shanghai Jiao Tong University, one of China’s leading academic institutions, which is why the ranking is unofficially called the Shanghai Ranking.
The main difference between ARWU and other rankings is that its data is openly accessible and can easily be checked. This improves the trustworthiness of the results and spares universities additional work, i.e. having to provide statistical data annually.
The ranking is based on open sources showing universities’ achievements in research. This means that ARWU pays attention to universities with Nobel Prize and Fields Medal winners, highly cited researchers, and authors whose articles have been published in Nature and Science.
Larisa Taradina believes that a shortcoming of the ranking is that it is somewhat one-sided: ARWU mainly includes universities specializing in the natural and technical sciences, while the social and economic sciences are less well represented.
U-Multirank, the international university ranking, was commissioned by the European Commission in 2009 and developed by a consortium of European universities and research centres from nine countries. It is a new-generation ranking and the youngest in existence to date.
U-Multirank is more of a search engine and a global university compass than a list of universities ranked by achievements. So, a considerable advantage of U-Multirank is that it’s flexible, multidimensional, and user-friendly. Its tools allow all stakeholders to select the indicators important to them and, as a result, get a list of universities matching their requirements.
The ranking assesses universities in five areas: teaching and learning; research (nine indicators); knowledge transfer (eight indicators); internationalization (six indicators); and participation in the life of the region (five indicators).
Larisa Taradina emphasizes another important advantage of the new ranking system: U-Multirank takes into account the diversity of universities — the cultural, linguistic, economic, and historical aspects of their educational systems. The first results of the ranking will be published in early 2014.
The task of comparing universities and programmes is an objective need, Larisa Taradina says. But rankings are imperfect and that’s why they’re often criticized.
The main question concerning all rankings is how objective their assessments are, and whether their methodology allows the ‘actual potential of a university’s development’ to be measured. According to the researcher, some university indicators reflect only past achievements (such as the number of Nobel laureates). The weight of the indicators is also a controversial topic.
No ultimate solution has been found yet for the task of comparing the world’s universities, Larisa Taradina concludes. But the fact that the ranking systems are being scrutinized by experts shows that ‘in the near future, researchers will be looking for new ways to compare universities’, the author believes. It is probable that new players will appear in the field of compiling university rankings.