Why we shouldn’t bother about world university rankings

Various international university ranking systems have mushroomed since the first Academic Ranking of World Universities (Arwu, or Shanghai Ranking) started in 2003.

Malaysians are now also familiar with the QS World University Rankings and the Times Higher Education (THE) World University Rankings.

Last week, we celebrated the success of 10 Malaysian universities featured in the QS World University Rankings 2020, with Universiti Malaya (70), Universiti Putra Malaysia (159), Universiti Kebangsaan Malaysia (160) and Universiti Sains Malaysia (165) making giant leaps.

Generally, ranking scores have been used for universities’ marketing purposes, as well as evidence of academic improvement to help secure funding. They also serve as KPIs for academics, pushing them to publish their papers in high-quality journals. Rankings also assist students and parents in making decisions on higher education.

When it comes to choosing higher education institutions, most parents and secondary school students (especially those from Asian countries) are tempted to go with universities placed highly in world university rankings.

From my exchanges with professors, I sense a growing disregard for these ranking systems.

University administrators and the public in Asia and Latin America are obsessed with these ranking games. However, this is not the case in the US and UK.

In the US, a 2009 online survey of more than 15,000 students found that only 10% of participants would take rankings into account when considering college options.

About 47% of them would first consider academic programmes and faculties, followed by financial considerations (21%), and campus environment and student life (13%) (Ref: www.royall.com).

There is a lack of awareness about how the ranking systems work, and what information or indication we can extract from the ranking results.

One particularly important point the public should be aware of is the lack of reliable indicators or measurements for assessing how well a university is doing in teaching and learning.

None of the four prominent ranking bodies – QS, THE, Arwu and the Center for World University Rankings (CWUR) – directly or effectively reflects actual teaching quality at the undergraduate level.

The second-best option, adopted by QS and THE, is to use proxy indicators such as “Faculty/Student Ratio”, “International Student Ratio” and “International Faculty Ratio”.

This claim is supported by a 2017 finding from the UK’s Teaching Excellence Framework (TEF), a system that assesses teaching quality: TEF gold-rated universities were ranked low in the existing university rankings, pointing to a strong inverse correlation between a university’s research standing and its teaching quality.

The take-home message from the research is that “choosing a university for undergraduate study based on university rankings is unwise”!

Another reason many professors in the UK and US pay no attention to certain rankings is the “peer-to-peer reputation survey”, which lacks transparency and has never been validated.

Meanwhile, a systematic review of 13 ranking systems (including QS, THE, Arwu and CWUR) found that “reputation surveys, self-reported and unvalidated data, and non-replicable analyses” negatively affect the quality of rankings (Vernon et al, published in the journal PLoS ONE in March 2018).

Rankings that rely on “peer-to-peer reputation survey” indicators (such as “Academic Reputation” and “Employer Reputation”) have been found to show low replicability, and they do not reflect the quality of a university’s teaching and learning environment.

Furthermore, results and data of peer reputation surveys are not made available to the public.

Given these fishy methodologies, rankings should not be used to compare universities or to make decisions on higher education.

A better way to gauge the quality of a target institution is to consult students currently studying there, visit the campus, talk to the professors, and spend time there as an intern.

Rankings should also not be used by university leaders or governments to help design strategies and education policies.

Most ranking organisations are marketing companies, whose aims and visions are profit-oriented.

Once these companies succeed in persuading the public to use their rankings, university leaders come under pressure to follow the measurements those companies set, because rankings are key to generating revenue.

A higher ranking means more student applications, and hence better-quality students and staff.

For the government, ranking information is useful to a certain extent in making decisions on funding and budgeting for research activities. But it should not rely on ranking results alone.

Rankings do not show the full picture; they are merely overly generalised interpretations of a university’s performance, particularly in research.

Meanwhile, policymakers and governments in advanced non-English-speaking countries rarely bother with ranking measurements.

Germany and Finland have prestigious universities and a high standard of tertiary education, yet their universities do not top any of the ranking systems.

Meanwhile, in March 2019, Universitas 21 published a report on 50 countries that revealed more details about the ranking of national higher education systems.

Led by researchers from the University of Melbourne, the “U21 Ranking of National Higher Education Systems” assesses a country’s higher education system on measures such as government expenditure on tertiary education as a percentage of GDP, expenditure on research and development as a percentage of GDP, output such as the total number of articles produced, the impact of those articles as measured by citations, the number of researchers, and unemployment among graduates.

These indicators tell us more about the investment-output efficiency of Malaysian public universities.

In terms of resources, Malaysia was ranked 11th, 12th and 17th in 2016, 2017 and 2018 respectively.

On this measure, we were and still are ahead of developed countries such as Germany, Korea, New Zealand, Japan and Taiwan, as well as Thailand, China, Iran, Brazil, Croatia and Greece.

When it comes to output, however, Malaysia’s ranking has dropped continuously – to 39th, 42nd and 45th in the same years – ironically, much lower than all of the aforementioned countries.

The drop in Malaysian universities’ U21 ranking over the years contradicts the QS and THE rankings, which reported better performances by these universities.

For the betterment of society, a university ranking system should serve as a motivator for improvement. This would benefit society and the scientific community through research and academic activities, eventually strengthening the economies of the respective nations and regions through scientific outcomes and their impact on humanity.

Most importantly, cost efficiency and productivity play a crucial role in ensuring investment-output efficiency among public universities in Malaysia.

If the peer countries mentioned above “boleh” (can) spend less yet achieve far more, there is no reason why Malaysia “tak boleh” (cannot) do the same.

Dr Song BK is an FMT reader.

The views expressed are those of the author and do not necessarily reflect those of FMT.