In my opinion

Academic rankings: The university’s new clothes?


Every year, on August 15, many university presidents – particularly in Europe – get nervous. They know that the annual “Shanghai Ranking,” published since 2003, is released on that day. Has their institution moved up or down the list? Either way, the communications department will prepare a press release, either applauding a rise or justifying a fall by calling for more government investment to better “compete” in the so-called “global university market.”

However, given the numerous studies questioning the scientific validity of university rankings and documenting their perverse effects, it remains somewhat mysterious that so many intelligent and well-educated academics and administrators continue to use these invalid indicators to promote their institutions and to make important strategic decisions, such as hiring and promotion.

The whole ranking game reminds me of the famous tale penned by Danish author Hans Christian Andersen: The Emperor’s New Clothes. One may liken many academic leaders and managers who take these rankings seriously to the emperor, “who was so excessively fond of new clothes” that he had been persuaded by “two rogues, calling themselves weavers” that “they knew how to weave stuffs of the most beautiful colors and elaborate patterns.” The new clothes “have the wonderful property of remaining invisible to everyone who was unfit for the office he held, or who was extraordinarily simple in character.”


Skeptical – for he could not see the supposed new cloth – but afraid of appearing stupid, the emperor’s old minister, who was asked to monitor the work of the weavers, told them he would “tell the Emperor without delay, how very beautiful” he found the (in fact invisible) patterns and colors of his new suit. This is very similar to institutions that buy rankings they know to be highly problematic but feel pressured into doing so by convincing sellers. Seeing nothing either, but also afraid of losing his “good, profitable office,” another employee “praised the stuff he could not see, and declared that he was delighted with both colors and patterns.” He confirmed to the emperor that “the cloth [the ranking!], which the weavers are preparing, is extraordinarily magnificent.” The emperor thought he would show himself to be a fool, and not fit for his position, if he said aloud what he thought: that there was nothing to see. He thus preferred to say aloud, “Oh! the cloth [ranking] is charming. It has my complete approbation,” and he agreed to wear it for the next procession (er, marketing campaign…).

During his public wandering, “all the people standing by, and those at the windows, cried out, ‘Oh! How beautiful are our Emperor’s new clothes! What a magnificent train there is to the mantle; and how gracefully the scarf hangs!’ In short, no one would allow that he could not see these much-admired clothes; because, in doing so, he would have declared himself either a simpleton or unfit for his office.” But then, there emerged “the voice of innocence,” that of a little child in the crowd, who cried: “But he has nothing at all on!” The crowd finally repeated that obvious truth, and the emperor “was vexed, for he knew that the people were right; but he thought the procession must go on now! And the lords of the bedchamber took greater pains than ever, to appear holding up a train, although, in reality, there was no train to hold.”

Will university leaders behave like the emperor and continue to wear the “new clothes” provided for them each year by sellers of university rankings (whose scientific value most of them admit to be nonexistent)? Or will they listen to the voice of reason and have the courage to explain, to the few who still think the rankings mean something, that they are wrong, reminding them in passing that the first value in a university is truth and rigor, not cynicism and marketing?

We can also decide not to leave these decisions solely in the hands of academic leaders and work collectively to counter the forces that seek to impose illusory rankings on universities by continuing to make rigorous criticism of such practices each time they are used. Reasoned critiques are more likely to defeat the most perverse uses of benchmarking than resigned acceptance that they are inevitable and that it is useless to oppose them.

Bibliometric methods can be essential for going beyond local and anecdotal perceptions, mapping the state of research, and identifying trends at different levels (regional, national, and global). However, the proliferation of invalid indicators can harm serious evaluation by peers, which is essential to running any organization. We must go beyond the generalities of those who repeat ad nauseam that “rankings are here to stay” – without ever explaining why this must be so – and open these “black boxes” in order to question the nature and value of each and every indicator used to assess research at a given scale. Only such rational and technical analyses will ensure that decisions are based on solid evidence. Before attempting to rank a laboratory or a university among “the best in the world,” it is necessary to know precisely what “the best” means, by whom it is defined, and on what basis the measurement is made. Without this understanding of the nature and consequences of the measurement, the university captains who steer their vessels using bad compasses and ill-calibrated barometers risk sinking in the first storm.

Yves Gingras’ most recent book, Bibliometrics and Research Evaluation: Uses and Abuses, was published by MIT Press in September. This article is based on its conclusion.

Yves Gingras is Canada Research Chair in history and sociology of science and professor in the history department at Université du Québec à Montréal.


  1. Reuben Kaufman / November 23, 2016 at 15:26

Thank you so much for underlining what should be obvious to anyone in an academic environment. It still puzzles me completely how fellow academics, once catapulted into the heavenly sphere of senior administration, seem to lose all sense of rigorous analysis when it comes to university rankings. A former president at my university once explained to me, “we have to pay attention to the rankings (referring to Maclean’s in this instance), because the public at large do.” But I thought that a major role of a university is to promote critical thinking. What was particularly galling to me was this: whenever the ranking rose a point or two, the president would broadcast far and wide how recent policy initiatives had led to this improvement, and whenever the ranking fell a point or two, the announcement was how invalid the metrics are. Naturally, the rises and falls were almost always random statistical noise, and had nothing to do with anything objective. How in the world can we continue to tolerate this nonsense?

  2. Tim Buell / November 28, 2016 at 23:19

Do we need yet more discussion about the pulp fiction of university rankings? Their only apparent effect is to dupe university advancement staff into creating and buying full-page ads in the magazines that print this stuff.

    The result? Everyone still applies to Cambridge, MIT, Stanford or Harvard — just as they’ve always done — as if decreed in Ecclesiastes. Should you require more proof, just watch any Hollywood film or TV show about smart people.
