If you believe the 2014-15 edition of the Times Higher Education World University Rankings, we should be worried about the international standing of Canada’s world-class universities. What’s the rationale for such pessimism? According to the THE rankings, over the last four years (from 2011-12 to 2014-15), eight Canadian universities that usually do well in the rankings have drifted steadily downwards, dropping a combined 83 ranks over the period. McMaster University has lost the most, at 29 places. But should we really worry about this seemingly sad state of affairs? Actually, no.
A matter of research universities
The THE rankings judge research-intensive universities “across all their core missions.” They give the greatest weight to research volume, output, productivity and reputation; citations to research publications; graduate and doctoral training; and research-based relationships with industry and international clusters. Although not absent, teaching indicators receive less attention than the research environment.
The other well-known international ranking, Shanghai Jiao Tong University’s Academic Ranking of World Universities, or ARWU, focuses even more on research performance. Its main indicators include professors’ total publications in indexed journals; papers accepted in Nature and Science; highly cited papers; and top international awards, such as Nobel Prizes and Fields Medals, won by faculty and alumni.
World-class universities are, by and large, research universities. According to the Carnegie Foundation, research universities offer a broad array of undergraduate studies but place graduate studies at the top of their priorities. Over a given number of years, they must also grant a minimum number of PhDs and attract a certain level of merit-based research funds. The better they do on these criteria, the more firmly entrenched they are within the elite group of research universities.
Based on these criteria, at the turn of this century the Carnegie Foundation counted only about 300 research universities among the roughly 4,000 higher-education institutions in the U.S., or less than 10 percent. In his seminal 2004 book, Knowledge and Money: Research Universities and the Paradox of the Marketplace, R.L. Geiger illustrates the remarkable contribution of these institutions to society. One hundred of these public and private research universities accounted for 28 percent of all bachelor’s degrees awarded in the U.S., 34 percent of first professional degrees and 68 percent of PhDs; they also performed nearly 75 percent of all university-based research.
The Canadian universities that we find in the THE and ARWU rankings belong to this specific class of university. Most of them are members of Canada’s U15 group of research-intensive universities based on such criteria as their annual level of research investments, their prioritization of graduate studies and the number of degrees awarded, especially doctoral. They represent only about 15 percent of Canada’s universities or university-level colleges, and yet they attract three-quarters of all research funding and grant an equal percentage of PhDs, while also admitting 46 percent of all undergraduate students.
The diagnosis of Canada’s research universities
In our forthcoming book, Leading Research Universities in a Competitive World (McGill-Queen’s University Press, February 2015), we review the characteristics of the THE and ARWU rankings. Our analysis concludes that THE, in particular, suffers from a great deal of year-over-year instability, mainly due to the importance it places on the results of an annual survey of academic peers concerning institutional reputation. These subjective reputational factors greatly affect institutions’ scores in two of the ranking’s major indicators: teaching and research quality. Although THE improved its methodology in 2010, these reputational factors continue to produce some particularly surprising year-to-year variations.
From 2011-12 to 2014-15, THE lists the same eight Canadian universities among its top 200 world-class universities. As mentioned, these universities collectively dropped 83 spots during this period. However, analyzing the underlying indicator scores changes the diagnosis considerably. Between the first and last years studied, the drop in the combined total score for the eight universities is only 43.3 points out of 2,479.6 across the five indicators used. A potentially insignificant variation of 1.75 percent in the total score therefore produces an apparently impressive drop of 83 places for Canadian universities. What’s more, the two indicators that incorporate the results of the reputation surveys, a frequent cause of volatility in the rankings, account for 94 percent of the drop in the total score.
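As a quick arithmetic check, the relative change implied by the figures above works out to

$$\frac{43.3}{2479.6} \approx 0.0175 = 1.75\%,$$

a drop of less than two percent in total score, spread across eight institutions, that nonetheless translates into a combined slide of 83 ranking places.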
Meanwhile, in the ARWU rankings for the same period, Canada’s top eight research universities showed much less volatility. For the six of these eight Canadian universities that figure among the top 150, the hierarchical order of their respective ranks tends to be similar between the two international rankings. But the ARWU doesn’t use a reputational survey. As a result, its quantitative indicators don’t lead to a significant drop for Canada’s top eight universities. Moreover, the relative positions of Canada’s universities – which were such a problem for McMaster in the THE – remain very stable in the ARWU for the entire period.
More-relevant questions
The 2014-15 THE ranking asks a relevant question regarding the unequal distribution of world-class universities among countries. What happens when world university rankings are normalized according to certain characteristics of these countries? For example, if you divide the number of top universities in the THE for our four-year period by a country’s gross domestic product, Canada receives a rather middling score among about 30 countries.
In Leading Research Universities, we explore this crucial issue through a macro-economic model that looks at six factors: a country’s total population, GDP, per-capita GDP, research spending as a percentage of GDP, proportion of the labour force with a university degree, and economic density (GDP multiplied by GDP per capita). For the 2012 edition of both international rankings, seven OECD countries – the U.S., Germany, France, Japan, Australia, the United Kingdom and Canada – account for 59 percent of the top 400 world-class universities, 73 percent of the top 200, 80 percent of the top 100, and 90 percent of the top 50. We hoped this model would shed light on why some of these countries’ research universities fare better than their international counterparts.
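As a rough sketch of the kind of normalization involved (the notation here is illustrative, not taken from the book), the idea is to relate a country’s count of top-ranked universities to its economic weight rather than comparing raw counts:

$$\text{economic density} = \text{GDP} \times \text{GDP per capita}, \qquad \text{normalized count} = \frac{\text{number of universities in the top 400}}{\text{GDP}}.$$

On measures of this sort, a small, less populous economy such as Canada’s can look stronger or weaker than its raw count of ranked universities would suggest.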
For Canada, a small country by the standards of this group, on the basis of the model’s variables and with the leading American institutions serving as a reference point, we expected a maximum of eight to 10 Canadian institutions in the top 400. Actually, that year, Canada had no fewer than 16 and 18 universities in the top 400 in the ARWU and THE, respectively. This strong performance persists, though to a lesser extent, through the top 200 and top 100. In these two categories, Canada posted a higher proportion of research universities than we would have expected relative to U.S. universities in the same category. Canada also had its fair share of research universities in the top 50.
This macro-economic model is certainly not the final word on why some countries do better than others in the rankings. It nonetheless corroborates that countries with the highest economic density tend to place a greater number of universities in international rankings. However, this model did not fully explain why some countries surpassed expectations while others fell short.
The bulk of our book, therefore, looks at four national case studies – the U.S., U.K., France and Canada, selected for their universities’ contrasting performances within international rankings – to explore other, more complex questions: namely, how universities, and more precisely research universities, access and control the human, material and financial resources they need to perform their multiple missions.
What is at issue, therefore, is not just the amount of resources available to universities, but also the diversity of their sources of funding. We also examine how human, physical and financial resources are obtained, under conditions that are not entirely favourable to performance and that are subject to market influences and relations these institutions do not completely control. The same applies to government regulation, not to be confused with issues of public funding, which plays a key role in the degree of autonomy universities can build.
Again we see how, for a given university system, all of these structural factors combine to determine the performance of its universities. An appropriate, structuring balance among these factors allows research universities to behave like well-integrated organizational actors that will have an easier time joining the select group of world-calibre universities.
This comparative analysis effectively situates the organizational environment of Canadian universities. However, universities, particularly research universities, can and must do even better. In an increasingly globalized, knowledge-based world, the Canadian university system faces major challenges and must improve some of its key areas of performance.
Robert Lacroix, an economist, is rector emeritus at Université de Montréal; Louis Maheu, a sociologist, is a professor emeritus, also at Université de Montréal.
It’s funny, I’ve always thought that these rankings reach the height of stupidity (or maybe the depths). Why? What is the ultimate purpose of ranking a university? Might it be mostly to attract more student applicants? Certainly the academic leaders of Canadian universities that ranked among the top 10 in their category of the Maclean’s system seemed to shout that fact at the top of their metaphorical voices in brochures aimed at the general public. If they rose an insignificant rank or two in one year, they shouted even louder. If they fell a rank or two, they growled that the system was faulty.
If there is another, or a better, reason for these rankings, would somebody inform me in a reply?
But any sensible student applicant will look for a PROGRAM that is strong in their area of interest, or in the case of a graduate student, a PROFESSOR who has a strong reputation in their area of research interest. Those programs or those professors can be in institutions that are way down in, or don’t even appear in, the universal rankings, and yet could be far more valuable to someone’s ultimate career.
And then the ranking farce was extended to invent league tables for secondary schools. I could give an example where a school (in England) that had been at the top of its league table for some years tumbled down dozens (if not hundreds) of places because one class decided not to answer one section of an English exam that year (for reasons that need not detain us here). This parallels the example given above, where an insignificant 1.75% drop in total score caused an 83-place tumble in the rankings.
Would the authors of this article reply and present some evidence to show why I’m wrong to look so unkindly on this stupid behaviour (participating in these worthless exercises) by the supposedly intelligent leaders of supposedly reputable universities?
My website published a ranking of Canadian universities based purely on student and alumni reviews – because at the end of the day, that’s what matters, doesn’t it?
Last year’s report had 550 responses across 27 universities, and we’re continuing to grow this.