Rethinking the role of higher education in an AI-integrated world

Mark Daley, Chief Artificial Intelligence Officer at Western University, reflects on the role of universities in a world where intelligence is abundant.

February 06, 2026

A peculiar quiet has settled over higher education, the sort that arrives when everyone is speaking at once. We have, by now, produced a small library of earnest memos on “AI in the classroom”: academic integrity, assessment redesign and the general worry that students will use chatbots to avoid thinking. Our institutions have been doing the sensible things: guidance documents, pilot projects, professional development, conversations that oscillate between curiosity and fatigue. Much ink has been spilled on these topics, many human-hours of meetings invested, and strategic plans written. 

All of this is necessary. It is also, perhaps, insufficient.  

What if the core challenge facing us is not that students can outsource an essay, but that expertise itself (the scarce, expensive thing universities have historically concentrated, credentialled, and sold back to society) may become cheap, abundant, and uncomfortably good?

This is not, of course, a prophecy. The future is best thought of as a portfolio of possible futures: branching, contingent, full of feedback loops, and shaped by politics, economics, accidents and human stubbornness. What I want to do here is inhabit one plausible, but too often dismissed, future long enough to notice what breaks and what might be rebuilt if it should come to pass. 

Spring 2032 

Daniel Kokotajlo and colleagues, in their AI Futures Model, offer a concrete (if opinionated) forecast: that by roughly 2032 we may see AI systems that are better AI researchers than any human, across the entire research endeavour, including the elusive matter of “research taste”: the knack for asking the right question in the right way at the right moment. They present this not as a single date etched in granite, but as a probabilistic timeline with assumptions and uncertainty.  

Now, suppose, purely for the purpose of thinking clearly, that they are roughly right in the way that matters. Suppose it is the spring of 2032, and a new kind of affordance has arrived: an AI system that is not merely a fluent text generator, but an end-to-end research agent. It can read the literature faster than your research team; write and run the code; design experiments; debug instrumentation; analyse results; draft the paper; respond to reviewers; and do it again, iteratively, with a kind of tireless, methodical creativity that is hard to compete with when you require sleep, childcare, committee meetings and the occasional existential wobble.  

Is your institution ready for that spring awakening? 

This is where the conversation, so often stuck on classroom tactics, needs to widen into strategy. Universities are, amongst other things, a social technology for aggregating human intelligence. We gather smart people (of different ages, training, and temperament), give them time, infrastructure, and norms, and we ask them to produce two outputs society finds valuable: knowledge and people.

“Knowing things” stopped being impressive centuries ago. The printing press took a swing; Wikipedia finished the job. What remained scarce was not information but judgment: synthesis, discernment, the ability to decide what matters, what is true, what should be done next (and, crucially, why). For most of modern history, there was only one scalable source of that kind of judgment: the human brain. 

In the spring-2032 scenario, that scarcity begins to wobble. 

Not in the dramatic, Hollywood sense. In the banal, budget-line sense. 

If you are a university leader, you have to ask an unromantic question: what happens to your value proposition if high-level cognition can be rented, on demand, in the cloud? If “intelligence” begins to look less like an attribute of individuals and more like infrastructure? 

Cheap intelligence changes the economics of expertise. 

Evidence for the plausibility of this scenario 

At this point, a fair-minded reader may object: aren’t we extrapolating wildly? Isn’t there a long history of overconfident AI predictions, followed by embarrassment, followed by another funding cycle? Absolutely. The AI field has a robust tradition of premature triumphalism. 

And yet. 

There is a reason some of us feel the ground possibly shifting, and it is not merely because the demos are better. It is because the pattern over the last decade has been remarkably consistent: systems improve when we scale computation, data and training. This is not a metaphysical claim that “intelligence is just compute”; it is only a practical observation about what has worked thus far. 

Canadian Turing Award laureate Rich Sutton captured this pattern in what he called “The Bitter Lesson”: across the history of AI, methods that scale with compute tend to win, in the long run, over methods that rely on carefully encoding human insight. The lesson is “bitter” (to computer scientists like me, at least) because it insults our desire to believe that cleverness, our cleverness, will remain the decisive ingredient.  

Hans Moravec, decades earlier, made a related point in “The Role of Raw Power in Intelligence”: what seems to matter for intelligence, as observed in nature and machines, is not special or privileged structures or processes, but simply the brute accumulation of computational capacity and the gradual emergence of capability from scale.

Meanwhile, the empirical world is providing its own breadcrumbs. We have, in the last year or two, seen coding systems shift from “it’s just fancy autocomplete” to tools that can carry out multi-step tasks with increasing autonomy: writing tests, running them, revising code, debugging and coordinating a workflow. Anthropic’s Claude Code is one example. Andrej Karpathy has described 2025 as a threshold year for “agentic” coding; less a clever chatbot, more a junior collaborator who can actually move work forward autonomously. 

Software is not scholarship, but the coding story matters because it is a leading indicator of something broader: the shift from systems that talk about doing things to systems that do things, imperfectly but usefully, across time. 

Another breadcrumb comes from attempts to measure capability in a more structured way. METR, for instance, looks at the “time horizon” of AI agents: the length and complexity of tasks they can complete reliably. Their analyses suggest rapid improvement on this dimension, with a pace that, if it even roughly continues, would imply startling changes within a handful of years. You can argue about metrics and benchmark-gaming (and you should), but the direction is hard to ignore; METR’s most recent report from December 2025 is titled “AI capabilities progress has sped up”. 

Then there is mathematics, long treated as a sanctum of human reasoning. Fields medallist Terence Tao has been publicly reflecting on how AI tools are showing up in mathematical practice, and what it might mean as formalization and tool-assisted reasoning advance. Separately, 2025 saw multiple AI systems reaching the gold-medal threshold on International Mathematical Olympiad problems. An IMO score is not a research programme, but it is not nothing either; it is a sign that what we once dismissed as “mere pattern matching” is wandering into territory we used to reserve for human cognition. 

Gradually, then suddenly 

Institutions rarely fail because they lack intelligence. They fail because they are optimized for a different world. 

Universities, in particular, are exquisitely designed for slow-moving change. We consult. We deliberate. We create committees, subcommittees and, when truly desperate, task forces. We do this not because we are timid, but because legitimacy matters in a community built on argument, autonomy and shared governance. The time cost is, in normal circumstances, a feature, not a bug. 

But AI capability curves do not care about Senate cycles. 

Hemingway’s famous line about the speed of bankruptcy (“gradually, then suddenly”) is oft-quoted because it captures a common shape of change in complex systems: a long period in which it is easy to explain away uncomfortable signals, followed by a phase transition in which the new reality is obvious to everyone, everywhere, all at once, and far too late to address gracefully. 

My worry is not that universities will wake up one day and discover that a robot has replaced the President. My worry is that, if high-level cognition becomes cheap and reliable enough, universities will discover, suddenly, that some of what we offer the outside world is no longer scarce. 

Consider a mundane example. A manufacturing firm wants advice on a new materials process. Today, they consult a professor, perhaps with a graduate student in tow. In the spring-2032 scenario, they consult their in-house research agent first, because it is faster, cheaper and capable of exploring an enormous design space overnight. The professor becomes, at best, a “second opinion” (or a brand name), not the primary source of insight. 

Or take public policy. A Deputy Minister at Global Affairs Canada needs rapid synthesis: tradeoffs, historical analogues, stakeholder maps, risk assessment and treaty implications. Today, universities contribute to that cognitive capacity through faculty expertise. In a world of abundant machine cognition, the DM has a machine advisor that is quicker, calmer, knows the entirety of human history, has read every political science and international relations paper and text ever published, and is (let’s be honest) less emotionally attached to its own priors. 

This is not an insult to the professoriate. It is a statement about competition in free markets. When the outside world gains access to abundant high-level cognition, universities lose their monopoly on being “the smartest people in the room.” And when that monopoly erodes, some institutional authority quietly goes with it. 

Counterarguments worth taking seriously 

It would be irresponsible not to acknowledge the ways this scenario could be wrong. 

Maybe we hit a plateau: data bottlenecks, energy limits, algorithmic diminishing returns. Maybe regulation, liability and public trust constrain deployment. Maybe systems remain brittle in precisely the ways that matter for high-stakes research and policy (hallucinations are funny until they are expensive). Maybe “research taste” turns out to be more stubbornly human than some forecasters imagine, bound up with lived experience, values and embodied context. 

All plausible. 

But notice the asymmetry: the costs of being wrong are not evenly distributed. If this scenario is too aggressive, we will have spent some time thinking seriously about institutional adaptation. Hardly a mortal sin. If this scenario is too conservative, and change comes faster, we will have wasted the only resource universities cannot easily buy: time to adapt with legitimacy intact. 

So what should universities do? 

I am not going to offer a neat list of “five easy steps”. But I do think there is a framing shift university leaders should consider now, while we have daylight. 

The question is not, “How do we stop students from using AI?” That is, at best, a tactical question, and, at worst, a category error. 

The question is: What does a university become in a world where intelligence is abundant? 

To answer that, we need to distinguish between cognitive tasks and institutional functions. AI systems in this future can replicate cognitive outputs (analysis, synthesis, prose, code) with increasing fluency. What they cannot easily replicate are institutional functions that depend on social legitimacy, legal authority and embodied presence. 

Consider credentialing. A degree is not merely a signal that someone knows things; it is a coordination mechanism that allows employers, professional bodies and society to make decisions without having to assess each individual from scratch. An AI system can evaluate competence. It cannot, at present, grant a credential that other institutions will honour. The question is whether universities will defend and evolve that function, or watch it be unbundled by whoever moves faster.

Consider formation. There is a difference between knowing how to reason about ethics and becoming the kind of person who notices ethical stakes in the wild. The seminar, the clinical rotation, the late-night argument in the graduate lounge, the slow accumulation of supervised responsibility – these are technologies for shaping judgment, not merely transmitting information. Whether AI can replicate this is an open question. But it is at least a defensible bet that embodied, relational, time-extended formation is harder to commoditize than document summarization. If so, universities should lean into that work, not treat it as a residual. 

Consider convening. Universities remain one of the few institutions that can bring together government, industry, civil society and the public on terms that are not purely transactional. That convening power rests on perceived independence and long time horizons, both of which are under pressure from other directions, but still represent genuine assets. In a world awash with machine-generated analysis, the ability to host trusted deliberation may matter more, not less. 

And consider accountability. When a consulting firm gives advice, there is a contract. When a physician prescribes, there is a licence. When a professor publishes, there is a name attached and a career at stake. Accountability is a social technology for making expertise trustworthy, and it requires entities that can be held responsible, that persist over time, that have something to lose. 

AI systems can produce outputs that encode expertise. What they cannot yet do is be accountable in the way institutions and individuals can. Universities might recognize that their future may lie less in being the sole source of insight and more in being the guarantor of it: the institution that vouches, that certifies, that takes responsibility when cognition has consequences. This is, arguably, closer to what the medieval university was anyway: less a factory for knowledge production than a guild for intellectual trust. 

I will not pretend these answers are complete. “Formation of judgment” is easier to invoke than to operationalize, and the history of higher education is littered with pious claims about character development that amounted to very little. But the alternative, assuming that what we do now will remain valuable simply because it has been valuable in the past, is a poor strategy when the competitive landscape may be shifting beneath us. 

Universities have reinvented themselves many times to meet the needs of the societies they serve. The spring of 2032 is not far away. Some readers of this essay will still be in their current institutional leadership roles. The question I’ll leave you with is simple: should something resembling this scenario prove true, what will you wish you had started, now, that you haven’t yet? 
