Rethinking the role of higher education in an AI-integrated world
Mark Daley, Chief Artificial Intelligence Officer at Western University, reflects on the role of universities in a world where intelligence is abundant.
A peculiar quiet has settled over higher education, the sort that arrives when everyone is speaking at once. We have, by now, produced a small library of earnest memos on “AI in the classroom”: academic integrity, assessment redesign and the general worry that students will use chatbots to avoid thinking. Our institutions have been doing the sensible things: guidance documents, pilot projects, professional development, conversations that oscillate between curiosity and fatigue. Much ink has been spilled on these topics, many human-hours of meetings invested, and strategic plans written.
All of this is necessary. It is also, perhaps, insufficient.
What if the core challenge to us is not that students can outsource an essay, but that expertise itself (the scarce, expensive thing universities have historically concentrated, credentialled, and sold back to society) may become cheap, abundant, and uncomfortably good?
This is not, of course, a prophecy. The future is best thought of as a portfolio of possible futures: branching, contingent, full of feedback loops, and shaped by politics, economics, accidents and human stubbornness. What I want to do here is inhabit one plausible, but too often dismissed, future long enough to notice what breaks and what might be rebuilt if it should come to pass.
Spring 2032
Daniel Kokotajlo and colleagues, in their AI Futures Model, offer a concrete (if opinionated) forecast: that by roughly 2032 we may see AI systems that are better AI researchers than any human, across the entire research endeavour, including the elusive matter of “research taste”: the knack for asking the right question in the right way at the right moment. They present this not as a single date etched in granite, but as a probabilistic timeline with assumptions and uncertainty.
Now, suppose, purely for the purpose of thinking clearly, that they are roughly right in the way that matters. Suppose it is the spring of 2032, and a new kind of affordance has arrived: an AI system that is not merely a fluent text generator, but an end-to-end research agent. It can read the literature faster than your research team; write and run the code; design experiments; debug instrumentation; analyse results; draft the paper; respond to reviewers; and do it again, iteratively, with a kind of tireless, methodical creativity that is hard to compete with when you require sleep, childcare, committee meetings and the occasional existential wobble.
Is your institution ready for that spring awakening?
This is where the conversation, so often stuck on classroom tactics, needs to widen into strategy. Universities are, amongst other things, a social technology for aggregating human intelligence. We gather smart people (of different ages, training, and temperament), give them time, infrastructure, and norms, and ask them to produce two outputs society finds valuable: knowledge and people.
“Knowing things” stopped being impressive centuries ago. The printing press took a swing; Wikipedia finished the job. What remained scarce was not information but judgment: synthesis, discernment, the ability to decide what matters, what is true, what should be done next (and, crucially, why). For most of modern history, there was only one scalable source of that kind of judgment: the human brain.
In the spring-2032 scenario, that scarcity begins to wobble.
Not in the dramatic, Hollywood sense. In the banal, budget-line sense.
If you are a university leader, you have to ask an unromantic question: what happens to your value proposition if high-level cognition can be rented, on demand, in the cloud? If “intelligence” begins to look less like an attribute of individuals and more like infrastructure?
Cheap intelligence changes the economics of expertise.
Evidence for the plausibility of this scenario
At this point, a fair-minded reader may object: aren’t we extrapolating wildly? Isn’t there a long history of overconfident AI predictions, followed by embarrassment, followed by another funding cycle? Absolutely. The AI field has a robust tradition of premature triumphalism.
And yet.
There is a reason some of us feel the ground possibly shifting, and it is not merely because the demos are better. It is because the pattern over the last decade has been remarkably consistent: systems improve when we scale computation, data and training. This is not a metaphysical claim that “intelligence is just compute”; it is only a practical observation about what has worked thus far.
Canadian Turing Award laureate Rich Sutton captured this pattern in what he called “The Bitter Lesson”: across the history of AI, methods that scale with compute tend to win, in the long run, over methods that rely on carefully encoding human insight. The lesson is “bitter” (to computer scientists like me, at least) because it insults our desire to believe that cleverness, our cleverness, will remain the decisive ingredient.
Hans Moravec, decades earlier, made a related point in “The Role of Raw Power in Intelligence”: what seems to matter for intelligence, as observed in nature and machines, is not special or privileged structures or processes, but simply the brute accumulation of computational capacity and the gradual emergence of capability from scale.
Meanwhile, the empirical world is providing its own breadcrumbs. We have, in the last year or two, seen coding systems shift from “it’s just fancy autocomplete” to tools that can carry out multi-step tasks with increasing autonomy: writing tests, running them, revising code, debugging and coordinating a workflow. Anthropic’s Claude Code is one example. Andrej Karpathy has described 2025 as a threshold year for “agentic” coding; less a clever chatbot, more a junior collaborator who can actually move work forward autonomously.
Software is not scholarship, but the coding story matters because it is a leading indicator of something broader: the shift from systems that talk about doing things to systems that do things, imperfectly but usefully, across time.
Another breadcrumb comes from attempts to measure capability in a more structured way. METR, for instance, looks at the “time horizon” of AI agents: the length and complexity of tasks they can complete reliably. Their analyses suggest rapid improvement on this dimension, with a pace that, if it even roughly continues, would imply startling changes within a handful of years. You can argue about metrics and benchmark-gaming (and you should), but the direction is hard to ignore; METR’s most recent report from December 2025 is titled “AI capabilities progress has sped up”.
Then there is mathematics, long treated as a sanctum of human reasoning. Fields medallist Terence Tao has been publicly reflecting on how AI tools are showing up in mathematical practice, and what it might mean as formalization and tool-assisted reasoning advance. Separately, 2025 saw multiple AI systems reaching the gold-medal threshold on International Mathematical Olympiad problems. An IMO score is not a research programme, but it is not nothing either; it is a sign that what we once dismissed as “mere pattern matching” is wandering into territory we used to reserve for human cognition.
Gradually, then suddenly
Institutions rarely fail because they lack intelligence. They fail because they are optimized for a different world.
Universities, in particular, are exquisitely designed for slow-moving change. We consult. We deliberate. We create committees, subcommittees and, when truly desperate, task forces. We do this not because we are timid, but because legitimacy matters in a community built on argument, autonomy and shared governance. The time cost is, in normal circumstances, a feature, not a bug.
But AI capability curves do not care about Senate cycles.
Hemingway’s famous line about the speed of bankruptcy (“gradually, then suddenly”) is oft-quoted because it captures a common shape of change in complex systems: a long period in which it is easy to explain away uncomfortable signals, followed by a phase transition in which the new reality is obvious to everyone, everywhere, all at once, and far too late to address gracefully.
My worry is not that universities will wake up one day and discover that a robot has replaced the President. My worry is that, if high-level cognition becomes cheap and reliable enough, universities will discover, suddenly, that some of what we offer the outside world is no longer scarce.
Consider a mundane example. A manufacturing firm wants advice on a new materials process. Today, they consult a professor, perhaps with a graduate student in tow. In the spring-2032 scenario, they consult their in-house research agent first, because it is faster, cheaper and capable of exploring an enormous design space overnight. The professor becomes, at best, a “second opinion” (or a brand name), not the primary source of insight.
Or take public policy. A Deputy Minister at Global Affairs Canada needs rapid synthesis: tradeoffs, historical analogues, stakeholder maps, risk assessment and treaty implications. Today, universities contribute to that cognitive capacity through faculty expertise. In a world of abundant machine cognition, the DM has a machine advisor that is quicker, calmer, knows the entirety of human history, has read every political science and international relations paper and text ever published, and is (let’s be honest) less emotionally attached to its own priors.
This is not an insult to the professoriate. It is a statement about competition in free markets. When the outside world gains access to abundant high-level cognition, universities lose their monopoly on being “the smartest people in the room.” And when that monopoly erodes, some institutional authority quietly goes with it.
Counterarguments worth taking seriously
It would be irresponsible not to acknowledge the ways this scenario could be wrong.
Maybe we hit a plateau: data bottlenecks, energy limits, algorithmic diminishing returns. Maybe regulation, liability and public trust constrain deployment. Maybe systems remain brittle in precisely the ways that matter for high-stakes research and policy (hallucinations are funny until they are expensive). Maybe “research taste” turns out to be more stubbornly human than some forecasters imagine, bound up with lived experience, values and embodied context.
All plausible.
But notice the asymmetry: the costs of being wrong are not evenly distributed. If this scenario is too aggressive, we will have spent some time thinking seriously about institutional adaptation. Hardly a mortal sin. If this scenario is too conservative, and change comes faster, we will have wasted the only resource universities cannot easily buy: time to adapt with legitimacy intact.
So what should universities do?
I am not going to offer a neat list of “five easy steps”. But I do think there is a framing shift university leaders should consider now, while we have daylight.
The question is not, “How do we stop students from using AI?” That is, at best, a tactical question, and, at worst, a category error.
The question is: What does a university become in a world where intelligence is abundant?
To answer that, we need to distinguish between cognitive tasks and institutional functions. AI systems in this future can replicate cognitive outputs (analysis, synthesis, prose, code) with increasing fluency. What they cannot easily replicate are institutional functions that depend on social legitimacy, legal authority and embodied presence.
Consider credentialing. A degree is not merely a signal that someone knows things; it is a coordination mechanism that allows employers, professional bodies and society to make decisions without having to assess each individual from scratch. An AI system can evaluate competence. It cannot, at present, grant a credential that other institutions will honour. The question is whether universities will defend and evolve that function, or watch it be unbundled by whoever moves faster.
Consider formation. There is a difference between knowing how to reason about ethics and becoming the kind of person who notices ethical stakes in the wild. The seminar, the clinical rotation, the late-night argument in the graduate lounge, the slow accumulation of supervised responsibility – these are technologies for shaping judgment, not merely transmitting information. Whether AI can replicate this is an open question. But it is at least a defensible bet that embodied, relational, time-extended formation is harder to commoditize than document summarization. If so, universities should lean into that work, not treat it as a residual.
Consider convening. Universities remain one of the few institutions that can bring together government, industry, civil society and the public on terms that are not purely transactional. That convening power rests on perceived independence and long time horizons, both of which are under pressure from other directions, but still represent genuine assets. In a world awash with machine-generated analysis, the ability to host trusted deliberation may matter more, not less.
And consider accountability. When a consulting firm gives advice, there is a contract. When a physician prescribes, there is a licence. When a professor publishes, there is a name attached and a career at stake. Accountability is a social technology for making expertise trustworthy, and it requires entities that can be held responsible, that persist over time, that have something to lose.
AI systems can produce outputs that encode expertise. What they cannot yet do is be accountable in the way institutions and individuals can. Universities might recognize that their future may lie less in being the sole source of insight and more in being the guarantor of it: the institution that vouches, that certifies, that takes responsibility when cognition has consequences. This is, arguably, closer to what the medieval university was anyway: less a factory for knowledge production than a guild for intellectual trust.
I will not pretend these answers are complete. “Formation of judgment” is easier to invoke than to operationalize, and the history of higher education is littered with pious claims about character development that amounted to very little. But the alternative, assuming that what we do now will remain valuable simply because it has been valuable in the past, is a poor strategy when the competitive landscape may be shifting beneath us.
Universities have reinvented themselves many times to meet the needs of the societies they serve. The spring of 2032 is not far away. Some readers of this essay will still be in their current institutional leadership roles. The question I’ll leave you with is simple: should something resembling this scenario prove true, what will you wish you had started, now, that you haven’t yet?
5 Comments
The problem with human intelligence is the same before AI existed and after AI was created. To become better at critical thinking, human beings need to read, write, and analyze text – a lot of text. No matter how fast knowledge is gathered, a student still needs to read, write, and analyze an abundance of material. This is a process no one can cheat (even if some try with ChatGPT). It’s time AI enthusiasts state this up front instead of marketing AI as a magical replacement for universities or the solution to students’ workload. AI can find answers at warp speed, but humans digest knowledge, interpret it, and understand its significance at a slow rate. Jump ahead to 2032, and this fact does not change. Humans still need to do the heavy lifting themselves if they want to improve their thinking. Too many students use AI for cognitive offloading because that is how AI companies advertise their services (i.e., pay money for a quick essay). Hence, we should view AI marketing with skepticism.
I’m now using AI as a required tool in my MBA ethics course. I require structured prompt use for critical analysis, once students have read their materials. With it, my students’ level of critical insight has jumped at least tenfold. I’d put their thinking up against most doctoral students, and that’s just starting their graduate studies. It’s not AI versus humanity, but humanity augmented by AI. And that is something the university is poised to lead in creating. We have to remember that knowledge is only a piece of the framework, not the whole we are trying to generate through our teaching. In a world where high-quality knowledge is ubiquitous and cheap, universities are even more essential than in a world where knowledge is constrained and expensive. It never crossed my mind when I was studying that I would teach hermeneutics to business students, but now it’s required. I think that’s better for the world and deeply satisfying for me.
I agree with the first commentator about the “problem with human intelligence”. Regarding the rise in use of Artificial Intelligence (AI) and related queries about the purpose of universities, there is a very relevant science fiction story by Isaac Asimov (“Profession”), first published in 1957. The story describes a future where there is no real learning – information is simply automatically deposited into peoples’ brains. When they reach 18, each individual is assigned the career to which their brain’s absorption of information has rendered them most suited.
The main character in the narrative could not be assigned to any specific occupation. He objected and tried to be reassessed, only (apparently) to be ignored by the authorities. After some time, he discovered that he had in fact been under observation because of his unique abilities: he was extremely valuable to his society as one of the few remaining people with the capacity for independent learning and thinking – abilities necessary to create and innovate, thus keeping humanity advancing, intellectually and practically.
The reported rise of use of AI in both business and education, with employees/students simply monitoring input and output rather than doing things themselves, echoes Asimov’s fiction. The 1957 story perhaps should be seen as a warning to us now to value independent thought and to support university education.
There is a quiet confidence in the way Daley moves between present conditions and emerging horizons. The speculative elements are not predictions so much as invitations to think more boldly. The scenarios feel less like forecasts and more like mirrors, reflecting back to us the assumptions we did not realize we were carrying.
Perhaps most powerful is the way the essay restores dignity to the institutional imagination. Universities are portrayed not as reactive entities scrambling to keep pace, but as enduring social forms capable of renewal. The language gestures toward responsibility, legitimacy, formation, convening — themes that resonate long after the page is turned, even if they resist easy summarization.
This is not an essay that dictates. It does not reduce complexity to a checklist. Instead, it creates space — intellectual, institutional, temporal — for leaders to sit with possibility. It feels timely without being trend-driven. Urgent without being frantic. Expansive without being diffuse.
In a crowded field of commentary, Daley’s piece stands out not because it shouts, but because it resonates. It feels less like an argument and more like an opening. And sometimes, in moments of transition, an opening is exactly what is required.
An enormously insightful perspective providing the institutional rationale for the courage needed to embrace AI in a manner that ensures student learning.
Rather than allow Big Tech to redefine learning to match their products, educators are now more confidently asserting the stark difference between AI applications for productivity versus learning.
Whilst graduates will no doubt amplify their future workplace impact using AI, doing so relies upon the development of durable critical thinking and communication skills that must remain the ‘super power’ of PSE institutions.