Illustration by: Katrin Emery

In 2020, Liam McCoy, then a medical student at the University of Toronto (U of T), joined the school’s Artificial Intelligence in Medicine Student Society. It was a small club. “We were seen as a bit of a fringe group,” he says.  

Fast forward a few years and nearly every student in Canada is using generative AI. AI is embedded in health systems, and it’s one of the hottest topics in health research.

But the speed of this technological revolution is proving to be a problem for medical education.  

“The most challenging thing in this field is that the technology is moving incredibly quickly, at a pace that medical education is really struggling to keep up with,” says Dr. McCoy, now a neurology resident at the University of Alberta. He’s also a research affiliate at the Massachusetts Institute of Technology and research collaborator at Beth Israel Deaconess Medical Center in Boston, where he is studying the role of AI in medicine and medical education.  

Most curriculum innovations in education take three years to develop, he points out. “And three years ago, we hardly had [generative AI] at all.” 

Dr. McCoy believes that AI will lead to watershed changes in medical practice within a short timeframe. Research published in JAMA found that ChatGPT can already outperform physicians by 16 percentage points when it comes to making some diagnoses. That gap is likely to grow as the technology improves. By 2030, Dr. McCoy expects that it will be “considered unethical for clinicians not to use diagnostic support agents in their diagnostic process.”  

That means students in Canadian medical schools right now, who still have residencies and fellowships ahead of them, will enter independent medical practice after that shift to AI-assisted diagnosis has happened. They will be the first generation of physicians to start their careers in a fully AI-integrated health-care system.  

That’s a challenge for their education. Medical students worry that they are not getting enough exposure to AI to prepare them for that future. In a survey of 486 medical students across Canada, published in BMC Medical Education, 85 per cent said that they had no formal educational opportunities related to AI. Two out of three said AI should be formally taught as part of their training.


The survey took place in 2022 — light years ago in terms of AI uptake — and much has changed since then. Most medical schools in North America have since added some AI training to their curriculum. Even so, students remain concerned that they are not learning enough about AI and that there is too much variation from institution to institution. 

AI training is not well integrated into undergraduate medical education and that needs to change, says Samira Abbasgholizadeh-Rahimi, an engineer and Canada Research Chair in AI and Advanced Digital Primary Health Care at McGill University. She studies AI in medical education internationally, particularly in family medicine and primary health care, and developed a curriculum to help guide medical schools as they make decisions about AI.

“We really need to start this as early as possible. We shouldn’t wait for [students] to go to residency stage or become a clinician before we teach them the basics of AI,” she says. “This is very important because, whether they want to or not, they’re going to be exposed to these AI systems.”

In May 2025, the Ontario Medical Students Association (OMSA) called for all medical students in the province to be given a training workshop on AI and for a standardized curriculum to help promote equitable access to AI education.  

Urmi Sheth, a medical student at McMaster University who sits on the OMSA council, says many medical students aren’t learning about tools already in clinical use, such as AI scribes, and are surprised when they encounter them while working with physicians. “Despite the fact that these tools were increasingly being used in clinical spaces, we weren’t being educated on them,” she says. She acknowledges that it is difficult to predict precisely what the role of AI will be in medicine by the time she and her colleagues are in practice. “But I certainly think it’s important for people to be able to navigate an evolving landscape with these tools.”

Medical students aren’t waiting for the curriculum to catch up and are already adopting AI into their studies on their own. A 2024 survey of medical students in Ontario found that more than half used generative AI, mostly ChatGPT, at least once a week. Students were mostly using the technology to review or learn medical content, but nearly half were also trying it in a clinical context, turning to AI to help generate differential diagnoses and support clinical decision-making. Still, even though they were eager adopters of the technology, more than 90 per cent said they worried about accuracy, reliability and bias, while half said they were concerned that AI would impair their learning or critical thinking. One-third said they wanted training on how AI systems are — and should be — implemented in clinical settings.

Integrating AI into the medical curriculum 

University Affairs reached out to 18 Canadian medical schools and asked whether and how they have integrated AI into their undergraduate medical curricula. The responses reveal an educational system in flux, with wide variation across the country.

At U of T, first-year medical students are introduced to AI in medicine. As they move through the program, they attend seminars designed to help them understand the strengths and weaknesses of AI. At Western University, AI education has been incorporated throughout the MD curriculum, including mandatory AI-related teaching activities, with most training focused on AI literacy. Students at the University of British Columbia (UBC) are introduced in their first year to the principles of responsible and patient-centred AI use, including patient privacy, transparency and data security, as well as concerns about accuracy and bias. And, last fall, the University of Saskatchewan College of Medicine added four hours of AI instruction for its first-year students.

Other programs are figuring out their next steps. Representatives from the Université Laval say their medical school leaders are currently in talks to decide what’s ahead for AI within their program.

While educators in all fields are grappling with AI, perhaps nowhere is the challenge as profound as in medicine, with its already jam-packed curriculum and life-or-death stakes. “Everyone complains about how the curriculum is stuffed. There’s no time or no opportunity to fit anything else in. [AI] is important, and so is everything else,” says Muhammad Mamdani, professor of medicine and director of U of T’s Temerty Centre for Artificial Intelligence Education and Research in Medicine (T-CAIREM). He is also the Ontario Health clinical lead for AI.

“Many med schools are saying, ‘I don’t even know where to start. What do we teach?’” he adds.

Around the world, the Chinese University of Hong Kong and Nanyang Technological University in Singapore were among the first to mandate AI in their medical school curricula, says Dr. Mamdani. In North America and Europe, some schools are starting to move in that direction, and he expects that it will soon be mandatory at U of T. “We hope to see AI being fully integrated into the medical school curriculum, where it’s just part of what you learn, like statistics and mathematics.”

To help medical educators, the T-CAIREM team and the AI in Medicine International Education Working Group, a worldwide consortium of representatives from universities and health-care centres, created a framework for medical education in AI. Their report, which was released online in September 2025, is designed for medical schools that want to start teaching AI.  

Students need to know how machine learning models work so that they can evaluate things like the quality of the evidence and AI decision-making, says Dr. Mamdani. The risk of harm is high if health-care workers don’t understand the technology, he points out. There have already been cases where AI was deployed without proper evaluation and caused harm. A 2019 study found that many American hospitals were using a racially biased algorithm to decide which patients needed care. Black patients had to be deemed much sicker than white patients to be recommended for the same care because the algorithm had been trained on health-care spending data and didn’t account for the fact that Black patients had unequal access to care.

New set of skills needed for doctors 

At the heart of the educational challenge around AI in medicine is a fundamental question about the skill set of a good doctor. What does that look like? How does that change in the age of AI?  

Some qualities that define good doctors have changed before, even in the not-so-distant past. For centuries, some of the best doctors were the ones who could memorize the most information, says Amol Verma, the Temerty Professor of AI Research and Education at the University of Toronto. They could draw on a wealth of information stored in their heads and use it to make smart diagnoses and figure out treatments. But the advent of the internet diminished the value of memorization, leading to debate over whether physicians who didn’t memorize those facts would be less capable.  

 “I would argue that it’s better that it’s not my memory that determines whether a patient gets the right treatment. But, if we’re making that choice [to rely less on a physician’s memory], it has to be understood that we are making it deliberately,” says Dr. Verma. 

A similar debate is taking place now over AI. It is not necessarily a bad thing to have the assistance of a machine, says Dr. Verma, but it will require thoughtfulness and careful design to embed AI effectively and safely into medical training.

“We have to be purposeful about doing it so that we have trained ourselves to think critically, ask questions, still do our own independent reasoning, but then maybe interact with an AI model so that it can challenge us and help us see our mistakes,” he says.  

Doctors will need the skills to challenge advanced AI models. According to Dr. Verma, they need to be able to recognize when AI is inadequate or inappropriate for the task. “For me, that is the ideal: we use that technology to heighten our cognitive and critical thinking abilities and our reasoning rather than dampen them. But that is not a given. It will require purposeful design, and I don’t think it’s obvious how to arrive at that better future.” 


Research that explores the effect of AI on physicians’ training, knowledge and decision-making skills has drawn mixed and complicated conclusions. Perhaps that’s no surprise for a powerful and emerging technology: its effect is not straightforward, and neither wholly positive nor negative.

In one much-talked-about study published in The Lancet Gastroenterology & Hepatology in October 2025, Polish endoscopists who had AI assistance while performing colonoscopies over a three-month period performed less well after the AI was withdrawn than they had before it was introduced, an effect known as deskilling. In another recent study, published in Medical Science Educator this January and led by researchers from the Schulich School of Medicine and Dentistry at Western University, 40 first-year medical students were asked to respond to eight challenging clinical cases involving pediatric nephrologic or urologic concerns. Students answered multiple-choice questions at three intervals: after independently reviewing a case, after reviewing a ChatGPT analysis of the case, and one last time after a group discussion with their peers. Students placed greater trust in the AI-generated content than in their peers, and they did so even when the AI suggestions were misleading or incorrect, “potentially at the expense of their own critical thinking,” the authors wrote.

Experts who study AI in medical education agree that medical schools need to focus on teaching students the principles of AI, rather than the technical aspects. “The standard [for AI in medical education], if there were to be one, is about those fundamental skills around AI literacy. How do you critically analyze a new tool? How do you understand or recognize pros and cons, and how do you recognize the context in which you should use it?” says Rohit Singla, a resident in family medicine at UBC who, in 2019, launched a pilot project with two classmates to teach medical students about AI. Their ad hoc, student-led initiative grew into one of the first major projects in the country to teach AI to physicians-in-training.  

Dr. McCoy and his colleagues hope that their research can help make AI systems work better in medicine. AI is here to stay in health care, and doctors will play a key role in testing and improving it. But, he adds, doctors will always be important decision-makers in providing care.

He likes to tell medical students about a concept he learned from a mentor. Human beings have relied on healers since long before medicine was useful, when it was the mere idea of a healer that brought comfort. They still do, and probably always will. “What I keep trying to impress upon students is that you are not only a mechanic of the body. You are also somebody leading human beings through some of the most difficult, stressful, scary times that they can possibly face. You have this role as an agent, as a steward, confidant and supporter of people going through difficult things. So, when we think about the things that AI won’t replace, we need to focus on those aspects.”
