
‘We haven’t yet had an AI Hiroshima’

Regarded as one of the godfathers of artificial intelligence, Yoshua Bengio now urges caution when it comes to AI.
BY MAUD CUCCHI
APR 26 2024

Back in 2012, Yoshua Bengio proclaimed that a mouse’s brain was infinitely more intelligent than any algorithm. “But ask me again in 10 years,” he added with remarkable foresight. A decade later, the founder and scientific director of Mila – the Quebec Artificial Intelligence Institute – and professor at Université de Montréal is sounding the alarm about AI’s many potential risks: national security, deepfakes, disinformation, fraud, surveillance, societal destabilization, systemic discrimination, loss of control of AI systems and more.

In recent years, technological advancements have accelerated so quickly that he now believes a superhuman AI could be developed in the next two decades, “or even the next few years.” How did the world’s most widely cited computer scientist – one of the pioneers who pushed the boundaries of AI with more advanced learning algorithms – end up calling for a complete halt to research last year, becoming an AI whistleblower, and predicting a revolution not unlike the agricultural and industrial revolutions? Here’s a look at his rapid rise within the scientific community, complete with a host of puzzles with potentially life-and-death consequences.

“I’ve always been fascinated by the human mind – how the brain and intelligence work,” says Dr. Bengio. As a teenager, like any self-respecting computer geek, he dove headfirst into programming. When it came time to focus his university studies, he explored connectionism and drew inspiration from the brain’s structure to design systems at the cutting edge of AI. “What got me excited then and still fascinates me today is that intelligence can be explained a bit like how you explain physics, using a handful of scientific principles,” he says. In 1991, Dr. Bengio completed his doctorate in computer science at McGill University. After acquiring a range of experience abroad, initially in statistical learning and sequential data at the Massachusetts Institute of Technology, then in machine learning and computer vision at AT&T Bell Laboratories in Holmdel, New Jersey, he returned to Montreal as a faculty member at U de M, where his work found practical applications in voice and handwriting recognition.

While AI research is booming today, it was still in its infancy back in the 1990s. “The systems couldn’t even handle basic tasks like recognizing handwritten numbers or phonemes. Today, these things are trivial, but at the time, we didn’t have the algorithms or the needed processing power and data.” That’s why only a handful of researchers considered the possibility that AI could be used maliciously, he argues. “Only a few nuts like me were saying we needed to be careful, but for a long time, it never crossed people’s minds that computers could represent a risk.”

A life dedicated to academic research

How do you summarize a 35-page CV? Dr. Bengio’s groundbreaking work on deep learning earned him the 2018 A.M. Turing Award, often referred to as the “Nobel Prize of Computing,” alongside Geoffrey Hinton and Yann LeCun. Unlike his colleagues, who worked for Google and Meta respectively, Dr. Bengio spent most of his career in academic research. In addition to his duties at U de M, he is co-director of the Learning in Machines & Brains program at CIFAR. He also serves as the scientific director of IVADO, an interdisciplinary data valorization institute that brings together industry professionals and university researchers.

When his colleagues joined Google and Meta, he sensed the potential pitfalls of AI. “These companies have jumped on the deep learning bandwagon,” Dr. Bengio says. “They were quietly looking to use AI to enhance their online advertising systems.” Influencing consumers to choose one brand over another was one thing. But “it wasn’t hard to imagine AI influencing other, more impactful choices – not to mention political opinions – and that’s exactly what we’re seeing today,” he laments. Still, it took him 10 years to sound the alarm. Realizing that you spent years researching and promoting potentially harmful technology “is a tough pill to swallow,” notes Dr. Bengio, who also held the Canada Research Chair in statistical learning algorithms from 2000 to 2019. “Until we understand what we’re doing, we’re just playing with fire,” he says. “Before taking action, we need to understand the risks. To me this is obvious, but it’s not for many people.”

“Science without conscience”

In 2017, U de M took the first steps in developing the Montréal Declaration for a Responsible Development of Artificial Intelligence. One of the first collaborative initiatives for the responsible development of AI, the declaration is grounded in ethical principles and fundamental values such as justice, well-being, privacy, democracy and accountability. But global awareness of the risks of AI only came into focus in 2023, when he joined colleagues and other world technology leaders to call for an AI moratorium. “Obviously, since then, companies have kept doing what they were doing,” he says. “But there has been a radical shift in public opinion, and among decision-makers and even researchers. Many people were worried about the risks but didn’t speak up for fear of being judged.”

This global effort is reminiscent of the nuclear pioneers who, in their later years, rallied to safeguard the future of humanity, which their own work had put at risk. “We haven’t yet had an AI Hiroshima,” Dr. Bengio notes, but since the moratorium, he has been working to raise awareness in influential forums. Since 2023, he has been a member of the United Nations Scientific Advisory Board for Independent Advice on Breakthroughs in Science and Technology and has been commissioned by the U.K. to chair its “State of the Science” report on the capabilities and risks of frontier AI. In early February, he testified before a federal committee examining the legislative framework for artificial intelligence in Canada. “The advancements to come are likely to be very disruptive, and it’s impossible to predict when that will happen,” he says, emphasizing that there is an urgent need to regulate AI development and use now. “It just wouldn’t be sensible to leave these decisions in the hands of individuals. Some people’s interests are not aligned with the collective interest. That’s where things get serious.”

To contextualize the problem, Yoshua Bengio likens the advancement of AI to a tamed grizzly bear that has become so smart it can escape its cage to help itself to the fish that are used to reward it. “For AI, survival means either controlling humans or getting rid of them.” This is not a question of “existential consciousness.” It’s simple mathematical reasoning. “If this entity wants to maximize rewards, its best course of action is to take control of its environment. And that includes us.”

PUBLISHED BY
Maud Cucchi
Maud Cucchi has been working as a cultural journalist for over a decade and has worked for both Le Droit and Radio-Canada.