Former Facebook VP champions Canadian digital sovereignty  

CIFAR AI Chair Joelle Pineau believes ethics and the common good must guide technological development.

July 23, 2025
Canadian researcher Joelle Pineau, associate professor at the School of Computer Science at McGill University. Photography by Selena Phillips-Boyle

Joelle Pineau is an associate professor and William Dawson Scholar at McGill University’s School of Computer Science, where she co-leads the Reasoning and Learning Lab. She is also a core academic member of Mila – Quebec Artificial Intelligence Institute and a Canada CIFAR AI Chair. Until May 2025, she was vice-president of AI research at Meta (formerly Facebook), where she led the Fundamental AI Research (FAIR) team. 

Dr. Pineau has worked in AI for more than 25 years. After beginning her career in voice recognition in the 1990s, she specialized in robotics at Carnegie Mellon University in Pittsburgh and machine learning at McGill. She co-founded the Reasoning and Learning Lab and helped make Montreal a global centre for AI. 

In April 2025, she announced she would be leaving Meta and taking time to consider her next professional steps. 

Photo by Selena Phillips-Boyle

Q. Tell us about your experience at Meta. 

A. I worked for Facebook (Meta) for all those years because it was, at the time, the only company capable of conducting cutting-edge AI research with large-scale models while embracing an open-science approach. That meant we could publish our results, make code accessible, and freely share our models. 

I’m tremendously proud of my role in creating that culture. At Fundamental AI Research (FAIR), the organization I led, we shared more than 1,000 research artifacts — code, models and data sets — on platforms like GitHub and Hugging Face. These resources have been downloaded more than 2.7 billion times. 

One standout example is the LLaMA model, which has become one of the most used models in AI research. Teams worldwide have applied it in fields ranging from education to health to law. Providing open access to this type of model was really my passion project. 

After eight years, I’d taken my work as far as I could. As the US political climate shifted, I felt it was time for me to refocus on Canada — to contribute to our own digital sovereignty and our future in AI. 

Q. What is digital sovereignty? 

A. It’s mainly technological independence — not depending entirely on other countries, especially the United States, for our infrastructure and data. 

Currently, academic researchers in Canada have to either use federal supercomputers — and there are a limited number of them — or rely on cloud services hosted abroad, which is expensive and raises sovereignty concerns. 

We also lack a national data-sharing strategy, which would both protect personal privacy and facilitate research. Certain groups are starting to develop local solutions, but we need to increase capacity to avoid depending exclusively on giants abroad. 

International collaboration is important, but it shouldn’t come at the expense of our digital autonomy. We absolutely must develop a robust strategy to preserve that autonomy and not be at the mercy of our neighbours’ decisions or whims. 

Photo by Selena Phillips-Boyle

Q. AI has been around for a long time. Why are we hearing so much about it now? 

A. AI has actually been around since the 1950s; its origins are often traced to the 1956 Dartmouth conference. It has been operating in the background for a long time, embedded in the algorithms behind Netflix recommendations, Instagram feeds, Google searches, and online bank fraud detection. 

AI has long been a part of our daily lives, but we didn’t see it. What’s changed recently is the emergence of models like ChatGPT, which are more visible, more accessible and more “on the surface.” We can interact with them directly. 

This change is the result of successive technological advances. Technology often progresses slowly, then makes a great leap forward. Now we’ve reached a stage where even non-experts can interact with AI. 

Q. Can you explain reinforcement learning? 

A. I’ll use chatbots like ChatGPT as an example. These models are trained in two major phases. 

The first is pre-training, where the model learns to predict or imitate from huge databases, often without human supervision. 

The second is fine-tuning, or reinforcement learning from human feedback, where humans interact with the model, evaluate its responses and give them a thumbs up or thumbs down. This feedback refines the model’s behaviour so that it can better respond to human needs. 
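To make that second phase concrete, here is a minimal, purely illustrative Python sketch of feedback nudging a model toward preferred answers. The candidate replies, scores and feedback rule are invented for the example; production systems like the ones Dr. Pineau describes train a separate reward model and apply reinforcement-learning algorithms at a far larger scale.

```python
import random

# Toy stand-in for "reinforcement learning from human feedback":
# each candidate reply carries a score, and thumbs up/down feedback
# shifts the scores so the preferred reply is chosen more often.
candidates = {
    "curt reply": 0.0,
    "helpful reply": 0.0,
    "off-topic reply": 0.0,
}

def pick_reply():
    # Mostly exploit the highest-scoring reply, but explore sometimes.
    if random.random() < 0.2:
        return random.choice(list(candidates))
    return max(candidates, key=candidates.get)

def human_feedback(reply):
    # Hypothetical rater: thumbs up (+1) for the helpful reply, down (-1) otherwise.
    return 1.0 if reply == "helpful reply" else -1.0

learning_rate = 0.5
for _ in range(50):
    reply = pick_reply()
    candidates[reply] += learning_rate * human_feedback(reply)

print(candidates)  # "helpful reply" ends up with the highest score
```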

Q. How do you respond to people worried about or even scared of AI advancements? 

A. First, as with any major technological advancement, we need to think seriously about the repercussions of AI. We can’t just look at its advantages; we have to develop these tools responsibly, while looking at the big picture. Artificial intelligence can do amazing things: predictions, recommendations, applications in health, education and productivity. But these advantages also carry real risks. That’s why I always insist on balance. 

We can’t simply aim for performance. We have to develop these tools rigorously and responsibly. I become more skeptical when the conversation descends into science-fiction scenarios, like “AI is going to dominate humanity.” 

You hear this kind of comment a lot, but it’s not informed by thorough analysis. At present, there is no realistic mechanism to make us think that this is the most likely trajectory. 

Photo by Selena Phillips-Boyle

These scenarios presume ongoing exponential growth, which is very rare in the physical world. It’s important to look at the limits of the system as a whole, instead of just extrapolating from a single variable. For example, AI models require tremendous amounts of energy. They’re trained in enormous data centres, which themselves consume huge amounts of resources. It’s not clear that we’ll have the needed energy to support that sort of growth long-term. 

But that’s not to say there aren’t risks; there are, and we’re seeing them now. That’s why it’s important to focus on the current risk vectors, to measure and mitigate them. 

Here’s a concrete example. In our work, especially in open source, we developed methods to evaluate the risks of our models. For example: Could the model disclose confidential information? Could it generate misinformation? Could it be used to invent chemical or biological weapons? 

Take, for instance, a text-to-speech model we developed. It was capable of reproducing someone’s voice from a single sample. This was very technologically advanced, but the potential for misuse was unacceptably high. We therefore chose not to make the model public, as no mechanism like watermarking existed at the time to authenticate synthetic voices. The risk was too high and impossible to reliably mitigate. 

In short, it’s important to distinguish real risks from baseless fears, and especially to keep fear from dictating how we act or communicate. It’s counter-productive to scare people without providing them with the knowledge or means to respond. We can see the consequences of that today: fear about vaccines, for example, has resulted in the resurgence of preventable diseases like the measles. That’s why my process is rigorous and transparent.

I don’t downplay the power of this technology. It has enormous potential, but it also carries risks. It’s important to stay nuanced and help people understand what they’re really exposed to. 

Making sensational claims, declaring we’re in imminent peril of a technological apocalypse — that’s irresponsible. Unfortunately, the media is drawn to this kind of rhetoric. Journalists love disaster stories. But building real solutions takes time. Even one alarmist message relayed by an influencer or an artist can destroy the public’s trust. 

Q: As a woman, have you found this field difficult to navigate? 

A: The field is still male dominated today, and it’s been that way throughout my career. But I also had the good fortune to come up in places with a critical mass of women — not a majority but say around 15 to 20 per cent. That made a difference. So, it was never a one-to-50 ratio of women to men. In those places I built some wonderful friendships, which made me feel that I was not alone. 

That said, I acknowledge there’s a lot of survivorship bias in my story. I’m still in this field today because I was well supported. I had a lot of good experiences and, above all, people who believed in me. Some were mentors who not only advised me but also opened doors for me. Even though they were often men, they offered me critical opportunities and responsibilities throughout my career. 

That’s why I’m still in the field today. But I know not everyone has received this support. Many women have a much harder time in their careers. I was lucky to be surrounded by people who recognized my potential and told me so, who trusted me, sometimes before I was completely ready. 

Leading AI at the top: a foundational experience 

Dr. Pineau was a strategic leader on Meta’s executive team during her last 18 months at the company. It was an intense, formative and profoundly human experience. 

“Mark Zuckerberg, who led the team, reached out to me directly because he wanted to pivot the business toward AI, and he needed someone who thoroughly understood the technology and was able to explain it in lay terms,” she says. 

Tasked with devising a company-wide AI strategy, she also took on the role of mediator, bridging the gap between science, governance and communication. “My role was not only to build a company-wide AI strategy, but also to clearly explain opportunities and risks and to defend certain decisions, like keeping our research open at a time when other large corporations were closing their models.” 

It was a delicate balance requiring dialogue, education and intellectual rigour. “Substantial and nuanced conversations were necessary, and I felt people really listened with a lot of respect and curiosity. It was an incredible learning experience.” 

Q. How have you balanced your career with your family? 

A. My spouse is also an AI researcher, and we have four children, now teens and young adults aged 16 to 21. 

Having four children is a lot of work for any family, regardless of whether you have a demanding career. That said, academic careers offer the advantage of flexibility. There are set hours for teaching, of course, but beyond that, we manage our own time. For example, I’ve worked from home one to two days per week for many years now, especially in the summer. 

I’ve also travelled a lot in recent years, which was only possible because I was able to count on a reliable partner being at home. We were also fortunate that our parents were always there for us, especially when the kids were young.  

Not everyone has this kind of family support, and we recognize that. We also benefit from the social supports in Montreal, and Canada in general: good-quality public schools, childcare services, an entire ecosystem that makes life easier for families. When I compare with colleagues in places like California, it’s clear how valuable that social safety net really is. It’s easy to take for granted, but it makes a real difference. 

Q. How do you feel about combining an academic career with a position in the industry? 

A. You see it in certain fields, but in the sciences — especially in McGill’s Faculty of Science, where I work — it’s rarer. That said, there’s a will at McGill to find ways to make it work. I’m not the only person to do it. I’ve split my time between the university and a position in the industry for eight years now. 

It’s true that this sort of dual commitment isn’t common, but I keep everything transparent, declare any conflicts of interest, and scrupulously manage my time. When I started, it was almost unheard of, but I was fortunate to have an open-minded department head willing to give it a try. For a number of years I split my time 50-50, but in the last few it’s been more like 80 per cent of my time in the industry and 20 per cent at the university. I’ll see how it shakes out next year. 

This is an unusual model, but you see it more often in other disciplines. For example, doctors often split their time between academia and clinical practice. Business, law and engineering are other areas where you see it. It’s just less frequent in the fundamental sciences. 

I think that, when done right, dual roles can bring benefits to both environments. But it does require a certain flexibility. Being involved in the industry gives you a better understanding of the concrete issues at hand, and this feeds into your academic research. 

Working at a university offers more freedom to explore new ideas. We work with passionate students who bring fresh and innovative perspectives, which is very stimulating. 

My connection with the industry is a real source of motivation for me. When I chose McGill, I thought industry-based research would be siloed and rigid, but instead I’ve discovered an open and dynamic environment, which really appeals to me. 

Q. What’s in store for you over the coming months? 

A. I’ve gained a wealth of experience in leadership, research and artificial intelligence. I’m taking the next few months to reflect on what comes next and where I want to put my energy. I haven’t decided yet. It’s a real period of transition. 

I’m taking time to explore possibilities, to see where I can make myself useful, where I want to invest my energy. I’m still supervising a few students. Several others have graduated over the years, five or six recently. I have two or three left. 

That’s an important anchor for me. I’m going to continue to support them. The question right now is: Do I go back to running a lab full-time, or do I start something new? That’s what I’m considering. 

Mila: the beating heart of AI in Montreal 

Established in 2019, Mila – Quebec Artificial Intelligence Institute is today a hub for AI research in Canada. The brainchild of Yoshua Bengio, Mila brings together more than 40 professors and about 500 students from McGill and Université de Montréal, as well as Polytechnique Montréal and other institutions. 

“The idea is to unite Montreal’s diverse talents in AI to create a collaborative and stimulating environment,” explains Joelle Pineau. Mila is part of Canada’s national AI strategy, alongside the Vector Institute in Toronto and Amii in Edmonton. 

Instead of a centralized organization, Mila acts as a platform. “The researchers choose their own projects. Mila’s role is to provide a framework and resources for this research to flourish,” Dr. Pineau says. 

Mila’s researchers cover a wide range of fields, from language processing, computer vision and machine learning to responsible AI. “Some colleagues are working on algorithmic bias; others on the environmental impacts of AI; others on systems security; others still on applications in health, neuroscience, law.” 

For her part, Dr. Pineau specializes in reinforcement learning, a method inspired by conditioning mechanisms pioneered by the nineteenth-century Russian physiologist Ivan Pavlov. “Through reinforcement learning, machines learn from rewards: They test actions, measure the effects, and adjust their own behaviour to optimize results.” 
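As a rough illustration of that test-measure-adjust loop, here is a small Python sketch of a two-action "bandit" problem. The actions and their reward probabilities are made up for the example, and this is not Dr. Pineau's research code; it only shows an agent trying actions, observing rewards, and shifting toward the action that pays off.

```python
import random

# Hypothetical environment: each action pays a reward of 1 with some probability.
reward_probability = {"action_a": 0.3, "action_b": 0.8}
value_estimate = {"action_a": 0.0, "action_b": 0.0}
counts = {"action_a": 0, "action_b": 0}

for step in range(1000):
    # Test actions: explore occasionally, otherwise exploit the best estimate.
    if random.random() < 0.1:
        action = random.choice(list(value_estimate))
    else:
        action = max(value_estimate, key=value_estimate.get)

    # Measure the effect: the environment returns a reward of 1 or 0.
    reward = 1.0 if random.random() < reward_probability[action] else 0.0

    # Adjust behaviour: move the running estimate toward what was observed.
    counts[action] += 1
    value_estimate[action] += (reward - value_estimate[action]) / counts[action]

print(value_estimate)  # estimates drift toward 0.3 and 0.8; action_b is preferred
```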

Photos by Selena Phillips-Boyle, a visual artist residing in Tiotià:ke (Montreal) whose practice is situated at the intersection of labour, gender, and sexuality.