The grey area of artificial intelligence
As AI transforms university practices, legal and ethical challenges are multiplying.
Artificial intelligence (AI) is taking universities by storm. The technology is being integrated into administrative processes, teaching and research, but universities have yet to fully reckon with its legal risks.
In February 2024, Patricia Kosseim, the Information and Privacy Commissioner of Ontario (IPCO), found that McMaster University’s use of AI violated students’ privacy after the university used a software called Respondus to record video and sound of students taking exams remotely. The software examined the recordings using AI to identify signs of cheating.
A student filed a complaint with the IPCO in 2021, claiming the university had inappropriately collected their personal information and that it was not made clear to students how the information was being collected, used, disclosed and destroyed. The commissioner agreed, ruling that McMaster had failed in its duty to inform students.
Ms. Kosseim also found that the contract between McMaster and Respondus did not adequately protect students’ information, and that Respondus had not obtained students’ consent to use the audio and video recordings it collected to train its system.
“AI poses numerous legal risks to universities because the technology is applied so widely, from administration to the delivery of services and information, to teaching and research,” notes Teresa Scassa, full professor of law at the University of Ottawa and a researcher with the Centre for Law, Technology and Society.
These risks can arise in unexpected places. In June 2025, the IPCO ruled that vending machines installed on the University of Waterloo campus had violated students’ privacy. The ruling came after an incident in 2024, when a student noticed a screen on a vending machine displaying an error message about a problem with its facial recognition feature. Students quickly rallied to voice their concern.
The machines, which have since been taken out of service, used facial recognition to identify students’ age and gender without their knowledge. “The university itself didn’t know about these cameras,” notes Dr. Scassa, “but it is still liable for what happened.”
Protecting personal information
Vincent Gautrais, a full professor of law at Université de Montréal and L. R. Wilson Chair in Information Technology and E-Commerce Law, believes protecting privacy and personal information is one of the biggest challenges associated with AI use.
On December 7, 2023, the Government of Canada published a guide on the responsible development and use of generative AI. Per the guide, organizations using AI must comply with privacy laws and regulations. They must also prove that they have the right to collect and use personal information, which may involve obtaining valid and informed consent from the people concerned.
These rules are explicitly laid out in Quebec’s Law 25, An Act to Modernize Legislative Provisions as Regards the Protection of Personal Information, which was adopted in 2021. It requires, among other things, that privacy risks be assessed before personal information is shared outside Quebec, and that organizations obtain clear, free, and informed consent for each separate use of the personal information collected. Organizations must also conduct a privacy impact assessment (PIA) before communicating personal information for the purpose of study, research, or the production of statistics without the consent of the individuals concerned. The PIA must then be validated by Quebec’s privacy regulator, the Commission d’accès à l’information (CAI).
“Integrating these practices into research that analyzes participant data, for example, is a complicated matter,” says Dr. Gautrais. “But failing to do so can have undesirable consequences.”
As an example, he cites a 2022 CAI decision about a situation that occurred in Quebec’s Estrie region. The Val-des-Cerfs school services centre (SSC) had developed an algorithm in collaboration with data analysts from an accounting firm to predict which students were at risk of dropping out. The database was populated with de-identified information on grade 6 students, including grades, absenteeism rates, financial aid and disciplinary measures.
The CAI classified this data as personal information within the meaning of the law and ruled that, though de-identified, it might still be used to identify students. The CAI also ruled that the algorithm’s predictive indicators could influence decisions concerning students, and that the algorithm’s results—also known as inferred data—constituted new personal information. The SSC was therefore obligated to inform parents how and why that information was collected.
“Law 25 is very stringent,” notes Dr. Gautrais. “Even more so than European laws, which are widely considered to set the highest bar in terms of privacy protection.”
The potential for bias
The possibility that AI platforms will introduce biases into analyses or decisions represents another legal risk, since it may lead to accusations of discrimination.
“The risk of bias relates to the data used to train AI models, which can exclude certain types of research and is therefore not representative of the complete scope of knowledge in any given domain,” explains Didier Paquelin, full professor in the department of teaching and learning studies at Université Laval and a member of the Observatoire international sur les impacts sociétaux de l’IA et du numérique (International observatory on the effects of digital technology and AI on society—OBVIA).
This can become especially problematic when AI is used to predict outcomes or make choices. In a university context, AI outputs could inject bias into student admissions, hiring decisions, and the academic advice given to students based on their anticipated probability of success.
“Once predictive AI enters the equation, the risk of bias skyrockets,” notes Sébastien Gambs, a computer science professor at Université du Québec à Montréal and Canada Research Chair (Tier 2) in Privacy-preserving and Ethical Analysis of Big Data. For example, some companies use AI to determine salaries. Because the data used to train AI can reflect historical inequalities, such as lower pay for women, AI is more likely to reproduce those inequalities.
AI monitoring tools are also at higher risk of bias. Dr. Scassa explains that a sudden noise or a student looking away from their computer could be interpreted as cheating by AI-powered exam proctoring software. “AI fails to account for the fact that some students may live in a small space with multiple people, or have young children at home, which could cause distractions. Some systems also have trouble determining whether people with dark skin are looking at the screen or somewhere else.”
Dr. Gambs is creating AI models that reduce the risk of bias. He notes that there is very little transparency into how commercial, ready-to-use AI models work, which impedes users’ ability to adequately assess the risk of bias.
“Universities should either support or develop transparent AI tools whose processes can be understood and whose results can be interpreted,” he says. “And everyone who uses or is affected by them should have a say in their use.”
AI is not above the law
The government of Quebec published an AI use guide for post-secondary institutions in August 2025. It requires that AI use comply with Law 25, particularly regarding how personal data is kept and protected and how copyright is upheld.
The guide emphasizes that AI often reproduces content without permission and stresses that users are responsible for any illegal distribution of content. There are intellectual property and copyright risks around both AI output and input. “Universities have to understand that feeding a scientific article, a student’s work or data into an AI tool like ChatGPT sends that work directly to a foreign company and beyond their control,” notes Dr. Gambs.
Réjean Roy was among the architects of Quebec’s AI use guide. He is the director of training and knowledge mobilization at IVADO, an interdisciplinary and cross-sector AI training, research, and knowledge mobilization consortium led by Université de Montréal.
“The guide lays out the major ethical principles that should steer AI use in post-secondary institutions, but it also answers the central question: how do we get there?” says Mr. Roy.
The answer to this question involves a strong and well-structured governance framework. Universities must have a clear grasp of how professors, researchers, students and employees are using AI, educate them on their ethical and legal obligations, and consult them before introducing new AI technology. “These tools must also be thoroughly tested in secure environments before wide implementation,” adds Mr. Roy.
In January 2025, the federal government abandoned Bill C-27, which included legal provisions to regulate AI and to reform the protection of personal information. No further legislation is currently before Parliament.
Provincial legislation also remains minimal. In 2024, the Ontario provincial government adopted Bill 194, which permits the government to introduce regulations regarding the use of AI in the public service. There is currently no other provincial legislation that directly regulates AI.
However, even in the absence of legislation regulating AI in particular, Dr. Paquelin notes that “the law at large applies whenever we use this technology in universities.” This includes human rights and labour laws, legislation on the protection of personal information, copyright law, and professional codes of conduct.
“There are pros and cons to artificial intelligence,” says Dr. Paquelin. “Universities now face the challenge of leveraging the pros while curtailing the potentially very negative effects of the cons.”