Ensuring the responsible and ethical development of artificial intelligence

Many Canadian researchers are coming together in different organizational structures to study issues surrounding AI ethics and governance.

May 15, 2019

The economic potential of artificial intelligence (AI) has been like a siren song for many governments, and Canada is no exception. According to Statista, a private data company, Canada was home to 0.7 percent of all worldwide private investment and public financing for AI between 2013 and 2018. That puts it in fifth place among the world’s countries, far behind China (60 percent) and the United States (29 percent).

Yet AI raises numerous questions of ethics and governance. The countless applications of this multifaceted technology can have negative effects in many different areas. Some observers bemoan the lack of attention being paid to the ethical dimension of AI governance in Canada, including Daniel Munro, a visiting scholar in the Innovation Policy Lab at the Munk School of Global Affairs and Public Policy at the University of Toronto. In a recent article published in Policy Options, he notes, for example, that the federal government’s Pan-Canadian Artificial Intelligence Strategy expressed little more than a vague intention to support academic research on these issues. In December 2018, however, Canada and France announced the creation of an alliance to promote an ethical and inclusive approach to AI.

Adopting clear principles

Here at home, the research community has united to establish safeguards on AI development. In December 2018, university researchers in Quebec issued the Montreal Declaration for the Responsible Development of Artificial Intelligence. To date, nearly 1,400 private individuals and 41 organizations have signed it.

“The goal is to establish a framework for the responsible development and deployment of AI, with principles that can adapt to different realities and different contexts, but also to participate in the broader discussion about AI ethics,” explains Nathalie Voarino, a doctoral student in bioethics who doubles as the Declaration’s scientific coordinator. The document was compiled collaboratively, with input from nearly 500 individual citizens.

It consists of 10 articles, some of which cover less frequently discussed aspects like privacy protection. “We need to preserve private spaces where people aren’t subjected to digital intrusions or evaluations,” adds Ms. Voarino. Other principles have to do with contributing to the well-being of all sentient beings, respect for autonomy, democratic participation and inclusion of diversity.

The Declaration also calls for caution and responsibility from people involved in both the development and use of AI. “Researchers have to take responsibility, because it’s hard to turn back once a discovery has been made,” Ms. Voarino observes. “At the same time, it’s often difficult to imagine the future uses to which these advances might be put, so the decision-makers, developers, users and ethicists working with these applications need to be cautious as well.”

A popular research subject

Research centres focusing on AI ethics are emerging throughout Canada. In Quebec, the International Observatory on the Societal Impacts of Artificial Intelligence and Digital Technology has adopted an approach based on responsible innovation. “It’s based on principles of inclusion, responsiveness, careful reflection and anticipation of future developments,” explains Lyse Langlois, the Observatory’s scientific director. The Observatory brings together 160 researchers from nine universities and nine CEGEPs. It will be hosted for five years by Université Laval, followed by another university in a different region of the province, and then an institution in Montreal.

“The Observatory has four functions: research and creation in eight research avenues, scientific and strategic monitoring, public deliberation, and public policy,” Dr. Langlois explains. “Our goal is to highlight the major ethical issues and to acquire real influence in the field.”

Also last December, the University of Guelph announced the founding of its Centre for Advancing Responsible and Ethical Artificial Intelligence (CARE-AI). Nearly 90 researchers from a range of disciplines will be associated with the Centre. Graham Taylor, an associate professor at U of Guelph, is its academic director. “Our objective will be to make sure that AI will benefit humans, and to reduce its negative impact,” he says.

The centre’s approach is based on three pillars. First is the method, i.e., the development of efficient yet human-compatible algorithms. Next is responsibility, the art of avoiding bias in algorithms and ensuring that they are equitable, explainable and accountable. Finally comes the construction of AI applications that improve life for all humans.

For its part, U of T recently announced the creation of the Schwartz Reisman Innovation Centre. The centre will include the Reisman Institute for Technology and Society, which will study the social impact of AI and technology. “The purpose is to observe the effect of different technologies, including AI, on work, democratic institutions and all other aspects of society that they may come in contact with,” explains Vivek Goel, vice-president, research and innovation, at U of T.

Dr. Goel notes that the new innovation centre will bring together many people who are involved in developing marketable applications. “But we want to highlight the contributions of philosophers and researchers in social sciences and the humanities to help identify the potential negative impacts of these technologies before they reach the application stage,” he says.

The upshot is that it seems many researchers have decided to work to ensure that the siren song of AI doesn’t send us crashing into the rocks. It remains to be seen whether governments and private businesses will show the same degree of caution.
