ChatGPT? We need to talk about LLMs

Large language models are here to stay, but they also present ethics and equity questions about their design, operationalization, and legacy that universities must consider.

Large language models such as ChatGPT have disrupted how universities engage with teaching and learning. Large language models, or LLMs, are forms of artificial intelligence (AI) that use machine learning to process, understand, and produce new content (text, images, etc.). While some universities are embracing this new technology as a teaching opportunity, others are updating their academic integrity policies and adopting punitive technological measures to restrict its use.
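To make the basic mechanism concrete, here is a minimal, hypothetical Python sketch using the open-source Hugging Face transformers library and the small GPT-2 model (an example of our choosing, not a tool discussed in this article). Given a prompt, the model repeatedly predicts a likely next token, which is the core operation underlying chat tools like ChatGPT.

    # Minimal text-generation sketch (illustrative only).
    # Requires: pip install transformers torch
    from transformers import pipeline

    # Load a small, openly available language model (GPT-2).
    generator = pipeline("text-generation", model="gpt2")

    # The model continues the prompt by predicting one token at a time.
    prompt = "Universities are responding to large language models by"
    result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
    print(result[0]["generated_text"])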

Whatever way universities are “dealing” with LLMs, the message is clear: they are here to stay, and we need proactive solutions to manage them within institutions of higher learning. Yet, in the words of Amba Kak, AI Now’s Executive Director, “[If] we accept that these technologies are a kind of inevitable future, we’re also [accepting] that a handful of tech companies would have tremendous and unjustifiable control and power over societies and over people’s lives.”

Indeed, amidst the cacophony of articles, op-eds, and various other engagements, serious consideration of the harms of LLMs in higher ed has largely been missing. Before we engage in conversations about how we can ethically use LLMs for teaching, learning, and assessment, we need to determine whether LLMs themselves are ethical. Importantly, we must ask whether they will create a more equitable future of learning, which has been a recent strategic priority for many American and Canadian institutions. Instead of fatalistically embracing technology as progress, we ask whether this progress is, indeed, the kind of progress we want. And if so, progress toward what kind of world?

Towards the ethical production of LLMs

This is not a technophobic perspective; quite the contrary. While broader debates about ethical AI are actively underway, our goal is to question the big picture in higher education, namely the ethics that guide the design, operationalization, and use of AI technologies and their impacts, not only on teaching and learning but also on our responsibilities, as an educational community, to the environment, labour rights, equity, knowledge production, and more. This, we argue, can be done by ensuring that all production and legacies of LLMs are ethical: socially and economically just, equity-affirming, environmentally responsible, and epistemically anti-colonial. Let us consider some of the current harms of LLMs across their three phases of design and development, operationalization, and future legacies:

The interactive image above identifies some of the harms and risks that have been described in the literature. We say “some” deliberately, as more and more research and analyses are being published by those most impacted by these harms. There is also considerable overlap, or intersectionality (compounded harm), between the categories mentioned above.

Here are some examples. LLMs are developed by ingesting vast amounts of internet data and textual sources (articles, images, newspapers, etc.) as their knowledge base, and this includes the good, the bad, and the ugly of the internet. Reliable and unreliable sources are indistinguishable to the AI system: academic peer-reviewed articles are given as much “authority” as Facebook comments. To make outputs less toxic (racist, sexist, violent, etc.), companies outsourced labour to Kenyan workers, paying them less than $2 an hour to build training datasets by labelling textual descriptions of sexual abuse, hate speech, and violence, while disregarding the trauma that sifting through such violent content inflicts on workers. Further, while these systems have translation capabilities, most of their knowledge base, like much of the internet, is Western-centric, silencing or erasing histories, cultures, and ways of knowing outside dominant Western datasets. Consequently, the systems’ knowledge base is skewed and biased, and it perpetually reproduces and reifies existing inequities, including in future datasets.
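To illustrate the point about indistinguishable sources, here is a deliberately simplified, hypothetical Python sketch; the documents and source labels are invented for illustration and do not depict any company’s actual data pipeline. Once scraped text is pooled for training, provenance is typically discarded, so a peer-reviewed finding and a social media comment enter the corpus with the same “authority.”

    # Illustrative simplification only: these documents and sources are
    # hypothetical and do not represent any real training pipeline.
    corpus = [
        {"source": "peer_reviewed_journal", "text": "Peer-reviewed finding about public health..."},
        {"source": "social_media_comment", "text": "Unverified claim repeated in a comment thread"},
        {"source": "news_article", "text": "Reported story from a newspaper archive..."},
    ]

    # When the text is pooled for training, the provenance metadata is dropped:
    # every string contributes to the model on the same footing.
    training_text = [doc["text"] for doc in corpus]
    print(training_text)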

When discussing the ethics of LLMs and their adoption within higher ed, ethical consideration must therefore be expanded to evaluate their full systemic impacts beyond teaching and learning, and beyond this moment. This means widening analyses to ensure the inclusion of global(ized), racialized, gendered, environmental, and socioeconomic perspectives that interrogate the role of informed consent. Indeed, harms will extend intergenerationally; will benefits do the same? Our actions now have profound consequences for future generations: their environment, economy, social norms, and knowledge reproduction.

Understanding and taking responsibility for potential harms

Given LLMs’ seamless and stealthy integration into platforms as ordinary as Microsoft Word or Google searches, institutions of higher ed need to contend with all the unintended ways in which LLMs are becoming part of the student and educator experience. If we accept that AI technologies are here to stay, ethical standards for ed tech (including procurement and adoption of open access platforms) will need to evolve along with them. Institutions of higher ed will need to expand their understanding of responsibility to be accountable for the harms of LLMs. This necessitates more transparency around harms, as well as education for students, staff, and faculty focused on digital justice principles (access, participation, common ownership, and healthy communities) and “consentful tech.”

Universities need to critically examine the labour implications for course authors and educational support staff. LLM-based platforms like NOLEJ and Tome are being commercialized as tools for building learning materials, appealing to overworked, undercompensated instructors who may be paid the same whether they develop an off-the-shelf course with assets produced by a corporate publisher or a completely original, bespoke learning experience for students. For institutions facing pandemic-induced austerity and hiring freezes, reducing labour costs through AI may be an ethically dubious way to make savings while still producing new “original” course content to which they can retain intellectual property rights.

Institutions of higher ed are not passive consumers of this technology. They must take responsibility for the harms of LLMs and the continued privatization of teaching and learning technologies. As a community, we will need to reconceptualize how to achieve an equitable future of learning.

Rebecca Sweetman is the associate director of educational technologies for arts and science online at Queen’s University. Yasmine Djerbal is the associate director for the Centre for Teaching and Learning at Queen’s. 
