Adopting AI is a social contract

Integrating artificial intelligence into our societies and personal lives binds us to certain futures and forecloses the possibility of others. Are we ready to accept the consequences?


Much of the present conversation about AI in higher education centers on questions of implementation. How do we use AI in accordance with principles of universal design? How can we ensure equity in its use, whether across axes of gender, race or class? What does AI mean for the longevity of the professoriate? Implementation should indeed be approached with care and nuance, and we welcome this conversation.

Yet questions of implementation assume that AI in the classroom is desirable and inevitable. The prior question, whether AI in higher education is actually desirable, is often overlooked. Two widespread assumptions underpin this oversight: 1) technological progress is inevitable; 2) technology is apolitical and becomes political only in its implementation.

The first assumption must be false — or, at least, incoherent — because nothing in history is inevitable. In history, there are only increasing or decreasing likelihoods. Of course, people are constrained, conditioned or encouraged by wider forces and structures. But to believe history is inevitable is to strip individuals of their agency and to render the concepts of responsibility and justice empty fictions. Historical change happens because people make choices and do things. 

The second assumption is what historians and philosophers of technology call the “tool view” of technology. Tools, like hammers, seem politically and ethically inert. A hammer may be “good” if it is used to build good things, and “bad” if used for nefarious purposes. Ultimately, moral judgment falls onto the tool user and her intentions. The tool itself is a mere pawn.    

AI is not a tool like a hammer

When it comes to the large, complex technological systems that characterize our late-modern world, the tool view has been roundly rejected by those who study technology and its historical development. The political scientist Langdon Winner forcefully argued in his 1980 essay “Do Artifacts Have Politics?” that technologies do indeed have politics. A compelling example is nuclear power. Because nuclear power is so potent and inherently dangerous, as soon as a society adopts it, that society commits itself to a higher degree of surveillance and policing than before. Nuclear power necessarily entails a social and political arrangement that centralizes power instead of diffusing it. And, once adopted, the situation is very difficult to reverse.

The adoption of a large-scale technology like nuclear power or AI is thus a social contract, with ramifications like those of a legislative act. But such adoptions are not subjected to the democratic scrutiny and debate of legislation, because technologies are widely taken to be sophisticated tools: discrete, apolitical and amoral in themselves. Political and ethical questions are relegated to the sphere of user intention, as we see in much of the current discourse on AI.

In opposition to the tool view, historians and philosophers of technology present a view of technologies as networks of materials, places, practices, people, institutions, politics and ethics that often literally span the globe. Most late-modern technologies are not like hammers. They have the potential to shape how we experience the world and how we act, in profound, long-lasting and surprising ways. Who knew, for instance, that the electrification of the urban landscape in the nineteenth century would lead to whole new ways of navigating and experiencing the cityscape, and to new forms of artistic expression? Reorienting our perspective on AI to this network view allows questions about long-term, binding social and political commitments to gain purchase.

Are the benefits worth the risks? 

AI surely has benefits for higher education. It could reduce instructor workload by generating course design elements, like syllabuses or webpages, or assessment materials, like multiple-choice tests. It could produce lecture slides from notes. It might offer instructors speedy improvements for universal design or active learning strategies. It could address simple student queries about logistics. It may even serve as a “tutor” to work through difficult material. 

Yet its critics have pointed to many costs. Data centres use enormous and increasing amounts of energy, projected by some estimates to grow to 12 per cent of total U.S. electricity usage by 2030. MIT News reports that by 2026, data centre electricity consumption is expected to approach 1,050 terawatt-hours, which would place it fifth on a global list, between Japan and Russia. Coal plants slated for retirement have remained open to support this rapacious energy draw. The production of AI chips, and the cooling of data centres and power plants, consume large amounts of water, to say nothing of the associated pollution and carbon emissions. AI has generated misinformation, disinformation and deepfakes. Producing AI devices involves exploitative supply chains, as made clear by Kate Crawford and Vladan Joler’s Anatomy of an AI System. Using AI to inform insurance coverage and court sentencing has been contested on human rights grounds. Automated computation systems have been linked to decreased accountability. And the boom of AI has helped to further concentrate wealth and political power in the hands of a few.

There are also concerns specific to higher education that are under-discussed and under-disclosed. Dependence on AI may erode critical thinking and creativity among students and faculty alike. Use of these tools poses uncertain risks to privacy and data security. And consent to them is too often taken for granted, slipped in through the “routine software update,” which raises challenging questions about academic labour conditions (as recently emphasized by Hannah Johnston in a Canadian Association of University Teachers feature article).

If we forgo disempowering assumptions about technological inevitability and naïve ideas about technology as apolitical, both of which facilitate a premature leap to implementation, the question becomes: are the benefits of wholesale adoption worth the costs? We do not presume to answer this question, merely to pose it as coherent, critical and prior to discussions of implementation. AI may well be worthwhile in some areas, such as health-care research, and perhaps also in certain areas of higher education. But such judgments demand nuanced consideration, attending to specific forms of AI and their costs, not naïve obeisance to boosterism. Like a legislative act, adopting AI binds us to certain futures and forecloses the possibility of others. We had better ensure that the benefits are worth the potential environmental, political and social costs. Adopting technology for any purpose, including education, is not an apolitical or amoral act.
