In my opinion

Questioning the ethics of online proctoring

Instead of relying on a ‘technological fix,’ we need to ask what drives students to cheat in the first place.

BY KARI ZACHARIAS & KETRA SCHMITT | DEC 03 2021

“It would be a neat trick,” wrote physicist Alvin Weinberg in 1966, “if the social problems could be converted into technological problems.” Dr. Weinberg argued that issues like racism, overpopulation, and road safety are complex because their solutions require persuading people to change their behaviour. If it were possible to develop what Dr. Weinberg called a “technological fix,” thorny social problems could be addressed without engaging with the messiness of human motivations.

The technological fix remains a powerful temptation, despite decades of scholarship that reveals the problematic distinction between “social” and “technological” problems. As faculty members during an era of online instruction, we have had a front-row seat to the implementation of such attempted fixes. One prominent example is universities’ use of automated proctoring systems for online tests. These systems are a proposed technological fix for a longstanding problem, cheating, whose constraints are altered by online evaluation. They aim to replicate the scrutiny of in-person invigilation in various ways. Some systems allow human invigilators to observe students in real time, while others rely on algorithms to report suspect behaviour. They may record a student’s screen, the student themselves, or both. Whatever the operating procedures of a given proctoring service, the systems share a nested set of assumptions: that automated proctoring is a) an ethically acceptable alternative that can b) effectively replicate the conditions of test-taking in a physical classroom, which is c) itself a necessary part of university education.

The ethics of automated proctoring have come under considerable attack from students and faculty alike. Students have reported increased anxiety around proctored online examinations, and have expressed concerns about their privacy and data rights. At some universities, students launched petitions opposing the use of automated proctoring on the basis of privacy violations. Universities and the companies involved have typically responded by citing the strength of their encryption software, or by emphasizing that they do not sell data to third parties. Concerns about privacy breaches are legitimate, given previous cases of educational technology companies monetizing or publishing students’ data. However, they are not the only relevant worries.

Online proctoring demands that the proctor – human or otherwise – enter the student’s private sphere. While writing an online exam, the student is unaware of whether their actions are flagged as suspicious, and often there is no mechanism for real-time communication between student and proctor. These factors result in a test-writing experience marked by an extraordinary level of surveillance. Even if we take proctoring services at their word that they are “extending every possible effort” to ensure students’ privacy, the nature of the product demands intrusion well beyond that experienced in a physical classroom.

Like a host of other well-documented algorithmic systems, the algorithms that govern automated proctoring reflect the biases inherent in the data they rely on for training and identification. Proctoring systems define “normal” bodies and behaviours, and flag as suspicious anyone and anything that falls outside the boundaries they construct. Students with black or brown skin may not be recognized by tracking systems. Students whose appearance, name, or gender does not match the information on their official identification may be flagged for additional scrutiny. Students with certain medical conditions may be labelled as suspect for behaviours that fall outside the system’s default settings.

Proctoring systems also lay bare structural inequalities in simpler ways. Students who do not possess the necessary equipment or space to use a proctoring system are forced to spend money and time to obtain them, before navigating the often complex bureaucratic procedures required to use the systems. At best, this extra effort is a significant inconvenience. At worst, it seriously disrupts the lives and studies of already marginalized students.

As engineering professors, we are familiar with some common responses to the above critiques. “These systems may not be ideal,” goes the refrain, “but they are necessary. Our students are going to build bridges, design cars, and develop software, all of which will serve critical societal functions. If we don’t make sure that students know what they need to know, we will be putting the public in danger. We should do what we can to minimize bias and negative impact, of course, but some difficulties are inevitable.”

We too recognize the importance of educating engineering students responsibly. In fact, we argue that when we rely on deeply flawed systems to govern important aspects of students’ education, we model precisely the type of irresponsible behaviour that we want future engineers to understand and avoid. Our existing physical and digital infrastructures embody the values and biases of the contexts in which they are created, rendering them less functional for some users and use cases than for others. Rejecting an ethically unacceptable technological fix is, in itself, a valuable lesson for our students.

More fundamentally, as we return to the physical classroom, we have an opportunity to question the nature of the issue that proctoring systems claim to address. Defining the problem narrowly and applying familiar constraints is the “neat trick” that makes a technological solution seem feasible, or even unavoidable. Asking what drives students to cheat in the first place, and whether our current assessment strategies are really the best way to advance student learning, is messier and more labour-intensive. It’s also a better path to finding real, equitable, and effective solutions.

Kari Zacharias is an assistant professor in the Centre for Engineering Professional Practice and Engineering Education at the University of Manitoba. Ketra Schmitt is an associate professor in the Centre for Engineering in Society at Concordia University.
