I’m not afraid of ChatGPT in my classroom
All of my students use ChatGPT, but the grade distribution remains the same.
Questions about generative artificial intelligence technology in educational spaces have prompted some to worry about the future of education and others to sit down at the table to decide how to prevent, identify, and rectify uses of generative artificial intelligence (AI) in educational settings.
From today’s historical vantage point, I don’t think the most important question educators need to reckon with is “Should generative AI like ChatGPT have a place in classrooms?”
I think it’s too late.
Instead, we should ask, “How can teachers and educational institutions incorporate the virtues of generative AI into education plans without compromising learning outcomes?”
I tried to answer this question as I taught a course on “The Sociology of Culture” with a new experimental assignment.
The assignment gave students hands-on experience with sociological research and ChatGPT. Students were to create a research proposal investigating the social origin of a musical subgenre of their choosing by putting a maximum of three prompts into ChatGPT. Then, to complete the assignment, they would grade ChatGPT’s work, which meant assigning it a mark and providing a few lines of feedback.
When the submissions came in, they varied along lines of reading comprehension and critical thinking.
A few students only checked to see if the headings of ChatGPT’s output seemed correct. Research questions? Check. Research methods and ethical considerations? Check, check. Expected outcomes, timeline, budget? Check, check, check. A+ “Great work!”
The more thoughtful and thorough submissions evaluated what ChatGPT put on the page while considering things it left out. One student critiqued the AI’s list of research questions because only two of the four were relevant to the project it was prompted to write. Another student was unimpressed with the chatbot because it failed to present any correct information about the focal musical subgenre of the research proposal.
Other students found ChatGPT to be a decent research assistant. And they weren’t wrong. In some cases, the chatbot generated reasonable and actionable research proposals.
The reality that ChatGPT can generate strong research proposals leads to an especially pointed question: why wouldn’t students use generative AI to complete their work?
It’s true that my students’ work revealed that — in some cases — an AI chatbot could do a good job producing academic work. But the only way to tell if this work is adequate, excellent, or peppered with errors is to read it and think it over. And to thoughtfully evaluate any written work, you need at least a little expertise, or you’ll need to learn something through a little self-guided research.
The column of grades for the ChatGPT assignment resembled others I have seen during my time as a university instructor. Generally, the average grade for a sociology assignment hovers around a B, or 75 per cent. The average for the ChatGPT assignment was 77.4 per cent. The averages for the other assignments (not involving generative AI) were 77.5 per cent and 76.4 per cent. The average for the class? I’ll let you guess.
With the final grades in the books, I still wonder what is at the heart of concerns over students engaging with a new, exciting, and consequential technology in academic settings.
Maybe concerns steeped in the language of Doomsday prophecies circulating among educators reveal not only worry about technological change, but also the state of good faith between teachers and students.
Good faith between teachers and students is necessary for educational settings to be truly educational. For educators, holding good faith toward students means assuming they are honestly pursuing learning objectives and developing their professional capacity to the best of their ability. But good faith doesn’t provide simple and precise answers to the questions at the heart of the current debates about AI technologies in educational settings.
So maybe I’m naïve.
Still, as a university instructor, I think it is essential to respond to technological change, especially the new popular tools people are using to think, do their jobs, and even generate original knowledge. In other words, classrooms should be adapted to accommodate the social world, not the other way around.
Turning back to my fellow teachers facing students holding a new and scary technology, I ask again: How can we incorporate the virtues of generative AI into our education plans and achieve valuable learning outcomes? We can play with it, learn from it, and have a little good faith that our students and future colleagues will do the same.
Taylor Price is a PhD candidate in sociology at the University of Toronto.