
I’m not afraid of ChatGPT in my classroom

All of my students use ChatGPT, but the grade distribution remains the same.

BY TAYLOR PRICE | JUL 31 2023

Questions about generative artificial intelligence (AI) in educational spaces have prompted some to worry about the future of education, and others to sit down at the table to decide how to prevent, identify, and rectify uses of the technology in educational settings.

From today’s historical vantage point, I don’t think the most important question educators need to reckon with is “Should generative AI like ChatGPT have a place in classrooms?”

I think it’s too late.

Instead, we should ask, “How can teachers and educational institutions incorporate the virtues of generative AI into education plans without compromising learning outcomes?”

I tried to answer this question as I taught a course on “The Sociology of Culture” with a new experimental assignment.

The assignment gave students hands-on experience with sociological research and ChatGPT. Students were to create a research proposal investigating the social origin of a musical subgenre of their choosing by putting a maximum of three prompts into ChatGPT. Then, to complete the assignment, they would grade ChatGPT’s work, which meant assigning it a mark and providing a few lines of feedback.

When the submissions came in, they varied along lines of reading comprehension and critical thinking.

A few students only checked to see if the headings of ChatGPT’s output seemed correct. Research questions? Check. Research methods and ethical considerations? Check, check. Expected outcomes, timeline, budget? Check, check, check. A+ “Great work!”

The more thoughtful and thorough submissions evaluated what ChatGPT put on the page while considering things it left out. One student critiqued the AI’s list of research questions because only two of the four were relevant to the project it was prompted to write. Another student was unimpressed with the chatbot because it failed to present any correct information about the focal musical subgenre of the research proposal.

Other students found ChatGPT to be a decent research assistant. And they weren’t wrong. In some cases, the chatbot generated reasonable and actionable research proposals.

The reality that ChatGPT can generate strong research proposals leads to an especially pointed question: why wouldn’t students use generative AI to complete their work?

It’s true that my students’ work revealed that — in some cases — an AI chatbot could do a good job producing academic work. But the only way to tell if this work is adequate, excellent, or peppered with errors is to read it and think it over. And to thoughtfully evaluate any written work, you need at least a little expertise, or you’ll need to learn something through a little self-guided research.

The column of grades for the ChatGPT assignment resembled the others I have seen as a university instructor. Generally, the average grade for a sociology assignment hovers around a B, or 75 per cent. The average for the ChatGPT assignment was 77.4 per cent. The averages for the other assignments (not involving generative AI) were 77.5 per cent and 76.4 per cent. The average for the class? I’ll let you guess.

With the final grades in the books, I still wonder what is at the heart of concerns over students engaging with a new, exciting, and consequential technology in academic settings.

Maybe concerns steeped in the language of doomsday prophecies circulating among educators reveal not only worry about technological change, but also worry about the state of good faith between teachers and students.

Good faith between teachers and students is necessary for educational settings to be truly educational. For educators, holding good faith toward students means assuming they are honestly pursuing learning objectives and developing their professional capacity to the best of their ability. But good faith doesn’t provide simple and precise answers to questions at the heart of the current debates about AI technologies in educational settings.

So maybe I’m naïve.

Still, as a university instructor, I think it is essential to respond to technological change, especially the new popular tools people are using to think, do their jobs, and even generate original knowledge. In other words, classrooms should be adapted to accommodate the social world, not the other way around.

Turning back to my fellow teachers facing students holding a new and scary technology, I ask again: How can we incorporate the virtues of generative AI into our education plans and achieve valuable learning outcomes? We can play with it, learn from it, and have a little good faith that our students and future colleagues will do the same.

Taylor Price is a PhD candidate in sociology at the University of Toronto.

COMMENTS

  1. Mara Reich / August 2, 2023 at 16:27

    I couldn’t agree more! It’s time to embrace change and encourage students to think critically.

  2. Cristian Suteanu, Professor, Saint Mary's University, Halifax, Canada / August 2, 2023 at 21:01

    The author’s enthusiasm is admirable, and the question asked is a valid one. However, when offering insights on a certain theme, one should rely on somewhat deeper reflection, beyond the strict context of the here and now. In my view, the problem is not how to grade students who are using ChatGPT in a class exercise now, after having been educated in the “older” ways, in another world altogether. As someone who has been involved for several decades both in teaching and in the spectacular world of evolving computer technology, from neural networks to the current accelerated developments, I have questions that I find quite troubling. For instance, what about the students’ intellectual development in the future, which – knowingly or unknowingly – we are preparing through our teaching? Will students – of all ages – rely increasingly on “the new popular tools people are using to think, do their jobs, and even generate original knowledge”? Probably. With what implications? What can we do to prevent people from becoming dependent on AI tools to express their ideas? What can we do to prevent people from believing that they would be incapable of having their own ideas, after getting used to requesting and obtaining ideas from AI on a daily basis? I believe that such questions are not among those that can be postponed for a later time. Things are changing fast. Maybe the least we can do is think together, work together, and share our experience.

  3. Rodrigo Dal Ben / August 3, 2023 at 10:15

    Thank you for this balanced view! Much appreciated.

  4. Rob / August 16, 2023 at 03:05

    Just before the start of the summer semester, a handful of instructors in our department sat down to discuss approaches to our Introductory Academic Writing course, and of course, ChatGPT came up. How can we tell if something is written by AI? How should it be policed? Should the onus be on us as teachers to prove it, or should we put that responsibility on the students? Can anti-plagiarism software like TurnItIn detect AI-generated prose? Etc., etc., etc.

    The thing is, if anyone has actually tested ChatGPT and similar generative AI software against their course materials and assignments, I think it is pretty obvious they are not a threat to writing at the academic level (unless said materials or assignments are so generic, or the requirements so juvenile, as to facilitate this kind of writing). Not only is generative AI software NOT a threat per se, but it is also not an effective tool for teaching writing, and I say this for a number of reasons.

    First, I have punched in my writing assignments verbatim and ChatGPT has never been able to produce anything remotely close to what I require in my course. In this sense, ChatGPT is similar to a ghostwriter who has not taken the course and does not understand how to integrate the content effectively. Might this require teachers to produce more unique assignments? Perhaps, but I think that even in collective courses, teachers still have their own approaches, so ChatGPT isn’t going to navigate that with any efficacy at this point because it is not attending the course. At least it hasn’t in my courses.

    Second, and this is what I point out to my students, ChatGPT is massively flawed, which means students will need to review the content, find sources of data, confirm the data, and then cite the data using the appropriate format (something ChatGPT remains exceptionally weak on). In the end, the students are creating far more work for themselves in the editing/revision stage than if they just do the assignment on their own. Since students are reluctant to approach the editing/revision stage (at least in my 15 years of experience), this is going to cost them marks, which means they will (eventually) clue in that ChatGPT is not as helpful as it might seem on the surface (presuming teachers are not rubber-stamping grades, but that is a different issue in higher education).

    Third, having taught EFL for over a decade, I can say that ChatGPT is not going to improve students’ writing any more than a model essay will. ChatGPT’s writing is fairly static and inert, with none of the fluidity and nuance of human writing. As such, it is a letdown even for modeling writing approaches. The generated texts do not teach students how to write, and one of the issues I have with the incorporation of technology in the classroom is that the same technologies we use to assist in the classroom are generally not available on examinations, which means we spend a semester (or even, arguably, a lifetime) letting technology supplant skills rather than develop them, then we pull the rug out from under students when we go, “Nope, you’re not allowed to use this for your assessment.” It is one thing that has always struck me as ridiculous (right up there with word counts for writing assignments), yet no one else seems to see it this way (forest for the trees?).

    Ultimately, I stand by Postman here – any technology is a Faustian bargain that generally costs more than it gives in return. Do we need to worry about ChatGPT? No. Does this mean we should embrace it? I would say the answer to that is no. What we need to do as teachers is explain the likely cost of using it to students in terms of time, effort, and outcomes (both positive and negative).
