Rethinking university writing pedagogy in a world of ChatGPT
A solution may be found in focusing on the process of writing, rather than the final product.
OpenAI’s recent release of ChatGPT is, in a word, disruptive. In a matter of seconds, this publicly accessible chatbot generates sophisticated writing in the genre of your choice. You want an argumentative essay defending a carbon tax in Canada with a thesis statement, main arguments, and counterarguments? You got it. You want a literature review on sustainable food practices citing eight peer-reviewed journal articles? No problem. Just type the request in and ChatGPT will oblige. It generates such impressive content that a recent article in The Atlantic warns that ChatGPT “may signal the end of writing assignments altogether.”
It’s no secret that ChatGPT poses a significant challenge to higher education. Most obviously, there are the academic integrity concerns. Since ChatGPT generates original content, plagiarism software like Turnitin doesn’t currently flag it. But this technology poses a deeper challenge, one that cuts into the university’s core purpose of cultivating a thoughtful and critically engaged citizenry. If we can now all access sophisticated and original writing with a simple prompt, why require students to write at all?
The ability to write well, to formulate one’s ideas with clarity and concision, has long been a core learning objective of a university education. Writing is not only how we express our thoughts to others, but it’s how we develop our own thinking. George Orwell, as he often did, summed up the problem perfectly: “If people cannot write well, they cannot think well, and if they cannot think well, others will do their thinking for them.” Ultimately, ChatGPT has the potential to disrupt our ability to think.
Given this challenge, what can university instructors do from a writing pedagogy perspective? First, we ought to avoid generic essay topics like carbon taxes and sustainable food practices and instead formulate specific questions on individual scholars and works. Using this reasoning, I put the following prompt into ChatGPT: “Detail Tom Regan’s and Peter Singer’s respective arguments for animal rights. How are they similar? How are they different? How would Regan respond to Singer and how would Singer respond to Regan?” I was surprised that it generated a cogent response, including how each philosopher would respond to the other. Since Regan and Singer are well-known philosophers, a better approach is to craft questions that focus on lesser-known scholars and works. When I experimented along these lines, ChatGPT had nothing to generate. Moreover, since its training data only extends to 2021, developing questions on current topics, where feasible, is another strategy. Nevertheless, as this technology develops, my worry is that these are only band-aid solutions.
Over the coming months, we need to develop a sustainable approach to address the challenge of artificial intelligence, which includes ethical ways to incorporate it into teaching and learning. It also includes rethinking standard writing practices, particularly at the undergraduate level. For the most part, we assign students a paper, they write and submit that paper, and we provide summative feedback that justifies our mark on that paper. With this submit-it-and-forget-it approach, it’s rare for a student to receive formative feedback on their work. While there are exceptions – like students who receive feedback from their university’s writing centre – this focus on the final writing product does not set students up for success in their professional careers. Whether it’s a business report, a government policy brief, or an academic peer-reviewed journal article, most professional writing goes through an extensive feedback and revision process.
Writing scholars have long been advocates of providing students more opportunities to receive formative feedback on their work, whether from classmates, instructors, or writing centre tutors. Focusing on the process of writing, rather than the product, not only helps combat ChatGPT, but it helps students develop their writing, thinking, and metacognitive skills. Imagine a flipped classroom where students come to class with a draft paper and engage in a peer review exercise. Students read a peer’s work and provide feedback using a peer review handout that guides their response. Each student, upon receiving their feedback, then assesses the comments and decides what feedback to implement before engaging in revision. The students could use this feedback to expand a shorter paper into a longer work. Currently, ChatGPT has more difficulty replicating the feedback and revision process, including expanding shorter works into longer works. But more importantly, an emphasis on the process of writing is effective pedagogy. Giving, receiving, and implementing feedback develops foundational student skills. The literature on student peer review, for example, has shown that giving feedback improves metacognitive skills more than receiving feedback.
In addition to having students revise their written work based on formative feedback, there are other ways to overcome ChatGPT while building student skills. To accompany the final paper, instructors could assign a reflective paper in which students document their research, writing, and learning process. Although ChatGPT can generate reflective papers, it has more trouble with prompts that ask students to justify their writing decisions, like: “Why did you decide to integrate feedback regarding ‘x’ but not ‘y’?” Reflective questions like this add another metacognitive component to student learning.
Although ChatGPT is undoubtedly a disruptive technology, disruptions provide opportunities for improvement. By reorienting our writing pedagogy to the process of writing rather than the final product, we can improve student learning. It’s in the process, after all, where all the thinking happens.
James Southworth is a writing consultant, teaching and learning, at Wilfrid Laurier University.
Comments
An experienced professor can troubleshoot the use of ChatGPT and other text-generating software reasonably easily by having students write an in-class diagnostic at the beginning of a course. This provides an authentic piece of writing as a benchmark. Then, throughout the writing process, at the draft stages, dynamic feedback against a rubric that includes descriptors for authentic voice, style, tone and word choice can flag plagiarized text. As well, what text-generation does not do well is recognize and follow the Socratic method of argument, so text that is not developed appropriately according to a rhetorical mode is quite easily identified as inauthentic. Text-generating software cannot contribute the subtleties and intuitive connections between concepts that a human brain can. It can really only report information.
This has less to do with experience and more to do with resourcing. For instance, if you are personally responsible for marking 200 papers on a course in a semester, you cannot be checking diagnostic writing against developed drafts. The time for this is not built in, and time is among the most precious resources for academics. You also have no provision against someone not doing that diagnostic test, barely participating, and thus evading your control measure.
Tutorials where students must explain and defend their writing would be great — the Oxbridge model — but this too presents resource issues. I think this is a bigger problem than even the author of this piece is foreboding given other trends in higher education to do with finances and staffing.
This is an excellent idea. Have you got any reference recommendations for the professors who lack expertise in teaching so that they can learn how to pull this off effectively? Thank you
As a writing instructor, I have also been wondering how to change my teaching approach and assignments to address the new reality of AI. I have the additional concerns of providing high quality and timely feedback in a high enrollment, online-only and self-paced course. This means learners do not have a cohort to receive in-class feedback on their writing. The course material also requires students to learn various document genres and rhetorical techniques.
Based on my experience teaching writing in-person, I am not convinced of the value of in-class peer feedback on classmates’ writing. I did this for a few years in my in-person courses and found that many learners reported it did not provide them valuable feedback (e.g., due to fellow learners being too new to the necessary skills to provide meaningful feedback, or due to some students not taking the exercise seriously, even if it counts towards their participation grade). I have had learners complain vehemently that the quality of their classmate’s feedback was so poor that they felt it put them at an unfair disadvantage in the grading of their final paper.
To address some of these challenges, I redesigned my writing course a couple years ago using AI. The AI essentially assesses learners’ process and writing. It can point out omissions and provide high-level feedback. It is not the same level as a writing expert’s feedback, but it does get learners thinking about the writing process and provides some real-time feedback for iterative improvement.
Despite this, I’m still wondering if writing courses should just embrace AI as a tool – as many, if not all, writing instructors have with spell-check tools and Grammarly. I encourage my learners to use Grammarly and suggest this be part of their process. Instead of taking human instructor time to identify issues that these tools can easily spot, it enables the course to breeze past these simple topics to move on to more meaningful and advanced topics. Could we restructure writing courses along the same line – and encourage students to use AI instead of fighting it?
What defeats this is old-fashioned formal handwritten exams in an examination room. Students are screened for electronics, etc., before they go in, and scripts are then collected. The solution is indeed simple. Universities just have to go back to the basics that many have forgotten. Students will then need to study for sure. Millions of world citizens went through this exact format and are fine.