The Skills Agenda

Academic integrity in the age of ChatGPT

Evolving technologies present both challenges and opportunities for advancing academic integrity.

BY LOLEEN BERDAHL AND SUSAN BENS | JUN 16 2023

Welcome back to our special three-part The Skills Agenda series on academic integrity. In this series, Susan Bens, educational development specialist at the University of Saskatchewan’s Gwenna Moss Centre for Teaching and Learning, joins me to consider how instructors can approach issues of academic misconduct. In the first column of the series, Susan and I discussed students’ reasons for academic misconduct and how this information can help instructors reconsider their teaching approaches to promote academic integrity. In today’s column, we examine how ChatGPT (along with other generative AI technologies) complicates the issue of academic integrity. In our final column, available next week, we will share ideas for how you can design courses that promote academic integrity.

Linking ChatGPT with student academic misconduct

ChatGPT, with its capacity to write coherent responses to prompts, has considerable potential for improper use. This issue has been occupying faculty, teaching and learning centres, and academic leaders in 2023, and it follows an already heightened concern about academic misconduct and “contract cheating” that became more apparent during pandemic-driven remote instruction. The surge of issues was documented by Canadian practitioners in a collection of reflections, and a Canadian study of faculty perspectives found it was common to feel frustration and even despair about academic misconduct during remote instruction. Then, after all that adjustment and revised assessment, just as we were returning to more normal practice, ChatGPT burst onto the scene and gained over a million users within five days.

But is the use of ChatGPT academic misconduct? It depends. If assistance of the kind ChatGPT provides is not permitted for an assessment and a student uses it anyway, this is likely academic misconduct. Likewise, if a student misrepresents themselves as the author of text generated by ChatGPT, this is probably academic misconduct. Recommendations about the ethical use of AI in education offer a new definition that covers both contract cheating and artificial intelligence text generators:

“Unauthorized Content Generation (UCG) is the production of academic work, in whole or part, for academic credit, progression or award, whether or not a payment or other favour is involved, using unapproved or undeclared human or technological assistance.”

This might be a definition to add to, or adapt for, your institutional policy or your syllabus statements.

How ChatGPT relates to issues of academic integrity

Like it or not, ChatGPT and similar technologies are now part of the teaching environment. Here, we focus on how they can be used in teaching and learning, not on the ethics of their evolution and societal impact. As discussed in a previous Skills Agenda column, “Embracing change means going beyond accepting change to asking where the opportunities lie for us to harness technology to make academia better.” Are there opportunities to use the disruption of technological advances to promote academic integrity?

In last week’s column, we identified six reasons why some students cheat. ChatGPT and similar technologies raise new questions that complicate possible solutions to academic misconduct but may also offer opportunities:

Lack of connection to material
  • Complication with ChatGPT: How ought the material, the learning objectives, and the connections we want our students to make change in a ChatGPT context?
  • Opportunity with ChatGPT: Use ChatGPT to help generate new teaching ideas.
  • Opportunity with ChatGPT: Shift to asking students to perform higher-order thinking where they use a ChatGPT output to do things like fact-check, assess bias, verify, improve, evaluate, extend, edit, and justify.

Skill gaps
  • Complication with ChatGPT: What are the skills that our students need now to uphold academic integrity?
  • Opportunity with ChatGPT: Enhance digital literacy so that students learn to properly and ethically use a new and powerful tool for content creation.

Misunderstanding
  • Complication with ChatGPT: How can students keep it straight which professors are allowing ChatGPT – as well as under what conditions and for what purposes?

Lack of connection with the instructor
  • Complication with ChatGPT: As technology mimics human dialogue and even individualized instruction and feedback, what are the relational roles for instructors?

Tolerance of academic misconduct
  • Complication with ChatGPT: How can we give students the confidence of a fair system when ChatGPT is hard to detect and the use of detection tools raises privacy and copyright concerns?
  • Opportunity with ChatGPT: Require students to document their ChatGPT process and to follow referencing conventions, since machine detection of artificial intelligence writing is not robust or proven (and may never be able to keep up).

Student stress
  • Complication with ChatGPT: ChatGPT is so fast and so accessible – how can humans under pressure resist its charms and still function?
  • Opportunity with ChatGPT: Alleviate stress (even empower well-being) and help students overcome procrastination by making it easier for them to get started on major assignments or exam preparation.

Next week, we will bring the threads together and discuss how you can use this information about academic misconduct to inform your own course design. For now, please share this column with your colleagues to let them know what ChatGPT and similar technologies mean for academic integrity.

Continuing the Skills Agenda conversation

Have you dealt with issues of academic integrity and ChatGPT or other technologies in your own teaching? Please let me know in the comments below. I also welcome the opportunity to speak with your university about skills training. Please connect with me at [email protected], subject line “The Skills Agenda”. And for additional teaching, writing, and time management discussion, please check out my Substack blog, Academia Made Easier.

I look forward to hearing from you. Until next time, stay well, my colleagues.

ABOUT LOLEEN BERDAHL AND SUSAN BENS
Loleen Berdahl is an award-winning university instructor, the executive director of the Johnson Shoyama Graduate School of Public Policy (Universities of Saskatchewan and Regina), and professor and former head of political studies at the University of Saskatchewan. Since 2016, Dr. Berdahl has spoken about student skills training and professional development at conferences and university campuses across Canada. Her research on these topics is funded by the Social Sciences and Humanities Research Council Insight Grant program. Dr. Berdahl’s most recent books include Work Your Career: Get What You Want from Your Social Sciences or Humanities PhD (University of Toronto Press; with Jonathan Malloy) and Explorations: Conducting Empirical Research in Canadian Political Science (Oxford University Press; now in its 4th edition with Jason Roy). Susan Bens is an educational development specialist at the University of Saskatchewan’s Gwenna Moss Centre for Teaching and Learning.
COMMENTS

  1. Genevieve / June 29, 2023 at 10:07

    In winter 2023, I marked a student paper that I strongly suspected was written partially using ChatGPT. Some of the tells were:
    (1) the level of the writing, which was quite superficial
    (2) the style of the writing, which was very similar to the ChatGPT output when I tested it with prompts related to the paper topic – and different from the rest of the paper
    (3) the general lack of references through the portions of the paper that were suspicious
    (4) the inclusion of a few references to papers that do not exist – the authors themselves exist, and the journal identified in the reference exists, but the reference (author X, article title, in journal Y) does not. These problematic references also did not include a DOI despite being for recent journal articles.

    Because there was no clear evidence that the paper had been produced by ChatGPT (only my suspicion), it was not reported for academic misconduct; this followed a discussion with the administrator responsible for academic integrity.
