
AI, higher education and the future of academic supervision

Co-learning to use AI tools effectively.


Last year, the University of Alberta held a workshop for graduate student supervisors focused on generative artificial intelligence (AI). Participants dove into the opportunities and hurdles of AI, shared their thoughts, optimism and worries, and brainstormed potential next steps. Universities worldwide are grappling with similar questions, and it’s important that they share the outcomes of those conversations as part of a broader discussion about what we’re learning and where we’re going. Our hope? To help everyone better navigate the AI landscape in higher education by sharing our ideas.

In the face of novelty, we often seek understanding through familiarity. In this vein, for me, the workshop resonated with the words of astronomer and public intellectual Carl Sagan, who remarked, “We live in a society exquisitely dependent on science and technology, in which hardly anyone knows anything about science and technology.” In our context, Dr. Sagan’s observation isn’t a call for pessimism but rather guarded optimism. Mostly, it’s a challenge to rise to the occasion and thoughtfully navigate the uncertainties of this exquisite dependence.

While some of us are well-versed in AI, many are still exploring its uncertainties – and that’s alright. The excitement that accompanies uncertainty – the spark that propels our quest for knowledge and exploration – is in large part what defines us as academics and helps us shape the scholars of tomorrow. We see AI’s potential. We also see its dangers. Undeniably, though, we must learn to use these tools effectively in supporting our graduate students’ development.

Students entrust supervisors with guiding them through a web of research, academic citizenship and technology. AI tools like Google Gemini, ChatGPT and MidJourney, while powerful technologies, should not eclipse the learning process. Our role is to bridge the knowledge gap, to maintain the balance between leveraging these tools and preserving academic integrity and intellectual development. Based on our conversations in the workshop, here are some of the key takeaways to consider:

  • Co-learning AI proficiencies: As those supporting graduate students, it is incumbent upon us to be fluent in the language of AI, not just to provide guidance but to engage in meaningful dialogues with our students. It’s not about mastering this technology before our students do but about using this unique opportunity to co-learn with them.
  • Promoting ethical uses of AI: Supervisors must foster an environment of academic integrity amid the increasing prevalence of AI tools in research. This involves crafting clear guidelines for using AI in academic work. Beyond encouraging students to cite the AI tool and the specific prompts used in their research, it’s crucial to initiate discussions that encourage students to explore their “AI prompts” with actual people in their academic community.
  • Nurturing critical thinking with AI proficiency: Students must develop the skills necessary to thoughtfully evaluate the use of AI in their research. Supervisors should emphasize cultivating critical thinking skills with AI while promoting an understanding of data literacy and AI tools like ChatGPT. In an age where AI can provide ready-made answers, ensuring that students continue actively engaging with the material is paramount.
  • Addressing the AI access divide: As AI becomes increasingly integral to academic research, addressing economic disparities and technical challenges associated with access to AI tools is essential. Providing targeted support and resources, such as free subscriptions to AI tools, can promote equal access and foster an inclusive academic environment.
  • Engaging students in decisions that affect them: Students are at the forefront of AI usage in academia, making their insights invaluable in shaping AI policies and practices. Encouraging their participation in decision-making processes and integrating AI ethics into the curriculum can foster a sense of responsibility and critical engagement with AI technologies’ ethical implications. This process could range from students organizing workshops on AI skills to contributing to the institution’s policies on AI usage in academics.
  • Encouraging student involvement in research planning: As students become proficient in leveraging AI tools in their academic work, their active involvement in planning their research becomes critical. Open dialogues about AI’s potential benefits, pitfalls, biases, and ethical considerations can foster a sense of ownership, responsibility, and critical engagement in their research journey.

In this evolving landscape, the goal remains to equip our graduate students with the knowledge, competencies and ethical considerations necessary to conduct impactful research. Rather than replacing traditional educational practices, AI enhances them, offering new avenues for exploration while posing fresh challenges.

Jay Friesen is an educational curriculum developer in the faculty of graduate and postdoctoral studies and an assistant lecturer in the faculty of arts, community-service learning, University of Alberta. Renee Polziehn is the professional development director, faculty of graduate studies & research at the University of Alberta.
