How should university education adapt to rapidly evolving artificial intelligence? What are we teaching our students that won’t eventually be replaced by machine learning?
Many people working in higher education have been asking questions like these since ChatGPT launched on Nov. 30, 2022. In the short time since, academics have been struggling to make sense of the technology. Many have concerns about academic integrity. Some see opportunities for reimagining university teaching. Teaching and learning centres are offering information and programs to help faculty understand how and why the higher education landscape is changing.
Disruptive technologies alter how societies and industries function by replacing previous patterns of behaviour or processes. Think of automobiles, television, household appliances, computers, cellphones, e-commerce … the list goes on and on. It is clear that AI and machine learning will disrupt – and are already disrupting – higher education.
The question is how we will respond.
Embracing rather than fighting inevitable change
While it can be tempting to focus on the challenges of disruption, I feel higher education needs to adapt to and embrace these technologies. As I wrote in my December 2021 column (“Can technology help advance writing skills instruction?”): “Rather than lamenting them, let’s harness these technologies.” Among the ideas I put forward in that column was using technology to assist with formative and summative assessments, and teaching students both the skills and ethics of AI-assisted writing.
Embracing change means going beyond accepting change to asking where the opportunities lie for us to harness technology to make academia better. In a January 2023 Inside Higher Ed article, Steven Johnson argues that we should ask ourselves questions that push us to “go big. How do these tools allow us to achieve our intended outcomes differently and better? How can they promote equity and access? Better thinking and argumentation? How does learning take place in ways we haven’t experienced before?”
To these questions I add: how can new technological tools assist us in teaching students to think critically? How can they help us teach students to evaluate and engage with “evidence”? How can we use them to get students to reflect on what technology can and cannot do? Is there an opportunity for these tools to deepen disciplinary thinking while advancing interdisciplinary understanding? Can these tools, and our responses to these tools, compel us to become better teachers who make fuller use of evidence-informed pedagogies?
Preparing students for the future
Students, the public, and the governments funding universities expect higher education to be linked with career outcomes. These linkages can be direct or subtle. Programs directed at specific professions or careers (such as engineering, education, law, social work, computer science, or medicine) often explicitly teach career skills. For programs that do not tie to specific professions or careers (such as many of the natural sciences and liberal arts), career skills training can be less evident. Nevertheless, I argue that all instructors should integrate career skills into their courses.
Career preparation must take into account the ever-evolving technologies that our graduates will be using. In the above-noted Inside Higher Ed article, Ted Underwood argues: “Our approach to teaching should be guided not by one recent product but by reflection on the lives our students are likely to lead in the 2030s. … the uncertainty itself is a reminder that our goal is not to train students for specific tasks but to give them resilience founded on broad understanding.” Given this, disciplines less directly associated with specific careers may actually be the best positioned to help students learn how to incorporate and harness these new tools.
One option in this regard is to use our classes to help students build AI literacy – defined by Duri Long and Brian Magerko as “a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace.” Examples of such competencies include the abilities to:
- “Identify problem types that AI excels at and problems that are more challenging for AI. Use this information to determine when it is appropriate to use AI and when to leverage human skills.”
- “Understand that data cannot be taken at face-value and requires interpretation. Describe how the training examples provided in an initial dataset can affect the results of an algorithm.”
- “Identify and describe different perspectives on the key ethical issues surrounding AI (i.e., privacy, employment, misinformation, the singularity, ethical decision making, diversity, bias, transparency, accountability).”
It is easy to brainstorm ideas for how to build such competencies into courses across the disciplines. Such competencies equip students for careers in which they will need to engage with emerging technologies in responsible and ethical ways.
Continuing the Skills Agenda conversation
Over the coming months, I will use this column to discuss how we can consider skill development in higher education in the context of rapid technological change. In the meantime, I would love to hear your thoughts on what these changes mean for your own teaching in the comments below. I also welcome opportunities to speak with universities about skills training. Please connect with me at [email protected], subject line “The Skills Agenda.” Finally, for additional teaching, writing, and time management discussion, please check out my Substack blog, Academia Made Easier.
I look forward to hearing from you. Until next time, stay well, my colleagues.