Everything has (not) changed: artificial intelligence, teaching and learning

Despite new technologies, the foundations of teaching and learning have not changed.

August 10, 2023

Undoubtedly, the widespread use of artificial intelligence (AI) technologies such as ChatGPT, ChatPDF, Scite.ai, and Transcript has changed the landscape of education. Although it is often hoped that students will use these technologies as a “co-pilot” to scaffold their learning experiences, it is quite easy to have them do all of the work (e.g., design studies, write papers, analyze data).

The higher education sector is working to understand the impact of widespread access to advanced AI technology. There are many benefits to AI software, and the scope of AI assistance can vary: even in the writing of this article, a red squiggly line identified typos, and a simple click fixed the errors. Referencing software can be incredibly helpful for academics who have to move between citation styles depending on the outlet. Pre-populated macros and syntax help with data analysis. New AI software goes far beyond this, rapidly synthesizing and analyzing incredible amounts of information. But where is the line that indicates co-piloting has gone too far?

The answer, of course, varies depending on the context. Before considering where the line is, it is helpful to come back to basics. Despite new technologies, the foundations of teaching and learning have not changed. Critically, the cognitive underpinnings of learning for humans remain unchanged. A very basic model of information processing is:

Exposure → Attention → Encoding → Storage → Retrieval

Exposure can be thought of as an opportunity to engage with information. Attention captures whether information is noticed (and to what degree). Encoding captures the degree to which information is processed in a meaningful way. Storage captures whether that information is committed to memory. Retrieval captures the degree to which stored information can be accurately remembered.

Learning can be facilitated or disrupted at any point in the information-processing chain, and this knowledge can be applied to course and assessment design. For example, if students are using artificial intelligence software (often called “co-pilot” software) to circumvent pieces of an assignment that would facilitate deep encoding (i.e., the thoughtful processing of relevant information), it is likely that they will not successfully retrieve the content later. Alternatively, if AI technology helps to facilitate deeper encoding, its use would contribute to later retrieval. Thus, when designing assessments to facilitate deep encoding, instructors should consider the ways in which co-piloting software can add to, or detract from, the learning goals.

Resources are limited, and it is important to know that there is no “one-size-fits-all” when it comes to designing for information processing. Counterintuitively, not all phases of information processing are always critically important for learning. For example, if mixing two chemicals creates a horrible odor, a lengthy lecture or written assignment may not be needed to facilitate deep encoding of that fact. Making the mistake and experiencing the consequence may be sufficient. Likewise, some topics may naturally garner student attention, and investing significant resources to capture attention is likely unnecessary. In contrast, some topics may be “dry,” and instructors would benefit from investing resources into facilitating attention.

Depending on the skills and content being taught, designing assessments to emphasize specific phases of information processing can be helpful, and AI may have a role to play. In some courses, for example, there is a legitimate need for students to memorize specific items. In cases like these, designing learning opportunities that facilitate storage and retrieval, such as practice testing, can be helpful for recall. Importantly, without careful attention to assignment design, students may use AI technology to bypass unproctored online practice tests, relying on the “co-pilot” to supply the best answers rather than practicing retrieval themselves. All is not lost, and AI can still play a helpful role in learning in these contexts: consider using no-stakes practice testing with rapid AI feedback to facilitate accurate storage and retrieval of content. Proctored summative assessments can then be used for unassisted mastery checks.

Ultimately, high-quality courses and assessments will be grounded in an understanding of how people learn and will help address factors in the environment that can hinder learning. AI is one example of an environmental factor that can both help and hinder learning at multiple stages of information processing, depending on the context and goals within a course. By intentionally considering how people learn, and how learning can be facilitated and disrupted at different stages within the context of a course, instructors can gain important insights for designing learning experiences and assessments that best match their needs.

Meghan E. Norris is the chair of undergraduate studies in the department of psychology at Queen’s University.
