
More on student preferences: good lectures vs. classroom technology

A Q&A with Concordia professor Vivek Venkatesh on the effectiveness of educational technologies.


A recent article in University Affairs, “Students prefer good lectures over the latest technology in class,” struck a chord with readers. The article received nearly 10,000 page views and was shared on Twitter well over 100 times, as well as on Facebook and other social media. Many of the tweets were accompanied by an enthusiastic “agreed!” or “good to hear!”

The article also generated a high volume of online comments – some supportive of the conclusions, but many critical. Several readers argued the article didn’t accurately reflect, or misinterpreted, the report’s findings. In defence, it should be noted that the article – at less than 400 words in length – gave only a brief overview of what was a 64-page report based on an in-depth survey of more than 17,000 university students and teachers in Quebec.

With that in mind, we have grouped the criticisms into several themes and asked one of the report’s chief collaborators, Vivek Venkatesh, to comment. Dr. Venkatesh is the associate dean of academic programs and development in the school of graduate studies at Concordia University.

Q: Some view the report as demonstrating that students, in general, don’t like the latest educational technologies. However, one reader noted that, in fact, the majority of both teachers and students deemed technologies used in the classroom to be effective. So which is it? Are students and teachers “anti-technology” or “pro-technology” in the classroom setting?

A: Our study shows that teachers have a more positive outlook towards the use of educational technologies than their students. Descriptive analyses of the survey data show that 53 percent of the 15,020 students surveyed had a positive perception of how information and communication technologies (ICTs) were being used in their classes. On the other hand, 86 percent of the 2,640 teachers had a positive perception of the use of ICTs in their classrooms.

Elsewhere, we used multiple regression models to examine how teachers’ and students’ perceptions of course effectiveness relate to different instructional strategies and to their respective attitudes towards the effectiveness of the ICTs being used. Teachers’ perceptions of their students’ positive learning experiences were strongly predicted by their own perceptions of the effectiveness of the ICTs used in the classroom and by the use of collaborative discussion strategies. Students, by contrast, associated an effective course both with engaging lectures and with the effective use of technology; however, the relationship between students’ ratings of course success and the effective use of ICTs was much weaker than the corresponding relationship for teachers.
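The general shape of such an analysis can be illustrated with a small sketch. This is not the study’s data or its actual model specification – the variable names, coefficients and sample are entirely synthetic, invented only to show how a multiple regression can reveal that one predictor (here, lecture engagement) carries more weight than another (perceived ICT effectiveness):

```python
# Hypothetical illustration only: fitting a multiple regression of a
# "course effectiveness" rating on two predictors, using synthetic data
# that mimics the pattern described in the interview.
import numpy as np

rng = np.random.default_rng(0)
n = 500  # synthetic sample size, not the study's 15,020 students

# Invented predictors: perceived ICT effectiveness and lecture engagement
ict_effectiveness = rng.normal(size=n)
lecture_engagement = rng.normal(size=n)

# Simulated outcome: lecture engagement is given the larger true weight
course_effectiveness = (
    0.2 * ict_effectiveness
    + 0.7 * lecture_engagement
    + rng.normal(scale=0.5, size=n)
)

# Ordinary least squares via a design matrix with an intercept column
X = np.column_stack([np.ones(n), ict_effectiveness, lecture_engagement])
coefs, *_ = np.linalg.lstsq(X, course_effectiveness, rcond=None)
intercept, b_ict, b_lecture = coefs

print(f"ICT coefficient:     {b_ict:.2f}")
print(f"Lecture coefficient: {b_lecture:.2f}")
```

With data generated this way, the fitted coefficient on lecture engagement comes out substantially larger than the one on ICT effectiveness – the same qualitative pattern the students’ model in the report is described as showing.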

Q: Students appear to have their preferences about how course material is delivered: they like “an engaging lecture.” But is this preference aligned with best practices? Some argue that what should matter is not which teaching methodologies students prefer, but which methodologies are most effective – and the two may not be the same.

A: Our study was not designed to demonstrate the effectiveness of any one instructional technique (or set of techniques) over others. We set out to create – and have succeeded in creating – robust, generalizable and predictive models of the factors that shape attitudes towards university course effectiveness. Prior research has examined student evaluations of teaching, or SETs: Wright and Jenkins-Guarnieri (2012), for example, analyzed the findings of 11 meta-analyses (193 studies) of SETs, with a specific focus on their construct validity, susceptibility to bias, practical use and effective implementation. Their review supports the use of SET measures in evaluating instructor skill and teaching effectiveness.

We strongly believe that, with a large enough representative sample and a probabilistic sampling strategy – both of which we used in our study – gathering students’ perceptions of course effectiveness is a valid measure, because it can reflect the reality of what is happening in the classroom – or, dare we say, what should be happening in the classroom. There have been various comments, both in response to the UA article and across the wider web, questioning the generalizability of our results on the grounds of a purportedly biased sample, or suggesting that our research was designed to reach specific conclusions. These assertions are simply untrue and have little logical basis.

Our report provides detailed explanations of the rigorous and probabilistic sampling strategy employed and how the results are, in fact, generalizable to populations of university-goers across similar education systems. Our sample includes a representative proportion of both full-time and part-time students, students who took online courses versus those who did not, as well as tenure-track and sessional instructors across the different faculties and programs offered in Quebec universities. Our instrument was rigorously pilot-tested and is derived from literature that adopts multiple perspectives on the factors that interact to create what we call course effectiveness in university classrooms.

Q: Some commenters claimed the article unjustly maligned active (also referred to as “engaged” or “interactive”) learning, whether technology-mediated or not. Many studies indicate this type of learning leads to the best outcomes. What would you say to that?

A: We disagree that the article in UA maligned active learning; it simply reported the results of our study. It should be noted that less than half (47 percent) of the students surveyed reported studying three hours or more per week outside of class time – three hours being the minimum recommended by professors in Quebec universities. In addition, a sizeable proportion of the students we surveyed reported not engaging in study strategies that have been empirically demonstrated to be effective, including completing the readings before class (26 percent), note-taking (31 percent), collaborative learning (36 percent), and self-reflection and monitoring of their learning (43 percent).

Active and engaged learning demands a great deal of responsibility and advanced study strategies from learners – something we did not see in our sample of students. And given the sample size and the probabilistic strategy we used to collect the data, our results are generalizable.

It should also be noted that “scaffolded” and interactive instruction now faces stiff competition from newer, more radical strategies that emphasize academic self-regulation of cognition and metacognition. For example, productive failure is an engaging learning strategy developed in 2006 by Manu Kapur of the National Institute of Education in Singapore. The approach uses ill-structured (or “unscaffolded”) problem solving as the primary learning task, followed by teacher-led instruction (e.g., direct instruction or lectures), rather than relying solely on guided and/or well-structured problem-solving activities with a guaranteed positive result. Dr. Kapur’s recent research on productive failure suggests that designing instruction for initial failure at solving ill-structured tasks has a significantly more positive impact on learning outcomes, measured via delayed post-tests, than instruction that scaffolds learning at the outset.

Q: One reader commented that there is a middle ground: that of web-enhanced/hybrid/blended formats where students can receive the best of all worlds, traditional and technology-mediated learning, “all in one.” Your thoughts?

A: This is too simplistic a way of looking at the complex process of learning through multiple media. It is a grave mistake to transpose and compare teaching or learning methodologies that work in face-to-face or blended environments to those that work in online environments. I have given media interviews on this issue as it relates to a recent finding from one of my research projects on interactions in online forums, and I encourage readers to listen to my interview on CBC’s Spark! radio show to find out more.

Q: Finally, someone argued that the article gave a confused message about online learning. If students want an “engaging and intellectually stimulating” lecture, there’s nothing preventing them from getting this experience via online video lectures. Agreed?

A: I would encourage readers not to conflate the educational, psychological, behavioural and neurological effects of watching an online lecture with those of attending a class in person, in a lecture theatre or classroom. Our survey shows that 68 percent of students prefer face-to-face courses to distance and online courses; this is not an inconsequential number, and it probably reflects learners’ preferences for how they wish to engage in the learning process. Again, I would refer readers to the Spark! interview I gave for a deeper discussion of this issue.

Q: Is there anything else you’d like to add?

A: Thanks for the opportunity to speak to these issues. We are in the final phases of preparing our English-language manuscript for submission to the journal Computers & Education, so look out for that piece when it is published.

Comments

  1. Reuben Kaufman / January 16, 2013 at 12:41

    Excellent follow-up, Leo, to a very important issue!

    Dr. Venkatesh’s detailed responses to your questions are much appreciated!



  2. Vince Salyers / January 16, 2013 at 14:35

    My comments are directed to Dr. Venkatesh’s dismissal of the “middle ground” approach when asked about this. Much research has shown that there are no differences in learning outcomes when comparing traditional teaching methods with those of blended and fully online methods. Rather than getting caught up in the “complex processes associated with learning”, I would suggest that the more important measures are the outcomes achieved at the end of the learning experience. I believe for far too long we have been painting the picture inaccurately. At the end of the day, it is about the pedagogy and student outcomes. Successful achievement of learning outcomes can be accomplished in f2f, blended, and fully online learning environments. I would encourage Dr. Venkatesh to get back into the literature with an open mind and revisit the f2f, blended and fully online tapestry as it changes daily and before our very eyes.
