In my opinion

Student grades: How confidence can hinder success

It’s exam time. Research suggests that while some students will be pleasantly surprised by their results, a larger group will mistakenly believe they did much better than they actually did.

BY MICHAEL J. ARMSTRONG | DEC 07 2017

At this time of year, university students across the country are preparing for exams. Some will happily get higher-than-expected marks. But a larger group instead will be surprised by lower scores.

Negative surprises are common partly because we humans tend to be overly optimistic. Look at how people buy lottery tickets, borrow money or invest in stocks.

Students also tend to be unduly optimistic about their learning and forthcoming grades. Less-skilled students are especially likely to overestimate. This may lead them to make poor choices. If they mistakenly believe they’re already doing well, they may not study enough.

I often see this problem among my undergraduate students. So, I’ve experimented by giving them extra feedback about their grades and then surveying their reactions. A Chancellor’s Chair for Teaching Excellence award from Brock University funded this research.

Excessive student optimism

My initial study gave students a chance to forecast their final grades while the course was still under way. First, I used statistical analysis of previous years’ grades to create a forecasting formula.

I put that formula into a spreadsheet and gave it to my students, who could then type in their own quiz and assignment marks. The spreadsheet estimated their final grade, along with the probabilities of getting an “A,” “B,” and so on.
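To give a concrete sense of how such a spreadsheet might work, here is a minimal Python sketch of the same idea. The regression coefficients, residual spread and letter-grade cut-offs below are hypothetical placeholders, not the actual formula from the study; the point is only to show how a fitted model can turn a student's current quiz and assignment marks into an estimated final grade and letter-grade probabilities.

```python
import math

def forecast_final_grade(quiz_mark, assignment_mark,
                         coefs=(0.25, 0.45, 0.55),  # assumed intercept and weights from prior years
                         residual_sd=8.0):           # assumed spread of forecast errors (percentage points)
    """Point forecast of the final grade plus letter-grade probabilities."""
    intercept, w_quiz, w_assign = coefs
    predicted = intercept + w_quiz * quiz_mark + w_assign * assignment_mark

    def prob_at_least(cutoff):
        # P(final grade >= cutoff), assuming roughly normal forecast errors
        z = (cutoff - predicted) / residual_sd
        return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    probs = {}
    p_ge_higher = 0.0  # P(grade >= the next-higher cutoff); nothing sits above "A"
    for letter, cutoff in [("A", 80), ("B", 70), ("C", 60), ("D", 50)]:  # assumed cut-offs
        p_ge = prob_at_least(cutoff)
        probs[letter] = p_ge - p_ge_higher  # probability of landing in this letter band
        p_ge_higher = p_ge
    probs["F"] = 1.0 - p_ge_higher          # everything below the "D" cutoff
    return predicted, probs

# Example: a student with a 72% quiz average and a 68% assignment average
predicted, probs = forecast_final_grade(72, 68)
print(round(predicted, 1), {k: round(v, 2) for k, v in probs.items()})
```

Treating the forecast error as roughly normal is a simplifying assumption made here for illustration; in the study, the probabilities came from the statistical analysis of earlier cohorts.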

In total, 144 students voluntarily tried this forecasting exercise. Interestingly, “A” students were seven times more likely than “D” students to participate. That’s ironic, as the “D” students could have benefited more from it.

The study revealed excessive student optimism. Twenty-nine percent of participants said their forecast grades were lower than expected. Only six percent said they were higher.

The key to hard work

One third of participants in this study said the forecasting experience made them feel more confident. Another third said it made them feel more worried. Despite that, most of them (74 percent) agreed that grade forecasting was worth doing in future courses. Only six percent recommended against it.

Importantly, nearly half of students said they were studying more than planned after the experiment. (As their teacher, I was happy to see that!) Just three percent said they were studying less.

Students tended to study more if the forecast worried them. They also studied more if the forecast grade was low, even if it matched their expectation.

Surprisingly, students didn’t study more if the forecast was lower than expected. The gap between the forecast and their expectation didn’t seem to matter.

Comparing grades and goals

To investigate further, I collaborated with marketing professor Herb MacKenzie on a second study. This time, before the grade forecasts, we asked students what grade they expected to get in the course.

We asked them twice: Near the beginning of the semester, and again near the end. This revealed how their goals evolved over time.

They also completed questionnaires measuring their sense of “personal control.” Students scoring high on the questionnaire feel in control of their own lives. They believe success mostly depends on their own actions. (“I got a low mark because I didn’t study enough.”)

Students scoring low believe they have little influence on their own outcomes. (“I got a low mark because of bad luck.”)

Some of our results were expected. Students again tended to be overly optimistic. Their actual grades ended up lower than their initial grade goals, and also lower than the goals they updated later in the term.

Confidence as a hindrance

However, the grade gaps were smaller later in the course. This was partly because students studied more when their grades fell below their goals. But surprisingly, the gaps narrowed mostly because students lowered their goals. Many learning theories predict the former, but few discuss the latter.

At both points in time, the gaps between grades and goals were wider among students with weaker ability. It seems they didn’t know what they didn’t know.

Another surprise was that the gaps were wider among students with high personal control scores. They confidently set higher goals, but did not achieve them. In most contexts, a sense of control is helpful. But here it was a hindrance.

This may have been partly because our experiment involved first-year students. They were still adapting to differences between high school and university.

Students feeling in control may have believed that their high school study habits would still work in university. But what worked before may no longer have been good enough.

Finally, we again found that students didn’t study more in response to gaps between their expectations and their forecast grades. The gap between their goals and their current grade seemed more relevant.

Better learning, greater awareness?

These experiments gave students some unconventional feedback in addition to traditional grades. The immediate goal was to help students better understand their course progress and how much studying they needed to do.

The experience may also have helped them become better at self-assessment in general. That skill could help them make better decisions in future courses too.

This research is part of a larger program to help our students make better decisions about their studies. We want them to learn more, earn higher marks and avoid having to retake courses.

Helping students in this way should also benefit universities and their government funders.

Michael J. Armstrong is an associate professor of operations research at Brock University. This article was originally published on The Conversation. Read the original article.
