One-third of peer reviewers for the Canadian Institutes of Health Research (CIHR) submitted outstanding reviews, while no more than one in 20 did not perform adequately, according to an analysis of the first three years of the funder’s review quality assurance (RQA) program.
“In general, the quality of peer reviewers in the CIHR system is in pretty good health,” said Clare Ardern, a physiotherapist at the University of British Columbia who led the analysis, published in the journal FACETS in August. “That’s important for applicants to know, it’s important for people to have trust in the system.”
Launched in 2019, the RQA process is part of CIHR’s efforts to professionalize peer review and improve the supports provided to reviewers. The program includes all CIHR grant competitions. “We’re giving out roughly $1.3 billion a year through peer review, we want to make sure it’s the best that it can be,” said Adrian Mota, CIHR’s associate vice-president of research programs – operations.
In the analysis of the program, Dr. Ardern and her colleagues looked at more than 4,000 RQA evaluations carried out from 2019 to 2021. As the analysis only covered three years, it was difficult to determine whether the RQA evaluations were leading to an overall improvement in the quality of peer review for CIHR grants, but she said there were no obvious spikes in either good or bad evaluations during that time.
“When we looked at these evaluations, the vast majority of reviewers were performing as people would expect,” Dr. Ardern said. “As a grant applicant myself, when I look at these numbers I feel like I can trust that this process is doing the things that we expect as researchers.”
Following each meeting of a peer review committee, the committee chairs, scientific officers and CIHR staff complete a standard evaluation of each reviewer. Performance is judged based on the quality of the written reviews, participation in the meeting and responsiveness – such as submitting reviews or conflict of interest forms on time.
“The idea is not to check if the scientific opinion is right, but we do have criteria in our competitions, we have standards with respect to what we need to provide back to the applicant so they understand the critique of their application and they have confidence it was done by experts, that it was fair and transparent,” said Mr. Mota.
The process helps identify which reviewers are particularly strong and could potentially become committee chairs, scientific officers or peer review mentors – and which reviewers are not up to the job. Depending on what issues are identified by the evaluation, CIHR has a variety of interventions to address the problem, ranging from a letter identifying the issue to providing extra training.
“We try to be positive, the goal is not to eliminate them but to get them up to the standard,” said Mr. Mota. “Though there are cases where some are not asked to participate again.”
The analysis could also help inform the current debate over whether CIHR should return to in-person peer review meetings after the pandemic forced them to go fully virtual. Dr. Ardern did not see any significant change in the scores between competitions that were reviewed in person and those reviewed online. “It’s not definitive, but we can say there didn’t seem to be a drastic change in quality assurance measures,” said Dr. Ardern.
CIHR is not the only funding agency to evaluate the quality of its peer review – a report by the European Science Foundation identified 25 funders who do so as standard practice – and Mr. Mota said many of these agencies consult with and build on each other’s systems. According to Dr. Ardern, few other funders publish any information about their evaluations. “Kudos to CIHR for having such a well-developed process in place,” she said. “Anything that brings more transparency to the system is a good thing.”