In a recent University Affairs article, Alex Gillis provides an excellent summary of the emergence of so-called “predatory” journals, which were brought to the attention of the scientific community by Jeffrey Beall. An academic librarian at the University of Colorado Denver, Mr. Beall has campaigned relentlessly against these low-quality journals, notably by means of his famous “blacklist.”
Then, in mid-January, there was a dramatic development: Without so much as a warning or explanation, Mr. Beall suddenly shut down his site. Countless followers expressed concern at the disappearance of Beall’s List. Archived versions are still accessible, however, and there have been rumours that a similar initiative is already being undertaken by Cabell’s, a commercial academic journal evaluation site, with Mr. Beall’s collaboration.
While I acknowledge Mr. Beall’s undeniable contribution, I think it necessary to qualify the value of the list he built up over the years and, more generally, the usefulness of “blacklists” or “whitelists” in making informed decisions about which open access journals to publish in – or not to publish in.
Mr. Beall has publicly posted the (rather long) list of criteria and indicators that he once used to assess a publisher or a journal. Although some clearly point to fraud or false representation, others are more questionable. One recent study shows that journals seen as legitimate and even prestigious by researchers in a particular field may nevertheless fail Mr. Beall’s criteria. The problem stems from the lack of transparency about how Mr. Beall applied these criteria in his decisions – made by him alone – to place publishers or journals on his blacklist.
In many cases, he gave only general and sometimes very brief comments in support of his decisions, which made it difficult to understand what role the various criteria played, let alone the threshold at which a publisher or journal would be added to the list. There was an “appeal committee” available to anyone wishing to challenge a decision, but nothing was known about its makeup or how it operated.
Another basis for questioning the relevance of Beall’s List, or any other list of this kind, is its binary nature: a journal or publisher was either on the list or not on the list. Although the list featured “potentially, possibly or probably” predatory publishers, no such distinction was made in the list itself, nor in most discussions in which it was cited. The term “predatory journal” thus covers a broad spectrum of situations ranging from outright fraud to dubious publishing or peer-review practices.
In this regard, it is worth noting that the same deficient practices are found in the traditional subscription publishing model, which Mr. Beall did not scrutinize. Added to this limitation was another feature of Beall’s List: if a publisher was included on the list, all of its journals were automatically labelled “predatory.” Mr. Beall, for his part, has made no secret of his visceral aversion to open access, nor of his belief that the existence of “predatory” journals puts scientific publishing, and even science itself, in jeopardy. For an idea of his opinions, see “Debasing the Currency of Science: The Growing Menace of Predatory Open Access Journals” and, in particular, his quasi-surrealist essay, “The Open-Access Movement is Not Really about Open Access.”
There is, however, another approach, the opposite of Beall’s: journal “whitelists.” Alex Gillis advises against them on the grounds that they include “many predatory journals.” However, one of the main lists of that type, the Directory of Open Access Journals, embarked on a review process over two years ago under which the 10,000 journals listed at the time had to file an application for readmission. This process, still under way, is based on more rigorous criteria and validation, as I was able to observe as an associate editor of one of the open access journals that had to reapply. In addition, the process is applied to individual journals rather than to publishers.
The review has had a major impact: thousands of journals have been removed, while only a third of new applications have been accepted. As one might expect, journals that were on Beall’s List are now much less visible on the DOAJ. Based on a compilation I made at the time and have just repeated, their share has fallen from 10 to 4 percent of the total, a figure likely to fall even further by the time the review process ends.
In my view, the simple inclusion (or absence) of a journal on a list, whether black or white, should not be taken as an absolute criterion of acceptability or unacceptability, but merely as an indicator whose weight depends on the reliability of the list. Without presuming anything about any future reincarnation of Beall’s List, I have a distinct preference for the DOAJ list, as does the thinkchecksubmit.org site recommended by Mr. Gillis.
At the same time, I remain convinced that there must be some scrutiny – cursory though it may be – of a journal and the articles it has published. The opinions of colleagues who have recently published in the field concerned can also be taken into consideration, keeping in mind the still widespread biases against open access. These preconceptions, sometimes fuelled by alarmist or simplistic contentions such as those of Mr. Beall, are most often the result of unfamiliarity with this method of publication.
Marc Couture is an honorary professor (retired) at TÉLUQ, Quebec’s distance-learning university.