
Could ChatGPT be your next co-author?

As generative AI disrupts the world of teaching and learning, academia has been slow to address its use in scholarly research.


Universities have made it clear: they are not fans of students using generative artificial intelligence (AI) in their coursework, as it could be considered academic misconduct. As the conversation on the use of AI in teaching and learning continues to evolve, many are starting to ask questions about the potential role these emerging technologies could play in academic publishing and authorship.

Mark Humphries, a Wilfrid Laurier University history professor who blogs about generative AI, sits on the university’s generative AI working group, but says faculty research has yet to appear on the group’s agenda. “Research is not part of the discussions right now.”

Isabel Pedersen, a professor at Ontario Tech University who has written extensively on AI and authorship, is on the same page: “I’m not seeing any conversation about values and how we judge an author who discloses the use of AI or generative AI. This has happened so quickly and it’s so disruptive we don’t have a values system for what this means in academic publishing.”

AI already supports the research process through data mining, helping academics recognize text or sort through genetic information. This has triggered debates about bias and about computers taking work and training opportunities from grad students. But there has been little discussion of the ethics and practicalities of using AI to produce and publish journal articles.

In early 2023, numerous publishers issued policies about generative AI. The Springer Nature journals, Elsevier and Cambridge University Press said the technology cannot be considered an author and its use must be cited, while the Science family of journals flat-out said no to AI-generated text or images.

However, universities have not followed suit with statements or policies of their own. “Our research office does not have a policy yet and I think that’s very typical of what other universities are doing too,” said Dr. Humphries.

Nick Baker, director of the office of open learning at the University of Windsor, said he’s also not surprised. “It’s much harder to know what to do with tenured faculty members if they use these tools, compared to a student. Research ethics are trickier to figure out,” he said. Dr. Baker expects to see increasingly sophisticated generative tools and updated versions geared to individual tasks and disciplines.

Already, academics can use the likes of ChatGPT to build an outline or write an early draft of a journal article. “As a historian, if you wanted [ChatGPT] to write your papers for you, it just doesn’t do that,” said Dr. Humphries. “It might be more helpful in disciplines where the articles are much shorter or the views are more standardized.”

As well, anyone who uses open, online tools has to be aware that these databases may store the material they input, risking a loss of ownership.

Dr. Pedersen said blanket policies at universities would fail to cover the wide range of attitudes around academic writing. “In some STEM-based disciplines, the art or practice of writing the article might not be as valued as much as the scientific outcomes or contribution to knowledge. But in the social science or humanities, the style of writing could be part of the contribution.”

Already, many academics use AI-enabled grammar-checkers and translators. “People who do not have English as a first language, in the current system, they’re quite disadvantaged,” said Yves Gendron, a professor of accounting at Université Laval who has written about the potential implications AI could have on academic publishing. “Software like DeepL [used for translation] for me is a kind of a miracle. I use it perhaps five or six times a day.”

Generative AI can also conduct literature searches to see who else has written on a topic and help build literature reviews. “The difference here is if I ask a question using Boolean logic in a search engine, I get back a thousand papers that mention the concept,” said Dr. Baker. Generative AI can tap into the right database and offer a more curated list.

And soon, generative AI will likely get much better at formatting. “The whole process of manuscript submission is something AI can help with. It can generate your footnotes, double check them with the references and automate the submission process,” said Nasser Saleh, librarian for educational initiatives at Queen’s University and a member of Making AI Generative for Higher Education, a two-year research project that includes 17 universities across North America. Dr. Pedersen noted that researchers could also use AI-powered tools to pull together a website for knowledge-translation, generate a lay summary of a study in advance of a talk or media interview, or even help automate workflows.

Even some journal publishers use AI behind the scenes. Dr. Gendron, co-editor of Critical Perspectives on Accounting, realized this when publisher Elsevier began providing him with lists of peer reviewers, the top ones being those with the highest h-index. “What I like to do is to get one experienced reviewer and one younger, to develop the community. AI does not do this,” said Dr. Gendron. He also receives monthly peer review requests from Qeios, an AI-driven publisher that operates without human editors.

“I am not on the optimistic side regarding AI, I am more on the worrying side,” he said. He also has concerns that generative AI could take over the publishing process, with the corporate interests of publishers dictating the rules, no human editors at the helm and simplistic bots deciding what’s readable and well researched.

This could lead to unethical people using AI to pump out articles backed by sketchy research. “It’s clear to us that digital software allows people to produce reviews of literature in very little time,” said Dr. Gendron.

Dr. Pedersen worries that thoughtful use of AI will be difficult to distinguish from corner-cutting. “I’m nervous when I see [AI-] generated datasets, fake datasets. How will human reviewers be able to spot these?”

Some academics are more optimistic. “It’s a good thing, it’s changing the game,” said Dr. Saleh. He thinks faculty who outright refuse or mock AI are missing out on the potential of these tools to save time and enable their research – plus there’s an opportunity at this nascent stage of generative AI to ensure it remains a useful tool for the sector. “I think it can be embedded into research ethics.”

Dr. Baker agrees that avoiding difficult conversations could put academic integrity, privacy and more at risk. “AI at the moment is the least capable it will be. Now is our opportunity to set policies and direction. Burying our heads in the sand leaves it to the bad actors to control how this goes.”
