GenAI is in — so why are we still burnt out?
Will generative artificial intelligence end up diminishing, or increasing, researchers' workloads?
We’ve all seen the hype since the advent of artificial intelligence: AI will push Canada’s GDP to new heights! AI is redefining the future of work!
The world of research is no exception: by now, most researchers are using generative AI to help with tasks like project planning and management, literature searches, writing, translation, coding, mapping, and more. Ideally, using generative AI to expedite these tasks would free up researchers’ time for creativity and thought.
But using generative AI also raises many concerns for academics, including the reliability and copyright protection of generated content. There are concerns over the possible weakening of academic integrity; the atrophying of cognitive abilities; widening access gaps between individuals and institutions; and the technology’s ecological footprint. Various control measures are being proposed to address these widely recognized concerns. Along with the tri-agencies, several universities — including Université de Montréal and Université Laval — have published guidelines about the use of generative AI in research. Conferences, workshops and training sessions are being offered.
But there are larger stakes that remain unaddressed, in particular the socio-scientific context in which these technologies are emerging. Since 2012, the San Francisco Declaration on Research Assessment (DORA) and the Coalition for Advancing Research Assessment (CoARA) have been working to reform the academic ecosystem and encourage more open, responsible, and diversified research by rethinking how research excellence is assessed. But there is a tension between these reform efforts and the basic functioning of generative AI systems.
Generative AI tools draw from existing databases, which are replete with methodological, linguistic, sociodemographic, and discipline-specific biases. Depending on the task it’s assigned, generative AI may skew toward dominant scientific norms, especially those from WEIRD (Western, Educated, Industrialized, Rich and Democratic) societies. These norms are exactly what reform efforts are attempting to transcend.
Add to this another well-known dynamic within academia: in a system where productivity is synonymous with quantity rather than quality of output, tools that speed up certain tasks may actually exacerbate a scholar’s workload rather than reduce it. Some researchers are already feeling the pressure. Any time saved is offset by the pressure to produce more, faster; those who don’t keep pace risk falling behind their more tech-savvy colleagues.
So, is generative AI compatible with the principles of DORA and CoARA? Perhaps, but only if it isn’t used to reinforce exactly what these initiatives seek to change. Generative AI could contribute to a more holistic vision of research excellence, provided it is used to improve the clarity and transparency of science communications; improve linguistic and cultural accessibility to better reflect diversity; bolster collaboration for a healthier and more vibrant research environment; and support innovation and improved knowledge-sharing practices, especially between scientists and policy-makers.
By assisting with various tasks, generative AI should also enable us to better distribute our workloads, improve efficiency, and prevent us from having to work more than a healthy 40 hours a week. If, on the other hand, generative AI is used to increase expected rates of publication, it could perpetuate the very problems that DORA and CoARA are trying to correct. If generative AI perpetuates the current system, which values quantity over meaning, there is no guarantee that the technology will free up any time at all.
To ensure generative AI becomes a driver of positive change in academia, institutions, researchers, students and granting councils must together define terms of use that uphold a renewed understanding of research excellence. The time for open dialogue has come — not just for the research ecosystem as a whole, but for every researcher who deserves to be freed from overwork.