In my opinion

Universities should empower artificial intelligence, not denigrate it

Institutions have been distracted from the vast potential of AI and are overreacting to the perceived threats.

BY CONSTANTINE PASSARIS | SEP 24 2024

Artificial Intelligence (AI) is clearly a game changer for universities across Canada. AI can make a significant and profound impact on teaching and research capacity. Universities have been at the forefront of intellectual, scientific and technological discovery since time immemorial. AI is simply the contemporary wave of spectacular innovations on this journey of historical continuity.

However, universities across Canada are experiencing a Malthusian moment with respect to AI. Thomas Malthus was an 18th-century economist and demographer who predicted the demise of humanity because population would increase at a geometric rate while the food supply would increase at an arithmetic rate. In consequence, the food supply would be insufficient to meet the demand triggered by a growing population. What he failed to account for was the positive role of scientific advances, groundbreaking inventions and innovative agricultural machinery, which would enhance agricultural productivity and increase the food output of the agricultural sector.

Fast forward to the present, when universities across Canada are experiencing elevated anxiety and pushing back on the significant potential of AI for fear that it will facilitate student cheating and compromise academic integrity. In my opinion, universities have been distracted from the vast potential of AI and are overreacting to the perceived threats.

In effect, we are on the cusp of a transformational innovation that has the capacity to empower spectacular advances in pedagogy and facilitate cutting edge research at our universities. Universities have always aspired to push conventional boundaries and explore new scientific frontiers.

The span of time from the 18th century to the present records only two major structural disruptions to the social and economic landscape: the Industrial Revolution and the IT Revolution, with AI serving as an extension of the latter. AI relies on algorithms and systems to perform tasks once reserved for human intelligence; it is capable of processing big data, archiving our cumulative knowledge and best practices, identifying patterns, making predictions and automating repetitive tasks.

Humanity has transitioned from valuing the resources under our feet to those between our ears. Human capital is the signature mark and the foundational pivot for the new global economy of the 21st century. AI has emerged as the leading innovation for the creation of human capital and its impactful contributions to our knowledge economy. Universities must take a leadership role in driving the evolution of AI to mitigate the harm and compound the benefits.

Universities are our leading institutions for creating human capital, driving social change and economic growth, building workforce skills, disseminating knowledge, fostering critical thinking, and serving as knowledge keepers and incubators of scientific and technological discovery. All of this for the purpose of creating a more prosperous society and a better world. During the last two decades, Canada’s economic potential and prosperity have been sidelined as a direct result of low productivity levels. AI has the capacity to reverse this downward spiral and set a new course for Canada that will result in enhanced productivity and higher levels of economic growth.

Along with empowering AI, we need to simultaneously prevent the collateral hiccups that it may cause by mitigating the risk of academic malfeasance. At the end of the day, universities should grab this opportunity to position AI for the purpose of enhancing humanity’s wellbeing rather than dwell on its capacity to create minor distractions.

My checklist of actionable priorities for Canadian universities regarding AI includes:

    1. Acknowledge the empowerment that AI can strategically contribute to the university’s academic mission.
    2. Develop academic policies and guidelines for harnessing the vast potential of responsible AI to support the university’s mission.
    3. Draft a Code for the Ethical Use of AI and submit it for approval to the university senate.
    4. Leverage AI capacity for the purpose of achieving the university’s research ambitions for its faculty, students, research output and funding.
    5. Position AI tools strategically to enhance the efficacy of teaching and pedagogical outcomes.
    6. Provide students with AI tools and skills that will accelerate their professional careers.
    7. Encourage all courses to update their curriculum by integrating AI capacity from a list of best practices for the purpose of blending technical skills and academic content.
    8. Combine AI and experiential learning for the purpose of enriching the student learning experience.
    9. Nurture the effective use of AI capacity as a tool for life-long learning, professional development, and career upskilling.
    10. Build AI as a bridge that will facilitate interdisciplinary learning and research.

Universities should embrace the public good in AI and get in on the ground floor of this spectacular innovation. This will provide them with the moral authority to nurture its development, provide the intellectual leadership and operationalize its administration. All of this for the purpose of positioning AI as a catalyst to chart an inspired roadmap for the betterment of humanity.

Constantine Passaris is professor of economics at the University of New Brunswick, a Dobbin Scholar (Ireland), an Onassis Foundation Fellow (Greece) and a recipient of the Order of New Brunswick.

COMMENTS

  1. Kathryn J Norlock / September 25, 2024 at 14:37

    A point of disanalogy between Malthus and some university members today is that Malthus was wrong, and those faculty or staff being described as ‘over’-reacting to cheating today are not wrong; it is not simply that AI “will” facilitate cheating on certain sorts of assignments. Rather, the problem is that it already has, and did so quite immediately upon LLMs becoming widely available. So Malthus’s beliefs were not factive, but our colleagues’ complaints are factive. Early experiences with cheating were dispiriting and time-consuming for many of our colleagues. It is hasty to dismiss such reactions to LLM misuse as over-reactions to all forms of AI.

    And I doubt that there really are whole universities at which the sum total of the engagement of many departments and individuals with any and all forms of AI is limited to negative responses to the ease of cheating with LLMs. At my university, for example, some of us conferred about how to respond to cheating with LLMs while others developed courses engaged with other forms of AI at the same time. This article seems to be straw-manning one understandable and, given the badness of cheating, reasonable response to just one form of AI, while characterizing forms of AI as implicitly unified and needing our “empowering,” which strikes me as false. Universities are already engaging with AI in sundry ways, and they can do so compatibly with lamenting the increase (not just a change in forms of cheating but an increase in instances) in the ease of and frequency of cheating.

  2. IRENE TEMPLEMAN / September 27, 2024 at 09:15

    Sadly, AI is only as good as those who originally designed the algorithm. Human nature holds many biased thoughts and beliefs, and this can restrict what individuals want others to know. Yes, libraries and the like could do the same and stop certain books from being distributed, but you could go to another library or bookstore; many people now are too lazy, or lack the initiative, to dig deeper, find the actual research papers and review the authors’ analysis.
    Students want a quick answer and, again sadly, due to the growth of many institutions, professors are no longer able to assess an individual’s knowledge of a subject through face-to-face or even group discussion. If we trust that students have absorbed everything they have submitted when, in many cases, they did not write a word of it or even read it, then we are sadly mistaken. I do not put all students in this category, but the numbers are growing.
    There are now many articles showing AI bias, where it spills out answers that are limited in knowledge if not totally inaccurate. One day this will cost lives, but by then we may have lost the ability to analyze, evaluate and make decisions based on real data, using our actual brains rather than listening to a computer tell us what its master wants us to know.

  3. Paul Allen / September 27, 2024 at 12:18

    Notice that this article says nothing about: 1) human meaning, 2) reading texts and 3) the dystopian effects of many technologies to list just a short sample. Instead we are treated to corporate cliches and jargon like “human capital”. Does he know anything about what universities traditionally did? Articles like this only deepen my suspicion of AI and its advocates. They lead me further to the belief that AI needs to be banned. Investors are abandoning it already: https://www.theglobeandmail.com/investing/markets/stocks/BRK-A-N/pressreleases/28669726/warren-buffetts-secret-portfolio-is-dumping-shares-of-3-supercharged-artificial-intelligence-ai-stocks-no-not-nvidia/

  4. Itiha Oswald Mwachande / October 2, 2024 at 03:10

    The truth has been spoken

  5. Sura / October 2, 2024 at 03:29

    Oh nice topic, It’s true, we as academicians have to embrace AI and it’s potentials in assisting us perform our activities timely and professionally

    • Ann Reynolds / October 4, 2024 at 00:14

      “It’s” is a grammatical mistake — the only time it should be used is as a contraction for “it is”. “Its”, on the other hand, is possessive by nature. Would you put an apostrophe in “his”?

      My take on AI, as a contingent academic worker, is that administrations will use AI to replace sessionals, tutors and other such employees. If you have AIs teaching (or simply “assessing”) and the students are using AI to do their work, what value will education have other than training AIs?