
Western appoints first-ever chief AI officer

The new position is intended to support the university’s efforts to adopt a progressive approach to machine learning and explore the capabilities of artificial intelligence.

BY EVAN BRYANT | DEC 19 2023

This past September, Western University appointed Mark Daley as its first-ever chief AI officer. The appointment comes at a time of rapid growth in artificial intelligence, as universities attempt to navigate the use of emerging AI software in academia, including ChatGPT, a chatbot from OpenAI; GitHub Copilot, a coding assistant; and Bing AI, Microsoft's chatbot.

Dr. Daley has been at Western since 2004, when he first joined as a professor in the department of computer science. Since then, he has held a variety of roles, including associate vice-president (research) and special advisor in data strategy.


[Photo: Mark Daley]

In terms of what his exact responsibilities will cover, Dr. Daley is consulting with both students and faculty to learn what their aspirations and concerns are when it comes to integrating AI into their work. For example, the university hosted a town hall in the fall to hear thoughts on the school’s use of AI from students and professors.

He created an internal message board where faculty and staff can share their successes and concerns with AI development and practice. So far, Dr. Daley said, he has received more than 6,000 responses, and the mood has been “overwhelmingly positive.” Faculty and staff at Western have shared with their colleagues what has worked for them when using AI as instructors.

“I’m a computer scientist. When I sit down to code, if I have GitHub Copilot in my development environment it makes me 10 times more productive as a developer … Students in various disciplines need to figure out what works for them, my job is only to provide advice and support to instructors,” he said.

However, Dr. Daley feels that placing all the responsibility on the students is unfair. “I feel a moral imperative as an educator to help them to understand what ethical uses [of AI] are, and it’s going to be co-discovery.”

Jim Davies, a professor at Carleton University in the department of cognitive science, agrees with that assessment. He said the use of these large language models leads to more cheating on assignments such as essays as students get better at prompting the AI.

“The ease of use and availability of these new deep learning generative AI models can basically make it so that students aren’t learning anything,” Dr. Davies said.

He said the easy accessibility of such powerful software can keep students from learning through their coursework.

According to Dr. Daley, there can be a “time and a place” to ban the use of these tools, but they can also be incredibly beneficial for research, studying and intellectual debate.

“Just like getting feedback from a human, you’re not accepting it as absolutely true. [The chatbot] says things that are wrong, and it says things that I disagree with, but that’s healthy.”
