Dean, Prof call for societal impacts of AI to be considered alongside technical regulations

Wednesday, March 6, 2024

This opinion piece by Mary Wells, Dean of Waterloo Engineering, and Ashley Rose Mehlenbacher, Associate Professor and Canada Research Chair in Science, Health and Technology Communication at the University of Waterloo, recently appeared in the Hill Times, a popular source for Canadian politics and government news.

The ancient Greeks gave us a helpful term for thinking about the opportune moment: kairos. This is an apt word to describe our present circumstances as policymakers look to regulate AI. It’s a moment ripe for an important discussion about how we should responsibly use this burgeoning technology in our everyday lives.

Engineers, developers, ethicists, and other scholars have long been thinking about how AI might impact society, but we need a wider conversation about AI to help determine what is good or ethical about this new technology. Consider social media, for example. We probably should have sought a broader conversation about its effects on society in the early days of MySpace or Friendster, so we could have better anticipated its anti-social uses and impacts.

Indeed, the future of AI is our societal future, too. It’s essential to envision this future proactively to shape our practices for technology development. Practically, this means finding ways to bridge the process of developing this technology with the people who will be impacted by its use. Recent weeks have seen a steady drumbeat of calls to regulate AI in Canada. OpenAI’s release of a demo of Sora—its text-to-video tool—has shown how in just over a year the technology has moved from chatbots to chat-to-video production. Let’s build on that conversation to consider AI’s broader and more long-term impacts.

The federal government’s proposed Bill C-27 is one regulatory approach policymakers have put forward, but it has been slow-going compared with the rapid pace of technical AI advancements. While the bill focuses on “high-impact” technologies, some AI advocates—such as Yoshua Bengio, scientific director of Mila, an AI institute in Quebec—noted during a parliamentary committee last month that societal impacts need to be considered alongside regulations.


Regulatory frameworks should focus on identifying gaps in existing legislation to plan more comprehensive approaches—for example, strengthening privacy measures for handling sensitive health information, or ensuring that distributing non-consensual pornographic material carries heavy penalties. These exercises move us from merely reacting to AI toward a larger conversation about how we can improve everyone’s lives by embedding long-term, strategic planning for technology innovation, development, and production in Canada—while addressing risks.

There are numerous precedents for this. Buildings, roads, and other physical infrastructure, as well as aircraft and automobiles, are highly regulated with the objective of keeping us physically safe. Likewise, environmental regulations have been enacted to help protect people’s health and safety, to protect our environment from pollution, and to protect people in the workplace.

As we develop these frameworks, industry, researchers, and government policymakers must work collaboratively while listening critically to the voices of those whose lives are affected by technological advancement.

These conversations are similar to a discussion we took part in with the University of Waterloo’s Trust in Research Undertaken in Science and Technology network, which focuses on the people who will be impacted by technology innovation.

Alongside AI industry experts, the discussion included members of Waterloo’s Faculty of Arts, where the social impacts of technology are studied critically to help us see both the potential and the perils of technologies. We expect more conversations like these to continue.

Serious impacts will likely come as AI matures: anticipating job losses, for example, and adapting social measures to ensure support for those facing the life-changing effects of new technologies. Other impacts include the environmental stresses of the data centres that run AI technology, and how humans contend with AI influencing their work in Canada’s gig economy.

As Ottawa mulls over Bill C-27 and puts its stamp on regulating AI, addressing AI’s challenges in the social arena will also help us understand how new technologies could impact child development and touch many people’s lives. The pace of change of new technologies often makes it hard to keep up, so thinking ahead will be critical to shaping how we manage these changes as we enter this kairotic moment.
Photo: Mary Wells, Dean of Waterloo Engineering