TRuST Scholarly Network’s “Conversations on Artificial Intelligence: Should It Be Trusted?” Lecture

Thursday, January 25, 2024
Panel discussion with Jenn Smith, Lai-Tze Fan, Makhan Virdi, Anindya Sen, and Leah Morris

The University of Waterloo, in partnership with the Perimeter Institute, hosted the TRuST Scholarly Network’s “Conversations on Artificial Intelligence: Should It Be Trusted?” lecture on Wednesday, January 17 as part of its “Conversations on…” series. An esteemed panel of academic and technology leaders from the University of Waterloo (including CPI’s Acting Executive Director Anindya Sen), Google, and NASA discussed a range of topics centred on how artificial intelligence (AI) and big data are significantly altering the way we work, live, and connect, and whether we should trust these technologies.

As AI solutions are being designed at a rapid pace to benefit myriad areas of society, public perception of AI was discussed at length, since support for and trust in AI solutions remain major concerns. Panelists agreed that the cultural response to AI spans a broad spectrum, from moderate to extreme opinions, underlining the need for concerted efforts to educate and inform the public on these topics. The benefits of AI technology may be substantial, but there are ongoing concerns about unintended consequences, levels of control and oversight, and many potential risks, such as loss of privacy and data-driven mis- and disinformation campaigns.

One example cited was how MIT and McMaster University researchers, using an AI algorithm, “narrowed down the haystack for finding a needle”: a new antibiotic that can kill a type of bacteria responsible for many drug-resistant infections. This type of advancement highlights the speed and efficacy of AI in research, although panelists agreed that rigorous oversight and caution are still needed when employing AI, stressing that while it is a world-changing tool, there is significant nuance to these discussions. Essentially, AI cannot be blindly implemented and trusted in pursuit of advancement; each application of AI must be designed with considerable care in order to balance its power with safety and control.

In terms of consumer and privacy protections, the panel debated the need for policy changes and the potential misuses of data, as well as the constant balancing act between providing reasonable levels of privacy and security while still realizing the benefits of a given technology. Discussion also expanded on the implicit biases that exist in data collection practices, and how AI can continue to perpetuate these biases unless they are proactively addressed.

As the discussion turned to policy, it was noted that we cannot govern what we do not understand, again highlighting the need for extensive public and governmental information campaigns, both to increase trust and to give policymakers the background needed to form effective policies. The point was also made that as AI continues to develop at a rapid pace, policymakers and innovators must remain diligent and proactive in adapting trust and policy strategies to this ever-changing landscape.

To view this highly informative and engaging presentation in its entirety, please visit the Perimeter Institute’s YouTube channel for more fascinating science videos.

Moderator: Jenn Smith, Engineering Director and WAT Site co-lead, Google Canada

Panelists:

  • Lai-Tze Fan, Professor of Sociology and Legal Studies and Canada Research Chair in Technology and Social Change
  • Makhan Virdi, NASA Researcher
  • Anindya Sen, Professor of Economics and Acting Executive Director of the Cybersecurity and Privacy Institute
  • Leah Morris, Senior Director, Velocity Program, Radical Ventures