The Waterloo.AI Institute, in partnership with Dr. Bessma Momani and Jean-François Bélanger from the Department of Political Science, is pleased to be hosting a series of webinars exploring the ethics of artificial intelligence and automated warfare. Each webinar will explore a different aspect of AI and warfare, bringing together both technical and policy experts to discuss the big questions, concerns, and developments in the field.
The webinars will be developed into a digital series by the Centre for International Governance Innovation in 2022.
You will find more about the webinars below, including how to register and descriptions of the topics and panelists.
Recordings are also available below.
Webinar 1: The Ethics of Automated Warfare and AI
Our first webinar explored how interconnected the development of artificial intelligence and warfare has become. This is a broad question, but one that forms the overarching backbone of this webinar series. Discussions around the military application of AI are nothing new and have sparked their share of controversy, including the “Stop Killer Robots” campaign, which has seen prominent AI scholars advocate against AI-automated warfare. As the opening webinar, it tasked panelists with defining for the audience what artificial intelligence is, what it has been used for in the military context, and the state of the art today.
This webinar was also done in collaboration with the North American and Arctic Defence and Security Network (NAADSN) as well as the Defence and Security Foresight Group (DSFG).
Our panelists included Drs. Maura R. Grossman (University of Waterloo), Matthew Kellett (Department of National Defence), and Robert Engen (Canadian Forces College). Dr. Bessma Momani (University of Waterloo) moderated the panel.
Webinar 2: The Big Question: What is the Future of Warfare?
Artificial intelligence in warfare is here, and it is here to stay. While the public debate in recent years has centered on whether “killer robots” should be developed and deployed, the question is no longer “should we automate warfare” but “what is the future of AI-backed warfare.” This is the broader question asked of our panelists for this webinar. Panelists will first be tasked with defining for the audience what artificial intelligence means in this context, and where we stand today. From there, they will engage with questions such as: what are the new frontiers of AI in warfare? Should AI simply be used for threat assessment, or should we go as far as we can and have automated systems fight wars for us in the future?
Webinar 3: AI, Conflict, and the Laws of Armed Conflict
This webinar engaged two interrelated questions. First, as AI progresses, we will see more and more development aimed at countering an adversary’s AI, as with most new technologies of warfare. What are the implications of adversarial and counter-AI technology from a normative and international law standpoint? Second, what happens when this type of decision no longer has a human element attached to it? How can we be certain that a balance is struck between the necessity to use force in certain situations and the need to limit it out of respect for the dignity of those involved? How can AI make this distinction? Additionally, what are the legal and normative implications of targeting AI infrastructure and personnel in war? Can artificial intelligence meet the legal responsibilities countries face under the Laws of Armed Conflict? This opens up the question of how AI can be used, when, and the intricacies of such use.
Webinar 4: The Morality of Artificial Intelligence in Warfare
What does AI in warfare really look like? What are the possible applications, and how will they be used? Who will use them and who will they be used against? This webinar will discuss the future of AI in warfare from a practical standpoint and seeks to make explicit how these weapons will be developed, deployed and used—and by whom. Rather than discussing the development of autonomous weapons in technical or logistical terms, this discussion aims to tease out how autonomous weapons operate within and augment the current geopolitical landscape and what the moral consequences of their deployment might be. We will also discuss what this means for those who work in the field of AI, and what responsibility we all have to ensure that the work we do leads to a more just and equitable future.
Webinar 5: What Happens When AI Fails? Who Is Responsible?
Artificial intelligence is bound to fail. It is a given. But what happens when it does, especially if it is designed to make policy decisions or execute orders in theater? While it may be able to identify its target, a tank or a type of boat for example, 95 percent of the time, the remaining 5 percent is elusive and difficult to predict. Moreover, why it fails is often counterintuitive to us. The question then becomes: how do we handle AI failures? How can we mitigate those mistakes, and is it actually possible to do so? When failure happens, who is responsible for the decisions taken, or the recommendations made, by AI? More importantly, what happens when the person who is responsible does not understand how artificial intelligence is programmed and works in the first place? Are there areas or tasks where AI is better suited and more accurate than humans? How is responsibility distributed when humans and AI interact? Does a human operator bear the same liability as AI when it comes to making a mistake? One way or another, what prior knowledge is required for the operator to be able to accept the risks of such a situation? Finally, how would accountability work in this case?
Webinar 6: Is AI the Solution? Or Is It Already Being Overused?
Artificial intelligence is the cutting edge of decision-making and computer science. However, there is also a sense that AI is nothing more than a buzzword applied to almost every facet of our lives right now. Given the current buzz surrounding AI, it is relevant to ask: what are the limits of AI? In many instances, such as with military technology, the availability of artificial intelligence is what made the technology possible in the first place. Can we expect systems to achieve decision-making capabilities and performance better than humans’ across domains? What are the trade-offs of acquiring such capabilities? This session wraps up the webinar series and asks the panelists to reflect on our AI-centric future. Is it the panacea it has been described to be? The answer is likely not. However, the panelists parse out the unrealistic expectations we have about artificial intelligence and give us a clearer view of what the future actually entails.