The Waterloo.AI Institute, in partnership with Dr. Bessma Momani and Jean-François Bélanger from the Department of Political Science, is pleased to host a series of webinars exploring the ethics of artificial intelligence and automated warfare. Each webinar explores a different aspect of AI and warfare, bringing together technical and policy experts to discuss the big questions, concerns, and developments in the field.

The webinars will be developed into a digital series by the Centre for International Governance Innovation in 2022.

You will find more information about the webinars below, including how to register, along with descriptions of the topics and panelists.

Recordings are also available below.


Upcoming Webinars

November 29, 2021
Speakers: Alex Wilner (Carleton University) and Eleonore Pauwels (Global Center on Cooperative Security)
Topic: What Happens When AI Fails? Who Is Responsible?

REGISTER

January 2022
Speakers: Heather Roff (University of Colorado) and Denise Garcia (Northeastern University)
Topic: The Chicken or AI Question: Should Ethics Come Before Development?

Register to be notified when registration opens



Webinar Series: The Ethics of Automated Warfare and Artificial Intelligence

What Happens When AI Fails? Who Is Responsible?

Artificial intelligence is bound to fail; that is a given. But what happens when it does, especially if it is designed to make policy decisions or execute orders in theater? An AI system may be able to identify its target, a tank or a type of boat for example, 95 percent of the time, but the remaining 5 percent is elusive and difficult to predict. Moreover, why it fails is often counterintuitive to us. The question then becomes: how do we handle AI failures? How can we mitigate those mistakes, and is mitigation actually possible? When failure happens, who is responsible for the decisions taken, or recommendations made, by AI? More importantly, what happens when the person who is responsible does not understand how the artificial intelligence is programmed and how it works in the first place? Are there areas or tasks where AI is better suited and more accurate than humans? How is responsibility distributed when humans and AI interact? Does a human operator bear the same liability as AI when a mistake is made? What prior knowledge does the operator need in order to accept the risks of such a situation? Finally, how would accountability work in this case?


Eleonore Pauwels

Eleonore Pauwels is a Senior Fellow with the Global Center on Cooperative Security, NY. Eleonore conducts in-depth research on the security and governance implications generated by the convergence of artificial intelligence with other dual-use technologies, including cybersecurity, genomics and genome-editing.

Eleonore provides expertise to the World Bank and the United Nations, as well as to governments and private sector actors, on AI-Cyber Prevention, the changing nature of conflict, foresight and global security. In 2018 and 2019, Eleonore served as Research Fellow on Emerging Cybertechnologies for the United Nations University’s Centre for Policy Research. At the Woodrow Wilson International Center for Scholars, she spent ten years within the Science and Technology Innovation Program, leading the Anticipatory Intelligence Lab. She is also part of the Scientific Committee of the International Association for Responsible Research and Innovation in Genome-Editing (ARRIGE). Eleonore is a former official of the European Commission’s Directorate on Science, Economy and Society.

 


Alex Wilner

Dr. Alex Wilner is an Associate Professor at the Norman Paterson School of International Affairs, Carleton University, Ottawa, Canada. He is a leading scholar of contemporary deterrence theory and practice. His research – which explores the nexus between deterrence theory and emerging security considerations, domains, and environments – has shaped the fourth, and now fifth, generation of deterrence scholarship. Among his over two dozen journal publications, his articles on the subject of deterring terrorism and cyber deterrence have been published in top-ranked IR journals, including International Security, the Journal of Strategic Studies, Security Studies, and Studies in Conflict & Terrorism. His books include Deterrence by Denial: Theory and Practice (eds., Cambria Press, 2021), Deterring Rational Fanatics (University of Pennsylvania Press, 2015), and Deterring Terrorism: Theory and Practice (eds., Stanford University Press, 2012).

Since joining NPSIA in 2015, his broader scholarship has been awarded over $1M (CAD) in external research funding: he was awarded a Government of Canada SSHRC Insight Development Grant (2016-2017), a prestigious SSHRC Insight Grant (2020-2025), and a Government of Ontario Early Researcher Award (2021-2026) to study state and non-state cyber deterrence; two major IDEaS grants (2018-2021) and several MINDS grants (2019, 2020) from the Department of National Defence to explore Artificial Intelligence (AI) and deterrence; several smaller research grants from the Canadian Network on Terrorism, Security, and Society (TSAS); and a major Mitacs grant (2020-2022) to explore emerging technology and Canadian defence policy and strategy.

 

The Chicken or AI Question: Should Ethics Come Before Development?

AI, just like nuclear weapons, will act as a great disruptor of warfare as the technology develops further. In what ways exactly will artificial intelligence be disruptive, and what are the ethical and normative implications? Will AI follow the path of nuclear weapons, where ethical and moral questions crept in only after the arrival of the bomb, and after it became clear that its spread could not be stopped? Or will we instead learn from the past and develop strong ethical guidelines and standards to mold and shape the ways in which AI is developed and used in the future? This conundrum is perhaps most salient in the military use of AI, where in the extreme a system could take decisions, without human assistance or supervision, that result in battle deaths or unintended casualties. Additionally, how can ethical guidelines be universal given how divergent countries are on the question?


Heather Roff

Dr. Roff received her Ph.D. in political science from the University of Colorado at Boulder (2010). She is currently a research scientist at DeepMind, one of the leading artificial intelligence companies in the world, as well as an Associate Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and a Future of War fellow with New America in Washington, D.C.; she was previously a senior research fellow in the Department of Politics and International Relations at the University of Oxford. She has also held posts at Arizona State University, the Korbel School of International Studies at the University of Denver, the University of Waterloo, and the United States Air Force Academy.

Her research interests include the law, policy and ethics of emerging technologies, such as autonomous systems, artificial intelligence, robotics and cyber, as well as international security and human rights protection. She is the author of Global Justice, Kant and the Responsibility to Protect (Routledge, 2013), as well as numerous scholarly articles. She blogs for the Huffington Post and the Duck of Minerva, and has written for Wired, the Bulletin of the Atomic Scientists, Slate, Defense One, the Wall Street Journal, the National Post and the Globe and Mail.

 
 

Denise Garcia

Denise Garcia researches international law, the questions of lethal robotics and artificial intelligence, the global governance of security, and the formation of new international norms and their impact on peace and security.

She was the recipient of Northeastern’s College of Social Sciences and Humanities Outstanding Teaching Award in 2016. In 2017, Garcia was appointed to the International Panel for the Regulation of Autonomous Weapons (Germany’s Ministry of Foreign Affairs).

Garcia teaches the annual course titled “Global Governance of International Security and the World Politics of Diplomacy” at the United Nations in Geneva, in cooperation with the United Nations Institute for Disarmament Research and many other partners. In 2016, she testified to the United Nations on the question of lethal autonomous weapons and their impact on peace and security.

Author of Small Arms and Security – New Emerging International Norms and Disarmament Diplomacy and Human Security – Norms, Regimes, and Moral Progress in International Relations, her articles have appeared in Foreign Affairs, the European Journal of International Security, International Affairs, Ethics & International Affairs, Third World Quarterly, Global Policy Journal, International Relations, and elsewhere.

She is proud to have held the title of Sadeleer Family Research Faculty at Northeastern (2011-2016). Prior to joining the faculty of Northeastern University in 2006 (tenured in 2013), Garcia held a three-year appointment at Harvard, at the Belfer Center for Science and International Affairs and the World Peace Foundation's Intra-State Conflict Program. She is the vice-chair of the International Committee for Robot Arms Control, a member of the Academic Council on the United Nations System, and a member of the Global South Unit for Mediation in Rio de Janeiro. A native of Brazil and a naturalized citizen of the United States, Garcia is a devoted yogi whose hobbies include travel and surfing.

 
 

Webinar Recordings

Webinar 1: The Ethics of Automated Warfare and AI

May 4, 2021

Our first webinar explored how interconnected the development of artificial intelligence and warfare is. This is a broad question, but one that forms the overarching backbone of this webinar series. Discussions around the military application of AI are nothing new and have sparked their share of controversy, including the "Stop Killer Robots" campaign, which has seen prominent AI scholars advocate against AI-automated warfare. As the opening webinar, panelists were tasked with defining for the audience what artificial intelligence is, what it has been used for in the military context, and the state of the art today.

This webinar was presented in collaboration with the North American and Arctic Defence and Security Network (NAADSN) and the Defence and Security Foresight Group (DSFG).

Our panelists included Drs. Maura R. Grossman (University of Waterloo), Matthew Kellett (Department of National Defence), and Robert Engen (Canadian Forces College). Dr. Bessma Momani (University of Waterloo) moderated the panel.


The Big Question: What Is the Future of Warfare?

June 9, 2021

Artificial intelligence in warfare is here, and it is here to stay. While the public debate in recent years has centred on whether "killer robots" should be developed and deployed, the question is no longer "should we automate warfare?" but "what is the future of AI-backed warfare?" This was the broader question asked of our panelists for this webinar. They were tasked with defining for the audience what artificial intelligence is in this context and where we are today. From there, they engaged with questions such as: what are the new frontiers of AI in warfare? Should AI simply be used for threat assessment, or should we go as far as we can and have automated systems fight wars for us in the future?

James Rogers

James Rogers is a war historian, a DIAS professor, and a fellow of the London School of Economics. He works with the BBC and the History Channel, and he is the presenter of the Untold History TV series on Dan Snow's History Hit TV. James also presents the Warfare podcast, broadcast twice a week on Spotify, Apple Music, and Acast.

James advises governments and international organisations on the history of warfare, contemporary security, and issues of weapons development. He is currently Special Advisor to the UK Parliament's All-Party Parliamentary Group on Drones, a UK MoD Defence Opinion Leader, and an adviser to NATO and the United Nations.

He has previously been a Visiting Research Fellow at Stanford University, Yale University, and the University of Oxford, and he is Co-founder and Co-Convenor of BISA War Studies, the War Studies section of the British International Studies Association.

 

Branka Marijan

Branka Marijan leads research on the military and security implications of emerging technologies. Her work examines ethical concerns regarding the development of autonomous weapons systems and the impact of artificial intelligence and robotics on security provision and trends in warfare. She holds a PhD from the Balsillie School of International Affairs with a specialization in conflict and security. She has conducted research on post-conflict societies and published academic articles and reports on the impacts of conflict on civilians and diverse issues of security governance, including security sector reform.


AI, Conflict, and the Laws of Armed Conflict

This webinar engaged two interrelated questions. First, as AI progresses, we will see more and more development meant to counter the AI of an adversary, as with most new technologies of warfare. What are the implications of adversarial and counter-AI technology from a normative and international law standpoint? Second, what happens when this type of decision no longer has a human element attached to it? How can we be certain that the necessity to use force in certain situations is balanced against the need to respect the dignity of those involved, and how can AI make this distinction? Additionally, what are the legal and normative implications of targeting AI infrastructure and personnel in war? Can artificial intelligence meet the legal responsibilities countries bear under the Laws of Armed Conflict? This opens up the question of how AI can be used, when, and the intricacies of such use.

Rebecca Crootof

Rebecca Crootof is an Assistant Professor of Law at the University of Richmond School of Law. Dr. Crootof's primary areas of research include technology law, international law, and torts; her written work explores questions stemming from the iterative relationship between law and technology, often in light of social changes sparked by increasingly autonomous systems, artificial intelligence, cyberoperations, robotics, and the Internet of Things. She is interested in the ways both domestic and international legal regimes respond to and shape technological development, particularly in the armed conflict context.

Dr. Crootof earned a B.A. cum laude in English with a minor in Mathematics at Pomona College; a J.D. at Yale Law School; and a PhD at Yale Law School, where she graduated as a member of the first class of PhDs in law awarded in the United States. Her dissertation, Keeping Pace: New Technology and the Evolution of International Law, discusses how technology fosters change in the international legal order, both by creating a need for new regulations and by altering how sources of international governance develop and interact. 

She is an affiliated fellow of the Information Society Project at Yale Law School; she consults for the Institute for Defense Analyses; she is on the Editorial Board of the Journal of National Security Law and Policy and is an Associate Editor on AI and the Law for the Journal of Artificial Intelligence Research; and she is a member of the New York Bar, the Board of Directors of the Equal Rights Center, and the Center for New American Security's Task Force on Artificial Intelligence and National Security. She was a member of the Permanent Mission of the Principality of Liechtenstein to the United Nations' Council of Advisers on the Application of the Rome Statute to Cyberwarfare.

Michael Horowitz

Michael C. Horowitz is Director of Perry World House and Richard Perry Professor at the University of Pennsylvania. He is the author of The Diffusion of Military Power: Causes and Consequences for International Politics, and the co-author of Why Leaders Fight. He won the 2017 Karl Deutsch Award given by the International Studies Association for early career contributions to the fields of international relations and peace research. He has published in a wide array of peer reviewed journals and popular outlets. His research interests include the intersection of emerging technologies such as artificial intelligence and robotics with global politics, military innovation, the role of leaders in international politics, and geopolitical forecasting methodology. Professor Horowitz previously worked for the Office of the Undersecretary of Defense for Policy in the Department of Defense. He is affiliated with the Center for a New American Security, the Center for Strategic and International Studies, and the Foreign Policy Research Institute. He is a member of the Council on Foreign Relations. Professor Horowitz received his Ph.D. in Government from Harvard University and his B.A. in political science from Emory University.


The Morality of Artificial Intelligence in Warfare

What does AI in warfare really look like? What are the possible applications, and how will they be used? Who will use them, and against whom? This webinar discussed the future of AI in warfare from a practical standpoint, seeking to make explicit how these weapons will be developed, deployed and used, and by whom. Rather than discussing the development of autonomous weapons in technical or logistical terms, the discussion aimed to tease out how autonomous weapons operate within and augment the current geopolitical landscape, and what the moral consequences of their deployment might be. Panelists also discussed what this means for those who work in the field of AI, and what responsibility we all have to ensure that the work we do leads to a more just and equitable future.

Laura Nolan

Laura Nolan is a senior software engineer who specialises in reliability in distributed software systems. In 2018, Laura left her role as a staff engineer at Google in response to the company's involvement in Project Maven, a Department of Defense program that aims to use machine learning to analyse drone surveillance video footage. As a member of the NGO International Committee for Robot Arms Control (ICRAC), Laura is part of a global campaign that aims to regulate the emerging category of autonomous weapons systems: weapons that independently select and engage targets without human input.

Laura holds an MSc in Advanced Software Engineering from University College Dublin, and is currently completing an MA in Strategic Studies at University College Cork.

Jack Poulson

Jack Poulson is the Executive Director of the nonprofit Tech Inquiry, where he leads the development of an open source tool for monitoring an international public/private interface (currently the Five Eyes alliance). [1] He was previously a Senior Research Scientist working at the intersection of natural language processing and recommendation systems in Google's AI division and, before that, an Assistant Professor of Mathematics at Stanford University.

[1] https://gitlab.com/tech-inquiry/InfluenceExplorer and https://techinquiry.org/explorer/


 
