Webinar Series: The Ethics of Automated Warfare and AI
The Waterloo.AI Institute, in partnership with Dr. Bessma Momani and Jean-François Bélanger from the Department of Political Science, is pleased to host a series of webinars exploring the ethics of artificial intelligence and automated warfare. Each webinar will explore a different aspect of AI and warfare, bringing together technical and policy experts to discuss the big questions, concerns, and developments in the field.

The webinars will be developed into a digital series by the Centre for International Governance Innovation in 2022.

You will find more information about the webinars below, including how to register, along with descriptions of the topics and panelists.

Recordings of past webinars are also available below.


Upcoming Webinars

June 21, 2021, 12:00 - 1:30 PM ET
Speakers: Rebecca Crootof (University of Richmond School of Law) and Michael Horowitz (University of Pennsylvania)
Topic: AI, Conflict, and the Laws of Armed Conflict
Register

July 15, 2021, 12:00 - 1:30 PM ET
Speakers: Laura Nolan (International Committee for Robot Arms Control) and Jack Poulson (Tech Inquiry)
Topic: The Morality of Artificial Intelligence in Warfare
Register

Fall 2021 (TBC)
Speakers: Heather Roff (University of Colorado) and Denise Garcia (Northeastern University)
Topic: The Chicken or AI Question: Should Ethics Come before Development?
Register to be notified when registration opens

Fall 2021 (TBC)
Speakers: Alex Wilner (Carleton University) and Eleonore Pauwels (Global Center on Cooperative Security)
Topic: What Happens When AI Fails? Who Is Responsible?
Register to be notified when registration opens

Fall 2021 (TBC)
Speakers: Toby Walsh (University of New South Wales Sydney) and Joanna Bryson (The Hertie School of Governance)
Topic: Is AI the Solution? Are We Already Overusing Artificial Intelligence?
Register to be notified when registration opens



Webinar Series: The Ethics of Automated Warfare and Artificial Intelligence

AI, Conflict, and the Laws of Armed Conflict

This webinar engages two interrelated questions. First, as AI progresses, we will see more and more development aimed at countering an adversary's AI, as has happened with most new technologies of warfare. What are the implications of adversarial and counter-AI technology from a normative and international law standpoint? Second, what happens when this type of decision no longer has a human element attached to it? How can we be certain that the necessity of using force in certain situations is balanced against the need to respect the dignity of those involved, and how can AI make this distinction? Additionally, what are the legal and normative implications of targeting AI infrastructure and personnel in war? Can artificial intelligence meet the legal responsibilities countries bear under the Laws of Armed Conflict? This opens up the question of how AI can be used, when, and the intricacies of such use.


Rebecca Crootof

Rebecca Crootof is an Assistant Professor of Law at the University of Richmond School of Law. Dr. Crootof's primary areas of research include technology law, international law, and torts; her written work explores questions stemming from the iterative relationship between law and technology, often in light of social changes sparked by increasingly autonomous systems, artificial intelligence, cyberoperations, robotics, and the Internet of Things. She is interested in the ways both domestic and international legal regimes respond to and shape technological development, particularly in the armed conflict context.

Dr. Crootof earned a B.A. cum laude in English with a minor in Mathematics at Pomona College; a J.D. at Yale Law School; and a PhD at Yale Law School, where she graduated as a member of the first class of PhDs in law awarded in the United States. Her dissertation, Keeping Pace: New Technology and the Evolution of International Law, discusses how technology fosters change in the international legal order, both by creating a need for new regulations and by altering how sources of international governance develop and interact. 

She is an affiliated fellow of the Information Society Project at Yale Law School; she consults for the Institute for Defense Analyses; she is on the Editorial Board of the Journal of National Security Law and Policy and is an Associate Editor on AI and the Law for the Journal of Artificial Intelligence Research; and she is a member of the New York Bar, the Board of Directors of the Equal Rights Center, and the Center for a New American Security's Task Force on Artificial Intelligence and National Security. She was a member of the Council of Advisers on the Application of the Rome Statute to Cyberwarfare, convened by the Permanent Mission of the Principality of Liechtenstein to the United Nations.

Michael Horowitz

Michael C. Horowitz is Director of Perry World House and Richard Perry Professor at the University of Pennsylvania. He is the author of The Diffusion of Military Power: Causes and Consequences for International Politics, and the co-author of Why Leaders Fight. He won the 2017 Karl Deutsch Award given by the International Studies Association for early career contributions to the fields of international relations and peace research. He has published in a wide array of peer-reviewed journals and popular outlets. His research interests include the intersection of emerging technologies such as artificial intelligence and robotics with global politics, military innovation, the role of leaders in international politics, and geopolitical forecasting methodology. Professor Horowitz previously worked for the Office of the Undersecretary of Defense for Policy in the Department of Defense. He is affiliated with the Center for a New American Security, the Center for Strategic and International Studies, and the Foreign Policy Research Institute. He is a member of the Council on Foreign Relations. Professor Horowitz received his Ph.D. in Government from Harvard University and his B.A. in political science from Emory University.

The Morality of Artificial Intelligence in Warfare

What does AI in warfare really look like? What are the possible applications, and how will they be used? Who will use them, and against whom will they be used? This webinar will discuss the future of AI in warfare from a practical standpoint and seek to make explicit how these weapons will be developed, deployed, and used, and by whom. Rather than discussing the development of autonomous weapons in technical or logistical terms, this discussion aims to tease out how autonomous weapons operate within and augment the current geopolitical landscape, and what the moral consequences of their deployment might be. We will also discuss what this means for those who work in the field of AI, and what responsibility we all have to ensure that the work we do leads to a more just and equitable future.


Laura Nolan

Laura Nolan is a senior software engineer who specialises in reliability in distributed software systems. In 2018, Laura left her role as a staff engineer at Google in response to the company's involvement in Project Maven, a Department of Defense program that aims to use machine learning to analyse drone surveillance video footage. As a member of the NGO International Committee for Robot Arms Control (ICRAC), Laura is part of a global campaign to regulate the emerging category of autonomous weapons systems: weapons that independently select and engage targets without human input.

Laura holds an MSc in Advanced Software Engineering from University College Dublin, and is currently completing an MA in Strategic Studies at University College Cork.

Jack Poulson

Jack Poulson is the Executive Director of the nonprofit Tech Inquiry, where he leads the development of an open source tool for monitoring an international public/private interface (currently the Five Eyes alliance). [1] He was previously a Senior Research Scientist working at the intersection of natural language processing and recommendation systems in Google's AI division and, before that, an Assistant Professor of Mathematics at Stanford University.

[1] https://gitlab.com/tech-inquiry/InfluenceExplorer and https://techinquiry.org/explorer/

The Chicken or AI Question: Should Ethics Come before Development? 

AI, just like nuclear weapons, will act as a great disruptor of warfare as the technology develops further. In what way exactly will artificial intelligence be disruptive, and what are the ethical and normative implications? Will AI follow the path of nuclear weapons, where ethical and moral questions crept in only after the arrival of the bomb, and after it became clear its spread could not be stopped? Or will we instead learn from the past and develop strong ethical guidelines and standards to mold and shape the ways in which AI is developed and used in the future? This conundrum is perhaps most salient in the military use of AI, where, in the extreme, systems could take decisions without human assistance or supervision that result in battle deaths or unintended casualties. Additionally, how can ethical guidelines be universal given how divergent countries are on the question?


Heather Roff

Dr. Roff received her Ph.D. in political science from the University of Colorado at Boulder (2010). She is currently a research scientist at DeepMind, one of the leading artificial intelligence companies in the world, as well as an Associate Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and a Future of War fellow with New America in Washington, D.C. She was previously a senior research fellow in the Department of Politics and International Relations at the University of Oxford, and has held posts at Arizona State University, the Korbel School of International Studies at the University of Denver, the University of Waterloo, and the United States Air Force Academy.

Her research interests include the law, policy, and ethics of emerging technologies, such as autonomous systems, artificial intelligence, robotics, and cyber, as well as international security and human rights protection. She is the author of Global Justice, Kant and the Responsibility to Protect (Routledge 2013), as well as numerous scholarly articles. She blogs for the Huffington Post and the Duck of Minerva, and has written for Wired, the Bulletin of the Atomic Scientists, Slate, Defense One, the Wall Street Journal, the National Post, and the Globe and Mail.

Denise Garcia

Denise Garcia researches international law, the questions of lethal robotics and artificial intelligence, the global governance of security, and the formation of new international norms and their impact on peace and security.

She was the recipient of Northeastern’s College of Social Sciences and Humanities Outstanding Teaching Award in 2016. In 2017, Garcia was appointed to the International Panel for the Regulation of Autonomous Weapons (Germany’s Ministry of Foreign Affairs).

Garcia teaches the annual course titled “Global Governance of International Security and the World Politics of Diplomacy” at the United Nations in Geneva, in cooperation with the United Nations Institute for Disarmament Research and many other partners. In 2016, she testified to the United Nations on the question of lethal autonomous weapons and their impact on peace and security.

Author of Small Arms and Security – New Emerging International Norms, and Disarmament Diplomacy and Human Security – Norms, Regimes, and Moral Progress in International Relations, her articles have appeared in Foreign Affairs, the European Journal of International Security, International Affairs, Ethics & International Affairs, Third World Quarterly, Global Policy Journal, International Relations, and elsewhere.

She is proud to have held the title of Sadeleer Family Research Faculty at Northeastern (2011-2016). Prior to joining the faculty of Northeastern University in 2006 (tenured in 2013), Garcia held a three-year appointment at Harvard, at the Belfer Center for Science and International Affairs and the World Peace Foundation's Intra-State Conflict Program. She is the vice-chair of the International Committee for Robot Arms Control, a member of the Academic Council on the United Nations System, and a member of the Global South Unit for Mediation in Rio de Janeiro. A native of Brazil and a naturalized citizen of the United States, Garcia is a devoted yogi whose hobbies include travel and surfing.

What Happens When AI Fails? Who Is Responsible?

Artificial intelligence is bound to fail. It is a given. But what happens when it does, especially if it is designed to make policy decisions or execute orders in theater? While a system may be able to identify its query, a tank or a type of boat for example, 95 percent of the time, the remaining 5 percent of cases are elusive and difficult to predict. Moreover, why it failed is often counterintuitive to us. The question then becomes: how do we handle AI failures? How can we mitigate those mistakes, and is it actually possible to do so? When failure happens, who is responsible for the decisions taken, or recommendations made, by AI? More importantly, what happens when the person who is responsible does not understand how the artificial intelligence is programmed and works in the first place? Are there areas or tasks where AI is better suited and more accurate than humans? How is responsibility distributed when humans and AI interact? Does a human operator bear the same liability as AI when it comes to making a mistake? One way or another, what prior knowledge is required for the operator to be able to accept the risks of such a situation? Finally, how would accountability work in this case?

Eleonore Pauwels

Eleonore Pauwels is a Senior Fellow with the Global Center on Cooperative Security, NY. Eleonore conducts in-depth research on the security and governance implications generated by the convergence of artificial intelligence with other dual-use technologies, including cybersecurity, genomics and genome-editing.

Eleonore provides expertise to the World Bank and the United Nations, as well as to governments and private sector actors, on AI-Cyber Prevention, the changing nature of conflict, foresight and global security. In 2018 and 2019, Eleonore served as Research Fellow on Emerging Cybertechnologies for the United Nations University’s Centre for Policy Research. At the Woodrow Wilson International Center for Scholars, she spent ten years within the Science and Technology Innovation Program, leading the Anticipatory Intelligence Lab. She is also part of the Scientific Committee of the International Association for Responsible Research and Innovation in Genome-Editing (ARRIGE). Eleonore is a former official of the European Commission’s Directorate on Science, Economy and Society.


Alex Wilner

Dr. Alex S. Wilner is an Associate Professor of International Affairs at the Norman Paterson School of International Affairs, Carleton University. He teaches classes on terrorism and violent radicalization, intelligence in international affairs, strategic foresight in international security, and a capstone course on Canadian security policy. Past capstone partners have included FINTRAC, Public Safety Canada, Global Affairs Canada, and Policy Horizons Canada.

Professor Wilner's research focuses on the application of deterrence theory to contemporary security issues, like terrorism, radicalization, organized crime, cyber threats, and proliferation. His books include Deterring Rational Fanatics (University of Pennsylvania Press, 2015) and Deterring Terrorism: Theory and Practice (eds., Stanford University Press, 2012). He has published articles in International Security, the NYU Journal of International Law and Politics, Security Studies, the Journal of Strategic Studies (2017 and 2011), Comparative Strategy, and Studies in Conflict and Terrorism (2010 and 2011). In 2016, he was awarded a SSHRC Insight Development Grant to study state and non-state cyber deterrence in Canada. In 2017, he was awarded funding from the Department of National Defence, Policy Horizons Canada, and CSIS to organize a workshop and working group on strategic foresight in national security policy. Prior to joining NPSIA, Professor Wilner held a variety of positions at Policy Horizons Canada (the Government of Canada's foresight laboratory), the Munk School of Global Affairs at the University of Toronto, the National Consortium for the Study of Terrorism and Responses to Terrorism (START) at the University of Maryland, and ETH Zurich in Switzerland.


Is AI the Solution? Are We Already Overusing Artificial Intelligence?

Artificial intelligence is the cutting edge of decision-making and computer science. However, there is also a sense that AI is nothing more than a buzzword applied to almost every facet of our lives right now. Given the current buzz surrounding AI, it is relevant to ask: what are the limits of AI? In many instances, such as with military technology, the availability of artificial intelligence is what made the technology possible in the first place. Can we expect systems to achieve decision-making capabilities and performance better than humans' across domains? What are the trade-offs of acquiring such capabilities? This session wraps up the webinar series and asks the panelists to reflect on our AI-centric future. Is it the panacea it has been described to be? Likely not. The panelists will parse out the unrealistic expectations we hold about artificial intelligence and give us a clearer view of what the future actually entails.


Toby Walsh

Toby Walsh is a leading researcher in artificial intelligence. He is a Laureate Fellow and Scientia Professor of Artificial Intelligence in the School of Computer Science and Engineering at UNSW Sydney, and he also leads the Algorithmic Decision Theory group at CSIRO Data61. He was named by The Australian newspaper as a "rock star" of Australia's digital revolution. He has been elected a fellow of the Australian Academy of Science, the ACM, the Association for the Advancement of Artificial Intelligence (AAAI), and the European Association for Artificial Intelligence. He has won the prestigious Humboldt Research Award as well as the NSW Premier's Prize for Excellence in Engineering and ICT and the ACP Research Excellence Award. He has previously held research positions in England, Scotland, France, Germany, Italy, Ireland, and Sweden. He has played a leading role at the UN and elsewhere in the campaign to ban lethal autonomous weapons (aka "killer robots").

Toby Walsh regularly appears in the media talking about the impact of AI and robotics on society. He is passionate about placing limits on AI to ensure the public good, such as with autonomous weapons. He has appeared on TV and radio on the ABC, BBC, Channel 7, Channel 9, Channel 10, CCTV, CNN, DW, NPR, RT, SBS, and VOA, as well as on numerous other radio stations and podcasts. He also writes frequently for print and online media. His work has appeared in the New Scientist, American Scientist, Le Scienze, Cosmos, Technology Review, the New York Times, the Guardian, the Conversation, and "The Best Writing in Mathematics". His Twitter account has been voted one of the top ten to follow to keep abreast of developments in AI. He has given talks at public and trade events like CeBIT, the World Knowledge Forum, TEDx, New Scientist Live, and writers festivals in Adelaide, Bendigo, Bhutan, Brisbane, Canberra, Geelong, Jaipur, Margaret River, Melbourne, Mumbai, Nagpur, Pune, Sydney, and elsewhere. He has been profiled by the New York Times and the Brilliant, and was even more surprised (read: embarrassed) to have an IMDb entry and to have been made the cover story of his old school magazine.


Joanna Bryson

Joanna J Bryson is an academic recognised for broad expertise on intelligence, its nature, and its consequences. Holding two degrees each in psychology and AI (BA Chicago; MSc and MPhil Edinburgh; PhD MIT), she has been, since 2020, Professor of Ethics and Technology at the Hertie School of Governance in Berlin. Bryson advises governments, corporations, and other agencies globally, particularly on AI policy. Her work has appeared in venues ranging from Reddit to the journal Science. From 2002 to 2019 she was on the computer science faculty at the University of Bath; she has also been affiliated with Harvard Psychology, Oxford Anthropology, the Mannheim Centre for Social Science Research, the Konrad Lorenz Institute for Evolution and Cognition Research, and the Princeton Center for Information Technology Policy.

Bryson first observed the confusion generated by anthropomorphised AI during her PhD, leading to her first AI ethics publication, "Just Another Artifact", in 1998. She is now a leader in AI ethics, having since coauthored the first national-level AI ethics policy, the UK's Principles of Robotics (2011), and contributed to efforts by the OECD, EU, UN, OSCE, Red Cross, and Google, among others. She also continues to research the systems engineering of AI and the cognitive science of intelligence. Her present research focuses on the impacts of technology on human societies and on new models of governance for AI and digital technology. She is a founding member of the Hertie School's Centre for Digital Governance, and one of Germany's nine nominated experts to the Global Partnership on AI.


Webinar Recordings

Webinar 1: The Ethics of Automated Warfare and AI

May 4, 2021

Our first webinar explored how interconnected the development of artificial intelligence and warfare is. This is a broad question, but one that forms the overarching backbone of this webinar series. Discussions around the military application of AI are nothing new and have sparked their share of controversy, including the "Stop Killer Robots" campaign, which has seen prominent AI scholars advocate against AI-automated warfare. As the opening webinar, it tasked panelists with defining for the audience what artificial intelligence is, what it has been used for in the military context, and the state of the art today.

This webinar was presented in collaboration with the North American and Arctic Defence and Security Network (NAADSN) and the Defence and Security Foresight Group (DSFG).

Our panelists included Drs. Maura R. Grossman (University of Waterloo), Matthew Kellett (Department of National Defence), and Robert Engen (Canadian Forces College). Dr. Bessma Momani (University of Waterloo) moderated the panel.


The Big Question: What is the Future of Warfare?

June 9, 2021

Artificial intelligence in warfare is here, and it is here to stay. While the public debate in recent years has centered on whether "killer robots" should be developed and deployed, the question is no longer "should we automate warfare?" but "what is the future of AI-backed warfare?" This was the broader question posed to our panelists. They were tasked with defining for the audience what artificial intelligence is in this context and where we are today. From there, they engaged with questions such as: what are the new frontiers of AI in warfare? Should AI simply be used for threat assessment, or should we go as far as we can and have automated systems fight our wars in the future?

James Rogers

James Rogers is a war historian, a professor at the Danish Institute for Advanced Study (DIAS), and a fellow of the London School of Economics. He works with the BBC and the History Channel, and he presents the Untold History TV series on Dan Snow's History Hit TV. James also presents the Warfare podcast, broadcast twice a week on Spotify, Apple Music, and Acast.

James advises governments and international organisations on the history of warfare, contemporary security, and issues of weapons development. He is currently Special Advisor to the UK Parliament's All-Party Parliamentary Group on Drones, a UK MoD Defence Opinion Leader, and an adviser to NATO and the United Nations.

He has previously been a Visiting Research Fellow at Stanford University, Yale University, and the University of Oxford, and he is Co-founder and Co-Convenor of BISA War Studies, the War Studies section of the British International Studies Association.


Branka Marijan

Branka Marijan leads research on the military and security implications of emerging technologies. Her work examines ethical concerns regarding the development of autonomous weapons systems and the impact of artificial intelligence and robotics on security provision and trends in warfare. She holds a PhD from the Balsillie School of International Affairs with a specialization in conflict and security. She has conducted research on post-conflict societies and has published academic articles and reports on the impacts of conflict on civilians and on diverse issues of security governance, including security sector reform.
