CPI Talk - Characterizing Machine Unlearning through Definitions and Implementations

CPI would like to extend an invitation to our newly rescheduled CPI Talk, taking place in person on Thursday, June 20th, from 10:30am to 12:00pm in Arts Lecture Hall, Room 113.

Nicolas Papernot will discuss Characterizing Machine Unlearning through Definitions and Implementations.

Speaker: Nicolas Papernot - Assistant Professor of Computer Engineering and Computer Science at the University of Toronto

CPI Talks are free and open to everyone regardless of affiliation! High school students and non-Waterloo students/staff are also welcome to join.

No prior knowledge is expected of the audience.

Please register here.


Abstract: This talk presents open problems in the study of machine unlearning. The need for machine unlearning, i.e., obtaining a model one would have gotten without training on a given subset of the data, arises from privacy legislation and from unlearning's potential as a remedy for data poisoning or copyright claims. The first part of the talk discusses approaches that provide exact unlearning: these approaches output the same distribution of models as training without the subset of data to be unlearned in the first place. While such approaches can be computationally expensive, we discuss why it is difficult to relax the guarantee they provide in order to pave the way for more efficient approaches. The second part of the talk asks whether unlearning can be verified. Here we show how an entity can claim plausible deniability when challenged about an unlearning request it claimed to have processed, and conclude that, at the level of model weights, being unlearned is not always a well-defined property. Instead, unlearning is an algorithmic property.
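To make the definition of exact unlearning concrete, here is a minimal illustrative sketch (not from the talk; the names and the toy training procedure are hypothetical). With a deterministic "trainer," exact unlearning by full retraining trivially satisfies the definition: dropping the forget set and retraining from scratch yields exactly the model that would have been obtained had the forgotten points never been seen.

```python
# Illustrative sketch only: a toy deterministic "model" (the mean of the
# training data), used to show the exact-unlearning definition.

def train(data):
    """Toy deterministic trainer: the 'model' is just the data mean."""
    return sum(data) / len(data)

def exact_unlearn(data, forget_set):
    """Exact unlearning by full retraining: drop the forget set and
    retrain from scratch on the retained points."""
    retained = [x for x in data if x not in forget_set]
    return train(retained)

data = [1.0, 2.0, 3.0, 4.0]
forget = {4.0}

unlearned = exact_unlearn(data, forget)
retrained = train([x for x in data if x not in forget])
assert unlearned == retrained  # identical to never having trained on 4.0
```

For randomized training algorithms, the definition in the abstract is stated over *distributions* of models rather than single models; with a deterministic trainer, as here, the two coincide. Retraining from scratch is exact but expensive, which is precisely the tension the first part of the talk addresses.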


[Headshot of Nicolas Papernot]

Nicolas Papernot is an Assistant Professor of Computer Engineering and Computer Science at the University of Toronto. He also holds a Canada CIFAR AI Chair at the Vector Institute and is a faculty affiliate at the Schwartz Reisman Institute. His research interests span the security and privacy of machine learning. Some of his group's recent projects include generative model collapse, cryptographic auditing of ML, private learning, proof-of-learning, and machine unlearning. Nicolas is an Alfred P. Sloan Research Fellow in Computer Science and a Member of the Royal Society of Canada's College of New Scholars. His work on differentially private machine learning received an Outstanding Paper Award at ICLR 2022 and a Best Paper Award at ICLR 2017. He co-created the IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) and is co-chairing its first two editions in 2023 and 2024. He previously served as an associate chair of the IEEE Symposium on Security and Privacy (Oakland) and an area chair of NeurIPS. Nicolas earned his Ph.D. at the Pennsylvania State University, working with Prof. Patrick McDaniel and supported by a Google PhD Fellowship. Upon graduating, he spent a year at Google Brain, where he still spends some of his time.