Please note: This seminar will take place in DC 1304.
Avrim Blum, Professor and Chief Academic Officer
Toyota Technological Institute at Chicago
Machine learning systems have become impressively powerful, but they have also been shown to be extremely brittle and susceptible to adversarial attacks.
This talk will describe two lines of work aiming to provide a theoretical understanding of the power of data-poisoning attacks, and of how learning algorithms can give assurances of correctness in the face of them. The first part of the talk will focus on clean-label data-poisoning attacks, in which adversarial but correctly labeled data is added to a training set with the goal of inducing specific failures; the second part will focus on more general kinds of attacks.
Portions of this talk are based on joint work with Maria-Florina Balcan, Steve Hanneke, Jian Qian, Han Shao, and Dravyansh Sharma.
Bio: Avrim Blum is Professor and Chief Academic Officer at the Toyota Technological Institute at Chicago (TTIC); prior to this he was on the faculty at Carnegie Mellon University for 25 years. His main research interests are in Machine Learning Theory, Algorithmic Game Theory, Privacy, and Algorithmic Fairness.
He has served as Program Chair for the Conference on Learning Theory (COLT), the IEEE Symposium on Foundations of Computer Science (FOCS), and the Innovations in Theoretical Computer Science Conference (ITCS). Blum is a recipient of the AI Journal Classic Paper Award, the ICML/COLT 10-Year Best Paper Award, the ACM Paris Kanellakis Award, the Sloan Fellowship, the NSF National Young Investigator Award, and the Herbert Simon Teaching Award, and he is a Fellow of the ACM.