PhD Seminar • Machine Learning • Strategic and Adversarially Robust Learning with Unknown Manipulation Capabilities

Thursday, July 18, 2024 10:00 am - 11:00 am EDT (GMT -04:00)

Please note: This PhD seminar will take place online.

Tosca Lechner, PhD candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Shai Ben-David

There are many real-world settings in which learning with respect to the training distribution is not sufficient because manipulations can occur. These manipulations can be due to an adversary whose goal is to fool the classifier, or due to feature manipulation by self-interested agents who want to achieve their preferred outcome; both concerns have become more prominent in recent years. These robustness requirements are captured by the settings of adversarially robust learning and strategically robust learning, respectively.

Both settings share similarities in their modelling by a robust loss, which requires knowledge of the respective manipulation capabilities. However, in many real-world settings the exact manipulation capabilities of an adversary or self-interested individuals are not plausibly available to a learner. In my work I explore settings in which the learner does not have full knowledge about the manipulation capabilities but only some prior information in the form of a restricted class of candidate manipulation graphs.
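As a sketch of the common modelling (notation here is illustrative, following the standard manipulation-graph framework rather than quoting the talk's definitions): a manipulation graph $G$ places an edge $(x, x')$ whenever an instance $x$ can be manipulated into $x'$, and the adversarially robust loss of a hypothesis $h$ on an example $(x, y)$ takes the worst case over the neighbourhood $N_G(x)$:

$$
\ell_G(h, (x, y)) \;=\; \max_{x' \in N_G(x)} \mathbb{1}\big[\, h(x') \neq y \,\big].
$$

In the strategic variant, the manipulation is chosen by a self-interested agent rather than a worst-case adversary: the agent moves to some $x' \in N_G(x)$ only if doing so yields their preferred (e.g., positive) classification. Both losses depend on $N_G$, which is why unknown manipulation capabilities make the problem hard.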

I explore ways to infer manipulation capabilities in order to achieve accurate prediction. In the strategically robust setting, where all instances can be assumed to act as self-interested agents, I explore learning the agents' manipulations from the observed distribution shifts over several rounds of classification. In the adversarially robust setting, I assume either access to a perfect attack oracle or the ability to abstain on manipulated points.
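To illustrate the strategic setting, the following is a minimal toy sketch (all names and the one-dimensional setup are illustrative assumptions, not the talk's actual model): agents best-respond to a published threshold classifier by moving along an unknown manipulation graph, and the learner infers candidate manipulation edges from the shift between the pre- and post-classification samples.

```python
# Toy sketch of one round of strategic classification on integer features.
# Assumed (unknown to the learner) manipulation graph: an agent at x can
# move to x + 1. Agents manipulate only if it flips them to positive.

def best_response(x, classifier, neighbors):
    """Agent keeps x if already classified positive; otherwise moves to
    a neighboring point that is classified positive, if one exists."""
    if classifier(x):
        return x
    for x_prime in neighbors(x):
        if classifier(x_prime):
            return x_prime
    return x

def infer_manipulations(before, after):
    """Infer candidate manipulation-graph edges from the observed
    distribution shift between the two samples."""
    return {(x, y) for x, y in zip(before, after) if x != y}

classifier = lambda x: x >= 5   # published threshold classifier
neighbors = lambda x: [x + 1]   # hidden capability: move one step right

sample = [2, 4, 5, 7]
shifted = [best_response(x, classifier, neighbors) for x in sample]
edges = infer_manipulations(sample, shifted)
# shifted == [2, 5, 5, 7]; inferred edge set == {(4, 5)}
```

Over several rounds with different published classifiers, such observed shifts progressively narrow down which candidate manipulation graph is consistent with the agents' behaviour.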


You can attend this seminar on Zoom.