PhD Seminar • Machine Learning • Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors

Thursday, March 28, 2024 12:00 pm - 1:00 pm EDT (GMT -04:00)

Please note: This PhD seminar will take place in DC 3317 and online.

Yiwei Lu, PhD candidate
David R. Cheriton School of Computer Science

Supervisors: Professors Yaoliang Yu and Sun Sun

Machine learning models have achieved great success on supervised learning tasks with end-to-end training, which requires large amounts of labeled data that are not always available. Recently, many practitioners have shifted to self-supervised learning methods that exploit cheap unlabeled data to pre-train a general feature extractor, which can then be adapted to personalized downstream tasks by simply training an additional linear layer with limited labeled data. However, this process also raises concerns about data poisoning attacks. For instance, indiscriminate data poisoning attacks, which aim to decrease model utility by injecting a small number of poisoned samples into the training set, pose a security risk to machine learning models, but have so far been studied only for end-to-end supervised learning.
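
As a concrete illustration of the pre-train-then-probe pipeline described above, here is a minimal linear-probing sketch in PyTorch; the encoder, data loader, and hyperparameters are placeholder assumptions, not details from the talk.

```python
# Minimal linear-probing sketch (assumed setup, not from the talk):
# a frozen pre-trained feature extractor plus one trainable linear layer.
import torch
import torch.nn as nn

def linear_probe(encoder: nn.Module, loader, feat_dim: int, num_classes: int,
                 epochs: int = 10, lr: float = 1e-3) -> nn.Linear:
    encoder.eval()                            # freeze the feature extractor
    for p in encoder.parameters():
        p.requires_grad_(False)
    head = nn.Linear(feat_dim, num_classes)  # the only trainable part
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:                   # limited labeled downstream data
            with torch.no_grad():
                z = encoder(x)                # features could also be cached
            loss = loss_fn(head(z), y)
            opt.zero_grad()
            loss.backward()                   # gradients reach only the head
            opt.step()
    return head
```

Because only the linear head is trained, an attacker who can influence the downstream training set is effectively attacking this small problem on top of fixed features, which is the downstream setting the talk considers.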

In this talk, I extend the study of indiscriminate data poisoning attacks to downstream tasks that build on pre-trained feature extractors. Specifically, I introduce two types of attacks: (1) input space attacks, where we modify existing attacks to craft poisoned data directly in the input space; however, because optimization under input-space constraints is difficult, we further propose (2) feature targeted attacks, which mitigate this challenge with a three-stage algorithm.
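
To make the input space setting concrete, below is a heavily simplified, hypothetical sketch of one constrained poisoning update; it follows a generic loss-maximization baseline with an l-infinity projection and is not the speaker's algorithm (real indiscriminate attacks involve a harder bilevel problem, which is the optimization difficulty mentioned above).

```python
# Hypothetical single poisoning update for an input space attack
# (generic loss-maximization baseline, NOT the speaker's method):
# ascend a surrogate model's loss w.r.t. the poison inputs, then project
# back into an l_inf ball around clean seed images to satisfy constraints.
import torch
import torch.nn as nn

def poison_step(encoder: nn.Module, head: nn.Module,
                x_poison: torch.Tensor, x_seed: torch.Tensor,
                y: torch.Tensor, step: float = 0.01,
                eps: float = 8 / 255) -> torch.Tensor:
    x = x_poison.detach().clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(head(encoder(x)), y)
    loss.backward()                                 # gradient w.r.t. inputs
    with torch.no_grad():
        x = x + step * x.grad.sign()                # ascend the surrogate loss
        x = x_seed + (x - x_seed).clamp(-eps, eps)  # l_inf projection
        x = x.clamp(0.0, 1.0)                       # assume pixels in [0, 1]
    return x.detach()
```

Each update must project the poisons back into the constraint set so they remain valid images, which is what makes direct input-space optimization harder than crafting targets in feature space.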

Our experiments examine these attacks on two popular downstream tasks: fine-tuning on the same dataset and transfer learning involving domain adaptation. Empirical results reveal that transfer learning is more vulnerable to our attacks. Additionally, input space attacks pose a strong threat when no countermeasures are in place, but are otherwise weaker than feature targeted attacks.

To attend this PhD seminar in person, please go to DC 3317. You can also attend virtually using Zoom at https://uwaterloo.zoom.us/j/99998428907.