PhD Defence • Artificial Intelligence | Machine Learning • Trustworthy Machine Learning with Deep Generative Models

Friday, September 13, 2024 12:00 pm - 3:00 pm EDT (GMT -04:00)

Please note: This PhD defence will take place online.

Dihong Jiang, PhD candidate
David R. Cheriton School of Computer Science

Supervisors: Professors Yaoliang Yu and Sun Sun

The past decade has witnessed remarkable progress in deep generative models (DGMs). However, as machine learning (ML) systems become integral to sensitive applications, ensuring their trustworthiness becomes paramount. Trustworthy machine learning aims to enhance the reliability and safety of ML systems. This thesis investigates the intersection of trustworthiness and DGMs, focusing on two pivotal aspects of trustworthiness: out-of-distribution (OOD) detection and privacy preservation.

Generative models serve purposes beyond generating realistic samples. Likelihood-based DGMs, e.g., flow generative models, can additionally compute the likelihood of input data, which can be used as an unsupervised OOD detector by thresholding the likelihood values. However, they have been found to occasionally assign higher likelihoods to OOD data than to in-distribution (InD) data, raising concerns about their reliability in OOD detection. We show that flow generative models can reliably detect OOD data by leveraging their bijectivity. The proposed approach compares two high-dimensional distributions in the latent space by extending a univariate statistical test (e.g., the Kolmogorov-Smirnov (KS) test) to higher dimensions using random projections.
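To make the random-projection idea concrete, here is a minimal sketch (not the thesis's exact procedure): it assumes latent codes have already been obtained by mapping samples through the flow's bijection, and it aggregates by taking the maximum KS statistic over projections; the function name and that aggregation choice are illustrative assumptions.

```python
# Sketch: comparing two high-dimensional samples by projecting onto
# random unit directions and applying the univariate two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

def projected_ks_test(x, y, n_projections=100, seed=0):
    """Largest KS statistic over random 1-D projections.

    x, y: arrays of shape (n_samples, dim), e.g. latent codes of test
    data and of held-out in-distribution data under a flow model.
    """
    rng = np.random.default_rng(seed)
    dim = x.shape[1]
    stats = []
    for _ in range(n_projections):
        v = rng.standard_normal(dim)
        v /= np.linalg.norm(v)             # random unit direction
        stat, _ = ks_2samp(x @ v, y @ v)   # univariate KS test on projections
        stats.append(stat)
    return max(stats)  # large value suggests the distributions differ
```

A batch of test inputs would then be flagged as OOD when this statistic exceeds a threshold calibrated on in-distribution data.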

The second focus of this thesis is privacy preservation in DGMs. Generative models can also be seen as proxies for publishing training data, and there is growing interest in ensuring privacy preservation beyond generation fidelity. Differentially private generative models (DPGMs) offer a solution to protect individual data privacy. Existing methods either apply the workhorse DP-SGD algorithm to DGMs or use kernel methods that make the maximum mean discrepancy (MMD) objective differentially private. However, DP-SGD methods suffer from high training costs and scale poorly to stronger differential privacy (DP) guarantees, while kernel methods suffer from mode collapse in generation. To alleviate the training overhead and scalability issues of DP-SGD under small privacy budgets, we propose to train a flow generative model in a lower-dimensional latent space, which significantly reduces the model size and thereby avoids unnecessary computation in the full pixel space. To improve the model utility of MMD methods, we propose to make the MMD objective differentially private without truncating the reproducing kernel Hilbert space (RKHS).
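For context, the standard recipe in prior kernel-based work privatizes a finite-dimensional approximation of the kernel mean embedding with the Gaussian mechanism; the sketch below illustrates that baseline recipe (the thesis's contribution is precisely to avoid this RKHS truncation, which this finite-feature sketch does not capture). The function name, feature count, and noise parameter are illustrative assumptions, and calibrating the noise to a concrete (epsilon, delta) budget is omitted.

```python
# Sketch of the baseline recipe: release a noisy random-Fourier-feature
# mean embedding of the private data via the Gaussian mechanism.
import numpy as np

def dp_mean_embedding(private_data, n_features=500, noise_multiplier=1.0,
                      bandwidth=1.0, seed=0):
    """Differentially private mean embedding under random Fourier features.

    Each record contributes one unit-norm feature vector, so the L2
    sensitivity of the mean (replace-one neighbouring datasets) is 2/n.
    """
    rng = np.random.default_rng(seed)
    n, dim = private_data.shape
    w = rng.standard_normal((dim, n_features)) / bandwidth
    b = rng.uniform(0, 2 * np.pi, n_features)
    feats = np.cos(private_data @ w + b)
    feats /= np.linalg.norm(feats, axis=1, keepdims=True)  # unit-norm rows
    mean_emb = feats.mean(axis=0)
    sensitivity = 2.0 / n
    noise = rng.normal(0.0, noise_multiplier * sensitivity, n_features)
    return mean_emb + noise
```

A generator can then be trained by matching the feature mean of its samples to this fixed noisy target, so the private data is touched only once.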

The thesis is expected to provide new insights into the application of flow generative models to OOD detection, highlight practical challenges in training generative models with DP-SGD on high-dimensional datasets, bridge the gap between Rényi differential privacy (RDP) and functional mechanisms, and expand the family of DPGMs.


Attend this PhD defence on Zoom.