PhD Seminar • Machine Learning | Differential Privacy • Functional Rényi Differential Privacy for Generative Modeling

Thursday, July 18, 2024 1:30 pm - 2:30 pm EDT (GMT -04:00)

Please note: This PhD seminar will take place in DC 2102 and online.

Dihong Jiang, PhD candidate
David R. Cheriton School of Computer Science

Supervisors: Professors Yaoliang Yu and Sun Sun

Differential privacy (DP) has emerged as a rigorous notion for quantifying data privacy. Rényi differential privacy (RDP) has since become a popular alternative to ordinary DP in both theoretical and empirical studies because of its convenient composition rules and flexibility. However, most mechanisms with DP (or RDP) guarantees fundamentally rely on randomizing a fixed, finite-dimensional vector output.
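For readers unfamiliar with the notion, the standard definition of RDP (due to Mironov, 2017) can be stated as follows; the functional extension discussed in the talk generalizes the output space of the mechanism M beyond finite-dimensional vectors:

```latex
% A randomized mechanism M is (\alpha, \epsilon)-RDP if, for all
% adjacent datasets D and D',
%   D_\alpha\big(M(D) \,\|\, M(D')\big) \le \epsilon,
% where D_\alpha is the R\'enyi divergence of order \alpha > 1:
%   D_\alpha(P \| Q) = \frac{1}{\alpha - 1}
%     \log \mathbb{E}_{x \sim Q}\!\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right].
```

A key convenience is that RDP composes additively: running two mechanisms with (α, ε₁)- and (α, ε₂)-RDP guarantees yields (α, ε₁ + ε₂)-RDP overall.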

In this work, following Hall et al. (2013), we further extend RDP to functional outputs, where the output space can be infinite-dimensional, and develop all the necessary tools, e.g., the (subsampled) Gaussian mechanism, composition, and post-processing rules, to facilitate its practical adoption. As an illustration, we apply functional RDP (f-RDP) to functions in a reproducing kernel Hilbert space (RKHS) to develop a differentially private generative model (DPGM), where training can be interpreted as iteratively releasing loss functions (in an RKHS) with DP guarantees. Empirically, the new training paradigm achieves a significantly better privacy-utility trade-off than existing alternatives, especially at small privacy budgets such as ϵ = 0.2.
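To give a flavor of the Gaussian mechanism applied to an RKHS object, the sketch below privatizes an empirical kernel mean embedding using a finite random-Fourier-feature approximation of the RKHS. This is an illustrative toy, not the authors' f-RDP construction (which operates on the infinite-dimensional function directly); all parameter values and the feature dimension are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(x, W, b):
    # Random Fourier features approximating an RBF kernel;
    # each feature vector has Euclidean norm at most sqrt(2).
    return np.sqrt(2.0 / W.shape[1]) * np.cos(x @ W + b)

# Hypothetical sizes: n data points in d dimensions, D random features.
n, d, D = 100, 5, 256
W = rng.normal(size=(d, D))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
X = rng.normal(size=(n, d))

# Empirical mean embedding of X in the (approximate) RKHS.
mu = rff_features(X, W, b).mean(axis=0)

# L2 sensitivity: replacing one of n points changes the mean
# by at most 2 * max-feature-norm / n = 2*sqrt(2)/n.
sensitivity = 2.0 * np.sqrt(2.0) / n

# Gaussian mechanism with noise scale sigma * sensitivity gives an
# (alpha, alpha / (2 * sigma**2))-RDP guarantee for every alpha > 1.
sigma = 1.0
mu_priv = mu + rng.normal(scale=sigma * sensitivity, size=D)
```

In the talk's setting, the analogous step releases a noised loss function in the RKHS at every training iteration, and the composition rule accumulates the per-iteration RDP costs over training.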


To attend this PhD seminar in person, please go to DC 2102. You can also attend online using Zoom.