Please note: This master’s thesis presentation will be given online.
Amur Ghose, Master’s candidate
David R. Cheriton School of Computer Science
We present results obtained in the context of generative neural models, specifically autoencoders, using standard results from coding theory. The methods are elementary in principle, yet, combined with the ubiquitous practice of batch normalization in these models, yield excellent results when compared with rival autoencoding architectures. In particular, we resolve a split that arises between two types of autoencoding models: variational autoencoders (VAEs) and regularized deterministic autoencoders (RAEs). The latter offer superior performance but lose guarantees on their latent space, and they rely on a wide variety of regularizers, ranging from L2 regularization to spectral normalization, to achieve that performance. We show instead that a simple entropy-like term suffices to kill two birds with one stone: good performance and a well-behaved latent space.
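As a rough illustration of the idea, the sketch below combines batch-normalized latent codes with an entropy bonus estimated by kernel density resubstitution. This is a hedged sketch under assumptions: the function names, the KDE estimator, the bandwidth, and the weight `lam` are illustrative choices, not necessarily the exact objective used in the thesis.

```python
import numpy as np

def batch_norm(z, eps=1e-5):
    # Normalize each latent dimension to zero mean, unit variance over the batch,
    # as batch normalization does at training time (illustrative, no learned scale/shift).
    return (z - z.mean(axis=0)) / np.sqrt(z.var(axis=0) + eps)

def kde_entropy(z, bandwidth=0.5):
    # Resubstitution entropy estimate with an isotropic Gaussian kernel:
    #   H(q) ~= -(1/n) * sum_i log( (1/n) * sum_j N(z_i; z_j, h^2 I) )
    # The choice of estimator and bandwidth here is an assumption for illustration.
    n, d = z.shape
    sq = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)      # pairwise squared distances
    log_k = -sq / (2 * bandwidth**2) - 0.5 * d * np.log(2 * np.pi * bandwidth**2)
    log_q = np.logaddexp.reduce(log_k, axis=1) - np.log(n)   # log KDE density at each z_i
    return -log_q.mean()

def autoencoder_loss(x, x_hat, z, lam=0.1):
    # Reconstruction error minus an entropy bonus on the batch-normalized codes:
    # maximizing entropy under the fixed mean/variance imposed by batch norm
    # pushes the code distribution toward a Gaussian, from which one can sample.
    recon = ((x - x_hat) ** 2).mean()
    return recon - lam * kde_entropy(batch_norm(z))
```

The intuition encoded here is that batch norm pins the first two moments of the codes, and the maximum-entropy distribution with fixed mean and variance is Gaussian, so an entropy bonus yields a latent space one can sample from like a generative model.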
The primary thrust of the thesis is a paper on these matters presented at UAI 2020, titled “Batch norm with entropic regularization turns deterministic autoencoders into generative models”. This was joint work with Abdullah Rashwan, who was at the time a postdoctoral associate with us at Waterloo and is now at Google, and my supervisor, Pascal Poupart. It constitutes chapter 2. Extensions relating to batch norm’s interplay with adversarial examples appear in chapter 3. Chapter 1 provides an overview and serves as an introduction.
To join this master’s thesis presentation virtually on Zoom, please go to https://vectorinstitute.zoom.us/s/7499157050.
200 University Avenue West
Waterloo, ON N2L 3G1