Seminar: Learning Neural Networks with Adaptive Regularization

Friday, September 27, 2019, 4:00 pm EDT (GMT -04:00)

Han Zhao, PhD candidate
Carnegie Mellon University

Feed-forward neural networks can be understood as a combination of an intermediate representation and a linear hypothesis. While most previous work aims to diversify the learned representations, we explore the complementary direction by performing adaptive, data-dependent regularization motivated by the empirical Bayes method. In this talk, I will present our recent work on learning neural networks with adaptive regularization in the limited-data regime. Empirically, we demonstrate that the proposed method helps networks converge to local optima with smaller stable ranks and spectral norms. These properties suggest better generalization, and we present empirical results to support this expectation. We also verify the effectiveness of the approach on multiclass classification and multitask regression problems with various network structures. This is joint work with Yao-Hung Hubert Tsai, Ruslan Salakhutdinov, and Geoffrey J. Gordon.
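
The stable rank and spectral norm mentioned above are standard matrix quantities rather than constructs specific to the talk: the spectral norm ||W||_2 is the largest singular value of a weight matrix W, and the stable rank ||W||_F^2 / ||W||_2^2 never exceeds the algebraic rank, so smaller values suggest an effectively simpler linear map. As a minimal illustrative sketch (assuming PyTorch and a toy network, neither taken from the talk itself), both can be computed directly from a trained network's weights:

    import torch

    def spectral_norm(W):
        # Spectral norm: the largest singular value of W.
        return torch.linalg.matrix_norm(W, ord=2)

    def stable_rank(W):
        # Stable rank: ||W||_F^2 / ||W||_2^2, always <= rank(W).
        return torch.linalg.matrix_norm(W, ord="fro") ** 2 / spectral_norm(W) ** 2

    # Toy feed-forward network: representation layers plus a linear hypothesis.
    net = torch.nn.Sequential(
        torch.nn.Linear(784, 256),
        torch.nn.ReLU(),
        torch.nn.Linear(256, 10),
    )

    for name, p in net.named_parameters():
        if p.dim() == 2:  # weight matrices only; skip bias vectors
            W = p.detach()
            print(f"{name}: stable rank = {stable_rank(W).item():.2f}, "
                  f"spectral norm = {spectral_norm(W).item():.2f}")

Tracking these two quantities over training is one way to check the abstract's claim that the adaptive regularizer steers optimization toward local optima with smaller values of both.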


Bio: Han Zhao is a final-year PhD student in the Machine Learning Department at Carnegie Mellon University, advised by Prof. Geoffrey J. Gordon. He has broad interests in both theoretical and applied machine learning and artificial intelligence. In particular, he works on efficient probabilistic reasoning with Sum-Product Networks (SPNs), adversarial representation learning and its applications in domain adaptation, fair machine learning, and privacy-preserving learning. Han Zhao has research experience at Huawei Noah's Ark Lab, Baidu Research, Microsoft Research, and D. E. Shaw. Before coming to CMU, he obtained his bachelor's degree in computer science from Tsinghua University (honored as a Distinguished Graduate) and his master's degree in mathematics from the University of Waterloo (honored with the Alumni Gold Medal Award).