Continuous Optimization Seminar - Shenghao Yang

Wednesday, December 5, 2018 4:00 pm EST (GMT -05:00)

Title: Stochastic subgradient method converges on tame functions

Speaker: Shenghao Yang
Affiliation: University of Waterloo
Room: MC 5479

Abstract: In this talk, I will present a paper (https://arxiv.org/abs/1804.07795) on the convergence of the stochastic subgradient method in a nonconvex and nonsmooth setting. The stochastic subgradient method underlies several widely used solvers, including Google's TensorFlow and the open source PyTorch library, but its convergence guarantees are not well understood in nonconvex and nonsmooth settings. Very recently, Davis et al. proved that the stochastic subgradient method produces limit points that are all first-order stationary on locally Lipschitz functions whose graph admits a Whitney stratification, a broad class of functions that subsumes all popular deep learning architectures.

I will also go through the main ideas behind their proof.
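
For background, the following is a minimal Python sketch of the stochastic subgradient iteration the abstract refers to. The toy objective, step-size rule, and noise model are illustrative assumptions chosen for this sketch; this is not the paper's code and not how the solvers mentioned above are implemented.

import numpy as np

# Sketch of the stochastic subgradient iteration
#   x_{k+1} = x_k - alpha_k * (g_k + xi_k),
# where g_k is a Clarke subgradient of f at x_k and xi_k is zero-mean noise.

def f(x):
    # Toy objective (assumption): |x0| is nonsmooth, min(|x1|, 1) is nonsmooth and nonconvex.
    return abs(x[0]) + min(abs(x[1]), 1.0)

def subgradient(x):
    # One valid Clarke subgradient of f at x (choices at the kinks are arbitrary).
    g0 = np.sign(x[0])
    g1 = np.sign(x[1]) if abs(x[1]) < 1.0 else 0.0
    return np.array([g0, g1])

rng = np.random.default_rng(0)
x = np.array([2.0, 0.7])
for k in range(1, 5001):
    alpha = 1.0 / k                              # step sizes: non-summable, with summable squares
    g = subgradient(x) + 0.1 * rng.standard_normal(2)   # noisy subgradient estimate
    x = x - alpha * g

print(x)  # limit points are (approximately) first-order stationary for f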