Please Note: This seminar will be given online.
University of California, Berkeley
Link to join seminar: Hosted on Zoom
Understanding Statistical-vs-Computational Tradeoffs via Low-Degree Polynomials
Many high-dimensional statistical tasks---including sparse PCA (principal component analysis), community detection, Gaussian clustering, tensor PCA, and more---exhibit a striking gap between what can be achieved statistically (by a brute-force algorithm) and what can be achieved by the best known polynomial-time algorithms. An emerging framework for understanding these gaps is based on analyzing the class of "low-degree polynomial algorithms". This is a powerful class of algorithms (which includes, for instance, spectral methods and approximate message passing), and so the provable failure of this class is a form of concrete evidence for the inherent computational hardness of statistical problems.
While low-degree algorithms were initially studied in the context of hypothesis testing problems, I will present some new general-purpose techniques for determining the precise limitations of low-degree algorithms for other types of statistical tasks, including estimation and optimization. I will focus primarily on two problems: recovering a planted submatrix of elevated mean in a Gaussian matrix, and finding a large independent set in a sparse random graph. This line of work illustrates that low-degree polynomials provide a unifying framework for studying the computational complexity of a wide variety of statistical tasks.
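As a toy illustration of the planted submatrix setting (not taken from the talk), the sketch below samples an n-by-n Gaussian matrix, optionally adds a mean shift mu on a hidden k-by-k submatrix, and evaluates the simplest possible low-degree polynomial algorithm: the degree-1 statistic that sums all entries. All parameter values (n, k, mu) and function names are illustrative choices, not from the source.

```python
import random

def sample_matrix(n, k, mu, planted, rng):
    # n x n matrix of i.i.d. standard Gaussians; if planted, shift the
    # mean by mu on a hidden k x k submatrix (illustrative model only)
    A = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    if planted:
        rows = rng.sample(range(n), k)
        cols = rng.sample(range(n), k)
        for i in rows:
            for j in cols:
                A[i][j] += mu
    return A

def degree_one_statistic(A):
    # A degree-1 polynomial in the matrix entries: the sum of all entries.
    # Under the planted distribution its mean shifts by roughly mu * k^2.
    return sum(sum(row) for row in A)

rng = random.Random(0)
n, k, mu, trials = 40, 10, 0.5, 50
null_mean = sum(
    degree_one_statistic(sample_matrix(n, k, mu, False, rng))
    for _ in range(trials)
) / trials
planted_mean = sum(
    degree_one_statistic(sample_matrix(n, k, mu, True, rng))
    for _ in range(trials)
) / trials
print(null_mean, planted_mean)  # planted mean is shifted upward by ~ mu * k^2
```

This degree-1 test only succeeds when the planted shift mu * k^2 dominates the null fluctuations (of order n); the interesting regimes discussed in the abstract are those where distinguishing or recovery requires higher-degree polynomials, or provably cannot be done by any polynomial of low degree.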
Based primarily on joint works with David Gamarnik, Aukosh Jagannath, and Tselil Schramm.