Seminar: Sirisha Rambhatla

Tuesday, May 11, 2021 1:00 pm - 2:15 pm EDT (GMT -04:00)

Provably Learning from Data: New Algorithms and Models for Matrix and Tensor Decompositions 

Dr. Sirisha Rambhatla

Department of Computer Science,

University of Southern California, Los Angeles, CA, USA

Via https://zoom.us/j/97054271634?pwd=aEhtQnZtYzNZc2VIQTFDL2N3MmdGQT09

Abstract

Learning and leveraging patterns from data has fueled the recent major advances in data-driven services. As these solutions become ubiquitous and get incorporated into critical applications such as healthcare and transportation, there is an increasing need to understand their decision-making mechanisms, to know their limits, and to develop new algorithms with guarantees. Moreover, with data being generated at unprecedented rates, these algorithms also need to be fast, learn on-the-fly (online), handle large volumes of data (scalable), and be computationally efficient.

In this talk, I will present my recent work on provable algorithms for matrix and tensor factorization. Specifically, I will first present an algorithm for dictionary learning, where the task is to represent a given data vector as a sparse linear combination (coefficients) of columns of a matrix known as a dictionary. Since both the dictionary and the coefficients parameterizing the linear model are a priori unknown, this entails solving an inherently non-convex optimization task. The current state-of-the-art provable dictionary learning algorithms focus only on recovering the dictionary, which leads to an irreducible error in the estimation and consequently jeopardizes sparse coefficient recovery.
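To make the setup concrete, the following is a minimal sketch (not the speaker's algorithm) of the sparse linear model underlying dictionary learning: a data vector y is generated as A @ x, where A is the dictionary and x is sparse. It also illustrates the coefficient-update half of a generic alternating scheme, assuming the dictionary and support are known; all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generative model: y = A @ x, with A an (n x m) dictionary with
# unit-norm columns and x an m-dimensional vector with s nonzeros.
n, m, s = 20, 50, 3
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)          # normalize dictionary columns

support = rng.choice(m, size=s, replace=False)
x_true = np.zeros(m)
x_true[support] = rng.standard_normal(s)
y = A @ x_true                          # observed data vector

# Coefficient step of a generic alternating scheme: with the
# dictionary fixed and the support identified, the coefficients
# follow from a least-squares fit restricted to that support.
x_hat = np.zeros(m)
x_hat[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)

print(np.allclose(x_hat, x_true))       # exact recovery, noiseless case
```

In the actual learning problem both A and x are unknown, which is what makes the joint task non-convex and why guarantees on both factors, rather than the dictionary alone, matter.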

Overcoming these limitations, I will present a scalable online alternating optimization-based algorithm for dictionary learning with exact recovery guarantees on both the dictionary and the coefficients at a linear rate. Complementary to these theoretical results, I will present neural architectures that can speed up computations, along with numerical simulations demonstrating significantly superior performance over other techniques. Furthermore, leveraging these results, I will also present a provable algorithm for another inherently non-convex task: factorizing a structured tensor.

Finally, I will conclude with an overview of my other ongoing research efforts, including alternative ways of building reliable machine learning algorithms via model interpretability, and future directions.

Biographical Sketch

Dr. Sirisha Rambhatla is a Postdoctoral Researcher in the Computer Science department at the University of Southern California, Los Angeles. Her research focuses on building reliable machine learning algorithms for real-world applications with a focus on Artificial Intelligence (AI) for Healthcare and Surgery using provable algorithms, interpretable machine learning, and learning from limited labels using physics priors and transfer learning.

Recipient of academic awards such as the E. Bruce Lee Memorial Fellowship and the Minnesota High Tech Association's SciTechsperience Fellowship, Dr. Rambhatla received her Ph.D. and Master's in Electrical Engineering from the University of Minnesota--Twin Cities, Minneapolis, MN, U.S.A. in 2019 and 2012, respectively. She received her B.Tech with Honors in Electronics and Telecommunication Engineering from the College of Engineering Roorkee, Roorkee, India in 2010 (University Bronze Medalist).