Predictor-corrector algorithms for stochastic optimization under gradual distribution shift

Citation:

Maity, S., Mukherjee, D., Banerjee, M., & Sun, Y. (2023). Predictor-corrector algorithms for stochastic optimization under gradual distribution shift. In The Eleventh International Conference on Learning Representations. Retrieved from https://openreview.net/forum?id=2SV2dlfBuE3

Abstract:

Time-varying stochastic optimization problems frequently arise in machine learning practice (e.g., gradual domain shift, object tracking, strategic classification). Often, the underlying process that drives the distribution shift is continuous in nature. We exploit this underlying continuity by developing predictor-corrector algorithms for time-varying stochastic optimization that anticipate changes in the underlying data-generating process through a predictor-corrector term in the update rule. The key challenge is the estimation of the predictor-corrector term; a naive approach based on sample-average approximation may fail to converge. We develop a general moving-average based method to estimate the predictor-corrector term and provide error bounds for the iterates, under both exact and noisy access to queries of the relevant derivatives of the loss function. Furthermore, we show (theoretically and empirically in several examples) that our method outperforms non-predictor-corrector methods that do not anticipate changes in the data-generating process.
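
The following is a minimal, hypothetical Python sketch of the idea described in the abstract: a gradient update on a time-varying loss, augmented with a drift term estimated by a moving average of successive gradient differences. The estimator, step sizes, and function names (`predictor_corrector_sgd`, `grad_fn`) are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def predictor_corrector_sgd(grad_fn, x0, T, lr=0.1, beta=0.9):
    """Illustrative predictor-corrector update for a time-varying loss.

    grad_fn(x, t) returns a (possibly noisy) gradient of the loss at time t.
    The drift term is estimated with an exponential moving average of
    successive gradient differences -- a simplified stand-in for the
    paper's moving-average estimator.
    """
    x = np.asarray(x0, dtype=float)
    drift = np.zeros_like(x)   # moving-average estimate of the temporal drift
    g_prev = grad_fn(x, 0)
    for t in range(1, T + 1):
        g = grad_fn(x, t)
        # Moving average of gradient differences approximates how the
        # gradient field changes over time.
        drift = beta * drift + (1.0 - beta) * (g - g_prev)
        # Corrector: descend the current gradient.
        # Predictor: anticipate the shift via the estimated drift term.
        x = x - lr * (g + drift)
        g_prev = g
    return x

# Toy example: quadratic loss whose minimizer moves slowly over time,
# mimicking a gradual distribution shift.
if __name__ == "__main__":
    def grad_fn(x, t, noise=0.01):
        target = np.array([np.sin(0.01 * t), np.cos(0.01 * t)])
        return (x - target) + noise * np.random.randn(2)

    x_final = predictor_corrector_sgd(grad_fn, x0=np.zeros(2), T=1000)
    print("final iterate:", x_final)
```

Because the drift term extrapolates the motion of the data-generating process, the iterate tracks the moving minimizer more closely than plain SGD would; with `beta = 0` and the drift term dropped, the sketch reduces to the non-anticipating baseline the abstract compares against.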

Notes:

Publisher's Version

Last updated on 12/07/2024