Adaptive Group Lasso Neural Network Models for Functions of Few Variables and Time-Dependent Data

Citation:

N. Richardson, L. S. T. Ho, and G. Tran, "Adaptive Group Lasso Neural Network Models for Functions of Few Variables and Time-Dependent Data", Sampling Theory, Signal Processing, and Data Analysis, 2023.

Abstract:

In this paper, we propose an adaptive group Lasso deep neural network for high-dimensional function approximation where the input data are generated from a dynamical system and the target function depends on a few active variables or a few linear combinations of variables. We approximate the target function by a deep neural network and enforce an adaptive group Lasso constraint on the weights of a suitable hidden layer in order to represent the constraint on the target function. We use the proximal algorithm to optimize the penalized loss function. Using the non-negativity of the Bregman distance, we prove that the proposed optimization procedure achieves loss decay. Our empirical studies show that the proposed method outperforms recent state-of-the-art methods, including the sparse dictionary matrix method and neural networks with or without a group Lasso penalty.
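The core step of the proximal algorithm for a group Lasso penalty is group soft-thresholding: after a gradient step, each group of weights is shrunk by its Euclidean norm, and groups with small norm are zeroed out, which is how inactive input variables get pruned. The sketch below is illustrative only, not the paper's implementation; the grouping of the first layer's weights by input variable (one row per variable) and the function names are assumptions.

```python
import numpy as np

def group_soft_threshold(W, lam, step):
    """Proximal operator of the group Lasso penalty lam * sum_j ||W[j, :]||_2.

    Illustrative sketch: each row of W is treated as one group (e.g. the
    outgoing weights of one input variable in the first hidden layer).
    Rows whose norm falls below step * lam are zeroed, deselecting that
    variable; the remaining rows are shrunk toward zero.
    """
    norms = np.linalg.norm(W, axis=1)
    # Scale factor per group: max(0, 1 - step*lam / ||row||); guard against /0.
    scale = np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12))
    return W * scale[:, None]

# Toy example: the first variable has large weights, the second is nearly inactive.
W = np.array([[3.0, 4.0],
              [0.1, 0.0]])
W_prox = group_soft_threshold(W, lam=1.0, step=0.5)
# Row 0 (norm 5) is shrunk by factor 0.9; row 1 (norm 0.1 < 0.5) is zeroed.
```

In a full training loop, this operator would be applied to the chosen layer's weights after each gradient update, so that sparsity emerges at the group (variable) level rather than entry-wise.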

Notes:

Publisher's Version

Last updated on 07/27/2023