PhD Thesis Defence | Yangang Chen, Numerical Methods for Hamilton-Jacobi-Bellman Equations with Applications

Tuesday, July 30, 2019, 2:00 pm EDT (GMT -04:00)

MC 6460


Yangang Chen | Applied Math, University of Waterloo


Numerical Methods for Hamilton-Jacobi-Bellman Equations with Applications


Hamilton-Jacobi-Bellman (HJB) equations are nonlinear partial differential equations (PDEs) that arise in optimal control. In this thesis, we propose numerical methods for HJB equations arising from three specific applications.

First, we study numerical methods for the HJB equation coupled with a Kolmogorov-Fokker-Planck (KFP) equation that arises from mean field games. To solve the discretized nonlinear systems efficiently, we propose a multigrid method. The main novelty of our approach is that we add artificial viscosity to the directly discretized coarse-grid operators, which makes the coarse-grid error estimates more accurate. The convergence rate of the proposed multigrid method is mesh-independent and faster than that of existing methods in the literature.
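To illustrate the coarse-grid stabilization idea, here is a minimal two-grid sketch in NumPy for a 1D linear convection-diffusion model problem (a stand-in, not the thesis's HJB/KFP system): the coarse-grid operator is a direct discretization augmented with O(H) artificial viscosity. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def upwind_matrix(n, eps, a, art_visc=0.0):
    """1D model problem -eps*u'' + a*u' = f on (0,1), zero Dirichlet BCs,
    upwind convection. art_visc > 0 adds O(h) artificial viscosity, as one
    might do on coarse grids to stabilize the coarse-grid correction."""
    h = 1.0 / (n + 1)
    e = eps + art_visc * h                       # (possibly augmented) viscosity
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2.0 * e / h**2 + a / h
        if i > 0:
            A[i, i - 1] = -e / h**2 - a / h      # upwind (a > 0): bias left
        if i < n - 1:
            A[i, i + 1] = -e / h**2
    return A

def two_grid_cycle(A_f, A_c, b, u, nu=3, omega=2.0 / 3.0):
    """One two-grid cycle: damped Jacobi smoothing + exact coarse solve."""
    D = np.diag(A_f)
    for _ in range(nu):                          # pre-smoothing
        u = u + omega * (b - A_f @ u) / D
    r = b - A_f @ u                              # fine-grid residual
    rc = 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]  # full weighting
    ec = np.linalg.solve(A_c, rc)                # coarse-grid correction
    ec_pad = np.concatenate(([0.0], ec, [0.0]))
    e = np.zeros_like(u)
    e[1::2] = ec                                 # coarse points at odd indices
    e[0::2] = 0.5 * (ec_pad[:-1] + ec_pad[1:])   # linear interpolation
    u = u + e
    for _ in range(nu):                          # post-smoothing
        u = u + omega * (b - A_f @ u) / D
    return u

n, nc = 63, 31                                   # fine/coarse interior points
A_f = upwind_matrix(n, eps=0.05, a=1.0)          # fine grid: no extra viscosity
A_c = upwind_matrix(nc, eps=0.05, a=1.0, art_visc=1.0)  # coarse: added viscosity
b = np.ones(n)
u = np.zeros(n)
for _ in range(20):
    u = two_grid_cycle(A_f, A_c, b, u)
```

Without stabilization, the directly discretized coarse operator can misrepresent smooth error components of a convection-dominated fine-grid problem; the added O(H) viscosity trades a small discretization mismatch for a robust coarse-grid correction.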

Next, we investigate numerical methods for the HJB formulation that arises from a mass-transport image registration model. We convert the model's PDE (a Monge-Ampère equation) to an equivalent HJB equation, propose a monotone mixed discretization, and prove that it converges to the viscosity solution. We then propose multigrid methods for the mixed discretization, in which the wide-stencil points serve as coarse-grid points and injection at those points serves as the restriction operator; the resulting convergence rate is mesh-independent. Moreover, we propose a novel periodic boundary condition for the image registration PDE, so that when two images are related by a combination of translation and non-rigid deformation, the numerical scheme correctly recovers the underlying transformation.
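One standard route for the Monge-Ampère-to-HJB conversion (in the spirit of Krylov's characterization of the determinant; the thesis's exact formulation may differ) rests on the following identity, which follows from the AM-GM inequality applied to the eigenvalues of $A^{1/2} D^2u\, A^{1/2}$:

```latex
% For a convex solution u in d dimensions and f > 0, with the minimum taken
% over symmetric positive definite matrices A of unit determinant,
\[
  \bigl(\det D^2 u\bigr)^{1/d}
  \;=\; \frac{1}{d}\,
  \min_{\substack{A \succ 0 \\ \det A = 1}} \operatorname{tr}\!\bigl(A\, D^2 u\bigr),
\]
% so the Monge--Ampere equation det(D^2 u) = f can be rewritten as the
% stationary HJB equation
\[
  \min_{\substack{A \succ 0 \\ \det A = 1}}
  \Bigl[\operatorname{tr}\!\bigl(A\, D^2 u\bigr) \;-\; d\, f^{1/d}\Bigr] \;=\; 0 .
\]
```

The minimizing $A$ is a multiple of $(D^2u)^{-1}$, and the resulting min-form is exactly the structure that monotone (wide-stencil) discretizations are designed to handle.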

Finally, we propose a deep neural network framework for HJB equations emerging from the study of high-dimensional American options. We convert the HJB equation to an equivalent backward stochastic differential equation (BSDE), use the least-squares residual of the BSDE as the loss function, and propose a new neural network architecture that exploits domain knowledge of American options. The proposed framework yields American option prices and deltas over the entire space-time domain, rather than at a single point. Its computational cost grows quadratically with the dimension, which addresses the curse of dimensionality that state-of-the-art approaches suffer from.
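A minimal sketch of the least-squares BSDE residual idea, assuming a one-dimensional Black-Scholes put and a toy linear-in-parameters model for $Y$ and $Z$ in place of a neural network; the American early-exercise constraint is omitted for brevity, and all function names and parameter values are hypothetical.

```python
import numpy as np

def bsde_residual_loss(theta, S0=100.0, K=100.0, r=0.05, sigma=0.2,
                       T=1.0, n_steps=20, n_paths=256, seed=0):
    """Monte-Carlo least-squares residual of the Euler-discretized pricing BSDE
        dY_t = r*Y_t dt + Z_t dW_t,   Y_T = max(K - S_T, 0)  (put payoff),
    where Y and Z are given by a toy parametric model (hypothetical):
        Y(t, s) = theta[0] + theta[1]*s + theta[2]*(T - t),   Z(t, s) = theta[3]*s.
    Returns the scalar loss to be minimized over theta."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    loss = 0.0
    for n in range(n_steps):
        t = n * dt
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        Y_n = theta[0] + theta[1] * S + theta[2] * (T - t)
        Z_n = theta[3] * S
        S_next = S * np.exp((r - 0.5 * sigma**2) * dt + sigma * dW)  # GBM step
        Y_next = theta[0] + theta[1] * S_next + theta[2] * (T - (t + dt))
        # one-step BSDE residual: Y_{n+1} - Y_n - r*Y_n*dt - Z_n*dW
        res = Y_next - Y_n - r * Y_n * dt - Z_n * dW
        loss += np.mean(res**2)
        S = S_next
    # terminal condition: the model's Y at time T should match the payoff
    loss += np.mean((theta[0] + theta[1] * S - np.maximum(K - S, 0.0))**2)
    return loss
```

Minimizing this loss over the model parameters fits the value function on all simulated paths at once, which is why the approach returns prices and deltas ($Z_t = \sigma S_t \,\partial Y/\partial S$) across the whole space-time domain rather than at a single point.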