PhD Comprehensive Exam | Marty Mukherjee, Diffusion Models and Neural Operators for Solving PDEs with Applications in Control

Tuesday, March 5, 2024 10:00 am - 10:00 am EST (GMT -05:00)

M3 3127

Candidate

Marty Mukherjee | Applied Mathematics, University of Waterloo

Title

Diffusion Models and Neural Operators for Solving PDEs with Applications in Control

Abstract

Diffusion models are a recent class of generative models that learn a score function in order to reverse a process that iteratively corrupts data by adding noise. They have achieved state-of-the-art performance in image, audio, and, more recently, video generation, and have also been applied to solving partial differential equations (PDEs). In the first part of this talk, I will present my recent research on using diffusion models to solve forward and inverse problems for PDEs. This work trains a diffusion model on pairs of solutions and parameters of the 2D Poisson equation with homogeneous Dirichlet boundary conditions. I explore sampling the solution or the parameters conditioned on their counterpart, and observe that the pre-trained diffusion model alone performs poorly. To mitigate this, I employ denoising diffusion restoration models (DDRM), originally developed for linear inverse problems. The resulting method outperforms other data-driven approaches (PINNs, DeepONets) in recovering both the solution and the parameters.
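The iterative corruption process described above can be sketched with a standard DDPM-style forward pass; the linear noise schedule and the toy sinusoidal "solution field" below are illustrative assumptions, not the schedule or data used in the talk:

```python
import numpy as np

def forward_diffuse(x0, t, alpha_bar, rng):
    """Corrupt clean data x0 to noise level t:
    x_t = sqrt(ab_t) * x0 + sqrt(1 - ab_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps, eps

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
alpha_bar = np.cumprod(1.0 - betas)     # cumulative signal retention

rng = np.random.default_rng(0)
# Toy stand-in for a discretized 2D PDE solution field.
xs = np.linspace(0.0, 1.0, 32)
x0 = np.sin(np.pi * xs)[:, None] * np.sin(np.pi * xs)[None, :]

x_mid, _ = forward_diffuse(x0, T // 2, alpha_bar, rng)
x_end, _ = forward_diffuse(x0, T - 1, alpha_bar, rng)
```

A trained score network would then be used to run this process in reverse, denoising `x_end` back toward a sample from the data distribution.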
 
I then aim to extend this work to Lyapunov control. Lyapunov functions are positive definite functions that decrease along the trajectories of a system, making them a powerful tool for certifying stability. However, training a neural Lyapunov function requires verifying that the network satisfies all the conditions of a Lyapunov function, which is computationally expensive. Moreover, a neural Lyapunov function certifies only a single control system, and so does not account for uncertainty in the model parameters or generalize to systems with slightly different dynamics. I aim to address this by using diffusion models to generate Lyapunov functions conditioned on Lyapunov-stable vector fields in 2D. I extend this model to derive stabilizing controllers for control-affine systems using a simple Langevin sampling method. In this way, the proposed project can develop a foundation model that stabilizes a large class of 2D control problems.
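The Lyapunov conditions mentioned above can be checked exactly for a linear system, which gives a feel for what the (expensive) neural verification step must certify in the nonlinear case. This is an illustrative sketch with a hand-picked stable system, not the talk's method; the Kronecker solve is a dependency-free stand-in for `scipy.linalg.solve_continuous_lyapunov`:

```python
import numpy as np

def lyapunov_matrix(A, Q):
    """Solve the Lyapunov equation A^T P + P A = -Q by Kronecker vectorization."""
    n = A.shape[0]
    I = np.eye(n)
    # vec(A^T P) = (I kron A^T) vec(P), vec(P A) = (A^T kron I) vec(P),
    # using column-major (Fortran-order) vectorization.
    M = np.kron(I, A.T) + np.kron(A.T, I)
    p = np.linalg.solve(M, -Q.flatten(order="F"))
    return p.reshape(n, n, order="F")

# A toy stable linear system dx/dt = A x (eigenvalues -1 and -2).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Q = np.eye(2)
P = lyapunov_matrix(A, Q)
# V(x) = x^T P x is then a Lyapunov function: P is positive definite
# and dV/dt = x^T (A^T P + P A) x = -x^T Q x < 0 along trajectories.
```

For a neural Lyapunov candidate on a nonlinear system, these positivity and decrease conditions must instead be verified over a whole region of state space, which is what makes the training loop costly.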
 
Finally, I aim to explore how PDEs can improve training or sampling in generative models such as consistency models. I observe that the consistency function is the solution of a diffusion equation with a time-dependent diffusion coefficient, whose initial condition is the identity function. For this project, I aim to solve this high-dimensional PDE using deep-learning-based methods such as PINNs.
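The PDE characterization above can be written out as follows; the symbol $g(t)$ for the time-dependent diffusion coefficient is my own notation, and its exact form depends on the noise schedule:

```latex
\partial_t f(x, t) \;=\; \tfrac{1}{2}\, g(t)^2 \,\Delta_x f(x, t),
\qquad f(x, 0) \;=\; x,
```

where $f(\cdot, t)$ denotes the consistency function at noise level $t$ and $\Delta_x$ is the Laplacian in the data variable. Since $x$ ranges over the (high-dimensional) data space, standard grid-based solvers are infeasible, which motivates the mesh-free, deep-learning-based solvers such as PINNs mentioned above.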