PhD Comprehensive Exam | Marty Mukherjee, Diffusion Models and Neural Operators for Solving PDEs with Applications in Control

Tuesday, March 5, 2024 10:00 AM EST

M3 3127

Candidate

Marty Mukherjee | Applied Mathematics, University of Waterloo

Title

Diffusion Models and Neural Operators for Solving PDEs with Applications in Control

Abstract

Diffusion models are a recent class of generative models that learn a score function in order to reverse a process in which data is iteratively corrupted with noise. They have achieved state-of-the-art performance in image, audio, and, more recently, video generation, and have also been applied to solving partial differential equations (PDEs). In the first part of this talk, I will present my latest research on the use of diffusion models for solving forward and inverse problems in PDEs. This work trains a diffusion model on pairs of solutions and parameters for the 2D Poisson equation with homogeneous Dirichlet boundary conditions. I explore sampling the solution or its parameters conditioned on its counterpart and observe that the pre-trained diffusion model does not perform well. To mitigate this problem, I employ denoising diffusion restoration models (DDRM), which were originally developed to solve linear inverse problems. This method outperforms other data-driven approaches (PINNs, DeepONets) in recovering both the solution and the parameters.
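As background for the setup above, a minimal sketch, not the candidate's code, of how (parameter, solution) training pairs for the 2D Poisson equation with homogeneous Dirichlet boundary conditions could be generated, assuming a standard five-point finite-difference discretization:

```python
import numpy as np

def poisson_solve(f, n):
    """Solve -Laplacian(u) = f on the unit square with u = 0 on the
    boundary, using the five-point stencil on an n x n interior grid."""
    h = 1.0 / (n + 1)
    # 1D second-difference matrix, scaled by 1/h^2
    T = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    I = np.eye(n)
    A = np.kron(I, T) + np.kron(T, I)   # discrete 2D (negative) Laplacian
    u = np.linalg.solve(A, f.ravel())
    return u.reshape(n, n)

n = 15
f = np.ones((n, n))       # one example source term; a dataset would sample many
u = poisson_solve(f, n)
print(u.max())            # peak of the solution, attained at the center
```

A training set for the diffusion model would pair many sampled source terms `f` with the corresponding solved fields `u`.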
 
I then aim to extend this work to Lyapunov control. Lyapunov functions are positive definite functions that decrease along the trajectories of a system, making them a powerful tool for verifying that trajectories stabilize. However, training neural Lyapunov functions involves verifying that a neural network satisfies all the conditions required of a Lyapunov function, which is computationally expensive. Additionally, a neural Lyapunov function certifies only a single control system, so it cannot account for uncertainties in the model parameters or generalize to systems with slightly different dynamics. I aim to mitigate this problem by using diffusion models to output Lyapunov functions conditioned on Lyapunov-stable vector fields in 2D. I extend this model to derive stabilizing controllers for control-affine systems using a simple Langevin sampling method. In this way, my proposed project can develop a foundation model that stabilizes a large class of 2D control problems.
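To make the Lyapunov conditions above concrete, here is a simple illustration of my own (not part of the candidate's method, which uses neural functions and formal verification): for a stable linear 2D system dx/dt = Ax, a quadratic Lyapunov function V(x) = xᵀPx can be obtained by solving the Lyapunov equation AᵀP + PA = -Q, and the conditions V > 0, dV/dt < 0 can be spot-checked on sampled states:

```python
import numpy as np

# Stable 2D linear system dx/dt = A x (eigenvalues -1 and -2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.eye(2)

# Solve the continuous Lyapunov equation A^T P + P A = -Q by
# vectorizing it: (I kron A^T + A^T kron I) vec(P) = -vec(Q).
I = np.eye(2)
M = np.kron(I, A.T) + np.kron(A.T, I)
P = np.linalg.solve(M, -Q.ravel()).reshape(2, 2)

# Monte-Carlo check of the Lyapunov conditions V(x) > 0 and
# dV/dt = x^T (A^T P + P A) x < 0 on sampled nonzero states.
rng = np.random.default_rng(0)
xs = rng.standard_normal((1000, 2))
V = np.einsum('ni,ij,nj->n', xs, P, xs)
Vdot = np.einsum('ni,ij,nj->n', xs, A.T @ P + P @ A, xs)
print((V > 0).all(), (Vdot < 0).all())
```

For a neural Lyapunov candidate there is no closed-form P, which is why the verification step the abstract mentions is expensive: the conditions must hold everywhere in the region of interest, not just on samples.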
 
Finally, I aim to explore how PDEs can be used to improve training or sampling in generative models such as consistency models. I observe that the consistency function is the solution of a diffusion (heat) equation with a time-dependent diffusion coefficient, whose initial condition is the identity function. For this project, I aim to solve this high-dimensional PDE using deep-learning-based methods such as PINNs.
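As a hypothetical 1D illustration of the PDE structure mentioned above (my own sketch, not the candidate's derivation): for the heat equation du/dt = D(t) d²u/dx² with a time-dependent coefficient D(t), a Gaussian whose variance grows as s₀² + 2∫₀ᵗ D(τ)dτ is an exact solution, which a finite-difference residual check confirms:

```python
import numpy as np

def u(x, t, s0=1.0):
    """Exact Gaussian solution of du/dt = D(t) d2u/dx2 with D(t) = t:
    the variance grows as s0^2 + 2 * integral of D = s0^2 + t^2."""
    var = s0**2 + t**2
    return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def residual(x, t, h=1e-4):
    """Finite-difference estimate of the PDE residual du/dt - D(t) u_xx."""
    du_dt = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    return du_dt - t * u_xx   # D(t) = t

print(residual(0.3, 0.5))     # close to zero for the exact solution
```

A PINN approach to the high-dimensional version would penalize exactly this residual (plus the identity initial condition) at sampled collocation points instead of on a grid.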

