Zoom (Please contact ddelreyfernandez@uwaterloo.ca for meeting link)
Speaker
Suchuan Dong, Center for Computational and Applied Mathematics, Department of Mathematics, Purdue University
Title
Can Neural Network-Based Methods Out-compete Traditional Numerical Techniques for Computational PDEs?
Abstract
Scientific machine learning techniques have witnessed dramatic growth in the past few years. Here we focus on the question posed in the title: can neural network-based methods out-compete traditional numerical methods for computational PDEs? This question has hung in the air ever since the early work on neural networks for differential equations in the 1990s, and it has intrigued both computational mathematicians and machine learning practitioners. By "out-compete" we mean that one method achieves better accuracy under the same computational budget/cost, or incurs a lower computational cost to achieve the same accuracy. While their computational performance is promising (and has been improving), existing deep neural network (DNN) based PDE solvers suffer from several drawbacks that make them numerically less than satisfactory and computationally uncompetitive. The most prominent are their limited accuracy, a general lack of convergence at a well-defined rate, and an extremely high computational cost (very long training times). Owing to these limitations, these solvers seem to fall short, at least in their current state, and cannot compete with traditional numerical methods, except perhaps for certain problems such as high-dimensional ones.
In this talk I will present a neural network-based method (termed local extreme learning machines, or locELM) for solving linear and nonlinear PDEs that exhibits markedly different computational performance from the DNN-based PDE solvers above and, in a sense, overcomes these drawbacks. The method combines the ideas of extreme learning machines, domain decomposition, and local neural networks. The field solution on each sub-domain is represented by a local randomized feed-forward neural network, and C^k continuity is imposed on the sub-domain boundaries. In each local neural network the hidden-layer coefficients are set to random values and fixed (not trainable), and only the output-layer coefficients are trained. The overall neural network is trained by a linear or nonlinear least-squares computation, not by back-propagation (gradient descent) type algorithms. The presented method exhibits a clear sense of convergence with respect to the degrees of freedom in the neural network. For smooth solutions its numerical errors decrease exponentially as the number of training parameters or the number of training data points increases, much like traditional high-order spectral or spectral-element methods. We compare the locELM method with state-of-the-art DNN-based PDE solvers, such as the physics-informed neural network (PINN) and the deep Galerkin method (DGM), and with classical and high-order finite element methods (FEM). The numerical errors and network training times of locELM are considerably smaller, typically by orders of magnitude, than those of DGM and PINN. We show evidence that the current method far outperforms the classical second-order FEM. Its computational performance is comparable to that of the high-order FEM for smaller problem sizes, and it markedly outperforms the high-order FEM for larger problem sizes. A number of numerical benchmarks with forward and inverse PDEs will be presented to demonstrate these points.
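To make the core idea concrete, the following is a minimal illustrative sketch (not the speaker's locELM code) of an extreme-learning-machine collocation solve on a single domain: the hidden-layer weights w and biases b are random and fixed, and only the output coefficients beta are determined, by one linear least-squares solve rather than back-propagation. The problem setup (u'' = f on [0, 1] with homogeneous boundary conditions, tanh activations, and the parameter choices M, N) is an assumption for illustration; locELM additionally decomposes the domain into sub-domains and enforces C^k continuity across their boundaries.

```python
import numpy as np

# Illustrative single-domain ELM collocation sketch (assumed setup, not locELM):
# solve u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0.

rng = np.random.default_rng(0)
M = 100                      # number of hidden neurons (random, fixed)
N = 200                      # number of collocation points

# Hidden-layer coefficients: set to random values and NOT trained.
w = rng.uniform(-10.0, 10.0, size=M)
b = rng.uniform(-10.0, 10.0, size=M)

# Manufactured problem: exact solution u(x) = sin(pi x), so f = -pi^2 sin(pi x).
f = lambda x: -np.pi**2 * np.sin(np.pi * x)

x = np.linspace(0.0, 1.0, N)[:, None]   # collocation points, shape (N, 1)
t = np.tanh(x * w + b)                  # hidden activations, shape (N, M)

# Second derivative of tanh(w*x + b) w.r.t. x: w^2 * (-2 t (1 - t^2)).
d2t = (w**2) * (-2.0 * t * (1.0 - t**2))

# Least-squares system: PDE residual rows plus two boundary-condition rows.
A = np.vstack([d2t,
               np.tanh(0.0 * w + b)[None, :],   # row enforcing u(0) = 0
               np.tanh(1.0 * w + b)[None, :]])  # row enforcing u(1) = 0
rhs = np.concatenate([f(x).ravel(), [0.0, 0.0]])

# Only the output-layer coefficients beta are "trained": a single linear
# least-squares solve, with no back-propagation or gradient descent.
beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u_approx = t @ beta
err = np.max(np.abs(u_approx - np.sin(np.pi * x.ravel())))
print(f"max pointwise error: {err:.3e}")
```

In this sketch, increasing M and N drives the error down rapidly for the smooth manufactured solution, which mirrors the exponential error decrease with training parameters and training data points described in the abstract.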