MC 6460

## Speaker

Duncan Mowbray

Professor, Department of Physics, School of Physical Sciences and Nanotechnology, Yachay Tech University, Ecuador

## Title

Using Neural Networks for Approximating Functionals: Applications in Computational Materials Science

## Abstract

The mathematical problem of finding approximations to an unknown functional, that is, a function of a function, remains open in materials science almost twenty years after Walter Kohn won the Nobel Prize for the development of density functional theory (DFT). This theory is based on the Hohenberg-Kohn theorem, which states that there exists a unique but unknown functional, called the exchange and correlation functional, that relates the electron density of a system subject to a particular external potential to the system's ground state energy. In so doing, DFT effectively reduces a problem of order *O*(*p*^{3*N*}) to one of order *O*(*p*^{3}), where *N* is the number of electrons in the system and *p* is the number of parameters needed to describe each coordinate. However, the lack of a systematic method for approximating this unknown functional means that DFT calculations carry an inherent and often unquantifiable error.

Successive approximations to this unknown functional, based on the density at the same location (the local density approximation, or LDA), incorporating the gradient of the density at the same location (the generalized gradient approximation, or GGA), and also including approximations to the screening of the wavefunction (the so-called hybrid functionals), have met with varying degrees of success. However, the latter are no longer functionals solely of the electron density, making their use much more computationally demanding. Moreover, all such approximations to the exact functional often fail to describe even simple systems, such as H_{2}^{+} and H_{2} dissociation, for various reasons. Altogether, this has made the search for better approximations to the exact functional an area of intensive research for more than thirty years.

In contrast, the Universal Approximation Theorem states that a neural network with a single hidden layer containing a finite number of neurons can approximate continuous functions to an arbitrarily small accuracy *ε* for a given set of training data.
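For reference, the LDA and GGA mentioned above have the following standard textbook forms (these general expressions are not specific to this talk):

```latex
% Local density approximation: the exchange-correlation energy is a
% functional of the density n(r) at each point alone.
E_{xc}^{\mathrm{LDA}}[n] = \int n(\mathbf{r})\,
  \epsilon_{xc}\bigl(n(\mathbf{r})\bigr)\, d^{3}r

% Generalized gradient approximation: the integrand additionally
% depends on the local gradient of the density.
E_{xc}^{\mathrm{GGA}}[n] = \int f\bigl(n(\mathbf{r}),
  \nabla n(\mathbf{r})\bigr)\, d^{3}r
```

Hybrid functionals go beyond these forms by mixing in a fraction of the (orbital-dependent) exact exchange, which is why they are no longer functionals of the density alone.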
In this work, we employ simple neural networks to reproduce standard DFT functionals, and propose exact methods for producing training sets from which a neural network can learn the exact exchange and correlation functional.
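As a minimal sketch of the idea (not the speaker's actual method), the following fits a single-hidden-layer network, the architecture named by the Universal Approximation Theorem, to the textbook LDA exchange energy density per particle, ε_x(ρ) = −(3/4)(3/π)^{1/3} ρ^{1/3}. The network size, learning rate, density range, and training loop are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def lda_exchange(rho):
    """Textbook LDA exchange energy per particle (Hartree atomic units)."""
    return -0.75 * (3.0 / np.pi) ** (1.0 / 3.0) * rho ** (1.0 / 3.0)

# Training data: sampled electron-density values (illustrative range).
rho = np.linspace(0.01, 2.0, 200).reshape(-1, 1)
target = lda_exchange(rho)

# Single hidden layer with tanh activations and a linear output,
# as in the Universal Approximation Theorem.
n_hidden = 32
W1 = rng.normal(0.0, 1.0, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 1.0, (n_hidden, 1))
b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

# Plain gradient descent on the mean-squared error.
lr = 0.05
for step in range(5000):
    h, pred = forward(rho)
    grad_pred = 2.0 * (pred - target) / len(rho)
    gW2 = h.T @ grad_pred
    gb2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T * (1.0 - h ** 2)   # backprop through tanh
    gW1 = rho.T @ grad_h
    gb1 = grad_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(rho)
rmse = float(np.sqrt(np.mean((pred - target) ** 2)))
print(f"RMSE of NN fit to LDA exchange: {rmse:.4f}")
```

Because the LDA exchange functional has a known closed form, the training set here is exact by construction; the open problem the talk addresses is producing comparable training data for the unknown exact exchange and correlation functional.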