Location
MC 6460
Candidate
Avneet Kaur | Applied Mathematics, University of Waterloo
Title
State estimation using Jordan long short-term memory networks
Abstract
State estimation refers to determining the states of a dynamical system that evolves under disturbances, based on noisy measurements, a partially known or unknown initial condition, and a known system model. Jordan recurrent networks have a structure that mimics that of a dynamical system and are thus attractive for estimator design. We extend the Jordan structure to long short-term memory (LSTM) networks to obtain a Jordan LSTM (JLSTM) which, as we show in several examples, is more robust to changes in initial conditions and noise and performs better than an extended Kalman filter (EKF) and a particle filter (PF). It also trains faster than an ELSTM for state estimation when trained to achieve a similar normalized mean squared error (MSE).
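The announcement does not give implementation details of the JLSTM; the following is a minimal sketch, under assumed choices (PyTorch, a single LSTM cell with a linear readout, illustrative names), of the Jordan feedback idea: the network's own output, the state estimate, is fed back as an input at the next time step together with the new measurement, mirroring the recursive structure of a dynamical system.

    # Minimal sketch (assumption, not the thesis implementation) of a Jordan-style
    # LSTM state estimator: the previous state estimate x_hat is fed back as input
    # alongside the current noisy measurement.
    import torch
    import torch.nn as nn

    class JordanLSTMEstimator(nn.Module):
        def __init__(self, meas_dim, state_dim, hidden_dim=64):
            super().__init__()
            # Input = current measurement concatenated with previous state estimate.
            self.cell = nn.LSTMCell(meas_dim + state_dim, hidden_dim)
            self.readout = nn.Linear(hidden_dim, state_dim)
            self.state_dim = state_dim

        def forward(self, y_seq):
            # y_seq: (batch, T, meas_dim) sequence of noisy measurements
            batch, T, _ = y_seq.shape
            h = torch.zeros(batch, self.cell.hidden_size)
            c = torch.zeros(batch, self.cell.hidden_size)
            x_hat = torch.zeros(batch, self.state_dim)   # unknown initial condition
            estimates = []
            for k in range(T):
                inp = torch.cat([y_seq[:, k, :], x_hat], dim=-1)  # Jordan feedback
                h, c = self.cell(inp, (h, c))
                x_hat = self.readout(h)
                estimates.append(x_hat)
            return torch.stack(estimates, dim=1)         # (batch, T, state_dim)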
We also compare a shallow and a deep JLSTM and observe that they perform similarly in terms of average error across time steps and MSE, but the deep JLSTM takes longer to train because of its additional layers.
We also train a JLSTM with a modified, maximum-likelihood-equivalent loss function (JLSTM-ML). We observe that, for Gaussian initial conditions and disturbances, the JLSTM-ML estimates achieve the lowest average error at each time step. The JLSTM-ML also gives the best MSE (in dB) on test data generated with random initial conditions and varying disturbances for the systems considered.
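The announcement does not state the exact form of this loss; as a sketch only, under the Gaussian assumptions mentioned above, a maximum-likelihood-equivalent training objective can be written as a covariance-weighted squared estimation error (the covariance \Sigma and the averaging over N time steps are illustrative assumptions):

    \mathcal{L}_{\mathrm{ML}} = \frac{1}{N} \sum_{k=1}^{N} \left( x_k - \hat{x}_k \right)^{\top} \Sigma^{-1} \left( x_k - \hat{x}_k \right),

which reduces to the ordinary MSE when \Sigma is a scaled identity matrix.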
A mass-spring system, a down pendulum, a reversed Van der Pol oscillator, and Galerkin approximations of Burgers' partial differential equation and of the Kuramoto-Sivashinsky partial differential equation were discretized in time and used as examples for training and testing. The orders of these systems range from 2 to 41.
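As an illustration of how such training and test trajectories can be produced (the discretization scheme, noise levels, and function names below are assumptions, not those of the thesis), a reversed Van der Pol oscillator can be stepped forward in time with a disturbance and measured with noise:

    # Illustrative data-generation sketch: reversed Van der Pol dynamics,
    # forward-Euler time discretization, random initial condition, process
    # disturbance, and a noisy measurement of the first state only.
    import numpy as np

    def simulate_reversed_vdp(T=500, dt=0.01, mu=1.0, noise_std=0.05, rng=None):
        rng = rng or np.random.default_rng()
        x = rng.normal(size=2)                      # random initial condition
        states, measurements = [], []
        for _ in range(T):
            # Reversed Van der Pol vector field (sign-flipped standard dynamics).
            dx = -np.array([x[1], mu * (1 - x[0] ** 2) * x[1] - x[0]])
            w = noise_std * rng.normal(size=2)      # process disturbance
            x = x + dt * dx + np.sqrt(dt) * w       # forward Euler step
            v = noise_std * rng.normal()            # measurement noise
            states.append(x.copy())
            measurements.append(x[0] + v)           # measure position only
        return np.array(states), np.array(measurements)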