|Title||A multimodal variational approach to learning and inference in switching state space models|
|Publication Type||Conference Paper|
|Year of Publication||2004|
|Authors||Lee, L. J., H. Attias, L. Deng, and P. Fieguth|
|Conference Name||2004 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)|
|Keywords||continuous state posterior distribution, discrete time systems, discrete-time signal processing, EM algorithm, frame based representation, frame-based likelihood Viterbi decoding, Gaussian processes, Gaussian state space model, hidden Markov models, inference, learning, learning (artificial intelligence), model-based reasoning, multimodal variational technique, parameter estimation, speech processing, SSS model approximation, state-space methods, switching state space models, time series, variational techniques, Viterbi decoding, windowing technique|
An important general model for discrete-time signal processing is the switching state space (SSS) model, which generalizes both the hidden Markov model and the Gaussian state space model. Inference and parameter estimation in this model are known to be computationally intractable. This paper presents a powerful new approximation to the SSS model, based on a variational technique that preserves the multimodal nature of the continuous state posterior distribution. Furthermore, by incorporating a windowing technique, the resulting EM algorithm has complexity that is only linear in the length of the time series. An alternative Viterbi decoding with frame-based likelihood is also presented, which is crucial for the speech application that originally motivated this work. Our experiments demonstrate the effectiveness of the algorithm through extensive simulations, and a typical example from speech processing is included to show the potential of this approach for practical applications.
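To make the model class concrete, the following is a minimal sketch (not the authors' code; all parameter values are illustrative assumptions) of the generative process of a switching state space model: an HMM-style discrete switch selects which Gaussian linear dynamics drive the continuous hidden state at each time step.

```python
import numpy as np

# Toy SSS generative run: discrete switch s_t follows a Markov chain,
# continuous state x_t follows regime-dependent linear-Gaussian dynamics,
# and observations y_t are noisy readouts of x_t. All values are assumed
# for illustration only.
rng = np.random.default_rng(0)

T = 200                      # length of the time series
P = np.array([[0.95, 0.05],  # switch transition matrix
              [0.10, 0.90]])
A = [0.99, 0.70]             # per-regime state dynamics (scalar for clarity)
q = [0.10, 1.00]             # per-regime process noise std
r = 0.20                     # observation noise std

s = np.zeros(T, dtype=int)   # discrete switch sequence
x = np.zeros(T)              # continuous hidden state
y = np.zeros(T)              # observations

for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])                       # HMM part
    x[t] = A[s[t]] * x[t - 1] + q[s[t]] * rng.standard_normal()  # SSM part
y = x + r * rng.standard_normal(T)

# Exact posterior inference over (s, x) must track a mixture of 2^T
# Gaussian modes, which is why approximations such as the variational
# technique in this paper are needed.
print(y[:5])
```

Exact smoothing in this model requires summing over every switch path, hence the exponential blow-up noted in the abstract; the paper's variational approximation keeps the posterior multimodal while remaining tractable.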