Candidate: Ambareesh Ravi
Title: A Class of Augmented Convolutional Networks Architectures for Efficient Visual Anomaly Detection
Date: June 28, 2021
Time: 2:00 PM
Supervisor(s): Karray, Fakhri
Visual anomaly detection, the task of isolating visual data that do not conform to a defined notion of normality, is crucial for autonomous systems and holds exceptional potential across a spectrum of real-world applications. Prevalent methods rely on massive, complex, inefficient models whose performance is often limited by the availability of data, the extent of hyper-parameter tuning, and the quality of the model design. Moreover, popular deep learning approaches, such as reconstruction-based methods that use a variant of the AutoEncoder and generative methods based on Generative Adversarial Networks, are not inherently designed for the task of anomaly detection. These factors raise the following serious problems:
1. Without a dedicated anomaly detection objective, the general model design may be inefficient and unable to distinguish anomalies well from normal data.
2. The immense time and effort spent searching for hyper-parameters and an optimal model design prevents models from being deployed immediately in applications.
3. The operation of these models involves considerable human intervention and is data-centric, preventing their use in automated, online detection tasks.
4. High-performing, complex models are too large for edge applications with low computational capacity, which require models with a low memory footprint.
To overcome these issues, this work proposes several modular, model-agnostic, efficient, and novel improvements to conventional architectures that can be employed in any AutoEncoder-based anomaly detection task.
The focus of this work is to develop models that are simple, efficient, and low in memory usage, and that reduce the effort expended on hyper-parameter tuning. The proposed improvements can readily augment performance over baseline models by a significant margin, producing robust, discriminative, and discernible representations that help better segregate anomalies from normal samples.
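To make the underlying detection principle concrete, the following is a minimal, illustrative sketch of reconstruction-based anomaly scoring: a model is fit to normal data only, and samples with high reconstruction error are flagged as anomalous. For brevity it uses a linear autoencoder (principal components via SVD) rather than the convolutional AutoEncoders studied in this work; all data, the subspace dimension, and the 99th-percentile threshold are illustrative assumptions, not details from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal" data: points near a 2-D subspace of 10-D space,
# plus a small amount of observation noise (illustrative assumption).
basis = rng.normal(size=(2, 10))
normal = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 10))

# Fit the linear "encoder/decoder": the top-2 principal components
# of the normal training data (weights are tied between encode/decode).
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]

def reconstruction_error(x):
    """Mean squared error between x and its reconstruction."""
    z = (x - mean) @ components.T        # encode to the latent space
    x_hat = z @ components + mean        # decode back to input space
    return np.mean((x - x_hat) ** 2, axis=-1)

# Calibrate a decision threshold on the training (normal) distribution,
# e.g. the 99th percentile of training reconstruction errors.
threshold = np.percentile(reconstruction_error(normal), 99)

def is_anomaly(x):
    return reconstruction_error(x) > threshold

# A sample on the normal subspace vs. an off-subspace sample.
test_normal = rng.normal(size=(1, 2)) @ basis
test_anomaly = rng.normal(size=(1, 10))
```

A deep AutoEncoder replaces the linear encode/decode with learned nonlinear maps, but the scoring and thresholding logic stays the same; the improvements proposed in this work augment exactly this kind of pipeline.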
The generic framework proposed throughout this research comprises multiple efficient architectures suitable for the immediate deployment of models in practical, real-world automated anomaly detection tasks with minimal human intervention, and it imparts capabilities such as online learning and self-regularization for best performance on image and video tasks. The superiority and efficacy of the proposed solutions are demonstrated through quantitative and qualitative performance evaluation on a variety of image and video datasets from diverse domains, along with rich visualizations and ablation studies. This work also explores interpretability in AutoEncoder-based anomaly detection models, adapting popular classifier-centric explainability frameworks to pave the way for a better understanding of the models' functioning and decisions.