|Title||Safety-Oriented Stability Biases for Continual Learning|
|Year of Publication||2020|
|Academic Department||Computer Science|
|University||University of Waterloo|
Continual learning is often confounded by “catastrophic forgetting” that prevents neural networks from learning tasks sequentially. In the case of real world classification systems that are safety-validated prior to deployment, it is essential to ensure that validated knowledge is retained. We propose methods that build on existing unconstrained continual learning solutions, which increase the model variance or weaken the model bias to better retain more of the existing knowledge. We investigate multiple such strategies, both for continual classification as well as continual reinforcement learning. Finally, we demonstrate the improved performance of our methods against popular continual learning approaches, using variants of standard image classification datasets, as well as assess the effect of weaker biases in continual reinforcement learning.