Non-divergent Imitation for Verification of Complex Learned Controllers

Abstract

We consider the problem of verifying complex learned controllers using distillation. In contrast to previous work, we require that the distilled model maintain behavioural fidelity with an oracle, and we formalize this requirement with the notion of non-divergent path length (NPL) as a metric. We demonstrate that current distillation approaches with proven accuracy bounds do not achieve high expected NPL and can be outperformed by naive behavioural cloning. We therefore propose a distillation algorithm that typically yields greater expected NPL, improved sample efficiency, and more compact models. We prove properties of NPL maximization and demonstrate the performance of our algorithm on deep Q-network controllers for three standard learning environments that have been used in this context: Pong, CartPole, and MountainCar.
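To make the NPL metric concrete, below is a minimal sketch of how expected NPL might be estimated empirically, under one plausible reading of the abstract: NPL is the number of consecutive steps along the oracle's trajectory for which the distilled (student) policy selects the same action as the oracle. All names (env, oracle, student) and the classic gym-style step API are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of estimating expected non-divergent path length (NPL).
# Assumes a gym-style environment with reset() -> obs and
# step(a) -> (obs, reward, done, info); adjust for other APIs.

import numpy as np

def non_divergent_path_length(env, oracle, student, max_steps=10_000):
    """Count steps until the student's action first differs from the oracle's.

    oracle  -- callable obs -> discrete action (e.g., argmax of DQN Q-values)
    student -- callable obs -> discrete action (the distilled model)
    """
    obs = env.reset()
    for t in range(max_steps):
        a_oracle = oracle(obs)
        if student(obs) != a_oracle:
            return t                          # first divergence at step t
        obs, _, done, _ = env.step(a_oracle)  # follow the oracle's trajectory
        if done:
            return t + 1                      # episode ended without divergence
    return max_steps

def expected_npl(env_factory, oracle, student, n_episodes=100):
    """Monte Carlo estimate of expected NPL over fresh episodes."""
    lengths = [non_divergent_path_length(env_factory(), oracle, student)
               for _ in range(n_episodes)]
    return float(np.mean(lengths))
```

Under this reading, a distilled model with a small pointwise accuracy bound can still have low expected NPL, since a single early mismatched action ends the non-divergent prefix, which is consistent with the abstract's claim that accuracy bounds alone do not guarantee high expected NPL.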

Year of Conference
2021
Publisher
IEEE
Conference Location
Shenzhen, China (virtual)
DOI
10.1109/IJCNN52387.2021.9533410