Taylor Denouden, Master's candidate
David R. Cheriton School of Computer Science
Recently, much research has been published on detecting when a classification neural network is presented with data that does not fit any of the class labels the network learned at training time. These so-called out-of-distribution (OOD) detection techniques hold promise for improving safety in systems where unusual or novel inputs may result in errors that endanger human lives. Autonomous vehicles in particular could benefit from these techniques if they could be adapted to detect and localize unusual objects in a driving environment, allowing such objects to be treated with a high degree of caution.
This thesis explores the adaptation of a selection of existing OOD detection methods from the image classification literature for use in a two-stage object detection network. The task of detecting objects as OOD proves difficult to define for object detection networks that include a high-variance background class label, but these methods can instead be adapted to detect when background regions are incorrectly classified as foreground, and when foreground objects of interest are incorrectly classified as background, in the final layers of the network. It is found that some methods provide a slight improvement over the baseline method, which uses softmax confidence scores to detect these kinds of errors.
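For context, the softmax-confidence baseline referenced above is commonly credited to the maximum softmax probability (MSP) score of Hendrycks and Gimpel (2017). The following is a minimal sketch of that score applied to one detected region's final-layer logits, not the thesis's own implementation; the function names, the NumPy types, and the example threshold of 0.5 are illustrative assumptions.

```python
import numpy as np

def msp_score(logits: np.ndarray) -> float:
    """Return the maximum softmax probability for one region's class logits."""
    z = logits - logits.max()            # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()  # softmax over the class scores
    return float(probs.max())            # confidence of the top predicted class

def flag_uncertain(logits: np.ndarray, threshold: float = 0.5) -> bool:
    """Flag a region whose top softmax confidence falls below `threshold`,
    e.g. a candidate background/foreground misclassification."""
    return msp_score(logits) < threshold

# Illustrative usage: a region whose logits are nearly uniform across
# classes yields a low MSP score and is flagged for cautious treatment.
print(flag_uncertain(np.array([0.2, 0.1, 0.15])))  # True (low confidence)
print(flag_uncertain(np.array([6.0, 0.1, 0.15])))  # False (high confidence)
```

In a two-stage detector, a score of this kind would be computed per region proposal in the final classification layer, with the threshold tuned on held-out data.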