Local and Cooperative Autonomous Vehicle Perception from Synthetic Datasets
Title | Local and Cooperative Autonomous Vehicle Perception from Synthetic Datasets |
---|---|
Author | |
Abstract | The purpose of this work is to increase the performance of autonomous vehicle 3D object detection using synthetic data. This work introduces the Precise Synthetic Image and LiDAR (PreSIL) dataset for autonomous vehicle perception. Grand Theft Auto V (GTA V), a commercial video game, has a large, detailed world with realistic graphics, which provides a diverse data collection environment. Existing works that create synthetic Light Detection and Ranging (LiDAR) data for autonomous driving with GTA V have not released their datasets, rely on an in-game raycasting function that represents people as cylinders, and can fail to capture vehicles beyond 30 metres. This work describes a novel LiDAR simulator within GTA V whose rays collide with detailed models for all entities, regardless of entity type or position. The PreSIL dataset consists of over 50,000 frames and includes high-definition images with full-resolution depth information, semantic segmentation (images), point-wise segmentation (point clouds), and detailed annotations for all vehicles and people. Collecting additional data with the PreSIL framework is entirely automatic and requires no human intervention. The effectiveness of the PreSIL dataset is demonstrated through an improvement of up to 5% average precision on the KITTI 3D Object Detection benchmark when state-of-the-art 3D object detection networks are pre-trained with the PreSIL dataset (an illustrative annotation-loading sketch follows this record). The PreSIL dataset and generation code are available at https://tinyurl.com/y3tb9sxy. Synthetic data also enables the generation of data that would be genuinely hard to create in the real world. In the next major chapter of this thesis, a new synthetic dataset, the TruPercept dataset, is created with perceptual information from multiple viewpoints. A novel system is proposed for cooperative perception, that is, perception which incorporates information from multiple viewpoints. The TruPercept model is presented: it integrates trust modelling for vehicular ad hoc networks (VANETs) with perceptual information, with a focus on 3D object detection (an illustrative fusion sketch follows this record). A discussion is presented on how this might create a safer driving experience for fully autonomous vehicles. The TruPercept dataset is used to experimentally evaluate the TruPercept model against traditional local perception (single-viewpoint) models. The TruPercept model is also contrasted with existing methods for trust modelling used in ad hoc network environments. This thesis also offers insights into how V2V communication for perception can be managed through trust modelling, aiming to improve object detection accuracy across contexts with varying ease of observability. The TruPercept model and data are available at https://tinyurl.com/y2nwy52o |
Year of Publication | 2019 |
URL | https://uwspace.uwaterloo.ca/handle/10012/15118 |
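The abstract notes that PreSIL provides detailed annotations for all vehicles and people, and that detectors pre-trained on it are evaluated on the KITTI 3D Object Detection benchmark. The following is a minimal sketch of loading such annotations, assuming they are exported in KITTI's plain-text label format (one object per line, 15 whitespace-separated fields). The directory layout, file name, and the `Object3D`/`load_kitti_labels` names are illustrative assumptions, not the thesis's actual tooling.

```python
# Minimal sketch, assuming PreSIL annotations follow the KITTI plain-text
# label format. Paths and names below are hypothetical.
from dataclasses import dataclass
from pathlib import Path
from typing import List, Tuple


@dataclass
class Object3D:
    cls: str                # e.g. "Car", "Pedestrian", "Cyclist"
    truncation: float       # 0 (fully visible) .. 1 (fully truncated)
    occlusion: int          # 0 = fully visible .. 3 = unknown
    bbox2d: Tuple[float, ...]   # (left, top, right, bottom) in image pixels
    dims: Tuple[float, ...]     # (height, width, length) in metres
    location: Tuple[float, ...] # (x, y, z) in camera coordinates, metres
    rotation_y: float       # yaw around the camera Y axis, radians


def load_kitti_labels(label_file: Path) -> List[Object3D]:
    """Parse one KITTI-style label file into a list of 3D object annotations."""
    objects = []
    for line in label_file.read_text().splitlines():
        f = line.split()
        if not f:
            continue
        objects.append(Object3D(
            cls=f[0],
            truncation=float(f[1]),
            occlusion=int(float(f[2])),
            bbox2d=tuple(map(float, f[4:8])),
            dims=tuple(map(float, f[8:11])),
            location=tuple(map(float, f[11:14])),
            rotation_y=float(f[14]),
        ))
    return objects


if __name__ == "__main__":
    # Hypothetical path to one frame's label file.
    for obj in load_kitti_labels(Path("presil/training/label_2/000000.txt")):
        print(obj.cls, obj.location, obj.rotation_y)
```

Annotations in this form can be consumed directly by KITTI-oriented 3D detection pipelines, which is consistent with the abstract's pre-train-on-PreSIL, evaluate-on-KITTI workflow.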
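The abstract describes TruPercept as integrating trust modelling for VANETs with 3D object detections shared across viewpoints. The sketch below illustrates one simplified way such trust-weighted fusion could work: detections reported by different vehicles are clustered by bird's-eye-view overlap, and trust-scaled confidences are combined with a noisy-OR. The clustering rule, the default trust value, and all names (`Detection`, `bev_iou`, `fuse`) are assumptions for illustration; the thesis's actual aggregation and trust-update rules are defined by the TruPercept model itself.

```python
# Illustrative sketch only: simplified trust-weighted fusion of 3D detections
# reported by multiple vehicles. Box matching uses axis-aligned bird's-eye-view
# (BEV) IoU; TruPercept's actual rules may differ.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Detection:
    reporter: str                            # ID of the reporting vehicle
    box: Tuple[float, float, float, float]   # (x_min, z_min, x_max, z_max) in BEV, metres
    score: float                             # detector confidence in [0, 1]


def bev_iou(a, b) -> float:
    """Intersection-over-union of two axis-aligned BEV boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iz = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iz
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0


def fuse(detections: List[Detection], trust: Dict[str, float],
         iou_thresh: float = 0.5) -> List[Tuple[Tuple[float, ...], float]]:
    """Cluster overlapping detections and return (box, fused_score) pairs.
    Each report contributes trust * confidence, combined with a noisy-OR so
    corroboration by trusted reporters raises the fused score while isolated
    low-trust reports stay low."""
    clusters: List[List[Detection]] = []
    for det in sorted(detections, key=lambda d: -d.score):
        for cluster in clusters:
            if bev_iou(det.box, cluster[0].box) >= iou_thresh:
                cluster.append(det)
                break
        else:
            clusters.append([det])

    fused = []
    for cluster in clusters:
        miss = 1.0
        for d in cluster:
            miss *= 1.0 - trust.get(d.reporter, 0.5) * d.score
        fused.append((cluster[0].box, 1.0 - miss))
    return fused


if __name__ == "__main__":
    trust = {"ego": 0.9, "v1": 0.8, "v2": 0.2}   # hypothetical per-vehicle trust
    reports = [
        Detection("ego", (10.0, 20.0, 12.0, 24.0), 0.7),
        Detection("v1",  (10.2, 20.1, 12.1, 24.2), 0.9),   # agrees with ego
        Detection("v2",  (40.0, 5.0, 42.0, 9.0), 0.95),    # seen only by a low-trust peer
    ]
    for box, score in fuse(reports, trust):
        print(box, round(score, 3))
```

In this toy example the object corroborated by the ego vehicle and a trusted peer receives a fused score of about 0.90, while the object reported only by the low-trust peer drops to 0.19, illustrating how trust weighting can suppress unreliable reports while rewarding corroboration across viewpoints.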