Projects

Perception and Prediction

The Perception module is the first layer of software abstraction in the WATonoBus software stack and a precursor to autonomous decision-making. The primary objective of this module is to enable scene awareness through information obtained and processed via multi-modal sensors. The Autonomous Bus is equipped with a suite of front-, side-, and rear-facing cameras; 360-degree-view and additional safety LiDARs; and high-accuracy GPS, IMUs, and wheel encoders, enabling broad coverage of its surroundings.

With the algorithms developed, real-time fusion of this sensory information allows for both relative and absolute 3D location and velocity estimation of surrounding dynamic and static objects. In addition, object classification and confidence estimation, performed through deep convolutional networks trained on a blend of campus and public data, provide the information required for behavioural prediction, planning, and safe decision-making. Furthermore, challenges such as intermittent sensor noise (for example, motion blur in cameras) and occlusion are mitigated by this module through estimation and tracking algorithms.
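As a simplified illustration of how tracking bridges intermittent sensor dropouts, the sketch below runs a constant-velocity Kalman filter that keeps predicting an object's position and velocity when a detection is missing (for example, during occlusion) and corrects the estimate when detections return. The 2D state, motion model, and noise levels are illustrative assumptions, not the project's actual tracker.

```python
# Minimal sketch (not the project code): a constant-velocity Kalman filter that
# tracks an object through short occlusions by predicting when no detection is
# available. State: [x, y, vx, vy]; measurement: [x, y].
import numpy as np

class ConstantVelocityTracker:
    def __init__(self, x0, y0, dt=0.1):
        self.x = np.array([x0, y0, 0.0, 0.0])        # state estimate
        self.P = np.diag([1.0, 1.0, 10.0, 10.0])     # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)   # motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # measurement model
        self.Q = 0.05 * np.eye(4)                    # process noise (assumed)
        self.R = 0.5 * np.eye(2)                     # measurement noise (assumed)

    def step(self, measurement=None):
        # Predict: always propagate the state, even when the object is occluded.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        if measurement is not None:
            # Update: correct the prediction with the available detection.
            z = np.asarray(measurement, dtype=float)
            y = z - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x.copy()   # [x, y, vx, vy]

# Example: two detections, one occluded frame, then a new detection.
trk = ConstantVelocityTracker(0.0, 0.0)
for z in [(0.1, 0.0), (0.2, 0.05), None, (0.45, 0.1)]:
    print(trk.step(z))
```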

Moreover, this module houses the behavioural prediction sub-module. The primary objective of this sub-module is to anticipate the behaviour of pedestrians, cyclists, and vehicles over a finite future time horizon for intelligent planning and safe decision-making, especially at uncontrolled pedestrian crossings and stop-sign-controlled intersections on campus. Prediction of future object trajectories, especially for pedestrians, is performed using probabilistic model-based and data-driven approaches that are scene-aware and incorporate social interaction based on prior beliefs. In addition, prediction for traffic vehicles incorporates map information from the localization module and perception data to obtain likely estimates of their intent.
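The following sketch illustrates one simple form of probabilistic, model-based prediction: Monte Carlo rollouts of a pedestrian's future positions under a noisy constant-velocity prior. The horizon, time step, and noise level are assumptions chosen for illustration, and the sketch omits the scene-aware and social-interaction terms described above.

```python
# Minimal sketch (assumed parameters, not the project's model): Monte Carlo
# rollout of a pedestrian's future positions under a noisy constant-velocity
# prior. Each sampled trajectory is one hypothesis about the pedestrian's intent.
import numpy as np

def sample_trajectories(pos, vel, horizon_s=3.0, dt=0.5, n_samples=100,
                        accel_std=0.6, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    steps = int(horizon_s / dt)
    pos = np.tile(np.asarray(pos, float), (n_samples, 1))   # (N, 2)
    vel = np.tile(np.asarray(vel, float), (n_samples, 1))
    traj = np.empty((n_samples, steps, 2))
    for k in range(steps):
        # Random acceleration models uncertainty in pedestrian intent.
        vel = vel + rng.normal(0.0, accel_std, vel.shape) * dt
        pos = pos + vel * dt
        traj[:, k, :] = pos
    return traj   # sampled future positions, e.g. for occupancy estimation

samples = sample_trajectories(pos=(5.0, -2.0), vel=(1.2, 0.3))
print(samples.mean(axis=0)[-1], samples.std(axis=0)[-1])   # mean / spread at 3 s
```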

Overall, this module provides the following primary information about surrounding objects: (1) 3D location and velocity estimates, (2) classification and confidence estimates, (3) tracking, and (4) probable estimates of object intent with predicted location and velocity.
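For illustration, the record below sketches the kind of per-object message such a module might publish downstream; the field names and types are assumptions, not the project's actual interface.

```python
# Illustrative sketch of a per-object perception record (assumed field names).
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PerceivedObject:
    track_id: int                              # persistent ID from tracking
    position: Tuple[float, float, float]       # 3D location estimate [m]
    velocity: Tuple[float, float, float]       # velocity estimate [m/s]
    category: str                              # e.g. "pedestrian", "cyclist", "vehicle"
    confidence: float                          # classification confidence in [0, 1]
    predicted_path: List[Tuple[float, float, float]] = field(default_factory=list)
    intent_probability: float = 0.0            # e.g. probability of crossing
```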


Localization

The Localization module captures information from cameras, LiDAR, an IMU, GNSS, and an existing map and performs multi-sensor fusion to localize the ego vehicle. This module provides the position, velocity, and attitude of the ego vehicle as well as a refined map. To increase robustness and reliability, the localization output is the fusion of two separate algorithms: a camera-based and a LiDAR-based approach.

In the former approach, a tightly coupled fusion of camera and inertial measurements (the front-end) is performed through an optimization problem (the back-end). The algorithm is robust to measurement noise and outliers by including vehicle dynamics as a constraint in the optimization problem and by carefully handling objects in dynamic environments. Additionally, a Monte Carlo-based approach is used for robust initialization and re-localization. The latter approach is a LiDAR-based method. First, GNSS, vehicle dynamics (VD), and IMU data are fused to obtain a reference state (position and attitude). Then, using the original point cloud from the LiDAR and the state from the GNSS/INS/VD integration system, an accurate dense point-cloud map is built. Finally, based on this refined map, an optimization-based interest-area map-matching algorithm matches the current point cloud with the existing map.
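As a rough illustration of the map-matching idea, the sketch below performs one ICP-style refinement step that aligns the current scan to the existing point-cloud map, starting from the GNSS/INS/VD reference pose. It uses 2D points, brute-force nearest-neighbour association, and a closed-form (Kabsch) alignment, all of which are simplifications of the optimization-based interest-area matcher described above.

```python
# Minimal sketch (not the project's matcher): one ICP-style refinement step that
# aligns the current LiDAR scan to the existing point-cloud map. 2D points are
# used here for brevity; scan and map_points are (N, 2) and (M, 2) arrays.
import numpy as np

def icp_step(scan, map_points):
    """One iteration: match each scan point to its nearest map point,
    then solve for the rigid transform (R, t) in closed form via SVD."""
    # Nearest-neighbour association (brute force, fine for a small sketch).
    d = np.linalg.norm(scan[:, None, :] - map_points[None, :, :], axis=2)
    matched = map_points[d.argmin(axis=1)]

    # Closed-form rigid alignment (Kabsch / Procrustes).
    mu_s, mu_m = scan.mean(axis=0), matched.mean(axis=0)
    H = (scan - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return R, t

# Usage: apply (R, t) as a correction on top of the GNSS/INS/VD reference pose,
# iterating icp_step until the pose change falls below a threshold.
```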

Given the position, velocity, and attitude (PVA) from the camera-based and LiDAR-based methods, a smart safety module detects failure modes in each of the individual methods and also monitors the health status of the sensors and algorithms. Based on the safety module's output, the PVA estimates from the camera-based and LiDAR-based methods are fused in a loosely coupled integration framework through an optimal estimator (KF- or LMI-based) to obtain the full localization solution of the ego vehicle.
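A minimal sketch of such a loosely coupled fusion is shown below: the camera-based and LiDAR-based PVA estimates are combined by covariance weighting (equivalent to a Kalman filter measurement update), with a simple chi-square consistency gate standing in for the safety module's failure detection. The state layout, gate value, and fallback rule are assumptions for illustration.

```python
# Minimal sketch (assumptions, not the project's estimator): covariance-weighted
# fusion of the camera-based and LiDAR-based PVA solutions, with a chi-square
# gate as a stand-in for the safety module's failure-mode detection.
import numpy as np

def fuse_pva(x_cam, P_cam, x_lidar, P_lidar, gate=16.92):  # chi2(9), ~95%
    """x_*: 9D PVA vectors [position(3), velocity(3), attitude(3)];
    P_*: 9x9 covariances. Returns the fused estimate and covariance."""
    # Consistency check: discrepancy between the two independent solutions.
    diff = x_cam - x_lidar
    S = P_cam + P_lidar
    if diff @ np.linalg.solve(S, diff) > gate:
        # Disagreement too large: fall back to the lower-covariance solution.
        return (x_cam, P_cam) if np.trace(P_cam) < np.trace(P_lidar) else (x_lidar, P_lidar)

    # Information-form fusion (equivalent to a loosely coupled KF update).
    I_cam, I_lid = np.linalg.inv(P_cam), np.linalg.inv(P_lidar)
    P_fused = np.linalg.inv(I_cam + I_lid)
    x_fused = P_fused @ (I_cam @ x_cam + I_lid @ x_lidar)
    return x_fused, P_fused
```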

Localization Architecture


Decision-Making

The Decision-Making module of the WATonoBus project is responsible for making correct, high-level decisions for ego vehicle motion under various driving scenarios.

The inputs to the Decision-Making module are the detected objects perceived by the Dynamic Feature Identification module as well as the predicted object trajectories from the Prediction module. By analyzing the received object information, the Decision-Making module filters the perception results and identifies potential obstacles. Subsequently, discrete ego-vehicle motion decisions such as stop/go and pull-over, together with the relevant obstacle information, are output to the Path/Motion Planning module so that detailed, low-level vehicle control can be implemented accordingly.

The major tasks of the Decision-Making module consist of understanding the current scenario and filtering obstacles of interest from the perception and prediction results. In particular, by cross-checking the ego vehicle localization and perception results, the Decision-Making module behaves appropriately under different driving scenarios. In addition, various techniques, such as constructing a dynamic ego-vehicle safety zone, referring to the ego vehicle's past trajectory, and adapting to detected objects of different categories, are implemented to ensure that all potential obstacles can be identified, even in edge cases such as a pedestrian who is on the sidewalk but tends to cross the roadway.

decision-making diagram
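The sketch below illustrates one simplified form of these ideas: a stop/go check against a dynamic safety zone that grows with ego speed and also considers the predicted paths of detected objects, so that a pedestrian about to enter the roadway is flagged before stepping off the sidewalk. The zone geometry, thresholds, and object fields are illustrative assumptions rather than the deployed logic.

```python
# Minimal sketch (assumed geometry and thresholds, not the deployed logic):
# stop/go decision against a speed-dependent safety zone in the ego frame.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Obstacle:
    category: str                              # "pedestrian", "cyclist", "vehicle"
    position: Tuple[float, float]              # ego frame [m], x forward, y left
    predicted_path: List[Tuple[float, float]]  # from the Prediction module

def safety_zone(ego_speed: float, reaction_time: float = 1.5,
                base_length: float = 5.0, half_width: float = 1.8):
    """Rectangular zone ahead of the ego vehicle; its length grows with speed."""
    return base_length + ego_speed * reaction_time, half_width

def decide(obstacles: List[Obstacle], ego_speed: float) -> str:
    length, half_width = safety_zone(ego_speed)
    for obs in obstacles:
        for x, y in [obs.position] + obs.predicted_path:
            if 0.0 <= x <= length and abs(y) <= half_width:
                return "stop"      # current or predicted position inside the zone
    return "go"

# Example: a pedestrian on the sidewalk whose predicted path crosses the lane.
ped = Obstacle("pedestrian", (8.0, 3.0), [(8.0, 2.0), (8.0, 0.5)])
print(decide([ped], ego_speed=5.0))   # -> "stop"
```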

Path Planning

The main objective of the Path/Motion Planning module is to plan the vehicle's trajectory such that the expected driving behavior is fulfilled. The expected driving behavior is determined by the Decision-Making module and covers all the normal driving scenarios, such as stopping at a specific position, merging into traffic, changing lanes, and following the target route. In all these scenarios, the vehicle must avoid obstacles and execute an appropriate response in emergency cases. Finally, the Vehicle Control module generates suitable commands, namely the front steering angle and traction/braking torques, to follow the planned path while maintaining vehicle stability.

The Path/Motion Planning module receives the vehicle states and road features, identified objects, traffic conditions, and the expected behavior from the Mapping and Localization, Dynamic Feature Identification, Prediction, and Decision-Making modules, respectively. Using this information, the Path/Motion Planning module employs the artificial potential field (APF) method, which enables the vehicle to avoid collisions by providing a collision-free trajectory. This trajectory, updated in real time, can be obtained even in a complex workspace containing multiple stationary and moving obstacles. The figure below demonstrates the APF of each moving object and how a collision-free trajectory is defined. This trajectory is employed by the Vehicle Control component as the reference solution.

Path planning
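For illustration, the sketch below implements a basic artificial potential field planner: the goal contributes an attractive potential, each obstacle contributes a repulsive potential within an influence radius, and successive waypoints follow the negative gradient of the total field. The gains, influence radius, and 2D setting are assumptions and omit the real-time and vehicle-dynamics considerations of the actual module.

```python
# Minimal sketch (assumed potential shapes and gains, not the project's planner):
# artificial potential field with an attractive goal and repulsive obstacles.
import numpy as np

def apf_gradient(p, goal, obstacles, k_att=1.0, k_rep=8.0, rho0=6.0):
    """Gradient of the total potential at position p (2D, metres)."""
    grad = k_att * (p - goal)                        # attractive term
    for obs in obstacles:
        diff = p - obs
        rho = np.linalg.norm(diff)
        if 1e-6 < rho < rho0:                        # repel only inside influence radius
            grad += -k_rep * (1.0 / rho - 1.0 / rho0) * (diff / rho**3)
    return grad

def plan_path(start, goal, obstacles, step=0.2, max_iters=500, tol=0.3):
    p, goal = np.asarray(start, float), np.asarray(goal, float)
    obstacles = [np.asarray(o, float) for o in obstacles]
    path = [p.copy()]
    for _ in range(max_iters):
        g = apf_gradient(p, goal, obstacles)
        p = p - step * g / (np.linalg.norm(g) + 1e-9)   # descend the total potential
        path.append(p.copy())
        if np.linalg.norm(p - goal) < tol:
            break
    return np.array(path)

path = plan_path(start=(0, 0), goal=(20, 0), obstacles=[(10, 0.5)])
print(path[-1])   # near the goal, having skirted the obstacle
```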

Safety and Reliability

The role of this module is to monitor the performance of the different actuators, sensors, and estimators used in WATonoBus to ensure the health and safe functioning of the whole system while driving. The reliability module uses methods from statistical analysis and machine learning, as well as model descriptions with bounded uncertainties governing the motion of the vehicle, to accomplish three tasks: (1) to construct confidence regions that contain the future state trajectories of the vehicle (based on the control inputs) with a given confidence level; (2) to determine confidence tags regarding the level of certainty in the performance of the actuators, sensors, and estimation modules deployed in the vehicle; and (3) to perform diagnosis/prognosis of possible operating faults on the road and determine corresponding strategies to cope with them.

To address task (1), a physical model governing the equations of motion is used to analyze the possible deviation of the estimated position from the true position of the vehicle due to measurement errors in sensor readings. At the same time, a learning algorithm provides bounds on the possible position of the vehicle based on statistical analysis of the uncertainty in the actuators, sensors, and estimation modules. For task (2), statistical analysis is carried out to determine confidence tags pertaining to the degree of accuracy of the estimation tools. For task (3), Fault Detection and Isolation (FDI) systems are implemented to detect and classify faults for further use by the Decision-Making module and for pose prediction. In this context, pattern recognition systems are developed mostly based on model-driven (white-box), data-driven (black-box), and hybrid (grey-box) soft sensors using machine learning tools. Residual analysis of soft sensors during Autonomous Bus operation is the dominant approach in the FDI systems.
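As a simplified illustration of the residual-analysis idea behind the FDI systems, the sketch below compares a model-based (soft-sensor) prediction against the corresponding measurement and reports a fault once the normalized residual exceeds a threshold for several consecutive samples. The thresholds, persistence count, and wheel-speed example are assumptions for illustration.

```python
# Minimal sketch (assumed model and thresholds): residual-based fault detection
# where a soft-sensor prediction is compared against the measured quantity and a
# persistent discrepancy raises a fault.

class ResidualFaultDetector:
    """Flags a sensor as faulty when the normalized residual between the
    model-based prediction and the measurement stays above a threshold for
    several consecutive samples."""
    def __init__(self, threshold=3.0, persistence=5, noise_std=0.1):
        self.threshold = threshold          # in units of measurement noise std
        self.persistence = persistence      # consecutive violations before alarm
        self.noise_std = noise_std
        self.violations = 0

    def step(self, predicted, measured):
        residual = abs(measured - predicted) / self.noise_std
        self.violations = self.violations + 1 if residual > self.threshold else 0
        return self.violations >= self.persistence   # True -> report fault

# Example: a wheel-speed soft sensor predicted from the vehicle model vs. a
# wheel-encoder reading that develops a bias fault.
det = ResidualFaultDetector()
for k in range(20):
    predicted = 5.0
    measured = 5.0 + (0.0 if k < 10 else 0.8)        # bias fault appears at k = 10
    if det.step(predicted, measured):
        print(f"fault reported at sample {k}")
        break
```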

The reliability analysis is also tied to standard safety measures to ensure the functional safety of the developed autonomous driving system. Safety standards such as ISO 26262 and ISO 21448 are referred to at the system development level while constructing the architecture of this reliability module. Overall, this architecture provides vehicle-level solutions for the ego-vehicle control module to maintain the safety of passengers as well as the surroundings after faults are detected and reported by the above tasks. Specifically, following the functional safety concept recommended in the standards, a novel fault-tolerance architecture is formulated with safety-system engineering design principles in this project, aiming to transition the vehicle to a safe state when faults are reported by the reliability module.

Reliability module

Virtual Testing Environment

Autonomous driving in an urban setting requires testing the algorithms across numerous scenarios. This necessitates a virtual environment for testing not only the perception and decision-making modules but also the motion planning and control algorithms.

In this direction, Unreal Engine and the open-source Carla simulator have been employed for testing and development of the WATonoBus. Unreal Engine is a game engine that allows full customization of characters and environments. Carla utilizes the photorealistic environments, material surface properties, and ray-tracing technology provided by Unreal Engine, and allows control of vehicles, pedestrians, and cyclists in the virtual environment. Every vehicle in the simulator can be spawned with a set of sensors traditionally used in autonomous driving, such as RGB cameras, GPS, an IMU, automotive-type radar, and LiDAR.

Sensors can be modified and customized to match the real setup. The sensor information and vehicle control commands are accessed through the ROS/Python API in real time, synchronized with the simulation step. The Carla simulator can be characterized as a driving game with a Python API and a game editor that allow full customization. In addition to the existing game assets, new assets such as pedestrians, vehicles, and buildings can be added, and their textures modified to look realistic.
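A minimal example of this workflow using the Carla Python API is sketched below: a vehicle is spawned with an RGB camera, the simulation runs in synchronous mode so that sensor data stays aligned with each step, and control commands are applied every tick. The sensor blueprints and attributes are standard Carla names, while the host, port, spawn choice, and parameters are assumptions.

```python
# Minimal sketch using the CARLA Python API: spawn a vehicle with an RGB camera
# and step the simulation synchronously. Host/port and parameters are assumed.
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Synchronous mode keeps sensor data aligned with each simulation step.
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.05
world.apply_settings(settings)

blueprints = world.get_blueprint_library()
vehicle_bp = blueprints.filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

camera_bp = blueprints.find("sensor.camera.rgb")
camera_bp.set_attribute("image_size_x", "1280")
camera_bp.set_attribute("image_size_y", "720")
camera = world.spawn_actor(camera_bp,
                           carla.Transform(carla.Location(x=1.5, z=2.4)),
                           attach_to=vehicle)
camera.listen(lambda image: image.save_to_disk(f"out/{image.frame:06d}.png"))

try:
    for _ in range(100):
        vehicle.apply_control(carla.VehicleControl(throttle=0.4, steer=0.0))
        world.tick()                      # advance one synchronized step
finally:
    camera.stop()
    camera.destroy()
    vehicle.destroy()
```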

The MVS group has developed a University of Waterloo campus environment that matches the real roads, traffic signs, buildings, and structures. The Waterloo campus map allows the team to test different parts of the autonomous driving software stack, separately or in combination, before testing on the actual vehicle. Any type of scenario can be generated to test the object detection and tracking, navigation, decision-making, and motion planning algorithms, helping to verify the safety of the system and its on-road readiness.

Certain scenarios required for safety verification can only be tested in simulation, for instance a jaywalker suddenly running across the street, a cyclist rapidly cutting in front of the vehicle, or another vehicle making a dangerous maneuver. Every actor, such as a vehicle, cyclist, or pedestrian, can be controlled in manual mode to imitate human actions in uncertain cases like all-way stop signs or pedestrian-only stop signs, which allows the behavior planning algorithm to be tested beforehand.
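As one example of scripting such an edge case, the sketch below spawns a pedestrian (walker) next to the road in Carla and commands it to cross in front of the ego vehicle with a manual WalkerControl. The spawn location, walking speed, and timing are assumptions chosen purely for illustration.

```python
# Minimal sketch (assumed positions and timing): scripting a jaywalker scenario
# in Carla by spawning a pedestrian on the sidewalk and driving it across the
# ego vehicle's path with a manual WalkerControl command.
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
blueprints = world.get_blueprint_library()

walker_bp = blueprints.filter("walker.pedestrian.*")[0]
# Assumed spawn location on the sidewalk, a few metres ahead of the ego vehicle.
spawn = carla.Transform(carla.Location(x=30.0, y=4.0, z=1.0))
walker = world.spawn_actor(walker_bp, spawn)

# Command the pedestrian to cross the road at walking speed.
crossing = carla.WalkerControl(direction=carla.Vector3D(x=0.0, y=-1.0, z=0.0),
                               speed=1.8)
walker.apply_control(crossing)

# ... run the ego vehicle's software stack against this scenario, then clean up:
# walker.destroy()
```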
