MC 4206
Candidate
Chuanzheng Wang | Applied Math, University of Waterloo
Title
Motion Planning Under Uncertainty
Abstract
Motion planning that takes uncertainty into account is important for autonomous robots to operate reliably. Uncertainty arises mainly in three forms: environment uncertainty, dynamical uncertainty, and measurement uncertainty. Environment uncertainty is usually caused by moving obstacles, which lead to changing maps, or by static but imperfect maps. Stochastic disturbances in the robot's dynamics, such as Brownian motion, are a main source of dynamical uncertainty. Measurement uncertainty is usually caused by imperfect information from sensors that are subject to noise. In this talk, we present our preliminary work on motion planning under environment and dynamical uncertainty.

For environment uncertainty, we consider a continuous reactive path planning (CRPP) problem, in which the environment model consists of multiple environments that the robot might be operating in. Previous work on reactive path planning in the discrete setting requires a cost-map that stores the shortest path from every point to every other point in every environment. We show that the main bottleneck of such methods in the continuous setting is the runtime as the number of environments grows. We therefore propose a partial critical obstacle (PCO) algorithm that computes the cost-map using only critical obstacles. Our analysis shows that the cost-map can be computed in polynomial time, and simulation results show that it saves more than 90% of the computing time compared to previous work.

For dynamical uncertainty, we consider a stochastic optimal control problem. A popular way of solving such problems is the abstraction-based method, in which a stochastic differential equation (SDE) is discretized into a Markov decision process (MDP) using the Markov chain approximation (MCA); the resulting MDP is then solved using value function decomposition or sampling-based methods. However, this approach suffers from the curse of dimensionality: the computation time scales exponentially with the number of state dimensions. We instead show how to solve the stochastic optimal control problem using reinforcement learning methods, and simulation results show that the computed policy converges to an optimal policy.
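Background sketch (not part of the abstract). The discrete cost-map idea behind the previous work cited above amounts to one shortest-path search per candidate environment, which is why the runtime grows with the number of environments. The grid representation, the function name dijkstra_cost_to_goal, and the toy environments below are assumptions made purely for illustration; the PCO algorithm itself is not detailed here.

    import heapq

    def dijkstra_cost_to_goal(grid, goal):
        # Illustrative sketch, not the PCO algorithm: cost-to-goal over a
        # 4-connected grid, where grid[r][c] is True for free cells.
        # Returns a dict mapping each reachable free cell to its shortest-path
        # cost to `goal`, i.e. the cost-map for one environment.
        rows, cols = len(grid), len(grid[0])
        dist = {goal: 0.0}
        pq = [(0.0, goal)]
        while pq:
            d, (r, c) = heapq.heappop(pq)
            if d > dist.get((r, c), float("inf")):
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc]:
                    nd = d + 1.0
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        heapq.heappush(pq, (nd, (nr, nc)))
        return dist

    # One full search per candidate environment: the cost-map computation
    # repeats for every environment in the model, which is the scaling
    # bottleneck the abstract refers to in the continuous setting.
    environments = [
        [[True, True, True],
         [True, False, True],
         [True, True, True]],
        [[True, True, True],
         [False, False, True],
         [True, True, True]],
    ]
    cost_maps = [dijkstra_cost_to_goal(env, (2, 2)) for env in environments]

In the continuous setting each of these searches becomes a more expensive continuous shortest-path computation, which is the cost the PCO algorithm reduces by restricting attention to critical obstacles.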
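Background sketch (also not part of the abstract). The abstract does not give the specific dynamics, cost, or reinforcement learning method used, so the formulas below are a generic sketch of the standard abstraction-based formulation. A stochastic optimal control problem of this kind is typically written as

    dX_t = f(X_t, u_t)\,dt + \sigma(X_t, u_t)\,dW_t,
    J(x_0) = \mathbb{E}\Big[ \int_0^T c(X_t, u_t)\,dt + g(X_T) \Big] \to \min_u .

The Markov chain approximation replaces the continuous state space with a grid of spacing h and chooses transition probabilities p^h(y \mid x, u) and interpolation intervals \Delta t^h(x, u) that are locally consistent with the SDE, meaning the chain's conditional mean and covariance match f and \sigma\sigma^\top to first order in h. The value function of the resulting MDP satisfies the Bellman equation

    V^h(x) = \min_u \Big\{ c(x, u)\,\Delta t^h(x, u) + \sum_y p^h(y \mid x, u)\, V^h(y) \Big\},

and converges to the value function of the SDE problem as h \to 0. Since a d-dimensional grid has on the order of (1/h)^d states, solving this MDP exactly scales exponentially in d, which is the curse of dimensionality mentioned above. A reinforcement learning alternative estimates the optimal policy from simulated transitions instead of enumerating the grid; for example, a tabular Q-learning update of the generic form

    Q(x, u) \leftarrow Q(x, u) + \alpha \Big( c(x, u)\,\Delta t + \gamma \min_{u'} Q(x', u') - Q(x, u) \Big)

only visits states encountered along sampled trajectories. Which reinforcement learning method is actually used in this work is not stated in the abstract.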