**Contact Info**

Department of Applied Mathematics

University of Waterloo

Waterloo, Ontario

Canada N2L 3G1

Phone: 519-888-4567, ext. 32700

Fax: 519-746-4319

Thursday, January 10, 2019 2:00 PM EST

MC 4206

Chuanzheng Wang | Applied Math, University of Waterloo

Motion Planning Under Uncertainty

Motion planning that takes uncertainty into consideration is important for autonomous robots to operate reliably. Uncertainty arises mainly in three forms: environment uncertainty, dynamical uncertainty, and measurement uncertainty. Environment uncertainty is usually caused by moving obstacles, which result in dynamic maps, or by static but imperfect maps. Stochastic processes such as Brownian motion are one critical source of dynamical uncertainty, and measurement uncertainty usually stems from imperfect information provided by sensors that are subject to noise. In this talk, we present our preliminary work on motion planning under environment and dynamical uncertainty.

For environment uncertainty, we consider a continuous reactive path planning (CRPP) problem, in which the environment model consists of multiple environments that the robot might be operating in. Previous work on reactive path planning in a discrete setting required a cost-map storing the shortest paths from each point to every other point in all the environments. We show that the biggest bottleneck of such methods in a continuous setting is runtime as the number of environments increases. We therefore propose a partial critical obstacle (PCO) algorithm that computes the cost-map using critical obstacles. Our analysis shows that the cost-map can be computed in polynomial time, and simulation results show that the algorithm saves more than 90% of the computing time compared to previous work.

For dynamical uncertainty, we consider a stochastic optimal control problem. One popular way of solving such problems is the abstraction-based method, in which a stochastic differential equation (SDE) is discretized into a Markov decision process (MDP) using Markov chain approximation (MCA). The resulting MDP is then solved using value function decomposition or sampling-based methods.
However, this method suffers from the curse of dimensionality: its computation time scales exponentially with the number of dimensions. We therefore show a way of solving the stochastic optimal control problem using reinforcement learning methods, and simulation results show that the computed policy converges to an optimal policy.
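To make the abstraction step concrete, the following is a minimal sketch of a Kushner–Dupuis-style Markov chain approximation of the 1-D controlled SDE dx = u dt + σ dW on a uniform grid, followed by value iteration with a running cost of 1 per unit time and an absorbing goal at the right boundary. The grid size, cost, and boundary handling here are our own illustrative assumptions, not the implementation described in the talk.

```python
def mca_value_iteration(n=41, h=0.05, sigma=0.2,
                        controls=(-1.0, 0.0, 1.0), iters=2000):
    """Discretize dx = u dt + sigma dW into an MDP on n grid points
    (spacing h) via Markov chain approximation, then run value
    iteration for a minimum-time-style cost-to-go."""
    V = [0.0] * n
    for _ in range(iters):
        V_new = V[:]
        for i in range(1, n - 1):  # interior states
            best = float("inf")
            for u in controls:
                # Locally consistent one-step chain: p_up + p_dn = 1
                denom = sigma ** 2 + h * abs(u)
                dt = h ** 2 / denom                       # time step
                p_up = (sigma ** 2 / 2 + h * max(u, 0.0)) / denom
                p_dn = (sigma ** 2 / 2 + h * max(-u, 0.0)) / denom
                best = min(best, dt + p_up * V[i + 1] + p_dn * V[i - 1])
            V_new[i] = best
        V_new[0] = V_new[1]   # reflecting left boundary
        V_new[-1] = 0.0       # absorbing goal at the right boundary
        if max(abs(a - b) for a, b in zip(V_new, V)) < 1e-9:
            V = V_new
            break
        V = V_new
    return V
```

The resulting value function is zero at the goal and increases monotonically with distance from it, as one would expect for an expected-time-to-goal cost.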
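As a complementary sketch for the reactive planning (CRPP) part of the talk: the cost-map stores, for each environment, the shortest-path cost from every point to the goal. For a single discrete environment this can be computed with one Dijkstra search rooted at the goal; the grid model, unit step costs, and function names below are illustrative assumptions, not the PCO algorithm itself.

```python
import heapq

def cost_map(grid, goal):
    """Dijkstra from the goal outward: cost[cell] = shortest-path cost
    from that cell to the goal in one environment.
    grid[r][c] == 1 marks an obstacle; free moves cost 1 per step."""
    rows, cols = len(grid), len(grid[0])
    INF = float("inf")
    cost = {(r, c): INF for r in range(rows) for c in range(cols)}
    cost[goal] = 0.0
    pq = [(0.0, goal)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > cost[(r, c)]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < cost[(nr, nc)]:
                    cost[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return cost
```

Repeating this for every environment is exactly the per-environment cost that grows with the number of environments, which is the runtime bottleneck the PCO algorithm addresses.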


