Contact Info
Department of Applied Mathematics
University of Waterloo
Waterloo, Ontario
Canada N2L 3G1
Phone: 519-888-4567, ext. 32700
Fax: 519-746-4319
MC 4206
Chuanzheng Wang, Applied Math, University of Waterloo
Motion Planning Under Uncertainty
Motion planning that takes uncertainty into consideration is important for autonomous robots to operate reliably. Uncertainty arises mainly in three aspects: environment uncertainty, dynamical uncertainty, and measurement uncertainty. Environment uncertainty is usually caused by moving obstacles, which result in dynamic maps, or by static but imperfect maps. Stochastic processes such as Brownian motion are one critical source of dynamical uncertainty. Measurement uncertainty is usually caused by imperfect information from sensors that are subject to noise. In this talk, we will present our preliminary work on motion planning under environment and dynamical uncertainty.

For environment uncertainty, a continuous reactive path planning (CRPP) problem is considered. The environment model consists of multiple environments in which the robot might be operating. Some previous work on the reactive path planning problem in a discrete setting requires a costmap storing the shortest paths from each point to every other point in all environments. We show that the biggest bottleneck of such methods is runtime, which grows as the number of environments increases in a continuous setting. We therefore propose a partial critical obstacle (PCO) algorithm that computes the costmap using only critical obstacles. Our analysis shows that the costmap can be computed in polynomial time, and simulation results show that the algorithm saves more than 90% of the computing time compared to previous work.

For dynamical uncertainty, a stochastic optimal control problem is considered. One popular approach is the abstraction-based method, in which a stochastic differential equation (SDE) is discretized into a Markov decision process (MDP) using the Markov chain approximation (MCA). The resulting MDP is then solved using value function decomposition or sampling-based methods. However, this approach suffers from the curse of dimensionality: the computation time scales exponentially with the number of dimensions. We therefore show a way of solving the stochastic optimal control problem using reinforcement learning methods. Simulation results show that the computed policy converges to an optimal policy.
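The talk does not specify which reinforcement learning algorithm is used. As a toy illustration of the general idea, learning a policy for a small MDP whose noisy transitions stand in for dynamics discretized via a Markov chain approximation, here is a minimal tabular Q-learning sketch; the grid size, noise level, rewards, and hyperparameters are all illustrative assumptions, not details from the talk:

```python
import random

# Toy MDP: a 1-D grid world standing in for a state space obtained by
# discretizing noisy dynamics. All sizes, noise levels, and rewards
# below are hypothetical choices for illustration only.
N_STATES = 11          # states 0..10; the goal is state 10
ACTIONS = [-1, +1]     # move left / move right
NOISE = 0.1            # probability the commanded move is flipped
GOAL = N_STATES - 1

def step(s, a):
    """Apply action a in state s with process noise; return (s', reward, done)."""
    if random.random() < NOISE:
        a = -a
    s2 = min(max(s + a, 0), GOAL)
    done = (s2 == GOAL)
    return s2, (1.0 if done else -0.01), done

def q_learning(episodes=5000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Standard tabular Q-learning with an epsilon-greedy behavior policy."""
    random.seed(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        for _ in range(200):  # cap episode length
            if random.random() < eps:
                i = random.randrange(2)
            else:
                i = max(range(2), key=lambda k: Q[s][k])
            s2, r, done = step(s, ACTIONS[i])
            # Q-learning update: bootstrap from the greedy value at s'
            Q[s][i] += alpha * (r + gamma * max(Q[s2]) - Q[s][i])
            s = s2
            if done:
                break
    return Q

Q = q_learning()
# Greedy policy: for every non-terminal state the learned action moves
# toward the goal (+1) despite the noisy transitions.
policy = [ACTIONS[max(range(2), key=lambda k: Q[s][k])] for s in range(N_STATES)]
print(policy)
```

A sampling-based learner like this only visits states along simulated trajectories, which is precisely what makes such methods attractive when an explicit sweep over the full discretized state space is too expensive.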