PhD Comprehensive Exam | Chuanzheng Wang: Motion Planning Under Uncertainty

Thursday, January 10, 2019 2:00 PM EST

MC 4206

Candidate

Chuanzheng Wang | Applied Math, University of Waterloo

Title

Motion Planning Under Uncertainty

Abstract

Motion planning that takes uncertainty into account is important for autonomous robots to operate more reliably. Uncertainty arises mainly in three forms: environment uncertainty, dynamical uncertainty and measurement uncertainty. Environment uncertainty is usually caused by moving obstacles, which result in dynamic maps, or by static but imperfect maps. Stochastic processes such as Brownian motion are one critical source of dynamical uncertainty. Measurement uncertainty is usually caused by imperfect information from sensors that are subject to noise. In this talk, we will present our preliminary work on motion planning under environment and dynamical uncertainty.

For environment uncertainty, a continuous reactive path planning (CRPP) problem is considered. The environment model consists of multiple environments that the robot might be working in. Some previous work on reactive path planning in a discrete setting requires a cost-map that stores the shortest paths from every point to every other point in all the environments. We show that, in a continuous setting, the main bottleneck of such methods is the runtime as the number of environments increases. We therefore propose a partial critical obstacle (PCO) algorithm that calculates the cost-map using critical obstacles. Our analysis shows that the cost-map can be computed in polynomial time, and simulation results show that it saves more than 90% of the computing time compared to previous work.

For dynamical uncertainty, a stochastic optimal control problem is considered. One popular way of solving such problems is the abstraction-based method, in which a stochastic differential equation (SDE) is discretized into a Markov Decision Process (MDP) using Markov Chain Approximation (MCA). The resulting MDP is then solved using value function decomposition or sampling-based methods. However, this approach suffers from the curse of dimensionality: the computation time scales exponentially with the number of dimensions. We therefore show a way of solving the stochastic optimal control problem using reinforcement learning methods. Simulation results show that the computed policy converges to an optimal policy.
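To make the cost-map idea concrete, below is a minimal sketch of the baseline that the PCO algorithm improves on: precompute one cost-to-go map per candidate environment with Dijkstra on a small grid, then react online by greedy descent on the map for the environment the robot observes. The grids, goal and environment labels are made-up stand-ins for illustration; this is the plain cost-map lookup, not the PCO algorithm from the talk.

    # Hypothetical sketch: per-environment cost-to-go maps for reactive path planning.
    # Illustrates the baseline "store shortest paths per environment" idea only.
    import heapq

    FREE, OBST = 0, 1

    def cost_to_go(grid, goal):
        """Dijkstra from the goal over 4-connected free cells."""
        rows, cols = len(grid), len(grid[0])
        dist = {goal: 0.0}
        pq = [(0.0, goal)]
        while pq:
            d, (r, c) = heapq.heappop(pq)
            if d > dist.get((r, c), float("inf")):
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == FREE:
                    nd = d + 1.0
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        heapq.heappush(pq, (nd, (nr, nc)))
        return dist

    def reactive_step(cell, cost_map):
        """Greedily move to the neighbour with the smallest cost-to-go."""
        r, c = cell
        neighbours = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        reachable = [n for n in neighbours if n in cost_map]
        return min(reachable, key=cost_map.get, default=cell)

    # Two candidate environments that differ in one obstacle; the robot switches
    # cost-maps once it observes which environment it is actually in.
    env_a = [[0, 0, 0, 0],
             [0, 1, 1, 0],
             [0, 0, 0, 0]]
    env_b = [[0, 0, 0, 0],
             [0, 0, 1, 0],
             [0, 0, 0, 0]]
    goal = (0, 3)
    cost_maps = {"A": cost_to_go(env_a, goal), "B": cost_to_go(env_b, goal)}

    cell, observed_env = (2, 0), "B"
    while cell != goal:
        cell = reactive_step(cell, cost_maps[observed_env])
        print(cell)

The runtime issue described in the abstract shows up directly here: one full shortest-path computation is stored per environment, so the precomputation grows with the number of environments, which is exactly what PCO is designed to cut down.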
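For the dynamical-uncertainty part, here is a minimal sketch of the abstraction-based baseline: a one-dimensional controlled SDE dx = u dt + sigma dW is discretized into a locally consistent grid MDP (Markov Chain Approximation in the Kushner-Dupuis style) and solved by value iteration. The dynamics, running cost, grid and control set are assumptions chosen for illustration, not the system from the talk, and the reinforcement-learning approach discussed in the talk would replace the explicit value-iteration sweep.

    # Hypothetical sketch: Markov Chain Approximation of dx = u dt + sigma dW
    # into a grid MDP, solved by value iteration (first exit to the goal at x = 0).
    import numpy as np

    sigma = 0.5                        # diffusion coefficient (assumed)
    h = 0.05                           # grid spacing
    xs = np.arange(0.0, 2.0 + h, h)    # state grid on [0, 2]; goal is x = 0
    controls = [-1.0, 0.0, 1.0]        # assumed control set

    def transition(u):
        """Locally consistent one-step probabilities and interpolation time step."""
        Q = sigma**2 + h * abs(u)
        p_up = (sigma**2 / 2 + h * max(u, 0.0)) / Q
        p_dn = (sigma**2 / 2 + h * max(-u, 0.0)) / Q
        dt = h**2 / Q
        return p_up, p_dn, dt

    V = np.zeros(len(xs))
    for _ in range(2000):                        # value iteration, up to 2000 sweeps
        V_new = V.copy()
        for i in range(1, len(xs)):              # i = 0 is the goal, V = 0 there
            best = np.inf
            for u in controls:
                p_up, p_dn, dt = transition(u)
                j_up = min(i + 1, len(xs) - 1)   # stay put at the right boundary
                cost = (1.0 + u**2) * dt + p_up * V[j_up] + p_dn * V[i - 1]
                best = min(best, cost)
            V_new[i] = best
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new

    print(V[::8])   # approximate cost-to-go at a few grid points

The curse of dimensionality mentioned in the abstract is visible in the structure of this sweep: in d dimensions the grid, and hence every value-iteration pass, grows exponentially in d, which motivates sampling-based reinforcement-learning solvers instead.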
