PhD Defence Notice - Ahmad Bilal Asghar

Wednesday, April 15, 2020, 9:30 am EDT (GMT -04:00)

Candidate: Ahmad Bilal Asghar

Title: Multi-Robot Path Planning for Persistent Monitoring in Stochastic and Adversarial Environments

Date: April 15, 2020

Time: 9:30 AM

Place: REMOTE PARTICIPATION

Supervisor(s): Smith, Stephen

Abstract:

In this thesis, we study multi-robot path planning problems for persistent monitoring tasks, in which a team of cooperating mobile robots is deployed to continually observe locations of interest in an environment. The robots patrol the environment in order to detect events arriving at these locations. An event stays at its location for a certain amount of time before leaving, and it can be detected only if a robot visits that location while the event is present.

In order to detect all possible events arriving at a vertex, the maximum time between successive robot visits to that vertex must be less than the duration of the events arriving there. We consider the problem of finding the minimum number of robots that satisfy these revisit time constraints, also called latency constraints. The decision version of this problem is PSPACE-complete. We provide an $O(\log \rho)$ approximation algorithm for this problem, where $\rho$ is the ratio of the maximum to the minimum latency constraint. We also present heuristic algorithms to solve the problem, and show through simulations that a proposed orienteering-based heuristic gives better solutions than the approximation algorithm. Finally, we provide an algorithm for minimizing the maximum weighted latency given a fixed number of robots.
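As a concrete illustration of the latency constraints above, the following minimal sketch (not from the thesis; the walk representation and function names are assumptions for illustration) computes, for a single robot following a closed patrolling walk on a weighted graph, the maximum time between successive visits to each vertex, and checks it against per-vertex latency bounds:

```python
def vertex_latencies(walk, travel_time):
    """Maximum time between successive visits to each vertex along a
    closed patrolling walk, traversed cyclically.

    walk: list of vertices, e.g. ['a', 'b', 'a', 'c'].
    travel_time: dict mapping a directed edge (u, v) to its traversal time.
    """
    cycle_time = sum(travel_time[(walk[i], walk[(i + 1) % len(walk)])]
                     for i in range(len(walk)))
    # Arrival time of each visit during one traversal of the walk.
    arrivals = {}
    t = 0.0
    for i, v in enumerate(walk):
        arrivals.setdefault(v, []).append(t)
        t += travel_time[(walk[i], walk[(i + 1) % len(walk)])]
    latencies = {}
    for v, times in arrivals.items():
        # Gaps between consecutive visits, including the wrap-around gap.
        gaps = [times[j + 1] - times[j] for j in range(len(times) - 1)]
        gaps.append(cycle_time - times[-1] + times[0])
        latencies[v] = max(gaps)
    return latencies

def satisfies_constraints(walk, travel_time, latency_bound):
    """True if the walk revisits every vertex within its latency bound."""
    lat = vertex_latencies(walk, travel_time)
    return all(lat[v] <= latency_bound[v] for v in lat)
```

For example, the walk a, b, a, c with unit travel times between a and b and travel time 2 between a and c has cycle length 6, giving latencies 4 for a and 6 for b and c; tightening a's bound below 4 makes the walk infeasible, which is when additional robots become necessary.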

When the event stay durations are not fixed but are drawn from a known distribution, we consider the problem of maximizing the expected number of detected events. We motivate randomized patrolling paths for such scenarios and represent them using Markov chains. We characterize the expected number of detected events as a function of the patrolling Markov chains and show that the objective function is submodular for randomly arriving events. We propose an approximation algorithm for the case where the event duration is the same constant at every vertex, along with a centralized and an online distributed algorithm to compute the robots' random patrolling policies. We also consider the case where events are adversarial and can choose where and when to appear in order to maximize their chances of remaining undetected.
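The quantity driving the expected number of detected events in this setting is the probability that a Markov-chain patroller reaches an event's vertex before the event leaves. A minimal Monte-Carlo sketch of that probability (not the thesis's characterization, which is analytic; the function name, warm-up scheme, and discrete-time model are assumptions for illustration):

```python
import random

def detection_probability(P, v, duration, trials=20000, warmup=100, seed=0):
    """Monte-Carlo estimate of the probability that one event of the given
    duration (in time steps) at vertex v is detected by a robot patrolling
    according to the Markov chain with row-stochastic transition matrix P.

    P: list of lists, P[i][j] = Pr(robot moves from vertex i to vertex j).
    The robot's position is warmed up so the event arrives near stationarity.
    """
    rng = random.Random(seed)
    n = len(P)
    detected = 0
    for _ in range(trials):
        state = rng.randrange(n)
        for _ in range(warmup):
            state = rng.choices(range(n), weights=P[state])[0]
        # The event is present at v for `duration` further steps from now.
        if state == v:
            detected += 1
            continue
        for _ in range(duration):
            state = rng.choices(range(n), weights=P[state])[0]
            if state == v:
                detected += 1
                break
    return detected / trials
```

For instance, with a deterministic 3-cycle the patroller always returns within 2 steps, so any event lasting 2 steps is detected with probability 1, while a 1-step event is caught only 2/3 of the time; randomizing the chain trades some of this worst-case coverage for unpredictability against adversarial events.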

The last problem we study in this thesis considers adversarial events that have a limited time to observe and learn the patrolling policy before they decide when and where to appear. We study the single-robot version of this problem and model it as a multi-stage two-player game. The adversarial event observes the patroller's actions for a finite amount of time to learn the patroller's strategy, and then either appears at a location or reneges, based on its confidence in the learned strategy. We characterize the expected payoffs for the players and propose a search algorithm to find a patrolling policy in such scenarios. We illustrate the trade-off between hard-to-learn and hard-to-attack strategies through simulations.
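To make the observe-then-attack interaction concrete, here is a deliberately simplified toy model (an assumption for illustration, not the thesis's game: the patroller draws visits i.i.d. from a mixed strategy rather than from a Markov strategy, and the adversary's rule is fixed). The adversary watches a window of visits, estimates the patroller's strategy from empirical frequencies, and appears at the location it observed least often:

```python
import random
from collections import Counter

def simulate_observe_then_attack(visit_probs, obs_window, trials=10000, seed=0):
    """Toy observe-then-attack model. The patroller draws each visit i.i.d.
    from the mixed strategy `visit_probs` (dict: location -> probability).
    The adversary observes `obs_window` visits, then appears at the location
    it saw least often. Returns the adversary's empirical escape rate: the
    fraction of trials where the patroller's next visit misses that location.
    """
    rng = random.Random(seed)
    locations = list(visit_probs)
    weights = [visit_probs[loc] for loc in locations]
    escapes = 0
    for _ in range(trials):
        observed = Counter(rng.choices(locations, weights=weights, k=obs_window))
        # Attack where the learned strategy suggests the patroller goes least.
        target = min(locations, key=lambda loc: observed[loc])
        if rng.choices(locations, weights=weights)[0] != target:
            escapes += 1
    return escapes / trials
```

Even this toy model shows the tension the abstract describes: a uniform strategy gives the adversary nothing to exploit (escape rate near 1/2 with two locations), while a skewed strategy is quickly learned from a short observation window, letting the adversary reliably pick the rarely visited location and escape far more often.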