MASc Seminar: Submodular Maximization Subject to Information Constraints

Friday, April 1, 2022 10:30 am - 11:30 am EDT (GMT -04:00)

Candidate: Andrew Coleman Downie

Title: Submodular Maximization Subject to Information Constraints

Date: April 1, 2022

Time: 10:30 AM

Place: Online

Supervisor(s): Smith, Stephen L.

Abstract:

Submodular maximization is relevant to many problems in control, robotics, and machine learning because it captures a large class of computationally difficult problems. For many of these problems, a simple greedy strategy provides strong approximation guarantees. We wish to expand the set of scenarios in which submodular maximization can be applied. More specifically, in this thesis we study submodular maximization problems in which decision-makers are subject to information constraints.
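As a point of reference only (not material from the thesis), the greedy strategy for maximizing a monotone submodular function under a cardinality constraint k can be sketched in a few lines of Python; in that setting it is known to return a solution within a factor (1 - 1/e) of optimal. The coverage objective used here is a toy example.

# Minimal sketch of the classic greedy for monotone submodular maximization
# under a cardinality constraint k (Nemhauser, Wolsey, and Fisher, 1978).
def greedy_max(ground_set, f, k):
    selected = set()
    for _ in range(k):
        # Add the element with the largest marginal gain f(S + e) - f(S).
        gains = {e: f(selected | {e}) - f(selected) for e in ground_set - selected}
        selected.add(max(gains, key=gains.get))
    return selected

# Toy coverage objective: f(S) = number of items covered by the sets chosen in S.
coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
f = lambda S: len(set().union(*(coverage[e] for e in S))) if S else 0
print(greedy_max(set(coverage), f, k=2))  # picks a pair such as {'a', 'c'}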

The first type of information constraint we explore arises when decision-makers can only partially observe the submodular objective function. This scenario can occur when the submodular function is expensive to compute or when a physical constraint prevents the objective from being evaluated directly. We formalize the problem and show that, in general, strong performance cannot be guaranteed. We then present two greedy strategies that, under additional assumptions about the submodular function, provide strong approximation guarantees despite having only limited access to the objective.
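Purely as an illustration of what limited access to the objective can look like (the access model here is an assumption, not one of the strategies developed in the thesis), the sketch below runs a greedy rule that can only evaluate marginal gains against the most recently selected elements rather than the full current solution.

# Hypothetical limited-information greedy: marginal gains can only be evaluated
# against a window of the last w selected elements, not the full solution.
# This access model is an illustrative assumption, not the one in the thesis.
def limited_access_greedy(ground_set, f, k, w):
    selected = []
    for _ in range(k):
        visible = set(selected[-w:])  # only part of the current solution is observable
        gains = {e: f(visible | {e}) - f(visible) for e in ground_set - set(selected)}
        selected.append(max(gains, key=gains.get))
    return set(selected)

With w = 1 the rule sees only the previous choice, so it can repeatedly pay for overlap that a fully informed greedy would avoid.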

The second information constraint we explore arises in distributed submodular maximization. In these scenarios, a team of agents wishes to maximize a submodular objective collaboratively, but each agent is constrained to make decisions based on a subset of the other agents' actions. We explore how submodular functions that induce worst-case performance for greedy strategies can be constructed through a linear program. This approach provides a means to numerically compute worst-case performance bounds, as well as objective functions on which a team of agents exhibits its worst-case performance. These numerical examples provide insight into how the information available to the agents affects their performance. Using this technique, we obtain performance bounds that are stronger than the known bounds in the literature.
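To make the distributed setting concrete (again an assumed toy setup, not the linear-program construction from the thesis), the sketch below runs a sequential greedy in which each agent observes only some of the earlier agents' choices; the observation pattern determines how far the team can fall from the fully informed outcome.

# Hypothetical sequential distributed greedy with partial information: agent i
# chooses greedily based only on the choices of the agents listed in observes[i].
# The observation pattern and objective below are illustrative assumptions.
def distributed_greedy(action_sets, f, observes):
    choices = {}
    for i, actions in enumerate(action_sets):
        visible = {choices[j] for j in observes[i]}  # partial view of the team
        choices[i] = max(actions, key=lambda a: f(visible | {a}) - f(visible))
    return [choices[i] for i in range(len(action_sets))]

cover = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4}, "d": {4, 5}}
f = lambda S: len(set().union(*(cover[x] for x in S))) if S else 0
action_sets = [["a", "b"], ["b", "c"], ["c", "d"]]
observes = {0: [], 1: [0], 2: [0]}  # agent 2 cannot see agent 1's choice
print(distributed_greedy(action_sets, f, observes))  # agents may duplicate coverage

Here agent 2 cannot see agent 1 and may repeat its choice; with full information the team would cover more items, which is the kind of information-induced gap a worst-case analysis quantifies.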