Wednesday, October 25, 2017 — 2:00 PM EDT
There have been broad advances in the fields of Artificial Intelligence and Machine Learning (AI/ML) in the past decade, especially in the areas of Deep Learning and Reinforcement Learning (RL), which allow us to learn predictive models and control policies for large, complex systems more easily than ever before. One subset of problems that remains very challenging is domains that contain some form of spatially spreading process (SSP), where local features change over time based on proximity in space. An SSP is not merely a domain that is spatial and whose states change over time; there must also be a spreading component, where local changes at one location influence other locations in a regular way. This commonly arises as a correlation between neighbouring locations, but the connection could be more distant than that. This class of problems includes important real-world domains such as forest wildfire spread, flooding and disease spread. Even domains such as diffusion MRI brain scans can be seen in this way, as they model fluid flow in the brain. SSPs have all the hallmarks of complex systems: a small change at one location at a particular point in time can have vast influence on the future outcome at other locations. The way this complexity arises and presents itself, and the degree to which it can be learned, depends on the dynamics, the rate of change and the resolution of the data.
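The spreading component described above can be illustrated with a minimal sketch: a grid of cells where an active cell ignites each of its four neighbours with some probability on every step. The grid size, neighbourhood, and `p_spread` parameter are illustrative assumptions, not from any specific model discussed in the talk.

```python
import random

def step(grid, p_spread=0.3, rng=random.Random(0)):
    """Advance the spread one time step; 1 = active (e.g. burning), 0 = inactive."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                # Local change at (r, c) influences its four neighbours.
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                        if rng.random() < p_spread:
                            new[nr][nc] = 1
    return new

# Start with a single active cell in the centre and watch it spread.
grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1
for _ in range(3):
    grid = step(grid)
```

Even in this toy version, flipping the single initial cell changes every later state, mirroring the sensitivity to small local changes that makes SSPs hard to learn.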
In this talk I will review the basics of the relevant Deep Learning and RL algorithms and present some new ways of learning and representing dependencies across space and time using recent advances in these areas. One way we have already approached this in the ECEUWML lab is in the forest fire domain, where we use RL to learn a model of fire spread from satellite images. The algorithm learns a policy for a wildfire spreading across a landscape based on local conditions, as if the wildfire were an agent making decisions about where to move next. One exciting possibility for this kind of work is the promise of learning interpretable models and aiding domain experts in creating rich, agent-based models by automatically generating agent-based components from raw data such as satellite imagery. This approach is still new but could provide a much-needed connection between parallel research already going on in the agent-based modelling and AI/ML research communities.
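The fire-as-agent framing above can be sketched with standard tabular Q-learning: the agent's state is the local conditions at the fire front and its actions are spread directions. The state features, actions, reward, and learning parameters here are illustrative placeholders, not the actual model from the talk.

```python
import random

ACTIONS = ["north", "south", "east", "west", "stay"]

def choose_action(q_table, state, epsilon=0.1, rng=random.Random(0)):
    """Epsilon-greedy choice over spread directions for the given local state."""
    if rng.random() < epsilon or state not in q_table:
        return rng.choice(ACTIONS)
    return max(q_table[state], key=q_table[state].get)

def update(q_table, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Standard tabular Q-learning update toward reward plus discounted next value."""
    q = q_table.setdefault(state, {a: 0.0 for a in ACTIONS})
    next_q = max(q_table.get(next_state, {a: 0.0 for a in ACTIONS}).values())
    q[action] += alpha * (reward + gamma * next_q - q[action])

# Example: a state is a tuple of binned local conditions (here fuel and dryness),
# and the "fire agent" is rewarded for moving into cells it actually burned.
q = {}
update(q, ("high_fuel", "dry"), "east", 1.0, ("low_fuel", "dry"))
```

In the work described in the talk the learned policy itself is the object of interest: it summarises, in an interpretable agent-like form, how spread depends on local conditions.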
Mark Crowley is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Waterloo. He received his Ph.D. in Computer Science from the University of British Columbia in 2011.
He is a core member of the Waterloo Institute for Complexity and Innovation. His research seeks to find dependable and transparent ways to augment human decision making in complex and safety-critical domains.
His research program investigates the theoretical and practical challenges that arise from this goal for domains where the complexity arises from the presence of spatial structure, large scale streaming data, or uncertainty.
These types of domains offer unique challenges for traditional artificial intelligence and machine learning (AI/ML) algorithms for decision making, prediction and anomaly detection. Most of his work focuses on developing new algorithms within the fields of Reinforcement Learning, Deep Learning and Random Forests. Dr. Crowley often works in collaboration with researchers in other fields such as sustainable forest management, ecology, automotive technology and medical imaging. He is actively helping to build the interdisciplinary Computational Sustainability research community, with a focus on forest wildfire prediction and control, invasive species management and flood prediction.