AI seminar: Foraging strategies of artificial ants
Speaker: Chris Marriott, University of Waterloo
Andreas Stöckel, PhD candidate
David R. Cheriton School of Computer Science
The artificial neurons typically employed in machine learning and computational neuroscience bear little resemblance to biological neurons. They are often derived from the “leaky integrate-and-fire” (LIF) model, neglect spatial extent, and assume a linear combination of input variables. It is well known that these simplifications have a profound impact on the family of functions that can be computed in a single-layer neural network.
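As background, here is a minimal sketch of the textbook LIF dynamics and the linear input combination the abstract refers to. The function name, parameters, and input values are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def simulate_lif(inputs, weights, dt=1e-3, tau=20e-3,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Textbook leaky integrate-and-fire neuron driven by a weighted
    (linear) combination of its inputs; all parameters are illustrative."""
    v = v_rest
    spikes = []
    for x in inputs:                      # x: input vector at one time step
        current = np.dot(weights, x)      # linear combination of inputs
        v += dt / tau * (-(v - v_rest) + current)  # leaky integration
        if v >= v_thresh:                 # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset                   # reset membrane potential
        else:
            spikes.append(0)
    return np.array(spikes)

# Example: a constant two-input drive above threshold yields regular spiking.
x_seq = np.tile([1.0, 0.5], (200, 1))
print(simulate_lif(x_seq, weights=np.array([1.5, 1.0])).sum())
```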
Nabiha Asghar, PhD candidate
David R. Cheriton School of Computer Science
We address the problem of incremental domain adaptation (IDA). We assume that domains arrive one after another and that we can only access data from the current domain. The goal of IDA is to build a unified model that performs well on all the domains encountered so far. We propose to augment a recurrent neural network (RNN) with a directly parameterized memory bank, which is retrieved by an attention mechanism at each step of the RNN transition. The memory bank provides a natural way of performing IDA: when adapting our model to a new domain, we progressively add new slots to the memory bank, which increases the number of parameters and thus the model capacity.
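To make the described architecture concrete, here is a rough PyTorch sketch of an RNN transition augmented with a directly parameterized memory bank that is read by dot-product attention, together with a method for growing the bank when a new domain arrives. The class and method names (MemoryAugmentedRNN, add_slots), the choice of a GRU cell, and the initialization details are assumptions for illustration, not taken from the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAugmentedRNN(nn.Module):
    """Sketch: an RNN whose transition attends over a trainable memory bank."""

    def __init__(self, input_size, hidden_size, num_slots):
        super().__init__()
        self.cell = nn.GRUCell(input_size + hidden_size, hidden_size)
        # Directly parameterized memory bank: one trainable vector per slot.
        self.memory = nn.Parameter(torch.randn(num_slots, hidden_size) * 0.01)

    def add_slots(self, num_new_slots):
        """Grow the memory bank when adapting to a new domain (IDA step).
        Note: re-create the optimizer afterwards so it tracks the new slots."""
        new = torch.randn(num_new_slots, self.memory.size(1),
                          device=self.memory.device) * 0.01
        self.memory = nn.Parameter(torch.cat([self.memory.data, new], dim=0))

    def forward(self, inputs):
        # inputs: (seq_len, batch, input_size)
        batch = inputs.size(1)
        h = inputs.new_zeros(batch, self.cell.hidden_size)
        outputs = []
        for x_t in inputs:
            # Attention over memory slots, keyed by the current hidden state.
            scores = h @ self.memory.t()                     # (batch, slots)
            read = F.softmax(scores, dim=-1) @ self.memory   # (batch, hidden)
            h = self.cell(torch.cat([x_t, read], dim=-1), h)
            outputs.append(h)
        return torch.stack(outputs), h

# Usage: adapt to a new domain by adding slots, then fine-tune on its data.
model = MemoryAugmentedRNN(input_size=32, hidden_size=64, num_slots=8)
out, h = model(torch.randn(10, 4, 32))
model.add_slots(4)  # expand capacity before training on the next domain
```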