Colloquium Series 2018-2019

Colloquia are generally on Tuesdays at 3:30 p.m., once per month. They are usually held in E5-6111 (exceptions will be noted). Abstracts are posted as available. If you'd like to be on the mailing list announcing these events, please sign up here.

Here is a list of our upcoming speakers for the 2018-2019 academic year:

October 2, 2018 - Subutai Ahmad (Numenta, Inc)

October 23, 2018 - Doug Crawford (York)

November 20, 2018 - Roland Memisevic (Twenty Billion Neurons) 

November 27, 2018 - Stefan Mihalas (Allen Institute for Brain Science)

December 11, 2018 - Yan Wu (DeepMind)

March 12, 2019 - Joel Zylberberg (York)

May 14, 2019 - Javier F Medina (Baylor)


Tuesday, October 2, 2018
E5-6127
3:30 p.m.-5:00 p.m.

Subutai Ahmad
Numenta, Inc

Have We Missed Half of What the Neocortex Does? A New Predictive Framework Based on Cortical Grid Cells

How the neocortex works is a mystery. Traditional feedforward models of perception cannot account for the vast majority of cortical connections. In this talk, I will describe a theory in which sensory regions of the neocortex process two inputs. One input is the well-known sensory data arriving via thalamic relay cells. The second is an allocentric representation of location, which we propose is derived from motion inputs in the sub-granular layers of each cortical column. The allocentric location represents where a sensed feature is relative to the object being sensed. As the sensors move, cortical columns learn complete predictive models of objects by integrating feature and location representations over time. We propose that the location signal is derived in each column using the same principles as grid cells in the entorhinal cortex. In this proposal, individual cortical columns are able to model complete objects and are far more powerful than currently believed. I will discuss our model, its mechanisms, and the implications for hierarchy and cortical function.
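As a loose illustration of the feature-at-location idea (a toy sketch with invented names, where simple integer coordinates stand in for a grid-cell location code; this is not Numenta's implementation), an object model can be treated as a set of (location, feature) pairs that successive sensor movements progressively disambiguate:

# Toy sketch of feature-at-location object modeling (illustrative only;
# the grid-cell location code is replaced by plain integer coordinates).
from collections import defaultdict

class ToyColumn:
    def __init__(self):
        # object name -> set of (location, feature) pairs learned for it
        self.models = defaultdict(set)

    def learn(self, obj, location, feature):
        self.models[obj].add((location, feature))

    def infer(self, sensations):
        # Keep every object consistent with all (location, feature) pairs
        # observed so far; more movements narrow the candidate set.
        candidates = set(self.models)
        for loc, feat in sensations:
            candidates = {o for o in candidates if (loc, feat) in self.models[o]}
        return candidates

col = ToyColumn()
col.learn("mug", (0, 0), "curved")
col.learn("mug", (1, 0), "handle")
col.learn("can", (0, 0), "curved")
col.learn("can", (1, 0), "curved")

print(col.infer([((0, 0), "curved")]))    # {'mug', 'can'} -- still ambiguous
print(col.infer([((0, 0), "curved"),
                 ((1, 0), "handle")]))    # {'mug'} -- movement disambiguates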

Tuesday, October 23, 2018
3:30 p.m.-5:00 p.m.

Doug Crawford
York University

Spatial Updating During Smooth Pursuit: From Models to Neurons

Spatial updating is the computational problem that arises when remembered visual information is stored relative to eye position, and the eyes then move before that information can be retrieved. This has mainly been studied in the context of rapid eye movements called saccades. Most recent models and neurophysiological studies agree that the primary mechanism for trans-saccadic spatial updating involves the ‘remapping’ of stored visual information within an eye-fixed frame. It is also known that humans and non-human primates can spatially update during slow eye movements, such as smooth pursuit of a moving visual target, but the mechanisms for updating remembered locations during slow eye movements have largely been ignored. Here, I will describe a state-space network model that is able to reproduce spatial updating during both saccades and smooth pursuit. This model predicts that, in contrast with the discrete remapping observed during saccades, smooth pursuit should be accompanied by continuous updating, corresponding to a ‘moving hill’ of neural activity for remembered locations within the brain’s internal retinotopic maps. I will then describe neurophysiological data collected in monkeys, confirming this prediction in superior colliculus visual neurons. I will further show that this mechanism depends on attention, and appears to augment visual responses even when the visual stimulus is still visible. Thus, continuous updating is a mechanism that complements discrete trans-saccadic updating but accounts for the different computational requirements of pursuit, and is likely used both for updating visual memory and for augmenting spatial attention during active vision.
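To make the model's central prediction concrete (a minimal sketch, assuming a 1-D retinotopic map with Gaussian population activity; none of this is the authors' code), discrete remapping shifts the remembered ‘hill’ once per saccade, while continuous updating moves it a small step per time step during pursuit:

# Illustrative sketch only; map size, hill width, and step count are invented.
import numpy as np

def gaussian_hill(center, size=100, width=3.0):
    """Population activity peaked at a remembered retinotopic location."""
    x = np.arange(size)
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def update_memory(center, eye_displacement, mode, n_steps=10):
    """Shift the remembered location opposite to the eye movement.
    'saccade': one discrete remap; 'pursuit': continuous small steps."""
    if mode == "saccade":
        return [gaussian_hill(center - eye_displacement)]
    frames = []
    for step in range(1, n_steps + 1):
        frames.append(gaussian_hill(center - eye_displacement * step / n_steps))
    return frames

# Discrete remapping: a single jump of the hill.
saccade_frames = update_memory(center=50, eye_displacement=20, mode="saccade")
# Continuous updating: a 'moving hill' sweeping across the map.
pursuit_frames = update_memory(center=50, eye_displacement=20, mode="pursuit")
print(len(saccade_frames), len(pursuit_frames))  # 1 snapshot vs 10 snapshots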

Tuesday, November 27, 2018
3:30 p.m.-5:00 p.m.

Stefan Mihalas
Allen Institute for Brain Science

Bio-inspired models of machine learning in vision

Deep neural networks have been inspired by biological networks. Convolutional neural networks, a frequently used form of deep network, have had great success in many real-world applications and have been used to model visual processing in the brain. However, these networks require large amounts of labeled data to train and are quite brittle: for example, small changes in the input image can dramatically change the network's output prediction. In contrast to what is known from biology, these networks rely on feedforward connections, largely ignoring the influence of recurrent connections.

In this study, we construct deep neural networks that make use of knowledge of local circuits, and test some predictions of the networks against observed data. For the local circuit, we used a model based on the assumption that the lateral connections of neurons implement optimal integration of context. The optimal computations require more complex neurons, but they can be approximated by a standard artificial neuron. We tested this hypothesis using natural scene statistics and mouse V1 recordings, which allow us to construct a parameter-free model of lateral connections. The optimal structure matches the observed structure (like-to-like pyramidal connectivity and distance dependence of connections) better than receptive field correlation models do.

Subsequently, we integrated these local circuits into traditional convolutional neural networks. Models with optimal lateral connections are more robust to noise and achieve better performance on noisy versions of the MNIST and CIFAR-10 datasets. These models also reproduce salient features of observed neuronal recordings, e.g. positive signal and noise correlations. Our results demonstrate the usefulness of combining knowledge of local circuits with machine learning techniques, both for real-world vision tasks and for studying cortical computations.
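A minimal sketch of how a local lateral circuit might be folded into a convolutional network (an illustration in PyTorch under invented settings; here the lateral kernel is learned, whereas the work described above derives it from data in a parameter-free way):

# Illustrative only: a conv layer followed by recurrent lateral steps.
import torch
import torch.nn as nn

class LateralConvBlock(nn.Module):
    """Feedforward conv layer plus a few recurrent lateral iterations.
    The lateral kernel plays the role of within-layer context integration."""
    def __init__(self, in_ch, out_ch, lateral_steps=3):
        super().__init__()
        self.ff = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # Lateral connections: same-layer, spatially local recurrence.
        self.lateral = nn.Conv2d(out_ch, out_ch, kernel_size=5, padding=2)
        self.steps = lateral_steps

    def forward(self, x):
        drive = torch.relu(self.ff(x))
        h = drive
        for _ in range(self.steps):
            # Each step re-injects the feedforward drive plus lateral context.
            h = torch.relu(drive + self.lateral(h))
        return h

block = LateralConvBlock(3, 16)
out = block(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 16, 32, 32])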

Tuesday, December 11, 2018
3:30 p.m.-5:00 p.m.

Yan Wu
Google DeepMind

Learning Attractor Dynamics for Generative Memory

A central challenge faced by memory systems is the robust retrieval of a stored pattern in the presence of interference from other stored patterns and noise. A theoretically well-founded solution to robust retrieval is given by attractor dynamics, which iteratively clean up patterns during recall. However, incorporating attractor dynamics into modern deep learning systems poses difficulties: attractor basins are characterized by vanishing gradients, which are known to make training neural networks difficult. In this work, we exploit recent advances in variational inference and avoid the vanishing gradient problem by training a generative distributed memory with a variational lower-bound-based Lyapunov function. The model is minimalistic, with surprisingly few parameters. Experiments show that it converges to correct patterns upon iterative retrieval and achieves competitive performance as both a memory model and a generative model.
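To make the attractor idea concrete, here is a classic Hopfield-style cleanup (a standard textbook sketch, not the paper's variational model): retrieval starts from a corrupted cue and iterates a dynamics whose fixed points lie near the stored patterns, so each step cleans up the cue:

# Textbook Hopfield-network sketch; sizes and noise level are invented.
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))       # stored binary patterns

# Hebbian weights; fixed points of the dynamics sit near stored patterns.
W = patterns.T @ patterns / patterns.shape[1]
np.fill_diagonal(W, 0)

def retrieve(cue, n_iters=20):
    """Iteratively clean up a noisy cue by descending the energy landscape."""
    state = cue.copy()
    for _ in range(n_iters):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

noisy = patterns[0].copy()
flip = rng.choice(64, size=12, replace=False)      # corrupt 12 of 64 bits
noisy[flip] *= -1
recalled = retrieve(noisy)
print(np.mean(recalled == patterns[0]))            # ~1.0 after cleanup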

Tuesday, March 12, 2019
E5-6111
3:30 p.m.-5:00 p.m.

Joel Zylberberg
York University

(Learning) Visual Representations

Visual stimuli elicit action potentials in the retina, which propagate to the brain, where further action potentials are elicited. What is the language of this signalling? In other words, how do patterns of action potentials in each neural circuit correspond to stimuli in the outside world? The first part of this talk will highlight recent work from my laboratory that confronts this problem in the retina and visual cortex. Next, I will discuss ongoing work that asks how those representations are learned. Specifically, I will highlight a joint theory-experiment research program that investigates whether and how the brain's visual neural circuits implement the same kinds of learning algorithms as are found in modern artificial intelligence systems.
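One common way to operationalize the "language of signalling" question is decoding: asking how well the stimulus can be read back out of population activity. A minimal sketch on simulated data (all tuning parameters are invented, and this is not the lab's method):

# Illustrative only: decode orientation from simulated Poisson spike counts.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 30, 200
stimuli = rng.uniform(0, np.pi, n_trials)          # stimulus orientations

# Simulated tuning: each neuron fires most near its preferred orientation.
preferred = np.linspace(0, np.pi, n_neurons)
rates = 10 * np.exp(2 * np.cos(2 * (stimuli[:, None] - preferred[None, :])))
spikes = rng.poisson(rates / rates.max() * 20)     # noisy spike counts

# Linear readout of cos/sin of orientation via least squares.
targets = np.column_stack([np.cos(2 * stimuli), np.sin(2 * stimuli)])
weights, *_ = np.linalg.lstsq(spikes, targets, rcond=None)
pred = spikes @ weights
decoded = np.arctan2(pred[:, 1], pred[:, 0]) / 2 % np.pi
err = np.angle(np.exp(2j * (decoded - stimuli))) / 2
print(np.median(np.abs(err)))                      # small circular error (rad)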

Tuesday, May 14, 2019
E5-6111
3:30 p.m.-5:00 p.m.

Javier F Medina
Baylor College of Medicine

A New Network Architecture for Supervised Learning in the Cerebellum

The cerebellum is often described as a neural machine for supervised learning. Its network architecture consists of anatomically segregated learning modules, which are thought to be specialized for distinct functions defined by the error-related information received via the climbing fiber input, and by the few individual muscles each module is able to control. My talk will present our recent work on mouse eyeblink conditioning, focusing on two unpublished experiments that challenge this classic view about the organization of the cerebellum. First, I will describe a new recurrent circuit that allows some Purkinje cells in the cerebellar cortex to learn in the absence of error-related information in their climbing fiber input. Second, I will show that the output of a single cerebellar module can be used to control a complex motor synergy that requires coordination of multiple muscles. Altogether, the results suggest a new organizational framework for understanding what the cerebellum learns, and how.
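The classic view being challenged here can be compressed into a single learning rule (a standard Marr-Albus-Ito-style sketch, not the speaker's new circuit): parallel fiber-to-Purkinje cell weights are adjusted in proportion to an error signal carried by the climbing fiber:

# Textbook delta-rule sketch of cerebellar supervised learning; the target
# function, learning rate, and sizes are all invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
n_parallel_fibers = 50
w = np.zeros(n_parallel_fibers)          # parallel fiber -> Purkinje weights

def purkinje_output(pf_activity, w):
    return w @ pf_activity

def climbing_fiber_error(target, output):
    """Error-related teaching signal carried by the climbing fiber."""
    return target - output

lr = 0.05
for trial in range(500):
    pf = rng.random(n_parallel_fibers)   # sensory context on parallel fibers
    target = pf[:10].sum()               # desired motor command (toy target)
    out = purkinje_output(pf, w)
    err = climbing_fiber_error(target, out)
    w += lr * err * pf                   # delta rule gated by climbing fiber

print(abs(err))                          # error shrinks over training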