Colloquium Series 2022-2023

Colloquia are generally held on Tuesdays at 2:30 p.m., once per month. The first two talks of Fall 2022 will be online (links forthcoming); we anticipate a return to in-person events soon after. If you'd like to be on the mailing list announcing these events, please sign up here.


Here is a list of our speakers for the 2022-2023 series (this will be updated as additional speakers are scheduled).

Winter 2023 Term

January 17 14:30 (virtual) - Sara Solla (Northwestern)

Title: Low Dimensional Manifolds for Neural Dynamics

Abstract: The ability to simultaneously record the activity from tens to hundreds to thousands of neurons has allowed us to analyze the computational role of population activity as opposed to single neuron activity. Recent work on a variety of cortical areas suggests that neural function may be built on the activation of population-wide activity patterns, the neural modes, rather than on the independent modulation of individual neural activity. These neural modes, the dominant covariation patterns within the neural population, define a low dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics and argue that latent cortical dynamics within the manifold are the fundamental and stable building blocks of neural population activity.
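As a rough illustration of the manifold idea (my sketch, not from the talk; the simulated data and the choice of PCA are assumptions), a handful of shared latent signals can dominate the variance of a much larger population:

```python
# A minimal sketch: recovering a low-dimensional "neural manifold" from
# simulated population activity with PCA. The data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# 100 neurons driven by 3 shared latent signals (the "neural modes")
# plus a little private noise.
n_neurons, n_timepoints, n_modes = 100, 1000, 3
latents = rng.standard_normal((n_timepoints, n_modes))    # latent dynamics
loadings = rng.standard_normal((n_modes, n_neurons))      # mode patterns
activity = latents @ loadings + 0.1 * rng.standard_normal((n_timepoints, n_neurons))

# PCA: the leading covariance eigenvectors are the dominant covariation
# patterns; projecting onto them gives the latent dynamics on the manifold.
activity -= activity.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(activity.T))[::-1]    # descending
explained = np.cumsum(eigvals) / eigvals.sum()
print(f"variance captured by 3 modes: {explained[2]:.1%}")  # close to 100%
```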


February 7 15:30 (virtual) - Eric Shea-Brown (Washington)

Title: When do high dimensional networks learn to produce low dimensional dynamics?

Abstract: Neural networks in biology and in engineering have tremendous numbers of interacting units, yet often produce dynamics with many fewer degrees of freedom — that is, of low dimensionality. We explore when general network learning rules tend to produce such low dimensional dynamics. We demonstrate two main applications: networks producing highly compressed representations that may support generalization, and networks extracting latent variables that may efficiently describe more complex environments.
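A standard way to quantify such dimensionality (a common measure, not necessarily the one used in this work) is the participation ratio of the covariance eigenvalues, PR = (sum_i lambda_i)^2 / sum_i lambda_i^2:

```python
# A minimal sketch: the participation ratio as an effective dimensionality
# of network activity. (A standard measure; the talk may define it differently.)
import numpy as np

def participation_ratio(activity):
    """activity: (timepoints, units) array of network states."""
    activity = activity - activity.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(activity.T))
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

rng = np.random.default_rng(1)
# 200 independent units: dimensionality close to 200.
print(participation_ratio(rng.standard_normal((5000, 200))))
# The same 200 units driven by 2 shared signals: dimensionality close to 2.
latents = rng.standard_normal((5000, 2))
print(participation_ratio(latents @ rng.standard_normal((2, 200))))
```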


March 21 15:30 *In Person, Room E5-2004* - Maurizio de Pitta (Krembil/UofT)*

Title: Neuron-glial switches

Abstract: Healthy brain functions rely on the intricate interaction of neurons with glial cells. Among the latter, astrocytes are ubiquitous in our cortical circuits and can affect synaptic transmission on multiple time scales. On the short time scale, they are responsible, for example, for glutamate clearance, which is critical in setting the tone of neural activity. On a longer time scale, astrocytes operate as endocrine cells, modulating synaptic function by releasing common transmitter molecules. Although different in nature, both pathways may mediate positive feedback on neural activity, resulting in the emergence of multistability. In this scenario, the multiple activity states emerging from neuron-astrocyte interactions could account for various cognition-related mechanisms in the healthy and diseased brain: from working-memory tasks to dementia-related neural correlates.
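The step from positive feedback to multistability can be seen in a toy rate model (my sketch, not the speaker's model; every parameter here is an assumption chosen only so that two stable states coexist):

```python
# Toy illustration: sigmoidal positive feedback, partly carried by a slow
# astrocyte-like variable, yields two stable activity states (bistability).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate(r0, steps=20000, dt=1e-3):
    r, a = r0, r0                # neural rate and slow astrocyte variable
    for _ in range(steps):
        drive = 6.0 * r + 4.0 * a - 5.0  # recurrent input + astrocyte feedback
        r += dt * (-r + sigmoid(drive))  # fast neural dynamics (tau = 1)
        a += dt * (-a + r) / 5.0         # slow astrocyte dynamics (tau = 5)
    return r

# The same network settles into different stable states depending on where
# it starts: the signature of multistability.
print(f"low start  -> r = {simulate(0.05):.3f}")
print(f"high start -> r = {simulate(0.80):.3f}")
```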

*Full Affiliations:

Scientist, Krembil Research Institute

Assistant Professor, Department of Physiology, Temerty Faculty of Medicine, University of Toronto

Scientific Associate, Basque Center for Applied Mathematics, Bilbao, Spain

Professor, Department of Neurosciences, University of the Basque Country, Leioa, Spain


April 25 15:30 *In Person* - Jeff Orchard (CS, Waterloo)

Title: Cognition using Spiking-Phasor Neurons

Abstract: Vector Symbolic Architectures (VSAs) are a powerful framework for compositional representation and reasoning, and lend themselves to neural-network implementations. This allows us to create neural networks that can perform cognitive functions like spatial reasoning, arithmetic, reasoning over sequences, symbol binding, and logic. But the vectors involved can be quite large -- hence the alternative label “Hyperdimensional (HD) computing”. Advances in neuromorphic hardware hold the promise of reducing the running time and energy footprint of neural networks by orders of magnitude. In this talk, I will extend some pioneering work and run VSA algorithms on a substrate of spiking neurons that could be run efficiently on neuromorphic hardware.
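The core VSA operations are easy to sketch with complex-valued "phasor" vectors, the representation that maps naturally onto spiking-phasor neurons (an illustrative sketch of the standard FHRR scheme, not the speaker's implementation):

```python
# Binding and unbinding with random phasor vectors (FHRR-style VSA).
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # "hyperdimensional": larger D gives cleaner recovery

def random_phasor():
    """A random symbol: unit-magnitude complex entries (pure phases)."""
    return np.exp(1j * rng.uniform(-np.pi, np.pi, D))

def similarity(x, y):
    return np.real(np.vdot(x, y)) / D

color, red = random_phasor(), random_phasor()
shape, circle = random_phasor(), random_phasor()
blue = random_phasor()

# Bind role-filler pairs by elementwise multiplication (phases add) and
# bundle them by addition: the record {color: red, shape: circle}.
record = color * red + shape * circle

# Query the color: unbind with the conjugate role, then clean up by
# comparing against known symbols.
recovered = record * np.conj(color)
print(similarity(recovered, red))   # ~1.0 (plus small crosstalk noise)
print(similarity(recovered, blue))  # ~0.0 for an unrelated symbol
```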


Stephanie Palmer (Chicago) - unfortunately needs to reschedule for Fall 2023.

Title/Abstract to follow



Fall 2022 Term

October 25 14:30 - Adrien Peyrache (McGill)

Title: The origin of symmetry: Reciprocal feature encoding by cortical excitatory and inhibitory neurons

Abstract: In the cortex, the interplay between excitation and inhibition determines the fidelity of neuronal representations. However, while the receptive fields of excitatory neurons are often fine-tuned to the encoded features, the principles governing the tuning of inhibitory neurons are still elusive. We addressed this problem by recording populations of neurons in the postsubiculum (PoSub), a cortical area where the receptive fields of most excitatory neurons correspond to a specific head direction (HD). In contrast to PoSub-HD cells, the tuning of fast-spiking (FS) cells, the largest class of cortical inhibitory neurons, was broad and heterogeneous. However, we found that PoSub-FS cell tuning curves were often fine-tuned in the spatial frequency domain, which resulted in various radial symmetries in their HD tuning. In addition, the average frequency spectrum of PoSub-FS cell populations was virtually indistinguishable from that of PoSub-HD cells but different from that of the upstream thalamic HD cells, suggesting that this population co-tuning in the frequency domain has a local origin. Two observations corroborated this hypothesis. First, PoSub-FS cell tuning was independent of upstream thalamic inputs. Second, PoSub-FS cell tuning was tightly coupled to PoSub-HD cell activity even during sleep. Together, these findings provide evidence that the resolution of neuronal tuning is an intrinsic property of local cortical networks, shared by both excitatory and inhibitory cell populations. We hypothesize that this reciprocal feature encoding supports two parallel streams of information processing in thalamocortical networks.
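The frequency-domain analysis can be pictured with synthetic tuning curves (my sketch; the curves and parameters are invented for illustration): the Fourier spectrum of a tuning curve over head direction exposes its radial symmetry.

```python
# Angular Fourier spectra of head-direction tuning curves (synthetic data).
import numpy as np

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)

# An HD-like cell: a single bump over direction (power at harmonic 1).
hd_tuning = np.exp(np.cos(theta - 1.0))
# An FS-like cell with three-fold radial symmetry (power at harmonic 3).
fs_tuning = 2.0 + np.cos(3 * (theta - 0.5))

def dominant_harmonic(tuning):
    """Angular harmonic with the most power (DC removed)."""
    power = np.abs(np.fft.rfft(tuning - tuning.mean())) ** 2
    return np.argmax(power[1:]) + 1

print(dominant_harmonic(hd_tuning))  # 1: one preferred direction
print(dominant_harmonic(fs_tuning))  # 3: three-fold symmetry
```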

November 1 14:30 - Yalda Mohsenzadeh (Western)

Title: Understanding, Predicting, and Manipulating Image Memorability with Representation Learning

Abstract: Every day, we are bombarded with hundreds of images on our smartphones, on television, or in print. Recent work shows that images differ in their memorability: some stick in our minds while others fade away quickly, and this phenomenon is consistent across people. While it has been shown that memorability is an intrinsic feature of an image, it is still largely unknown what features make images memorable. In this talk, I will present a series of our studies that aim to address this question by proposing a fast representation learning approach to modify and control the memorability of images. The proposed method can be employed in photograph editing applications for social media, learning aids, or advertisement purposes.

December 6 14:30 (virtual, on Zoom) - Leyla Isik (Johns Hopkins)

Title: The neural computations underlying real-world social interaction perception


Abstract: Humans perceive the world in rich social detail. We effortlessly recognize not only objects and people in our environment, but also social interactions between people. The ability to perceive and understand social interactions is critical for functioning in our social world. We recently identified a brain region that selectively represents others’ social interactions in the posterior superior temporal sulcus (pSTS) in a manner that is distinct from other visual and social processes, like face recognition and theory of mind. However, it is unclear how social interactions are processed in the real world where they co-vary with many other sensory and social features. In the first part of my talk, I will discuss new work using naturalistic movie fMRI paradigms and novel machine learning analyses to understand how humans process social interactions in real-world settings. We find that social interactions guide behavioral judgements and are selectively processed in the pSTS, even after controlling for the effects of other co-varying perceptual and social information, including faces, voices, and theory of mind. In the second part of my talk, I will discuss the computational implications of social interaction selectivity and present a novel graph neural network model, SocialGNN, that instantiates these insights. SocialGNN reproduces human social interaction judgements in both controlled and natural videos using only visual information, but requires relational graph structure and processing to do so. Together, this work suggests that social interaction recognition is a core human ability that relies on specialized, structured visual representations.
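The relational claim can be made concrete with a generic message-passing sketch (my toy example, not the actual SocialGNN): people in a scene are nodes, and each node's representation is updated from its neighbors', so downstream judgments can depend on relations rather than on individual appearance alone.

```python
# One round of message passing over a toy "social scene" graph.
import numpy as np

rng = np.random.default_rng(0)

# Three people (nodes) with 4-d visual features each; edges connect people
# who are interacting (here, persons 0 and 1; person 2 is alone).
features = rng.standard_normal((3, 4))
edges = [(0, 1), (1, 0)]

W_msg = 0.5 * rng.standard_normal((4, 4))   # message weights (untrained)
W_self = 0.5 * rng.standard_normal((4, 4))  # self-update weights (untrained)

def message_passing(features, edges):
    """Aggregate neighbor messages, then apply a nonlinear update."""
    messages = np.zeros_like(features)
    for src, dst in edges:
        messages[dst] += features[src] @ W_msg
    return np.tanh(features @ W_self + messages)

h = message_passing(features, edges)
# Persons 0 and 1 now carry information about each other (the interaction);
# person 2's update uses only its own features.
print(h)
```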