Sugandha Sharma, a master's graduate of the University of Waterloo's CTN, discusses her research and time in the laboratory of CTN Founding Director Chris Eliasmith, as well as her current PhD research at MIT, on the Generally Intelligent Podcast. Give it a listen.
Colloquium Series 2023-2024
Colloquia are generally on Tuesdays at 3:30 p.m., once per month or so. If you'd like to be on the mailing list announcing these events, please sign up here.
Here is a list of our speakers for the Fall 2023 term.
Fall 2023 Term
Moving in time with rhythmic music is nearly universal across human cultures, and group rhythmic coordination produces remarkable group cohesion effects, in part by dissolving subjective boundaries between self and other. How can we make sense of this unique sensorimotor behavior in the context of the wide human repertoire of perceptual and motor processes? In this talk, I propose that the theory of Bayesian predictive processing provides not only a conceptual framework but also a clear, intuitive mathematical modeling language for rhythm perception, rhythm production, and sensorimotor synchronization through self/other integration.
Following the predictive processing account of perceptual inference, I propose a computational model in which we perform approximate Bayesian inference to estimate the momentary phase and tempo of ongoing underlying metrical cycles using learned metrical models. Then, drawing on the theory of “active inference” which extends predictive processing to the realm of action, I propose a closely related computational model of rhythm production as a closed loop: timely feedback from our actions informs a dynamic model of our moving body, and that model guides the timing of subsequent action. Bringing these two computational models together, I propose a new formal account of sensorimotor synchronization: by modeling a heard rhythm and our own motor feedback as though they arise from the same underlying metrical cycle (i.e., modeling “self” and “other” as a single unified process), active inference naturally brings our actions into synch with what we hear. I explore evidence for this model, its new predictions, and experiments that might test those predictions.
Note (Oct 13 2023): Jonathan's talk was recorded and is available on his YouTube channel.
October 24 15:30 (in-person) Stephanie Palmer, Chicago in room E5 2004
How behavioral and evolutionary constraints sculpt early visual processing
Biological systems must selectively encode partial information about the environment, as dictated by the capacity constraints at work in all living organisms. For example, we cannot see every feature of the light field that reaches our eyes; temporal resolution is limited by transmission noise and delays, and spatial resolution is limited by the finite number of photoreceptors and output cells in the retina. Classical efficient coding theory describes how sensory systems can maximize information transmission given such capacity constraints, but it treats all input features equally. Not all inputs are, however, of equal value to the organism. Our work quantifies whether and how the brain selectively encodes stimulus features, specifically predictive features, that are most useful for fast and effective movements. We have shown that efficient predictive computation starts at the earliest stages of the visual system, in the retina. We borrow techniques from statistical physics and information theory to assess how we get terrific, predictive vision from these imperfect (lagged and noisy) component parts. In broader terms, we aim to build a more complete theory of efficient encoding in the brain, and along the way have found some intriguing connections between formal notions of coarse graining in biology and physics.
November 7 15:30 (in-person) Michael Anderson, Western in room E5 2004
Title: Neural reuse, dynamics, and constraints: Getting beyond componential mechanistic explanation of neural function
Abstract: In this talk, I will review some of the evidence for neural reuse--a form of neural plasticity whereby existing neural resources are put to many different uses--and use it to motivate an argument that we need to move beyond (although not necessarily abandon) componential mechanistic explanation in the neurosciences. I claim that what is needed to capture the full range of neural plasticity and dynamics is a style of explanation based on the notion of constraints--enabling constraints in particular. I will give examples of neural phenomena that are hard to capture in the mechanistic framework, and show that they are naturally handled by enabling constraints. As this moves us away from faculty psychology, it has some important implications for the ontology of cognition.
December 5 15:30 (in-person) Matt van der Meer, Dartmouth in room E5 2004
Title: Three lies, and a cognitive process model, about information processing in the rodent hippocampus
Abstract: I will present results from my lab that cast doubt on three commonly held ideas about the rodent hippocampus, and then propose a cognitive process model that assigns synergistic functions to each of the experimentally observed alternatives. First, contrary to the idea that hippocampal replay reflects recent experience and/or upcoming goals, we show “paradoxical replay” of non-chosen options. Second, contrary to the idea that remapping across different environments is random, we observe various kinds of structure across the encoding of different environments. Third, contrary to the idea that the hippocampus faithfully encodes ongoing experience, we synthesize evidence that it actually blends the past, present and future together into subjective belief states. We propose that paradoxical replay protects these subjective belief states from interference, and that representational similarity structure of these belief states underlies behavioral generalization.
Colloquium Series 2022-2023
Colloquia are generally on Tuesdays at 2:30 p.m., once per month. For the first two talks of Fall 2022 they will be online (links forthcoming). We anticipate a return to live events imminently. If you'd like to be on the mailing list announcing these events, please sign up here.
Here is a list of our speakers for 2022-2023 (this will be updated as additional speakers are scheduled).
Winter 2023 Term
January 17 14:30 (virtual) - Sara Solla (Northwestern)
Title: Low Dimensional Manifolds for Neural Dynamics
The ability to simultaneously record the activity from tens to hundreds to thousands of neurons has allowed us to analyze the computational role of population activity as opposed to single neuron activity. Recent work on a variety of cortical areas suggests that neural function may be built on the activation of population-wide activity patterns, the neural modes, rather than on the independent modulation of individual neural activity. These neural modes, the dominant covariation patterns within the neural population, define a low dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics and argue that latent cortical dynamics within the manifold are the fundamental and stable building blocks of neural population activity.
February 7 15:30 (virtual) - Eric Shea-Brown (Washington)
Title: When do high dimensional networks learn to produce low dimensional dynamics?
Abstract: Neural networks in biology and in engineering have tremendous numbers of interacting units, yet often produce dynamics with many fewer degrees of freedom — that is, of low dimensionality. We explore when general network learning rules tend to produce such low dimensional dynamics. We demonstrate two main applications, in networks producing highly compressed representations that may support generalization, and in networks extracting latent variables that may efficiently describe more complex environments.
March 21 15:30 In person *ROOM E5-2004* - Maurizio de Pitta (Krembil/UofT)
Healthy brain functions rely on the intricate interaction of neurons with glial cells. Among the latter, astrocytes are ubiquitous in our cortical circuits and can affect synaptic transmission on multiple time scales. On the short time scale, they are responsible, for example, for glutamate clearance, which is critical in setting the tone of neural activity. On a longer time scale, astrocytes operate as endocrine cells, modulating synaptic function by releasing common transmitter molecules. Although different in nature, both pathways may mediate positive feedback on neural activity, resulting in the emergence of multistability. In this scenario, the multiple activity states emerging from neuron-astrocyte interactions could account for various cognitive-related mechanisms in the healthy and diseased brain: from working-memory tasks to dementia-related neural correlates.
Scientist, Krembil Research Institute
Assistant Professor, Department of Physiology, Temerty Faculty of Medicine, University of Toronto
Scientific Associate, Basque Center for Applied Mathematics, Bilbao, Spain
Professor, Department of Neurosciences, University of the Basque Country, Leioa, Spain
April 25 15:30 *In Person*
Speaker: Jeff Orchard (CS, Waterloo)
Title: Cognition using Spiking-Phasor Neurons
Abstract: Vector Symbolic Architectures (VSAs) are a powerful framework for representing compositional reasoning and lend themselves to neural-network implementations. This allows us to create neural networks that can perform cognitive functions, like spatial reasoning, arithmetic, reasoning over sequences, symbol binding, and logic. But the vectors involved can be quite large -- hence the alternative label “Hyperdimensional (HD) computing”. Advances in neuromorphic hardware hold the promise of reducing the running time and energy footprint of neural networks by orders of magnitude. In this talk, I will extend some pioneering work, and run VSA algorithms on a substrate of spiking neurons that could be run efficiently on neuromorphic hardware.
Stephanie Palmer (Chicago) unfortunately needs to reschedule for Fall 2023.
Title/Abstract to follow
Fall 2022 Term
Oct 25 14:30 Adrien Peyrache (McGill)
The origin of symmetry: Reciprocal feature encoding by cortical excitatory and inhibitory neurons.
Abstract: In the cortex, the interplay between excitation and inhibition determines the fidelity of neuronal representations. However, while the receptive fields of excitatory neurons are often fine-tuned to the encoded features, the principles governing the tuning of inhibitory neurons are still elusive. We addressed this problem by recording populations of neurons in the postsubiculum (PoSub), a cortical area where the receptive fields of most excitatory neurons correspond to a specific head direction (HD). In contrast to PoSub-HD cells, the tuning of fast-spiking (FS) cells, the largest class of cortical inhibitory neurons, was broad and heterogeneous. However, we found that PoSub-FS cell tuning curves were often fine-tuned in the spatial frequency domain, which resulted in various radial symmetries in their HD tuning. In addition, the average frequency spectrum of PoSub-FS cell populations was virtually indistinguishable from that of PoSub-HD cells but different from that of the upstream thalamic HD cells, suggesting that this population co-tuning in the frequency domain has a local origin. Two observations corroborated this hypothesis. First, PoSub-FS cell tuning was independent of upstream thalamic inputs. Second, PoSub-FS cell tuning was tightly coupled to PoSub-HD cell activity even during sleep. Together, these findings provide evidence that the resolution of neuronal tuning is an intrinsic property of local cortical networks, shared by both excitatory and inhibitory cell populations. We hypothesize that this reciprocal feature encoding supports two parallel streams of information processing in thalamocortical networks.
Nov 1 14:30 Yalda Mohsenzadeh (Western)
Talk Title: Understanding, Predicting, and Manipulating Image Memorability with Representation Learning
Abstract: Every day, we are bombarded with hundreds of images on our smartphones, on television, or in print. Recent work shows that images differ in their memorability: some stick in our mind while others fade away quickly, and this phenomenon is consistent across people. While it has been shown that memorability is an intrinsic feature of an image, it is still largely unknown what features make images memorable. In this talk, I will present a series of our studies that aim to address this question by proposing a fast representation learning approach to modify and control the memorability of images. The proposed method can be employed in photograph editing applications for social media, learning aids, or
Dec 6 14:30 Leyla Isik (Johns Hopkins) Virtual on Zoom
Title: The neural computations underlying real-world social interaction perception
Abstract: Humans perceive the world in rich social detail. We effortlessly recognize not only objects and people in our environment, but also social interactions between people. The ability to perceive and understand social interactions is critical for functioning in our social world. We recently identified a brain region that selectively represents others’ social interactions in the posterior superior temporal sulcus (pSTS) in a manner that is distinct from other visual and social processes, like face recognition and theory of mind. However, it is unclear how social interactions are processed in the real world where they co-vary with many other sensory and social features. In the first part of my talk, I will discuss new work using naturalistic movie fMRI paradigms and novel machine learning analyses to understand how humans process social interactions in real-world settings. We find that social interactions guide behavioral judgements and are selectively processed in the pSTS, even after controlling for the effects of other co-varying perceptual and social information, including faces, voices, and theory of mind. In the second part of my talk, I will discuss the computational implications of social interaction selectivity and present a novel graph neural network model, SocialGNN, that instantiates these insights. SocialGNN reproduces human social interaction judgements in both controlled and natural videos using only visual information, but requires relational, graph structure and processing to do so. Together, this work suggests that social interaction recognition is a core human ability that relies on specialized, structured visual representations.
Colloquium Series 2021-2022
Colloquia are generally on Tuesdays at 2:30 p.m., once per month. For the Fall 2021 they will be online (link to be provided shortly). We anticipate a return to live events in Winter 2022 (formerly held in E5-6111). Abstracts are posted as available. If you'd like to be on the mailing list announcing these events, please sign up here.
Here is a list of our speakers so far for the 2021-2022 academic year:
Winter 2022 Term
Feb 22 Richard Naud (U Ottawa)
March 8 Mayank Mehta (UCLA)
Title: From virtual reality to reality: How neurons make maps
A part of the brain called the hippocampus is thought to be crucial for learning and memory and is implicated in many incurable disorders, ranging from autism to Alzheimer's. Hence, it is crucial to understand how the hippocampus works. Decades of research show that hippocampal damage in humans causes loss of episodic or autobiographical memory, but such memory traces are hard to find in hippocampal single neurons. Instead, research in the rodent hippocampus shows that its neurons encode spatial maps, acting as place cells. Place cells are common in rodents but rare in humans and nonhuman primates. These major discrepancies have hampered not only scientific progress but also the diagnosis and treatment of major disorders, including ADRD. I will share our findings using virtual reality that address these questions and provide surprising answers that can significantly advance the translation of basic science to treatments.
April (TBD) We are rescheduling in hope of doing this live and in-person.
(titles, abstracts, and times to follow)
Fall 2021 Term