Colloquium Series 2023-2024

Colloquia are generally on Tuesdays at 3:30 p.m., once per month or so. If you'd like to be on the mailing list announcing these events, please sign up here.

Here is a list of our speakers for the Winter Term 2024; full titles and abstracts will follow as they become available.

Winter 2024

Feb 6, 3:30, E5 2004, Milad Lankarany, Krembil Research Institute

Title: Using Computational Neuroscience and Neuromodulation Techniques to Uncover Mechanisms of Neural Systems

Abstract: Deep brain stimulation (DBS), an invasive neuromodulation technique, has been successfully used to reduce symptoms of movement disorders by delivering electrical pulses to substructures of the basal ganglia and thalamus. Existing efforts to understand the mechanisms of action of DBS rely on detecting biomarkers during (or after) delivery of clinically approved high-frequency pulses and assessing their clinical outcomes. However, less consideration has been given to how DBS modulates information processing in the human brain. Specifically, it is not yet known how the modulated neural activities observed in a sub-cortical region represent the dynamics of the underlying circuitry. Building on our published data of in-vivo single-unit recordings in patients with Essential Tremor (n = 20), we discovered that the neuronal dynamics (in the sense of instantaneous firing rate) of the thalamic ventral intermediate nucleus (Vim) in response to various frequencies of DBS can be explained by the Balanced Amplification mechanism, an established theoretical framework suggesting that large amplification of excitation, caused by either strong external input or positive recurrent circuitry, can be balanced by strong feedback inhibition. We developed a network rate model, together with a sequential optimization algorithm, that accurately reproduces the instantaneous firing rate of Vim neurons, the primary surgical target of DBS for reducing symptoms of essential tremor, in response to low- and high-frequency DBS. Our study revealed that a computer simulation of a single population of Vim neurons (which appeared in our recent publication, PMID: 37140523) cannot capture the dynamics of the firing rate; thus, the presence of other neuronal populations is essential to track the firing rate reliably and mechanistically. Interestingly, our work suggests, in an unsupervised manner, that the presence of an inhibitory neuronal population is necessary to replicate Vim firing rates across different DBS frequencies. We anticipate that our study provides a conceptual modeling framework to uncover mechanisms of information processing in different DBS-modulated sub-cortical regions in the human brain.
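The balanced-amplification idea invoked in the abstract can be illustrated with a minimal two-population (excitatory/inhibitory) linear rate model. Everything below — the weights, time constant, and pulse input standing in for a DBS pulse train — is an assumed toy sketch for intuition, not the speaker's actual model:

```python
import numpy as np

# Minimal two-population linear rate model (excitatory E, inhibitory I).
# Illustrative parameters only: strong recurrent excitation w_ee is
# balanced by strong feedback inhibition w_ei, the balanced-amplification
# regime, so brief input pulses are transiently amplified, then damped.
tau = 10.0               # time constant (ms), assumed
w_ee, w_ei = 4.0, 4.5    # recurrent excitation, feedback inhibition
w_ie, w_ii = 4.0, 3.5
W = np.array([[w_ee, -w_ei],
              [w_ie, -w_ii]])

def simulate(pulse_times, t_max=200.0, dt=0.1):
    """Euler-integrate dr/dt = (-r + W r + input)/tau with brief pulses."""
    n = int(t_max / dt)
    r = np.zeros(2)
    rates = np.zeros((n, 2))
    for k in range(n):
        t = k * dt
        # brief excitatory pulse (a crude stand-in for a stimulation pulse)
        inp = np.array([5.0, 0.0]) if any(abs(t - p) < 1.0 for p in pulse_times) else np.zeros(2)
        r = r + dt * (-r + W @ r + inp) / tau
        rates[k] = r
    return rates

rates = simulate(pulse_times=[50.0, 100.0, 150.0])
print("peak E rate:", rates[:, 0].max())
```

With these (assumed) weights the linearized system is stable, so each pulse produces a large transient response that inhibition pulls back down between pulses.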


March 12, 3:30, NOTE room change: E7-7363, Memming Park, Champalimaud Centre

Title: Persistent learning signals and working memory without continuous attractors

Abstract: Neural dynamical systems with stable attractor structures, such as point attractors and continuous attractors, are hypothesized to underlie meaningful temporal behavior that requires working memory. However, working memory may not support useful learning signals necessary to adapt to changes in the temporal structure of the environment. We show that in addition to the continuous attractors that are widely implicated, periodic and quasi-periodic attractors can also support learning arbitrarily long temporal relationships. Unlike the continuous attractors that suffer from the fine-tuning problem, the less explored quasi-periodic attractors are uniquely qualified for learning to produce temporally structured behavior. Our theory has broad implications for the design of artificial learning systems and makes predictions about observable signatures of biological neural dynamics that can support temporal dependence learning and working memory. Based on our theory, we developed a new initialization scheme for artificial recurrent neural networks that outperforms standard methods for tasks that require learning temporal dynamics. Moreover, we propose a robust recurrent memory mechanism for integrating and maintaining head direction without a ring attractor.
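The fine-tuning problem of continuous attractors mentioned in the abstract can be seen in a one-dimensional toy model; the parameters below are illustrative assumptions, not material from the talk:

```python
# A leaky integrator dx/dt = (-x + w*x)/tau holds a memory (line
# attractor) only when recurrent feedback w exactly cancels the leak
# (w = 1). Even 1% mistuning makes the stored value decay over time.
def hold_memory(w, x0=1.0, steps=10000, dt=0.01, tau=1.0):
    """Euler-integrate the leaky integrator for steps*dt time units."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + w * x) / tau
    return x

perfect = hold_memory(w=1.0)    # feedback exactly cancels the leak
mistuned = hold_memory(w=0.99)  # 1% mistuning: memory drifts to zero
print(perfect, mistuned)
```

After 100 time constants the perfectly tuned integrator still holds 1.0, while the 1%-mistuned one has decayed to roughly e^-1 of its initial value — the fragility that the quasi-periodic attractors discussed in the talk are claimed to avoid.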


March 19, 3:30, Online, Megan Peters, UC Irvine

Title: Theory-informed methods for studying metacognition and consciousness

Abstract: Under typical, everyday scenarios, we ought to feel more confident when we are more likely to correctly perceive the world. But we also know that clever lab-based tasks can pull these capacities apart, creating conditions where observers’ confidence fails to adequately track their task accuracy. When, why, and how would you feel more confident than you “should”, or less confident than you “should”? How can we quantify this phenomenon, and use it to study how confidence judgments are constructed in the first place? How can theory-informed measures and paradigms help clarify the link between these dissociations and the neurocomputational architectures that drive not only metacognition, but also perceptual awareness? In this talk I will discuss recent findings probing these questions from perspectives of experimental paradigm design, analysis approaches, and theory.

UC Irvine https://faculty.sites.uci.edu/cnclab/

Here is a list of our speakers for the Fall Term 2023.

Fall 2023 Term

September 26, 15:30 (in person) - Jonathan Cannon, McMaster, head of the Trimba Lab, will give a CTN Seminar on "Dynamic inference in rhythm perception, production, and synchronization" in E5 2004.

Moving in time with rhythmic music is nearly universal across human cultures, and group rhythmic coordination produces remarkable group cohesion effects, in part by dissolving subjective boundaries between self and other. How can we make sense of this unique sensorimotor behavior in the context of the wide human repertoire of perceptual and motor processes? In this talk, I propose that the theory of Bayesian predictive processing provides not only a conceptual framework but also a clear, intuitive mathematical modeling language for rhythm perception, rhythm production, and sensorimotor synchronization through self/other integration.

Following the predictive processing account of perceptual inference, I propose a computational model in which we perform approximate Bayesian inference to estimate the momentary phase and tempo of ongoing underlying metrical cycles using learned metrical models. Then, drawing on the theory of  “active inference” which extends predictive processing to the realm of action, I propose a closely related computational model of rhythm production as a closed loop: timely feedback from our actions informs a dynamic model of our moving body, and that model guides the timing of subsequent action. Bringing these two computational models together, I propose a new formal account of sensorimotor synchronization: by modeling a heard rhythm and our own motor feedback as though they arise from the same underlying metrical cycle (i.e., modeling “self” and “other” as a single unified process), active inference naturally brings our actions into synch with what we hear. I explore evidence for this model, its new predictions, and experiments that might test those predictions.
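As a loose illustration of the predict-compare-correct inference loop described above, here is a toy tracker that estimates the phase and tempo of a metrical cycle from noisy event times. The gains, initial beliefs, and noise level are all assumptions, and this is a sketch of the general idea, not the speaker's published model:

```python
import numpy as np

# Toy phase/tempo inference: advance the phase belief by elapsed time,
# compare against the nearest predicted beat, and nudge both phase and
# period estimates toward the observation.
rng = np.random.default_rng(0)
true_period = 0.5                               # seconds per beat (assumed)
events = np.arange(1, 11) * true_period + rng.normal(0, 0.01, 10)

period_est, phase, prev_t = 0.6, 0.0, 0.0       # initial beliefs (assumed)
g_phase, g_period = 0.8, 0.2                    # correction gains (assumed)
for t in events:
    phase += (t - prev_t) / period_est          # predict: advance phase
    err = np.round(phase) - phase               # error vs nearest beat
    phase += g_phase * err                      # correct phase belief
    period_est -= g_period * err * period_est   # correct tempo belief
    prev_t = t
print("estimated period:", round(period_est, 3))
```

Starting 20% flat on tempo, ten noisy beats are enough to pull the period estimate close to the true 0.5 s.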

Note (Oct 13, 2023): Jonathan's talk was recorded and is available on his YouTube channel.

October 24, 15:30 (in person) - Stephanie Palmer, Chicago, in room E5 2004


Title: How behavioral and evolutionary constraints sculpt early visual processing


Abstract: Biological systems must selectively encode partial information about the environment, as dictated by the capacity constraints at work in all living organisms. For example, we cannot see every feature of the light field that reaches our eyes; temporal resolution is limited by transmission noise and delays, and spatial resolution is limited by the finite number of photoreceptors and output cells in the retina. Classical efficient coding theory describes how sensory systems can maximize information transmission given such capacity constraints, but it treats all input features equally. Not all inputs are, however, of equal value to the organism. Our work quantifies whether and how the brain selectively encodes stimulus features, specifically predictive features, that are most useful for fast and effective movements. We have shown that efficient predictive computation starts at the earliest stages of the visual system, in the retina. We borrow techniques from statistical physics and information theory to assess how we get terrific, predictive vision from these imperfect (lagged and noisy) component parts. In broader terms, we aim to build a more complete theory of efficient encoding in the brain, and along the way have found some intriguing connections between formal notions of coarse graining in biology and physics.
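Why transmission delays make predictive features valuable, as the abstract argues, can be illustrated with a toy Gaussian calculation. The AR(1) stimulus and its per-step correlation are assumptions for illustration, not data from the talk:

```python
import numpy as np

# For a Gaussian AR(1) stimulus with correlation rho per time step, the
# mutual information between the present and the future a delay d away is
# I = -0.5 * log2(1 - rho^(2d)) bits: the standard Gaussian MI formula.
# Longer delays (e.g., retinal transmission lags) leave less usable
# predictive information, so encoding predictive features matters.
rho = 0.9  # per-step stimulus correlation, assumed

def predictive_info(delay):
    """Bits of information the current value carries about t + delay."""
    c = rho ** delay          # correlation across the delay
    return -0.5 * np.log2(1 - c * c)

print([round(predictive_info(d), 3) for d in (1, 5, 10)])
```

The printed values fall quickly with delay, which is the quantitative sense in which a lagged encoder must be selective about the (predictive) features it spends its capacity on.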

November 7, 15:30 (in person) - Michael Anderson, Western, in room E5 2004

Title: Neural reuse, dynamics, and constraints: Getting beyond componential mechanistic explanation of neural function

Abstract: In this talk, I will review some of the evidence for neural reuse--a form of neural plasticity whereby existing neural resources are put to many different uses--and use it to motivate an argument that we need to move beyond (although not necessarily abandon) componential mechanistic explanation in the neurosciences. I claim that what is needed to capture the full range of neural plasticity and dynamics is a style of explanation based on the notion of constraints--enabling constraints in particular. I will give examples of neural phenomena that are hard to capture in the mechanistic framework, and show that they are naturally handled by enabling constraints. As this moves us away from faculty psychology, it has some important implications for the ontology of cognition.

December 5, 15:30 (in person) - Matt van der Meer, Dartmouth, in room E5 2004

Title: Three lies, and a cognitive process model, about information processing in the rodent hippocampus

Abstract: I will present results from my lab that cast doubt on three commonly held ideas about the rodent hippocampus, and then propose a cognitive process model that assigns synergistic functions to each of the experimentally observed alternatives. First, contrary to the idea that hippocampal replay reflects recent experience and/or upcoming goals, we show “paradoxical replay” of non-chosen options. Second, contrary to the idea that remapping across different environments is random, we observe various kinds of structure across the encoding of different environments. Third, contrary to the idea that the hippocampus faithfully encodes ongoing experience, we synthesize evidence that it actually blends the past, present and future together into subjective belief states. We propose that paradoxical replay protects these subjective belief states from interference, and that representational similarity structure of these belief states underlies behavioral generalization.


Brain Day 2023 Videos Online

The videos from Brain Day 2023 are now available online on our YouTube channel. We hope you enjoy them.

CTN Master's Graduate Sugandha Sharma Appears on the Generally Intelligent Podcast

Sugandha Sharma, a master's graduate of the University of Waterloo's CTN, discusses her research and time in the laboratory of CTN Founding Director Chris Eliasmith, as well as her current PhD research at MIT, on the Generally Intelligent Podcast. Give it a listen.

Sue Ann Campbell Presents at International Conference on Mathematical Neuroscience 2022

Sue Ann Campbell (Applied Math/CTN core member) presented "Modulation of Synchronization by a Slowly Varying Current" in July 2022 at the International Conference on Mathematical Neuroscience. Watch it on YouTube.

CTN Research Day 2023: Oct 17, 16:30-19:00, QNC 0101

The Centre for Theoretical Neuroscience will be hosting its second Research Day. This will be a chance to start the new academic year by getting re-acquainted with each other and with the diversity of research conducted by CTN core and affiliate faculty. A number of CTN faculty will share short overviews of their labs and projects (16:30-17:30); then, following a short coffee break (17:30-18:00), a dozen current graduate students and postdocs will give three-minute talks on an aspect of their current research (18:00-19:00).

Bots and Beasts: New Book by CTN Founding Member Paul Thagard

Paul Thagard, philosopher, cognitive scientist, Killam Prize winner, and founding CTN member, has a new book out: Bots and Beasts.