Seminar

Tuesday, October 21, 2025 3:30 pm - 4:30 pm EDT (GMT -04:00)

CTN Seminar Patrick Schöfer DC 1304

Speaker: Patrick Schöfer,

Department of Neuromorphic Information Processing

University of Leipzig

Title: Boredom as Homeostasis of Cognitive Resource Utilization using Spiking Neural Networks

Abstract: In this talk, I will present our approach to modelling boredom as a homeostatic mechanism that maintains an optimal level of cognitive engagement. When engagement deviates from this “Goldilocks zone” due to under- or overstimulation, the system dynamically adjusts neural activity to restore balance. Implemented as a control loop in a spiking neural network, the model monitors and regulates simulated cognitive resource utilization through excitation and inhibition.
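The abstract sketches a closed loop: monitor a proxy for cognitive resource utilization and adjust excitation and inhibition when it drifts out of the “Goldilocks zone.” As a rough illustration only, and not the speaker's model, the snippet below runs a small leaky integrate-and-fire population and uses a simple integral controller to pull the population firing rate back toward a setpoint; all parameter names and values are assumptions made for the sketch.

```python
# Minimal sketch (not the speaker's implementation): homeostatic regulation of
# population firing rate in a leaky integrate-and-fire (LIF) network.
# The "utilization" proxy and every parameter value are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 200, 1e-3, 5000          # neurons, time step (s), number of steps
tau_m, v_th, v_reset = 20e-3, 1.0, 0.0  # membrane time constant, threshold, reset
target_rate = 5.0                       # Hz: the "Goldilocks" setpoint
k_homeo = 1.0                           # homeostatic (integral-controller) gain
tau_rate = 0.5                          # s: smoothing constant for the rate estimate

v = np.zeros(n)          # membrane potentials
drive = 1.0              # effective excitatory drive, adjusted by the controller
rate_est = target_rate   # running estimate of the population firing rate

for t in range(steps):
    # Slowly varying stimulation: an understimulated then an overstimulated epoch.
    stimulus = 0.5 if t < steps // 2 else 2.0
    i_ext = drive * stimulus + 0.2 * rng.standard_normal(n)
    v += dt / tau_m * (-v + i_ext)
    spiked = v >= v_th
    v[spiked] = v_reset
    # Population firing rate as a stand-in for cognitive resource utilization.
    inst_rate = spiked.mean() / dt
    rate_est += dt / tau_rate * (inst_rate - rate_est)
    # Homeostatic control: push utilization back toward the setpoint by scaling
    # net excitation up (drive grows) or down toward inhibition (drive shrinks).
    drive = max(drive + dt * k_homeo * (target_rate - rate_est), 0.0)

print(f"final rate estimate: {rate_est:.1f} Hz (target {target_rate} Hz)")
```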

Tuesday, December 2, 2025 3:30 pm - 4:30 pm EST (GMT -05:00)

CTN Seminar Jonathan A. Michaels

Location: DC 1304

Title: Sensory expectations shape neural population dynamics in motor circuits

Abstract: The neural basis of movement preparation has been extensively studied during self-initiated actions, where motor cortical activity during preparation shows a lawful relationship to the parameters of the subsequent action. However, movements are regularly triggered or corrected based on sensory inputs caused by disturbances to the body. Since such disturbances are often predictable, and since preparing for disturbances would improve the resulting movements, we hypothesized that expectations about sensory inputs also influence preparatory activity in motor circuits. Here we show that when humans and monkeys are probabilistically cued about the direction of future mechanical perturbations, they incorporate sensory expectations into their movement preparation and improve their corrective responses. Using high-density neural recordings, we establish that sensory expectations are widespread across the brain, including the motor cortical areas involved in preparing self-initiated actions. The geometry of these preparatory signals in the neural population state is simple, directly scaling with the probability of each perturbation direction. After perturbation onset, a condition-independent signal shifts the neural state, leading to rapid responses that initially reflect sensory expectations. Using neural networks coupled to a biomechanical model of the arm, we show that this neural geometry emerges only when sensory inputs signal that a perturbation has occurred before resolving the direction of the perturbation. Thus, just as preparatory activity sets the stage for self-initiated movement, it also configures motor circuits to respond efficiently to sensory inputs.

Tuesday, June 10, 2025 9:30 am - 10:30 am EDT (GMT -04:00)

William Lytton SUNY Downstate (Applied Math Seminar)

Speaker: Prof. William Lytton, SUNY Downstate Health Sciences University

Title: Neurons and synapses working together happily in brain health; not so happily in brain disease

Abstract: To a first approximation, we currently think of the brain as a set of neurons serving as nodes connected by directed edges, akin to the mathematical description of an Erdős–Rényi graph model. It is now time to redirect attention to the individual neurons, the massive, complex entities that are often a locus of disease progression and may also be an additional locus of computation. I will focus on the role of the cortical corticospinal cell in Parkinson's disease (PD) and in migraine/ischemia. In both cases, a class of neuron is damaged as an effect of the disease, and this damage in turn becomes a site for the burgeoning disorder.

More generally, I wish to refocus on cell physiology as a basis of brain function. This will help us to better explain how cell pathology produces dysfunction in neurodegenerative disorders such as Alzheimer's, Parkinson's, and mild cognitive impairment. The roles played by particular neuron types in performing the computations that underlie brain function will provide a new Neuron-based Computational Theory (NCT) to complement and augment the current dominant Synapse-based Computational Theory (SCT), which gave us Hebbian/Hopfieldian cell assemblies reified in modern large-language models (LLMs).

Monday, March 31, 2025 11:00 am - 12:00 pm EDT (GMT -04:00)

Mark Reimers Centre for Theoretical Neuroscience Seminar

Speaker: Mark Reimers, Michigan State University (https://iq.msu.edu/mark-reimers/)

Location: E5 2004

Title: A new and inexpensive method for high-resolution imaging of neural activity across the cortex of small animals

Abstract: In this talk, I will introduce a new system for imaging the activity of several thousand labelled neurons distributed sparsely across the dorsal cortex of a mouse at high speed. The key is to use extensive computation to make up for the deficits of simple imaging systems. I will describe the ideas behind our system and the technology that we’re using to implement these ideas, at a cost of under $50,000. I will describe some of the technical issues we’ve addressed, and issues that we’re still working on. A natural question to ask is how much of the complex cortical activity can be inferred by recording from a small fraction of neurons in each area. I will present evidence from large-scale zebrafish and mouse brain recordings to suggest that a surprisingly small fraction of labelled neurons may be sufficient to represent most of the population activity in the upper layers of cortex.
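The closing claim, that a small labelled fraction can stand in for most upper-layer population activity, follows naturally if population activity is effectively low-dimensional. The toy simulation below is my own illustration, not the speaker's data or analysis; the population size, latent dimensionality, recorded fraction, and ridge penalty are all assumptions. It generates low-rank activity, “records” 5% of the neurons, and checks how much variance of the unrecorded neurons a linear readout recovers.

```python
# Toy illustration (not the speaker's analysis): if population activity is
# low-dimensional, a small recorded subset can linearly reconstruct most of
# the variance of the unrecorded neurons. All sizes are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_time, n_latent = 2000, 3000, 10   # assumed population, samples, latents
latents = rng.standard_normal((n_time, n_latent))
loadings = rng.standard_normal((n_latent, n_neurons))
activity = latents @ loadings + 0.5 * rng.standard_normal((n_time, n_neurons))

# "Record" a sparse 5% subset of neurons; the rest are unobserved.
recorded = rng.choice(n_neurons, size=n_neurons // 20, replace=False)
hidden = np.setdiff1d(np.arange(n_neurons), recorded)
X, Y = activity[:, recorded], activity[:, hidden]

# Ridge regression from recorded to unrecorded neurons, with a train/test split in time.
split, lam = n_time // 2, 10.0
XtX = X[:split].T @ X[:split] + lam * np.eye(X.shape[1])
W = np.linalg.solve(XtX, X[:split].T @ Y[:split])
Y_hat = X[split:] @ W

ss_res = np.sum((Y[split:] - Y_hat) ** 2)
ss_tot = np.sum((Y[split:] - Y[split:].mean(axis=0)) ** 2)
print(f"variance of unrecorded neurons explained: {1 - ss_res / ss_tot:.2f}")
```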

Thursday, March 6, 2025 3:30 pm - 5:00 pm EST (GMT -05:00)

CTN Seminar Eva Dyer

Prof. Eva Dyer will present her work on Thursday, March 6, at 3:30 p.m. in E5 2004.

Title: Scaling Up Neural Data Pretraining to Uncover Shared Structure in Brain Function

Abstract: The brain is incredibly complex, with diverse functions that emerge from the coordinated activity of billions of neurons. These functions vary across brain regions and adapt dynamically as we engage in different tasks, process sensory information, or generate behavior. Yet, each neural recording captures only a small glimpse of this immense complexity, offering a limited view of the broader system. This motivates the need for an algorithmic approach to stitch together diverse datasets, integrating neural activity across brain regions, cell types, and individuals. In this talk, I will present our work on building scalable models pretrained on a broad corpus of neural recordings. Our findings demonstrate positive transfer across tasks, cell types, and individuals, effectively bridging gaps between isolated studies. This unified framework opens new possibilities for neural decoding, brain-machine interfaces, and cross-species neuroscience, offering a path toward more generalizable models of brain function.