Past Colloquium Speakers 2009 and 2010

Colloquium Series

Colloquia are generally on Tuesdays at 3:30 p.m., once per month. They are usually held in the new Centre for Theoretical Neuroscience (CTN) seminar room in the Psychology, Anthropology, Sociology building (PAS), room 2464; exceptions will be noted. Abstracts are posted as available.


  1. Sept. 15, 2009 - Charles H. Anderson (Washington), 3:30 p.m., PAS 2464
  2. Oct. 13, 2009 - Randy McIntosh (Toronto), 3:30 p.m., PAS 2464
  3. Nov. 3, 2009 - John K. Tsotsos (York), 3:30 p.m., PAS 2464
  4. Dec. 1, 2009 - Doug Bors (Toronto), 3:30 p.m., PAS 2464
  5. Jan. 12, 2010 - Stephanie Chow (Princeton), 3:30 p.m., PAS 2464
  6. Apr. 6 - Waterloo Brain Day (4 speakers), PAS 2083

Date: Tues., Sept. 15, 2009
Location: PAS 2464
Time: 3:30 p.m.
Speaker: Charles H. Anderson, Washington University
Title: Application of Systems Engineering Principles to the Visual System

Abstract: This talk will present an integrated, systems-engineering view of the primate visual system:

a) Retina: spatial sampling strategies.
b) Brain area V1: signal-to-noise ratio (SNR) and population coding of wavelet coefficients.
c) Parietal cortex: 30x30 window of attention, routing circuits, pointers.
d) Temporal cortex: statistical measures and object recognition.

The talk will end with a brief statement about the need for fundamental principles of learning/adaptation in neural circuits to complete the Neural Engineering modeling framework.
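
As a rough, hypothetical illustration of the population-coding idea mentioned for V1 (this is not code from the talk, and all tuning parameters and noise levels are arbitrary), the Python sketch below encodes a single scalar "wavelet coefficient" in a population of rectified-linear rate neurons and recovers it with regularized least-squares decoders, in the spirit of the neural engineering approach referred to at the end of the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Population of rate neurons with random encoders, gains, and biases
# (rectified-linear tuning curves). All values are illustrative choices.
n_neurons = 50
encoders = rng.choice([-1.0, 1.0], size=n_neurons)   # preferred direction of each neuron
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def rates(x):
    """Firing rates of the population for a scalar stimulus value x."""
    return np.maximum(0.0, gains * encoders * x + biases)

# Solve for linear decoders by regularized least squares over sample stimuli.
xs = np.linspace(-1, 1, 200)
A = np.array([rates(x) for x in xs])                  # activity matrix (samples x neurons)
reg = 0.1 * np.max(A) ** 2
decoders = np.linalg.solve(A.T @ A + reg * np.eye(n_neurons), A.T @ xs)

# Decode a noisy "wavelet coefficient": noise limits precision (SNR),
# and larger populations give more accurate reconstructions.
x_true = 0.37
noisy_activity = rates(x_true) + rng.normal(0, 0.2, size=n_neurons)
x_hat = noisy_activity @ decoders
print(f"true = {x_true:.3f}, decoded = {x_hat:.3f}")
```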



Date: Tues., Oct. 13, 2009
Location: PAS 2464
Time: 3:30 p.m.
Speaker: Randy McIntosh, University of Toronto
Title: Rethinking Signal and Noise in Brain Imaging Data

Abstract: In relating brain signals measured with fMRI to mental processes, the assumption is that engaging such processes will activate key regions of the brain. Much like a computer, the region is 'on' when the process it subserves is required and 'off' when it is not. Some critical features of brain organization suggest we need to rethink this mapping. First, the network architecture enables the pattern of information flow to change without appreciable activity changes. Second, as a nonlinear system, the brain relies on both signal and noise to ensure optimal function. Indeed, the noise may be vital for enabling a full exploration of the cognitive landscape. Considering these two features defines new principles of brain-behaviour linkages, which may also impact our conceptualization of the relevant cognitive constructs.
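
The claim that noise can be vital to a nonlinear system can be illustrated with a toy stochastic-resonance simulation (this is not the speaker's analysis; the signal, threshold, trial counts, and noise levels below are invented for illustration): a subthreshold periodic input passes through a hard threshold only when a moderate amount of noise is added, and transmission degrades again when the noise is too large.

```python
import numpy as np

rng = np.random.default_rng(1)

# A weak periodic "signal" that never crosses the detection threshold on its own.
t = np.linspace(0, 10, 5000)
signal = 0.8 * np.sin(2 * np.pi * t)
threshold = 1.0

def detection_correlation(noise_sd):
    """Correlation between the signal and the thresholded (0/1) output, averaged over trials."""
    cors = []
    for _ in range(20):
        noisy = signal + rng.normal(0, noise_sd, size=signal.shape)
        spikes = (noisy > threshold).astype(float)
        if spikes.std() == 0:          # nothing ever crossed threshold
            cors.append(0.0)
        else:
            cors.append(np.corrcoef(signal, spikes)[0, 1])
    return np.mean(cors)

# Transmission is typically best at an intermediate noise level (an inverted-U curve).
for sd in [0.0, 0.1, 0.3, 1.0, 3.0]:
    print(f"noise sd = {sd:>4}: signal/output correlation = {detection_correlation(sd):.3f}")
```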



Date: Tues., Nov. 3, 2009
Location: PAS 2464
Time: 3:30 p.m.
Speaker: John K. Tsotsos, Centre for Vision Research, York University
Title: The Different Stages of Visual Recognition Need Different Attentional Binding Strategies

Abstract: Many think that visual attention needs an executive to allocate resources. Although the cortex exhibits substantial plasticity, dynamic allocation of neurons seems outside its capability. Suppose instead that the visual processing architecture is fixed, but can be ‘tuned’ dynamically to task requirements: the only remaining resource that can be allocated is time. How can this fixed, yet tunable, structure be used over periods of time longer than one feed-forward pass? With the goal of developing a computational theory and model of vision and attention that has both biological predictive power and utility for computer vision, I propose that by using multiple passes of the visual processing hierarchy, both bottom-up and top-down, and using task information to tune the processing prior to each pass, we can explain the different recognition behaviors that human vision exhibits. By examining in detail the basic computational infrastructure provided by the Selective Tuning model and using its functionality, four different binding processes – Convergence Binding and Partial, Full and Iterative Recurrence Binding – are introduced and tied to specific recognition tasks and their time course. The key is a provable method to trace neural activations through multiple representations, from higher-order levels of the visual processing network down to the early levels.
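
As a loose, hypothetical sketch of the top-down trace-back idea (not the Selective Tuning model itself; the hierarchy, pooling rule, and input are invented for illustration), the following toy network max-pools an input vector upward through a fixed hierarchy and then recursively follows the winner of each local competition back down, localizing the input unit that drove the top-level winner.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy feed-forward hierarchy: each layer max-pools pairs of units from the layer below.
def forward(x, n_layers=3):
    layers = [x]
    for _ in range(n_layers):
        x = x.reshape(-1, 2).max(axis=1)   # 2-to-1 max pooling
        layers.append(x)
    return layers

# Top-down trace-back: starting from the winning top-level unit, recursively keep
# only the input that won each local competition, pruning the rest of the hierarchy.
def trace_back(layers):
    idx = int(np.argmax(layers[-1]))       # winner at the top
    for layer in reversed(layers[:-1]):
        pair = layer[2 * idx: 2 * idx + 2]
        idx = 2 * idx + int(np.argmax(pair))
    return idx                              # index of the attended input unit

x = rng.random(16)
layers = forward(x)
print("attended input index:", trace_back(layers), "| argmax of input:", int(np.argmax(x)))
```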



Date: Tues., Dec. 1, 2009
Location: PAS 2464
Time: 3:30 p.m.
Speaker: Doug Bors, University of Toronto
Title: The Raven: A Strange Bird

Abstract: This talk will summarize some of the Raven (Advanced Progressive Matrices) research that I have done over the past 20 years in collaboration with several colleagues and students. That research has been driven by three related questions. First, what does the Raven primarily measure? Second, what is the source of task difficulty and how might that difficulty be overcome? Finally, what aspect of the Raven is responsible for the test’s reliable individual differences? Some of the work has had experimental components, such as our attempts to understand the basis for the reliable correlation between the Raven and simple cognitive tasks such as inspection time. Other work has been more purely experimental. For example, we have observed significant changes in the error rates of individual items as we changed their position in the list of items. In other studies we have used exploratory and confirmatory factor analysis to evaluate particular factor structures that had been proposed by others and by ourselves. Finally, we recorded the eye movements of subjects as they solved the items. Surprisingly, this provided our best predictors of Raven performance and yielded evidence for the distinction between constructive matching and response elimination. The more equally a subject distributed his or her time over the “problem matrix,” the more likely he or she was to solve that item correctly, and the higher his or her overall score on the test. The Raven must first and foremost be understood as a single test. As a single test it is highly reliable in terms of individual differences and is moderately predictive of other measures. This does not mean that the Raven is best understood as comprising a single predominant factor such as g. Individual differences and item difficulties appear to be the products of various factors, including the strategies subjects adopt.
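
The eye-movement result can be illustrated with a simple evenness measure (a hypothetical stand-in, not the measure used in the studies, and the dwell times below are invented): the normalized entropy of dwell times over the cells of a problem matrix is high when viewing time is spread evenly and low when it piles up on a few cells.

```python
import numpy as np

def dwell_evenness(dwell_times):
    """Normalized entropy of dwell times over the cells of a problem matrix.
    1.0 means time was spread perfectly evenly; values near 0 mean time
    was concentrated on a few cells."""
    p = np.asarray(dwell_times, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                                 # ignore cells never fixated
    entropy = -(p * np.log(p)).sum()
    return entropy / np.log(len(dwell_times))    # normalize by the maximum possible entropy

# Hypothetical dwell times (seconds) over the nine cells of one item.
even_scanner = [2.1, 1.9, 2.0, 2.2, 1.8, 2.0, 1.9, 2.1, 2.0]
uneven_scanner = [9.5, 0.5, 0.2, 4.0, 0.1, 0.1, 3.0, 0.3, 0.3]

print(f"even scanner:   {dwell_evenness(even_scanner):.3f}")
print(f"uneven scanner: {dwell_evenness(uneven_scanner):.3f}")
```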



Date: Tues., Jan. 12, 2010
Location: PAS 2464
Time: 3:30 p.m.
Speaker: Stephanie Chow, Princeton University
Title: Context-dependent Modulation of Functional Connectivity

Abstract: In a complex world, a sensory cue may prompt different actions in different contexts. A laboratory example of context-dependent sensory processing is the two-stimulus-interval discrimination task. In each trial, a first stimulus (f1) must be stored in short-term memory and later compared to a second stimulus (f2) so that the animal can reach a binary decision. Prefrontal cortex (PFC) neurons need to interpret the f1 information in one way (perhaps with a positive weight) and the f2 information in the opposite way (perhaps with a negative weight), even though both arrive from the very same secondary somatosensory cortex (S2) neurons; a functional sign inversion is therefore required. This task thus provides a clear example of context-dependent processing.

Here we develop a biologically plausible model of a context-dependent signal transformation of the stimulus encoding from S2 to PFC. To ground our model in experimental neurophysiology, we use neurophysiological data recorded by R. Romo's laboratory from both cortical area S2 and PFC in monkeys performing the task. Our main goal is to use experimentally observed context-dependent modulations of firing rates in cortical area S2 as the basis for a model that achieves a context-dependent inversion of the sign of S2-to-PFC connections. This is done without requiring any changes in connectivity. We (a) characterize the experimentally observed context-dependent firing rate modulation in area S2; (b) construct a model that produces the sign transformation; and (c) characterize the robustness, and hence biological plausibility, of the model.
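
As a deliberately minimal caricature of the idea (not the model presented in the talk; the two pools, tuning curves, gains, and frequency range are all invented for illustration), the sketch below reads out two S2 pools with opposite frequency tuning through fixed, all-positive PFC weights; a context-dependent gain on the pools flips the effective sign of the stimulus at the PFC without any change in connectivity.

```python
import numpy as np

# Two S2 pools with opposite frequency tuning, read out by a PFC unit through
# FIXED, all-positive weights. Context changes only the multiplicative gain on
# each pool's firing rate, yet the effective S2 -> PFC sign of the stimulus flips.

f_max = 40.0                           # upper end of a hypothetical frequency range, Hz

def s2_rates(f, gain_pos, gain_neg):
    """Rates of the positively and negatively tuned S2 pools under context-set gains."""
    r_pos = gain_pos * f               # rate grows with frequency
    r_neg = gain_neg * (f_max - f)     # rate falls with frequency
    return r_pos, r_neg

def pfc_response(f, context):
    """PFC readout with fixed weights (+1, +1); only the S2 gains depend on context."""
    if context == "f1":                # first interval: encode f with a positive slope
        gains = (1.0, 0.0)
    else:                              # second interval: encode f with a negative slope
        gains = (0.0, 1.0)
    r_pos, r_neg = s2_rates(f, *gains)
    return 1.0 * r_pos + 1.0 * r_neg   # the connectivity itself never changes

for f in [10.0, 20.0, 30.0]:
    print(f"f = {f:>4} Hz -> PFC (f1 context): {pfc_response(f, 'f1'):6.1f}   "
          f"PFC (f2 context): {pfc_response(f, 'f2'):6.1f}")
```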
