Colloquium Series 2014-2015

Here is a list of our speakers for the 2014-2015 academic year:

September 30, 2014 - Chris Honey (University of Toronto)

November 25, 2014 - Ben Thompson (University of Waterloo)

December 16, 2014 - Graham Taylor (University of Guelph)

February 24, 2015 - Melvyn A. Goodale (University of Western Ontario)

April 8, 2015 - Waterloo Brain Day (9th Annual)


Date: September 30, 2014
Location: PAS 2464
Time: 3:30 p.m.
Speaker: Chris Honey (University of Toronto)
Title: Uncovering Stimulus-Induced Network Dynamics during Narrative Comprehension

Real-world cognition requires the coordination of information processing between modalities (e.g. auditory and visual) and systems (e.g. language and memory). Therefore, the brain must continually reorganize interactions between and within networks. I will describe a new method for mapping these network changes, and how it can reveal precise and reliable network dynamics during the perception of a naturalistic narrative stimulus.



Date: November 25, 2014
Location: PAS 2464
Time: 3:30 p.m.
Speaker: Ben Thompson (University of Waterloo)
Title: Learning to See with a “Lazy Eye”: Harnessing Visual Cortex Plasticity to Treat Amblyopia

Amblyopia is a neurodevelopmental disorder of the visual cortex that is often considered untreatable in adulthood due to insufficient neural plasticity. I will present a series of studies indicating that both binocular perceptual learning and non-invasive brain stimulation techniques can improve visual function in adult patients with amblyopia, possibly by reducing inhibitory interactions within the visual cortex. I will also present new data from a recent study on the use of the antidepressant citalopram to promote visual cortex plasticity in adults with amblyopia.



Date: December 16, 2014
Location: PAS 2464
Time: 3:30 p.m.
Speaker: Graham Taylor (University of Guelph)
Title: Learning Representations with Multiplicative Interactions

Representation learning algorithms are machine learning algorithms that learn features or explanatory factors from data. Deep learning techniques, which employ several layers of representation learning, have achieved much recent success in machine learning benchmarks and competitions; however, most of these successes have been achieved with purely supervised learning methods and have relied on large amounts of labeled data. In this talk, I will discuss a lesser-known but important class of representation learning algorithms that are capable of learning higher-order features from data. The main idea is to learn relations between pixel intensities rather than the pixel intensities themselves by structuring the model as a tripartite graph that connects hidden units to pairs of images. If the images are different, the hidden units learn how the images transform; if the images are the same, the hidden units encode within-image pixel covariances. Learning such higher-order features can yield improved results on recognition and generative tasks. I will discuss recent work on applying these methods to structured prediction problems.
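One common concrete instance of this idea is a factored gated model, in the spirit of gated autoencoders and factored gated Boltzmann machines. The Python/NumPy sketch below is illustrative only: the dimensions, the weight matrices (W_x, W_y, W_h), and the gated-autoencoder form are assumptions chosen for exposition, not details taken from the talk. The key point it demonstrates is the multiplicative interaction: hidden "mapping" units pool over element-wise products of factor responses to the two images, which is what makes them sensitive to the relation between the images rather than to either image's individual content.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative dimensions: pixel vectors, factors, mapping (hidden) units.
    n_pix, n_fac, n_map = 64, 32, 16

    # Factored parameterization: rather than a full 3-way tensor connecting
    # image x, image y, and hidden units h, use three factor matrices,
    # keeping the parameter count linear in each dimension.
    W_x = rng.normal(scale=0.1, size=(n_pix, n_fac))
    W_y = rng.normal(scale=0.1, size=(n_pix, n_fac))
    W_h = rng.normal(scale=0.1, size=(n_map, n_fac))

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def infer_mapping(x, y):
        """Hidden units pool over products of factor responses to the two
        images, so they respond to the relation between the images
        (e.g. a shift) rather than to either image's content alone."""
        fx = W_x.T @ x                    # factor responses to image x
        fy = W_y.T @ y                    # factor responses to image y
        return sigmoid(W_h @ (fx * fy))   # multiplicative interaction

    def reconstruct_y(x, h):
        """Given the first image and the inferred transformation, predict
        the second image (the gated-autoencoder decoder direction)."""
        fx = W_x.T @ x
        return W_y @ (fx * (W_h.T @ h))

    # Toy usage (untrained weights): y is a circular shift of x, a
    # simple example of a relation the mapping units could encode.
    x = rng.normal(size=n_pix)
    y = np.roll(x, 3)
    h = infer_mapping(x, y)
    y_hat = reconstruct_y(x, h)
    print(h.shape, y_hat.shape)           # (16,) (64,)

In a trained model of this kind, the weights would be fit by minimizing the reconstruction error of y given x (and symmetrically x given y), after which the mapping units h encode the transformation relating the image pair.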



Date: February 24, 2015
Location: PAS 2464
Time: 3:30 p.m.
Speaker: Melvyn A. Goodale (University of Western Ontario)
Title: How We See and Hear Stuff: Visual and Auditory Routes to Understanding the Material Properties of Objects

Almost all studies of object recognition, particularly in brain imaging, have focused on the geometric structure of objects (i.e. ‘things’).  Until recently, little attention has been paid to the recognition of the materials from which objects are made (i.e. ‘stuff’), information that is often signalled by surface-based visual cues (the sheen of polished metal) as well as auditory cues (the sound of water being poured into a glass).  But knowledge about stuff (the material properties of objects) has profound implications, not only for understanding what an object is, but also for the planning of actions, such as the setting of initial grip and load forces during grasping.  In recent years, our lab has made some headway in delineating the neural systems that mediate the recognition of stuff (as opposed to things), not only in sighted people but also in blind individuals who use echoes from tongue clicks to recognize the material properties of objects they encounter.  I will discuss evidence from both neuropsychological and fMRI studies demonstrating that lateral occipital regions in the ventral stream play a critical role in processing the 3-D structure and geometry of objects, whereas more anteromedial regions (particularly areas in the parahippocampal gyrus and collateral sulcus) are engaged in processing visual and auditory cues that signal the material properties of objects.
