Events

April 2018

Wednesday, April 4, 2018 — 12:00 PM EDT

Speaker: Feng-Xuan Choo, PhD candidate

Building large-scale brain models is one method theoretical neuroscientists use to understand how the human brain functions. Researchers typically take either a bottom-up approach, which focuses on detailed modelling of the brain's biological properties and places less importance on reproducing functional behaviour, or a top-down approach, which generally aims to reproduce the behaviour observed in real cognitive agents but typically sacrifices adherence to the constraints imposed by the neurobiology.

The focus of this thesis is Spaun, a large-scale brain model constructed using a combination of the bottom-up and top-down approaches to brain modelling. Spaun is currently the world's largest functional brain model, capable of performing 8 distinct cognitive tasks ranging from digit recognition to inductive reasoning. The thesis is organized to discuss three aspects of the Spaun model.
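
Spaun is built with the Nengo neural simulator, which embodies this hybrid approach: populations of spiking neurons (the bottom-up detail) are wired together so that their decoded activity computes specified functions (the top-down behaviour). The sketch below is a minimal, illustrative Nengo model in that style; the ensemble sizes, input signal, and target function are arbitrary demonstration choices, not parameters taken from Spaun.

    import numpy as np
    import nengo

    # Minimal NEF-style model: spiking ensembles (biological detail)
    # connected so that their decoded activity computes a chosen
    # function (functional behaviour). All parameters are illustrative.
    model = nengo.Network(label="toy NEF model")
    with model:
        stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))   # input signal
        a = nengo.Ensemble(n_neurons=100, dimensions=1)      # spiking LIF neurons
        b = nengo.Ensemble(n_neurons=100, dimensions=1)
        nengo.Connection(stim, a)
        # Connection weights are solved so that b represents a squared.
        nengo.Connection(a, b, function=lambda x: x ** 2)
        probe = nengo.Probe(b, synapse=0.01)                 # filtered readout

    with nengo.Simulator(model) as sim:
        sim.run(1.0)   # simulate one second of neural activity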

Friday, April 20, 2018 — 9:30 AM EDT

Speaker: Zhucheng Tu, Master's candidate

Modelling the similarity of two sentences is an important problem in natural language processing and information retrieval, with applications in tasks such as paraphrase identification and answer selection in question answering. The Multi-Perspective Convolutional Neural Network (MP-CNN) is a model that improved on the previous state of the art in 2015 and has remained popular for sentence similarity tasks. However, until now, there has not been a rigorous study of how the model actually achieves competitive accuracy.
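
The abstract names the model but not its mechanics, so the sketch below (in PyTorch, an assumed choice) illustrates only the core multi-perspective idea: convolutions at several filter widths, each max-pooled into a separate "perspective" vector, with the two sentence encodings compared by cosine similarity. The real MP-CNN also uses multiple pooling functions and a richer similarity-measurement layer, which this simplification omits.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SentenceEncoder(nn.Module):
        """Convolutions at several filter widths over word embeddings;
        each width is max-pooled into one 'perspective' vector."""
        def __init__(self, embed_dim=300, n_filters=100, widths=(1, 2, 3)):
            super().__init__()
            self.convs = nn.ModuleList(
                nn.Conv1d(embed_dim, n_filters, w) for w in widths
            )

        def forward(self, x):        # x: (batch, seq_len, embed_dim)
            x = x.transpose(1, 2)    # Conv1d expects (batch, channels, seq)
            pooled = [F.relu(c(x)).max(dim=2).values for c in self.convs]
            return torch.cat(pooled, dim=1)

    encoder = SentenceEncoder()
    s1 = torch.randn(4, 20, 300)     # a batch of 4 embedded sentences
    s2 = torch.randn(4, 20, 300)
    score = F.cosine_similarity(encoder(s1), encoder(s2), dim=1)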

Thursday, April 26, 2018 — 10:00 AM EDT

Speaker: Amir-Hossein Karimi, Master’s candidate

The story of this work is dimensionality reduction. Dimensionality reduction takes as input a point set P of n points in \(\mathbb{R}^d\), where d is typically large, and attempts to find a lower-dimensional representation of that dataset in order to ease the burden of processing for downstream algorithms. In today's landscape of machine learning, researchers and practitioners work with datasets that have a very large number of samples, high-dimensional samples, or both. Dimensionality reduction is therefore applied as a pre-processing technique, primarily to overcome the curse of dimensionality.
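
As one concrete, standard instance of such pre-processing (not necessarily the method studied in this thesis), the sketch below applies a Gaussian random projection, mapping the point set P from \(\mathbb{R}^d\) down to \(\mathbb{R}^k\) so that pairwise distances are approximately preserved, in the spirit of the Johnson-Lindenstrauss lemma.

    import numpy as np

    def random_projection(P, k, seed=None):
        """Map an (n, d) point set to k dimensions with a random Gaussian
        matrix, scaled by 1/sqrt(k) so that pairwise distances are
        preserved in expectation (Johnson-Lindenstrauss style)."""
        rng = np.random.default_rng(seed)
        n, d = P.shape
        R = rng.standard_normal((d, k)) / np.sqrt(k)
        return P @ R  # (n, k) lower-dimensional representation

    # Example: 1,000 points in R^500 mapped down to R^50.
    P = np.random.default_rng(0).standard_normal((1000, 500))
    Q = random_projection(P, k=50, seed=1)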
