Events

Tuesday, June 12, 2018 — 3:00 PM EDT

Speaker: Priyank Jaini, PhD candidate
David R. Cheriton School of Computer Science

At their core, many unsupervised learning models provide a compact representation of homogeneous density mixtures, but their similarities and differences are not always clearly understood. In this work, we formally establish the relationships among latent tree graphical models (including special cases such as hidden Markov models and tensorial mixture models), hierarchical tensor formats and sum-product networks.
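
The model classes named above can all be written as nested sums and products over simple component distributions. As a minimal, hypothetical illustration of one of them, the Python sketch below evaluates a tiny sum-product network (here just a two-component mixture of products of univariate leaves) over two binary variables; the weights and leaf probabilities are made up, and this is not the construction used in the talk to relate these models.

```python
import numpy as np

# A minimal, hypothetical sum-product network over two binary variables X1, X2:
# a root sum node over two product nodes, each multiplying univariate leaves.

leaf_x1 = np.array([[0.9, 0.1],   # component A: P(X1=0), P(X1=1)  (made-up values)
                    [0.2, 0.8]])  # component B
leaf_x2 = np.array([[0.7, 0.3],
                    [0.4, 0.6]])
weights = np.array([0.6, 0.4])    # root sum-node weights over the two components

def spn_prob(x1, x2):
    """Evaluate the joint probability: a weighted sum of products of leaves."""
    products = leaf_x1[:, x1] * leaf_x2[:, x2]   # product nodes, one per component
    return float(weights @ products)             # root sum node

# The joint sums to 1 over all assignments, as expected for a mixture.
total = sum(spn_prob(a, b) for a in (0, 1) for b in (0, 1))
print(spn_prob(1, 0), total)
```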

Tuesday, June 5, 2018 — 12:00 PM EDT

Speaker: Nisarg Shah, Department of Computer Science
University of Toronto

Algorithms are increasingly making decisions that affect humans. The field of computational social choice deals with algorithms for eliciting individual preferences and making collective decisions. Everyday examples of such decisions include citizens electing their representatives, roommates dividing collectively purchased items, or residents voting on the allocation of a city's budget. Making reasonable collective decisions requires viewing the problem through the lenses of elicitation, fairness, efficiency, incentives, and ethics.
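
As a concrete, hypothetical example of one such collective decision, the Python sketch below implements the Borda voting rule: each voter submits a ranking over candidates, candidates earn points according to their positions, and the highest total wins. The ballots and candidate names are made up, and Borda is only one of many rules studied in computational social choice.

```python
from collections import Counter

# Borda rule: with m candidates, a candidate in position p on a ballot
# (0 = best) earns m - 1 - p points; the highest total score wins.

ballots = [
    ["alice", "bob", "carol"],   # voter 1's ranking, best first (made-up data)
    ["bob", "carol", "alice"],
    ["alice", "carol", "bob"],
]

scores = Counter()
m = len(ballots[0])
for ranking in ballots:
    for position, candidate in enumerate(ranking):
        scores[candidate] += m - 1 - position

print(scores.most_common(1)[0])  # winner under Borda: ('alice', 4)
```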

Friday, May 11, 2018 — 2:00 PM EDT

Speaker: Junnan Chen, Master’s candidate

Conversations depend on information from the context. To go beyond one-round conversation, a chatbot must perform contextual resolution tasks such as: 1) co-reference resolution, 2) ellipsis resolution, and 3) conjunctive relationship resolution.
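
As a toy illustration of the first of these tasks, the hypothetical Python sketch below resolves a pronoun in a follow-up question by naively substituting the most recently mentioned entity from the conversation history; a real chatbot would use a trained co-reference resolver, and the entity list and dialogue here are made up.

```python
# A toy, hypothetical sketch of co-reference resolution across conversation turns.
# The pronoun "it" is replaced by the most recently mentioned known entity so that
# the follow-up question becomes self-contained.

KNOWN_ENTITIES = {"waterloo", "toronto"}

def resolve_pronoun(history, utterance):
    """Replace the pronoun 'it' with the most recently mentioned known entity."""
    antecedent = None
    for turn in reversed(history):
        for token in reversed(turn.lower().split()):
            token = token.strip("?.,!")
            if token in KNOWN_ENTITIES:
                antecedent = token
                break
        if antecedent is not None:
            break
    if antecedent is None:
        return utterance
    return " ".join(antecedent if w.lower() == "it" else w for w in utterance.split())

history = ["How large is Waterloo?"]
print(resolve_pronoun(history, "Is it bigger than Toronto?"))
# prints: Is waterloo bigger than Toronto?
```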

Wednesday, May 9, 2018 — 10:00 AM EDT

Speaker: Ivana Kajić, PhD candidate

The representation of semantic knowledge poses a central modelling decision in many models of cognitive phenomena. However, not all such representations reflect properties observed in human semantic networks. Here, we evaluate the psychological plausibility of two distributional semantic models widely used in natural language processing: word2vec and GloVe. We use these models to construct directed and undirected semantic networks and compare them to networks of human association norms using a set of graph-theoretic analyses. 
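
A minimal sketch of that pipeline is below, under assumptions: pre-trained word2vec or GloVe embeddings are assumed to be available as a word-to-vector dictionary, a directed k-nearest-neighbour graph over words stands in for the semantic network, and a couple of standard graph statistics stand in for the full set of graph-theoretic analyses. The toy random vectors are placeholders for real embeddings.

```python
import numpy as np
import networkx as nx

def build_semantic_network(embeddings, k=3):
    """Directed k-NN graph over words, with edges weighted by cosine similarity."""
    words = list(embeddings)
    vecs = np.stack([embeddings[w] for w in words])
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    sims = vecs @ vecs.T
    graph = nx.DiGraph()
    for i, w in enumerate(words):
        neighbours = [j for j in np.argsort(-sims[i]) if j != i][:k]
        for j in neighbours:
            graph.add_edge(w, words[j], weight=float(sims[i, j]))
    return graph

# Toy random vectors stand in for real word2vec/GloVe embeddings.
rng = np.random.default_rng(0)
toy = {w: rng.normal(size=50) for w in ["dog", "cat", "car", "road", "tree"]}
g = build_semantic_network(toy, k=2)
print(nx.density(g), nx.average_clustering(g.to_undirected()))
```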

Friday, May 4, 2018 — 2:00 PM EDT

Speaker: Meng Tang, PhD candidate

Minimization of regularized losses is a principled approach to weak supervision that is well established in deep learning in general. However, it is largely overlooked in semantic segmentation, which is currently dominated by methods that mimic full supervision via "fake" fully-labeled training masks (proposals) generated from the available partial input. To obtain such full masks, typical methods explicitly use standard regularization techniques for "shallow" segmentation, e.g., graph cuts or dense CRFs. In contrast, we integrate such standard regularizers and clustering criteria directly into the loss functions over partial input. This approach simplifies weakly-supervised training by avoiding extra MRF/CRF inference steps or layers that explicitly generate full masks, while improving both the quality and efficiency of training.
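
A simplified PyTorch sketch of this kind of regularized loss is given below, under assumptions: partial supervision is represented as a label map with an ignore index for unlabeled pixels, and a plain pairwise smoothness penalty over the softmax output stands in for the "shallow" regularizers (e.g., normalized cut or dense CRF relaxations) actually used in this work. All shapes and values are placeholders.

```python
import torch
import torch.nn.functional as F

def partial_ce_loss(logits, labels, ignore_index=255):
    # Cross-entropy over labeled pixels only; unlabeled pixels carry ignore_index.
    return F.cross_entropy(logits, labels, ignore_index=ignore_index)

def smoothness_loss(logits):
    # Toy regularizer over all pixels: penalize differences between neighbouring
    # class probabilities (a stand-in for graph-cut / dense-CRF style terms).
    probs = F.softmax(logits, dim=1)                       # (B, C, H, W)
    dh = (probs[:, :, 1:, :] - probs[:, :, :-1, :]).abs().mean()
    dw = (probs[:, :, :, 1:] - probs[:, :, :, :-1]).abs().mean()
    return dh + dw

logits = torch.randn(2, 21, 64, 64, requires_grad=True)    # fake network output
labels = torch.full((2, 64, 64), 255, dtype=torch.long)    # mostly unlabeled
labels[:, 30:34, 30:34] = 1                                # a few "scribble" pixels

loss = partial_ce_loss(logits, labels) + 0.1 * smoothness_loss(logits)
loss.backward()
print(float(loss))
```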

Thursday, May 3, 2018 — 2:00 PM EDT

Speaker: Daniel Recoskie, PhD candidate

We propose a new method for learning filters for the 2D discrete wavelet transform. We extend our previous work on the 1D wavelet transform in order to process images. We show that the 2D wavelet transform can be represented as a modified convolutional neural network (CNN). Doing so allows us to learn wavelet filters from data by gradient descent. Our learned wavelets are similar to traditional wavelets, which are typically derived using Fourier methods.
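
To make the connection to CNNs concrete, the numpy sketch below computes one level of a separable 2D Haar wavelet transform as stride-2 filtering along rows and then columns, producing the usual LL, LH, HL and HH subbands; this is exactly what strided convolutional layers compute. The fixed Haar filters here are only a stand-in for the filters that the proposed method learns from data.

```python
import numpy as np

low  = np.array([1.0,  1.0]) / np.sqrt(2)   # Haar analysis filters (stand-ins)
high = np.array([1.0, -1.0]) / np.sqrt(2)

def analyze_1d(x, h):
    """Filter and downsample by 2 along the last axis (a stride-2 convolution)."""
    return x[..., 0::2] * h[0] + x[..., 1::2] * h[1]

def dwt2_level(image):
    # Separable 2D transform: filter rows, then columns, giving four subbands.
    rows_l, rows_h = analyze_1d(image, low), analyze_1d(image, high)
    ll = analyze_1d(rows_l.T, low).T
    lh = analyze_1d(rows_l.T, high).T
    hl = analyze_1d(rows_h.T, low).T
    hh = analyze_1d(rows_h.T, high).T
    return ll, lh, hl, hh

img = np.arange(64, dtype=float).reshape(8, 8)
subbands = dwt2_level(img)
print([b.shape for b in subbands])  # four 4x4 subbands
```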

Thursday, April 26, 2018 — 10:00 AM EDT

Speaker: Amir-Hossein Karimi, Master’s candidate

This work is about dimensionality reduction. Dimensionality reduction is a method that takes as input a point set P of n points in \(R^d\), where d is typically large, and attempts to find a lower-dimensional representation of that dataset in order to ease the burden of processing for downstream algorithms. In today’s landscape of machine learning, researchers and practitioners work with datasets that have a very large number of samples, high-dimensional samples, or both. Therefore, dimensionality reduction is applied as a pre-processing technique primarily to overcome the curse of dimensionality.
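
As a minimal illustration of this setup (not the specific method studied in this work), the Python sketch below maps a point set from \(R^d\) to \(R^k\) with a Gaussian random projection, a standard example of a dimensionality-reduction map that approximately preserves pairwise distances; all sizes and data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 1000, 5000, 64                   # placeholder problem sizes

P = rng.normal(size=(n, d))                # n high-dimensional points
R = rng.normal(size=(d, k)) / np.sqrt(k)   # Gaussian random projection matrix
P_low = P @ R                              # the same n points, now in R^k

# Pairwise distances are roughly preserved, e.g. for one sample pair of points.
i, j = 0, 1
print(np.linalg.norm(P[i] - P[j]), np.linalg.norm(P_low[i] - P_low[j]))
```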

Friday, April 20, 2018 — 9:30 AM EDT

Speaker: Zhucheng Tu, Master’s candidate

Modelling the similarity of two sentences is an important problem in natural language processing and information retrieval, with applications in tasks such as paraphrase identification and answer selection in question answering. The Multi-Perspective Convolutional Neural Network (MP-CNN) improved on previous state-of-the-art models in 2015 and has remained a popular model for sentence similarity tasks. However, until now there has not been a rigorous study of how the model actually achieves its competitive accuracy.
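
As a toy, hypothetical illustration of the multi-perspective idea (not the actual MP-CNN, which uses multiple convolutional filter widths, pooling types, and learned comparison layers), the numpy sketch below represents each sentence by several pooled views of its word vectors and compares the views pairwise with cosine similarity; the vocabulary and vectors are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=50) for w in
         "the cat sat on a mat dog lay rug".split()}   # placeholder word vectors

def perspectives(sentence):
    """Several pooled views ("perspectives") of a sentence's word vectors."""
    vecs = np.stack([vocab[w] for w in sentence.split()])
    return {"max": vecs.max(0), "mean": vecs.mean(0), "min": vecs.min(0)}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def similarity(s1, s2):
    p1, p2 = perspectives(s1), perspectives(s2)
    return {name: cosine(p1[name], p2[name]) for name in p1}

print(similarity("the cat sat on the mat", "a dog lay on a rug"))
```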

Wednesday, April 4, 2018 — 12:00 PM EDT

Speaker: Feng-Xuan Choo, PhD candidate

Building large-scale brain models is one method used by theoretical neuroscientists to understand the way the human brain functions. Researchers typically use either a bottom-up approach, which focuses on the detailed modelling of various biological properties of the brain and places less importance on reproducing functional behaviour, or a top-down approach, which generally aims to reproduce the behaviour observed in real cognitive agents but typically sacrifices adherence to constraints imposed by the neurobiology.

The focus of this thesis is Spaun, a large-scale brain model constructed using a combination of the bottom-up and top-down approaches to brain modelling. Spaun is currently the world's largest functional brain model, capable of performing 8 distinct cognitive tasks ranging from digit recognition to inductive reasoning. The thesis is organized to discuss three aspects of the Spaun model.

Tuesday, March 20, 2018 — 4:00 PM EDT

Speaker: Daniel Recoskie, PhD candidate

We propose a method for learning wavelet filters directly from data. We accomplish this by framing the discrete wavelet transform as a modified convolutional neural network. We introduce an autoencoder wavelet transform network that is trained using gradient descent.
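
A hypothetical PyTorch sketch of this idea follows: one level of analysis is a stride-2 Conv1d with learnable filters, the inverse is a stride-2 ConvTranspose1d, and both are trained by gradient descent on a reconstruction loss with a small sparsity term. Filter length, loss weights, and data are placeholders, and the wavelet-specific constraints used in the actual method are omitted.

```python
import torch
import torch.nn as nn

class WaveletAutoencoder(nn.Module):
    """One analysis/synthesis level with learnable filters (a toy stand-in)."""
    def __init__(self, filter_len=8):
        super().__init__()
        # Two analysis filters (low-pass / high-pass roles); stride 2 = downsampling.
        self.analysis = nn.Conv1d(1, 2, filter_len, stride=2,
                                  padding=filter_len // 2, bias=False)
        self.synthesis = nn.ConvTranspose1d(2, 1, filter_len, stride=2,
                                            padding=filter_len // 2, bias=False)

    def forward(self, x):
        coeffs = self.analysis(x)
        return self.synthesis(coeffs), coeffs

model = WaveletAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
signal = torch.randn(16, 1, 256)          # a batch of toy 1D signals

for _ in range(100):
    recon, coeffs = model(signal)
    # Reconstruction loss plus a small sparsity penalty on the coefficients.
    loss = (recon - signal).pow(2).mean() + 1e-3 * coeffs.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```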
