Events

Thursday, December 5, 2019, 4:00 pm EST (GMT -05:00)

AI Seminar: Deep Machines That Know When They Do Not Know

Kristian Kersting
Computer Science Department
Centre for Cognitive Science
Technische Universität Darmstadt

Our minds make inferences that appear to go far beyond standard machine learning. Whereas people can learn richer representations and use them for a wider range of learning tasks, machine learning algorithms have been mainly employed in a stand-alone context, constructing a single function from a table of training examples. 

Friday, December 6, 2019, 11:00 am EST (GMT -05:00)

PhD Seminar: Addressing Labels Shortage: Segmentation with 3% Supervision

Dmitrii Marin, PhD candidate
David R. Cheriton School of Computer Science

Deep learning models generalize poorly to new datasets and require notoriously large amounts of labeled data for training. The latter problem is exacerbated by the need to ensure that trained models are accurate across a wide variety of image scenes. The diversity of images comes from the combinatorial nature of real-world scenes, occlusions, variations in lighting, acquisition methods, etc. Many rare images may have little chance of being included in a dataset, but they are still very important, as they often represent situations where a recognition mistake has a high cost.

Friday, December 6, 2019, 11:30 am EST (GMT -05:00)

AI Seminar: Recent Applications of Stein’s Method in Machine Learning

Qiang Liu, Department of Computer Science
University of Texas at Austin

As a fundamental technique for approximating and bounding distances between probability measures, Stein's method has recently caught the attention of the machine learning community; some of its key ideas have been leveraged and extended to develop practical, efficient computational methods for learning and using large-scale, intractable probabilistic models.
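
(Background note, not part of the abstract.) Stein's identity states that for a smooth density p on R^d and a suitably regular test function f that vanishes at the boundary,

\[ \mathbb{E}_{x \sim p}\left[ f(x)\,\nabla_x \log p(x) + \nabla_x f(x) \right] = 0 . \]

Departures from this identity under samples from another distribution q can be turned into a computable discrepancy; for instance, the kernelized Stein discrepancy of Liu, Lee and Jordan (2016) takes the form

\[ \mathrm{KSD}(q, p) = \mathbb{E}_{x, x' \sim q}\left[ \kappa_p(x, x') \right], \]

where \kappa_p is a Stein-modified kernel that depends on p only through its score \nabla_x \log p(x), so the (often intractable) normalizing constant of p is never needed.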

Friday, December 6, 2019, 2:00 pm EST (GMT -05:00)

PhD Defence: Likelihood-based Density Estimation using Deep Architectures

Priyank Jaini, PhD candidate
David R. Cheriton School of Computer Science

Multivariate density estimation is a central problem in unsupervised machine learning that has been studied extensively in both statistics and machine learning. Many methods have been proposed, including classical techniques like histograms, kernel density estimation, and mixture models, as well as, more recently, neural density estimation, which leverages advances in deep learning and neural networks to tractably represent a density function. In an age when large amounts of data are generated in almost every field, it is paramount to develop density estimation methods that are cheap in both computation and memory. The main contribution of this thesis is a principled study of parametric density estimation using mixture models and triangular maps for neural density estimation.
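
(Background note, not from the abstract.) Triangular maps are typically used for density estimation through the change-of-variables formula: if x = T(z) with z drawn from a simple base density p_Z (for example, a standard Gaussian) and T an increasing triangular map, then

\[ p_X(x) = p_Z\big(T^{-1}(x)\big)\,\big|\det J_{T^{-1}}(x)\big| = p_Z\big(T^{-1}(x)\big) \prod_{j=1}^{d} \frac{\partial T_j^{-1}(x)}{\partial x_j}, \]

because the Jacobian of a triangular map is itself triangular, so its determinant is just the product of the diagonal entries and the likelihood stays tractable to evaluate and maximize.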

Wei Tao Chen, Master’s candidate
David R. Cheriton School of Computer Science

Image semantic segmentation is an important problem in computer vision. However, training a deep neural network for semantic segmentation with supervised learning requires expensive manual labeling. Active learning (AL) addresses this problem by automatically selecting a subset of the dataset to label and iteratively improving the model, minimizing labeling cost while maximizing performance. Yet deep active learning for image segmentation has not been systematically studied in the literature.
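
(Illustrative sketch only; the classifier, data arrays, and budget below are hypothetical placeholders, not the setup studied in this work.) A generic pool-based active learning loop with entropy-based uncertainty sampling, written for a scikit-learn-style classifier:

import numpy as np

def entropy(probs):
    # Per-sample predictive entropy; larger values mean more uncertainty.
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def active_learning(clf, X_pool, y_pool, X_init, y_init, rounds=5, budget=50):
    X_lab, y_lab = X_init.copy(), y_init.copy()
    pool_idx = np.arange(len(X_pool))
    for _ in range(rounds):
        clf.fit(X_lab, y_lab)                          # retrain on current labeled set
        probs = clf.predict_proba(X_pool[pool_idx])    # (n_pool, n_classes)
        picked = pool_idx[np.argsort(entropy(probs))[-budget:]]  # most uncertain samples
        # The "oracle" here reads y_pool; in practice a human annotator labels them.
        X_lab = np.concatenate([X_lab, X_pool[picked]])
        y_lab = np.concatenate([y_lab, y_pool[picked]])
        pool_idx = np.setdiff1d(pool_idx, picked)      # remove newly labeled samples
    return clf

For segmentation, the per-sample score would normally aggregate per-pixel entropies over an image or region before deciding what to send for annotation.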

Monday, January 20, 2020, 10:30 am EST (GMT -05:00)

AI Seminar: Risk-Aware Machine Learning at Scale

Jacob Gardner, Research Scientist
Uber AI Labs

In recent years, machine learning has seen rapid advances with increasingly large scale and complex data modalities, including processing images, natural language and more. As a result, applications of machine learning have pervaded our lives to make them easier and more convenient. Buoyed by this success, we are approaching an era where machine learning will be used to autonomously make increasingly risky decisions that impact the physical world and risk life, limb, and property.

Aravind Balakrishnan, Master’s candidate
David R. Cheriton School of Computer Science

The behaviour planning subsystem, which is responsible for high-level decision making and planning, is an important aspect of an autonomous driving system. There are advantages to using a learned behaviour planning system instead of traditional rule-based approaches. However, high quality labelled data for training behaviour planning models is hard to acquire. Thus, reinforcement learning (RL), which can learn a policy from simulations, is a viable option for this problem.

Tuesday, January 21, 2020, 4:00 pm EST (GMT -05:00)

Master’s Thesis Presentation: Safety-Oriented Stability Biases for Continual Learning

Ashish Gaurav, Master’s candidate
David R. Cheriton School of Computer Science

Continual learning is often confounded by “catastrophic forgetting” that prevents neural networks from learning tasks sequentially. In the case of real world classification systems that are safety-validated prior to deployment, it is essential to ensure that validated knowledge is retained. 
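
(Generic background, not the biases proposed in this thesis.) The simplest stability bias anchors the weights being trained on a new task to the previously validated ones with a quadratic penalty; EWC-style methods additionally weight this penalty by an estimate of parameter importance. A minimal PyTorch sketch, where model and old_params are hypothetical placeholders:

import torch

def anchored_loss(model, task_loss, old_params, lam=1.0):
    # old_params: {name: tensor} snapshot of the weights validated on the old task,
    # e.g. {n: p.detach().clone() for n, p in model.named_parameters()}
    drift = sum(((p - old_params[name]) ** 2).sum()
                for name, p in model.named_parameters())
    return task_loss + lam * drift   # larger lam = stronger bias toward retention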

Wednesday, January 22, 2020, 4:00 pm EST (GMT -05:00)

Master’s Thesis Presentation: Classifier-based Approach for Out-of-distribution Detection

Sachin Vernekar, Master’s candidate
David R. Cheriton School of Computer Science

Discriminatively trained neural classifiers can be trusted only when the input data comes from the training distribution (in-distribution). Therefore, detecting out-of-distribution (OOD) samples is very important to avoid classification errors.
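
(Context only; this is a standard baseline, not the classifier-based approach presented in this talk.) A common starting point flags inputs whose maximum softmax probability falls below a threshold, since a classifier tends to be less confident on samples far from its training distribution. A minimal PyTorch sketch with a hypothetical threshold:

import torch
import torch.nn.functional as F

def flag_ood(logits, threshold=0.9):
    # logits: (batch, n_classes) output of a trained classifier
    confidence = F.softmax(logits, dim=1).max(dim=1).values
    return confidence < threshold   # True where the input looks out-of-distribution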

Friday, January 24, 2020, 1:00 pm EST (GMT -05:00)

AI Seminar: ALOHA: Artificial Learning of Human Attributes for Dialogue Agents

Steven Y. Feng
David R. Cheriton School of Computer Science

For conversational AI and virtual assistants to communicate with humans in a realistic way, they must exhibit human characteristics such as expression of emotion and personality. Current attempts toward constructing human-like dialogue agents have presented significant difficulties.