Thursday, March 12, 2020 10:30 am - 11:30 am EDT (GMT -04:00)

AI Seminar: Deep Learning on Graphs

Renjie Liao, Department of Computer Science
University of Toronto

Graphs are ubiquitous across many domains, including computer vision, natural language processing, computational chemistry, and computational social science. Although deep learning has achieved tremendous success, effectively handling graphs remains challenging due to their discrete and combinatorial structure. In this talk, I will discuss my recent work, which improves deep learning on graphs from both modeling and algorithmic perspectives.
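The core idea behind most deep learning on graphs is message passing: each node repeatedly aggregates its neighbours' features and combines them with its own. The following is a minimal, hypothetical sketch of one such step in pure Python; it illustrates the general mechanism only, not the specific models discussed in the talk.

```python
# One round of mean-aggregation message passing: each node averages
# its neighbours' feature vectors and mixes the result with its own
# features via a weight alpha. Real GNNs use learned transformations
# instead of this fixed mixing rule.

def message_passing_step(features, adjacency, alpha=0.5):
    """features:  dict node -> feature vector (list of floats)
    adjacency: dict node -> list of neighbour nodes
    alpha:     mixing weight between self and neighbour features
    """
    updated = {}
    for node, feat in features.items():
        neighbours = adjacency.get(node, [])
        if neighbours:
            mean = [
                sum(features[n][d] for n in neighbours) / len(neighbours)
                for d in range(len(feat))
            ]
        else:
            mean = feat[:]  # an isolated node keeps its own features
        updated[node] = [alpha * s + (1 - alpha) * m
                         for s, m in zip(feat, mean)]
    return updated

# Tiny triangle graph with scalar node features.
feats = {"a": [1.0], "b": [2.0], "c": [3.0]}
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(message_passing_step(feats, adj))
```

Stacking several such steps lets information propagate beyond immediate neighbours, which is what gives graph networks their expressive power.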

Taylor Denouden, Master’s candidate
David R. Cheriton School of Computer Science

Recently, much research has been published on detecting when a classification neural network is presented with data that does not fit any of the class labels the network learned at train time. These so-called out-of-distribution (OOD) detection techniques hold promise for improving safety in systems where unusual or novel inputs may cause errors that endanger human lives.
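A common baseline in this literature (not necessarily one of the techniques covered in the talk) flags an input as OOD when the classifier's maximum softmax probability falls below a threshold, i.e., when the network is not confident in any class. A minimal sketch:

```python
# Max-softmax OOD baseline: an input is treated as out-of-distribution
# when the classifier's highest class probability is below a threshold.
# The logits and threshold here are illustrative.

import math

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def is_out_of_distribution(logits, threshold=0.9):
    """Return True if the max softmax confidence is below threshold."""
    return max(softmax(logits)) < threshold

# A confidently classified input: in-distribution.
print(is_out_of_distribution([8.0, 0.5, 0.2]))   # False
# Near-uniform logits: the network is unsure, likely OOD.
print(is_out_of_distribution([1.0, 0.9, 1.1]))   # True
```

More sophisticated detectors replace the raw softmax score with calibrated or feature-space statistics, but the threshold-on-a-score structure is the same.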

Monday, February 3, 2020 10:30 am EST (GMT -05:00)

AI Seminar: Costs and Benefits of Invariant Representation Learning

Han Zhao, Machine Learning Department
Carnegie Mellon University

The success of supervised machine learning in recent years crucially hinges on the availability of large-scale, unbiased data, which is often time-consuming and expensive to collect. Recent advances in deep learning have focused on learning invariant representations, which have found abundant applications in both domain adaptation and algorithmic fairness. However, it is not clear what price we must pay in task utility for such universal representations. In this talk, I will discuss my recent work on understanding and learning invariant representations.

Monday, January 27, 2020 10:30 am EST (GMT -05:00)

AI Seminar: Cold-Start Universal Information Extraction

Lifu Huang, Department of Computer Science
University of Illinois at Urbana–Champaign

Who? What? When? Where? Why? These are the fundamental questions we ask when gathering knowledge about and understanding a concept, topic, or event. The answers to these questions underpin the key information conveyed in the overwhelming majority, if not all, of language-based communication. Unfortunately, typical machine learning models and Information Extraction (IE) techniques rely heavily on human-annotated data, which is usually expensive to compile and available only for a narrow range of types and languages, leaving them unable to handle information across diverse domains, languages, or other settings.

Friday, January 24, 2020 1:00 pm EST (GMT -05:00)

AI Seminar: ALOHA: Artificial Learning of Human Attributes for Dialogue Agents

Steven Y. Feng
David R. Cheriton School of Computer Science

For conversational AI and virtual assistants to communicate with humans in a realistic way, they must exhibit human characteristics such as expression of emotion and personality. Current attempts at constructing human-like dialogue agents have encountered significant difficulties.

Wednesday, January 22, 2020 4:00 pm EST (GMT -05:00)

Master’s Thesis Presentation: Classifier-based Approach for Out-of-distribution Detection

Sachin Vernekar, Master’s candidate
David R. Cheriton School of Computer Science

Discriminatively trained neural classifiers can be trusted only when the input data comes from the training distribution (in-distribution). Therefore, detecting out-of-distribution (OOD) samples is very important to avoid classification errors.

Monday, January 20, 2020 10:30 am EST (GMT -05:00)

AI Seminar: Risk-Aware Machine Learning at Scale

Jacob Gardner, Research Scientist
Uber AI Labs

In recent years, machine learning has seen rapid advances on increasingly large-scale and complex data modalities, from images to natural language and beyond. As a result, applications of machine learning have pervaded our lives, making them easier and more convenient. Buoyed by this success, we are approaching an era in which machine learning will be used to autonomously make increasingly risky decisions that impact the physical world and put life, limb, and property at risk.

Wei Tao Chen, Master’s candidate
David R. Cheriton School of Computer Science

Image semantic segmentation is an important problem in computer vision. However, training a deep neural network for semantic segmentation in supervised learning requires expensive manual labeling. Active learning (AL) addresses this problem by automatically selecting a subset of the dataset to label and iteratively improve the model. This minimizes labeling costs while maximizing performance. Yet, deep active learning for image segmentation has not been systematically studied in the literature. 
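The select-label-retrain loop the abstract describes can be sketched generically. Below is a hypothetical pool-based active learning round using uncertainty sampling; the stand-in "model" and sample values are illustrative only, not the segmentation setting studied in the thesis.

```python
# One round of pool-based active learning with uncertainty sampling:
# score every unlabeled sample by how uncertain the current model is,
# then send the `budget` most uncertain samples for human labeling.

def uncertainty(prob):
    """Uncertainty of a binary prediction: highest when p = 0.5."""
    return 1.0 - abs(prob - 0.5) * 2.0

def active_learning_round(unlabeled, predict, budget):
    """Pick the `budget` most uncertain samples to label next."""
    ranked = sorted(unlabeled,
                    key=lambda x: uncertainty(predict(x)),
                    reverse=True)
    return ranked[:budget]

# Toy pool: samples are numbers; the stand-in model predicts a
# probability equal to the value, clipped to [0, 1].
pool = [0.05, 0.48, 0.95, 0.52, 0.10]
predict = lambda x: min(max(x, 0.0), 1.0)
to_label = active_learning_round(pool, predict, budget=2)
print(sorted(to_label))  # the two samples nearest p = 0.5
```

For segmentation, the per-sample score would typically aggregate per-pixel uncertainties, which is one of the design choices such a study has to examine.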

Aravind Balakrishnan, Master’s candidate
David R. Cheriton School of Computer Science

The behaviour planning subsystem, which is responsible for high-level decision making, is an important part of an autonomous driving system. Using a learned behaviour planner instead of a traditional rule-based approach has advantages, but high-quality labelled data for training behaviour planning models is hard to acquire. Thus, reinforcement learning (RL), which can learn a policy from simulations, is a viable option for this problem.
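The appeal of RL here is that a policy can be learned purely from simulated interaction, with no labelled decisions. As a hedged illustration (not the thesis's method or environment), the sketch below uses tabular Q-learning to learn a policy that drives an agent to the right end of a tiny one-dimensional "road"; the environment, rewards, and hyperparameters are all invented for the example.

```python
# Tabular Q-learning on a toy 1-D chain: states 0..n-1, actions
# 0 = left and 1 = right, reward 1.0 on reaching the rightmost
# (terminal) state. The agent learns entirely from simulated
# episodes, mirroring how RL sidesteps the need for labelled data.

import random

def train_q_learning(n_states=5, episodes=500, alpha=0.5,
                     gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: Q[s][act])
            s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Standard Q-learning update.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train_q_learning()
# Greedy policy per non-terminal state (1 = move right).
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(4)]
print(policy)
```

After training, the greedy policy moves right in every state; a behaviour planner would replace this toy chain with a driving simulator and the table with a function approximator, but the learning loop has the same shape.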