Sachin Vernekar, Master’s candidate
David R. Cheriton School of Computer Science

Discriminatively trained neural classifiers can be trusted only when the input data comes from the training distribution (in-distribution). Detecting out-of-distribution (OOD) samples is therefore essential for avoiding classification errors on inputs the model was never trained to handle.
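
For context, the most common baseline for this task is thresholding the classifier’s maximum softmax probability (the MSP baseline of Hendrycks and Gimpel); the sketch below is illustrative rather than the detector proposed in this work, and the threshold value is an assumption.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def is_ood(logits, threshold=0.9):
    # Flag inputs whose maximum softmax probability falls below the
    # threshold; low confidence is taken as evidence the input is OOD.
    confidence = softmax(logits).max(axis=1)
    return confidence < threshold
```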

Jacob Gardner, Research Scientist
Uber AI Labs

In recent years, machine learning has seen rapid advances across increasingly large-scale and complex data modalities, including images, natural language, and more. As a result, applications of machine learning have pervaded our lives, making them easier and more convenient. Buoyed by this success, we are approaching an era in which machine learning will be used to autonomously make increasingly risky decisions that affect the physical world and put life, limb, and property at risk.

Wei Tao Chen, Master’s candidate
David R. Cheriton School of Computer Science

Image semantic segmentation is an important problem in computer vision. However, training a deep neural network for semantic segmentation with supervised learning requires expensive manual labeling. Active learning (AL) addresses this problem by automatically selecting a subset of the dataset to label and iteratively improving the model, minimizing labeling cost while maximizing performance. Yet deep active learning for image segmentation has not been systematically studied in the literature.
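
For illustration, here is a minimal pool-based active learning loop with entropy-based uncertainty sampling, one standard AL strategy rather than necessarily one of those studied in this work; the synthetic data, logistic-regression model, and per-round query budget are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic stand-in task

# Seed the labeled pool with a few examples of each class.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
unlabeled = [i for i in range(len(X)) if i not in set(labeled)]

model = LogisticRegression()
for _ in range(5):  # five labeling rounds
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[unlabeled])
    # Query the 20 pool samples the current model is least certain about.
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    for i in sorted(np.argsort(entropy)[-20:], reverse=True):
        labeled.append(unlabeled.pop(i))
```

In a real segmentation setting, the per-image (or per-region) uncertainty would replace the per-sample entropy, but the select-label-retrain loop is the same.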

Aravind Balakrishnan, Master’s candidate
David R. Cheriton School of Computer Science

The behaviour planning subsystem, which is responsible for high-level decision-making and planning, is an important component of an autonomous driving system. There are advantages to using a learned behaviour planning system instead of traditional rule-based approaches. However, high-quality labelled data for training behaviour planning models is hard to acquire. Thus, reinforcement learning (RL), which can learn a policy from simulations, is a viable option for this problem.
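
To make the RL framing concrete, here is a minimal tabular Q-learning sketch; the toy transition function stands in for a driving simulator, the actions stand in for high-level manoeuvres (e.g., keep lane, change lane), and all sizes and hyperparameters are illustrative assumptions rather than details of this work.

```python
import numpy as np

n_states, n_actions = 16, 3         # placeholder discretization
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.99, 0.1  # learning rate, discount, exploration
rng = np.random.default_rng(0)

def step(state, action):
    # Hypothetical simulator transition: returns (next_state, reward, done).
    next_state = (state + action) % n_states
    reward = 1.0 if next_state == n_states - 1 else -0.01
    return next_state, reward, next_state == n_states - 1

for _ in range(500):  # episodes of simulated experience
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next, r, done = step(s, a)
        # One-step temporal-difference update toward the Bellman target.
        target = r + gamma * Q[s_next].max() * (not done)
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next
```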

Ashish Gaurav, Master’s candidate
David R. Cheriton School of Computer Science

Continual learning is often hindered by “catastrophic forgetting,” which prevents neural networks from learning tasks sequentially. For real-world classification systems that are safety-validated prior to deployment, it is essential to ensure that validated knowledge is retained.
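
One widely used way to protect previously learned (and validated) knowledge is a quadratic penalty in the style of elastic weight consolidation (EWC, Kirkpatrick et al.); the sketch below is illustrative rather than the method proposed here, and the `fisher`/`old_params` dictionaries and penalty weight `lam` are assumptions.

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=100.0):
    # Quadratic penalty anchoring parameters that were important for the
    # previous task, so new-task gradients cannot freely overwrite them.
    # `fisher` and `old_params` are dicts keyed by parameter name, holding
    # the diagonal Fisher information and the parameter values saved after
    # the previous task was trained and validated.
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss
```

During training on a new task, this penalty is simply added to the task loss before backpropagation.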

Researchers in artificial intelligence have developed an innovative way to identify a range of anti-social behaviour online. The new technique, developed by Alex Parmentier, a master’s student at Waterloo’s David R. Cheriton School of Computer Science, detects anti-social behaviour by examining the reactions to a post among members of an online forum, rather than features of the original post itself.

Dmitrii Marin, PhD candidate
David R. Cheriton School of Computer Science

Deep learning models generalize poorly to new datasets and notoriously require large amounts of labeled data for training. The latter problem is exacerbated by the need to ensure that trained models are accurate across a large variety of image scenes. This diversity of images arises from the combinatorial nature of real-world scenes, occlusions, variations in lighting, acquisition methods, and so on. Many rare images have little chance of being included in a dataset but are still very important, as they often represent situations where a recognition mistake has a high cost.

Qiang Liu, Department of Computer Science
University of Texas at Austin

As a fundamental technique for approximating and bounding distances between probability measures, Stein’s method has recently caught the attention of the machine learning community; some of its key ideas have been leveraged and extended to develop practical and efficient computational methods for learning and using large-scale, intractable probabilistic models.
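
A concrete example of this line of work is Stein variational gradient descent (SVGD), which applies a kernelized form of Stein’s identity to move a set of particles toward an intractable target; the sketch below assumes an RBF kernel with fixed bandwidth and a user-supplied `score` function returning the gradient of the log target density.

```python
import numpy as np

def rbf_kernel(x, h=1.0):
    # Pairwise RBF kernel matrix and its gradient w.r.t. the first argument.
    diff = x[:, None, :] - x[None, :, :]
    K = np.exp(-(diff ** 2).sum(-1) / (2 * h ** 2))
    grad_K = -diff / h ** 2 * K[:, :, None]
    return K, grad_K

def svgd_step(x, score, stepsize=0.1):
    # One SVGD update: the kernel-weighted score term pulls particles toward
    # high density, while the kernel-gradient term repels them from each
    # other, preventing collapse to the mode.
    K, grad_K = rbf_kernel(x)
    phi = (K @ score(x) + grad_K.sum(axis=0)) / x.shape[0]
    return x + stepsize * phi
```

For a standard Gaussian target, `score = lambda x: -x`; iterating `svgd_step` on a random particle cloud drives it toward samples from the target.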