
Tuesday, January 21, 2020, 4:00 pm EST (GMT -05:00)

Master’s Thesis Presentation: Safety-Oriented Stability Biases for Continual Learning

Ashish Gaurav, Master’s candidate
David R. Cheriton School of Computer Science

Continual learning is often confounded by “catastrophic forgetting,” which prevents neural networks from learning tasks sequentially. For real-world classification systems that are safety-validated prior to deployment, it is essential to ensure that validated knowledge is retained.
Friday, December 6, 2019, 11:00 am EST (GMT -05:00)

PhD Seminar: Addressing Labels Shortage: Segmentation with 3% Supervision

Dmitrii Marin, PhD candidate
David R. Cheriton School of Computer Science

Deep learning models generalize poorly to new datasets and notoriously require large amounts of labeled data for training. The latter problem is exacerbated by the need to ensure that trained models are accurate across a large variety of image scenes. This diversity stems from the combinatorial nature of real-world scenes, occlusions, variations in lighting, acquisition methods, etc. Many rare images have little chance of being included in a dataset, yet they remain very important, as they often represent situations where a recognition mistake has a high cost.

Friday, December 6, 2019, 11:30 am EST (GMT -05:00)

AI Seminar: Recent Applications of Stein’s Method in Machine Learning

Qiang Liu, Department of Computer Science
University of Texas at Austin

As a fundamental technique for approximating and bounding distances between probability measures, Stein’s method has recently attracted attention in the machine learning community; some of its key ideas have been leveraged and extended to develop practical, efficient computational methods for learning and using large-scale, intractable probabilistic models.
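One concrete bridge between Stein’s method and machine learning is the kernelized Stein discrepancy (KSD), which measures how far a set of samples is from a density known only up to normalization through its score function. The following is a minimal one-dimensional sketch (not taken from the talk; the RBF bandwidth and sample sizes are illustrative choices), testing samples against a standard normal whose score is s(x) = −x:

```python
import numpy as np

def ksd(samples, score, h=1.0):
    """V-statistic estimate of the kernelized Stein discrepancy between
    `samples` and the density with score function `score`, using the
    1-D RBF kernel k(x, y) = exp(-(x - y)^2 / (2 h))."""
    x = samples[:, None]                  # column vector of samples
    y = samples[None, :]                  # row vector of samples
    d = x - y
    k = np.exp(-d**2 / (2 * h))
    dkdx = -d / h * k                     # ∂k/∂x
    dkdy = d / h * k                      # ∂k/∂y
    d2k = (1.0 / h - d**2 / h**2) * k     # ∂²k/∂x∂y
    sx, sy = score(x), score(y)
    # Stein kernel: u(x, y) = s(x)s(y)k + s(x)∂_y k + s(y)∂_x k + ∂²k
    u = sx * sy * k + sx * dkdy + sy * dkdx + d2k
    return u.mean()

rng = np.random.default_rng(0)
score = lambda t: -t                      # score of N(0, 1)
good = ksd(rng.normal(0, 1, 300), score)  # samples match the target: near 0
bad = ksd(rng.normal(2, 1, 300), score)   # wrong mean: clearly larger
```

Because the Stein kernel integrates to zero under the target distribution, `good` stays close to zero while `bad` does not, without ever computing the normalizing constant of the target.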

Thursday, December 5, 2019, 4:00 pm EST (GMT -05:00)

AI Seminar: Deep Machines That Know When They Do Not Know

Kristian Kersting
Computer Science Department
Centre for Cognitive Science
Technische Universität Darmstadt

Our minds make inferences that appear to go far beyond standard machine learning. Whereas people can learn richer representations and use them for a wider range of learning tasks, machine learning algorithms have been mainly employed in a stand-alone context, constructing a single function from a table of training examples. 

Thursday, December 5, 2019, 2:30 pm EST (GMT -05:00)

AI Seminar: Towards Fair and Accurate Peer Review

Ivan Stelmakh, PhD candidate
Machine Learning Department, School of Computer Science
Carnegie Mellon University

Peer review is the backbone of scholarly research, and the fairness of this process is crucial for the successful development of academia. In this talk, we will discuss our two recent works on the fairness of peer review. In the first part of the talk, we will focus on the automated assignment of papers to reviewers in the conference setting. We will show that the assignment procedure currently employed by NeurIPS and ICML does not guarantee fairness and may discriminate against some submissions. In contrast, we will present an assignment algorithm that simultaneously ensures fairness and accuracy of the resulting allocation.
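The tension the abstract describes can be shown in miniature. A common baseline maximizes the total paper–reviewer similarity of the assignment, which can sacrifice one paper’s match quality for the aggregate; a fairness-oriented alternative maximizes the similarity of the worst-off paper. The toy similarity matrix and brute-force search below are purely illustrative (not the speaker’s algorithm or data):

```python
from itertools import permutations

# similarity[p][r]: hypothetical similarity of paper p with reviewer r
similarity = [
    [1.0, 0.5],
    [0.5, 0.2],
]

def best_assignment(objective):
    """Search all one-reviewer-per-paper assignments; assignment a
    gives paper p the reviewer a[p]."""
    return max(permutations(range(len(similarity))), key=objective)

# Baseline: maximize total similarity (can leave one paper poorly matched).
sum_opt = best_assignment(
    lambda a: sum(similarity[p][r] for p, r in enumerate(a)))
# Fair alternative: maximize the worst paper's similarity.
maxmin = best_assignment(
    lambda a: min(similarity[p][r] for p, r in enumerate(a)))
```

Here the sum-maximizing assignment gives paper 1 a 0.2-similarity reviewer to secure paper 0 a perfect match, while the max-min assignment raises the worst-off paper to 0.5 at a modest cost in total similarity.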

Xin Lian, Master’s candidate
David R. Cheriton School of Computer Science

The problem of language alignment has long been an exciting topic for natural language processing researchers. Current methods for learning cross-domain correspondences at the word level rely on distributed representations of words. Recent developments in computational linguistics and neural language modeling have therefore led to the so-called zero-shot learning paradigm.

Thursday, November 28, 2019, 2:00 pm EST (GMT -05:00)

AI Seminar: Fair Representation in Group Decision-making

Edith Elkind, Department of Computer Science
University of Oxford

Suppose that a group of agents want to select k > 1 alternatives from a given set, and each agent indicates which of the alternatives are acceptable to her: the alternatives could be conference submissions, applicants for a scholarship or locations for a fast food chain. In this setting it is natural to require that the set of winners represents the voters fairly, in the sense that large groups of voters with similar preferences have at least some of their approved alternatives in the winning set.
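A simple rule with this flavour, sketched below purely for illustration (the ballots are hypothetical and this is not necessarily the talk’s method), is greedy approval voting: repeatedly add the alternative approved by the most voters who do not yet have any approved alternative in the winning set, so large cohesive groups get covered early.

```python
def greedy_approval(approvals, k):
    """Select k alternatives. Each round picks the alternative approved
    by the most voters not yet covered by an already-chosen winner."""
    uncovered = set(range(len(approvals)))          # voter indices
    candidates = {c for ballot in approvals for c in ballot}
    winners = []
    for _ in range(k):
        best = max(candidates - set(winners),
                   key=lambda c: sum(1 for v in uncovered
                                     if c in approvals[v]))
        winners.append(best)
        # voters with an approved winner are now covered
        uncovered -= {v for v in uncovered if best in approvals[v]}
    return winners

# Four voters in two cohesive groups: {a, b}-supporters and {c, d}-supporters
ballots = [{"a", "b"}, {"a"}, {"c"}, {"c", "d"}]
winners = greedy_approval(ballots, 2)  # each group gets an approved winner
```

With k = 2 the rule selects one alternative from each group ("a" and "c"), so neither half of the electorate is left without an approved winner.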