Events

Monday, January 27, 2020 10:30 am - 10:30 am EST (GMT -05:00)

AI Seminar: Cold-Start Universal Information Extraction

Lifu Huang, Department of Computer Science
University of Illinois at Urbana–Champaign

Who? What? When? Where? Why? are fundamental questions asked when gathering knowledge about and understanding a concept, topic, or event. The answers to these questions underpin the key information conveyed in the overwhelming majority, if not all, of language-based communication. Unfortunately, typical machine learning models and Information Extraction (IE) techniques rely heavily on human-annotated data, which is expensive to produce and available only for a limited set of types and languages, rendering these techniques incapable of handling information across diverse domains, languages, and other settings.

Monday, February 3, 2020 10:30 am - 10:30 am EST (GMT -05:00)

AI Seminar: Costs and Benefits of Invariant Representation Learning

Han Zhao, Machine Learning Department
Carnegie Mellon University

The success of supervised machine learning in recent years crucially hinges on the availability of large-scale and unbiased data, which is often time-consuming and expensive to collect. Recent advances in deep learning focus on learning invariant representations that have found abundant applications in both domain adaptation and algorithmic fairness. However, it is not clear what price we have to pay in terms of task utility for such universal representations. In this talk, I will discuss my recent work on understanding and learning invariant representations. 

Taylor Denouden, Master’s candidate
David R. Cheriton School of Computer Science

Recently, much research has been published on detecting when a classification neural network is presented with data that does not fit any of the class labels the network learned at training time. These so-called out-of-distribution (OOD) detection techniques hold promise for improving safety in systems where unusual or novel inputs may cause errors that endanger human lives.
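
The abstract does not specify which detection method the thesis studies; as illustration only, here is a minimal sketch of one common baseline from the OOD literature, maximum-softmax-probability thresholding, in which an input is flagged as out-of-distribution when the classifier's top softmax confidence falls below a chosen threshold. The function names and the threshold value are illustrative assumptions, not the thesis's method.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def is_ood(logits, threshold=0.5):
    """Flag an input as out-of-distribution when the classifier's
    maximum softmax probability falls below `threshold`."""
    return max(softmax(logits)) < threshold

print(is_ood([9.0, 0.1, 0.2]))  # confident, low-entropy logits -> False
print(is_ood([1.0, 1.1, 0.9]))  # near-uniform logits -> True
```

The intuition is that a network tends to be less confident on inputs far from its training distribution, so low top-class confidence serves as a cheap OOD signal; the threshold is typically tuned on held-out data.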

Thursday, March 12, 2020 10:30 am - 11:30 am EDT (GMT -04:00)

AI Seminar: Deep Learning on Graphs

Renjie Liao, Department of Computer Science
University of Toronto

Graphs are ubiquitous in many domains like computer vision, natural language processing, computational chemistry, and computational social science. Although deep learning has achieved tremendous success, effectively handling graphs is still challenging due to their discrete and combinatorial structures. In this talk, I will discuss my recent work which improves deep learning on graphs from both modeling and algorithmic perspectives.

Thursday, March 19, 2020 10:30 am - 11:30 am EDT (GMT -04:00)

AI Seminar: Zero-Shot Learning: Generalized Information Transfer Across Classes

Yuhong Guo, School of Computer Science
Carleton University

The need for annotated data is a fundamental bottleneck in developing automated prediction systems. A key strategy for reducing the reliance on human annotation is to exploit generalized information transfer, where a limited data resource is augmented with labeled data collected from related sources. 

Thursday, April 2, 2020 10:30 am - 11:30 am EDT (GMT -04:00)

CANCELLED • AI Seminar: Graph Guided Predictions

Vikas Garg, Electrical Engineering & Computer Science
Massachusetts Institute of Technology

In this talk, I will describe our recent work on effectively using graph-structured data. Specifically, I will discuss how to compress graphs to facilitate predictions, how to understand the capacity of algorithms operating on graphs, and how to infer interaction graphs so as to predict deliberative outcomes.

Wednesday, April 15, 2020 11:00 am - 11:00 am EDT (GMT -04:00)

Master’s Thesis Presentation: Asking for Help with a Cost in Reinforcement Learning

Colin Vandenhof, Master’s candidate
David R. Cheriton School of Computer Science

Reinforcement learning (RL) is a powerful tool for developing intelligent agents, and the use of neural networks makes RL techniques more scalable to challenging real-world applications, from task-oriented dialogue systems to autonomous driving. However, one of the major bottlenecks to the adoption of RL is efficiency, as it often takes many time steps to learn an acceptable policy. 

Alexandre Parmentier, Master’s candidate
David R. Cheriton School of Computer Science

This thesis presents two works with the shared goal of improving the applicability of multi-agent trust modeling to social networks.

Gaurav Gupta, Master’s candidate
David R. Cheriton School of Computer Science

We propose a mechanism for achieving cooperation and communication in Multi-Agent Reinforcement Learning (MARL) settings by intrinsically rewarding agents for obeying the commands of other agents. At every timestep, agents exchange commands through a cheap-talk channel. During the following timestep, agents are rewarded both for taking actions that conform to the commands they received and for issuing commands that were obeyed. We refer to this approach as obedience-based learning.
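
The reward bookkeeping described above can be sketched as follows. This is a minimal illustration of the two-sided intrinsic reward (follower rewarded for conforming, commander rewarded when its command is obeyed), not the thesis's implementation; the ring-shaped commander pairing, function name, and bonus values are all assumptions for the sake of a self-contained example.

```python
def obedience_rewards(prev_commands, actions, obey_bonus=0.1, command_bonus=0.1):
    """Intrinsic rewards for one timestep of an obedience-based scheme.

    prev_commands[i] is the action agent i was commanded to take on the
    previous timestep; actions[i] is the action it actually took.
    Pairing is illustrative: agent i commands agent (i + 1) % n.
    """
    n = len(actions)
    rewards = [0.0] * n
    for i in range(n):
        if actions[i] == prev_commands[i]:
            # Follower conformed to the command it received...
            rewards[i] += obey_bonus
            # ...so the commander who issued it is also rewarded.
            commander = (i - 1) % n
            rewards[commander] += command_bonus
    return rewards

# Agent 0 obeys its command (0); agent 1 deviates (commanded 1, took 2).
print(obedience_rewards([0, 1], [0, 2]))  # -> [0.1, 0.1]
```

In a full MARL setup these intrinsic rewards would be added to the environment reward when training each agent's policy.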