Events


Please note: This PhD seminar will be given online.

Akshay Ramachandran, PhD candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Lap Chi Lau

The matrix normal model, the family of Gaussian matrix-variate distributions whose covariance matrix is the Kronecker product of two lower-dimensional factors, is frequently used to model matrix-variate data. The tensor normal model generalizes this family to Kronecker products of three or more factors.
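As a minimal sketch of the definition above (not part of the talk): under the column-stacking vec convention, a matrix normal sample X with row-factor A and column-factor B has cov(vec(X)) = B ⊗ A, the Kronecker product of the two lower-dimensional factors. The covariances A and B below are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical row (m x m) and column (n x n) covariance factors.
rng = np.random.default_rng(0)
m, n = 3, 4
A = np.eye(m) + 0.2 * np.ones((m, m))
B = np.eye(n) + 0.1 * np.ones((n, n))

# Sampling: X = La @ Z @ Lb.T with Z i.i.d. standard normal gives a
# matrix normal sample, where La, Lb are Cholesky factors of A and B.
La, Lb = np.linalg.cholesky(A), np.linalg.cholesky(B)
Z = rng.standard_normal((m, n))
X = La @ Z @ Lb.T

# The full covariance of vec(X) (column-stacking convention) is the
# Kronecker product B ⊗ A, of size mn x mn -- much larger than the
# m^2 + n^2 parameters the two factors need.
Sigma = np.kron(B, A)
print(Sigma.shape)  # (12, 12)
```

The point of the model is visible in the sizes: the full covariance has (mn)^2 entries, but the Kronecker structure parameterizes it with only the two small factors.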

Please note: This seminar will be given online.

Florian Tramèr, Computer Science Department
Stanford University

Failures of machine learning systems can threaten both the security and privacy of their users. My research studies these failures from an adversarial perspective, by building new attacks that highlight critical vulnerabilities in the machine learning pipeline, and designing new defenses that protect users against identified threats.

Monday, March 22, 2021 12:00 pm - 12:00 pm EDT (GMT -04:00)

Seminar • Machine Learning — The Surprising Power of Little Data

Please note: This seminar will be given online.

Weihao Kong, Postdoctoral researcher
Department of Computer Science, University of Washington

In this talk, I will discuss several examples of my research that reveal a surprising ability to extract accurate information from modest amounts of data.

Please note: This PhD defence will be given online.

Ryan Goldade, PhD candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Christopher Batty

Thursday, March 25, 2021 12:00 pm - 12:00 pm EDT (GMT -04:00)

Seminar • Systems and Networking — Resource-Efficient Execution for Deep Learning

Please note: This seminar will be given online.

Deepak Narayanan, Department of Computer Science
Stanford University

Deep Learning models have enabled state-of-the-art results across a broad range of applications; however, training these models is extremely time- and resource-intensive, taking weeks on clusters with thousands of expensive accelerators in the extreme case.