Colloquia and Joint Seminars

Upcoming colloquia

There are no colloquia scheduled at this time.

Past colloquia

Dr. Jingbo Wang 
School of Physics, The University of Western Australia

June 28, 2017
4:00pm in MC5501
Refreshments at 3:45pm

Efficient Decomposition of Quantum Walk Operators

Quantum walk has shown much potential as a general framework for developing novel quantum algorithms. The efficiency of these algorithms depends on interference between the multiple paths that are simultaneously traversed by a quantum walker, as well as on local interactions and intrinsic quantum correlations if multiple quantum walkers are involved. As such, quantum walk has become a subject of intense theoretical and experimental study. An increasingly pressing challenge is to demonstrate quantum supremacy of quantum-walk-based algorithms over classical computation; this would require an efficient decomposition of the prescribed quantum walk operators. In this talk, I will discuss the design principles for the development of efficient quantum circuits for quantum walks of several distinct types on a wide range of undirected and directed graphs, aiming to provide some intuition on how such decompositions are derived.

This talk is intended for a general audience and will not assume specialist knowledge of the field.
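
For readers new to the subject, the sketch below (our illustration, not material from the talk) builds the simplest discrete-time walk operator, a coined walk on an N-cycle, in NumPy. The walk operator factors as W = S(C ⊗ I): a local coin toss followed by a coin-conditioned shift. Efficient circuit decompositions of the kind discussed in the talk exploit exactly this product structure.

```python
# A minimal NumPy sketch of a discrete-time coined quantum walk on an
# N-cycle. The walk operator factors as W = S (C ⊗ I): a coin toss on a
# 2-dimensional coin space followed by a coin-conditioned shift.
import numpy as np

N = 8                                          # nodes on the cycle
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin
C = np.kron(H, np.eye(N))                      # coin acts on the coin space only

# Shift: coin |0> moves the walker one step right, coin |1> one step left.
S = np.zeros((2 * N, 2 * N))
for x in range(N):
    S[(x + 1) % N, x] = 1.0                    # coin-0 block
    S[N + (x - 1) % N, N + x] = 1.0            # coin-1 block

W = S @ C                                      # one step of the walk
assert np.allclose(W @ W.conj().T, np.eye(2 * N))   # W is unitary

# Start at node 0 with coin |0> and walk a few steps.
psi = np.zeros(2 * N)
psi[0] = 1.0
for _ in range(5):
    psi = W @ psi
prob = np.abs(psi[:N]) ** 2 + np.abs(psi[N:]) ** 2  # position distribution
print(np.round(prob, 3))
```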

Bruno Salvy
École Normale Supérieure de Lyon

June 15, 2017
4:30pm in MC5479
Refreshments at 4:15pm

Explicit Continued Fractions for Riccati-type Equations

Most classical C-fractions are special cases of continued fractions due to Euler and Gauss for the quotient of contiguous hypergeometric series. These power series are solutions of Riccati equations, from which these continued fractions can be derived directly by a method due to Lagrange. In this talk, we consider Lagrange's method from a symbolic point of view, in order to determine all the equations for which it produces explicit continued fractions. The classical results are thus obtained in a unified way, as well as their q-analogues (continued fractions due to Heine). The method also applies to discrete Riccati equations, where it recovers a continued fraction for the Gamma function due to Brouncker.

This is joint work with Sébastien Maulat.
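
As a concrete example of the classical objects involved (our illustration, not from the talk): Lambert's C-fraction for tanh, which Lagrange's method derives from the Riccati equation y′ = 1 − y² satisfied by y = tanh(x), can be checked numerically in a few lines.

```python
# Numerical check of Lambert's classical C-fraction, obtainable by
# Lagrange's method from the Riccati equation y' = 1 - y^2:
#
#   tanh(x) = x / (1 + x^2 / (3 + x^2 / (5 + ...)))
import math

def tanh_cf(x, depth=12):
    """Evaluate the continued fraction bottom-up to the given depth."""
    tail = 2 * depth + 1               # deepest partial denominator: 1, 3, 5, ...
    for k in range(depth - 1, -1, -1):
        tail = (2 * k + 1) + x * x / tail
    return x / tail

for x in (0.5, 1.0, 2.0):
    print(x, tanh_cf(x), math.tanh(x))  # convergents match tanh rapidly
```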

Dr. David Correa 
Assistant Professor, School of Architecture, University of Waterloo

March 22, 2017
4:00pm in MC5501
Refreshments at 3:45pm

Material Informed Computational Design in Architecture

No longer bound by the production line, designers, engineers and computer scientists have moved to the development of adaptive construction processes capable of autonomous sensing, simulation and response. The integration of computational design and simulation tools, in the form of customized software and hardware, has provided designers with new access to material properties, resulting in unprecedented levels of performance complexity. This talk will present an overview of robotically enabled and digital fabrication projects developed at the Institute for Computational Design and Construction (ICD) in Germany.

David Correa is an Assistant Professor at the University of Waterloo and a Doctoral Candidate at the Institute for Computational Design (ICD) at the University of Stuttgart. At the ICD, David initiated and led the research field of Bio-inspired 3D Printed Programmable Material Systems. His doctoral research investigates the reciprocal relationship between material design and fabrication from a multi-scalar perspective. With a focus on climate-responsive materials for the built environment, the research integrates computational tools, simulation and digital fabrication with bio-inspired design strategies for material architectures. As a designer in architecture, product design and commercial digital media, David's professional work engages multiple disciplines and environments – from dense urban settings to remote northern regions.

Dr. Mauro Maggioni (CANCELLED)
Bloomberg Distinguished Professor in Mathematics and Applied Mathematics and Statistics at Johns Hopkins University

March 9, 2017
3:30pm in MC5501
Refreshments at 3:15pm

Geometric Methods for the Approximation of High-dimensional Dynamical Systems

We discuss a geometry-based statistical learning framework for performing model reduction and modeling of stochastic high-dimensional dynamical systems. We consider two complementary settings. In the first one, we are given long trajectories of a system, e.g. from molecular dynamics, and we discuss new techniques for estimating, in a robust fashion, an effective number of degrees of freedom of the system, which may vary in the state space of the system, and a local scale where the dynamics is well-approximated by a reduced dynamics with a small number of degrees of freedom. We then use these ideas to produce an approximation to the generator of the system and obtain, via eigenfunctions of an empirical Fokker-Planck operator, reaction coordinates for the system that capture its large-time behavior. We present various examples from molecular dynamics illustrating these ideas. In the second setting, we only have access to a (large number of expensive) simulators that can return short simulations of a high-dimensional stochastic system, and we introduce a novel statistical learning framework for automatically learning a family of local approximations to the system that can be (automatically) pieced together to form a fast global reduced model for the system, called ATLAS. ATLAS is guaranteed to be accurate (in the sense of producing stochastic paths whose distribution is close to that of paths generated by the original system) not only at small time scales but also at large time scales, under suitable assumptions on the dynamics. We discuss applications to homogenization of rough diffusions in low and high dimensions, as well as to relatively simple systems with separations of time scales, and to deterministic chaotic systems in high dimensions that are well-approximated by stochastic differential equations.
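
To make one ingredient of the first setting concrete, here is a toy sketch (our own illustration, far cruder than the multiscale estimators the abstract refers to) that estimates a local effective number of degrees of freedom from sampled points by thresholding the spectrum of a local PCA.

```python
# Toy illustration: estimate a local effective dimension of sampled data
# by local PCA. The data is a noisy 1-d curve (a circle) embedded in R^10,
# so locally the effective number of degrees of freedom should be ~1.
import numpy as np

rng = np.random.default_rng(0)

t = rng.uniform(0, 2 * np.pi, size=2000)
X = np.stack([np.cos(t), np.sin(t)]
             + [0.01 * rng.standard_normal(2000) for _ in range(8)], axis=1)

def local_dim(X, center, radius, var_explained=0.95):
    """Effective dimension of the data inside a ball, via local PCA."""
    nbrs = X[np.linalg.norm(X - center, axis=1) < radius]
    evals = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))[::-1]
    frac = np.cumsum(evals) / evals.sum()
    return int(np.searchsorted(frac, var_explained) + 1)

print(local_dim(X, X[0], radius=0.5))   # ~1: locally the curve looks 1-d
```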

Yaoliang Yu
University of Waterloo

February 8, 2017
3:30pm in MC5501
Refreshments at 3:15pm

Fast gradient algorithms for structured sparsity

Structured sparsity is an important modeling tool that expands the applicability of convex formulations for data analysis; however, it also creates significant challenges for efficient algorithm design. In this talk I will discuss how gradient algorithms can be adapted to meet the modern computational needs of large-scale machine learning, thanks to their cheap per-iteration costs. I will give results on how to efficiently compute the proximal map, the key component in gradient algorithms, through (a) identifying sufficient conditions that reduce the proximal map of a sum of functions to the composition of the proximal maps of each individual summand; (b) exploiting the proximal average as a provably sound approximation that yields strictly better convergence than the usual smoothing strategy, without incurring any overhead; and (c) completely bypassing the proximal map by proposing a generalized conditional gradient algorithm that sometimes requires only a significantly cheaper polar operation. Throughout the talk I will demonstrate the application of these results to matrix completion, dictionary learning, isotonic regression, event detection, etc. I will conclude by mentioning some progress and challenges in the nonconvex and distributed settings.
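
For context, the sketch below shows the basic template the talk builds on: the proximal gradient method for an ℓ1-regularized least-squares problem, where the proximal map has the closed form of soft-thresholding. The talk's contributions concern structured regularizers whose proximal maps are much harder; this toy version is our illustration, not the speaker's code.

```python
# Proximal gradient (ISTA) for  min_x 0.5*||Ax - b||^2 + lam*||x||_1,
# using the closed-form L1 proximal map (soft-thresholding).
import numpy as np

def soft_threshold(v, t):
    """prox of t*||.||_1: shrink each coordinate toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, iters=500):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const of grad
    for _ in range(iters):
        grad = A.T @ (A @ x - b)             # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0                             # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(50)

x_hat = proximal_gradient(A, b, lam=0.1)
print(np.nonzero(np.abs(x_hat) > 1e-3)[0])   # should be ~ [0 1 2 3 4]
```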

David Duvenaud
University of Toronto

January 20, 2017
3:30pm in MC5479
Refreshments at 3:15pm

Composing graphical models with neural networks for structured representations and fast inference

How can we build structured but flexible models? We propose a general modeling and inference framework that combines the complementary strengths of probabilistic graphical models and deep learning methods. Our model family combines latent graphical models with neural network observation models. For inference, we use recognition networks restricted to output evidence potentials that are conjugate to the latent model. These local potentials are then combined using efficient graphical model inference algorithms. All components are trained simultaneously with a single scalable stochastic variational objective. We illustrate this framework with several example models, and by showing how to automatically segment and categorize mouse behavior from raw video.
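
A drastically simplified sketch of the key trick (our illustration, not the authors' code): if the recognition network outputs Gaussian natural-parameter potentials, conjugate to a Gaussian latent prior, then combining them with the prior is exact and cheap rather than requiring generic inference.

```python
# Toy version of a conjugate recognition potential: a small network maps an
# observation to Gaussian natural parameters (J, h); since these are
# conjugate to a Gaussian prior, the posterior update is a closed-form sum.
import numpy as np

rng = np.random.default_rng(0)
D = 2

# Latent Gaussian prior in natural parameters: J0 = precision, h0 = J0 @ mu0.
J0 = np.eye(D)
h0 = np.zeros(D)

def recognition_net(y, W1, W2):
    """Toy MLP mapping an observation to a Gaussian evidence potential."""
    hid = np.tanh(W1 @ y)
    out = W2 @ hid
    J_diag = np.log1p(np.exp(out[:D]))   # softplus keeps precisions positive
    return np.diag(J_diag), out[D:]

W1 = rng.standard_normal((8, D))
W2 = rng.standard_normal((2 * D, 8))
y = rng.standard_normal(D)               # one observation

J_r, h_r = recognition_net(y, W1, W2)

# Conjugacy: posterior natural parameters are just sums; the mean is a solve.
J_post = J0 + J_r
mu_post = np.linalg.solve(J_post, h0 + h_r)
print(mu_post)
```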

David Duvenaud is an Assistant Professor in Computer Science and Statistics at the University of Toronto. He did his postdoc at Harvard University with Prof. Ryan P. Adams, working on hyperparameter optimization, variational inference, deep learning methods and automatic chemical design. He did his Ph.D. at the University of Cambridge, studying Bayesian nonparametrics with Zoubin Ghahramani and Carl Rasmussen. David also spent two summers in the machine vision team at Google Research, and co-founded Invenia, an energy forecasting and trading company.

Past CM Colloquia