*Simulating fluid flows with large and small scales without having to pay (too much) for it*

[This talk is geared toward a general computational math audience]

There are two fundamental numerical challenges associated with solving fluid flow problems involving multiple fluids/components/phases/scales: (1) solving PDEs with discontinuous coefficients and interface conditions, and (2) evolving the geometry in time (e.g., a density, a concentration, or the interface between air and water).

In this talk I will present high-order numerical techniques to solve these problems on a regular Cartesian grid. First, I will introduce the Correction Function Method (CFM) framework and will apply it to solve a canonical problem: Poisson’s equation with interface jump discontinuities. Second, I will introduce the Gradient-Augmented Level Set Method (GALSM) and will apply it to the problem of evolving interfaces separating the various fluid domains.

Throughout this talk I will illustrate our approach with simulations of physical systems. I will end by showing a surprising extension of the methods developed: solving the incompressible Euler equations with arbitrary resolution.

**Dr. Chee Yap**

Computer Science, New York University

**December 12, 2017**

3:30pm in MC 5501

refreshments at 3:15pm

*On Soft Geometric Computation*

Soft Geometric Computation is a numerical, certified approach to designing geometric algorithms with guaranteed topological properties. "Softness" here is contrasted to "hard" approaches that are traditionally associated with exact algorithms. Barriers to "hard" approaches include high complexity and the non-existence of hard solutions. Many soft solutions are practical with adaptive complexity. We describe conceptual as well as computational tools to support soft algorithms. These ideas are framed in the classic subdivision framework. We illustrate the ideas in areas such as root isolation and clustering, robot motion planning, Voronoi diagrams and surface meshing.
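The subdivision framework can be illustrated with a toy real-root isolator: bisect a box, discard it when a naive interval bound certifies the polynomial cannot vanish there, and report it when it is small enough. This is a minimal sketch for intuition only, not one of the certified algorithms from the talk (those add predicates such as Descartes' rule of signs or interval Newton); the tolerance and test polynomial are made-up illustrations.

```python
def interval_horner(coeffs, lo, hi):
    """Bound the range of a polynomial (highest-degree coefficient first)
    over [lo, hi] using naive interval arithmetic in Horner form."""
    acc_lo, acc_hi = 0.0, 0.0
    for c in coeffs:
        prods = (acc_lo * lo, acc_lo * hi, acc_hi * lo, acc_hi * hi)
        acc_lo, acc_hi = min(prods) + c, max(prods) + c
    return acc_lo, acc_hi

def subdivide(coeffs, lo, hi, eps=1e-6, out=None):
    """Recursively subdivide [lo, hi]; keep only boxes the interval
    bound cannot exclude, reporting those narrower than eps."""
    if out is None:
        out = []
    f_lo, f_hi = interval_horner(coeffs, lo, hi)
    if f_lo > 0 or f_hi < 0:        # bound excludes a root: discard box
        return out
    if hi - lo < eps:               # small candidate box: report it
        out.append((lo, hi))
        return out
    mid = 0.5 * (lo + hi)
    subdivide(coeffs, lo, mid, eps, out)
    subdivide(coeffs, mid, hi, eps, out)
    return out

# x^2 - 2 on [0, 4]: surviving boxes cluster around sqrt(2) ~ 1.4142
roots = subdivide([1.0, 0.0, -2.0], 0.0, 4.0)
```

The adaptive complexity mentioned in the abstract shows up here directly: boxes far from a root are discarded at coarse scales, and work concentrates only where the exclusion predicate fails.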

**Dr. Shawn Wang**

University of British Columbia

**December 8, 2017**

2:30pm in DC 1302

refreshments at 2:15pm

*Linear convergence of gradient descent methods in the framework of Bregman distance*

The gradient descent method is a powerful first-order algorithm for convex optimization problems and has extensive applications. However, it usually requires convexity and Lipschitz continuity of the gradient of the objective function. We investigate the linear convergence of gradient descent methods using Bregman distances. A generalized gradient descent method with a decreasing property is proposed for solving nonconvex minimization problems. Linear convergence is established using the L-convexity condition and the Bregman-Polyak-Lojasiewicz condition. This is joint work with Bauschke, Bolte, Chen, and Teboulle.
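For readers unfamiliar with the terminology, the Bregman distance generated by a differentiable convex function h, and the classical Polyak-Lojasiewicz (PL) inequality that the talk's Bregman condition generalizes, take the following form (written generically; the precise conditions used in the talk may differ):

```latex
% Bregman distance generated by a differentiable convex function h:
D_h(x, y) \;=\; h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle .

% Classical PL inequality for f with infimum f^* (mu > 0), which
% yields linear convergence of gradient descent without convexity:
\tfrac{1}{2}\,\|\nabla f(x)\|^{2} \;\ge\; \mu \bigl( f(x) - f^{*} \bigr).
```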

**Dr. Roger Melko**

Physics and Astronomy, University of Waterloo

**November 30, 2017**

3:30pm in DC 1304

refreshments at 3:15pm

**Machine Learning the Many-Body Problem**

Condensed matter physics is the study of the collective behavior of infinitely complex assemblies of interacting electrons, magnetic moments, atoms or qubits. This complexity is reminiscent of the “curse of dimensionality” commonly encountered in machine learning. Despite this curse, the machine learning community has developed techniques with remarkable abilities to classify, characterize and interpret complex sets of real-world data, such as images or natural languages. Here, we show that modern neural network architectures for supervised learning can be used to identify phases and phase transitions in a variety of condensed matter Hamiltonians. These neural networks can be trained to detect ordered states, as well as topological states with no conventional order, directly from raw state configurations sampled theoretically or experimentally. Further, such configurations can be used to train a stochastic variant of a neural network, called a Restricted Boltzmann Machine (RBM), for use in unsupervised learning applications. We show how RBMs can be sampled much like a physical Hamiltonian to produce configurations useful for estimating physical observables. Finally, we examine the power of RBMs for the efficient representation of classical and quantum Hamiltonians, and explore applications in quantum state tomography useful for near-term multi-qubit devices.
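As a concrete picture of how an RBM is "sampled much like a physical Hamiltonian," here is a minimal sketch (not the speaker's code) of block Gibbs sampling in a toy binary RBM; the layer sizes, random weights, and chain length are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary RBM with energy E(v, h) = -v.W.h - b.v - c.h
n_vis, n_hid = 6, 4
W = rng.normal(scale=0.1, size=(n_vis, n_hid))
b = np.zeros(n_vis)   # visible biases
c = np.zeros(n_hid)   # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One block-Gibbs sweep: sample hidden units given the visible
    layer, then resample the visible layer given the hidden units."""
    p_h = sigmoid(v @ W + c)
    h = (rng.random(n_hid) < p_h).astype(float)
    p_v = sigmoid(W @ h + b)
    return (rng.random(n_vis) < p_v).astype(float)

# Run the chain; the resulting configurations are the kind of samples
# one would average over to estimate observables.
v = rng.integers(0, 2, n_vis).astype(float)
samples = []
for _ in range(100):
    v = gibbs_step(v)
    samples.append(v.copy())
```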

**Dr. Yang Cao**

Computer Science, Virginia Tech.

**November 9, 2017**

3:30pm in MC 5501

refreshments at 3:15pm

**Hybrid stochastic modeling of the budding yeast cell cycle control mechanism**

The budding yeast cell cycle is regulated by complex, multi-scale control mechanisms and is subject to inherent noise resulting from the low copy numbers of species in a cell. Noise in cellular systems is often modeled and simulated with Gillespie's stochastic simulation algorithm (SSA). However, the low efficiency of the SSA limits its application to large practical biochemical networks, which often exhibit multi-scale features in two respects: species with different scales of abundance and reactions with different scales of firing frequency.

To improve the efficiency of stochastic simulations, Haseltine and Rawlings (HR) proposed a hybrid algorithm that combines ordinary differential equations (ODEs) for traditional deterministic models with the SSA for stochastic models. In this talk, we will present a comprehensive hybrid model of the gene-protein regulatory network underlying the budding yeast cell cycle control mechanism, in which the stochastic and deterministic parts are treated by Gillespie's stochastic simulation algorithm (SSA) and by ordinary differential equations (ODEs), respectively. Simulation results are compared with published experimental measurements on the budding yeast cell cycle, demonstrating that our hybrid model captures many critical characteristics of the cycle and reproduces the phenotypes of more than 100 mutant cases. The proposed scheme is considerably faster, in both modeling and simulation, than the equivalent fully stochastic simulation. We also study the accuracy of the HR hybrid method on a linear chain reaction system. Our analysis shows that the hybrid method is valid over a much greater region of system parameter space than the slow-scale SSA (ssSSA) and the stochastic quasi-steady-state assumption (SQSSA) methods.
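For readers unfamiliar with the SSA, here is a minimal sketch of Gillespie's algorithm on the kind of linear chain reaction system (A -> B -> C) used in the accuracy analysis; the rate constants, initial counts, and time horizon are made-up illustrations, not parameters from the work presented.

```python
import math
import random

def gillespie_linear_chain(x0=(100, 0, 0), k=(1.0, 0.5), t_end=5.0, seed=1):
    """Gillespie SSA for the linear chain A -k1-> B -k2-> C.
    Returns the molecule counts (A, B, C) at time t_end."""
    random.seed(seed)
    a_cnt, b_cnt, c_cnt = x0
    t = 0.0
    while t < t_end:
        r1 = k[0] * a_cnt          # propensity of A -> B
        r2 = k[1] * b_cnt          # propensity of B -> C
        r_tot = r1 + r2
        if r_tot == 0.0:           # all mass has reached C
            break
        # exponential waiting time to the next reaction
        t += -math.log(1.0 - random.random()) / r_tot
        # choose which reaction fires, proportional to its propensity
        if random.random() * r_tot < r1:
            a_cnt, b_cnt = a_cnt - 1, b_cnt + 1
        else:
            b_cnt, c_cnt = b_cnt - 1, c_cnt + 1
    return a_cnt, b_cnt, c_cnt

a, b, c = gillespie_linear_chain()
```

The inefficiency the abstract refers to is visible here: every single reaction event costs one loop iteration, which becomes prohibitive when fast reactions or abundant species drive the total propensity up; the HR hybrid idea moves such fast/abundant parts into ODEs.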

**Dr. David Richter**

Civil and Environmental Engineering and Earth Sciences, University of Notre Dame

**October 19, 2017**

3:30pm in MC 5417

refreshments at 3:15pm

**Droplets and dust in atmospheric turbulence: Insight from numerical simulations**

In the natural environment, water and air move over enormous ranges of temporal and spatial scales, and are typically subject to a wide variety of complex physical and chemical processes. While this already makes systematic and rigorous observation difficult in practice, studying these flows can be further inhibited by hazardous or inaccessible conditions which preclude direct measurements or analyses. This in turn negatively impacts the accuracy and reliability of large-scale modeling efforts which require robust knowledge of small-scale details -- for example in hurricane forecasting models, climate models, or contaminant dispersion models.

In this talk I will present ongoing work dedicated to using direct numerical simulations coupled with Lagrangian point particles as an experimental tool for understanding and parameterizing basic physical processes in multiphase environmental flows where measurements are almost completely lacking. In particular, energy and momentum transfer at the high-wind, spray-laden air-sea interface will be used as an example to show, fundamentally, what the ejection and suspension of evaporating water droplets can (and cannot) do to the budgets of momentum, heat, and moisture flux in the near-surface turbulent boundary layer. The implications of these findings will be interpreted in the context of the actual “outside” flows and larger-scale model development, and the extension of this problem to other environmental dispersed phase flows (e.g., dust transport, riverbed dynamics, blowing snow, etc.) will be discussed.

**Dr. Jingbo Wang**

School of Physics, The University of Western Australia

**June 28, 2017**

4:00pm in MC 5501

Refreshments at 3:45pm

**Efficient Decomposition of Quantum Walk Operators**

Quantum walks have shown much potential as a general framework for developing novel quantum algorithms. The efficiency of these algorithms depends on interference between the multiple paths that are simultaneously traversed by a quantum walker, as well as on local interactions and intrinsic quantum correlations when multiple quantum walkers are involved. As such, quantum walks have become a subject of intense theoretical and experimental study. An increasingly pressing challenge is to demonstrate quantum supremacy of quantum-walk-based algorithms over classical computation; this requires an efficient decomposition of the prescribed quantum walk operators. In this talk, I will discuss design principles for the development of efficient quantum circuits implementing quantum walks of several distinct types on a wide range of undirected and directed graphs, aiming to provide some intuition on how such decompositions are derived.

This talk is intended for a general audience and will not assume specialist knowledge of the field.

**Bruno Salvy**

École Normale Supérieure De Lyon

**June 15, 2017**

4:30pm in MC 5479

Refreshments at 4:15pm

**Explicit Continued Fractions for Riccati-type Equations**

Most classical C-fractions are special cases of continued fractions due to Euler and Gauss for the quotient of contiguous hypergeometric series. These power series are solutions of Riccati equations, from which the continued fractions can be derived directly by a method due to Lagrange. In this talk, we consider Lagrange’s method from a symbolic point of view, in order to determine all the equations for which it produces explicit continued fractions. The classical results are thus obtained in a unified way, as are their q-analogues (continued fractions due to Heine). The method also applies to discrete Riccati equations, where it recovers a continued fraction for the Gamma function due to Brouncker.
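As a concrete classical instance of the Riccati-to-continued-fraction connection (chosen here for familiarity, not necessarily one of the talk's examples): the tangent function solves a Riccati equation, and the associated expansion is Lambert's continued fraction.

```latex
% y = \tan z solves the Riccati equation  y' = 1 + y^2,\quad y(0) = 0,
% and one obtains Lambert's continued fraction:
\tan z \;=\;
\cfrac{z}{1 - \cfrac{z^{2}}{3 - \cfrac{z^{2}}{5 - \cfrac{z^{2}}{7 - \cdots}}}}
```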

This is joint work with Sébastien Maulat.

**Dr. David Correa**

Assistant Professor, School of Architecture, University of Waterloo

**March 22, 2017**

4:00pm in MC 5501

Refreshments at 3:45pm

**Material Informed Computational Design in Architecture**

No longer bound by the production line, designers, engineers and computer scientists have moved to the development of adaptive construction processes capable of autonomous sensing, simulation and response. The integration of computational design and simulation tools, in the form of customized software and hardware, has given designers new access to material properties, resulting in unprecedented levels of performance complexity. This talk will present an overview of robotically enabled and digital fabrication projects developed at the Institute for Computational Design and Construction (ICD) in Germany.

David Correa is an Assistant Professor at the University of Waterloo and a Doctoral Candidate at the Institute for Computational Design (ICD) at the University of Stuttgart. At the ICD, David initiated and led the research field of bio-inspired 3D-printed programmable material systems. His doctoral research investigates the reciprocal relationship between material design and fabrication from a multi-scalar perspective. With a focus on climate-responsive materials for the built environment, the research integrates computational tools, simulation and digital fabrication with bio-inspired design strategies for material architectures. As a designer in architecture, product design and commercial digital media, David’s professional work engages multiple disciplines and environments – from dense urban settings to remote northern regions.

**Dr. Mauro Maggioni (CANCELLED)**

Bloomberg Distinguished Professor in Mathematics and Applied Mathematics and Statistics at Johns Hopkins University

**March 9, 2017**

3:30pm in MC 5501

Refreshments at 3:15pm

**Geometric Methods for the Approximation of High-dimensional Dynamical Systems**

We discuss a geometry-based statistical learning framework for performing model reduction and modeling of stochastic high-dimensional dynamical systems. We consider two complementary settings. In the first, we are given long trajectories of a system, e.g. from molecular dynamics, and we discuss new techniques for estimating, in a robust fashion, an effective number of degrees of freedom of the system, which may vary across the state space, and a local scale at which the dynamics is well-approximated by a reduced dynamics with a small number of degrees of freedom. We then use these ideas to produce an approximation to the generator of the system and obtain, via eigenfunctions of an empirical Fokker-Planck equation, reaction coordinates for the system that capture the large-time behavior of the dynamics. We present various examples from molecular dynamics illustrating these ideas. In the second setting, we only have access to a (large number of expensive) simulators that can return short simulations of a high-dimensional stochastic system, and we introduce a novel statistical learning framework for automatically learning a family of local approximations to the system that can be (automatically) pieced together to form a fast global reduced model for the system, called ATLAS. ATLAS is guaranteed to be accurate (in the sense of producing stochastic paths whose distribution is close to that of paths generated by the original system) not only at small time scales but also at large time scales, under suitable assumptions on the dynamics. We discuss applications to the homogenization of rough diffusions in low and high dimensions, as well as to relatively simple systems with separations of time scales, and to deterministic chaotic systems in high dimensions that are well-approximated by stochastic differential equations.

**Yaoliang Yu**

University of Waterloo

**February 8, 2017**

3:30pm in MC 5501

Refreshments at 3:15pm

**Fast gradient algorithms for structured sparsity**

Structured sparsity is an important modeling tool that expands the applicability of convex formulations for data analysis; however, it also creates significant challenges for efficient algorithm design. In this talk I will discuss how gradient algorithms can be adapted to meet the modern computational needs of large-scale machine learning, thanks to their cheap per-iteration costs. I will give results on how to efficiently compute the proximal map, the key component in gradient algorithms, by (a) identifying sufficient conditions that reduce the proximal map of a sum of functions to the composition of the proximal maps of the individual summands; (b) exploiting the proximal average as a provably sound approximation that yields strictly better convergence than the usual smoothing strategy, without incurring any overhead; and (c) bypassing the proximal map entirely with a generalized conditional gradient algorithm that requires only a polar operation, which is sometimes significantly cheaper. Throughout the talk I will demonstrate the application of these results to matrix completion, dictionary learning, isotonic regression, event detection, etc. I will conclude by mentioning some progress and challenges in the nonconvex and distributed settings.
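To make the role of the proximal map concrete, here is a minimal sketch (not the speaker's algorithms) of a proximal gradient method for l1-regularized least squares, where the proximal map of the l1 norm has the closed-form soft-thresholding solution; the problem sizes, regularization weight, and iteration count are arbitrary illustrations.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal map of tau * ||.||_1: elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient_lasso(A, y, lam, step, n_iter=500):
    """Minimize 0.5 * ||A x - y||^2 + lam * ||x||_1 by alternating a
    gradient step on the smooth part with the l1 proximal map (ISTA)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                     # smooth-part gradient
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Synthetic sparse recovery problem (noiseless, for illustration)
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10)
x_true[:3] = (1.0, -2.0, 1.5)
y = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2               # 1/L, L = ||A||_2^2
x_hat = proximal_gradient_lasso(A, y, lam=0.1, step=step)
```

The per-iteration cost is one matrix-vector multiply plus an O(n) proximal map, which is the cheapness the abstract appeals to; the difficulty it addresses arises when the regularizer is a structured sum whose proximal map no longer has such a closed form.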

**David Duvenaud**

University of Toronto

**January 20, 2017**

3:30pm in MC 5479

Refreshments at 3:15pm

**Composing graphical models with neural networks for structured representations and fast inference**

How can we build structured, but flexible models? We propose a general modeling and inference framework that combines the complementary strengths of probabilistic graphical models and deep learning methods. Our model family combines latent graphical models with neural network observation models. For inference, we use recognition networks restricted to output evidence potentials that are conjugate to the latent model. These local potentials are then combined using efficient graphical model inference algorithms. All components are trained simultaneously with a single scalable stochastic variational objective. We illustrate this framework with several example models, and by showing how to automatically segment and categorize mouse behavior from raw video.

**Bio:**

David Duvenaud is an Assistant Professor in Computer Science and Statistics at the University of Toronto. He did his postdoc at Harvard University with Prof. Ryan P. Adams, working on hyperparameter optimization, variational inference, deep learning methods and automatic chemical design. He did his Ph.D. at the University of Cambridge, studying Bayesian nonparametrics with Zoubin Ghahramani and Carl Rasmussen. David also spent two summers on the machine vision team at Google Research, and co-founded Invenia, an energy forecasting and trading company.