2013 talks

Building Fields

By Eric Schost, Canada Research Chair in Algebra (Western University)

Monday, December 16, 2013
Location: DC 1304
Time: 1:30 - 2:30 p.m.

Abstract: Finite fields appear in many branches of pure and applied mathematics, prominently so in areas such as number theory, cryptography and coding theory. As a result, building and computing in arbitrary finite fields is a fundamental task for computer algebra systems.

Many questions remain open regarding the complexity of this kind of calculation, some of them very challenging. The most famous one is probably whether one can construct an arbitrary finite field in deterministic polynomial time.

In this talk, our objective is more modest. If we allow probabilistic algorithms, it is known that one can build and compute in arbitrary finite fields in polynomial time; our goal is then to provide algorithms that take (quasi-)linear time. I will describe work in this direction by Shoup, Lenstra, Couveignes and Lercier, as well as recent results obtained with De Feo and Doliskani.
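As a concrete illustration of the probabilistic construction (not the quasi-linear algorithms discussed in the talk), here is a minimal sketch: to build GF(p^n), rejection-sample monic degree-n polynomials over GF(p) until an irreducible one is found, then multiply modulo it. Roughly 1/n of all monic degree-n polynomials are irreducible, so the expected number of trials is about n. All names are illustrative, and the root-based irreducibility test is only valid for degrees 2 and 3.

```python
import random

def random_irreducible(p, n, rng=random.Random(1)):
    """Rejection-sample a monic irreducible polynomial of degree n over
    GF(p).  Coefficients are listed lowest-degree first, leading 1 last.
    For n = 2 or 3, irreducibility is equivalent to having no root in
    GF(p), which keeps this toy test short."""
    assert n in (2, 3)
    while True:
        f = [rng.randrange(p) for _ in range(n)] + [1]
        if all(sum(c * pow(a, i, p) for i, c in enumerate(f)) % p != 0
               for a in range(p)):
            return f

def gf_mul(a, b, f, p):
    """Multiply two elements of GF(p^n) = GF(p)[x]/(f): schoolbook
    product, then reduce using x^n = -(f[0] + f[1]*x + ... + f[n-1]*x^(n-1))."""
    n = len(f) - 1
    prod = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            prod[i + j] = (prod[i + j] + a[i] * b[j]) % p
    for k in range(2 * n - 2, n - 1, -1):
        c = prod[k]
        for i in range(n):
            prod[k - n + i] = (prod[k - n + i] - c * f[i]) % p
    return prod[:n]

# Build GF(9) and check x^(p^n - 1) = 1, as Lagrange's theorem demands.
p, n = 3, 2
f = random_irreducible(p, n)
e = [1, 0]                       # the element 1
for _ in range(p**n - 1):
    e = gf_mul(e, [0, 1], f, p)  # multiply by the image of x
print(e)                         # [1, 0], i.e. the element 1
```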

Fast and Robust iterative solvers for the Helmholtz equation

By Kees Vuik (Delft University of Technology)

Monday, November 4, 2013
Location: DC 1304
Time: 3:30 - 4:30 p.m.

Abstract: Many wave phenomena are well described by the wave equation. When the considered wave has a fixed frequency, the wave equation is usually rewritten in the frequency domain, which results in the Helmholtz equation. It is also possible to approximate the time-domain solution with a summation of solutions for several frequencies. Applications include the propagation of sound, sonar, seismic imaging, and many more. We take as an example the search for oil and gas using seismic imaging. It is well known that an increase of the frequency leads to a higher resolution, so more details of the subsurface become visible.

The Helmholtz equation in its most simple form is a combination of the symmetric positive definite Poisson operator and a negative constant (the square of the so-called wave number) multiplied by the identity operator. In order to find the solution in a complicated domain, the equation has to be discretized. There are two characteristic properties of the discretized system:

  • the product of the wave number and the step size should be smaller than a given constant.
  • if the wave number increases, the operator has more and more negative eigenvalues.

If damping is involved, the operator also acquires an imaginary part. The resulting matrix is, however, not Hermitian.
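The growing indefiniteness is easy to observe in one dimension. A minimal sketch (illustrative, assuming second-order central differences on (0, 1) with homogeneous Dirichlet conditions): the discrete Laplacian tridiag(-1, 2, -1)/h^2 has the closed-form eigenvalues (2 - 2 cos(j*pi*h))/h^2, so the discrete Helmholtz eigenvalues follow by subtracting the squared wave number, and the number of negative ones grows with the wave number.

```python
import math

def helmholtz_eigenvalues(k, N):
    """Eigenvalues of the 1D operator -d^2/dx^2 - k^2 discretized with
    second-order central differences on (0, 1), homogeneous Dirichlet
    boundary conditions and N interior points (h = 1/(N + 1)).  The
    discrete Laplacian tridiag(-1, 2, -1)/h^2 has the known eigenvalues
    (2 - 2 cos(j pi h))/h^2 for j = 1, ..., N."""
    h = 1.0 / (N + 1)
    return [(2.0 - 2.0 * math.cos(j * math.pi * h)) / h**2 - k**2
            for j in range(1, N + 1)]

def count_negative(k, N):
    """Number of negative eigenvalues of the discrete Helmholtz operator."""
    return sum(1 for lam in helmholtz_eigenvalues(k, N) if lam < 0)

# More and more eigenvalues turn negative as the wave number increases,
# while the product k*h stays below a constant (here k*h <= 50/101 < 1).
for k in (1, 10, 50):
    print(k, count_negative(k, N=100))
```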

In the years around 2000, no good iterative solvers were known. The standard approaches, Krylov subspace and multigrid methods, break down as the wave number increases.

Around 2005, a new class of preconditioners based on the shifted Laplace operator was proposed, which leads to fast and robust Helmholtz solvers. It appears that the amount of work increases linearly with the wave number. At this moment, this is the method of choice for solving the Helmholtz equation. Various papers have appeared that analyze its good convergence behaviour.

Recently, a multi-level Krylov solver has been proposed that seems to be scalable, meaning that the number of iterations is also independent of the wave number. An analysis of this method is given, together with recent results for industrial problems.

The HDG methods for partial differential equations

By Bernardo Cockburn (University of Minnesota)

Wednesday, October 2, 2013
Location: DC 1304
Time: 3:30 - 4:30 p.m.

Abstract: The hybridizable discontinuous Galerkin (HDG) methods constitute a new class of discontinuous Galerkin methods whose distinctive feature is their efficient implementation for steady-state problems and implicit time-marching discretizations. In this talk, we introduce these methods in the framework of heat conduction. We show that the HDG methods are obtained as a discrete version of a suitable characterization of the exact solution, one that lends itself to an efficient implementation of the resulting method. We then show examples of these methods and uncover the special built-in stabilization mechanism they possess. We relate them to well-known methods for second-order elliptic problems and show that they are actually better than any other discontinuous Galerkin method. We end by giving an overview of the work already done and mention ongoing and open problems.

Annealing Between Distributions by Averaging Moments

By Ruslan Salakhutdinov (University of Toronto)

Friday, September 20, 2013
Location: DC 1304
Time: 3:30 p.m. - 4:30 p.m.

Abstract: Many powerful Monte Carlo techniques for estimating partition functions, such as annealed importance sampling (AIS), are based on sampling from a sequence of intermediate distributions which interpolate between a tractable initial distribution and an intractable target distribution. The near-universal practice is to use geometric averages of the initial and target distributions, but alternative paths can perform substantially better. We present a novel sequence of intermediate distributions for exponential families: averaging the moments of the initial and target distributions. We derive an asymptotically optimal piecewise linear schedule for the moments path and show that it performs at least as well as geometric averages with a linear schedule. Moment averaging performs well empirically at estimating partition functions of restricted Boltzmann machines (RBMs), which form the building blocks of many deep learning models, including Deep Belief Networks and Deep Boltzmann Machines.
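For a one-dimensional Gaussian, an exponential family, both paths have closed forms, which makes the contrast easy to see in a few lines (an illustrative sketch, not the paper's RBM experiments): geometric averaging averages the natural parameters, while moment averaging averages the expected sufficient statistics.

```python
def geometric_path(beta, mu0, var0, mu1, var1):
    """Geometric average p_beta proportional to p0^(1-beta) * p1^beta of
    two Gaussians: average the natural parameters (mu/var, -1/(2 var))."""
    prec = (1 - beta) / var0 + beta / var1
    mean = ((1 - beta) * mu0 / var0 + beta * mu1 / var1) / prec
    return mean, 1.0 / prec

def moment_path(beta, mu0, var0, mu1, var1):
    """Moment average: average the moments E[x] and E[x^2] of the two
    Gaussians, then convert back to (mean, variance)."""
    m1 = (1 - beta) * mu0 + beta * mu1
    m2 = (1 - beta) * (mu0**2 + var0) + beta * (mu1**2 + var1)
    return m1, m2 - m1**2

# Halfway between N(0, 1) and N(10, 1): the geometric midpoint N(5, 1)
# is narrow and overlaps neither endpoint, while the moment midpoint
# N(5, 26) is broad enough to cover both.
print(geometric_path(0.5, 0.0, 1.0, 10.0, 1.0))  # (5.0, 1.0)
print(moment_path(0.5, 0.0, 1.0, 10.0, 1.0))     # (5.0, 26.0)
```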

Joint work with Roger Grosse and Chris Maddison.

Variational and RKHS approach for image Colorization and Segmentation

By Sung Ha Kang (Georgia Tech University)

Friday, July 19, 2013
Location: M3 3103
Time: 11 a.m. - 12 p.m.

Abstract: This talk will start with an introduction to image restoration, starting from Total Variation minimizing denoising, and then consider inpainting and colorization problems. The term "colorization" was introduced by Wilson Markle, who first processed the gray-scale moon images from the Apollo mission. A couple of variational colorization models will be presented, which demonstrate different effects. Another direction, using a Reproducing Kernel Hilbert Space approach, will be presented for an effective colorization application. A link to image segmentation will be made through a medical imaging application. Image segmentation separates the image into different regions to simplify the image and identify the objects easily. The Mumford-Shah and Chan-Vese models are among the most well-known variational models in the field. If time permits, this talk will include a model for segmenting piecewise constant images with irregular object boundaries, and consider some features of multiphase segmentation.

Generating safe primes and safe moduli

By Joachim von zur Gathen (University of Bonn)

Friday, May 31, 2013
Location: DC 1302
Time: 11 - 11:50 a.m.

Abstract: Safe primes and safe RSA moduli are used in several cryptographic schemes. The most common notion is that of a prime p where (p-1)/2 is also prime; the latter is then a Sophie Germain prime. Under appropriate heuristics, such primes exist in abundance and can be generated efficiently. But the modern methods of analytic number theory have, so far, not even allowed a proof that there are infinitely many of them. Also for other notions of safe primes, there is no algorithm in the literature that is unconditionally proven to terminate, let alone to be efficient.

This talk considers a different notion of safe primes and moduli. They can be generated in polynomial time, without any unproven assumptions, and are good enough for the cryptographic applications that we are aware of.
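For the classical notion (p and (p-1)/2 both prime), the standard heuristic generator is easy to sketch: sample candidates and test both numbers with Miller-Rabin. As the abstract notes, termination is only heuristic; nothing here is unconditionally proven. All names are illustrative.

```python
import random

def is_probable_prime(n, rounds=32, rng=random.Random(0)):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_safe_prime(bits, rng=random.Random(0)):
    """Sample odd candidates q until both q and p = 2q + 1 pass
    Miller-Rabin; then p is a (probable) safe prime and q a (probable)
    Sophie Germain prime.  Termination is heuristic, matching the state
    of the theory described in the abstract."""
    while True:
        q = rng.getrandbits(bits - 1) | (1 << (bits - 2)) | 1
        if is_probable_prime(q) and is_probable_prime(2 * q + 1):
            return 2 * q + 1

p = random_safe_prime(24)
print(p, (p - 1) // 2)   # both probable primes, with p = 2q + 1
```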

Joint work with Igor Shparlinski, Sydney.

On the Integrability and Summability of Bivariate Rational Functions

By Shaoshi Chen (North Carolina State University)

Friday, May 31, 2013
Location: DC 1304
Time: 2:30 - 3:30 p.m.

Abstract: Hermite reduction decides whether a univariate rational function has a rational indefinite integral. The discrete analogue of Hermite reduction is Abramov's algorithm. In this talk, we study the rational integrability and summability problems for bivariate rational functions. First, we recall a classical result by Picard from 1902 for testing the rational integrability of bivariate rational functions and its application in creative telescoping. Second, we present criteria for deciding whether a bivariate rational function can be written as a sum of two (q-)differences of bivariate rational functions. Using these criteria, we show how certain double sums can be evaluated, first in terms of single sums and, finally, in terms of values of special functions.
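The univariate decision procedure mentioned in the first sentence can be illustrated in the simplest nontrivial case, a denominator q^2 with q squarefree (so gcd(q, q') is a nonzero constant): writing a = s*q + t*q' via the extended Euclidean algorithm gives integral(a/q^2) = -t/q + integral((s + t')/q), and the integral is rational exactly when (s + t') mod q vanishes. The sketch below (a toy with illustrative names, using exact arithmetic over Q; not the bivariate algorithms of the talk) carries this out.

```python
from fractions import Fraction

# Polynomials over Q are coefficient lists, lowest degree first.

def trim(p):
    while p and p[-1] == 0:
        p.pop()
    return p

def p_add(a, b):
    n = max(len(a), len(b))
    return trim([(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
                 for i in range(n)])

def p_mul(a, b):
    if not a or not b:
        return []
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return trim(out)

def p_scale(a, c):
    return trim([Fraction(x) * c for x in a])

def p_deriv(a):
    return trim([Fraction(i * a[i]) for i in range(1, len(a))])

def p_divmod(a, b):
    """Quotient and remainder in Q[x]; b must be nonzero."""
    a = [Fraction(x) for x in a]
    q = [Fraction(0)] * max(1, len(a) - len(b) + 1)
    while trim(a) and len(a) >= len(b):
        sh, c = len(a) - len(b), a[-1] / b[-1]
        q[sh] = c
        for i in range(len(b)):
            a[sh + i] -= c * b[i]
    return trim(q), a

def ext_gcd(a, b):
    """Extended Euclid in Q[x]: returns (g, s, t) with s*a + t*b = g."""
    r0, r1 = [Fraction(x) for x in a], [Fraction(x) for x in b]
    s0, s1, t0, t1 = [Fraction(1)], [], [], [Fraction(1)]
    while trim(r1):
        q, r = p_divmod(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, p_add(s0, p_scale(p_mul(q, s1), -1))
        t0, t1 = t1, p_add(t0, p_scale(p_mul(q, t1), -1))
    return r0, s0, t0

def hermite_step(a, q):
    """One Hermite reduction step for the integral of a/q^2 with q
    squarefree: write a = s*q + t*q', so that
        integral(a/q^2) = -t/q + integral((s + t')/q).
    The integral is rational iff (s + t') mod q is zero; a nonzero
    remainder contributes logarithmic or arctangent terms."""
    g, u, v = ext_gcd(q, p_deriv(q))   # g is a nonzero constant
    s = p_scale(p_mul(a, u), 1 / g[0])
    t = p_scale(p_mul(a, v), 1 / g[0])
    _, rem = p_divmod(p_add(s, p_deriv(t)), q)
    return t, rem                      # rational part of the integral: -t/q

# The integral of x/(x^2+1)^2 is rational ...
t, rem = hermite_step([0, 1], [1, 0, 1])   # a = x, q = 1 + x^2
print(rem)    # [] -> rational, with rational part (x^2/2)/(x^2 + 1)
# ... but the integral of 1/(x^2+1)^2 is not (it contains an arctangent).
t2, rem2 = hermite_step([1], [1, 0, 1])
print(rem2)   # nonzero remainder
```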

Joint work with Manuel Kauers and Michael F. Singer.

Past CM Colloquia