2014 talks

Randomized Algorithms in Numerical Linear Algebra (RandNLA)

By Petros Drineas, Associate Professor of Computer Science at Rensselaer Polytechnic Institute.

Monday, November 17, 2014
Location: DC 1304
Time: 3:30 - 4:30 p.m.

Abstract: The introduction of randomization in the design and analysis of algorithms for matrix computations (such as matrix multiplication, least-squares regression, the Singular Value Decomposition (SVD), etc.) over the last decade provided a new paradigm and a complementary perspective to traditional numerical linear algebra approaches. These novel approaches were motivated by technological developments in many areas of scientific research that permit the automatic generation of large data sets, which are often modeled as matrices.

In this talk we will outline how such approaches can be used to approximately solve problems in numerical linear algebra, ranging from the Singular Value Decomposition (SVD) to the Column Subset Selection Problem and the related CX decomposition. Application of the proposed algorithms to data analysis tasks in population genetics will also be discussed.
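As a taste of the randomized paradigm, here is a minimal sketch (in Python with NumPy; the function name and test matrices are our own illustration, not from the talk) of one of the earliest RandNLA primitives: approximating a matrix product by sampling column/row pairs with probabilities proportional to their norms, then rescaling so the estimate is unbiased.

```python
import numpy as np

def approx_matmul(A, B, c, rng):
    """Approximate A @ B from c sampled column/row outer products,
    drawn with probabilities proportional to ||A[:, k]|| * ||B[k, :]||."""
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = p / p.sum()
    idx = rng.choice(A.shape[1], size=c, p=p)
    # Rescaling each sampled outer product by 1/(c * p_k) makes the
    # estimator unbiased; its error shrinks as c grows.
    return (A[:, idx] / (c * p[idx])) @ B[idx, :]

rng = np.random.default_rng(0)
w = 0.8 ** np.arange(200)                        # decaying weights
A = rng.standard_normal((50, 200)) * w           # scale the columns of A
B = w[:, None] * rng.standard_normal((200, 50))  # scale the rows of B
exact = A @ B
approx = approx_matmul(A, B, c=100, rng=rng)
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

With this norm-proportional sampling the expected squared Frobenius error is bounded by ||A||_F^2 ||B||_F^2 / c, so taking more samples tightens the approximation.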

Cleaning-up Data With Errors: When Symbolic-Numeric Sparse Interpolation Meets Error-Correcting Codes

By Erich Kaltofen, Professor of Mathematics at North Carolina State University.

Friday, October 10, 2014
Location: DC 1304
Time: 1:30 - 2:30 p.m.

Abstract: Algebraic error-correcting decoding is generalized to multivariate sparse polynomial and rational function interpolation from evaluations that can be numerically inaccurate and where several evaluations can have severe errors ("outliers").

Our univariate solution for numerical Prony-Blahut/Giesbrecht-Labahn-Lee sparse interpolation can identify and remove outliers ("cleans up data") by a voting scheme and Sudan's list decoding. Our multivariate solution for numerical Zippel/Kaltofen-Yang-Zhi sparse rational function interpolation removes outliers by techniques from the Welch/Berlekamp decoder for Reed-Solomon codes. Our algorithms are doubly hybrid: they combine exact and numerical methods, in fact constituting a numerical version of the algebraic error-correcting decoders, and they recover both discrete outputs (the term degrees in the sparse support) and continuous data (the real or complex coefficients that approximately fit the data).
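The exact-arithmetic core that these numerical, outlier-tolerant methods build on is classical Prony-style sparse interpolation. The following Python sketch (function name and the choice omega = 2 are ours; it handles exact data only, with no noise or outlier removal) recovers a t-sparse polynomial from 2t evaluations at powers of a point:

```python
import numpy as np

def sparse_interpolate(f, t, omega=2.0):
    """Recover a t-sparse polynomial f(x) = sum_j c_j * x**e_j from the
    2t evaluations f(omega**0), ..., f(omega**(2t-1)) (Prony style)."""
    v = np.array([f(omega**i) for i in range(2 * t)], dtype=float)
    # The values satisfy a length-t linear recurrence whose characteristic
    # roots are b_j = omega**e_j; find it from a t x t Hankel system.
    H = np.array([v[i:i + t] for i in range(t)])
    lam = np.linalg.solve(H, v[t:])
    b = np.roots(np.concatenate(([1.0], -lam[::-1]))).real
    exps = np.rint(np.log(b) / np.log(omega)).astype(int)
    # Coefficients from the transposed Vandermonde system V c = v[:t].
    V = np.vander(b, t, increasing=True).T
    coeffs = np.linalg.solve(V, v[:t])
    return dict(zip(exps, coeffs))

# Example: recover f(x) = 3*x**5 - 5*x**2 + 1 from six evaluations.
recovered = sparse_interpolate(lambda x: 3*x**5 - 5*x**2 + 1, t=3)
```

The talk's contribution layers error correction on top of this core so that severely wrong evaluations can be identified and discarded.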

This is joint work with Matthew Comer [now at CNU], Clément Pernet [U. Grenoble], and Zhengfeng Yang [ECNU Shanghai].

Games, Learning, and the Price of Anarchy

By Éva Tardos, winner of the Fulkerson Prize and Professor of Computer Science at Cornell University

Monday, September 22, 2014
Location: DC 1304
Time: 3:30 - 4:30 p.m.

Abstract: Selfish behavior can often lead to suboptimal outcomes for all participants, a phenomenon illustrated by classical examples in game theory, such as the prisoner's dilemma. In this talk we will consider how to quantify the impact of strategic user behavior on overall performance. We will consider traffic routing as well as online auctions from this perspective, providing robust guarantees for their performance even when the system is not in equilibrium, assuming participants use learning strategies to deal with an uncertain environment.
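A standard way to make the "price of anarchy" concrete (a textbook illustration, not necessarily one from the talk) is Pigou's routing example: one unit of traffic chooses between a road with fixed delay 1 and a road whose delay equals its load. A small Python check (variable names are ours) compares the Nash equilibrium cost with the social optimum:

```python
# Pigou's example: x is the fraction of traffic on the variable-delay road.
def total_cost(x):
    return x * x + (1 - x) * 1.0   # load * delay(x) + rest * delay(1)

# At Nash equilibrium every driver takes the variable road (its delay x
# never exceeds 1), so x = 1 and the total cost is 1.
nash = total_cost(1.0)

# The social optimum splits the traffic; minimize over a fine grid.
opt = min(total_cost(i / 10000) for i in range(10001))

poa = nash / opt   # price of anarchy for this instance
```

Here selfish routing costs 4/3 times the optimum (the optimal split is half and half, with total cost 3/4); price-of-anarchy results bound such ratios over entire classes of games.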

Desingularization of Ore Operators

By Manuel Kauers, Research Institute for Symbolic Computation (RISC)

Wednesday, April 9, 2014
Location: MC 5158
Time: 1:30 - 2:30 p.m.

Abstract: Singularities of differential operators or recurrence operators are points at which the numerical computation of solutions is cumbersome. In some cases, this is unavoidable because there is a solution which has a strange behaviour (e.g. a pole) at this point. But sometimes a singularity of a differential operator is only a "false alarm" and does not really correspond to a singularity of a solution. Such singularities are called apparent. Desingularization algorithms eliminate apparent singularities from a given operator. For differential operators such algorithms were already known in the 19th century, and for recurrence operators they were worked out by Abramov and van Hoeij in the 1990s. In the talk we will explain the ideas behind these techniques and then present a recent observation made in joint work with Shaoshi Chen and Michael Singer that leads to a more elegant and more general desingularization algorithm.
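A tiny hand-rolled illustration of an apparent singularity (our own example, not the algorithms from the talk): the operator L = x*D^2 - D has leading coefficient x, which vanishes at x = 0, yet its solution basis {1, x^2} is analytic there. Left-multiplying by (1/x)*D yields D^3, a left multiple with constant, nonvanishing leading coefficient: the singularity has been removed. Representing polynomials as coefficient lists:

```python
def deriv(p):
    """Derivative of a polynomial given as a coefficient list [a0, a1, ...]."""
    return [i * c for i, c in enumerate(p)][1:]

def sub(p, q):
    """Difference of two coefficient lists."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def L(p):
    """Apply L = x*D^2 - D; its leading coefficient x vanishes at x = 0."""
    return sub([0] + deriv(deriv(p)), deriv(p))   # x*p'' - p'

def M(p):
    """Apply M = D^3 = ((1/x)*D) * L, a left multiple of L whose leading
    coefficient 1 never vanishes: L with the apparent singularity removed."""
    return deriv(deriv(deriv(p)))

one, x2 = [1], [0, 0, 1]   # the solutions y = 1 and y = x**2 of L(y) = 0
assert all(c == 0 for c in L(one) + L(x2))   # both analytic at x = 0
assert all(c == 0 for c in M(one) + M(x2))   # M inherits them (plus y = x)
```

Desingularization algorithms find such left multiples systematically, for differential as well as recurrence operators.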

The Monotone Acceptance Ordered Upwind Method (A Causal Algorithm for Minimum Time/Cost Optimal Control) and Some Lessons in (Ir)Reproducible Research in Computational Math

By Ian M. Mitchell, Department of Computer Science at the University of British Columbia

Thursday, April 3, 2014
Location: MC 5158
Time: 3:30 - 4:30 p.m.

*Joint Colloquium with Applied Mathematics

Abstract: The static or time-independent Hamilton-Jacobi equation -- an example of which is the Eikonal equation -- is a fully degenerate nonlinear elliptic PDE. The viscosity solution of this PDE is useful in applications such as minimum time or cost optimal path planning or control to a target set. Fast label-setting algorithms are available for approximating the viscosity solution of some classes of these equations; most notably, the Fast Marching Method (a continuous version of Dijkstra's algorithm for the shortest path through a discrete graph) applies to some forms of the Eikonal equation. The Ordered Upwind Method introduced by Sethian & Vladimirsky [SINUM 2003] expanded the class of equations to which these fast algorithms can be applied. The Monotone Acceptance Ordered Upwind Method (MAOUM) solves the same class of problems as Sethian and Vladimirsky's algorithm, but it does so with a precomputed stencil that can adapt to local grid spacing. Consequently, MAOUM is able to guarantee that nodes are accepted in order of their value, and it performs considerably better than the older algorithm when significant grid refinement is used to improve approximation quality for problems with nonsmooth solutions. [Alton & Mitchell, J. Scientific Computing 2012]
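To make the Dijkstra analogy concrete, here is a minimal first-order Fast Marching sketch in Python (our own illustration; MAOUM's adaptive stencils and the more general anisotropic problems are not shown). Nodes are accepted in increasing order of their value, exactly the causal property discussed above:

```python
import heapq
import numpy as np

def fast_marching(speed, sources, h=1.0):
    """First-order Fast Marching for the Eikonal equation |grad u| = 1/speed
    on a uniform grid: a continuous analogue of Dijkstra's algorithm."""
    n, m = speed.shape
    u = np.full((n, m), np.inf)
    accepted = np.zeros((n, m), dtype=bool)
    heap = []
    for ij in sources:
        u[ij] = 0.0
        heapq.heappush(heap, (0.0, ij))

    def nbr(a, b):  # value of an accepted neighbour, else +inf
        return u[a, b] if 0 <= a < n and 0 <= b < m and accepted[a, b] else np.inf

    while heap:
        _, (i, j) = heapq.heappop(heap)
        if accepted[i, j]:
            continue
        accepted[i, j] = True        # accept nodes in increasing order of u
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if not (0 <= a < n and 0 <= b < m) or accepted[a, b]:
                continue
            ua, ub = sorted((min(nbr(a - 1, b), nbr(a + 1, b)),
                             min(nbr(a, b - 1), nbr(a, b + 1))))
            f = h / speed[a, b]
            if ub - ua >= f:          # only one upwind direction is usable
                new = ua + f
            else:                     # solve (u-ua)^2 + (u-ub)^2 = f^2
                new = 0.5 * (ua + ub + np.sqrt(2 * f * f - (ua - ub) ** 2))
            if new < u[a, b]:
                u[a, b] = new
                heapq.heappush(heap, (new, (a, b)))
    return u

# Distance from the centre of a 21 x 21 grid (unit speed).
u = fast_marching(np.ones((21, 21)), [(10, 10)])
```

On this example the values along grid axes come out exact, while the first-order scheme overestimates distances along diagonals; reducing that error is precisely where grid refinement, and hence MAOUM's adaptive stencils, matter.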

This research was undertaken with the best of intentions and a typically rigorous numerical analysis approach. Unfortunately, although the code is available in a zip-file, I doubt I could recreate any of the computational results today without significant effort. In the second half of the talk I will discuss the dangers of the status quo in computational science, and some tools and techniques that can significantly reduce your risk, improve your reproducibility, and in most cases increase your productivity if consistently adopted.
