Scheduling Major League Baseball

By Dr. Michael Trick (Carnegie Mellon University)

Monday, December 12

Abstract: Real sports scheduling problems are difficult to solve because of the variety of constraints that may be imposed. We look at integer and constraint programming approaches to creating tournament schedules, and describe our use of these methods in creating schedules such as the 2005 schedule for Major League Baseball.
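
For a flavor of the formulations involved, here is a minimal single round-robin model in integer programming terms (a toy illustration only, not the actual MLB model): let \(x_{ijt} = 1\) if team \(i\) plays at home against team \(j\) in slot \(t\). Then

\[
\sum_{t} (x_{ijt} + x_{jit}) = 1 \quad \forall\, i < j \qquad \text{(each pair meets exactly once)},
\]
\[
\sum_{j \ne i} (x_{ijt} + x_{jit}) = 1 \quad \forall\, i, t \qquad \text{(each team plays exactly once per slot)},
\]
\[
x_{ijt} \in \{0, 1\}.
\]

Real schedules layer on home-stand and road-trip limits, travel considerations, and broadcaster requirements, which is what makes the problem hard in practice.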

A Conservative Discretization for Hydrodynamics on the Sphere: Simulating Global Ocean Dynamics on Unstructured Grids

By Dr. Gordan Stuhne (University of Toronto)

Monday, November 28

Abstract: My talk will open with a brief discussion about conservation laws in hydrodynamic models of complex systems, with reference to geophysical, biological, and biomedical examples that I have worked on in the past. Attention will then be focused on the problem of ocean climate modeling, and I will describe a novel discretization for the hydrostatic Boussinesq equations on unstructured triangular grids on the sphere. A key feature of this finite volume/finite difference scheme is its correct representation of both linear and quadratic conservation properties at the discrete level. To maintain rigorous conservation over successive time levels, an implicit time-stepping scheme with a non-symmetric propagation matrix must be invoked. The iterative solution of the associated algebraic system introduces some additional computational cost, but greatly enhances dynamical stability and enforces a rational energy budget, a characteristic that is absent in existing ocean models. Results will be discussed from oceanic simulations based on the new methods. If time permits, I will also discuss the geometric problem of unstructured mesh generation on the sphere, and address issues relating to mixing parameterization in ocean models.
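
One standard way to see why exact quadratic conservation forces an implicit scheme (a textbook argument under simplifying assumptions, not the specifics of this discretization): for a semi-discrete system \(du/dt = Au\) with skew-symmetric \(A\), the implicit midpoint rule

\[
\frac{u^{n+1} - u^n}{\Delta t} = A\,\frac{u^{n+1} + u^n}{2}
\]

gives \(\|u^{n+1}\|^2 - \|u^n\|^2 = \frac{\Delta t}{2}\,(u^{n+1}+u^n)^T A\,(u^{n+1}+u^n) = 0\), so the quadratic invariant is preserved exactly, while each step requires solving the non-symmetric system \((I - \frac{\Delta t}{2}A)\,u^{n+1} = (I + \frac{\Delta t}{2}A)\,u^n\).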

Zeros of the Riemann Zeta Function: Computations and Implications

By Andrew Odlyzko (University of Minnesota)

Monday, November 21

Abstract: The Riemann Hypothesis now stands as the most famous unsolved problem in mathematics. Extensive computations of zeros have been used not only to provide evidence for its truth, but also for the truth of deeper conjectures that predict fine-scale statistics on the distribution of zeros of various zeta functions. These conjectures connect number theory with physics, and are regarded by many as the most promising avenue towards a proof of the Riemann Hypothesis. However, as is often true in mathematics, numerical data is subject to a variety of interpretations, and it is possible to argue that the numerical evidence we have gathered so far is misleading. Whatever the truth may be, the computational exploration of zeros of zeta functions is flourishing, and, through projects such as ZetaGrid, is drawing many amateurs into contact with higher mathematics.
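
Readers who want to experiment can compute nontrivial zeros directly with the mpmath Python library (a small illustration; large-scale verifications use specialized methods such as Odlyzko-Schönhage):

```python
# Print the first few nontrivial zeros of the Riemann zeta function.
# Every zero found so far has real part 1/2, as the Riemann Hypothesis predicts.
from mpmath import mp, zetazero

mp.dps = 25  # working precision, in decimal digits
for n in range(1, 6):
    print(n, zetazero(n))  # n-th zero in the upper half-plane, ordered by height
```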

Mixed-integer Programming to Solve Power Grid Blackout Problems

By Sara Mattia (La Sapienza, Rome, Italy)

Wednesday, November 16

Abstract: None

Systems biology of cellular regulation and signaling: multiscale and multidisciplinary approaches to network structure and function

By Dr. Chris Myers (Cornell University)

Monday, November 14

Abstract: Systems biology has arisen as a new field focused on understanding the emergence of complex biological function from its constituent molecular parts. An important area of interest centers on molecular information processing activities embedded within regulatory and signaling networks. I will describe work on three different sets of problems that illustrate some multidisciplinary aspects of systems biology: the use of bioinformatic and machine learning techniques to infer gene regulation networks from high-throughput data; the application of ideas from statistical physics to analyze the dynamics of regulatory and signaling networks; and the study of the topology of software systems to gain insight into the forces that drive the organization and evolution of complex, adaptive, information processing networks.

Generalized Nash Equilibrium Problems and Newton Methods

By Dr. Veronica Piccialli (Universita' di Roma "La Sapienza")

Monday, October 31

Abstract: The Generalized Nash Equilibrium Problem (GNEP) extends the classical Nash equilibrium problem by assuming that each player's feasible set can depend on the rival players' strategies. The GNEP is a very difficult problem that has been widely studied outside the mathematical programming community, and it is a powerful tool for modeling a host of interesting problems arising in economics and, more recently, in computer science, telecommunications, and deregulated markets. Interesting results exist in the literature on the existence of solutions, and some connections to quasi-variational inequalities have been highlighted. However, advances on the algorithmic side have been rather scarce. On the other hand, interest in this class of problems is growing, since it makes it possible to attack some very hard problems describing complex competitive situations. The aim of this work is to study Newton methods for the computation of Generalized Nash equilibria. We develop a theory for several cases of the GNEP, and also identify specific structures that are likely to occur in GNEPs and analyze their properties and peculiarities. Finally, we illustrate the results on a practical application, the problem of Internet switching, where traffic is generated by selfish users. This is joint work with Francisco Facchinei and Andreas Fischer.
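
In symbols: each player \(\nu\) takes the rivals' strategies \(x^{-\nu}\) as given and solves

\[
\min_{x^\nu} \; \theta_\nu(x^\nu, x^{-\nu}) \quad \text{subject to} \quad x^\nu \in X_\nu(x^{-\nu}),
\]

and a generalized Nash equilibrium is a point at which no player can improve by changing strategy unilaterally. The GNEP differs from the classical Nash problem precisely in that the feasible set \(X_\nu\) depends on the rivals' strategies.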

Matching Isometric Manifolds and Expression-Invariant 3D Face Recognition

By Dr. Ron Kimmel (Technion, Department of Computer Science)

Monday, October 24

Abstract: I will discuss the problem of matching isometric manifolds by embedding their intrinsic geometric structures in simple domains. The focus will be the 3DFace project, an expression-invariant face recognition system: at the Geometric Image Processing Lab, together with the Bronstein brothers, we built a system that can distinguish between two identical twins based on 3D pictures of their faces. The 3DFace system can recognize faces under various poses and facial expressions. I will review the components of our system, starting from general biometrics and face recognition techniques, and will emphasize some theoretical problems and technical challenges.

HOPE: A Homotopy Optimization Method for Protein Structure Prediction

By Dr. Dianne P. O'Leary (University of Maryland)

Friday, October 21

Abstract: The shape of a protein can be predicted by minimizing a model of potential energy. We propose a homotopy optimization method, HOPE, to use the shape of one protein as a template to predict the shape of another. Ensembles of predictions are produced by perturbing shapes along the path defined by the homotopy, increasing the likelihood of success. This is joint work with Daniel Dunlavy, Dmitri Klimov, and D. Thirumalai.
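
The homotopy idea can be sketched as follows (a generic template, with notation of my choosing): blend the template energy into the target energy,

\[
H(x, \lambda) = (1 - \lambda)\,E_{\mathrm{template}}(x) + \lambda\,E_{\mathrm{target}}(x), \qquad \lambda: 0 \to 1,
\]

starting from a known minimizer of the template energy at \(\lambda = 0\) and tracking minimizers along the path to a predicted structure at \(\lambda = 1\); perturbing shapes at intermediate values of \(\lambda\) produces the ensembles mentioned above.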

Computational Tools and Experiments for the Webgraph

By Dr. Stefano Leonardi (Universita' di Roma "La Sapienza")

Monday, October 17

Abstract: The Webgraph is the graph whose vertices are Web pages and whose directed edges are hyperlinks between pages. The study of the structure of this graph is of great importance for several Web applications, such as Web search and spam detection. This seminar presents several computational tools capable of computing topological properties such as degree distributions, connected components, dense subgraphs, and the Pagerank distribution on webgraphs of several billion edges with limited computational resources. We will also report on extensive experimental work that uses these computational tools.
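
As a small taste of one such quantity, here is a minimal PageRank power iteration on an adjacency-list graph (an illustrative toy; the tools in the talk are engineered for webgraphs of several billion edges):

```python
def pagerank(out_links, damping=0.85, iters=50):
    """Power iteration for PageRank; out_links maps a page to its out-neighbours."""
    nodes = list(out_links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u, targets in out_links.items():
            if targets:
                for v in targets:  # u passes its rank to the pages it links to
                    new[v] += damping * rank[u] / len(targets)
            else:
                for v in nodes:    # dangling page: spread its rank uniformly
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
```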

Development of a High-Fidelity Numerical Model for Hazard Prediction in the Urban Environment

By Dr. Fue-Sang Lien (University of Waterloo)

Wednesday, October 5

Abstract: The release of chemical, biological, radiological, or nuclear (CBRN) agents by terrorists or rogue states in a North American city (densely populated urban centre) and the subsequent exposure, deposition, and contamination are emerging threats in an uncertain world. The transport, dispersion, deposition, and fate of a CBRN agent released in an urban environment is an extremely complex problem that encompasses potentially multiple space and time scales. The availability of high-fidelity, time-dependent models for the prediction of a CBRN agent’s movement and fate in a complex urban environment can provide the strongest technical and scientific foundation for support of Canada’s more broadly based effort at advancing counter-terrorism planning and operational capabilities.

The objective of this talk is to report progress on developing and validating an integrated, state-of-the-art, high-fidelity, multi-scale, multi-physics modeling system for the accurate and efficient prediction of urban flow and dispersion of CBRN materials. Development of this proposed multi-scale modeling system will provide the real-time modeling and simulation tool required to predict injuries, casualties, and contamination, and to make relevant decisions (based on the strongest technical and scientific foundations) in order to minimize the consequences of a CBRN incident within a pre-determined decision-making framework.

Mixed-Integer Programming: It Works Out of the Box

By Dr. Robert Bixby (ILOG Inc. and Rice University)

Friday, September 9

Abstract: In the early 1960s, it was recognized that, as a modeling paradigm, mixed-integer programming (MIP) represented a powerful tool with essentially limitless practical applications. However, while the modeling paradigm was strong, the available solution methodologies were not. This fact led to considerable doubt about the real value of integer programming.

In the 1970s and 80s, that picture began to change with the development of specialized solutions for particular, well-studied problems. The general consensus, however, remained that there was little hope for general-purpose integer programming tools with robust out-of-the-box behavior. In the late 1990s, that unlikely development began to become reality.

We will look at the quite remarkable development of MIP codes over the last several years, examine why it has happened, and survey more recent developments.

A new image transform with medical applications

By Dr. Hongmei Zhu (York University)

Thursday, September 8

Abstract: Biomedical signals are typically finite-duration, dynamic, and non-stationary processes whose frequency characteristics vary over time or space. This often requires techniques capable of locally analyzing and processing signals. An integral transform known as the S-transform combines features of the classic Gabor transform and the more recent, versatile wavelet transform. It allows more accurate detection of subtle signal changes while remaining easy to interpret in the time-frequency domain. In this talk, we introduce the S-transform and illustrate its effectiveness in biomedical applications such as MRI.
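
For reference, the S-transform (also called the Stockwell transform) of a signal \(h(t)\) is usually written as

\[
S(\tau, f) = \int_{-\infty}^{\infty} h(t)\,\frac{|f|}{\sqrt{2\pi}}\,
e^{-\frac{(\tau - t)^2 f^2}{2}}\, e^{-i 2 \pi f t}\, dt,
\]

a Gabor-like transform whose Gaussian window scales inversely with frequency, as a wavelet's does, while retaining absolutely referenced phase.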

A New Markov Approach to Finding Correlated, Modular Patterns in Random Sequences with Applications to RNA Structure

By Dr. Rob Knight (University of Colorado at Boulder)

Monday, August 15

Abstract: Many sequence matching problems in biology can be described as lists of important 'modules', possibly containing correlated sites, which appear in a mostly random string of 'spacer'. One important class of these problems is the problem of finding RNA motifs that match a particular sequence and secondary structure pattern. In this talk, I describe a new approach for estimating the probability of occurrence of these patterns using random walks in a set of correlated state machines. We apply this new approach to the problem of finding RNA motifs in collections of random sequences, a problem that has important technological implications for a laboratory technique called SELEX in which functional molecules are artificially selected from random-sequence pools, and important implications for the "RNA World" theory of the origin of life, in which RNA came before DNA and proteins and acted both as genetic material and as a catalyst. The technique is widely applicable to other biological pattern-matching problems.
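
To make the flavor of such calculations concrete, here is a minimal sketch (mine, not the speaker's method) that computes the exact probability that a short motif appears in a uniformly random RNA sequence, by pushing a probability distribution through the states of a pattern-matching automaton:

```python
def advance(motif, s, c):
    """Longest prefix of motif that is a suffix of motif[:s] + c."""
    t = motif[:s] + c
    for k in range(min(len(motif), len(t)), 0, -1):
        if t.endswith(motif[:k]):
            return k
    return 0

def motif_probability(motif, length, alphabet="ACGU"):
    """P(motif occurs in a uniform random sequence of the given length)."""
    m = len(motif)
    dist = [0.0] * (m + 1)  # dist[s] = P(s characters of motif currently matched)
    dist[0] = 1.0
    for _ in range(length):
        new = [0.0] * (m + 1)
        new[m] = dist[m]     # a completed match is absorbing
        for s in range(m):
            for c in alphabet:
                new[advance(motif, s, c)] += dist[s] / len(alphabet)
        dist = new
    return dist[m]

print(motif_probability("GGAC", 40))  # chance a random 40-mer contains GGAC
```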

Unsupervised Learning: Estimating the Cluster Tree of a Density

By Dr. Werner Stuetzle (University of Washington)

Monday, May 16

Abstract: Clustering problems occur in many domains, from genomics and astronomy to document analysis and marketing. The general goal is to identify distinct groups in a collection of objects. To cast clustering as a statistical problem we regard the feature vectors characterizing the objects as a sample from some unknown probability density. The premise of nonparametric clustering is that groups correspond to modes of this density. Building on ideas of David Wishart and John Hartigan I will introduce the cluster tree of a density as a summary statistic reflecting the group structure, and I will describe methods for estimating the cluster tree.
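
Concretely, for a density \(p\) the objects of interest are the level sets

\[
L(\lambda; p) = \{\, x : p(x) \ge \lambda \,\},
\]

whose connected components are the high-density clusters at level \(\lambda\). As \(\lambda\) increases these components can only shrink or split, and recording the splits yields the cluster tree.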

Evaluation of European and American Options with Grid Stretching and Accurate Discretization

By Dr. Kees Oosterlee (Delft University of Technology)

Wednesday, May 4

Abstract: In this talk, we present several numerical issues that we are currently pursuing, related to the accurate approximation of option prices. In addition to the numerical solution of the Black-Scholes equation by means of accurate finite differences and an analytic coordinate transformation, we present results for options under the Variance Gamma process with a grid transformation. The techniques are evaluated for European and American options.
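
For reference, the Black-Scholes equation for an option value \(V(S, t)\) is

\[
\frac{\partial V}{\partial t}
+ \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}
+ r S \frac{\partial V}{\partial S} - r V = 0,
\]

and grid stretching concentrates mesh points where accuracy matters most, for instance near the strike, where the payoff is not smooth.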

Simulating Protein Folding Pathway and the Need of Distributed Computation

By Jeff Chen (University of Waterloo)

Monday, April 18

Abstract: The current understanding of the characteristics of protein folding is widely based on statistical-physics models of polymers that capture the essential interactions in real protein systems. The reduction of the degrees of freedom of the involved coordinates in such a model, in comparison with the all-atom modelling approach, allows for the accumulation of adequate statistics in computer simulations. This type of model has been used successfully to explore the underlying physical mechanisms of structural formation, folding dynamics, and protein-protein interaction.

We have recently presented a unified potential-energy model that successfully reproduces the hydrogen bonding effect and hydrophobicity in proteins. The model provides structural insight into the physical mechanism of such problems as structural conversion due to mutation and double native conformations with different α-helix and β-sheet contents in a prion protein.

Quantum Walks and Quantum Algorithms

By Dr. Ashwin Nayak (University of Waterloo)

Monday, April 11

Abstract: The goal of this expository talk is to introduce some basic elements of quantum computation and a class of quantum algorithms inspired by random walks on graphs. We will begin by describing quantum analogues of random walks and some of their peculiar properties. Such analogues have recently been used in the design of fast quantum algorithms. We will pay particular attention to algorithms for search-type problems, especially to the problem of testing whether an implicitly specified algebraic group is abelian.

Numerical Computation of Bifurcations in a Model of a Differentially Heated Rotating Fluid System

By Dr. Greg Lewis (University of Ontario Institute of Technology)

Wednesday, April 6

Abstract: Mathematical models of fluid systems that isolate the effects of differential heating and rotation are useful tools for studying the behaviour of large-scale geophysical fluids, such as the Earth's atmosphere. In this talk, I discuss the bifurcations of steady axisymmetric solutions that occur in one such model that considers a fluid contained in a rotating spherical shell. A differential heating of the fluid is imposed between the pole and equator of the shell, and gravity is assumed to act in the radial direction. The Navier-Stokes equations in the Boussinesq approximation are used to describe the fluid flow. With the assumption of axisymmetry, these become a system of partial differential equations in two spatial dimensions, and thus discretization leads to large sparse systems. Numerical continuation is used to follow the axisymmetric solutions through parameter space, and the Implicitly Restarted Arnoldi method with Cayley transformation is used to compute the eigenvalues of interest.

This is joint work with Bill Langford, University of Guelph.
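
For reference, one standard form of the Boussinesq equations for a rotating, differentially heated fluid is

\[
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
+ 2\boldsymbol{\Omega}\times\mathbf{u}
= -\frac{1}{\rho_0}\nabla p + \frac{\rho'}{\rho_0}\,\mathbf{g} + \nu\nabla^2\mathbf{u},
\qquad \nabla\cdot\mathbf{u} = 0,
\]
\[
\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T = \kappa\nabla^2 T,
\]

where density variations \(\rho'\) enter only through the buoyancy term, and the differential heating enters through the boundary conditions on \(T\).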

Problems with von Neumann's Theory: A New Theory of Measurement

By Dr. Jonathan Barzilai (Dalhousie University)

Monday, March 21

Abstract: The question of the mathematical basis for measurement in the physical and non-physical sciences has been studied since 1887 and is conceptually difficult. This problem is of theoretical and practical importance: among other things, subjective measurement underpins utility theory, the theory of games, decision theory, mathematical psychology and other applied fields.

Some major problems in the classical theory have been unsolved until now. These include the so-called "scale-type" problem which deals with issues concerning the classification of measurement scales and the "meaningfulness" problem which has nothing to do with meaning but rather deals with the applicability of mathematical operations to scales. This state of affairs may indicate that there are fundamental errors of measurement in the social sciences and this, indeed, is the case.

I shall present a new theory of measurement including necessary and sufficient conditions for the applicability of the operations of addition and multiplication and a new classification of measurement models.

The new theory addresses the major problems of the theory of measurement and has far-reaching implications. For example, none of the models of the classical theory of measurement, including von Neumann and Morgenstern's utility, satisfies the conditions for applicability of addition and multiplication.

High-Dimensional Data Analysis by Action-Respecting Embedding

By Ali Ghodsi (University of Waterloo)

Monday, March 14

Abstract: Complex, high-dimensional data can often be well described by a space of much lower dimension. Modern dimensionality reduction techniques, such as Kernel PCA, Isomap, and Local Linear Embedding (LLE), attempt to discover non-linear sub-manifolds by exploiting some form of similarity measure between data points. However, extra information is often available that cannot be conveniently captured in the form of similarities. An important example I address is temporal data obtained from a controlled process, where, in addition to temporal structure, it is also known which actions were taken between observations.

In this talk I present an algorithm that exploits this additional knowledge and greatly enhances manifold discovery. The idea is to estimate a kernel matrix that implicitly maps the original data space into a feature space, automatically discovering a mapping that unfolds the underlying manifold from which the data was sampled. The computational method involves optimizing a positive semi-definite matrix by maximizing the variance in the feature space, subject to constraints that preserve the distances between nearest neighbors and consistency constraints drawn from action labels.
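
In spirit (my paraphrase of this class of programs, not the exact formulation from the talk), the kernel matrix \(K\) is obtained from a semidefinite program of the form

\[
\max_{K \succeq 0} \; \operatorname{tr}(K)
\quad \text{s.t.} \quad
\sum_{i,j} K_{ij} = 0, \qquad
K_{ii} - 2K_{ij} + K_{jj} = \|x_i - x_j\|^2 \ \ \text{for neighbours } (i, j),
\]

where maximizing the trace of the centered kernel maximizes variance in the feature space, and further linear constraints on \(K\) encode that the same action should act consistently wherever it is taken.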

A Parallel Adaptive Mesh Refinement (AMR) Computational Framework for Physically Complex Flows

By Dr. Clinton Groth (University of Toronto)

Monday, March 7

Abstract: A parallel adaptive mesh refinement (AMR) framework will be described for the solution of systems of conservation equations governing physically complex flows with disparate spatial and temporal scales on body-fitted multi-block mesh. The AMR algorithm makes use of a finite-volume spatial discretization procedure to solve the governing equations on multi-block mesh. A parallel Newton's method is used to solve the system of nonlinear algebraic equations arising from the discretization procedure. A flexible block-based hierarchical data structure is used to facilitate automatic solution-directed mesh adaptation and efficient and scalable parallel implementation via domain decomposition. The use of body-fitted mesh permits more accurate solutions near boundaries, enables anisotropic grid refinement, and allows better resolution of thin boundary and shear layers. In addition, a mesh adjustment algorithm has been developed for treating stationary and moving boundaries and interfaces not aligned with the mesh. The application of the AMR framework to several problems including compressible aerodynamic flows with shocks, multi-phase core flows in solid rocket motors, and combusting flows will be discussed and used to demonstrate the validity and suitability of the approach for predicting a wide range of physically complex flows.
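
As a schematic of what "solution-directed" adaptation means (a generic gradient-based criterion of my own, not the framework's actual refinement sensor), blocks can be flagged by comparing an estimated solution gradient against thresholds:

```python
import numpy as np

def flag_blocks(blocks, refine_tol=0.1, coarsen_tol=0.01):
    """Map block id -> 'refine' / 'coarsen' / 'keep' from solution gradients.

    blocks maps a block id to a 2-D numpy array of cell-centred values.
    """
    flags = {}
    for bid, u in blocks.items():
        gx, gy = np.gradient(u)
        indicator = np.max(np.hypot(gx, gy))  # crude local-smoothness estimate
        if indicator > refine_tol:
            flags[bid] = "refine"
        elif indicator < coarsen_tol:
            flags[bid] = "coarsen"
        else:
            flags[bid] = "keep"
    return flags
```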

Three Problems in Computational Number Theory

By Dr. Kevin Hare (University of Waterloo)

Monday, February 28

Abstract: In this talk we will give an outline of three different problems in computational number theory, describing some of the techniques used to attack these problems and some of the results. Odd Perfect Numbers: A perfect number is an integer n such that the sum of the proper divisors of n equals n. The first two examples are 6 and 28 (1, 2, and 3 divide 6, and 1+2+3 = 6, etc.). This problem goes back to the time of the Greeks. To date, no odd perfect numbers are known, and most people conjecture that they do not exist. With the advent of computers, the bounds for non-existence are constantly being pushed higher. I will discuss some of these bounds and give a quick outline of some of the techniques.
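
The definition is easy to state in code (checking candidates one by one is hopeless at the search bounds in question, which rely on factorization arguments, but it verifies the examples above):

```python
def is_perfect(n):
    """True if the proper divisors of n sum to n, e.g. 6 = 1 + 2 + 3."""
    if n < 2:
        return False
    total = 1                      # 1 is a proper divisor of every n > 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d    # count the co-divisor too
        d += 1
    return total == n

print([n for n in range(2, 10000) if is_perfect(n)])  # [6, 28, 496, 8128]
```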

Reverse Symbolic Computation: There are entire courses on computing various constants to an arbitrary number of decimal places. For example, it is easy to determine that e + pi = 5.859874482... But how would we go the other way? Say we were given the number 1.324717957...; could we say where it *most probably* came from? I will discuss some of the techniques for problems of this type.
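
Tools for exactly this exist; for instance, mpmath's findpoly hunts for a small integer polynomial satisfied by a given decimal (a quick sketch; the digits below extend the constant quoted above):

```python
from mpmath import mp, mpf, findpoly

mp.dps = 15
x = mpf('1.324717957244746')  # the decimal from the paragraph above
print(findpoly(x, 3))         # expect [1, 0, -1, -1], i.e. x**3 - x - 1 = 0
```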

Roots of Polynomials with Restricted Coefficients: Consider the set of all polynomials whose coefficients are +1 or -1. Now consider the roots of all of these polynomials in the complex plane. If we plot all of these roots, we get a surprisingly complicated, fractal-like image in the complex plane. I will discuss some of the results that are known about this image, and how one might go about plotting it in something other than the naive fashion.
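
The naive rendering takes only a few lines (fine for small degrees; the talk concerns doing much better):

```python
# Scatter-plot the roots of all polynomials of degree <= 10
# with coefficients +1 or -1.
import itertools
import numpy as np
import matplotlib.pyplot as plt

roots = []
for coeffs in itertools.product([1.0, -1.0], repeat=11):
    roots.extend(np.roots(coeffs))   # roots of one +/-1 polynomial
roots = np.array(roots)
plt.scatter(roots.real, roots.imag, s=0.1)
plt.gca().set_aspect("equal")
plt.savefig("pm1_roots.png", dpi=200)
```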

Semidefinite Programming Relaxations for Facility Layout Design

By Dr. Matthias Takouda (University of Waterloo, Management Sciences Department)

Monday, February 21

Abstract: The unequal-area facility layout problem is an extremely hard optimization problem. It consists of partitioning a rectangular facility of known dimensions into departments that have pre-specified but possibly unequal areas. The objective is to minimize the total cost associated with the known (or projected) interactions between the departments. Instances of this problem occur in a variety of applications, such as hospital layout, service centre layout, and Very Large Scale Integration (VLSI) circuit design. Semidefinite programming (SDP) is a mathematical programming technique involving optimization over matrices. It can be viewed as a generalization of linear programming, where the cone of non-negative reals is replaced by the (more general) cone of positive semidefinite matrices. Our main objective is to investigate the potential of semidefinite programming-based algorithms for tackling facility layout problems.
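
For readers new to SDP, the standard primal form replaces the nonnegative vector of linear programming with a positive semidefinite matrix variable:

\[
\min_{X} \; \langle C, X \rangle
\quad \text{s.t.} \quad
\langle A_i, X \rangle = b_i \ \ (i = 1, \dots, m), \qquad X \succeq 0,
\]

where \(\langle C, X \rangle = \operatorname{tr}(C^T X)\); restricting all matrices to be diagonal recovers an ordinary LP.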

We introduce a new SDP-based model of the facility layout problem inspired by the experience acquired using Linear Programming. This model is shown to lead to non-trivial global bounds.

A Stable Iterative Method for Linear Programming

By Dr. Henry Wolkowicz (University of Waterloo)

Monday, February 14

Abstract: We present a new primal-dual interior/exterior-point method for linear programming. We use a simple preprocessing step to eliminate both the primal and dual feasibility equations. We then apply an iterative method, within an inexact Newton framework, directly to the linearized equations. We present numerical examples where roundoff error causes problems for the normal equations approach. The numerical tests show that our method takes direct advantage of the sparsity and stability of the data.
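
For context, the approach being contrasted eliminates \(\Delta x\) and \(\Delta z\) from the linearized optimality conditions at each iteration, leaving the normal equations

\[
A\,Z^{-1}X\,A^{T}\,\Delta y = r,
\]

with \(X = \operatorname{Diag}(x)\) and \(Z = \operatorname{Diag}(z)\). Near optimality the ratios \(x_i / z_i\) tend to \(0\) or \(\infty\), and the resulting ill-conditioning is one source of the roundoff difficulties the numerical examples illustrate.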

Elliptic Curves and Random Matrix Theory

By Dr. Michael Rubinstein (University of Waterloo)

Monday, February 7

Abstract: Elliptic curves feature prominently in the proof of Fermat's Last Theorem and in cryptography, yet remain highly mysterious. Surprisingly, random matrix theory, a field first developed by physicists, has begun to shed light on difficult, unanswered questions concerning elliptic curves, from the Riemann Hypothesis to problems regarding the rank of an elliptic curve. I'll describe the successes and limitations of the random matrix theory approach and discuss the role computation has played in this area.

Three years among the acronyms: a computational mathematics look at climate modelling

By Dr. Marek Stastna (University of Waterloo)

Monday, January 31

Abstract: The science of climate modeling can be a difficult-to-penetrate tangle of acronyms. From ENSO to NAO, and with a generous helping of THC, one is often left wondering whether anyone knows what (if anything) all these terms in the news actually mean. In this talk, Dr. Stastna will use his own experience in the modeling of climate variability during deglaciation (the last 20,000 years or so) to present his own view of how computational mathematics fits in with climate science. With a generous helping of visual images, he will contrast climate models and computational fluid dynamics. He will also outline the many different ways undergraduates can get involved with climate modeling. Finally, he will try to define several of the more common acronyms.

Real Roots of Polynomial Systems via Gröbner Bases with an Application to Robotics

By Dr. Fabrice Rouillier (Chargé de Recherche, INRIA - Lorraine, Nancy, France)

Thursday, January 27

Abstract: Gröbner bases form one of the main methods used in computer algebra systems for solving or reducing systems of algebraic polynomial equations in many variables. Advances and applications of algorithms to compute such Gröbner bases are of great interest to the computer algebra community because of their many uses.

In this talk, we show how Gröbner bases are used in algorithms for studying the real roots of polynomial systems (with or without inequalities, with or without a finite number of roots, with or without parameters).
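
As a small illustration of the mechanics in sympy (a toy example, unrelated to the robotics demonstration): a lexicographic Gröbner basis triangularizes a system, after which the real roots of the final, univariate polynomial can be isolated.

```python
from sympy import symbols, groebner, real_roots

x, y = symbols('x y')
# Toy system: a circle intersected with a parabola.
G = groebner([x**2 + y**2 - 4, y - x**2], x, y, order='lex')
g = G.exprs[-1]               # univariate in y (here y**2 + y - 4)
print(g, real_roots(g))       # exact real roots of the eliminant
```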

We will give a live demonstration of solving a problem in robotics whose solution is unattainable by all other known software approaches. In particular, the demonstration will allow us to step through the several stages of our algorithm, and thus the several sub-algorithms, that we have developed.

More information about this application can be found at the web site http://www.inria.fr/actualites/inedit/inedit23_parta.en.html

Computing Gröbner Bases of Polynomial Systems and an Application in Cryptography

By Dr. Jean-Charles Faugère (CNRS, University of Paris 6, Paris, France)

Wednesday, January 26

Abstract: Gröbner bases are mathematical objects introduced by B. Buchberger in 1965 that can be used to solve a system of algebraic equations in a general fashion, independent of the number of equations or their context. While the use of Gröbner bases is common today (they are the subject of over 3,000 indexed articles), the algorithms available up to now have been extremely slow.

In 2002 we took up a challenge of J. Patarin for his HFE (Hidden Fields Equations) public key cryptosystem. Using some new Gröbner bases algorithms, we were able to break the code using one machine and a couple of days of computation. In effect, this meant that the code could be broken by almost anybody. In our case we solved a large algebraic system of equations over GF(2) (80 dense quadratic equations in 80 variables). In that case, the computation of the Gröbner basis "reduces" to solving several huge linear systems over GF(2) (on the order of millions of columns).
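
The linear-algebra kernel can be sketched compactly: Gaussian elimination over GF(2), with each row stored as a Python integer used as a bit vector (an illustration of the basic operation only; the F4/F5 algorithms exploit far more structure and sparsity):

```python
def rref_gf2(rows, ncols):
    """Row-reduce a 0/1 matrix over GF(2); bit j of rows[i] is entry (i, j)."""
    rank = 0
    for col in range(ncols):
        for i in range(rank, len(rows)):
            if (rows[i] >> col) & 1:
                rows[rank], rows[i] = rows[i], rows[rank]
                for j in range(len(rows)):
                    if j != rank and (rows[j] >> col) & 1:
                        rows[j] ^= rows[rank]   # XOR is addition mod 2
                rank += 1
                break
    return rows, rank

print(rref_gf2([0b1011, 0b0110, 0b1101], 4))
```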

The purpose of this talk is to describe some of the efficient methods that we developed for computing Gröbner bases (the F4/F5 algorithms) and to describe how they were used in the cryptography application mentioned above. Cryptography is an interesting application since the number of equations and variables tends to be abnormally large. In the above example, the reduced matrix has no fewer than 4.6 million columns, which requires around 8 gigabytes of memory.

A conic interior point Dantzig-Wolfe decomposition approach for large scale semidefinite programming

By Dr. Kartik Krishnan (McMaster University)

Monday, January 24

Abstract: Conic programming is perhaps 'linear programming (LP)' for the 21st century and has a variety of applications in science and engineering. Other than LP, the two important classes of conic programming are second order cone programming (SOCP) and semidefinite programming (SDP). Interior point methods are currently the most popular techniques for solving conic programs; they can solve large-scale LPs and SOCPs, but they are fairly limited in the size of SDPs they can handle.

We recently proposed a conic interior point decomposition approach for solving large-scale semidefinite programs. This is reminiscent of the decomposition approaches developed by Dantzig-Wolfe, Benders, and Kelley for large-scale linear programs in the 1960s. The idea is to solve a large SDP by iterating between a master problem and a separation oracle for SDP within an overall interior point decomposition scheme. The master problem is a mixed conic program over LP, SOCP, and smaller SDP blocks, which can be solved more easily than the original SDP. In this way, the decomposition approach improves the scalability of current interior point solvers for large-scale semidefinite programs in practice.
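
The separation oracle can be understood via the fact that \(X \succeq 0\) exactly when \(v^T X v \ge 0\) for every vector \(v\): if the master problem returns an \(\hat{X}\) that is not positive semidefinite, an eigenvector \(v\) for its most negative eigenvalue produces the violated linear cut

\[
\langle v v^{T}, X \rangle = v^{T} X v \ge 0,
\]

which is passed back to the master problem. (This is the standard cutting-plane view of SDP; the oracle in the talk may return richer conic blocks.)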

In this talk, we motivate and present this conic interior point decomposition approach for SDP, together with some of the issues involved in an efficient implementation of the algorithm. We will also present some preliminary computational experience with this algorithm.

This is joint work with Gema Plaza Martinez (University of Alicante, Spain), and Tamas Terlaky (McMaster University).

Automatic Differentiation and Structure (or, Practical Problems Require Semi-automatic Differentiation)

By Dr. Thomas F. Coleman (Cornell University)

Monday, January 17

Abstract: Computational mathematics often involves the use of derivatives – usually first and sometimes second derivatives. For example, the numerical solution of systems of nonlinear equations can require the computation of the Jacobian matrix; nonlinear optimization typically needs the gradient vector (at the current point) and may also require the matrix of second derivatives, the Hessian matrix. Indeed, computation of these partial derivatives is often the most time-consuming (and error-prone) component of the overall procedure.

The recent arrival of robust methods and software for automatic differentiation (AD) yields an error-free and painless mechanism for derivative computations. Unfortunately, the ‘automatic’ application of AD can lead to very expensive computations. Fortunately, most serious applications are structured in such a way that a judicious application of AD, exploiting the structure, can yield accurate derivative matrices, and gradients, in a very cost-effective manner.

In this talk, which requires no background in automatic differentiation, we illustrate how AD can be applied effectively to large-scale problems by exploiting the structure of the computation. The techniques rely on a simple understanding of numerical linear algebra and graph theory, and an appreciation of applications.
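
A minimal sketch of the idea behind forward-mode AD, using dual numbers (for exposition only; production AD tools handle whole programs and also provide reverse mode):

```python
class Dual:
    """Number a + b*eps with eps**2 = 0: carries a value and its derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._wrap(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1

print(f(Dual(2.0, 1.0)).der)  # 14.0 = f'(2), since f'(x) = 6x + 2
```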

Empirical FDR Analysis for Functional Neuroimaging

By Dr. Jonathan Taylor (Stanford University)

Thursday, January 13

Abstract: The False Discovery Rate (FDR) has proven to be a popular and useful statistical tool for "signal detection" problems in fields such as genomics, astrophysics and neuroimaging, where researchers would like to increase their power of detection while keeping some control of the Type I errors or false discoveries. We begin the talk by describing the signal detection problem in functional MRI (fMRI) studies.

The FDR "methodology" relies on the assumption that there is a large proportion of "true" null hypotheses, that is, genes that are inactive or regions of the brain that are inactive. In certain (and perhaps the majority of) cases this assumption proves to be false. We describe some recent work of Efron that addresses this problem, by choosing the null distribution empirically based on the data rather than based on a theoretical null distribution. Our experience shows that this is slightly unsatisfactory for neuroimaging. We conclude with a compromise between the theoretical and empirical null distribution.
