
Events - November 2018

Thursday, November 29, 2018 — 4:00 PM EST

Computational Aspects of Robust Optimized Certainty Equivalent and Option Pricing

We present a robust extension, under distributional uncertainty, of the optimized certainty equivalent, which includes the expected shortfall. We show that the resulting infinite-dimensional optimization problem can be reduced to a finite-dimensional one using transport duality methods. Some important cases, such as the expected shortfall, can even be computed explicitly, providing insight into the additional cost stemming from distributional uncertainty.

The general result can further be applied to the explicit computation of robust option prices, where we also provide explicit formulas in the case of call options. We finally address the dual representation of the robust optimized certainty equivalent.
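For context, the classical (non-robust) expected shortfall already has an optimized-certainty-equivalent form, the Rockafellar–Uryasev representation ES_a(X) = inf_eta { eta + E[(X - eta)_+] / (1 - a) }. The following Monte Carlo sketch is my own illustration of that baseline representation, not the speakers' robust method:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal(20_000)   # sample of losses
alpha = 0.95

# OCE form: ES_a(X) = min over eta of  eta + E[(X - eta)_+] / (1 - alpha)
etas = np.linspace(0.0, 4.0, 401)
objective = etas + np.maximum(X[None, :] - etas[:, None], 0.0).mean(axis=1) / (1 - alpha)
es_oce = objective.min()

# direct estimate: average of losses beyond the alpha-quantile (VaR)
var = np.quantile(X, alpha)
es_direct = X[X >= var].mean()
```

The two estimates agree up to Monte Carlo and grid error; the optimal eta in the OCE form is precisely the value-at-risk.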

This talk is based on a joint work with Daniel Bartle and Ludovic Tangpi.

Thursday, November 22, 2018 — 4:00 PM EST

A Bayesian Approach to Joint Modeling of Matrix-valued Imaging Data and Treatment Outcome with Applications to Depression Studies

In this talk, we discuss a unified Bayesian joint modeling framework for studying the association between a binary treatment outcome and a baseline matrix-valued predictor. Specifically, we develop a joint modeling approach relating an outcome to a matrix-valued predictor through a probabilistic formulation of multilinear principal component analysis (MPCA). This framework establishes a theoretical relationship between the outcome and the matrix-valued predictor, although the predictor does not appear explicitly in the model. Simulation studies show that the proposed method is superior to, or competitive with, other methods, such as a two-stage approach and classical principal component regression (PCR), in terms of both prediction accuracy and estimation of the association; its advantage is most notable when the sample size is small and the dimensionality of the imaging covariate is large. Finally, in an application exploring the association between baseline EEG data and a favorable response to treatment in a depression study, our joint modeling approach proves very promising, achieving a substantial improvement in prediction accuracy over competing methods.
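The two-stage baseline mentioned above (estimate principal components first, then regress the outcome on the scores) can be sketched as follows. The low-rank generative setup and a continuous outcome are my own toy assumptions for illustration; the talk itself concerns a binary outcome and a joint Bayesian MPCA formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 200, 50, 3          # samples, (vectorized) predictor dimension, latent components

# toy generative model: latent scores drive both the predictor and the outcome
Z = rng.standard_normal((n, k))
W = rng.standard_normal((k, p))
X = Z @ W + 0.1 * rng.standard_normal((n, p))
y = Z @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(n)

# stage 1: PCA of the predictor (top-k right singular vectors)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:k].T

# stage 2: regress the outcome on the estimated component scores
gamma, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
y_hat = scores @ gamma + y.mean()
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

The weakness of the two-stage route, which motivates joint modeling, is that stage 1 chooses components without looking at the outcome at all.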

Wednesday, November 21, 2018 — 4:00 PM EST

Eigen Portfolio Selection: A Robust Approach to Sharpe Ratio Maximization

We show that even when a covariance matrix is poorly estimated, it is still possible to obtain a robust maximum Sharpe ratio portfolio by exploiting the uneven distribution of estimation errors across principal components. This is accomplished by approximating an investor's view on future asset returns using a few relatively accurate sample principal components. We discuss two approximation methods: the first leads to a subtle connection to existing approaches in the literature, while the second is novel and able to address the main shortcomings of existing methods.
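One simple way to see the idea is a spectral-truncation sketch: form the classical maximum-Sharpe direction w proportional to the inverse covariance applied to the mean, but apply the inverse only on the few leading sample principal components, where estimation error is smallest. The factor setup below is my own illustration, not the authors' estimators:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, k = 30, 60, 3           # assets, observations, retained components

# toy returns with a dominant factor structure plus idiosyncratic noise
B = rng.standard_normal((p, k))
F = rng.standard_normal((n, k))
R = 0.01 * (F @ B.T + 0.5 * rng.standard_normal((n, p))) + 0.001

mu = R.mean(axis=0)
Sigma = np.cov(R, rowvar=False)
lam, V = np.linalg.eigh(Sigma)          # eigenvalues in ascending order
lam, V = lam[::-1], V[:, ::-1]          # reorder to descending

# truncated-inverse max-Sharpe direction: use only the k leading eigenpairs
w = sum((V[:, i] @ mu) / lam[i] * V[:, i] for i in range(k))
w = w / np.abs(w).sum()                 # normalize gross exposure

sharpe = (w @ mu) / np.sqrt(w @ Sigma @ w)
```

Dropping the trailing eigenpairs discards exactly the directions in which the sample covariance (and hence the implied inverse) is least reliable.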


** Pizza & refreshments will be provided **

Everyone welcome!

Friday, November 16, 2018 — 11:00 AM EST

The Ising model: series expansions and new algorithms

We propose new and simple Monte Carlo methods to estimate the partition function of the Ising model. The methods are based on the well-known series expansion of the partition function from statistical physics. For the Ising model, typical Monte Carlo methods work well at high temperature, but fail in the low-temperature regime. We demonstrate that our proposed Monte Carlo methods work differently: they behave particularly well at low temperature. We also compare the accuracy of our estimators with the state-of-the-art variational methods.
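As a point of comparison, the naive Monte Carlo baseline estimates the partition function by uniform sampling, Z = 2^N E_uniform[exp(-beta * H(sigma))], and on a lattice small enough to enumerate it can be checked against the exact sum. The sketch below is my own illustration of that baseline at high temperature (small beta), not the series-expansion estimators of the talk:

```python
import itertools
import numpy as np

L = 3                                   # 3x3 lattice, free boundary conditions
beta = 0.2                              # high temperature
N = L * L

def energy(s):
    """H(sigma) = -sum over nearest-neighbour pairs of s_i * s_j."""
    s = s.reshape(L, L)
    return -(np.sum(s[:, :-1] * s[:, 1:]) + np.sum(s[:-1, :] * s[1:, :]))

# exact partition function by brute-force enumeration of all 2^N spin states
Z_exact = sum(np.exp(-beta * energy(np.array(c)))
              for c in itertools.product([-1, 1], repeat=N))

# naive Monte Carlo: Z = 2^N * E_uniform[exp(-beta * H)]
rng = np.random.default_rng(3)
spins = rng.choice([-1, 1], size=(50_000, N))
Z_mc = 2 ** N * np.mean([np.exp(-beta * energy(s)) for s in spins])
```

At low temperature (large beta) the weight exp(-beta * H) concentrates on the two ground states, uniform samples almost never hit them, and this estimator's variance blows up, which is the regime the proposed methods target.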

Thursday, November 8, 2018 — 4:00 PM EST

Ghost Data

As natural as real data, ghost data is everywhere—it is just data that you cannot see.  We need to learn how to handle it, how to model with it, and how to put it to work.  Some examples of ghost data are (see Sall, 2017):

  (a) Virtual data—it isn’t there until you look at it;

  (b) Missing data—there is a slot to hold a value, but the slot is empty;

  (c) Pretend data—data that is made up;

  (d) Highly sparse data—data whose absence implies a near-zero value; and

  (e) Simulation data—data to answer “what if.”

For example, absence of evidence/data is not evidence of absence; in fact, it can be evidence of something.  Moreover, ghost data extends to other existing areas: hidden Markov chains, two-stage least squares estimation, optimization via simulation, partition models, and topological data, just to name a few.
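Two of the categories above have direct computational analogues: a missing value is typically represented by NaN (a slot that exists but is empty), while highly sparse data stores only the non-zero entries and lets absence imply zero. A small sketch of my own to illustrate:

```python
import numpy as np

# (b) missing data: the slot is there, but it holds no value
x = np.array([1.0, np.nan, 3.0, np.nan, 5.0])
naive_mean = x.mean()        # NaN propagates: the ghost poisons the estimate
ghost_aware = np.nanmean(x)  # ignore the empty slots instead

# (d) highly sparse data: entries that are absent implicitly mean zero
nonzeros = {0: 7.0, 4: 2.0}  # store only the visible entries of a length-6 vector
dense = np.zeros(6)
for i, v in nonzeros.items():
    dense[i] = v
```

Note that ignoring the empty slots, as `nanmean` does, is itself a modeling choice: it silently assumes the data are missing at random.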

Three movies will be discussed in this talk: (1) “The Sixth Sense” (Bruce Willis)—I can see things that you cannot see; (2) “Sherlock Holmes” (Robert Downey Jr.)—absence of expected facts; and (3) “Edge of Tomorrow” (Tom Cruise)—how to speed up your learning (AlphaGo Zero will also be discussed).  It will be helpful if you watch these movies before coming to my talk.  This is an early stage of my research in this area—any feedback from you is deeply appreciated.  Much of the basic idea is highly influenced by Mr. John Sall (JMP-SAS).

Thursday, November 1, 2018 — 4:00 PM EDT

Copula Gaussian graphical models for functional data

We consider the problem of constructing statistical graphical models for functional data; that is, the observations on the vertices are random functions. This type of data is common in medical applications such as EEG and fMRI. Recently published functional graphical models rely on the assumption that the random functions are Hilbert-space-valued Gaussian random elements. We relax this assumption by introducing copula Gaussian random elements in Hilbert spaces, leading to what we call the Functional Copula Gaussian Graphical Model (FCGGM). This model removes the marginal Gaussian assumption but retains the simplicity of the Gaussian dependence structure, which is particularly attractive for large data sets. We develop four estimators, together with their implementation algorithms, for the FCGGM. We establish the consistency and the convergence rates of one of the estimators under different sets of sufficient conditions of varying strength. We compare our FCGGM with the existing functional Gaussian graphical model by simulation, under both non-Gaussian and Gaussian graphical models, and apply our method to an EEG data set to construct brain networks.
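In the scalar (non-functional) case, the copula Gaussian idea reduces to the familiar nonparanormal device: replace each margin by its normal scores before estimating the Gaussian dependence. The following sketch is my own scalar illustration of that transform, not the functional estimators of the talk:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 5_000
rho = 0.6

# latent Gaussian pair with correlation rho; one margin is observed distorted
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
x = np.column_stack([z[:, 0], np.exp(z[:, 1])])   # monotone marginal distortion

def normal_scores(col):
    """Rank-based transform of one margin back to standard normal scores."""
    ranks = col.argsort().argsort() + 1            # ranks 1..n
    return norm.ppf(ranks / (n + 1))

s = np.column_stack([normal_scores(x[:, 0]), normal_scores(x[:, 1])])
rho_hat = np.corrcoef(s, rowvar=False)[0, 1]      # recovers the latent correlation
```

Because the transform depends on the margins only through their ranks, the non-Gaussian margin does no harm, which is the property the FCGGM carries over to random functions.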
