Thursday, September 26, 2019 — 4:15 PM EDT

**Optimal Transport, Entropy, and Risk Measures on Wiener space**

We discuss the interplay between entropy, large deviations, and optimal couplings on Wiener space.

In particular, we prove a new rescaled version of Talagrand’s transport inequality. As an application, we consider rescaled versions of the entropic risk measure which are sensitive to risks in the fine structure of Brownian paths.
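For context, the classical Talagrand (quadratic transport–entropy) inequality on Wiener space, of which the talk proves a rescaled variant, can be stated as follows; the precise rescaling is not given in the abstract, so only the standard form is recorded here:

```latex
% Talagrand's T_2 inequality: for Wiener measure \gamma and any
% probability measure \mu absolutely continuous with respect to \gamma,
W_2^2(\mu, \gamma) \;\le\; 2\, H(\mu \mid \gamma),
\qquad
H(\mu \mid \gamma) \;=\; \int \log\frac{d\mu}{d\gamma}\, d\mu ,
```

where $W_2$ denotes the quadratic Wasserstein distance (taken with respect to the Cameron–Martin norm on Wiener space) and $H$ is the relative entropy.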

Thursday, September 19, 2019 — 4:00 PM EDT

**Simulation Optimization under Input Model Uncertainty**

Simulation optimization is concerned with identifying the best solution for large, complex, and stochastic physical systems via computer simulation models. Its applications span fields such as transportation, finance, power, and healthcare. A stochastic simulation model is driven by a set of distributions, known as the input model. However, since these distributions are usually estimated from finite real-world data, the simulation output is subject to so-called input model uncertainty. Ignoring input uncertainty raises the risk of selecting an inferior solution in simulation optimization. In this talk, I will first present a new framework called Bayesian Risk Optimization (BRO) that hedges against the risk of input uncertainty in simulation optimization. I will then focus on the problem of optimizing over a finite solution space, known as Ranking and Selection in the statistics literature and as Best-Arm Identification in the multi-armed bandits literature, and present two new algorithms that can handle input uncertainty.
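A schematic version of the BRO formulation may help fix ideas; the notation below is assumed for illustration and is not taken from the abstract. Write $P_\theta$ for the parametric input model, $\Pi(\cdot \mid \mathrm{data})$ for the posterior over $\theta$, and $h(x,\xi)$ for the simulation output at solution $x$:

```latex
\min_{x \in \mathcal{X}} \;
\rho_{\theta \sim \Pi(\cdot \mid \mathrm{data})}
\Bigl[ \, \mathbb{E}_{\xi \sim P_\theta} \bigl[ h(x, \xi) \bigr] \, \Bigr],
```

where $\rho$ is a risk functional applied over the posterior (for instance mean–variance, VaR, or CVaR), so that input uncertainty is hedged rather than ignored when optimizing over $x$.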

Friday, September 13, 2019 — 10:30 AM EDT

**Robust Distortion Risk Measures**

In the presence of uncertainty, robustness of risk measures, which are prominent tools for the assessment of financial risks, is of crucial importance. Distributional uncertainty may be accounted for by providing bounds on the values of a risk measure, the so-called worst- and best-case risk measures. These are defined as the maximal and minimal values a risk measure can attain when the underlying distribution is unknown, typically up to its first moments. However, the resulting bounds are often too wide, and the distributions that attain them too “unrealistic”, to be practically relevant.

We provide sharp bounds for the class of distortion risk measures under constraints on the first two moments combined with a constraint on the Wasserstein distance to a reference distribution. Adding the Wasserstein distance constraint leads to significantly improved bounds and more “realistic” worst-case distributions. Specifically, the worst-case distributions of the two most widely used risk measures, the Value-at-Risk and the Tail-Value-at-Risk, depend on the reference distribution and are thus no longer two-point distributions.
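As a reminder (standard definitions, not specific to this work): a distortion risk measure is induced by a distortion function $g\colon[0,1]\to[0,1]$, increasing with $g(0)=0$ and $g(1)=1$, and the constrained worst-case problem has the schematic form

```latex
\rho_g(X) \;=\; \int_0^\infty g\bigl(1 - F_X(x)\bigr)\, dx
\;-\; \int_{-\infty}^0 \Bigl[ 1 - g\bigl(1 - F_X(x)\bigr) \Bigr]\, dx,
\qquad
\sup_{F} \; \rho_g(F)
\;\; \text{s.t.} \;\;
\mathbb{E}_F[X] = \mu,\;\;
\mathrm{Var}_F(X) = \sigma^2,\;\;
W(F, F_0) \le \varepsilon,
```

with $F_0$ the reference distribution and $\varepsilon$ the Wasserstein tolerance; the symbols $\mu$, $\sigma$, and $\varepsilon$ are illustrative placeholders for the moment and distance constraints described above.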

This is joint work with Carole Bernard, Silvana M. Pesenti, and Steven Vanduffel.

Thursday, September 12, 2019 — 4:00 PM EDT

**Nonparametric failure time with Bayesian Additive Regression Trees**

Bayesian Additive Regression Trees (BART) is a nonparametric machine learning method for continuous, dichotomous, categorical, and time-to-event outcomes. However, survival analysis with BART currently presents some challenges, and the two current approaches each have their pros and cons. Our discrete-time approach is free of restrictive assumptions such as proportional hazards and accelerated failure time (AFT), but it becomes increasingly computationally demanding as the sample size grows. Alternatively, a Dirichlet process mixture approach is computationally friendly, but it suffers from the AFT assumption. We therefore propose to further enhance this latter approach nonparametrically via heteroskedastic BART, which removes the restrictive AFT assumption while maintaining the desirable computational properties.
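To make the contrast concrete, a schematic of the model classes involved (notation assumed for illustration, not taken from the abstract): the AFT assumption ties covariates to log failure time through a homoskedastic error, whereas heteroskedastic BART also lets the scale vary with the covariates:

```latex
\underbrace{\log T \;=\; f(x) + \sigma\,\varepsilon}_{\text{AFT (homoskedastic)}}
\qquad\longrightarrow\qquad
\underbrace{\log T \;=\; f(x) + s(x)\,\varepsilon}_{\text{heteroskedastic BART}},
```

where $f$ is a BART sum-of-trees mean function, $s(x)$ a covariate-dependent scale function, and $\varepsilon$ a standardized error; allowing $s$ to depend on $x$ is what relaxes the AFT restriction.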