Events

Tuesday, November 26, 2013 4:00 pm - 4:00 pm EST (GMT -05:00)

WatRISQ seminar by Steven Kou, National University of Singapore

Robust measurement of economic tail risk

We prove that the only tail risk measure satisfying both a set of economic axioms proposed by Schmeidler (1989, Econometrica) and the statistical requirement of elicitability (i.e., there exists an objective function such that any reasonable estimator must minimize its expected value) is the median shortfall, which is the median of the tail loss distribution and is also the VaR at a high confidence level.
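
To make the last identity concrete (an illustration added here, not part of the abstract): the empirical median shortfall at level alpha is the median of the losses beyond the alpha-VaR, which coincides with the empirical VaR at level (1 + alpha)/2. A minimal Python sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.standard_t(df=4, size=100_000)  # heavy-tailed loss sample

alpha = 0.99
var_alpha = np.quantile(losses, alpha)       # VaR: the alpha-quantile of the loss

# Median shortfall: the median of the tail loss distribution beyond VaR ...
median_shortfall = np.median(losses[losses > var_alpha])

# ... which is the VaR at the higher confidence level (1 + alpha) / 2.
var_high = np.quantile(losses, (1 + alpha) / 2)

print(f"VaR 99%: {var_alpha:.3f}")
print(f"median shortfall: {median_shortfall:.3f}")  # approximately equal to var_high
print(f"VaR 99.5%: {var_high:.3f}")
```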

Some new phenomena in high-dimensional statistics and optimization

Statistical models in which the ambient dimension is of the same order as, or larger than, the sample size arise frequently in many areas of science and engineering. Examples include sparse regression in genomics, graph selection in social network analysis, and low-rank matrix estimation in video segmentation. Although high-dimensional models of this type date back to the seminal work of Kolmogorov and

Assessing financial model risk


Model risk has a major impact on any financial or insurance risk measurement procedure, so its quantification is a crucial step. In this talk, we introduce three quantitative measures of model risk that arise when choosing a particular reference model within a given class: the absolute, the relative, and the local measure of model risk. Each measure has a specific purpose and so allows for flexibility. We illustrate the various notions with relevant examples to emphasize the practicability and tractability of our approach.
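
The abstract does not spell out the definitions, so the following toy sketch should be read as one plausible formalization rather than the speaker's own: the absolute measure compares the reference model's risk with the worst case over the class, and the relative measure locates the reference between the best and worst cases.

```python
import numpy as np
from scipy import stats

# A hypothetical class of candidate models for a loss, plus a chosen reference.
alpha = 0.99
candidates = {
    "normal": stats.norm(loc=0.0, scale=1.0),
    "student_t": stats.t(df=4),
    "laplace": stats.laplace(scale=0.8),
}
reference = "normal"

# Risk (99% VaR) of the loss under each model in the class.
risks = {name: dist.ppf(alpha) for name, dist in candidates.items()}
rho_ref = risks[reference]
rho_max, rho_min = max(risks.values()), min(risks.values())

# Assumed formalizations -- the talk's precise definitions may differ:
absolute_mr = rho_max - rho_ref  # worst-case risk understatement of the reference
relative_mr = (rho_max - rho_ref) / (rho_max - rho_min)  # position in [0, 1]
# A local measure would restrict the worst/best case to a small
# neighbourhood of the reference model.

print("risks:", {k: round(v, 3) for k, v in risks.items()})
print(f"absolute: {absolute_mr:.3f}, relative: {relative_mr:.3f}")
```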

Uncovering the Mechanisms of General Anesthesia: Where Neuroscience Meets Statistics


General anesthesia is a drug-induced, reversible condition involving unconsciousness, amnesia (loss of memory), analgesia (loss of pain sensation), akinesia (immobility), and hemodynamic stability. I will describe a primary mechanism through which anesthetics create these altered states of arousal. Our studies have allowed us to give a detailed characterization of the neurophysiology of loss and recovery of consciousness, in the case of propofol, and we have demonstrated that the state of general anesthesia can be rapidly reversed by activating specific brain circuits. The success of our research has depended critically on tight coupling of experiments, statistical signal processing and mathematical modeling.

A Machine Learning Approach to Portfolio Risk Management


Risk measurement, valuation and hedging are integral tasks in portfolio risk management for insurance companies and other financial institutions. Portfolio risk arises because the values of the constituent assets and liabilities change over time in response to changes in the underlying risk factors. Quantifying this risk requires modeling the dynamic portfolio value process, which boils down to computing conditional expectations of future cash flows over long time horizons, e.g., up to 40 years and beyond. This is computationally challenging.
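
To see why (an illustration added here, not from the talk): the brute-force approach is nested Monte Carlo, which re-simulates the entire remaining horizon inside every outer scenario. A toy sketch with a hypothetical one-factor portfolio:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy portfolio: the terminal cash flow at T = 40 depends on a risk factor
# following a Gaussian random walk; we want the portfolio value, i.e. the
# conditional expectation of the cash flow, at the risk horizon t = 1.
T, n_outer, n_inner = 40, 1_000, 1_000

x1 = rng.normal(size=n_outer)  # outer scenarios: risk factor at t = 1

# Inner step: re-simulate the remaining T - 1 years within each outer scenario
# (the sum of the i.i.d. N(0, 1) increments is drawn in one shot).
xT = x1[:, None] + rng.normal(scale=np.sqrt(T - 1), size=(n_outer, n_inner))
cash_flow = np.maximum(xT, 0.0)     # hypothetical option-like payoff
value_t1 = cash_flow.mean(axis=1)   # conditional expectation per scenario

# The cost grows as n_outer * n_inner inner paths (times the horizon length
# when paths must be simulated step by step), which is what makes a learned
# approximation of the conditional expectation attractive.
print(value_t1[:5])
```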

This lecture presents a framework for dynamic portfolio risk management in discrete time that builds on machine learning theory. We learn the replicating martingale of the portfolio from a finite sample of its terminal cumulative cash flow. The learned replicating martingale is in closed form thanks to a suitable choice of reproducing kernel Hilbert space. We develop an asymptotic theory, proving convergence and a central limit theorem, and we derive finite-sample error bounds and concentration inequalities. As an application, we compute the value at risk and expected shortfall of the one-year loss of some stylized portfolios.
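
A minimal sketch of the regression idea, using off-the-shelf kernel ridge regression rather than the talk's closed-form replicating-martingale construction (the payoff, kernel, and hyperparameters below are illustrative assumptions):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(2)

# Simulate the risk factor at t = 1 and the terminal cumulative cash flow at T = 40.
n = 2_000
x1 = rng.normal(size=(n, 1))                    # risk factor at the risk horizon
shocks = rng.normal(scale=np.sqrt(39), size=n)  # remaining 39 years of increments
cash_flow = np.maximum(x1[:, 0] + shocks, 0.0)  # hypothetical option-like payoff

# Regress the terminal cash flow on the time-1 risk factor in an RKHS; the
# fitted function approximates the conditional expectation, i.e. the value at t = 1.
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5).fit(x1, cash_flow)
value_t1 = model.predict(x1)

# One-year loss (zero interest rates for simplicity) and its risk measures.
loss = cash_flow.mean() - value_t1
var_99 = np.quantile(loss, 0.99)
es_99 = loss[loss > var_99].mean()  # expected shortfall beyond the 99% VaR
print(f"VaR 99%: {var_99:.3f}, ES 99%: {es_99:.3f}")
```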

Monday, January 20, 2020 10:00 am - 10:00 am EST (GMT -05:00)

Department seminar by Jared Huling, Ohio State University

Sufficient Dimension Reduction for Populations with Structured Heterogeneity

Risk modeling has become a crucial component in the effective delivery of health care. A key challenge in building effective risk models is accounting for patient heterogeneity among the diverse populations present in health systems. Incorporating heterogeneity based on the presence of various comorbidities into risk models is crucial for the development of tailored care strategies, as it can provide patient-centered information and result in more accurate risk prediction. Yet, in the presence of high-dimensional covariates, accounting for this type of heterogeneity can exacerbate estimation difficulties even with large sample sizes. To address this challenge, we propose a flexible and interpretable risk modeling approach based on semiparametric sufficient dimension reduction. The approach accounts for patient heterogeneity, borrows strength in estimation across related subpopulations to improve both estimation efficiency and interpretability, and can serve as a useful exploratory tool or as a powerful predictive model. In simulated examples, we show that our approach can improve estimation performance in the presence of heterogeneity and is quite robust to deviations from its key underlying assumption. We demonstrate the utility of our approach in the prediction of hospital admission risk for a large health system when tested on further follow-up data.
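
For readers unfamiliar with sufficient dimension reduction: the goal is to find a few linear combinations of the covariates that carry all the information about the response. The sketch below implements classical sliced inverse regression (SIR) on simulated data; it is a standard building block, not the speaker's semiparametric method.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Sliced inverse regression (SIR): estimate directions of the central subspace."""
    n, p = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    # Whiten the predictors: Z has (approximately) identity covariance.
    L_inv_T = np.linalg.inv(np.linalg.cholesky(cov)).T
    Z = (X - mu) @ L_inv_T
    # Slice on the response and average the whitened predictors within each slice.
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original predictor scale.
    _, vecs = np.linalg.eigh(M)
    return L_inv_T @ vecs[:, ::-1][:, :n_dirs]

# Toy check: the response depends on X only through a single linear index.
rng = np.random.default_rng(3)
X = rng.normal(size=(2_000, 5))
beta = np.array([1.0, -1.0, 0.0, 0.0, 0.0]) / np.sqrt(2)
y = np.tanh(X @ beta) + 0.1 * rng.normal(size=2_000)
b_hat = sir_directions(X, y).ravel()
print(b_hat / np.linalg.norm(b_hat))  # roughly proportional to beta (up to sign)
```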

Tuesday, January 21, 2020 10:00 am - 10:00 am EST (GMT -05:00)

Department seminar by Lu Yang, University of Amsterdam

Diagnostics for Regression Models with Discrete Outcomes

Making informed decisions about model adequacy has been an outstanding issue for regression models with discrete outcomes. Standard residuals for such outcomes, such as Pearson and deviance residuals, often show a large discrepancy from the hypothesized pattern even under the true model, and they are not informative, especially when the data are highly discrete. To fill this gap, we propose a surrogate empirical residual distribution function for general discrete (e.g., ordinal and count) outcomes that serves as an alternative to the empirical Cox-Snell residual distribution function. When at least one continuous covariate is available, we show asymptotically that the proposed function converges uniformly to the identity function under the correctly specified model, even with highly discrete (e.g., binary) outcomes. Through simulation studies, we demonstrate empirically that the proposed surrogate empirical residual distribution function is highly effective for various diagnostic tasks: it is close to the hypothesized pattern under the true model and departs significantly from this pattern under model misspecification.
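
One closely related construction that makes the idea concrete (a sketch, not necessarily the speaker's exact proposal): randomized-quantile-style residuals u = F(y - 1) + V [F(y) - F(y - 1)], with V uniform on [0, 1], are uniformly distributed under the true model even for counts, unlike Pearson or deviance residuals.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Poisson regression with one continuous covariate; fit the (true) model.
n = 2_000
x = rng.uniform(-1, 1, size=n)
y = rng.poisson(np.exp(0.5 + 1.2 * x))

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu_hat = fit.predict(X)

# Surrogate residuals via jittering: uniform on [0, 1] under the true model,
# in contrast to Pearson/deviance residuals for highly discrete outcomes.
lo = stats.poisson.cdf(y - 1, mu_hat)  # F(y - 1); equals 0 when y = 0
hi = stats.poisson.cdf(y, mu_hat)      # F(y)
u = lo + rng.uniform(size=n) * (hi - lo)

# Diagnostic: the empirical CDF of u should track the identity function.
max_dev = np.max(np.abs(np.sort(u) - np.arange(1, n + 1) / n))
print(f"max deviation from the identity: {max_dev:.3f}")  # small under the true model
```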

Wednesday, January 22, 2020 10:00 am - 10:00 am EST (GMT -05:00)

Department seminar by Lin Liu, Harvard University

The possibility of nearly assumption-free inference in causal inference

In causal effect estimation, the state of the art is the so-called double machine learning (DML) estimator, which combines the benefits of doubly robust estimation, sample splitting, and the use of machine learning methods to estimate nuisance parameters. The validity of the confidence interval associated with a DML estimator relies in large part on the complexity of the nuisance parameters and on how close the machine learning estimators are to those nuisance parameters. Until we have a complete understanding of the theory of many machine learning methods, including deep neural networks, even a DML estimator may have a bias so large that it prohibits valid inference. In this talk, we describe a nearly assumption-free procedure that can either criticize the invalidity of the Wald confidence interval associated with DML estimators of some causal effect of interest or falsify the certificates (i.e., the mathematical conditions) that, if true, could ensure valid inference. Essentially, we test the null hypothesis that the bias of an estimator is smaller than a fraction $\rho$ of its standard error. Our test is valid under the null without requiring any complexity (smoothness or sparsity) assumptions on the nuisance parameters or on the properties of the machine learning estimators, and it may have power to inform analysts that they need something other than DML estimators or Wald confidence intervals for inference. This talk is based on joint work with Rajarshi Mukherjee and James M. Robins.
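
For context, here is a compact sketch of a cross-fitted, doubly robust (AIPW-style) DML estimator of an average treatment effect on simulated data; the talk's bias test itself is not reproduced, and the data-generating process and learners below are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(5)

# Simulated observational data: confounder X, treatment A, outcome Y (true ATE = 1).
n = 4_000
X = rng.normal(size=(n, 3))
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
Y = 1.0 * A + X[:, 0] + rng.normal(size=n)

psi = np.zeros(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    # Nuisance estimates come from the other fold (cross-fitting).
    pi = RandomForestClassifier(random_state=0).fit(X[train], A[train])
    m1 = RandomForestRegressor(random_state=0).fit(
        X[train][A[train] == 1], Y[train][A[train] == 1])
    m0 = RandomForestRegressor(random_state=0).fit(
        X[train][A[train] == 0], Y[train][A[train] == 0])

    p = np.clip(pi.predict_proba(X[test])[:, 1], 0.01, 0.99)
    mu1, mu0 = m1.predict(X[test]), m0.predict(X[test])
    # Doubly robust (AIPW) score per observation.
    psi[test] = (mu1 - mu0
                 + A[test] * (Y[test] - mu1) / p
                 - (1 - A[test]) * (Y[test] - mu0) / (1 - p))

ate, se = psi.mean(), psi.std(ddof=1) / np.sqrt(n)
print(f"ATE: {ate:.3f} +/- {1.96 * se:.3f}")  # Wald CI; validity hinges on nuisance bias
```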

Friday, January 24, 2020 10:00 am - 10:00 am EST (GMT -05:00)

Department seminar by Michael Gallaugher, McMaster University

Clustering and Classification of Three-Way Data

Clustering and classification is the process of finding and analyzing underlying group structure in heterogeneous data and is fundamental to computational statistics and machine learning. In the past, relatively simple techniques could be used for clustering; however, as data become increasingly complex, these methods are often inadvisable and in some cases infeasible. One such example is the analysis of three-way data, where each data point is represented as a matrix instead of a traditional vector. Examples of three-way data include greyscale images and multivariate longitudinal data. In this talk, recent methods for clustering three-way data will be presented, including methods for high-dimensional and skewed three-way data. Both simulated and real data will be used for illustration, and future directions and extensions will be discussed.
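
To illustrate what three-way data look like (a baseline sketch, not the matrix-variate mixture models of the talk): the code below simulates matrix-valued observations and clusters them with a Gaussian mixture on the vectorized matrices, which ignores the row/column structure that matrix-variate models exploit.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)

# Three-way data: n observations, each a 4 x 3 matrix (e.g., 3 variables at 4 times).
n, r, c = 300, 4, 3
means = [np.zeros((r, c)), np.full((r, c), 2.0)]  # two hypothetical groups
labels = rng.integers(0, 2, size=n)
data = np.stack([means[k] + rng.normal(size=(r, c)) for k in labels])

# Baseline: vectorize each matrix and fit a Gaussian mixture. Matrix-variate
# mixtures instead keep the r x c structure via separate row and column
# covariances, shrinking the covariance parameters from O((rc)^2) to O(r^2 + c^2).
gmm = GaussianMixture(n_components=2, random_state=0).fit(data.reshape(n, -1))
pred = gmm.predict(data.reshape(n, -1))

accuracy = max(np.mean(pred == labels), np.mean(pred != labels))  # label switching
print(f"clustering accuracy: {accuracy:.2f}")
```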