Friday, November 29, 2019 — 10:30 AM EST

More information about this seminar will be added as soon as possible.

Thursday, November 28, 2019 — 4:00 PM EST

More information about this seminar will be added as soon as possible.

Friday, November 22, 2019 — 10:30 AM EST

More information about this seminar will be added as soon as possible.

Thursday, November 21, 2019 — 4:00 PM EST

More information about this seminar will be added as soon as possible.

Friday, November 15, 2019 — 10:30 AM EST

More information about this seminar will be added as soon as possible.

Thursday, November 14, 2019 — 4:00 PM EST

On Khintchine's Inequality for Statistics


In complex estimation and hypothesis testing settings, it may be impossible to compute p-values or construct confidence intervals using classical analytic approaches like asymptotic normality. Instead, one often relies on randomization and resampling procedures such as the bootstrap or the permutation test. But these approaches carry the computational burden of large-scale Monte Carlo runs. To remove this burden, we develop analytic methods for hypothesis testing and confidence intervals by working directly with the discrete finite-sample distributions of the randomized test statistic. The primary tool we use to achieve such results is Khintchine's inequality, together with its extensions and generalizations.
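
For reference, the classical Khintchine inequality (the talk concerns its extensions and generalizations) states that for i.i.d. Rademacher signs $\varepsilon_1,\dots,\varepsilon_n$, real coefficients $a_1,\dots,a_n$, and any $0<p<\infty$,

\[
A_p \Bigl(\sum_{i=1}^n a_i^2\Bigr)^{1/2} \;\le\; \Bigl(\mathbb{E}\Bigl|\sum_{i=1}^n \varepsilon_i a_i\Bigr|^p\Bigr)^{1/p} \;\le\; B_p \Bigl(\sum_{i=1}^n a_i^2\Bigr)^{1/2},
\]

with constants $A_p, B_p$ depending only on $p$. Moment bounds of this type are what make analytic tail bounds for sign-randomized statistics possible without Monte Carlo.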

Thursday, November 7, 2019 — 4:00 PM EST

Nonregular and Minimax Estimation of Individualized Thresholds in High Dimension with Binary Responses


Given a large number of covariates $\mathbf{Z}$, we consider the estimation of a high-dimensional parameter $\boldsymbol{\theta}$ in an individualized linear threshold $\boldsymbol{\theta}^T\mathbf{Z}$ for a continuous variable $X$, which minimizes the disagreement between $\mathrm{sign}(X-\boldsymbol{\theta}^T\mathbf{Z})$ and a binary response $Y$. While the problem can be formulated in the M-estimation framework, minimizing the corresponding empirical risk function is computationally intractable due to the discontinuity of the sign function. Moreover, estimating $\boldsymbol{\theta}$ even in the fixed-dimensional setting is known to be a nonregular problem, leading to nonstandard asymptotic theory. To tackle the computational and theoretical challenges in the estimation of the high-dimensional parameter $\boldsymbol{\theta}$, we propose an empirical risk minimization approach based on a regularized smoothed non-convex loss function. The Fisher consistency of the proposed method is guaranteed as the bandwidth of the smoothed loss shrinks to 0. Statistically, we show that the finite-sample error bound for estimating $\boldsymbol{\theta}$ in the $\ell_2$ norm is $(s\log d/n)^{\beta/(2\beta+1)}$, where $d$ is the dimension of $\boldsymbol{\theta}$, $s$ is the sparsity level, $n$ is the sample size, and $\beta$ is the smoothness of the conditional density of $X$ given the response $Y$ and the covariates $\mathbf{Z}$. The convergence rate is nonstandard and slower than that in classical Lasso problems. Furthermore, we prove that the resulting estimator is minimax rate optimal up to a logarithmic factor. Lepski's method is developed to achieve adaptation to the unknown sparsity $s$ and smoothness $\beta$. Computationally, an efficient path-following algorithm is proposed to compute the solution path. We show that this algorithm achieves a geometric rate of convergence for computing the whole path. Finally, we evaluate the finite-sample performance of the proposed estimator in simulation studies and a real data analysis from the ChAMP (Chondral Lesions And Meniscus Procedures) Trial.
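
The abstract does not spell out the smoothed loss; a minimal sketch consistent with the description, assuming $Y_i \in \{-1,1\}$, a smooth surrogate $S_h$ for the sign function with bandwidth $h$, and an $\ell_1$ penalty, is

\[
\widehat{R}_{n,h}(\boldsymbol{\theta}) \;=\; \frac{1}{2n}\sum_{i=1}^{n}\Bigl\{1 - Y_i\, S_h\!\bigl(X_i-\boldsymbol{\theta}^{T}\mathbf{Z}_i\bigr)\Bigr\} \;+\; \lambda\,\lVert\boldsymbol{\theta}\rVert_1,
\qquad S_h(t)=2\Phi(t/h)-1,
\]

which recovers the sign-based disagreement loss as $h \to 0$; the nonconvexity of $S_h$ is what the path-following algorithm has to handle.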

Thursday, October 31, 2019 — 4:00 PM EDT

Variable selection for structured high-dimensional data using known and novel graph information


Variable selection for structured high-dimensional covariates lying on an underlying graph has drawn considerable interest. However, most existing methods may not scale to high-dimensional settings involving tens of thousands of variables lying on known pathways, as is the case in genomics studies, and they assume that the graph information is fully known. This talk will focus on addressing these two challenges. In the first part, I will present an adaptive Bayesian shrinkage approach which incorporates known graph information through shrinkage parameters and is scalable to high-dimensional settings (e.g., p ~ 100,000 or millions). We also establish theoretical properties of the proposed approach for fixed and diverging p. In the second part, I will tackle the issue that graph information is not fully known. For example, the role of miRNAs in regulating gene expression is not well understood, and the miRNA regulatory network is often not validated. We propose an approach that treats unknown graph information as missing data (i.e., missing edges), introduce the idea of imputing the unknown graph information, and define the imputed information as the novel graph information. In addition, we propose a hierarchical group penalty to encourage sparsity at both the pathway level and the within-pathway level, which, combined with the imputation step, allows for the incorporation of known and novel graph information. The methods are assessed via simulation studies and are applied to analyses of cancer data.
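
For illustration only (not necessarily the penalty used in the talk), a hierarchical group penalty that induces sparsity at both the pathway level and the within-pathway level can take the sparse-group-lasso form

\[
P_{\lambda_1,\lambda_2}(\boldsymbol{\beta}) \;=\; \lambda_1 \sum_{g=1}^{G} \sqrt{p_g}\,\lVert \boldsymbol{\beta}_g \rVert_2 \;+\; \lambda_2\,\lVert \boldsymbol{\beta} \rVert_1,
\]

where $\boldsymbol{\beta}_g$ collects the coefficients of the $p_g$ variables in pathway $g$: the group term can zero out an entire pathway, while the $\ell_1$ term selects variables within the retained pathways.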

Friday, October 25, 2019 — 10:30 AM EDT

On the properties of Lambda-quantiles


We present a systematic treatment of Lambda-quantiles, a family of generalized quantiles introduced in Frittelli et al. (2014) under the name Lambda Value at Risk. We consider various possible definitions and derive their fundamental properties, mainly working under the assumption that the threshold function Lambda is nonincreasing. We refine some of the weak continuity results derived in Burzoni et al. (2017), showing that the weak continuity properties of Lambda-quantiles are essentially similar to those of the usual quantiles. Further, we provide an axiomatic foundation for Lambda-quantiles based on a locality property that generalizes a similar axiomatization of the usual quantiles based on the ordinal covariance property given in Chambers (2009). We study scoring functions consistent with Lambda-quantiles and, as an extension of the usual quantile regression, we introduce Lambda-quantile regression, for which we provide two financial applications.

(joint work with Ilaria Peri).
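
For orientation, one common definition (sign and inequality conventions vary across the papers cited above): for a distribution function $F$ and a threshold function $\Lambda:\mathbb{R}\to(0,1)$,

\[
q_{\Lambda}(F) \;=\; \inf\{x \in \mathbb{R} : F(x) > \Lambda(x)\},
\]

which reduces to the usual $\lambda$-quantile when $\Lambda \equiv \lambda$ is constant.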

Friday, October 18, 2019 — 8:00 AM to Saturday, October 19, 2019 — 5:00 PM EDT

First student conference in Statistics, Actuarial Science, and Finance

Thursday, October 17, 2019 — 4:00 PM EDT

Building Deep Statistical Thinking for Data Science 2020: Privacy Protected Census, Gerrymandering, and Election


The year 2020 will be a busy one for statisticians and, more generally, data scientists. The US Census Bureau has announced that the data from the 2020 Census will be released under differential privacy (DP) protection, which in layperson's terms means adding noise to the data. While few would argue against protecting data privacy, many researchers, especially in the social sciences, are concerned about whether the right trade-offs between data privacy and data utility are being made. DP protection also has a direct impact on redistricting, an issue that is already complicated enough with accurate counts because of the need to guard against excessive gerrymandering. The central statistical problem there is a rather unique one: how to determine whether a realization is an outlier with respect to a null distribution, when that null distribution itself cannot be fully determined? The 2020 US election will be another highly watched event, with many groups already busy making predictions. Will the lessons from predicting the 2016 US election be learned, or will the failures be repeated? This talk invites the audience on a journey of deep statistical thinking prompted by these questions, regardless of whether they have any interest in the US Census or politics.
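
As a minimal illustration of what "adding noise" means under differential privacy (the Census Bureau's actual TopDown algorithm is considerably more elaborate), here is the textbook Laplace mechanism; the count, budget, and function name below are purely illustrative.

```python
import numpy as np

def laplace_mechanism(true_count, epsilon, sensitivity=1.0, rng=None):
    """Release a count with epsilon-differential privacy by adding Laplace
    noise with scale sensitivity/epsilon.  Illustrative sketch only."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: a hypothetical block-level population count of 412, privacy budget epsilon = 0.5
print(laplace_mechanism(412, epsilon=0.5))
```

A smaller epsilon gives stronger privacy but noisier released counts, which is exactly the privacy-utility trade-off the talk refers to.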


Tuesday, October 15, 2019 — 4:00 PM EDT

Graphical Models and Structural Learning for Extremes


Conditional independence, graphical models and sparsity are key notions for parsimonious models in high dimensions and for learning structural relationships in the data. The theory of multivariate and spatial extremes describes the risk of rare events through asymptotically justified limit models such as max-stable and multivariate Pareto distributions. Statistical modeling in this field has been limited to moderate dimensions so far, owing to complicated likelihoods and a lack of understanding of the underlying probabilistic structures.

We introduce a general theory of conditional independence for multivariate Pareto distributions that allows us to define graphical models and sparsity for extremes. New parametric models can be built in a modular way, and statistical inference can be simplified to lower-dimensional margins. We define the extremal variogram, a new summary statistic that turns out to be a tree metric and therefore allows an underlying tree structure to be learned efficiently through Prim's algorithm. For a popular parametric class of multivariate Pareto distributions we show that, similarly to the Gaussian case, the sparsity pattern of a general graphical model can be easily read off from suitable inverse covariance matrices. This enables the definition of an extremal graphical lasso that enforces sparsity in the dependence structure. We illustrate the results with an application to flood risk assessment on the Danube river.

This is joint work with Adrien Hitz. A preprint is available at https://arxiv.org/abs/1812.01734.
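
The tree-learning step can be sketched as follows: treat pairwise extremal-variogram estimates as edge weights and extract a minimum-weight spanning tree (any MST algorithm, Prim's included, returns the same tree when edge weights are distinct). The matrix `Gamma` below is a made-up toy input, not data from the talk.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def learn_tree(extremal_variogram):
    """Recover a tree from a d x d matrix of pairwise extremal variogram
    estimates (a tree metric under the model discussed above) by taking
    a minimum-weight spanning tree."""
    mst = minimum_spanning_tree(extremal_variogram)  # sparse matrix holding the tree edges
    rows, cols = mst.nonzero()
    return list(zip(rows.tolist(), cols.tolist()))   # edge list of the learned tree

# Toy usage with a hypothetical 4-node variogram matrix
Gamma = np.array([[0.0, 1.0, 2.1, 3.0],
                  [1.0, 0.0, 1.2, 2.1],
                  [2.1, 1.2, 0.0, 1.1],
                  [3.0, 2.1, 1.1, 0.0]])
print(learn_tree(Gamma))  # recovers the chain 0-1-2-3
```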

Friday, October 11, 2019 — 10:30 AM EDT

Precision Factor Investing: Avoiding Factor Traps by Predicting Heterogeneous Effects of Firm Characteristics


We apply ideas from causal inference and machine learning to estimate the sensitivity of future stock returns to observable characteristics like size, value, and momentum. By analogy with the informal notion of a "value trap," we distinguish "characteristic traps" (stocks with weak sensitivity) from "characteristic responders" (those with strong sensitivity). We classify stocks by interpreting these distinctions as heterogeneous treatment effects (HTE), with characteristics playing the role of treatments and future returns the role of responses. The classification exploits a large set of stock features and recent work applying machine learning to HTE. Long-short strategies based on sorting stocks on characteristics perform significantly better when applied to characteristic responders than to traps. A strategy based on the difference between these long-short returns profits from the predictability of HTE rather than from factors associated with the characteristics themselves. This is joint work with Pu He.
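
The abstract does not name the specific HTE estimator; as a hedged sketch of the general recipe, a simple T-learner scores each stock by the difference between return models fit on "treated" (high-characteristic) and "control" (low-characteristic) stocks. All names below are hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def hte_t_learner(features, treated, future_returns):
    """T-learner sketch: fit separate future-return models on treated and
    control stocks, then score every stock by the difference in predicted
    return.  features: (n_stocks, n_features) array; treated: boolean
    array; future_returns: array of next-period returns."""
    m1 = GradientBoostingRegressor().fit(features[treated], future_returns[treated])
    m0 = GradientBoostingRegressor().fit(features[~treated], future_returns[~treated])
    return m1.predict(features) - m0.predict(features)  # estimated effect per stock
```

Stocks with a small estimated effect would be flagged as characteristic traps, those with a large effect as responders.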

Thursday, October 10, 2019 — 4:00 PM EDT

Estimating Time-Varying Directed Networks


The problem of modeling the dynamical regulation process within a gene network has been of great interest for a long time. We propose to model this dynamical system with a large number of nonlinear ordinary differential equations (ODEs), in which the regulation function is estimated directly from data without any parametric assumption. Most current research assumes the gene regulation network is static, but in reality the connections and regulation functions of the network may change with time or environment. This change is reflected in our dynamical model by allowing the regulation function to vary with gene expression and forcing it to be zero when no regulation occurs. We introduce a statistical method called functional SCAD to estimate a time-varying, sparse and directed gene regulation network and, simultaneously, to provide a smooth estimate of the regulation function and identify the intervals in which no regulation effect exists. The finite-sample performance of the proposed method is investigated in a Monte Carlo simulation study. Our method is demonstrated by estimating a time-varying directed gene regulation network of 20 genes involved in muscle development during the embryonic stage of Drosophila melanogaster.
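
In schematic form (notation introduced here for illustration, not taken from the talk), the model is a system of ODEs for the expression levels $X_1(t),\dots,X_p(t)$,

\[
\frac{dX_i(t)}{dt} \;=\; \sum_{j=1}^{p} f_{ij}\bigl(X_j(t)\bigr), \qquad i=1,\dots,p,
\]

where each regulation function $f_{ij}$ is left nonparametric (e.g., expanded in a spline basis) and the functional SCAD penalty shrinks $f_{ij}$ to exactly zero, entirely or over intervals, where gene $j$ does not regulate gene $i$; the surviving nonzero $f_{ij}$ define the directed edges of the network.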

Thursday, October 3, 2019 — 4:00 PM EDT

Real World EHR Big Data: Challenges and Opportunities


Real-world EHR and health care Big Data may fundamentally change how we evaluate therapeutic treatments and clinical pathways in real-world settings. Big EHR data may also allow us to identify specific patient populations for a specific treatment, so that the concept of personalized treatment can be implemented and deployed directly on the EHR system. However, it is quite challenging to use real-world data for treatment assessment and disease prediction, for a variety of reasons. In this talk, I will share our experiences with EHR and health care Big Data research. First, I will discuss the basic infrastructure and the multi-disciplinary team that are necessary in order to deal with EHR data. Then I will use an example of a subarachnoid hemorrhage (SAH) study to demonstrate an eight-step procedure that we have developed for using EHR data for research purposes. In particular, EHR data extraction, cleaning, pre-processing and preparation are the major steps that require more novel statistical methods. Finally, I will discuss the challenges and opportunities for statisticians in using EHR data for research.

Thursday, September 26, 2019 — 4:15 PM EDT

Optimal Transport, Entropy, and Risk Measures on Wiener space


We discuss the interplay between entropy, large deviations, and optimal couplings on Wiener space.

In particular, we prove a new rescaled version of Talagrand's transport inequality. As an application, we consider rescaled versions of the entropic risk measure that are sensitive to risks in the fine structure of Brownian paths.
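
For context, Talagrand's quadratic transport inequality, in the form usually stated on Wiener space (the rescaled version proved in the talk is not reproduced here), reads

\[
W_2(\mu,\gamma)^2 \;\le\; 2\,H(\mu \mid \gamma)
\]

for the Wiener measure $\gamma$ and any probability measure $\mu \ll \gamma$, where $W_2$ is the quadratic Wasserstein distance (with the Cameron-Martin norm as transport cost) and $H(\mu\mid\gamma)$ is the relative entropy.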

Thursday, September 19, 2019 — 4:00 PM EDT

Simulation Optimization under Input Model Uncertainty


Simulation optimization is concerned with identifying the best solution for large, complex and stochastic physical systems via computer simulation models. Its applications span fields such as transportation, finance, power, and healthcare. A stochastic simulation model is driven by a set of distributions, known as the “input model”. However, since these distributions are usually estimated from finite real-world data, the simulation output is subject to so-called “input model uncertainty”. Ignoring input uncertainty can lead to a higher risk of selecting an inferior solution in simulation optimization. In this talk, I will first present a new framework called Bayesian Risk Optimization (BRO) that hedges against the risk of input uncertainty in simulation optimization. Then I will focus on the problem of optimizing over a finite solution space, known as Ranking and Selection in the statistics literature and as Best-Arm Identification in the Multi-Armed Bandits literature, and present two new algorithms that can handle input uncertainty.
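
Schematically (notation mine, as a hedged sketch): if the true input distribution $P^c$ were known, one would solve $\min_{x} \mathbb{E}_{\xi \sim P^c}[h(x,\xi)]$. BRO instead treats the input parameter $\theta$ as uncertain with posterior $\pi(\theta\mid\text{data})$ and optimizes a risk functional of the posterior distribution of the objective,

\[
\min_{x}\; \rho_{\theta \sim \pi(\cdot\,\mid \text{data})}\Bigl[\, \mathbb{E}_{\xi \sim P_\theta}\bigl[h(x,\xi)\bigr] \Bigr],
\]

where $\rho$ can be, for example, the posterior mean, a mean-variance combination, or CVaR, trading off expected performance against the risk caused by input uncertainty.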

Friday, September 13, 2019 — 10:30 AM EDT

Robust Distortion Risk Measures


In the presence of uncertainty, robustness of risk measures, which are prominent tools for the assessment of financial risks, is of crucial importance. Distributional uncertainty may be accounted for by providing bounds on the values of a risk measure, the so-called worst- and best-case risk measures. The worst-case (best-case) risk measure is determined as the maximal (minimal) value a risk measure can attain when the underlying distribution is unknown, typically up to its first moments. However, these bounds are typically too wide, and the distributions that attain them too “unrealistic”, to be practically relevant.

We provide sharp bounds for the class of distortion risk measures with constraints on the first two moments, combined with a constraint on the Wasserstein distance to a reference distribution. Adding the Wasserstein distance constraint leads to significantly improved bounds and more “realistic” worst-case distributions. Specifically, the worst-case distributions of the two most widely used risk measures, the Value-at-Risk and the Tail-Value-at-Risk, depend on the reference distribution and thus are no longer two-point distributions.
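
In schematic form (conventions for distortion risk measures vary; this is one of them, and the paper's exact formulation may differ), the worst-case problem is

\[
\sup_{F}\;\Bigl\{\, \int_0^1 F^{-1}(u)\,\mathrm{d}g(u) \;:\; \mathbb{E}_F[X]=\mu,\ \ \mathrm{Var}_F(X)=\sigma^2,\ \ W_2(F,F_0)\le \varepsilon \Bigr\},
\]

where $g$ is the distortion function, $F_0$ the reference distribution, and $W_2$ the 2-Wasserstein distance; dropping the Wasserstein constraint recovers the classical moment-only bounds discussed above.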

This is joint work with Carole Bernard, Silvana M. Pesenti, and Steven Vanduffel.

Thursday, September 12, 2019 — 4:00 PM EDT

Nonparametric failure time with Bayesian Additive Regression Trees


Bayesian Additive Regression Trees (BART) is a nonparametric machine learning method for continuous, dichotomous, categorical and time-to-event outcomes. However, survival analysis with BART currently presents some challenges, and the two current approaches each have their pros and cons. Our discrete-time approach is free of restrictive assumptions such as proportional hazards and accelerated failure time (AFT), but it becomes increasingly computationally demanding as the sample size grows. Alternatively, a Dirichlet Process Mixture approach is computationally friendly, but it relies on the AFT assumption. Therefore, we propose to further enhance this latter approach nonparametrically via heteroskedastic BART, which removes the restrictive AFT assumption while maintaining its desirable computational properties.
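
A hedged sketch of the discrete-time data expansion that drives the computational cost mentioned above (column names and the helper are illustrative, not the authors' code): each subject contributes one binary record per distinct event time up to their own follow-up time, so the working data set grows roughly as the sample size times the number of distinct event times.

```python
import pandas as pd

def person_period(df, time_col="time", event_col="event", id_col="id"):
    """Expand (id, follow-up time, event indicator) survival data into
    person-period format: one row per subject per distinct event time up
    to that subject's follow-up time, with y = 1 only in the interval
    where the event occurs."""
    grid = sorted(df.loc[df[event_col] == 1, time_col].unique())
    rows = []
    for _, subj in df.iterrows():
        for t in grid:
            if t > subj[time_col]:
                break
            rows.append({id_col: subj[id_col], "t": t,
                         "y": int(subj[event_col] == 1 and t == subj[time_col])})
    return pd.DataFrame(rows)
```

A binary classifier (here, probit BART) fit to the expanded rows then estimates the discrete-time hazard at each interval.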

Thursday, August 22, 2019 — 4:00 PM EDT

Development and Application of A Measure of Prediction Accuracy for Binary and Censored Time to Event Data

Clinical preventive care often uses risk scores to screen populations for high-risk patients for targeted intervention. Typically the prevalence is low, meaning extremely imbalanced classes. Positive predictive value and the true positive fraction have been recognized as relevant metrics in this imbalanced setting. However, for the commonly used continuous or ordinal risk scores, these measures require a subjective cut-off threshold to dichotomize the score and predict class membership. In this talk, I describe a summary index of positive predictive value (AP) for binary and event time outcome data. Like the widely used AUC, AP is rank based and a semi-proper scoring rule. We also study the behavior of the incremental values of AUC, AP and the strictly proper scaled Brier score (sBrier) when an additional risk factor Z is included. It is shown that the agreement between the incremental values of AP and sBrier increases as the class imbalance increases, while the agreement between AUC and sBrier decreases as the class imbalance increases. Under certain configurations, the changes in AP and sBrier indicate worse prediction performance when Z is added to the risk profile, while the changes in AUC almost always favor the addition of Z. Several real-world examples are used throughout the talk to illustrate and contrast these metrics.
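
To see the behavior under class imbalance concretely, here is a small synthetic simulation using scikit-learn's average precision (a close binary-outcome analogue of the AP index; the event-time version in the talk is more general) alongside the AUC.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
n, prevalence = 100_000, 0.01                       # extremely imbalanced classes
y = rng.binomial(1, prevalence, size=n)             # rare events
score = np.where(y == 1, rng.normal(1.0, 1.0, n),   # a moderately informative,
                 rng.normal(0.0, 1.0, n))           # noisy risk score

print("AUC:", roc_auc_score(y, score))              # ranking measure, insensitive to prevalence
print("AP :", average_precision_score(y, score))    # much closer to the 1% prevalence than the AUC suggests
```

The same score looks reasonable by AUC but sobering by AP, which is the contrast between the two summaries that the talk examines.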

Tuesday, August 13, 2019 — 4:00 PM EDT

Spatial Cauchy processes with local tail dependence


We study a class of models for spatial data obtained using Cauchy convolution processes with random indicator kernel functions. We show that the resulting spatial processes have some appealing dependence properties, including tail dependence at smaller distances and asymptotic independence at larger distances. We derive extreme-value limits of these processes and consider some interesting special cases. We show that estimation is feasible in high dimensions and that the proposed class of models allows for a wide range of dependence structures.

Monday, July 22, 2019 — 4:00 PM EDT

Negative Marginal Option Values: The Interaction of Frictions and Option Exercise in Variable Annuities


Market frictions can affect option exercise, which in turn affects the value of a marginal option to the writer, and may even yield negative marginal option values. We demonstrate the relevance of this mechanism in the context of variable annuities with popular withdrawal guarantees, both theoretically and empirically. More precisely, we show that in the presence of income and capital gains taxation for the policyholder, adding on a common death benefit option, which allows the withdrawal guarantee to continue in case of death, changes the policyholder's optimal withdrawal behavior. As a consequence, the total value of the contract from the perspective of the insurer may decrease, i.e., the marginal option value is negative. This explains the common practice of including death benefit options without additional charges in these products.

Thursday, July 11, 2019 — 4:00 PM EDT

On making valid inferences by combining data from multiple sources: An appraisal


National statistical agencies have long used probability samples from multiple sources, in conjunction with census and administrative data, to make valid and efficient inferences about population parameters of interest, leading to reliable official statistics. This topic has received a lot of attention recently in the context of decreasing response rates for probability samples and the availability of data from non-probability samples, in particular “big data”. In this talk, I will discuss some methods, based on models for the non-probability samples, which could lead to useful inferences when combined with probability samples that observe only auxiliary variables related to the variable of interest. I will also explain how big data may be used as predictors in small area estimation and comment on using non-probability samples to produce “real time” official statistics.

Monday, June 24, 2019 — 4:00 PM EDT

A regularization approach to the dynamic panel data model estimation

In a dynamic panel data model, the number of moment conditions may be very large even if the time dimension is only moderately large. Even though the use of many moment conditions improves asymptotic efficiency, including an excessive number of them increases the bias in finite samples. An immediate consequence of a large number of instruments is a large-dimensional covariance matrix of the instruments. As a result, the condition number (the largest eigenvalue divided by the smallest one) is very high, especially when the autoregressive parameter is close to unity. Inverting an instrument covariance matrix with a high condition number can badly affect the properties of the estimators. This paper proposes a regularization approach to the estimation of such models, using three regularization schemes based on three different ways of inverting the covariance matrix of the instruments. Under double asymptotics, we show that our regularized estimators are consistent and asymptotically normal. These regularization schemes involve a regularization or smoothing parameter, so we derive a data-driven selection of this parameter based on an approximation of the mean squared error and show its optimality. Simulations confirm that regularization improves the properties of the usual GMM estimator. As an empirical application, we investigate the effect of financial development on economic growth. Regularization corrects the bias of the usual GMM estimator, which appears to underestimate the effect of financial development on economic growth.
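
As a sketch of one typical regularization scheme (a Tikhonov-type regularized inverse; the abstract does not spell out the paper's three schemes, so this is illustrative only):

```python
import numpy as np

def tikhonov_weighting(Z, alpha):
    """Tikhonov-type regularized inverse of the instrument covariance
    matrix.  Z: n x L instrument matrix; alpha > 0: regularization
    (smoothing) parameter."""
    n, L = Z.shape
    S = Z.T @ Z / n                                        # possibly ill-conditioned L x L covariance
    return np.linalg.solve(S @ S + alpha * np.eye(L), S)   # (S^2 + alpha I)^{-1} S, a damped version of S^{-1}
```

As alpha tends to 0 this approaches the ordinary inverse, while larger values damp the small eigenvalues responsible for the high condition number; balancing that bias-stability trade-off is what the data-driven choice of the regularization parameter is meant to do.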

Friday, June 14, 2019 — 10:30 AM EDT

Aggregate Risk and Bank Regulation in General Equilibrium


We examine the optimal design of bank regulation in a general equilibrium model. The unregulated economy has multiple equilibria that feature varying sizes of the financial sector and bank fragility. The economy underinvests (overinvests) in risky production when aggregate risk is low (high). We characterize and implement the efficient allocations via capital and reserve requirements, deposit insurance and bailouts. There is a range of efficient regulatory policies with a stricter capital requirement on banks being accompanied by a looser reserve requirement and less deposit insurance. We derive novel insights into how aggregate risk influences capital and reserve requirements as well as the efficiency of depositor subsidies.
