Tuesday, October 15, 2019 — 4:00 PM EDT

Graphical Models and Structural Learning for Extremes


Conditional independence, graphical models and sparsity are key notions for parsimonious models in high dimensions and for learning structural relationships in the data. The theory of multivariate and spatial extremes describes the risk of rare events through asymptotically justified limit models such as max-stable and multivariate Pareto distributions. Statistical modeling in this field has been limited to moderate dimensions so far, owing to complicated likelihoods and a lack of understanding of the underlying probabilistic structures.

We introduce a general theory of conditional independence for multivariate Pareto distributions that allows us to define graphical models and sparsity for extremes. New parametric models can be built in a modular way, and statistical inference can be simplified to lower-dimensional margins. We define the extremal variogram, a new summary statistic that turns out to be a tree metric and therefore allows an underlying tree structure to be learned efficiently through Prim's algorithm. For a popular parametric class of multivariate Pareto distributions we show that, as in the Gaussian case, the sparsity pattern of a general graphical model can be read off directly from suitable inverse covariance matrices. This enables the definition of an extremal graphical lasso that enforces sparsity in the dependence structure. We illustrate the results with an application to flood risk assessment on the Danube river.

This is joint work with Adrien Hitz. A preprint is available at https://arxiv.org/abs/1812.01734.
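
To make the structure-learning step concrete, here is a minimal Python sketch of Prim's algorithm applied to a symmetric pairwise dissimilarity matrix such as an estimated extremal variogram. The names (Gamma, prim_tree) are illustrative, and this is a generic sketch rather than the authors' implementation.

    import numpy as np

    def prim_tree(Gamma):
        # Prim's algorithm: grow a minimum spanning tree one vertex at a
        # time, always adding the cheapest edge leaving the current tree.
        # Gamma is a symmetric (d, d) dissimilarity matrix; when Gamma is
        # an additive tree metric over the observed nodes, the minimum
        # spanning tree coincides with the underlying tree.
        d = Gamma.shape[0]
        in_tree = {0}                     # start from an arbitrary vertex
        edges = []
        while len(in_tree) < d:
            i, j = min(((i, j) for i in in_tree for j in range(d)
                        if j not in in_tree), key=lambda e: Gamma[e])
            edges.append((i, j))
            in_tree.add(j)
        return edges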

Thursday, October 17, 2019 — 4:00 PM EDT

Building Deep Statistical Thinking for Data Science 2020: Privacy Protected Census, Gerrymandering, and Election


The year 2020 will be a busy one for statisticians and, more generally, data scientists. The US Census Bureau has announced that the data from the 2020 Census will be released under differential privacy (DP) protection, which in layperson's terms means adding noise to the data. While few would argue against protecting data privacy, many researchers, especially in the social sciences, are concerned about whether the right trade-offs between data privacy and data utility are being made. DP protection also has a direct impact on redistricting, an issue that is already complicated enough with accurate counts because of the need to guard against excessive gerrymandering. The central statistical problem there is a rather unusual one: how do we determine whether a realization is an outlier with respect to a null distribution, when that null distribution itself cannot be fully determined? The 2020 US election will be another highly watched event, with many groups already busy making predictions. Will the lessons from predicting the 2016 US election be learned, or the failures repeated? This talk invites the audience on a journey of deep statistical thinking prompted by these questions, regardless of whether they have any interest in the US Census or politics.
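
For readers unfamiliar with differential privacy, the "adding noise" idea can be made concrete with the textbook Laplace mechanism. This is a generic sketch with an invented function name, not the Census Bureau's actual disclosure avoidance algorithm.

    import numpy as np

    def dp_count(true_count, epsilon, rng=None):
        # Laplace mechanism for a counting query: a count has sensitivity 1
        # (adding or removing one person changes it by at most 1), so
        # Laplace noise with scale 1/epsilon gives epsilon-differential
        # privacy. Smaller epsilon means more privacy but noisier counts.
        rng = rng or np.random.default_rng()
        return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

The privacy-utility trade-off mentioned above is precisely the choice of epsilon: the released count is unbiased, but its variance grows as epsilon shrinks.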


Friday, October 18, 2019 — 8:00 AM to Saturday, October 19, 2019 — 5:00 PM EDT
First Student Conference in Statistics, Actuarial Science, and Finance
Friday, October 25, 2019 — 10:30 AM EDT

On the properties of $\Lambda$-quantiles


We present a systematic treatment of $\Lambda$-quantiles, a family of generalized quantiles introduced in Frittelli et al. (2014) under the name of Lambda Value at Risk. We consider various possible definitions and derive their fundamental properties, working mainly under the assumption that the threshold function $\Lambda$ is nonincreasing. We refine some of the weak continuity results derived in Burzoni et al. (2017), showing that the weak continuity properties of $\Lambda$-quantiles are essentially similar to those of the usual quantiles. Further, we provide an axiomatic foundation for $\Lambda$-quantiles based on a locality property that generalizes a similar axiomatization of the usual quantiles based on the ordinal covariance property given in Chambers (2009). We study scoring functions consistent with $\Lambda$-quantiles and, as an extension of the usual quantile regression, we introduce $\Lambda$-quantile regression, of which we provide two financial applications (joint work with Ilaria Peri).
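
Definitions of $\Lambda$-quantiles differ slightly across the papers cited above; the following Python sketch computes one common variant, $\inf\{x : F(x) \ge \Lambda(x)\}$, from an empirical CDF by bisection. It assumes $\Lambda$ is nonincreasing and that the bracket [lo, hi] contains the crossing point; all names are illustrative.

    import numpy as np

    def lambda_quantile(sample, Lam, lo, hi, tol=1e-8):
        # With Lam nonincreasing and F_n nondecreasing, F_n - Lam is
        # nondecreasing, so its sign change can be located by bisection.
        x = np.sort(np.asarray(sample, dtype=float))
        F = lambda t: np.searchsorted(x, t, side="right") / len(x)
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if F(mid) >= Lam(mid):
                hi = mid
            else:
                lo = mid
        return hi

With a constant threshold $\Lambda \equiv \lambda$, this reduces to the usual $\lambda$-quantile, consistent with $\Lambda$-quantiles being a family of generalized quantiles.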

Thursday, October 31, 2019 — 4:00 PM EDT

Variable selection for structured high-dimensional data using known and novel graph information


Variable selection for structured high-dimensional covariates lying on an underlying graph has drawn considerable interest. However, most existing methods do not scale to high-dimensional settings involving tens of thousands of variables lying on known pathways, as is the case in genomic studies, and they assume that the graph information is fully known. This talk focuses on addressing these two challenges. In the first part, I will present an adaptive Bayesian shrinkage approach that incorporates known graph information through shrinkage parameters and is scalable to high-dimensional settings (e.g., p ~ 100,000 or even millions). We also establish theoretical properties of the proposed approach for fixed and diverging p. In the second part, I will tackle the issue that graph information is not fully known. For example, the role of miRNAs in regulating gene expression is not well understood, and the miRNA regulatory network is often not validated. We propose an approach that treats unknown graph information as missing data (i.e., missing edges), introduce the idea of imputing the unknown graph information, and define the imputed information as the novel graph information. In addition, we propose a hierarchical group penalty to encourage sparsity at both the pathway level and the within-pathway level, which, combined with the imputation step, allows for the incorporation of known and novel graph information. The methods are assessed via simulation studies and applied to analyses of cancer data.
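
The hierarchical pathway/within-pathway penalty is in the spirit of a sparse group lasso. The following Python sketch illustrates that generic idea only; it is not the authors' exact penalty, and all names are made up.

    import numpy as np

    def hierarchical_group_penalty(beta, groups, lam_group, lam_within):
        # A group-level l2 term lets entire pathways drop out of the model,
        # while an l1 term enforces sparsity within surviving pathways.
        pen = lam_within * np.sum(np.abs(beta))
        for g in groups:  # g: array of coefficient indices for one pathway
            pen += lam_group * np.sqrt(len(g)) * np.linalg.norm(beta[g])
        return pen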

Thursday, November 7, 2019 — 4:00 PM EST

Nonregular and Minimax Estimation of Individualized Thresholds in High Dimension with Binary Responses


Given a large number of covariates $\mathbf{Z}$, we consider the estimation of a high-dimensional parameter $\boldsymbol{\theta}$ in an individualized linear threshold $\boldsymbol{\theta}^T\mathbf{Z}$ for a continuous variable $X$, which minimizes the disagreement between $\mathrm{sign}(X-\boldsymbol{\theta}^T\mathbf{Z})$ and a binary response $Y$. While the problem can be formulated in the M-estimation framework, minimizing the corresponding empirical risk function is computationally intractable because of the discontinuity of the sign function. Moreover, estimating $\boldsymbol{\theta}$ even in the fixed-dimensional setting is known to be a nonregular problem leading to nonstandard asymptotic theory. To tackle the computational and theoretical challenges in the estimation of the high-dimensional parameter $\boldsymbol{\theta}$, we propose an empirical risk minimization approach based on a regularized smoothed nonconvex loss function. Fisher consistency of the proposed method is guaranteed as the bandwidth of the smoothed loss shrinks to 0. Statistically, we show that the finite-sample error bound for estimating $\boldsymbol{\theta}$ in the $\ell_2$ norm is $(s\log d/n)^{\beta/(2\beta+1)}$, where $d$ is the dimension of $\boldsymbol{\theta}$, $s$ is the sparsity level, $n$ is the sample size, and $\beta$ is the smoothness of the conditional density of $X$ given the response $Y$ and the covariates $\mathbf{Z}$. The convergence rate is nonstandard and slower than that in classical Lasso problems. Furthermore, we prove that the resulting estimator is minimax rate optimal up to a logarithmic factor. Lepski's method is developed to achieve adaptation to the unknown sparsity $s$ and smoothness $\beta$. Computationally, an efficient path-following algorithm is proposed to compute the solution path, and we show that it achieves a geometric rate of convergence for computing the whole path. Finally, we evaluate the finite-sample performance of the proposed estimator in simulation studies and in a real data analysis from the ChAMP (Chondral Lesions And Meniscus Procedures) Trial.
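
To see why smoothing helps, consider the following toy Python version of the idea: the discontinuous sign is replaced by a smooth surrogate with bandwidth $h$, so the penalized empirical risk becomes differentiable and amenable to path-following. This is a schematic surrogate under assumed conventions ($Y \in \{-1, +1\}$, tanh smoothing), not the paper's exact loss.

    import numpy as np

    def smoothed_penalized_risk(theta, X, Z, Y, h, lam):
        # margin > 0 exactly when sign(X - theta^T Z) agrees with Y.
        # tanh(margin / h) -> sign(margin) as h -> 0, so this smooth risk
        # approaches the 0-1 disagreement risk while staying differentiable.
        margin = Y * (X - Z @ theta)          # X: (n,), Z: (n, d), Y: (n,)
        return np.mean((1.0 - np.tanh(margin / h)) / 2.0) \
            + lam * np.sum(np.abs(theta))

Shrinking $h$ trades smoothness for fidelity to the original sign-based risk, which is why Fisher consistency is stated in the limit as the bandwidth goes to 0.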

Thursday, November 14, 2019 — 4:00 PM EST

On Khintchine's Inequality for Statistics


In complex estimation and hypothesis testing settings, it may be impossible to compute p-values or construct confidence intervals using classical analytic approaches such as asymptotic normality. Instead, one often relies on randomization and resampling procedures such as the bootstrap or the permutation test, but these approaches carry the computational burden of large-scale Monte Carlo runs. To remove this burden, we develop analytic methods for hypothesis testing and confidence intervals by considering the discrete finite-sample distributions of the randomized test statistic directly. The primary tool we use to achieve such results is Khintchine's inequality, together with its extensions and generalizations.
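
For reference, the classical Khintchine inequality states that for i.i.d. Rademacher signs $\varepsilon_1, \dots, \varepsilon_n$ and real coefficients $a_1, \dots, a_n$, there exist constants $A_p, B_p > 0$ depending only on $p \in (0, \infty)$ such that
$$A_p \Big( \sum_{i=1}^n a_i^2 \Big)^{1/2} \;\le\; \Big( \mathbb{E} \Big| \sum_{i=1}^n \varepsilon_i a_i \Big|^p \Big)^{1/p} \;\le\; B_p \Big( \sum_{i=1}^n a_i^2 \Big)^{1/2}.$$
The Rademacher signs play the same role as the random sign flips in randomization procedures, which suggests how the inequality connects to the finite-sample distributions of randomized test statistics.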

Friday, November 15, 2019 — 10:30 AM EST

More information about this seminar will be added as soon as possible.

Thursday, November 21, 2019 — 4:00 PM EST

More information about this seminar will be added as soon as possible.

Friday, November 22, 2019 — 10:30 AM EST

More information about this seminar will be added as soon as possible.
