Events

Thursday, January 24, 2019 — 4:00 PM EST

Some Priors for Nonparametric Shrinkage and Bayesian Sparsity Inference 


In this talk, I introduce two novel classes of shrinkage priors for different purposes: the functional horseshoe (fHS) prior for nonparametric subspace shrinkage and neuronized priors for general sparsity inference.

In function estimation problems, the fHS prior encourages shrinkage towards parametric classes of functions. Unlike other shrinkage priors for parametric models, fHS shrinkage acts on the shape of the function rather than inducing sparsity on model parameters. I study desirable theoretical properties, including an optimal posterior concentration property for the function and model selection consistency. I apply the fHS prior to nonparametric additive models on simulated and real data sets, and the results show that the proposed procedure outperforms state-of-the-art methods in terms of estimation and model selection.
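To make the subspace-shrinkage idea concrete, the toy numpy sketch below blends a nonparametric fit with its parametric (linear) counterpart through a weight omega; under the fHS prior this weight is not fixed but receives a horseshoe-type prior, so the data decide how strongly to shrink toward the parametric class. The basis, knots, and fixed omega here are illustrative assumptions, not the prior itself.

    # Toy illustration of subspace shrinkage (not the fHS prior itself):
    # blend a nonparametric spline-type fit with a fit from a parametric
    # (here, linear) subspace. Under the fHS prior the weight omega gets a
    # horseshoe-type prior concentrating near 0 or 1, so the data decide
    # how far to shrink toward the parametric class.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    x = np.sort(rng.uniform(0, 1, n))
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

    # Nonparametric basis: truncated power splines (an assumed choice).
    knots = np.linspace(0.1, 0.9, 9)
    Phi = np.column_stack([np.ones(n), x] + [np.maximum(x - k, 0) ** 3 for k in knots])
    beta = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(Phi.shape[1]), Phi.T @ y)
    f_nonpar = Phi @ beta

    # Parametric subspace: straight lines.
    X0 = np.column_stack([np.ones(n), x])
    f_par = X0 @ np.linalg.lstsq(X0, y, rcond=None)[0]

    # Shrinkage toward the parametric class, controlled by omega in [0, 1].
    omega = 0.2  # in the fHS framework this weight is learned, not fixed
    f_shrunk = omega * f_par + (1 - omega) * f_nonpar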

For general sparsity inference, I propose neuronized priors to unify and extend existing shrinkage priors, including one-group continuous shrinkage priors, continuous spike-and-slab priors, and discrete spike-and-slab priors with point-mass mixtures. The new priors are formulated as the product of a weight variable and a transformed scale variable via an activation function. By altering the activation function, practitioners can easily implement a large class of Bayesian variable selection procedures. Compared with classic spike-and-slab priors, neuronized priors achieve the same explicit variable selection without employing any latent indicator variable, which results in more efficient MCMC algorithms and more effective posterior modal estimates. I also show that these new formulations can be applied to more general and computationally challenging sparsity inference problems, such as structured sparsity and spatially correlated sparsity.
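A minimal sketch of the product construction described above: a coefficient is a Gaussian weight multiplied by an activation applied to a Gaussian scale variable, and changing the activation changes the induced prior. The particular activations, offset, and hyperparameters below are assumptions made for illustration, not recommended settings.

    # Minimal sketch of the "neuronized" construction: a coefficient is the
    # product of a Gaussian weight and an activation applied to a Gaussian
    # scale variable. The activations and hyperparameters here are
    # illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)

    def neuronized_draws(activation, alpha_0=0.0, tau_w=1.0, size=10_000):
        alpha = rng.normal(0.0, 1.0, size)      # scale variable
        w = rng.normal(0.0, tau_w, size)        # weight variable
        return activation(alpha - alpha_0) * w  # theta = T(alpha - alpha_0) * w

    relu = lambda a: np.maximum(a, 0.0)         # exact zeros: spike-and-slab flavor
    soft = lambda a: np.exp(a)                  # strictly positive scale: continuous shrinkage flavor

    theta_ss = neuronized_draws(relu, alpha_0=1.0)   # roughly 84% of draws are exactly zero
    theta_cs = neuronized_draws(soft)
    print((theta_ss == 0).mean(), np.median(np.abs(theta_cs)))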

Friday, January 25, 2019 — 4:00 PM EST

The Cost of Privacy: Optimal Rates of Convergence for Parameter Estimation with Differential Privacy


With the unprecedented availability of datasets containing personal information, there are increasing concerns that statistical analysis of such datasets may compromise individual privacy. These concerns give rise to statistical methods that provide privacy guarantees at the cost of some statistical accuracy. A fundamental question is: to satisfy a desired level of privacy, what is the best statistical accuracy one can achieve? Standard statistical methods fail to yield sharp results, and new technical tools are called for.

In this talk, I will present a general lower bound argument for investigating the tradeoff between statistical accuracy and privacy, with applications to three problems: mean estimation, linear regression, and classification, in both the classical low-dimensional and modern high-dimensional settings. For these statistical problems, we also design computationally efficient algorithms that match the minimax lower bound under the privacy constraints. Finally, I will show applications of these privacy-preserving algorithms to real data containing sensitive information, such as SNPs and body fat, for which privacy-preserving statistical methods are necessary.
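For readers new to the area, the textbook Laplace-mechanism example below shows how a privacy guarantee is purchased with statistical accuracy; it is not one of the minimax-optimal estimators discussed in the talk, and the clipping bound and privacy budgets are arbitrary choices.

    # A textbook epsilon-differentially private mean estimate (Laplace
    # mechanism), shown only to make the accuracy/privacy trade-off concrete.
    import numpy as np

    def private_mean(x, epsilon, clip=1.0, rng=None):
        """epsilon-DP mean of data clipped to [-clip, clip]."""
        rng = rng or np.random.default_rng()
        n = len(x)
        clipped_mean = np.clip(x, -clip, clip).mean()
        sensitivity = 2.0 * clip / n       # changing one record moves the mean by at most this
        noise = rng.laplace(0.0, sensitivity / epsilon)
        return clipped_mean + noise

    x = np.random.default_rng(2).normal(0.3, 1.0, 5_000)
    for eps in (0.1, 1.0, 10.0):
        print(eps, private_mean(x, eps, clip=3.0))   # error shrinks as the privacy budget grows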

Tuesday, January 29, 2019 — 4:00 PM EST

Asymptotically optimal multiple testing with streaming data


The problem of testing multiple hypotheses with streaming (sequential) data arises in diverse applications such as multi-channel signal processing, surveillance systems, multi-endpoint clinical trials, and online surveys. In this talk, we investigate the problem under two generalized error metrics. Under the first one, the probability of at least k mistakes, of any kind, is controlled. Under the second, the probabilities of at least k1 false positives and at least k2 false negatives are simultaneously controlled. For each formulation, we characterize the optimal expected sample size to a first-order asymptotic approximation as the error probabilities vanish, and propose a novel procedure that is asymptotically efficient under every signal configuration.  These results are established when the data streams for the various hypotheses are independent and each local log-likelihood ratio statistic satisfies a certain law of large numbers. Further, in the special case of iid observations, we quantify the asymptotic gains of sequential sampling over fixed-sample size schemes.
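As a rough point of reference, the sketch below runs a plain per-stream sequential probability ratio test on independent Gaussian streams; it conveys how local log-likelihood ratios accumulate over time, but it is not the talk's asymptotically optimal procedure, and the signal size, threshold, and stopping rule are illustrative assumptions.

    # Generic multi-stream sequential testing sketch: accumulate each stream's
    # log-likelihood ratio (N(mu,1) vs N(0,1)) and resolve a stream once its
    # LLR exits a symmetric band. The talk's procedures are tuned to the
    # generalized (k, or k1/k2) error metrics and differ from this.
    import numpy as np

    rng = np.random.default_rng(3)
    J, mu, thresh = 20, 0.5, np.log(100)          # 20 streams, signal size, threshold
    signal = rng.random(J) < 0.3                  # unknown ground-truth configuration
    llr = np.zeros(J)
    decided = np.full(J, -1)                      # -1 = undecided, 0 = null, 1 = signal
    t = 0
    while (decided == -1).any():
        t += 1
        x = rng.normal(mu * signal, 1.0, J)       # one new observation per stream
        active = decided == -1
        llr[active] += mu * x[active] - mu**2 / 2 # local log-likelihood ratio increment
        decided[active & (llr >= thresh)] = 1
        decided[active & (llr <= -thresh)] = 0
    print("stopped at time", t, "errors:", int((decided != signal).sum()))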

Wednesday, January 30, 2019 — 4:00 PM EST

If Journals Embraced Conditional Equivalence Testing, Would Research be Better?


Motivated by recent concerns with the reproducibility and reliability of scientific research, we introduce a publication policy that incorporates "conditional equivalence testing" (CET), a two-stage testing scheme in which standard null hypothesis significance testing (NHST) is followed conditionally by testing for equivalence. We explain how such a policy could address issues of publication bias, and investigate similarities with a Bayesian approach.  We then develop a novel optimality model that, given current incentives to publish, predicts a researcher's most rational use of resources. Using this model, we are able to determine whether a given policy, such as our CET policy, can incentivize more reliable and reproducible research.
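The two-stage logic of CET can be sketched in a few lines: run the usual significance test first and, only if it fails to reject, test for equivalence within a margin delta via two one-sided tests. The margin, significance levels, and the conservative degrees-of-freedom choice below are illustrative assumptions rather than the policy's prescribed defaults.

    # Sketch of the two-stage CET logic: NHST first and, only if it fails to
    # reject, a TOST-style equivalence test against a margin delta.
    import numpy as np
    from scipy import stats

    def cet(x, y, delta, alpha=0.05):
        # Stage 1: null hypothesis significance test.
        t, p = stats.ttest_ind(x, y, equal_var=False)
        if p < alpha:
            return "positive (difference established)"
        # Stage 2: equivalence test (two one-sided tests within +/- delta).
        diff = x.mean() - y.mean()
        se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
        df = min(len(x), len(y)) - 1                 # conservative df choice
        p_lower = 1 - stats.t.cdf((diff + delta) / se, df)
        p_upper = stats.t.cdf((diff - delta) / se, df)
        if max(p_lower, p_upper) < alpha:
            return "negative (equivalence established)"
        return "inconclusive"

    rng = np.random.default_rng(4)
    print(cet(rng.normal(0, 1, 80), rng.normal(0.05, 1, 80), delta=0.3))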

Thursday, January 31, 2019 — 4:00 PM EST

How does consumption habit affect the household’s demand for life-contingent claims?


This paper examines the impact of habit formation on demand for life-contingent claims. We propose a life-cycle model with habit formation and solve the optimal consumption, portfolio choice, and life insurance/annuity problem analytically. We illustrate how consumption habits can alter the bequest motive and therefore drive the demand for life-contingent products. Finally, we use our model to examine the mismatch in the life insurance market between the life insurance holdings of most households and their underlying financial vulnerabilities, and the mismatch in the annuity market between the lack of any annuitization and the risk of outliving financial wealth.
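For context, a common habit-formation specification (not necessarily the paper's exact model) evaluates consumption relative to a habit level built up from past consumption:

    u(c_t, h_t) = \frac{(c_t - h_t)^{1-\gamma}}{1-\gamma}, \qquad
    dh_t = (\beta c_t - \alpha h_t)\, dt,

so past consumption raises the floor below which current consumption is costly to fall, which is the channel through which habits can affect bequest and insurance decisions in such models.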

Friday, February 1, 2019 — 4:00 PM EST

From Random Landscapes to Statistical Inference


Consider the problem of recovering a rank-one tensor of order k that has been subject to additive Gaussian noise. It is information-theoretically possible to recover the tensor with a finite number of samples via maximum likelihood estimation; however, it is expected that a polynomially diverging number of samples is needed to recover it efficiently. What is the cause of this large statistical-to-algorithmic gap? To understand this interesting question of high-dimensional statistics, we begin by studying an intimately related question: optimization of random homogeneous polynomials on the sphere in high dimensions. We show that the estimation threshold is related to a geometric analogue of the BBP transition for matrices. We then study the threshold for efficient recovery for a simple class of algorithms, Langevin dynamics and gradient descent. We view this problem in terms of a broader class of polynomial optimization problems and propose a mechanism for success/failure of recovery in terms of the strength of the signal on the high-entropy region of the initialization. We will review several results, including joint works with Ben Arous and Gheissari and with Lopatto and Miolane.
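In the standard spiked tensor formulation (normalization conventions vary), one observes

    Y = \lambda\, v^{\otimes k} + W, \qquad v \in S^{N-1},

where W is a Gaussian noise tensor, and maximum likelihood estimation amounts to maximizing

    H(x) = \langle Y, x^{\otimes k} \rangle
         = \lambda \langle v, x \rangle^{k} + \langle W, x^{\otimes k} \rangle,
    \qquad x \in S^{N-1},

whose noise term is exactly a random homogeneous polynomial on the sphere, i.e., a spherical spin-glass landscape; this is the link between the inference problem and the random landscapes in the title.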

Tuesday, February 5, 2019 — 4:00 PM EST

Space-filling Designs for Computer Experiments and Their Application to Big Data Research


Computer experiments provide useful tools for investigating complex systems, and they call for space-filling designs, which are a class of designs that allow the use of various modeling methods. He and Tang (2013) introduced and studied a class of space-filling designs, strong orthogonal arrays. To date, an important problem that has not been addressed in the literature is that of design selection for such arrays. In this talk, I will first give a broad introduction to space-filling designs, and then present some results on the selection of strong orthogonal arrays.
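As a simple point of reference, the sketch below generates a randomized Latin hypercube design, the most basic space-filling design: every one-dimensional projection of the runs is stratified. Strong orthogonal arrays (He and Tang, 2013) additionally control stratification in low-dimensional projections; their construction is beyond this illustration, and the run size and dimension below are arbitrary choices.

    # A randomized Latin hypercube design, shown only as the simplest example
    # of a space-filling design: each one-dimensional projection is stratified.
    import numpy as np

    def latin_hypercube(n, d, rng=None):
        rng = rng or np.random.default_rng()
        # One random permutation per factor, jittered within each of n cells.
        u = (np.argsort(rng.random((d, n)), axis=1) + rng.random((d, n))) / n
        return u.T                                   # n runs by d factors in [0, 1)^d

    design = latin_hypercube(16, 3, np.random.default_rng(5))
    print(design.min(axis=0), design.max(axis=0))    # points spread across every factor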

The second part of my talk will present some preliminary work on the application of space-filling designs to big data research. Nowadays, it is challenging to use current computing resources to analyze super-large datasets. Subsampling-based methods are common approaches to reducing data sizes, with the leveraging method (Ma and Sun, 2014) being the most popular. Recently, a new approach, the information-based optimal subdata selection (IBOSS) method, was proposed (Wang, Yang and Stufken, 2018), which applies design methodology to the big data problem. However, both the leveraging method and the IBOSS method are model-dependent. Space-filling designs do not suffer from this drawback, as shown in our simulation studies.
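The basic idea behind the leveraging method can be sketched as follows: compute each row's leverage score, sample rows with probability proportional to it, and refit on the reweighted subsample. The sizes and weighting below are illustrative simplifications; the cited papers refine the sampling and weighting, and IBOSS instead selects rows deterministically by extreme covariate values.

    # Minimal sketch of leverage-based subsampling for linear regression.
    import numpy as np

    rng = np.random.default_rng(6)
    n, p, r = 100_000, 10, 1_000                     # full size vs. subsample size
    X = rng.normal(size=(n, p))
    y = X @ rng.normal(size=p) + rng.normal(size=n)

    Q, _ = np.linalg.qr(X)                           # thin QR; leverage h_i = ||Q_i||^2
    lev = (Q ** 2).sum(axis=1)
    prob = lev / lev.sum()
    idx = rng.choice(n, size=r, replace=False, p=prob)

    w = 1.0 / np.sqrt(prob[idx])                     # reweight to correct the sampling bias
    beta_sub, *_ = np.linalg.lstsq(X[idx] * w[:, None], y[idx] * w, rcond=None)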

Friday, February 22, 2019 — 10:30 AM EST

TBA

Friday, March 8, 2019 — 10:30 AM EST

TBA
