Events

Wednesday, October 17, 2018 — 4:00 PM EDT
[Banner: Emery Brown lecture]

Uncovering the Mechanisms of General Anesthesia: Where Neuroscience Meets Statistics


General anesthesia is a drug-induced, reversible condition involving unconsciousness, amnesia (loss of memory), analgesia (loss of pain sensation), akinesia (immobility), and hemodynamic stability. I will describe a primary mechanism through which anesthetics create these altered states of arousal. Our studies have allowed us to give a detailed characterization of the neurophysiology of loss and recovery of consciousness, in the case of propofol, and we have demonstrated that the state of general anesthesia can be rapidly reversed by activating specific brain circuits. The success of our research has depended critically on tight coupling of experiments, statistical signal processing and mathematical modeling.

Friday, October 19, 2018 — 11:00 AM EDT

Efficient Estimation, Robust Testing and Design Optimality for Two-Phase Studies


Two-phase designs are cost-effective sampling strategies when some covariates are too expensive to measure on all study subjects. Well-known examples include case-control, case-cohort, nested case-control and extreme-tail sampling designs. In this talk, I will discuss three important aspects of two-phase studies: estimation, hypothesis testing and design optimality. First, I will discuss efficient estimation methods we have developed for two-phase studies. We allow expensive covariates to be correlated with inexpensive covariates collected in the first phase. Our proposed estimation is based on maximizing a modified nonparametric likelihood function through a generalization of the expectation-maximization algorithm. The resulting estimators are shown to be consistent, asymptotically normal and asymptotically efficient, with easily estimated variances. Second, I will focus on hypothesis testing in two-phase studies. We propose a robust test procedure based on imputation. The proposed procedure guarantees preservation of type I error, allows high-dimensional inexpensive covariates, and yields higher power than alternative imputation approaches. Finally, I will present some recent developments on design optimality. We show that for general outcomes, the most efficient design is an extreme-tail sampling design based on certain residuals. This conclusion also explains the high efficiency of extreme-tail sampling for continuous outcomes and of balanced case-control designs for binary outcomes. Throughout the talk, I will present numerical evidence from simulation studies and illustrate our methods using different applications.
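The abstract does not give implementation details, but the residual-based extreme-tail idea can be sketched in a few lines of Python. Everything below (the linear first-phase model, the 20% phase-two budget, and the variable names) is an illustrative assumption rather than the authors' specification, and the naive complete-case fit at the end is only a placeholder for their likelihood-based estimators.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated cohort: Z is inexpensive (phase one), X is expensive (phase two).
n = 5000
Z = rng.normal(size=n)                              # measured on everyone
X = 0.6 * Z + rng.normal(size=n)                    # expensive, correlated with Z
Y = 1.0 + 0.5 * Z + 0.8 * X + rng.normal(size=n)    # continuous outcome

# Phase one: regress Y on the inexpensive covariate only and keep residuals.
design1 = np.column_stack([np.ones(n), Z])
beta1, *_ = np.linalg.lstsq(design1, Y, rcond=None)
residuals = Y - design1 @ beta1

# Phase two: spend the measurement budget on the most extreme residuals.
budget = int(0.20 * n)                              # assumed 20% sampling fraction
phase2 = np.argsort(np.abs(residuals))[-budget:]
print(f"measuring X on {budget} of {n} subjects")

# Naive complete-case fit on the phase-two sample. Because selection depends on
# the outcome (through the residuals), this can be biased; the talk's estimators
# instead maximize a modified nonparametric likelihood via a generalized EM step.
design2 = np.column_stack([np.ones(budget), Z[phase2], X[phase2]])
beta2, *_ = np.linalg.lstsq(design2, Y[phase2], rcond=None)
print("naive phase-two estimates (intercept, Z, X):", np.round(beta2, 3))
```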

Thursday, October 25, 2018 — 4:00 PM EDT

Causal Inference with Unmeasured Confounding: an Instrumental Variable Approach

Causal inference is a challenging problem because causation cannot be established from observational data alone. Researchers typically rely on additional sources of information to infer causation from association. Such information may come from powerful designs such as randomization, or background knowledge such as information on all confounders. However, perfect designs or background knowledge required for establishing causality may not always be available in practice. In this talk, I use novel causal identification results to show that the instrumental variable approach can be used to combine the power of design and background knowledge to draw causal conclusions. I also introduce novel estimation tools to construct estimators that are robust, efficient and enjoy good finite sample properties. These methods will be discussed in the context of a randomized encouragement design for a flu vaccine.
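As a toy illustration of the instrumental-variable logic in an encouragement design, the sketch below simulates a hypothetical flu-vaccine encouragement study and computes the Wald (ratio) estimator. The data-generating process, effect size, and variable names are all made up for illustration, and the talk's robust, efficient estimators go well beyond this.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical randomized encouragement design:
#   Z = randomized encouragement (e.g., a reminder letter)
#   D = actual vaccination, also driven by an unmeasured confounder U
#   Y = outcome, affected by D and by U
n = 50_000
Z = rng.integers(0, 2, size=n)
U = rng.normal(size=n)
D = (1.0 * Z + 0.5 * U + rng.normal(size=n) > 0.5).astype(float)
Y = 1.0 - 0.4 * D + 0.5 * U + rng.normal(size=n)    # true effect of D is -0.4

# Naive vaccinated-vs-unvaccinated comparison is confounded by U.
naive = Y[D == 1].mean() - Y[D == 0].mean()

# Wald (IV) estimator: because Z is randomized and affects Y only through D,
# the ratio of the two intent-to-treat contrasts identifies the effect of D
# under the usual instrumental-variable assumptions.
itt_y = Y[Z == 1].mean() - Y[Z == 0].mean()
itt_d = D[Z == 1].mean() - D[Z == 0].mean()
wald = itt_y / itt_d

print(f"naive (confounded) estimate: {naive:+.3f}")
print(f"Wald IV estimate:            {wald:+.3f}   (truth: -0.400)")
```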

Tuesday, October 30, 2018 — 4:00 PM EDT

Systemic risk and the optimal capital requirements in a model of financial networks and fire sales


I consider an interbank network with fire-sale externalities and multiple illiquid assets, and study the problem of optimally trading off capital reserves against systemic risk. I find that the problems of measuring systemic risk and of setting optimal capital requirements under various liquidation rules can be formulated as convex and convex mixed-integer programs. To solve the convex MIP, I offer an iterative algorithm that converges to the optimal solution. I illustrate the methodology through numerical examples and provide implications for regulatory policies and related research topics.
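For readers unfamiliar with interbank clearing models, the following sketch shows a standard Eisenberg-Noe-style fixed-point iteration for clearing payments in a toy three-bank network. It omits the fire-sale externalities, capital decisions, and the convex MIP formulation that are the actual subject of the talk, and the numbers are invented for illustration.

```python
import numpy as np

# Toy three-bank network in the spirit of Eisenberg-Noe clearing (no fire sales).
# L[i, j] is the nominal liability of bank i to bank j; e[i] is bank i's
# outside assets. All numbers are invented for illustration.
L = np.array([[0., 10., 5.],
              [4.,  0., 6.],
              [3.,  2., 0.]])
e = np.array([6., 4., 2.])

p_bar = L.sum(axis=1)          # total nominal obligations of each bank
Pi = L / p_bar[:, None]        # relative liabilities (every p_bar entry is > 0 here)

# Fixed-point (Picard) iteration: each bank pays the smaller of what it owes and
# what it has, namely outside assets plus payments received from other banks.
p = p_bar.copy()
for _ in range(1000):
    p_new = np.minimum(p_bar, e + Pi.T @ p)
    if np.max(np.abs(p_new - p)) < 1e-12:
        break
    p = p_new

print("clearing payments:", np.round(p, 4))          # expected: [13, 10, 5]
print("shortfalls       :", np.round(p_bar - p, 4))  # the first bank defaults by 2
```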

Thursday, November 1, 2018 — 4:00 PM EDT

Copula Gaussian graphical models for functional data


We consider the problem of constructing statistical graphical models for functional data; that is, the observations on the vertices are random functions. This type of data is common in medical applications such as EEG and fMRI. Recently published functional graphical models rely on the assumption that the random functions are Hilbert-space-valued Gaussian random elements. We relax this assumption by introducing copula Gaussian random elements in Hilbert spaces, leading to what we call the Functional Copula Gaussian Graphical Model (FCGGM). This model removes the marginal Gaussian assumption but retains the simplicity of the Gaussian dependence structure, which is particularly attractive for large data. We develop four estimators, together with their implementation algorithms, for the FCGGM. We establish the consistency and the convergence rates of one of the estimators under different sets of sufficient conditions with varying strengths. We compare our FCGGM with the existing functional Gaussian graphical model by simulation, under both non-Gaussian and Gaussian graphical models, and apply our method to an EEG data set to construct brain networks.
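A finite-dimensional caricature of the copula idea is the nonparanormal recipe: transform each margin to normal scores, then fit a sparse Gaussian graphical model. The sketch below does this with scikit-learn on simulated data; it is not the authors' Hilbert-space estimator, and the simulated chain structure and sample sizes are assumptions made only for illustration.

```python
import numpy as np
from scipy.stats import rankdata, norm
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(2)

# Simulated data with a chain (AR(1)) dependence graph and non-Gaussian margins.
# In the functional setting of the talk, the columns would instead be coordinates
# of random functions (e.g., leading functional principal component scores).
n, p = 300, 6
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
latent = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
X = np.exp(latent)                                  # lognormal margins

# Copula (nonparanormal-style) step: rank-transform each margin to normal scores,
# which discards the marginals but keeps the Gaussian copula dependence.
ranks = np.apply_along_axis(rankdata, 0, X)
Zscores = norm.ppf(ranks / (n + 1))

# Sparse precision-matrix estimation on the transformed data.
model = GraphicalLassoCV().fit(Zscores)
edges = np.abs(model.precision_) > 1e-4
np.fill_diagonal(edges, False)
print("estimated adjacency matrix:\n", edges.astype(int))
```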

Thursday, November 8, 2018 — 4:00 PM EST

Ghost Data


As natural as real data, ghost data is everywhere—it is just data that you cannot see. We need to learn how to handle it, how to model with it, and how to put it to work. Some examples of ghost data are (see Sall, 2017):

  (a) Virtual data—it isn’t there until you look at it;

  (b) Missing data—there is a slot to hold a value, but the slot is empty;

  (c) Pretend data—data that is made up;

  (d) Highly sparse data—data whose absence implies a value near zero; and

  (e) Simulation data—data to answer “what if.”

For example, absence of evidence/data is not evidence of absence; in fact, it can be evidence of something. Moreover, ghost data can be extended to other existing areas: hidden Markov chains, two-stage least squares estimation, optimization via simulation, partition models, and topological data analysis, just to name a few.
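As a tiny, purely illustrative simulation of the point that the absence of data can itself be informative, the sketch below generates measurements whose chance of going missing grows with the unseen value, so a naive complete-case analysis is biased; all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(3)

# Measurements whose chance of going missing increases with the unseen value
# (think of a sensor that saturates, or patients too sick to attend follow-up).
n = 100_000
y = rng.normal(size=n)                          # the "complete" data, never fully seen
p_missing = 1.0 / (1.0 + np.exp(-2.0 * y))      # larger y -> more likely to be a ghost
observed = rng.random(n) > p_missing

print(f"true mean          : {y.mean():+.3f}")
print(f"complete-case mean : {y[observed].mean():+.3f}   (biased low)")
print(f"fraction missing   : {1 - observed.mean():.3f}")
```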

Three movies will be discussed in this talk: (1) “The Sixth Sense” (Bruce Willis)—I can see things that you cannot see; (2) “Sherlock Holmes” (Robert Downey Jr.)—absence of expected facts; and (3) “Edge of Tomorrow” (Tom Cruise)—how to speed up your learning (AlphaGo Zero will also be discussed). It will be helpful if you watch these movies before coming to my talk. This is an early stage of my research in this area, and any feedback from you is deeply appreciated. Much of the basic idea is highly influenced by Mr. John Sall (JMP, SAS).
