November 2019

Thursday, November 7, 2019 — 4:00 PM EST

Nonregular and Minimax Estimation of Individualized Thresholds in High Dimension with Binary Responses

Given a large number of covariates $\mathbf{Z}$, we consider the estimation of a high-dimensional parameter $\boldsymbol{\theta}$ in an individualized linear threshold $\boldsymbol{\theta}^T\mathbf{Z}$ for a continuous variable $X$, which minimizes the disagreement between $\mathrm{sign}(X-\boldsymbol{\theta}^T\mathbf{Z})$ and a binary response $Y$. While the problem can be formulated in the M-estimation framework, minimizing the corresponding empirical risk function is computationally intractable due to the discontinuity of the sign function. Moreover, estimating $\boldsymbol{\theta}$ even in the fixed-dimensional setting is known to be a nonregular problem leading to nonstandard asymptotic theory. To tackle the computational and theoretical challenges in the estimation of the high-dimensional parameter $\boldsymbol{\theta}$, we propose an empirical risk minimization approach based on a regularized smoothed nonconvex loss function. The Fisher consistency of the proposed method is guaranteed as the bandwidth of the smoothed loss shrinks to 0. Statistically, we show that the finite-sample error bound for estimating $\boldsymbol{\theta}$ in $\ell_2$ norm is $(s\log d/n)^{\beta/(2\beta+1)}$, where $d$ is the dimension of $\boldsymbol{\theta}$, $s$ is the sparsity level, $n$ is the sample size and $\beta$ is the smoothness of the conditional density of $X$ given the response $Y$ and the covariates $\mathbf{Z}$. The convergence rate is nonstandard and slower than that in classical Lasso problems. Furthermore, we prove that the resulting estimator is minimax rate optimal up to a logarithmic factor. Lepski's method is developed to achieve adaptation to the unknown sparsity $s$ and smoothness $\beta$. Computationally, an efficient path-following algorithm is proposed to compute the solution path. We show that this algorithm achieves a geometric rate of convergence for computing the whole path. Finally, we evaluate the finite-sample performance of the proposed estimator in simulation studies and a real data analysis from the ChAMP (Chondral Lesions And Meniscus Procedures) Trial.
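The abstract does not spell out its estimator; below is a minimal sketch of the smoothing idea alone, assuming a tanh smoother, an $\ell_1$ penalty, and toy data, none of which are taken from the paper.

```python
import numpy as np

def smoothed_risk(theta, X, Z, Y, h=0.1, lam=0.01):
    """Kernel-smoothed surrogate for the 0-1 disagreement between
    sign(X - theta^T Z) and a binary response Y in {-1, +1}: tanh with
    bandwidth h replaces the discontinuous sign function, and an
    l1 penalty (weight lam) encourages sparsity in high dimension."""
    margin = (X - Z @ theta) / h
    smooth_sign = np.tanh(margin)                        # smooth proxy in (-1, 1)
    disagreement = 0.5 * np.mean(1.0 - Y * smooth_sign)  # ~ P(sign mismatch)
    return disagreement + lam * np.sum(np.abs(theta))

# toy usage: n = 200 samples, d = 50 covariates, 3-sparse true theta
rng = np.random.default_rng(0)
n, d = 200, 50
Z = rng.standard_normal((n, d))
theta_true = np.zeros(d)
theta_true[:3] = 1.0
X = Z @ theta_true + rng.standard_normal(n)
Y = np.sign(X - Z @ theta_true + 0.1 * rng.standard_normal(n))
print(smoothed_risk(np.zeros(d), X, Z, Y))
```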

Friday, November 8, 2019 — 10:30 AM EST

Transformed norm risk measures on their natural domain

P. Cheridito and T. Li (2009) introduced the class of transformed norm risk measures. This is a fairly large class of real-valued, convex, law-invariant monetary risk measures, which includes the expected shortfall, the Haezendonck-Goovaerts risk measure, the entropic risk measure and other important examples. The natural domain of a transformed norm risk measure T is an appropriate Orlicz space. Nonetheless, dual representations for this class of risk measures are only known when T is restricted to the Orlicz heart. In this talk we will explore dual representations of these risk measures on their natural domain. Moreover, we will discuss continuity properties of dilatation monotone risk measures on general model spaces that are of independent interest.
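For concreteness, one member of the class named above is the expected shortfall; here is a minimal Monte Carlo sketch of it, under the assumed convention that losses are positive and the measure averages the worst alpha-tail (conventions vary, and this is not tied to the talk's setup).

```python
import numpy as np

def expected_shortfall(losses, alpha=0.05):
    """Monte Carlo expected shortfall: the average of the worst
    alpha-fraction of losses (losses-positive convention)."""
    losses = np.sort(np.asarray(losses))
    k = int(np.ceil(alpha * len(losses)))
    return losses[-k:].mean()

# toy usage with heavy-tailed lognormal losses
rng = np.random.default_rng(1)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
print(expected_shortfall(sample, alpha=0.05))
```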

Thursday, November 14, 2019 — 4:00 PM EST

On Khintchine's Inequality for Statistics

In complex estimation and hypothesis testing settings, it may be impossible to compute p-values or construct confidence intervals using classical analytic approaches like asymptotic normality. Instead, one often relies on randomization and resampling procedures such as the bootstrap or the permutation test. But these approaches suffer from the computational burden of large-scale Monte Carlo runs. To remove this burden, we develop analytic methods for hypothesis testing and confidence intervals by specifically considering the discrete finite-sample distributions of the randomized test statistic. The primary tool we use to achieve such results is Khintchine's inequality and its extensions and generalizations.
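For reference, the classical statement of the inequality the talk builds on: all moments of a Rademacher-signed sum of fixed coefficients are equivalent, up to universal constants, to the $\ell_2$ norm of the coefficients.

```latex
% Khintchine's inequality: for i.i.d. Rademacher signs
% \varepsilon_1, \dots, \varepsilon_n and reals a_1, \dots, a_n,
% for every 0 < p < \infty there exist constants A_p, B_p > 0 with
\[
A_p \Big( \sum_{i=1}^{n} a_i^2 \Big)^{1/2}
\;\le\; \Big( \mathbb{E} \, \Big| \sum_{i=1}^{n} \varepsilon_i a_i \Big|^p \Big)^{1/p}
\;\le\; B_p \Big( \sum_{i=1}^{n} a_i^2 \Big)^{1/2}.
\]
```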

Friday, November 15, 2019 — 10:30 AM EST

Insurance Pricing in a Competitive Market

Insurance is usually defined as "the contribution of the many to the misfortune of the few". This idea of pooling risks together using the law of large numbers legitimizes the use of the expected value as the actuarial "fair" premium. In the context of heterogeneous risks, nevertheless, it is possible to justify price segmentation based on observable characteristics. But in the context of "Big Data", intensive segmentation can be observed, with a much wider range of premiums offered on a given portfolio. In this talk, we will briefly review economic, actuarial and philosophical approaches to insurance pricing, trying to link a fair unique premium for a given population with a highly segmented one. We will then discuss recent experiments (the so-called "actuarial pricing game") organized since 2015, in which (real) actuaries played in a competitive (artificial) market that mimics the real insurance market. We will present the conclusions obtained from two editions: the first one, and the most recent one, in which a dynamic version of the game was launched.

Thursday, November 21, 2019 — 4:00 PM EST

A General Framework for Quantile Estimation with Incomplete Data

Quantile estimation has attracted significant research interest in recent years. However, there is only a limited literature on quantile estimation in the presence of incomplete data. In this paper, we propose a general framework to address this problem. Our framework combines the two widely adopted approaches for missing data analysis, the imputation approach and the inverse probability weighting approach, via the empirical likelihood method. The proposed method is capable of dealing with many different missingness settings. We mainly study three of them: (i) estimating the marginal quantile of a response that is subject to missingness while there are fully observed covariates; (ii) estimating the conditional quantile of a fully observed response while the covariates are partially available; and (iii) estimating the conditional quantile of a response that is subject to missingness with fully observed covariates and extra auxiliary variables. The proposed method allows multiple models for both the missingness probability and the data distribution. The resulting estimators are multiply robust in the sense that they are consistent if any one of these models is correctly specified. The asymptotic distributions are established using empirical process theory.

Joint work with Peisong Han, Jiwei Zhao and Xingcai Zhou.
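The talk's estimator combines imputation, inverse probability weighting and empirical likelihood; the sketch below shows only the IPW ingredient for setting (i), a weighted marginal quantile with a known response probability. All names and the toy model are assumptions for illustration.

```python
import numpy as np

def ipw_quantile(y, observed, prop, tau=0.5):
    """Inverse-probability-weighted tau-th marginal quantile of y when
    some responses are missing: each observed case is weighted by the
    reciprocal of its response probability prop[i] ~ P(observed | covariates)."""
    seen = observed == 1
    yv, w = y[seen], 1.0 / prop[seen]
    order = np.argsort(yv)
    yv, cum = yv[order], np.cumsum(w[order]) / w.sum()  # weighted empirical CDF
    return yv[np.searchsorted(cum, tau)]  # smallest y with weighted CDF >= tau

# toy usage: y is missing at random through a logistic model in x
rng = np.random.default_rng(2)
n = 5_000
x = rng.standard_normal(n)
y = x + rng.standard_normal(n)
pi = 1.0 / (1.0 + np.exp(-(0.5 + x)))  # true response probability
obs = rng.uniform(size=n) < pi
print(ipw_quantile(y, obs, pi, tau=0.5))  # close to the true median, 0
```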

Friday, November 22, 2019 — 10:30 AM EST

Do Jumps Matter in the Long Run? A Tale of Two Horizons

Economic scenario generators (ESGs) for equities are important components of the valuation and risk management process of life insurance and pension plans. As the resulting liabilities are very long-lived, it is a desired feature of an ESG to replicate equity returns over such horizons. However, the short-term performance of the assets backing these liabilities may also trigger significant losses and, in turn, affect the financial stability of the insurer or plan. For example, a line of GLWBs with frequent withdrawals may trigger losses when subaccounts suddenly lose value after a stock market crash, and pension contributions may need to be revised after a long-lasting economic slump. Therefore, the ESG must replicate both short- and long-term stock price dynamics in a consistent manner, which is a critical problem in actuarial finance. Popular features of financial models include stochastic volatility and jumps, and as such, we would like to investigate how these features matter for typical long-term actuarial applications.

For a model to be useful in actuarial finance, it should at least replicate the dynamics of daily, monthly and annual returns (and any frequency in between). A crucial characteristic of returns at these scales is that the kurtosis tends to be very high on a daily basis (25-30) but close to 4-5 on an annual basis. We show that jump-diffusion models, featuring both stochastic volatility and jumps, cannot replicate such features if estimated with maximum likelihood. Using the generalized method of moments, we find that simple jump-diffusion models or regime-switching models (with at least three regimes) provide an excellent fit to the various moments observed at different time scales. Finally, we investigate three typical actuarial applications: $1 accumulated in the long run with no intermediate monitoring, a long-term solvency analysis with frequent monitoring, and a portfolio rebalancing problem, also with frequent monitoring and updates. Overall, we find that a stochastic volatility model with independent jumps or a regime-switching lognormal model with three regimes, both fitted with the GMM, yield the best fit to moments at different scales and also provide the most conservative figures in actuarial applications, especially when there is intermediate monitoring.
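To make the scale effect concrete, below is a rough simulation sketch, not the paper's models or estimates: a Merton-type jump-diffusion with illustrative parameters, showing how i.i.d. aggregation alone deflates heavy daily tails toward the Gaussian value at the annual scale (matching the 4-5 observed in annual data additionally requires features such as volatility persistence).

```python
import numpy as np
from scipy.stats import kurtosis

# Simulate daily log-returns from a simple Merton-type jump-diffusion:
# Gaussian diffusion plus a compound Poisson jump component.  All
# parameters are illustrative assumptions, not estimates from the talk.
rng = np.random.default_rng(3)
years, days = 4_000, 252
n = years * days
sigma = 0.12 / np.sqrt(days)          # daily diffusive volatility
counts = rng.poisson(5.0 / days, n)   # about 5 jumps per year
# one normal jump size per day approximates the compound Poisson sum,
# since the daily count is almost always 0 or 1 at this intensity
r = sigma * rng.standard_normal(n) + counts * rng.normal(-0.02, 0.04, n)

daily_k = kurtosis(r, fisher=False)   # Pearson kurtosis (Gaussian = 3)
annual_k = kurtosis(r.reshape(years, days).sum(axis=1), fisher=False)
print(f"daily kurtosis  ~ {daily_k:.1f}")   # far above 3: heavy daily tails
print(f"annual kurtosis ~ {annual_k:.1f}")  # aggregation pulls it back toward 3
```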

So yes, jumps or jump-like features are essential in the long run. This also illustrates how typical actuarial models fitted with maximum likelihood may be inadequate for reserving, economic capital and solvency analyses.

Thursday, November 28, 2019 — 4:00 PM EST

Bayesian inference of dynamic systems via constrained Gaussian processes

Ordinary differential equations are an important tool for modeling behaviors in science, such as gene regulation and epidemics. An important statistical problem is to infer and characterize the uncertainty of the parameters that govern the equations. We present a fast Bayesian inference method using constrained Gaussian processes, such that the derivatives of the Gaussian process must satisfy the dynamics of the differential equations. Our method completely avoids the numerical solver and is thus fast in practice. Our construction is cleanly embedded in a rigorous Bayesian framework, and is demonstrated to yield fast and reliable inference in a variety of practical scenarios.
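The talk's method is fully Bayesian; as a loose, non-Bayesian illustration of the underlying idea, the sketch below fits a Gaussian process to noisy observations, reads off the implied derivative through kernel derivatives, and scores an ODE parameter by how well the right-hand side matches that derivative. The RBF kernel, the logistic toy model, and all parameter values are assumptions for illustration.

```python
import numpy as np

def rbf(t1, t2, ell=1.0):
    d = t1[:, None] - t2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def d_rbf(t1, t2, ell=1.0):
    # derivative of the RBF kernel in its first argument:
    # the cross-covariance Cov(x'(t1), x(t2))
    d = t1[:, None] - t2[None, :]
    return -(d / ell ** 2) * np.exp(-0.5 * (d / ell) ** 2)

def gradient_match_loss(theta, t, y, f, ell=1.0, jitter=1e-2):
    """Discrepancy between the GP-implied derivative of the state and
    the ODE right-hand side f(x, theta) at the design points."""
    K = rbf(t, t, ell) + jitter * np.eye(len(t))
    alpha = np.linalg.solve(K, y)
    x_hat = rbf(t, t, ell) @ alpha     # GP posterior mean of x(t)
    dx_hat = d_rbf(t, t, ell) @ alpha  # implied posterior mean of x'(t)
    return np.mean((dx_hat - f(x_hat, theta)) ** 2)

# toy usage: logistic growth x' = theta * x * (1 - x), true theta = 1.5
rng = np.random.default_rng(4)
t = np.linspace(0.0, 5.0, 40)
x_true = 1.0 / (1.0 + 9.0 * np.exp(-1.5 * t))
y = x_true + 0.01 * rng.standard_normal(t.size)
f = lambda x, th: th * x * (1.0 - x)
for th in (1.0, 1.5, 2.0):
    print(th, gradient_match_loss(th, t, y, f))  # smallest near th = 1.5
```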

Friday, November 29, 2019 — 10:30 AM EST

Noncausal Affine Processes with Applications to Derivative Pricing

Linear factor models, where the factors are affine processes, play a key role in finance, since they allow for quasi-closed-form expressions of the term structure of risks. We introduce the class of noncausal affine linear factor models by considering factors that are affine in reverse time. These models are especially relevant for pricing sequences of speculative bubbles. We show that they feature much more complicated, non-affine dynamics in calendar time, while still providing (quasi) closed-form term structures and derivative pricing formulas. The framework is illustrated with zero-coupon bond and European call option pricing examples.
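For reference, the property that makes affine factor models tractable, stated in its standard (causal, continuous-time) form rather than the talk's noncausal construction:

```latex
% Affine property: the conditional moment generating function of the
% factor X_t is exponential-affine in the current state,
\[
\mathbb{E}\!\left[ e^{u^{\top} X_T} \,\middle|\, \mathcal{F}_t \right]
  = \exp\!\big( \phi(u, T - t) + \psi(u, T - t)^{\top} X_t \big),
\]
% where \phi and \psi solve Riccati-type ordinary differential equations.
% This is the source of quasi-closed-form term structures and prices.
```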
