Welcome to the Department of Statistics and Actuarial Science

The Department of Statistics and Actuarial Science is among the top academic units for statistical and actuarial science in the world and is home to more than 40 research-active full-time faculty members working in diverse and exciting areas. The Department is also home to over 900 undergraduate students and about 150 graduate students in programs including Actuarial Science, Biostatistics, Quantitative Finance, Statistics, and Statistics-Computing.

We are located on the University of Waterloo's main campus, at the heart of Canada's Technology Triangle about 100 kilometers west of Toronto.

  1. May 31, 2019: Master of Actuarial Science (MActSc) 10th Anniversary

    In 2019, the Master of Actuarial Science (MActSc) professional degree program will be celebrating 10 wonderful years at the University of Waterloo. 

    MActSc is an internationally renowned program in actuarial science and risk management, located within the Department of Statistics and Actuarial Science. This fast-track professional program is offered only to the best and brightest students from around the world. Once accepted, these students receive one-on-one training from prominent faculty in the field of actuarial science. After 10 rigorous and demanding years of staying on the cutting edge of the industry and training the most elite in this field, the MActSc program will celebrate by hosting a banquet dinner on May 31, 2019.

    This event will be a great opportunity for past and current students, faculty, and industry supporters to celebrate their hard work over the past decade.

  2. Nov. 23, 2018: Inspiring Student - Tianyi Liu

    MQF is such a great program. It’s a good combination of studying, researching, and finding a job – doing an internship.

    Tianyi Liu, Master of Quantitative Finance student

    A graduate of Waterloo’s undergraduate mathematical finance program, Tianyi was a natural fit for the Master of Quantitative Finance program. He knew many of his professors, and appreciated the computing facilities and the support of his department.

  3. Nov. 19, 2018: The Vector Institute recognizes MMath Data Science for Artificial Intelligence Scholarship
    Source: Vector Institute for Artificial Intelligence

    There are currently eight programs recognized by the Vector Institute for Artificial Intelligence for ensuring that master's graduates are equipped with the knowledge, skills, and competencies sought by industry. The Department of Statistics and Actuarial Science's Master of Mathematics (MMath) Data Science Specialization is one of the few recognized programs.

    The Vector Institute (Vector) has been tasked with supporting Ontario's growing AI ecosystem by helping Ontario's AI companies and labs source talent. This will be achieved through a combination of three primary activities: talent placement via internships; networking and AI community building; and applied AI education.

    A primary goal associated with applied AI education is to increase enrolment and the number of graduates from AI-related master’s programs. To support universities in achieving this goal, Vector has introduced the Vector Scholarships in Artificial Intelligence (VSAI) to recognize top students enrolled in AI-related master’s programs. 

    The VSAI are expected to enhance recruitment of top tier students, increase the number of applicants to recognized AI programs, increase access to advanced studies in AI, and build community among scholars.

Read all news
  1. Jan. 22, 2019: Department seminar by Alex Shestopaloff, The Alan Turing Institute

    Strategies for scaling iterated conditional Sequential Monte Carlo methods for high dimensional state space models

    The iterated Conditional Sequential Monte Carlo (cSMC) method is a particle MCMC method commonly used for state inference in non-linear, non-Gaussian state space models. Standard implementations of iterated cSMC provide an efficient way to sample state sequences in low-dimensional state space models. However, efficiently scaling iterated cSMC methods to perform well in models with a high-dimensional state remains a challenge. One reason for this is the use of a global proposal, without reference to the current state sequence in the MCMC run. In high dimensions, such a proposal will typically not be well-matched to the posterior and impede efficient sampling. I will describe a technique based on the embedded HMM (Hidden Markov Model) framework to construct efficient proposals in high dimensions that are local relative to the current state sequence. A second obstacle to scalability of iterated cSMC is not using the entire observed sequence to construct the proposal. Typical implementations of iterated cSMC use a proposal at time t that relies only on data up to time t. In high dimensions and in the presence of informative data, such proposals become inefficient, and can considerably slow down sampling. I will introduce a principled approach to incorporating future observations in the cSMC proposal at time t. By considering several examples, I will demonstrate that both strategies improve the performance of iterated cSMC for sequence sampling in high-dimensional state space models.
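
    The baseline the abstract criticizes can be seen in a toy implementation. The following is a minimal, illustrative sketch (not the speaker's code) of an iterated cSMC sweep for an assumed one-dimensional linear-Gaussian model; its bootstrap proposal is global, ignoring both the retained path and future observations, which is exactly the behaviour the two proposed strategies aim to improve.

```python
import numpy as np

# Minimal sketch of iterated conditional SMC (cSMC) for an assumed
# 1-D linear-Gaussian state space model:
#   x_t = phi * x_{t-1} + N(0, sigma_x^2),  y_t = x_t + N(0, sigma_y^2)
# All parameter values are illustrative assumptions.

rng = np.random.default_rng(0)
phi, sigma_x, sigma_y = 0.9, 1.0, 0.5
T, N = 50, 100  # time steps, particles

# Simulate synthetic data from the model.
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(T):
    x_true[t] = phi * (x_true[t - 1] if t > 0 else 0.0) + sigma_x * rng.normal()
    y[t] = x_true[t] + sigma_y * rng.normal()

def csmc_sweep(y, ref_path):
    """One cSMC sweep: run a particle filter with particle 0 pinned to the
    reference path, then sample a new path by tracing ancestry back."""
    x = np.zeros((T, N))
    anc = np.zeros((T, N), dtype=int)
    x[0] = sigma_x * rng.normal(size=N)      # global bootstrap proposal
    x[0, 0] = ref_path[0]                    # pin the reference particle
    for t in range(T):
        if t > 0:
            anc[t] = rng.choice(N, size=N, p=w / w.sum())  # resample
            anc[t, 0] = 0                    # keep the reference lineage
            x[t] = phi * x[t - 1, anc[t]] + sigma_x * rng.normal(size=N)
            x[t, 0] = ref_path[t]
        # Weights use only y_t -- no future data enters the proposal.
        w = np.exp(-0.5 * ((y[t] - x[t]) / sigma_y) ** 2)
    # Sample a final-time index and trace its ancestry backwards.
    k = rng.choice(N, p=w / w.sum())
    path = np.zeros(T)
    for t in reversed(range(T)):
        path[t] = x[t, k]
        if t > 0:
            k = anc[t, k]
    return path

# Iterate: each sweep conditions on the previously sampled path,
# giving an MCMC chain on state sequences.
path = np.zeros(T)
for _ in range(10):
    path = csmc_sweep(y, path)
```

    For a one-dimensional state this sampler mixes well; the abstract's point is that the same global, filtering-only proposal degrades as the state dimension grows.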

  2. Jan. 24, 2019: Department seminar by Minsuk Shin, Harvard University

    Some Priors for Nonparametric Shrinkage and Bayesian Sparsity Inference 

    In this talk, I introduce two novel classes of shrinkage priors for different purposes: functional HorseShoe (fHS) prior for nonparametric subspace shrinkage and neuronized priors for general sparsity inference. 

    In function estimation problems, the fHS prior encourages shrinkage towards parametric classes of functions. Unlike other shrinkage priors for parametric models, the fHS shrinkage acts on the shape of the function rather than inducing sparsity on model parameters. I study some desirable theoretical properties, including an optimal posterior concentration property on the function and model selection consistency. I apply the fHS prior to nonparametric additive models for some simulated and real data sets, and the results show that the proposed procedure outperforms state-of-the-art methods in terms of estimation and model selection.

    For general sparsity inference, I propose the neuronized priors to unify and extend existing shrinkage priors, such as one-group continuous shrinkage priors, continuous spike-and-slab priors, and discrete spike-and-slab priors with point-mass mixtures. The new priors are formulated as the product of a weight variable and a transformed scale variable via an activation function. By altering the activation function, practitioners can easily implement a large class of Bayesian variable selection procedures. Compared with classic spike-and-slab priors, the neuronized priors achieve the same explicit variable selection without employing any latent indicator variable, which results in more efficient MCMC algorithms and more effective posterior modal estimates. I also show that these new formulations can be applied to more general and complex sparsity inference problems, which are computationally challenging, such as structured sparsity and spatially correlated sparsity problems.
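
    The product construction in the abstract can be illustrated with a small sketch. Here each coefficient is theta_j = w_j * T(alpha_j - b), with Gaussian weight w_j and scale alpha_j; the ReLU activation and the bias b are illustrative choices (not necessarily the speaker's), but they show how the construction places positive probability on exact zeros without any latent indicator variable.

```python
import numpy as np

# Hedged sketch of a "neuronized" prior: theta = w * T(alpha - b),
# the product of a Gaussian weight variable and an activation T applied
# to a Gaussian scale variable. The ReLU activation and bias b = 1 are
# assumptions for illustration; with a positive bias, T(alpha - b) is
# exactly zero with positive probability, mimicking discrete
# spike-and-slab selection.

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(z, 0.0)

def sample_neuronized(p, bias=1.0, n_draws=10000):
    """Draw prior samples theta = w * relu(alpha - bias)."""
    w = rng.normal(size=(n_draws, p))        # weight variable
    alpha = rng.normal(size=(n_draws, p))    # scale variable
    return w * relu(alpha - bias)

theta = sample_neuronized(p=1, bias=1.0)
# Fraction of exact zeros is P(alpha < bias) = Phi(1), about 0.84.
prop_zero = np.mean(theta == 0.0)
```

    Swapping the activation (e.g. identity, softplus, or a hard threshold) changes which existing shrinkage prior the construction recovers, which is the unification the abstract describes.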

  3. Jan. 25, 2019: Department seminar by Linjun Zhang, University of Pennsylvania

    The Cost of Privacy: Optimal Rates of Convergence for Parameter Estimation with Differential Privacy

    With the unprecedented availability of datasets containing personal information, there are increasing concerns that statistical analysis of such datasets may compromise individual privacy. These concerns give rise to statistical methods that provide privacy guarantees at the cost of some statistical accuracy. A fundamental question is: to satisfy a given desired level of privacy, what is the best statistical accuracy one can achieve? Standard statistical methods fail to yield sharp results, and new technical tools are called for.

    In this talk, I will present a general lower bound argument to investigate the tradeoff between statistical accuracy and privacy, with application to three problems: mean estimation, linear regression, and classification, in both the classical low-dimensional and modern high-dimensional settings. For these statistical problems, we also design computationally efficient algorithms that match the minimax lower bound under the privacy constraints. Finally, I will show applications of these privacy-preserving algorithms to real data containing sensitive information, such as SNPs and body fat, for which privacy-preserving statistical methods are necessary.
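
    For the simplest of the three problems, private mean estimation, the accuracy-for-privacy tradeoff can be seen in the textbook Laplace mechanism. The sketch below is a standard illustration, not necessarily the minimax-optimal procedure from the talk; the data range [0, 1] and the privacy budget epsilon are assumptions.

```python
import numpy as np

# Standard Laplace-mechanism sketch of epsilon-differentially private
# mean estimation for data assumed to lie in [0, 1]. Illustrative only:
# the talk concerns sharper minimax-optimal procedures.

rng = np.random.default_rng(2)

def private_mean(x, epsilon, lo=0.0, hi=1.0):
    """epsilon-DP estimate of the mean of data bounded in [lo, hi].

    Changing one record moves the sample mean by at most (hi - lo) / n,
    so adding Laplace noise with scale sensitivity / epsilon gives
    epsilon-differential privacy. Smaller epsilon => more noise, i.e.
    privacy is bought with statistical accuracy.
    """
    x = np.clip(x, lo, hi)                   # enforce the assumed bounds
    n = len(x)
    sensitivity = (hi - lo) / n
    noise = rng.laplace(scale=sensitivity / epsilon)
    return x.mean() + noise

x = rng.uniform(size=10000)                  # synthetic data, mean ~ 0.5
est = private_mean(x, epsilon=1.0)
```

    The added noise has standard deviation proportional to 1/(n * epsilon), which is the kind of accuracy cost the talk's lower bounds make precise.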

All upcoming events

Faculty Joint Publications

Map of Faculty and PhD Student Backgrounds

Meet our people

Jun Cai

Research interests

Professor Cai's research interests are in the fields of actuarial science, applied probability, and mathematical finance, including distribution theory; insurance mathematics; modelling in insurance and finance; reliability; risk management; ruin theory; and stochastic ordering and stochastic dependence with applications in insurance and finance.