Research strengths

Actuarial Science


Actuarial risk management

The modern financial industry, comprising the banking and investment sectors as well as insurance companies and pension funds, relies heavily on modern risk analysis and risk management. As the array of financial products grows in variety and complexity, accurate and reliable risk management has become both more complex and more essential. Several high-profile failures in recent years, such as the rapid demise of Confederation Life, Enron, Barings Bank, and Long-Term Capital Management, together with the subprime crisis, have highlighted the importance of sound risk management and prudent solvency and capital requirements. These are precisely the areas where some of our faculty members have made significant contributions. Examples include developing a framework for modelling and analyzing the risks involved in long-term insurance contracts with embedded financial guarantees, such as segregated funds and variable annuities; analyzing the impact of regulatory changes on the overall insurance market and how such changes affect both market stability and risk-sharing among policyholders, insurers, and reinsurers (for example, the optimal design of insurance contracts and reinsurance arrangements under a Conditional Tail Expectation or Value at Risk objective or constraint); characterizing the behaviour of optimal credit portfolios in the "large portfolio" regime that serves as the foundation for current credit risk regulations; and designing fast algorithms for portfolio optimization using the Omega performance measure. Another key research area is mortality modelling and longevity risk management, motivated by the fact that people have been living longer than expected, which has resulted in substantial financial losses in annuity and pension portfolios.
We have developed stochastic mortality models, proposed sustainable hybrid pension schemes that combine the best features of the defined benefit and defined contribution systems, analyzed the demographic and financial risks associated with reverse mortgage contracts, and devised an economic pricing model for mortality-linked securities.
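For readers unfamiliar with the Value at Risk and Conditional Tail Expectation criteria mentioned above, the following minimal Python sketch (simulated lognormal losses, purely illustrative and not any particular faculty member's method) computes both risk measures empirically from a loss sample:

```python
import random
import statistics

def var_cte(losses, alpha=0.95):
    """Empirical Value at Risk (VaR) and Conditional Tail Expectation (CTE)
    at level alpha: VaR is the alpha-quantile of the losses, and CTE is the
    average loss given that the loss is at least VaR."""
    xs = sorted(losses)
    idx = int(alpha * len(xs))         # index of the empirical alpha-quantile
    var = xs[idx]
    cte = statistics.fmean(xs[idx:])   # mean of the worst (1 - alpha) tail
    return var, cte

# Illustrative loss sample: lognormal severities (a common loss model).
random.seed(42)
losses = [random.lognormvariate(0.0, 1.0) for _ in range(100_000)]
var95, cte95 = var_cte(losses)
print(f"VaR(95%) = {var95:.2f}, CTE(95%) = {cte95:.2f}")
```

By construction CTE is never smaller than VaR, which is one reason it is preferred as a tail-sensitive criterion in solvency regulation.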

Faculty Specialists:

  • Jun Cai: optimal reinsurance, capital allocation, stochastic modelling in quantitative risk management.
  • Ben Feng: portfolio optimization; Monte Carlo and quasi-Monte Carlo; quantitative risk management.
  • Mario Ghossoub: Efficiency and equilibria in risk-sharing markets; optimal reinsurance; contract design.
  • Adam Kolkiewicz: statistical inference for diffusion processes; Monte Carlo and quasi-Monte Carlo.
  • Bin Li: optimal insurance/reinsurance contract design, investment-linked products
  • Johnny Li: stochastic mortality modeling; securitization of longevity risk; reverse mortgages; pricing and hedging equity-linked insurance products.
  • David Saunders: pricing and hedging of investment-linked products.
  • Alexander Schied: risk measures
  • Ruodu Wang: Risk measures; risk sharing; risk aggregation
  • Chengguo Weng: optimal reinsurance design; portfolio insurance.
  • Gord Willmot: insurance solvency with jump processes, loss modelling and analysis.
  • Tony S. Wirjanto: statistical modeling, estimation and inference of financial data and risk measures.
  • Fan Yang: catastrophe insurance markets; risk aggregation; risk measures; insurance-linked securities.


Biostatistics

Biostatistics is an exciting area of statistics devoted to the development and application of innovative methods for using data to answer important questions in areas ranging from clinical trials and population health to biological science. When evaluating new pharmaceutical treatments, biostatisticians play an important role in all phases of drug development through the design and analysis of clinical trials. In population health research, biostatisticians work closely with epidemiologists on the design of cross-sectional, cohort, or retrospective studies to elucidate the causal mechanisms of chronic disease. In biology, interest may lie in how individual organisms or ecosystems respond to pollutants, as revealed through the analysis of experimental or observational data. Our department has a strong record of impact in a diverse range of areas in biostatistics. Our strengths include the analysis of life history data, longitudinal data analysis, the design and analysis of clinical trials, epidemiological studies, clustered data, methods for dealing with incomplete data and measurement error, causal inference, and studies of biological systems.
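The life history and survival analysis strengths mentioned above build on methods such as the Kaplan-Meier (product-limit) estimator for right-censored data. A minimal sketch on toy data (illustrative only):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier (product-limit) survival curve estimate.

    times  : observation times
    events : 1 for an observed failure, 0 for a right-censored observation
    Returns a list of (time, survival probability) at each failure time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        failures = leaving = 0
        while i < len(data) and data[i][0] == t:   # group ties at time t
            failures += data[i][1]
            leaving += 1
            i += 1
        if failures:
            surv *= 1.0 - failures / n_at_risk
            curve.append((t, surv))
        n_at_risk -= leaving
    return curve

# Toy data: failures at t = 1, 2, 4; censoring at t = 3, 5.
curve = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
for t, s in curve:
    print(f"S({t}) = {s:.2f}")   # S(1) = 0.80, S(2) = 0.60, S(4) = 0.30
```

Censored observations leave the risk set without forcing a drop in the curve, which is how the estimator uses incomplete follow-up without discarding it.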

Faculty Specialists

  • Audrey Béliveau: Network meta-analysis, evidence synthesis methods.
  • Christian Boudreau: survival analysis; competing risks; event history analysis; longitudinal studies.
  • Steve Brown: cluster randomized trial design and analysis; statistical models and methods in public health; smoking prevention and cessation trials and observational studies; school-based studies.
  • Shoja'-eddin Chenouri: statistical neuroscience; network data; high-dimensional data.
  • Richard Cook: life history analysis; longitudinal data; incomplete data; clinical trial design.
  • Cecilia Cotton: causal inference; longitudinal data; survival analysis.
  • Liqun Diao: survival and life history analysis; multi-state transitional models; incomplete data
  • Joel Dubin: longitudinal data methods and analysis, flexible models, survival analysis, predictive models, public health and biomedical applications.
  • Ali Ghodsi: bioinformatics.
  • Jerry Lawless: survival and life history analysis, statistical genetics, randomized trials.
  • Kun Liang: methods for high-throughput genomic data.
  • Martin Lysy: survival analysis; continuous-time models in pharmacodynamics and pharmacokinetics.
  • Paul Marriott: neuroinformatics.
  • Glen McGee: longitudinal data; informative observation mechanisms; Bayesian methods; epidemiology and environmental health
  • Zelalem Negeri: Aggregate data meta-analysis; Individual participant data meta-analysis; Diagnostic and screening test evaluation
  • Wayne Oldford: exploratory data analysis, data visualization.
  • Stefan Steiner: performance monitoring; diagnostic test reliability.
  • Cyntha Struthers: methods for modelling continuous-time longitudinal data subject to missingness
  • Michael Wallace: causal inference, longitudinal data, dynamic treatment regimes, precision medicine, measurement error.
  • Lan Wen: causal inference; missing data; longitudinal data; observational studies.
  • Changbao Wu: longitudinal surveys, observational studies and missing data
  • Leilei Zeng: longitudinal data analysis; multivariate and clustered data; multi-state transitional models; missing data problem.
  • Yeying Zhu: causal inference, mediation analysis.

Business and industrial statistics

Business and Industrial Statistics develops and applies quantitative methods and paradigms for inquiry and decision-making in business and industrial contexts. In business and industry, a wide variety of data is routinely collected from customers, products, and processes. Using statistics-based quality and process improvement methods, such as Six Sigma or Statistical Engineering, we combine these existing data streams with additional data from planned studies. The goal is to increase productivity, reduce costs, make better decisions, and have more satisfied customers. Our department has research strengths in experimental design, control charting, observational studies, problem-solving systems, and measurement system assessment. We have business contacts through the Business and Industrial Statistics Research Group (BISRG).
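As a small concrete instance of the control charting mentioned above, the sketch below computes Shewhart X-bar chart limits from subgroup means and ranges, using the classical A2 chart constants; the process data are simulated and purely illustrative:

```python
import random
import statistics

def xbar_limits(subgroup_means, subgroup_ranges, n):
    """Shewhart X-bar chart control limits from subgroup means and ranges.

    A2 is the standard X-bar/R chart factor for subgroups of size n
    (classical tabulated values for n = 2..6 shown here)."""
    A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577, 6: 0.483}[n]
    center = statistics.fmean(subgroup_means)
    rbar = statistics.fmean(subgroup_ranges)
    return center - A2 * rbar, center, center + A2 * rbar

# Hypothetical in-control process: 20 subgroups of size 5.
random.seed(1)
subgroups = [[random.gauss(10.0, 0.5) for _ in range(5)] for _ in range(20)]
means = [statistics.fmean(g) for g in subgroups]
ranges = [max(g) - min(g) for g in subgroups]
lcl, cl, ucl = xbar_limits(means, ranges, 5)
print(f"LCL = {lcl:.3f}, CL = {cl:.3f}, UCL = {ucl:.3f}")
```

Subgroup means falling outside the (LCL, UCL) band would signal a process shift worth investigating.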

  • Ryan Browne: measurement system assessment, design of experiments.
  • Shoja'-eddin Chenouri: high-dimensional data.
  • Jerry Lawless: analysis of systems and processes, prediction, reliability.
  • Wayne Oldford: data analytics, data visualization, statistical methods, internet applications
  • Stefan Steiner: process improvement systems; measurement system assessment; statistical process control; design of experiments.
  • Nathaniel Stevens: data analytics, online controlled experiments, measurement system evaluation, process monitoring.
  • Mu Zhu: recommender systems; president of the Business and Industrial Statistics Section in the Statistical Society of Canada (2012-13).

Computational statistics

Ongoing and seemingly relentless technological advances have made computers an indispensable tool for the modern statistician. On one hand, "big data" are becoming commonplace in areas of research such as brain imaging, high-frequency trading, machine learning, quality control, and climatology. Not only are sophisticated computational techniques required to scale efficiently to these large datasets, but their high dimensionality also calls for innovative methods of visualization, dimension reduction, and data mining. On the other hand, new technology encourages the use of more complex and realistic statistical models, prohibitively intractable only a decade ago. Such models may incorporate missing data imputation mechanisms, random effects, or intricate patterns of spatial and temporal dependence. Inference, prediction, and extreme value analysis for these models are active areas of departmental research, leading to the development of many algorithms and software packages involving Monte Carlo and quasi-Monte Carlo methods, approximation by surrogate models, and a variety of problem-specific optimization techniques.
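To illustrate the contrast between Monte Carlo and quasi-Monte Carlo methods mentioned above, the toy sketch below estimates pi both with pseudorandom points and with a hand-rolled two-dimensional Halton low-discrepancy sequence (illustrative only; serious quasi-Monte Carlo work uses dedicated libraries and more sophisticated constructions):

```python
import random

def halton(i, base):
    """i-th element of the Halton low-discrepancy sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def mc_pi(points):
    """Estimate pi as 4 times the fraction of points inside the unit quarter-circle."""
    inside = sum(1 for x, y in points if x * x + y * y <= 1.0)
    return 4.0 * inside / len(points)

n = 10_000
random.seed(0)
plain = [(random.random(), random.random()) for _ in range(n)]
qmc = [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]

print("plain MC :", mc_pi(plain))
print("quasi-MC :", mc_pi(qmc))
```

The low-discrepancy points cover the unit square more evenly than pseudorandom draws, which typically yields a faster-converging estimate for smooth integrands.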

  • Ryan Browne: Clustering, missing data problems.
  • Shoja'-eddin Chenouri: statistical neuroscience; network data; high-dimensional data.
  • Liqun Diao: machine learning; classification and regression trees; ensemble learning; recursive partitioning methods
  • Joel Dubin: longitudinal data methods and analysis, flexible models, survival analysis, predictive models, public health and biomedical applications.
  • Ben Feng: Monte Carlo and quasi-Monte Carlo; statistical learning and machine learning
  • Ali Ghodsi: machine learning, unsupervised learning, dimensionality reduction.
  • Aukosh Jagannath: theory of machine learning, particularly neural networks; computational complexity of statistical problems, particularly statistical-computational tradeoffs and probabilistic analysis of algorithms; Monte Carlo and optimization methods in high dimensions.
  • Christiane Lemieux: quasi-Monte Carlo methods; high-dimensional problems.
  • Martin Lysy: Monte Carlo methods; missing data problems; approximate Bayesian computations.
  • Zelalem Negeri: Markov Chain Monte Carlo; Monte Carlo Expectation Maximization; Bootstrap methods; Expectation-Maximization algorithm; Machine learning
  • Wayne Oldford: data visualization, exploratory data analysis, interactive methods, visual cluster analysis, high-dimensional data, design and development of statistical programming environments, development in R.
  • Matthias Schonlau: machine learning, natural language processing, data visualization.
  • Nathaniel Stevens: network modeling and analysis, treatment effect estimation, changepoint detection.
  • Mu Zhu: machine learning; classification; ensemble learning; sparse kernel machines.
  • Yeying Zhu: machine learning, dimension reduction, variable selection, kernel methods.

Econometrics and quantitative finance

The pricing and hedging of assets, and the management of investment portfolios and risks of many kinds, are problems that require both underlying statistical models and computational methods for assessing these models and drawing conclusions from them. Many traditional models fail to capture the more extreme behaviour of markets, behaviour which has the greatest impact on the economy and investment decisions. For this reason, more appropriate but complex models are often adopted, and these frequently require numerical or Monte Carlo methods for inference. These research areas include topics in financial time series and econometrics, option pricing, and computational and Monte Carlo models and methods for hedging, portfolio management, and risk assessment.
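As a small illustration of how Monte Carlo methods enter option pricing, the sketch below prices a European call under the textbook Black-Scholes model by simulating risk-neutral terminal prices, and checks the result against the closed form; the parameters are arbitrary illustrative values:

```python
import math
import random

def norm_cdf(x):
    """Standard normal cumulative distribution function via math.erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s0, k, r, sigma, t):
    """Black-Scholes closed-form price of a European call option."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s0 * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

def mc_call(s0, k, r, sigma, t, n, seed=0):
    """Monte Carlo price: average discounted payoffs over simulated
    risk-neutral terminal stock prices."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * t
    vol = sigma * math.sqrt(t)
    total = 0.0
    for _ in range(n):
        st = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(st - k, 0.0)
    return math.exp(-r * t) * total / n

exact = bs_call(100, 100, 0.05, 0.2, 1.0)
approx = mc_call(100, 100, 0.05, 0.2, 1.0, 200_000)
print(f"closed form {exact:.3f} vs Monte Carlo {approx:.3f}")
```

The same simulation machinery carries over to payoffs and models with no closed form, which is exactly where Monte Carlo methods earn their keep.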

  • Ben Feng: portfolio optimization; pricing and hedging investment-linked products
  • Mario Ghossoub: decision theory; ambiguity and model uncertainty; quantitative risk management; optimal transport and robust risk management.
  • Adam Kolkiewicz: pricing and hedging of financial and insurance contracts.
  • Bin Li: portfolio optimization, model uncertainty, time inconsistency
  • Martin Lysy: stochastic volatility modeling; time series residual analysis.
  • Greg Rice: panel data; high dimensional time series analysis.
  • David Saunders: portfolio optimization; quantitative risk management; credit risk.
  • Alexander Schied: pricing and hedging, model risk, market impact 
  • Yi Shen: time series analysis, regression discontinuity design.
  • Ruodu Wang: Portfolio optimization; robust finance; credit risk modeling; behavioral economics and decision theory.
  • Chengguo Weng: unit root process; portfolio optimization.
  • Tony S. Wirjanto: Statistical modeling, estimation and inference of financial data and risk measures.
  • Fan Yang: estimation of risk measures; systemic risk; portfolio optimization.

Risk theory

The modelling and analysis of the claims experience on a portfolio of business, and how it unfolds over time, are longstanding and important actuarial research problems. Incorporating the incidence and severity of claims necessitates a variety of quantitative tools from applied probability and related areas of stochastic jump processes. Recent mathematical advances, coupled with the ready availability of computational resources, allow for the use of more complex and realistic insurance claim models. Sound financial risk management is thus enhanced by these approaches. Our department continues to provide international research leadership in insurance risk theory and loss models. In particular, our strengths lie in the analysis of surplus processes; the analysis of insurance loss and other statistical distributions, with particular emphasis on right-tail behaviour; stochastic ordering and dependence among risks; and applications to reinsurance and claims reserving.
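The surplus analysis described above is classically framed in the Cramér-Lundberg model, U(t) = u + c*t - S(t), where u is the initial surplus, c the premium rate, and S(t) a compound Poisson claims process. A minimal simulation sketch (exponential claims, illustrative parameters) estimates a finite-horizon ruin probability:

```python
import random

def ruin_probability(u, c, lam, mean_claim, horizon, n_paths, seed=0):
    """Estimate the probability of ruin before `horizon` for the classical
    Cramér-Lundberg surplus process U(t) = u + c*t - S(t), where S(t) is a
    compound Poisson sum (Poisson rate `lam`, exponential claim sizes).
    Ruin can only occur at claim epochs, so we check the surplus there."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t = 0.0
        total_claims = 0.0
        while True:
            t += rng.expovariate(lam)            # waiting time to next claim
            if t > horizon:
                break                            # survived the horizon
            total_claims += rng.expovariate(1.0 / mean_claim)
            if u + c * t - total_claims < 0.0:   # ruin at a claim epoch
                ruined += 1
                break
    return ruined / n_paths

# Illustrative parameters: 20% premium loading (c = 1.2 * lam * mean_claim).
p_ruin = ruin_probability(u=10.0, c=1.2, lam=1.0, mean_claim=1.0,
                          horizon=100.0, n_paths=20_000)
print(f"estimated ruin probability: {p_ruin:.3f}")
```

For exponential claims the infinite-horizon ruin probability is known in closed form, which makes this a convenient test case; simulation becomes essential for the heavier-tailed and dependent claim models studied in the department.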

  • Jun Cai: ruin theory,  risk analysis, stochastic modelling in insurance and finance.
  • Steve Drekic: ruin theory, matrix analytic methods, computational procedures.
  • David Landriault: risk and ruin theory; dependence modelling in actuarial science; discounted aggregate claims.
  • Christiane Lemieux: ruin theory; adaptive premium policies.
  • Bin Li: exotic ruin problems, optimization problems in risk theory
  • Chengguo Weng: heavy-tailed distributions in the presence of dependence; insurance claim modelling.
  • Gord Willmot: mixed and compound loss models, time dependent claims, ruin theory.


Applied probability

The field of applied probability is quite general, predominantly focusing on applications of probability theory to stochastic systems arising in scientific and engineering domains, including actuarial science, operations research, computer science, and finance. One of the main branches of applied probability is queueing theory, broadly defined as the mathematical study of waiting lines and congestion. Queueing theory finds important uses in applications as diverse as health care and computer systems performance evaluation, and it often provides the essential framework for streamlining processes and reducing or removing bottlenecks. As more and more emphasis is placed on the quality of service experienced by customers, there is a growing need for mathematical queueing models that incorporate realistic features such as flexible and/or preferential service policies in combination with varying job types. In today's world, queueing theorists, or more generally applied probabilists, seek to employ cutting-edge mathematical techniques in conjunction with more powerful and sophisticated computational resources to analyze, in a tractable way, more complicated stochastic processes which, in turn, more precisely reflect real-world conditions.
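As the simplest concrete instance of a queueing model, the M/M/1 queue (Poisson arrivals, exponential service, one server) admits textbook closed-form steady-state measures; the sketch below computes them for illustrative rates:

```python
def mm1_metrics(lam, mu):
    """Steady-state performance measures of an M/M/1 queue.

    lam : arrival rate, mu : service rate (stability requires lam < mu).
    Returns utilization rho, mean number in system L, and mean time in
    system W; these satisfy Little's law, L = lam * W."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu
    L = rho / (1.0 - rho)    # mean number of customers in the system
    W = 1.0 / (mu - lam)     # mean sojourn time (wait + service)
    return rho, L, W

rho, L, W = mm1_metrics(lam=8.0, mu=10.0)
print(f"utilization {rho:.0%}, mean in system {L:.1f}, mean sojourn {W:.2f}")
```

Note how congestion explodes as the utilization rho approaches one; the realistic features mentioned above (priorities, multiple job types, flexible servers) quickly take models beyond such closed forms and into the matrix analytic and simulation territory our faculty work in.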

  • Jun Cai: dependence  modelling, stochastic comparison, stochastic processes.
  • Steve Drekic: queueing theory, stochastic models, matrix analytic methods.
  • Aukosh Jagannath: probability theory and its applications to physics, computation and data science, high-dimensional probability, stochastic analysis and stochastic processes in high-dimensions, probabilistic analysis of algorithms, learning theory
  • David Landriault: first passage times, stochastic processes, matrix analytic methods.
  • Bin Li: stochastic analysis, stochastic control
  • Martin Lysy: stochastic differential and integro-differential equations.
  • Greg Rice: non-linear stationary processes.
  • Alexander Schied: stochastic analysis, stochastic processes, stochastic optimal control
  • Yi Shen: stochastic processes and random fields, random topology, extreme value theory.
  • Ruodu Wang: Optimal transport; dependence modeling; martingales.
  • Gord Willmot:  renewal and other jump processes, transform techniques, first passage times.
  • Fan Yang: asymptotic analysis; extreme value theory; multivariate dependence.

Statistical modeling and inference

Statistical models are key to understanding the natural, experimental, and technological processes in the world around us. They are ubiquitous and arise in diverse fields including economics, finance and actuarial science, manufacturing, pattern recognition, shape analysis, and the study of chronic disease. Modern datasets are typically massive, often obtained from complex sampling designs, and are subject to other complexities such as incomplete information. They require highly sophisticated methods and algorithms for their statistical analysis to reveal important relationships, facilitate causal inferences, and make predictions. All decision-making and other inference in the presence of uncertainty implicitly requires statistical models of the extent and nature of this uncertainty. Statistical modeling and inference are thus not only at the foundation of the statistical sciences, but at the core of all inference drawn from data. Since the inception of our department, faculty members have made pioneering contributions to the field of statistical inference, including key advances in the theory and applications of likelihood and estimating function methodology.
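As a small worked instance of likelihood methodology, the sketch below (exponential model, simulated data, illustrative only) computes the maximum likelihood estimator of a rate and confirms numerically that it maximizes the log-likelihood:

```python
import math
import random

def exp_loglik(lam, data):
    """Log-likelihood of an i.i.d. exponential sample with rate `lam`:
    n * log(lam) - lam * sum(x)."""
    return len(data) * math.log(lam) - lam * sum(data)

# Setting the score n/lam - sum(x) to zero gives the closed-form MLE
# lam_hat = n / sum(x) = 1 / (sample mean).
random.seed(3)
data = [random.expovariate(2.0) for _ in range(50_000)]  # true rate 2.0
lam_hat = len(data) / sum(data)
print(f"MLE of the rate: {lam_hat:.3f} (true value 2.0)")
```

The same recipe, solving an estimating equation derived from a likelihood or a quasi-likelihood, underlies much of the estimating function methodology mentioned above, with numerical root-finding replacing the closed form in richer models.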

  • Audrey Béliveau: Bayesian methods, hierarchical modeling, applications to ecology and epidemiology.
  • Steve Brown: generalized linear models for clustered data.
  • Ryan Browne: models for mixed-type data and multivariate-skewed density.
  • Shoja'-eddin Chenouri: data depth; nonparametric inference; statistical neuroscience; network data; high-dimensional data.
  • Cecilia Cotton: causal inference; longitudinal data; survival analysis.
  • Richard Cook: likelihood methods; estimating functions; model misspecification.
  • Liqun Diao: dependence modeling with copulas; likelihood methods; estimating functions; recursive partitioning methods
  • Joel Dubin: longitudinal data methods and analysis, flexible models, survival analysis, predictive models, public health and biomedical applications.
  • Ali Ghodsi: probabilistic inference and graphical models.
  • Aukosh Jagannath: statistical-computational trade offs, high-dimensional statistics, statistical models in physics and computation
  • Jerry Lawless: estimation and testing methodology, inference from stochastic processes, life history models, regression models, prediction.
  • Pengfei Li: mixture model; empirical likelihood; smoothing; asymptotics.
  • Kun Liang: multiple testing; Bayesian methods.
  • Martin Lysy: stochastic modeling of single molecule experiments; mediation analysis; Gaussian process regression.
  • Paul Marriott: applications of geometry to statistical modelling, mixture models.
  • Glen McGee: longitudinal data; informative observation mechanisms; Bayesian methods; epidemiology and environmental health
  • Zelalem Negeri: Bivariate random-effects modelling; Robust statistical methods for meta-analyses; Outlier detection and accommodation
  • Wayne Oldford: exploratory data analysis, visual significance tests, multivariate analysis, history and philosophy of statistics.
  • Yingli Qin: hypothesis testing for high-dimensional data; random matrix theory.
  • Greg Rice: functional data analysis; change point analysis; asymptotics.
  • Peijun Sang: functional data analysis, nonparametric regression, large sample theory.
  • Alexander Schied: robust estimation
  • Stefan Steiner: estimating equations, fixed and random effects models, incorporating baseline data, risk adjustment
  • Mary Thompson: estimation theory, inference for stochastic processes, inference from spatial and network data.
  • Ruodu Wang: E-values; selective inference and multiple testing
  • Gord Willmot: counting distributions, mixed and compound distributions.
  • Tony S. Wirjanto: statistical modeling, estimation and inference of financial data and risk measures.
  • Changbao Wu: empirical likelihood methods, estimating equations, resampling methods, missing data problems
  • Mu Zhu: multivariate analysis; dimension reduction; variable selection; nonparametric regression.

Survey methods

Scientific surveys of human and natural populations typically have complex probability sampling designs. Constructing sampling designs which are economical and efficient for inference is an important part of survey methodology. Once the data have been collected, analyses of the data must take into account both the complex sampling procedures and sources of non-sampling error such as non-response, missing data and measurement error. Besides extending the repertoire of techniques available for analyses, a survey methodologist has opportunities to work with researchers in other disciplines and sectors on the design of sampling and modern data collection methods for specific surveys. Our faculty include several specialists in survey methods, who are pursuing both theoretical and applied research. Topics include the analysis of longitudinal and survival data from complex surveys, the use of empirical likelihood in survey analysis, and the treatment of samples collected from several frames and with multiple data collection modes. The Survey Research Centre, a joint venture with the Department of Sociology and Legal Studies, provides advice and carries out survey fieldwork for researchers in all faculties of the University.
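Design-based analysis of the complex samples described above typically weights each observation by the inverse of its inclusion probability, the Horvitz-Thompson estimator. A minimal sketch (simulated population, simple random sampling without replacement, illustrative only):

```python
import random

def horvitz_thompson_total(values, inclusion_probs):
    """Horvitz-Thompson estimator of a population total: each sampled value
    is weighted by the inverse of its inclusion probability."""
    return sum(y / pi for y, pi in zip(values, inclusion_probs))

# Illustrative population; under simple random sampling without replacement
# every unit has the same inclusion probability n/N.
random.seed(7)
N, n = 10_000, 500
population = [random.gauss(50.0, 10.0) for _ in range(N)]
sample = random.sample(population, n)
estimate = horvitz_thompson_total(sample, [n / N] * n)
print(f"estimated total: {estimate:,.0f}  (true total: {sum(population):,.0f})")
```

The same estimator applies unchanged to unequal-probability designs; there the inclusion probabilities differ across units, which is where careful design and weight construction matter.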

  • Audrey Béliveau: capture-recapture, environmental and ecological surveys.
  • Christian Boudreau: sampling weights, sampling design, longitudinal and panel surveys, survival analysis with complex survey data, health behaviour surveys.
  • Steve Brown: design and analysis of school-based surveys of youth; longitudinal surveys of health behaviours.
  • Jerry Lawless: longitudinal surveys.
  • Matthias Schonlau: web surveys; panel surveys; open-ended questions.
  • Mary Thompson: sampling theory, resampling methods, survey design.
  • Changbao Wu: design and analysis of complex surveys