David Sprott Distinguished Lecture by Jerome Friedman

Friday, September 13, 2013, 2:30 pm EDT (GMT -04:00)

Sparsity, Boosting and Ensemble Methods

Jerome Friedman
Statistical or machine learning involves predicting future outcomes from past observations. Many present-day applications involve large numbers of predictor variables, sometimes far larger than the number of cases or observations available to train the learning algorithm. In such situations, traditional statistical methods fail. Applying regularization techniques can often produce accurate predictions in these settings. This talk will describe the basic principles underlying regularization and then focus on methods that attempt to exploit sparsity in the predicting model. A fast gradient boosting algorithm is described that can implement a wide variety of regularization methods for linear predictive models. It is then extended to nonlinear modeling, giving rise to general learning ensembles.
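To make the idea concrete, the sketch below is not the algorithm from the talk but a minimal illustration of the same principle: incremental forward stagewise fitting (epsilon-boosting) of a linear model, where each step nudges the coefficient most correlated with the current residual, and stopping early yields a sparse, regularized fit. The function and parameter names (forward_stagewise, eps, n_steps) are illustrative only.

```python
import numpy as np

def forward_stagewise(X, y, n_steps=1000, eps=0.01):
    """Incremental forward stagewise (epsilon-boosting) for a sparse linear model.

    At each step, move the coefficient most correlated with the current
    residual by a small amount eps; stopping early acts as regularization
    and leaves most coefficients exactly zero.
    """
    n, p = X.shape
    beta = np.zeros(p)
    intercept = y.mean()
    residual = y - intercept          # work with the centered response
    for _ in range(n_steps):
        corr = X.T @ residual         # (negative) gradient of squared loss w.r.t. beta
        j = np.argmax(np.abs(corr))   # predictor most correlated with the residual
        step = eps * np.sign(corr[j])
        beta[j] += step
        residual -= step * X[:, j]    # update the residual
    return intercept, beta

# Toy usage: sparse ground truth with many candidate predictors.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize predictors
true_beta = np.zeros(50)
true_beta[:3] = [2.0, -1.5, 1.0]
y = X @ true_beta + 0.5 * rng.standard_normal(100)
b0, beta = forward_stagewise(X, y, n_steps=400, eps=0.01)
print("nonzero coefficients:", np.flatnonzero(np.abs(beta) > 1e-8))
```

Taking many tiny gradient-directed steps and stopping early plays the role of an explicit sparsity penalty; the resulting coefficient paths closely track those of the lasso, which is one way a boosting-style algorithm can implement regularized linear modeling.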

Jerome Friedman, Stanford University

Jerome Friedman is professor of statistics at Stanford University, where he has held a faculty position since 1982. His outstanding contributions to statistical methods and computer science in the areas of nonparametric statistics and machine learning have earned him many honours and awards. His two books, Classification and Regression Trees (1984, co-authored with Leo Breiman, Richard Olshen, and Charles Stone) and The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2001, co-authored with Trevor Hastie and Rob Tibshirani), are among the most widely used references in statistics, machine learning, and data mining.


  • Everyone welcome.
  • Reception will follow in the Bruce White Atrium.
  • Co-sponsored by the Department of Statistics and Actuarial Science and Google.

See "Sparsity, Boosting and Ensemble Methods" lecture poster (PDF)