Master's Defense | Michael Hynes, A Taylor polynomial expansion line search for large-scale optimization

Wednesday, August 10, 2016 10:00 AM EDT

MC 6460

Speaker

Michael Hynes
Department of Applied Mathematics, University of Waterloo

Title

A Taylor polynomial expansion line search for large-scale optimization

Abstract

The landscape of distributed computing has changed in response to the Big
Data deluge.  Large commodity hardware clusters, typically
operating in some form of MapReduce framework, are becoming prevalent for
organizations that require both tremendous storage capacity and fault
tolerance.  However, the high cost of communication can dominate the
computation time in large-scale optimization routines in these frameworks.
This thesis considers the problem of how to efficiently conduct univariate
line searches in commodity clusters in the context of gradient-based batch
optimization algorithms, such as the widely used limited-memory BFGS (LBFGS) method.
In it, a new line search technique is proposed for cases where the
underlying objective function is analytic, as in logistic regression and
low-rank matrix factorization.  The technique approximates the objective
function by a truncated Taylor polynomial along a fixed search direction.
The coefficients of this polynomial may be computed efficiently in parallel
with far less communication than needed to transmit the high-dimensional
gradient vector, after which the polynomial may be minimized with high
accuracy in a neighbourhood of the expansion point without distributed
operations. This Polynomial Expansion Line Search (PELS) may be invoked
iteratively until the expansion point and minimum are sufficiently accurate,
and can provide substantial savings in time and communication costs when
multiple iterations in the line search procedure are required.
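To fix ideas, consider the logistic-regression case.  Writing phi(t) =
f(w + t*d) for the loss f, iterate w, and search direction d, the data
enter phi only through the scalars z_i = x_i . w and u_i = x_i . d, so a
truncated Taylor expansion of phi can be assembled from a handful of
scalar sums.  The following minimal single-machine Python sketch
illustrates this idea; the function name pels_step and all variable
names are illustrative, not taken from the thesis.

    import numpy as np

    def pels_step(X, y, w, d):
        """Cubic Taylor surrogate line search for the logistic loss
        f(w) = sum_i log(1 + exp(-y_i * x_i @ w)), y_i in {-1, +1}."""
        z = X @ w                          # margins z_i = x_i . w
        u = X @ d                          # directional terms u_i = x_i . d
        p = 1.0 / (1.0 + np.exp(y * z))    # sigma(-y_i z_i)
        # Taylor coefficients of phi(t) = sum_i g(z_i + t u_i) about t = 0,
        # using g'(s) = -y p, g''(s) = y^2 p(1-p), g'''(s) = -y^3 p(1-p)(1-2p)
        c0 = np.sum(np.logaddexp(0.0, -y * z))
        c1 = np.sum(-y * p * u)
        c2 = np.sum(y**2 * p * (1 - p) * u**2) / 2.0
        c3 = np.sum(-y**3 * p * (1 - p) * (1 - 2 * p) * u**3) / 6.0
        # Critical points of the cubic: roots of c1 + 2 c2 t + 3 c3 t^2
        roots = np.roots([3 * c3, 2 * c2, c1])
        cands = np.append(roots[np.isreal(roots)].real, 1.0)  # unit-step fallback
        phi = lambda t: c0 + c1 * t + c2 * t**2 + c3 * t**3
        return cands[np.argmin(phi(cands))]

A caller would take the step w + t*d with t = pels_step(X, y, w, d).  In a
cluster, only the four coefficient sums need to cross the network, rather
than the full high-dimensional gradient.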

Three applications of the PELS technique are presented herein for important
classes of analytic functions: (i) Logistic Regression (LR), (ii) low-rank
Matrix Factorization (MF) models, and (iii) the feedforward multilayer
perceptron (MLP). In addition, for LR and MF, implementations of PELS in
the Apache Spark framework for fault-tolerant cluster computing are
provided. These implementations yielded significant convergence
improvements for their respective algorithms, and will be of interest to
Spark and Hadoop practitioners.  For instance, the Spark PELS technique
reduced the number of iterations and time required by LBFGS to reach
terminal training accuracies for LR models by factors of 1.8 to 2.
Substantial acceleration was also observed for the Nonlinear Conjugate
Gradient algorithm applied to MLP models, an interesting case for future
study in the optimization of neural network models.  The PELS technique is
applicable to a broad class of models for Big Data processing and
large-scale optimization, and can be a useful component of batch
optimization routines.
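
As a rough illustration of the communication pattern in Spark, the
following hypothetical PySpark sketch (not the thesis implementation; all
names, parameters, and data are illustrative) reduces each partition to
four scalar coefficient sums, so the driver receives a constant number of
values per partition instead of a dim-dimensional gradient.

    import numpy as np
    from pyspark import SparkContext

    sc = SparkContext(appName="pels-sketch")
    rng = np.random.default_rng(0)

    # Toy logistic-regression data, partitioned across the cluster.
    n, dim = 1000, 100
    data = sc.parallelize(
        [(rng.standard_normal(dim), float(rng.choice([-1.0, 1.0])))
         for _ in range(n)], numSlices=8)

    w = np.zeros(dim)                # current iterate
    d = rng.standard_normal(dim)     # e.g. an LBFGS search direction
    wB, dB = sc.broadcast(w), sc.broadcast(d)

    def partition_coeffs(samples):
        # Reduce one partition to its contribution to the four Taylor
        # coefficients of phi(t) = f(w + t d) for the logistic loss.
        c = np.zeros(4)
        for x, y in samples:
            z, u = x @ wB.value, x @ dB.value
            p = 1.0 / (1.0 + np.exp(y * z))
            c += [np.logaddexp(0.0, -y * z),
                  -y * p * u,
                  y**2 * p * (1 - p) * u**2 / 2.0,
                  -y**3 * p * (1 - p) * (1 - 2 * p) * u**3 / 6.0]
        yield c

    coeffs = data.mapPartitions(partition_coeffs).reduce(np.add)
    # Four numbers reach the driver, versus 'dim' numbers for a gradient;
    # the cubic surrogate is then minimized locally as in the sketch above.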
