Title: On the linear convergence of the Frank-Wolfe algorithm
Speaker: Javier Pena
Affiliation: Carnegie Mellon University
Room: MC 6486
Abstract:
The Frank-Wolfe algorithm, also known as the conditional gradient algorithm, is a first-order algorithm to minimize a convex function over a convex set. The algorithm relies only on a gradient oracle for the objective function and a linear oracle for the constraint set. This algorithm was introduced in the 1950s and has had a remarkable surge in popularity in the last few years. The algorithm is especially well-suited to generate parsimonious solutions to large-scale optimization problems.
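A minimal sketch of how the two oracles drive one iteration of the method: linearize the objective at the current point with the gradient oracle, call the linear oracle to find a feasible minimizer of that linearization, and step toward it. The quadratic objective, probability-simplex constraint set, and all function names below are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def frank_wolfe(grad, linear_oracle, x0, num_iters=200):
    """Frank-Wolfe with the classic open-loop step size gamma_t = 2/(t+2).

    grad:          gradient oracle for the objective function.
    linear_oracle: returns argmin_{s in C} <g, s> for a given gradient g.
    x0:            feasible starting point in C.
    """
    x = x0.astype(float)
    for t in range(num_iters):
        g = grad(x)
        s = linear_oracle(g)             # extreme point minimizing the linearization
        gamma = 2.0 / (t + 2.0)          # standard diminishing step size
        x = (1 - gamma) * x + gamma * s  # convex combination, so x stays feasible
    return x

# Illustrative instance: minimize ||x - b||^2 over the probability simplex.
# The linear oracle over the simplex simply returns the vertex e_i
# corresponding to the smallest gradient coordinate.
b = np.array([0.1, 0.6, 0.3])
grad = lambda x: 2.0 * (x - b)

def simplex_oracle(g):
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

x0 = np.ones(3) / 3
x = frank_wolfe(grad, simplex_oracle, x0)
```

Because every iterate is a convex combination of a few simplex vertices, the solution is sparse by construction, which is the "parsimonious" behavior the abstract refers to.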
This talk will describe the Frank-Wolfe algorithm and some of its variants. We will place special emphasis on recent results related to the linear convergence of the algorithm when the objective function is strongly convex with Lipschitz gradient and the constraint set is a polytope. In this case the main linear convergence statement is an elegant generalization of the analogous linear convergence property of the gradient descent method for unconstrained convex minimization.
This talk is based on joint work with Daniel Rodriguez, Department of Mathematical Sciences, Carnegie Mellon University.