Ke Li, Research Scientist, Google
In this talk, I will present our work on overcoming two long-standing problems in machine learning and computer vision:
1. Mode collapse in generative adversarial nets (GANs)
Generative adversarial nets (GANs) are perhaps the most popular class of generative models in use today. Unfortunately, they suffer from the well-documented problem of mode collapse, which the many successive variants of GANs have failed to overcome. I will illustrate why mode collapse happens fundamentally and show a simple way to overcome it, which forms the basis of a new method known as Implicit Maximum Likelihood Estimation (IMLE). Whereas a conditional GAN can only generate identical images from the same input, conditional IMLE can generate arbitrarily many diverse images from the same input.
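To make the idea concrete, here is a minimal NumPy sketch of the matching step at the heart of IMLE, under my own assumptions about the setup (the function names, the toy linear generator, and the pool size are illustrative, not the authors' reference implementation). The key reversal relative to a GAN: instead of each generated sample finding a data point to imitate, each *data point* claims its nearest generated sample, so no mode of the data can be left uncovered.

```python
import numpy as np

def imle_matches(data, generate, latent_dim, n_samples, rng):
    """For each real data point, find the nearest generated sample in a
    pool and return the matched latent codes.  Training then minimises
    ||generate(z_match) - x||^2 over these pairs, so every data point
    pulls some sample towards it -- the mechanism by which IMLE avoids
    mode collapse.  (Illustrative sketch only.)"""
    z = rng.standard_normal((n_samples, latent_dim))
    fake = generate(z)                       # (n_samples, data_dim)
    idx = []
    for x in data:
        d2 = np.sum((fake - x) ** 2, axis=1)  # squared distances to x
        idx.append(int(np.argmin(d2)))        # nearest generated sample
    return z[idx]                             # one matched code per data point

# Toy usage: a fixed linear "generator" and two well-separated data modes.
rng = np.random.default_rng(0)
generate = lambda z: 2.0 * z
data = np.array([[5.0, 5.0], [-5.0, -5.0]])
matched = imle_matches(data, generate, latent_dim=2, n_samples=64, rng=rng)
```

Note that both modes receive a match by construction; a GAN's objective has no such per-data-point term, which is why whole modes can be dropped.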
2. Curse of dimensionality in exact nearest neighbour search
Efficient algorithms for exact nearest neighbour search developed over the past 40 years do not work in high (intrinsic) dimensions, due to the curse of dimensionality. It turns out that this problem is not insurmountable. I will explain how the curse of dimensionality arises and show a simple way to overcome it, which gives rise to a new family of algorithms known as Dynamic Continuous Indexing (DCI).
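The core idea can be sketched as follows; this is a deliberately simplified illustration under my own assumptions, omitting the prioritised multi-composite search of the actual DCI algorithm (class and parameter names are mine). Rather than partitioning space into cells, which is what breaks down in high dimensions, the data is indexed by its projections onto a handful of random directions, and a query only pays the cost of exact distance computations on a shortlist of candidates whose projections lie near the query's.

```python
import numpy as np

class SimpleDCI:
    """Simplified sketch of Dynamic Continuous Indexing: sort the data
    along a few random directions, shortlist points whose 1-D projections
    fall near the query's, and verify true distances only on the
    shortlist.  (Illustrative only; the real DCI prioritises candidates
    across several composite indices.)"""

    def __init__(self, data, n_directions, rng):
        self.data = data
        self.dirs = rng.standard_normal((n_directions, data.shape[1]))
        self.dirs /= np.linalg.norm(self.dirs, axis=1, keepdims=True)
        proj = data @ self.dirs.T                     # (n_points, n_directions)
        self.order = np.argsort(proj, axis=0)         # sorted index per direction
        self.sorted_proj = np.take_along_axis(proj, self.order, axis=0)

    def query(self, q, n_candidates):
        qp = q @ self.dirs.T
        shortlist = set()
        for j in range(self.dirs.shape[0]):
            # Binary-search the sorted projections, then take a window
            # of neighbours around the insertion point.
            pos = int(np.searchsorted(self.sorted_proj[:, j], qp[j]))
            lo = max(0, pos - n_candidates)
            hi = min(len(self.data), pos + n_candidates)
            shortlist.update(self.order[lo:hi, j].tolist())
        cand = np.array(sorted(shortlist))
        dists = np.linalg.norm(self.data[cand] - q, axis=1)
        return int(cand[np.argmin(dists)])

# Usage: index 200 random 16-dimensional points and query one of them.
rng = np.random.default_rng(0)
points = rng.standard_normal((200, 16))
index = SimpleDCI(points, n_directions=5, rng=rng)
nearest = index.query(points[7], n_candidates=10)
```

Because projections are continuous rather than discretised into cells, the candidate count needed for a correct answer grows with the data's intrinsic dimension rather than its ambient dimension, which is the intuition behind DCI's escape from the curse of dimensionality.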
Bio: Ke Li is a recent Ph.D. graduate from UC Berkeley, where he was advised by Prof. Jitendra Malik, and is currently a Research Scientist at Google and a Member of the Institute for Advanced Study (IAS). He is interested in a broad range of topics in machine learning and computer vision and has worked on nearest neighbour search, generative modelling and Learning to Optimize. He is particularly passionate about tackling long-standing fundamental problems that cannot be tackled with a straightforward application of conventional techniques. He received his Hon. B.Sc. in Computer Science from the University of Toronto in 2014.