Please note: This PhD seminar will be given online.
Guojun Zhang, PhD candidate
David R. Cheriton School of Computer Science
Supervisors: Professors Yaoliang Yu and Pascal Poupart
Differential games, in particular two-player sequential games (a.k.a. minimax optimization), have long been an important modeling tool in applied science and have received renewed interest in machine learning due to many recent applications. To account for their sequential and nonconvex nature, new solution concepts and algorithms have been developed.
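For readers new to the terminology, the sequential formulation referred to above can be written, in generic notation not taken from the talk, as

\[
\min_{x \in \mathbb{R}^n} \; \max_{y \in \mathbb{R}^m} \; f(x, y),
\]

where the minimizing player commits to $x$ first and the maximizing player then best-responds with $y$. When $f$ is nonconvex-nonconcave, global saddle points need not exist, which motivates local solution concepts such as local minimax points.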
In this work, we provide a detailed analysis of existing algorithms and relate them to two novel Newton-type algorithms. We argue that our Newton-type algorithms nicely complement existing ones in that (a) they converge faster to (strict) local minimax points; (b) they are much more effective when the problem is ill-conditioned; and (c) their computational complexity remains similar. We verify our theoretical results with experiments on training generative adversarial networks (GANs).
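To give a concrete, self-contained sense of point (b), the short Python sketch below compares plain gradient descent-ascent with a generic Newton-type update on an ill-conditioned quadratic minimax problem. The objective, step size, and update rule are illustrative assumptions for this sketch only and are not the algorithms presented in the talk.

# A minimal illustration (not the talk's algorithms) of why a Newton-type update
# can help on an ill-conditioned quadratic minimax problem
#   f(x, y) = 0.5*a*x^2 + b*x*y - 0.5*c*y^2,
# whose unique (strict local) minimax point is (0, 0).
import numpy as np

a, b, c = 100.0, 1.0, 0.01   # strongly ill-conditioned: x- and y-curvatures differ by 10^4

def grad(z):
    """Game vector field v(z) = (df/dx, -df/dy); its root is the minimax point."""
    x, y = z
    return np.array([a * x + b * y, -(b * x - c * y)])

def jac():
    """Jacobian of v; constant because f is quadratic."""
    return np.array([[a, b], [-b, c]])

z_gda = np.array([1.0, 1.0])      # gradient descent-ascent iterate
z_newton = np.array([1.0, 1.0])   # Newton-type iterate
lr = 1.0 / (a + c)                # small step size forced by the ill-conditioning

for t in range(1000):
    z_gda = z_gda - lr * grad(z_gda)                              # slow: rate limited by conditioning
    z_newton = z_newton - np.linalg.solve(jac(), grad(z_newton))  # exact in one step on a quadratic

print("GDA distance to minimax point:   ", np.linalg.norm(z_gda))
print("Newton distance to minimax point:", np.linalg.norm(z_newton))

On this quadratic, the Newton-type update reaches the minimax point after a single step, while gradient descent-ascent is still far from it after 1000 iterations because the ill-conditioning forces a tiny step size.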
To join this PhD seminar on Zoom, please go to https://vectorinstitute.zoom.us/j/98670343311?pwd=bDUweHQ5NTl1NmFvNmd3cVNHNEM3Zz09.