BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Drupal iCal API//EN
X-WR-CALNAME:Events items teaser
X-WR-TIMEZONE:America/Toronto
BEGIN:VTIMEZONE
TZID:America/Toronto
X-LIC-LOCATION:America/Toronto
BEGIN:DAYLIGHT
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
DTSTART:20241103T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:69d00fa33a841
DTSTART;TZID=America/Toronto:20250404T153000
SEQUENCE:0
TRANSP:TRANSPARENT
DTEND;TZID=America/Toronto:20250404T163000
URL:https://uwaterloo.ca/combinatorics-and-optimization/events/tutte-colloq
 uium-aukosh-jagannath
SUMMARY:Tutte colloquium - Aukosh Jagannath
CLASS:PUBLIC
DESCRIPTION:TITLE: The training dynamics and local geometry of high-dim
 ensional learning\n\nSPEAKER:\n Aukosh Jagannath\n\nAFFILIATION:\n Univer
 sity of Waterloo\n\nLOCATION:\n MC 5501\n\nABSTRACT:Many modern data s
 cience tasks can be expressed as optimizing complex\, random function
 s in high dimensions. The go-to methods for such problems are variati
 ons of stochastic gradient descent (SGD)\, which perform remarkably w
 ell\, cf. the success of modern neural networks. However\, the rigoro
 us analysis of SGD on natural\, high-dimensional statistical models i
 s in its infancy. In this talk\, we study a general model that captur
 es a broad range of learning tasks\, from matrix and tensor PCA to tr
 aining two-layer neural networks to classify mixture models. We show t
 hat the evolution of natural summary statistics along training conver
 ges\, in the high-dimensional limit\, to a closed\, finite-dimensiona
 l dynamical system called their effective dynamics. We then turn to u
 nderstanding the landscape of training from the point of view of the a
 lgorithm. We show that in this limit\, the spectra of the Hessian an
 d information matrices admit an effective spectral theory: the limiti
 ng empirical spectral measure and outliers have explicit characteriza
 tions that depend only on these summary statistics. I will then illus
 trate how these techniques can be used to give rigorous demonstration
 s of phenomena observed in the machine learning literature\, such as t
 he lottery ticket hypothesis and the \"spectral alignment\" phenomeno
 n. This talk surveys a series of joint works with G. Ben Arous (NYU)
 \, R. Gheissari (Northwestern)\, and J. Huang (U Penn).
DTSTAMP:20260403T190611Z
END:VEVENT
END:VCALENDAR