# Continuous Optimization Seminar - Julian Romero

Wednesday, November 14, 2018 — 4:00 PM EST

Title: Why Random Reshuffling Beats Stochastic Gradient Descent

Speaker: Julian Romero
Affiliation: University of Waterloo
Room: MC 5479

Abstract: Over the first few lectures of the seminar we studied the Stochastic Gradient Descent (SGD) method for minimizing functions of the form $f=\sum_{i=1}^m f_i$ for large $m$. In this talk I will go over a variant of (SGD) called Random Reshuffling (RR). In this method, the descent directions are chosen from the component functions $f_i$ at random as in (SGD), but the choice is made *without replacement*, in a cyclic fashion. In the past, numerical experiments have shown evidence of (RR) outperforming (SGD); however, a proof of this fact was missing until very recently. I will go over the basic ideas surrounding the proof for the case in which the functions $f_i$ are quadratics. Time permitting, I will explain the general case in which the $f_i$'s are smooth.
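The sampling difference between the two methods can be sketched in a few lines. The code below is an illustrative toy (not from the talk): it minimizes $f=\sum_{i=1}^m f_i$ for simple one-dimensional quadratics $f_i(x) = \tfrac{1}{2}(x - a_i)^2$, whose common minimizer is the mean of the $a_i$. The function names, step size, and data are hypothetical choices made for the example.

```python
import random

def grad(x, a):
    # Gradient of the component quadratic f_i(x) = (x - a)^2 / 2.
    return x - a

def sgd(a, x0, lr, epochs, seed=0):
    # SGD samples a component *with replacement* at every step:
    # m independent random draws per epoch.
    rng = random.Random(seed)
    x = x0
    for _ in range(epochs):
        for _ in range(len(a)):
            i = rng.randrange(len(a))
            x -= lr * grad(x, a[i])
    return x

def random_reshuffling(a, x0, lr, epochs, seed=0):
    # RR samples *without replacement*: draw a fresh random
    # permutation each epoch, then make one cyclic pass through
    # all m components in that order.
    rng = random.Random(seed)
    x = x0
    for _ in range(epochs):
        order = list(range(len(a)))
        rng.shuffle(order)
        for i in order:
            x -= lr * grad(x, a[i])
    return x

# Toy data: the minimizer of f = sum_i f_i is mean(a) = 4.5.
a = [1.0, 2.0, 6.0, 9.0]
print(sgd(a, 0.0, 0.1, 200))
print(random_reshuffling(a, 0.0, 0.1, 200))
```

Both iterates hover near the minimizer, but with a constant step size the RR iterate, measured at the end of each epoch, stays within a systematically smaller neighborhood of it; quantifying that gap is the content of the recent proofs discussed in the talk.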

Location
MC - Mathematics & Computer Building
5479
200 University Avenue West

Waterloo, ON N2L 3G1
