MetaGrad: Multiple Learning Rates in Online Learning

Tim van Erven

Joint work with: Wouter Koolen and Peter Grünwald

Optimization and Learning Workshop, Toulouse, September 11, 2018

Example: Sequential Prediction for Football Games

- Before every match t in the English Premier League, my PhD student Dirk van der Hoeven wants to predict the goal difference Y_t.
- Given a feature vector X_t ∈ R^d, he may predict Ŷ_t = w_t^⊤ X_t with a linear model.
- After the match: observe Y_t.
- Measure the loss by ℓ_t(w_t) = (Y_t − Ŷ_t)² and improve the parameter estimates: w_t → w_{t+1}.

(Slide image: precursor to modern football in China, Han Dynasty, 206 BC – 220 AD.)

Goal: Predict almost as well as the best possible parameters u:

    \mathrm{Regret}_T^u = \sum_{t=1}^T \ell_t(w_t) - \sum_{t=1}^T \ell_t(u)
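To make the protocol concrete, here is a minimal sketch of the prediction loop and the regret bookkeeping (Python; the synthetic data, the fixed step size, and the least-squares comparator are all illustrative assumptions, not taken from the slide):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 1000, 5
X = rng.normal(size=(T, d))                       # feature vectors X_t (synthetic)
true_w = np.array([1.0, -0.5, 0.3, 0.0, 2.0])     # hypothetical "true" parameters
Y = X @ true_w + rng.normal(size=T)               # goal differences (synthetic)

w = np.zeros(d)          # current parameter estimate w_t
eta = 0.01               # fixed step size, chosen only for illustration
total_loss = 0.0

for t in range(T):
    y_hat = w @ X[t]                     # predict the goal difference
    total_loss += (Y[t] - y_hat) ** 2    # squared loss l_t(w_t)
    g = 2 * (y_hat - Y[t]) * X[t]        # gradient of l_t at w_t
    w = w - eta * g                      # improve the estimate: w_t -> w_{t+1}

# Regret against a fixed comparator u, here the offline least-squares fit
u = np.linalg.lstsq(X, Y, rcond=None)[0]
regret = total_loss - np.sum((Y - X @ u) ** 2)
print(f"regret against the least-squares comparator: {regret:.1f}")
```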

Online Convex Optimization

1: for t = 1, 2, ..., T do
2:   Learner estimates w_t from convex U ⊂ R^d
3:   Nature reveals convex loss function ℓ_t : U → R
4:   Learner incurs loss ℓ_t(w_t)
5: end for

Viewed as a zero-sum game against Nature:

    V = \min_{w_1} \max_{\ell_1} \min_{w_2} \max_{\ell_2} \cdots \min_{w_T} \max_{\ell_T} \max_{u \in U} \underbrace{\sum_{t=1}^T \ell_t(w_t) - \sum_{t=1}^T \ell_t(u)}_{\mathrm{Regret}_T^u}

Methods: efficient computations using only the gradient g_t = ∇ℓ_t(w_t):

    w_{t+1} = w_t - \eta_t g_t                  (online gradient descent)
    w_{t+1} = w_t - \eta \Sigma_{t+1} g_t       (online Newton step)

where \Sigma_{t+1} = \big(I + 2\eta^2 \sum_{s=1}^t g_s g_s^\top\big)^{-1}.
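A minimal sketch of the two update rules above (Python; the projection back onto U is omitted, just as the slide omits it, and Σ is assumed to start at the identity):

```python
import numpy as np

def ogd_step(w, g, eta_t):
    """Online gradient descent: w_{t+1} = w_t - eta_t * g_t."""
    return w - eta_t * g

def ons_step(w, Sigma, g, eta):
    """Online Newton step: update Sigma so that it equals
    (I + 2 eta^2 sum_s g_s g_s^T)^{-1} (via Sherman-Morrison, assuming Sigma
    was initialized to the identity), then set w_{t+1} = w_t - eta * Sigma * g_t."""
    u = Sigma @ g
    Sigma = Sigma - (2 * eta**2) * np.outer(u, u) / (1 + 2 * eta**2 * (g @ u))
    w = w - eta * (Sigma @ g)
    return w, Sigma
```

Both steps would normally be followed by a projection onto the domain U; that step is not shown on the slide and is left out here as well.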

The Standard Picture

Minimax rates based on curvature (bounded domain and gradients) [Hazan, 2016]:

  Convex ℓ_t             √T        OGD with η_t ∝ 1/√t
  Strongly convex ℓ_t    ln T      OGD with η_t ∝ 1/t
  Exp-concave ℓ_t        d ln T    ONS with η ∝ 1

- Strongly convex: second derivative at least α > 0; implies exp-concave.
- Exp-concave: e^{−α ℓ_t} is concave. Satisfied by the log loss, logistic loss and squared loss, but not by the hinge loss.

Limitations:
- A different method for each case. (Requires sophisticated users.)
- The theoretical tuning of η_t is very conservative.
- What if the curvature varies between rounds?
- In many applications the data are stochastic (i.i.d.). That should be easier than the worst case...

Need adaptive methods!
- Difficulty: all existing methods learn η at too slow a rate [HP2005], so the overhead of learning the best η ruins the potential benefits.

MetaGrad: Multiple Eta Gradient Algorithm

Maintain a grid of learning rates η_1, η_2, η_3, η_4, ... (about ½ ln(T) of them, ≤ 16 in practice), each with its own covariance matrix Σ_i, iterate w_i and master weight π_i.

In each round:

- The master aggregates the slaves' iterates into a single prediction

      w = \frac{\sum_i \pi_i \eta_i w_i}{\sum_i \pi_i \eta_i}

- Observe the gradient g = ∇f(w) and set r_i = (w_i − w)^⊤ g.
- Master update (Tilted Exponential Weights):

      \pi_i \leftarrow \pi_i\, e^{-\eta_i r_i - \eta_i^2 r_i^2}

- Slave update for each i (≈ quasi-Newton update):

      \Sigma_i \leftarrow \big(\Sigma_i^{-1} + 2\eta_i^2\, g g^\top\big)^{-1}
      w_i \leftarrow w_i - \eta_i \Sigma_i g\, (1 + 2\eta_i r_i)
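A minimal, unoptimized sketch of this round structure (Python; the uniform prior, the identity initialization of the Σ_i and the absence of a domain projection are simplifying assumptions, so treat this as an illustration of the update structure rather than the authors' reference implementation):

```python
import numpy as np

class MetaGradSketch:
    """One round of the master/slave structure drawn on the slide (simplified)."""

    def __init__(self, d, etas):
        k = len(etas)
        self.etas = np.asarray(etas, dtype=float)   # grid of learning rates eta_i
        self.pi = np.full(k, 1.0 / k)               # master weights pi_i (uniform prior assumed)
        self.W = np.zeros((k, d))                   # one iterate w_i per learning rate
        self.Sigmas = np.stack([np.eye(d) for _ in range(k)])  # one Sigma_i per learning rate

    def predict(self):
        # Master prediction: w = sum_i pi_i eta_i w_i / sum_i pi_i eta_i
        c = self.pi * self.etas
        return (c[:, None] * self.W).sum(axis=0) / c.sum()

    def update(self, w, g):
        # r_i = (w_i - w)^T g
        r = (self.W - w) @ g
        # Master: tilted exponential weights, pi_i <- pi_i * exp(-eta_i r_i - eta_i^2 r_i^2)
        self.pi *= np.exp(-self.etas * r - self.etas ** 2 * r ** 2)
        self.pi /= self.pi.sum()
        # Slaves: Sigma_i <- (Sigma_i^{-1} + 2 eta_i^2 g g^T)^{-1} (via Sherman-Morrison),
        #         w_i <- w_i - eta_i Sigma_i g (1 + 2 eta_i r_i)
        for i, eta in enumerate(self.etas):
            u = self.Sigmas[i] @ g
            self.Sigmas[i] -= (2 * eta ** 2) * np.outer(u, u) / (1 + 2 * eta ** 2 * (g @ u))
            self.W[i] -= eta * (self.Sigmas[i] @ g) * (1 + 2 * eta * r[i])
```

A driver loop would call predict(), evaluate the loss gradient at the returned point, and pass both back into update().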

MetaGrad: Provable Adaptive Fast Rates

Theorem (Van Erven, Koolen, 2016). MetaGrad's Regret_T^u is bounded by

    \mathrm{Regret}_T^u \le \sum_{t=1}^T (w_t - u)^\top g_t \;\lesssim\; \min\left\{ \sqrt{T \ln\ln T},\ \sqrt{V_T^u\, d \ln T} + d \ln T \right\}

where

    V_T^u = \sum_{t=1}^T \big((u - w_t)^\top g_t\big)^2 = \sum_{t=1}^T (u - w_t)^\top g_t g_t^\top (u - w_t).

- By convexity, ℓ_t(w_t) − ℓ_t(u) ≤ (w_t − u)^⊤ g_t.
- The optimal learning rate η depends on V_T^u, but u is unknown! It is crucial to learn the best learning rate from the data.
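The η-dependence can be seen from the usual balancing calculation (standard, and not spelled out on this slide; it anticipates the analysis slide below), applied to a bound of the form R/η + ηV_T^u:

```latex
\min_{\eta > 0}\left(\frac{R}{\eta} + \eta V_T^u\right) = 2\sqrt{R\,V_T^u},
\qquad \text{attained at } \eta^\star = \sqrt{R / V_T^u}.
% With R = O(d \ln T) this yields the O(\sqrt{V_T^u\, d \ln T}) term of the theorem,
% but \eta^\star depends on V_T^u and hence on the unknown comparator u.
```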

Consequences

1. Non-stochastic adaptation:

  Convex ℓ_t                √(T ln ln T)
  Exp-concave ℓ_t           d ln T
  Fixed convex ℓ_t = ℓ      d ln T

2. Stochastic without curvature. Suppose the ℓ_t are i.i.d. with stochastic optimum u* = arg min_{u∈U} E_ℓ[ℓ(u)]. Then the expected regret E[Regret_T^{u*}] is:

  Absolute loss* ℓ_t(w) = |w − X_t|          ln T
  Hinge loss max{0, 1 − Y_t⟨w, X_t⟩}          d ln T
  (B, β)-Bernstein                            (B d ln T)^{1/(2−β)} T^{(1−β)/(2−β)}

  *Conditions apply.

Related Work: Adaptivity to Stochastic Data in Batch Classification [Tsybakov, 2004]

(Figure: three panels plotting P(Y = 1 | X) against X, illustrating an easy case (β = 1), a moderate case (β = 1/2) and a hard case (β = 0).)

Definition ((B, β)-Bernstein Condition). Losses are i.i.d. and

    \mathbb{E}\big[(\ell(w) - \ell(u^*))^2\big] \le B\, \mathbb{E}\big[\ell(w) - \ell(u^*)\big]^{\beta} \qquad \text{for all } w,

where u* = arg min_u E[ℓ(u)] minimizes the expected loss.

Bernstein Condition for Online Learning

Suppose the ℓ_t are i.i.d. with stochastic optimum u* = arg min_{u∈U} E_ℓ[ℓ(u)].

Standard Bernstein condition:

    \mathbb{E}\big[(\ell(w) - \ell(u^*))^2\big] \le B\, \mathbb{E}\big[\ell(w) - \ell(u^*)\big]^{\beta} \qquad \text{for all } w \in U.

Replace it by a weaker linearized version:
- Apply it with ℓ̃(u) = ⟨u, ∇ℓ(w)⟩ instead of ℓ!
- By convexity, ℓ(w) − ℓ(u*) ≤ ℓ̃(w) − ℓ̃(u*).

    \mathbb{E}\big[((w - u^*)^\top \nabla\ell(w))^2\big] \le B\, \mathbb{E}\big[(w - u^*)^\top \nabla\ell(w)\big]^{\beta} \qquad \text{for all } w \in U.

Hinge loss (domain and gradients bounded by 1): β = 1 and

    B = \frac{2\lambda_{\max}(\mathbb{E}[XX^\top])}{\|\mathbb{E}[YX]\|}.

Theorem (Koolen, Grünwald, Van Erven, 2016).

    \mathbb{E}[\mathrm{Regret}_T^{u^*}] \lesssim (B d \ln T)^{1/(2-\beta)}\, T^{(1-\beta)/(2-\beta)}
    \mathrm{Regret}_T^{u^*} \lesssim (B d \ln T - \ln\delta)^{1/(2-\beta)}\, T^{(1-\beta)/(2-\beta)} \qquad \text{with probability} \ge 1 - \delta.
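Where the hinge-loss constant comes from, as a sketch (assuming U is the unit ball and ‖X‖ ≤ 1, so that 1 − Y⟨w, X⟩ ≥ 0 on U, the expected hinge loss is linear with minimizer u* = E[YX]/‖E[YX]‖, and −YX is a subgradient everywhere; this calculation is not spelled out on the slide):

```latex
\mathbb{E}\big[((w-u^*)^\top \nabla\ell(w))^2\big]
  = (w-u^*)^\top \mathbb{E}[XX^\top]\,(w-u^*)
  \le \lambda_{\max}\!\big(\mathbb{E}[XX^\top]\big)\,\|w-u^*\|^2,
\qquad
\mathbb{E}\big[(w-u^*)^\top \nabla\ell(w)\big]
  = (u^*-w)^\top \mathbb{E}[YX]
  = \|\mathbb{E}[YX]\|\,\big(1-\langle w,u^*\rangle\big)
  \ge \tfrac{1}{2}\,\|\mathbb{E}[YX]\|\,\|w-u^*\|^2,
% using \|w-u^*\|^2 = \|w\|^2 + 1 - 2\langle w,u^*\rangle \le 2(1-\langle w,u^*\rangle) for \|w\| \le 1.
% Taking the ratio of the two displays gives the Bernstein condition with \beta = 1 and
% B = 2\lambda_{\max}(\mathbb{E}[XX^\top]) / \|\mathbb{E}[YX]\|.
```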

MetaGrad Simulation Experiments

(Figure: two panels showing regret against T, up to T = 10^5, for AdaGrad and MetaGrad. Left: Offline, ℓ_t(u) = |u − 1/4|. Right: Stochastic Online, ℓ_t(u) = |u − X_t| with X_t = ±1/2 i.i.d. with probabilities 0.4 and 0.6.)

- MetaGrad: O(ln T) regret; AdaGrad: O(√T), matching the bounds.
- The functions are neither strongly convex nor smooth.
- Caveat: the comparison is more complicated in higher dimensions, unless we run a separate copy of MetaGrad per dimension, like the diagonal version of AdaGrad runs GD per dimension.
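For reference, a minimal sketch of the stochastic experiment's data and loss, with a plain 1/√t-step subgradient method as a stand-in baseline (the actual AdaGrad and MetaGrad runs and their parameters are not specified on the slide, so everything below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000
X = rng.choice([-0.5, 0.5], size=T, p=[0.4, 0.6])   # X_t = +-1/2 i.i.d. w.p. 0.4 / 0.6
u_star = 0.5                                         # minimizer of E|u - X_t| (the median)

u, cum_loss = 0.0, 0.0
for t in range(1, T + 1):
    x = X[t - 1]
    cum_loss += abs(u - x)                # absolute loss l_t(u) = |u - X_t|
    u -= np.sign(u - x) / np.sqrt(t)      # subgradient step with a 1/sqrt(t) step size
    u = float(np.clip(u, -1.0, 1.0))      # stay in a bounded domain

regret = cum_loss - np.abs(u_star - X).sum()
print(f"regret of the 1/sqrt(t) baseline against u* = {u_star}: {regret:.1f}")
```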

MetaGrad Football Experiments

With Dirk van der Hoeven (my PhD student) and Raphaël Deswarte (visiting PhD student).

(Figure: regression results for square loss on the ℓ2 ball; regret of Metagrad full, Metagrad diag and Adagrad diag over roughly 6000 matches.)

- Predict the difference in goals in 6000 football games in the English Premier League (Aug 2000 – May 2017).
- Square loss on a Euclidean ball.
- 37 features: running averages of goals, shots on goal and shots over the m = 1, ..., 10 previous games; multiple ELO-like models; an intercept.

Analysis

Second-order surrogate loss for each η of interest (from a grid):

    \ell_t^\eta(u) = \eta (u - w_t)^\top g_t + \eta^2 (u - w_t)^\top g_t g_t^\top (u - w_t)

One Slave algorithm per η produces w_t^η such that

    \sum_{t=1}^T \ell_t^\eta(w_t^\eta) - \sum_{t=1}^T \ell_t^\eta(u) \le R^u_{\mathrm{slave}}(\eta)

A single Master algorithm produces w_t such that

    \underbrace{\sum_{t=1}^T \ell_t^\eta(w_t)}_{=0} - \sum_{t=1}^T \ell_t^\eta(w_t^\eta) \le R_{\mathrm{master}}(\eta) \qquad \forall \eta

Together:

    -\sum_{t=1}^T \ell_t^\eta(u) \le R^u_{\mathrm{slave}}(\eta) + R_{\mathrm{master}}(\eta) \qquad \forall \eta

    \sum_{t=1}^T (w_t - u)^\top g_t \le \frac{R^u_{\mathrm{slave}}(\eta) + R_{\mathrm{master}}(\eta)}{\eta} + \eta V_T^u = \frac{O(d \ln T) + O(\ln\ln T)}{\eta} + \eta V_T^u \;\Rightarrow\; O\big(\sqrt{V_T^u\, d \ln T}\big)

MetaGrad Master

Goal: aggregate the slave predictions w_t^η for all η in the exponentially spaced grid

    \frac{2^{-0}}{5DG},\ \frac{2^{-1}}{5DG},\ \ldots,\ \frac{2^{-\lceil \frac{1}{2}\log_2 T \rceil}}{5DG}

Difficulty: the master's predictions must be good w.r.t. the different loss functions ℓ_t^η for all η simultaneously.

Compute exponential weights with the performance of each η measured by its own surrogate loss:

    \pi_t(\eta) = \pi_1(\eta)\, e^{-\sum_s \cdots}
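A tiny sketch of this grid (Python; here D and G are taken to be bounds on the domain diameter and the gradient norms, which is one reading of the 5DG normalization on the slide, and the exact constants of the actual implementation may differ):

```python
import numpy as np

def eta_grid(T, D, G):
    """Exponentially spaced grid {2^(-i) / (5*D*G) : i = 0, ..., ceil(0.5 * log2(T))}."""
    i_max = int(np.ceil(0.5 * np.log2(T)))
    return [2.0 ** (-i) / (5.0 * D * G) for i in range(i_max + 1)]

print(eta_grid(T=10_000, D=1.0, G=1.0))   # roughly 0.5*log2(T) + 1 learning rates
```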