From a98a1b8bb5e1829c8ad561abe8f92071c63ba5a2 Mon Sep 17 00:00:00 2001 From: Dhairya Gandhi Date: Fri, 27 Sep 2019 21:43:39 +0530 Subject: [PATCH 01/13] fixes --- docs/src/saving.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/src/saving.md b/docs/src/saving.md index f71c4350..8e795298 100644 --- a/docs/src/saving.md +++ b/docs/src/saving.md @@ -113,6 +113,6 @@ You can even store optimiser state alongside the model, to resume training exactly where you left off. ```julia -opt = ADAM(params(model)) +opt = ADAM() @save "model-$(now()).bson" model opt ``` From 32ac71734de3903af021b30b96dda4e492070e8c Mon Sep 17 00:00:00 2001 From: Dhairya Gandhi Date: Fri, 27 Sep 2019 21:43:59 +0530 Subject: [PATCH 02/13] optimiser interface docs --- docs/src/training/optimisers.md | 75 +++++++++++++++++++++++++++++++++ 1 file changed, 75 insertions(+) diff --git a/docs/src/training/optimisers.md b/docs/src/training/optimisers.md index 9eb659c4..47f2e9e6 100644 --- a/docs/src/training/optimisers.md +++ b/docs/src/training/optimisers.md @@ -58,3 +58,78 @@ AMSGrad NADAM ADAMW ``` + +## Optimiser Interface + +Flux's optimsers are built around a `struct` that holds all the optimiser parameters along with a definition of how to apply the update rule associated with it. We do this via the `apply!` function which takes the optimiser as the first argument followed by the parameter and its corresponding gradient. + +In this manner Flux also allows one to create custom optimisers to be used seamlessly. Let's work this with a simple example. + +```julia +mutable struct Momentum{T,S,D} + eta::T + rho::S + velocity::D +end +``` + +The `Momentum` type will act as our optimiser in this case. Notice that we have added all the parameters as fields, along with the velocity which we will use as our state. **Note that this behaviour is set to change in consequent versions of Flux**. We can now define the rule applied when this optimiser is invoked. + +```julia +function apply!(o::Momentum, x, Δ) + η, ρ = o.eta, o.rho + v = get!(o.velocity, x, zero(x))::typeof(x) + @. v = ρ * v - η * Δ + @. Δ = -v +end +``` + +This is the basic definition of a Momentum update rule given by: +$v = ρ * v - η * Δ$ +$w = w - v$ + +The `apply!` defines the update rules for an optimsier `opt`, given the parameters and gradients. It returns the updated gradients usually. Here, every parameter `x` is retrieved from the running state `v` and subsequently updates the state of the optimiser. + +Flux internally calls on this function via the `update!` function. It shares the API with `apply!` but ensures that multiple parameters are handled gracefully. In the future, it will also be delegating immutable update operations. + +## Composing Optimisers + +Flux defines a special kind of optimiser called simply as `Optimiser` which takes in a arbitrary optimisers as input. Its behaviour is similar to the usual optimisers, but differs in that it acts by calling the optimsers listed in it sequentially. Each optimiser produces a modified gradient +that will be fed into the next, and the resultant update will be applied to the parameter as usual. A classic use case is where adding decays is desirable. Flux defines some basic decays including `ExpDecay`, `InvDecay` etc. + +``julia +opt = Optimiser(ExpDecay(0.001, 0.1, 1000, 1e-4), Descent()) +``` + +Here we apply exponential decay to the `Descent` optimser. The defaults of `ExpDecay` say that its learning rate will be decayed every 1000 steps. +It is then applied like any optimser. 
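A minimal sketch of the same composed optimiser driving `Flux.train!`, before the fuller parameter-fitting example below; the `Dense` model, loss and data here are stand-ins invented purely for illustration:

```julia
using Flux

# Stand-in model and data, purely to exercise the composed optimiser.
model = Dense(10, 1)
loss(x, y) = Flux.mse(model(x), y)
data = [(rand(10, 16), rand(1, 16)) for _ = 1:200]

# ExpDecay discounts its learning rate every 1000 updates (down to the 1e-4 floor),
# and Descent then applies the usual update with that already-scaled gradient.
opt = Optimiser(ExpDecay(0.001, 0.1, 1000, 1e-4), Descent())

Flux.train!(loss, params(model), data, opt)
```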
+ +```julia +w = randn(10, 10) +w1 = randn(10,10) +ps = Params([w, w1]) + +loss(x) = Flux.mse(w * x, w1 * x) + +loss(rand(10)) # around 9 + +for t = 1:10^5 + θ = Params([w, w1]) + θ̄ = gradient(() -> loss(rand(10)), θ) + Flux.Optimise.update!(opt, θ, θ̄) +end + +loss(rand(10)) # around 0.9 +``` + +In this manner it is possible to compose optimisers for some added flexibility. + +## Decays + +Similar to optimisers, Flux also defines some simple decays that can be used in conjunction with other optimisers, or standalone. + +```@docs +ExpDecay +InvDecay +WeightDecay +``` From 8bb0db7d0c17a638c69cd6b8e3eae1c0fab09c2b Mon Sep 17 00:00:00 2001 From: Dhairya Gandhi Date: Fri, 27 Sep 2019 22:04:53 +0530 Subject: [PATCH 03/13] opt docstrings --- src/optimise/optimisers.jl | 41 ++++++++++++++++++++++++++------------ 1 file changed, 28 insertions(+), 13 deletions(-) diff --git a/src/optimise/optimisers.jl b/src/optimise/optimisers.jl index 58cd5ff7..be400457 100644 --- a/src/optimise/optimisers.jl +++ b/src/optimise/optimisers.jl @@ -8,6 +8,7 @@ const ϵ = 1e-8 """ Descent(η) + Defaults: η = 0.1 Classic gradient descent optimiser with learning rate `η`. For each parameter `p` and its gradient `δp`, this runs `p -= η*δp`. @@ -23,7 +24,8 @@ function apply!(o::Descent, x, Δ) end """ - Momentum(η = 0.01; ρ = 0.9) + Momentum(η, ρ) + Defaults: η = 0.01, ρ = 0.9 Gradient descent with learning rate `η` and momentum `ρ`. """ @@ -43,7 +45,8 @@ function apply!(o::Momentum, x, Δ) end """ - Nesterov(eta, ρ = 0.9) + Nesterov(η, ρ) + Defaults: η = 0.001, ρ = 0.9 Gradient descent with learning rate `η` and Nesterov momentum `ρ`. """ @@ -64,7 +67,8 @@ function apply!(o::Nesterov, x, Δ) end """ - RMSProp(η = 0.001, ρ = 0.9) + RMSProp(η, ρ) + Defaults: η = 0.001, ρ = 0.9 [RMSProp](https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf) optimiser. Parameters other than learning rate don't need tuning. Often a good @@ -86,7 +90,8 @@ function apply!(o::RMSProp, x, Δ) end """ - ADAM(η = 0.001, β = (0.9, 0.999)) + Defaults: η = 0.001, β = (0.9, 0.999) + ADAM() => ADAM(η, β) [ADAM](https://arxiv.org/abs/1412.6980v8) optimiser. """ @@ -109,7 +114,8 @@ function apply!(o::ADAM, x, Δ) end """ - RADAM(η = 0.001, β = (0.9, 0.999)) + Defaults: η = 0.001, β = (0.9, 0.999) + RADAM() => RADAM(η, β) [RADAM](https://arxiv.org/pdf/1908.03265v1.pdf) optimiser (Rectified ADAM). """ @@ -139,7 +145,8 @@ function apply!(o::RADAM, x, Δ) end """ - AdaMax(params, η = 0.001; β1 = 0.9, β2 = 0.999, ϵ = 1e-08) + Defaults: η = 0.001, β = (0.9, 0.999) + AdaMax() => AdaMax(η, β) [AdaMax](https://arxiv.org/abs/1412.6980v9) optimiser. Variant of ADAM based on the ∞-norm. @@ -163,7 +170,8 @@ function apply!(o::AdaMax, x, Δ) end """ - ADAGrad(η = 0.1; ϵ = 1e-8) + Defaults: η = 0.1 + ADAGrad() => ADAGrad(η) [ADAGrad](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf) optimiser. Parameters don't need tuning. @@ -183,7 +191,8 @@ function apply!(o::ADAGrad, x, Δ) end """ - ADADelta(ρ = 0.9, ϵ = 1e-8) + Defaults: ρ = 0.9 + ADADelta() => ADADelta(ρ) [ADADelta](https://arxiv.org/abs/1212.5701) optimiser. Parameters don't need tuning. @@ -205,7 +214,8 @@ function apply!(o::ADADelta, x, Δ) end """ - AMSGrad(η = 0.001, β = (0.9, 0.999)) + Defaults: η = 0.001, β = (0.9, 0.999) + AMSGrad() => AMSGrad(η, β) [AMSGrad](https://openreview.net/forum?id=ryQu7f-RZ) optimiser. Parameters don't need tuning. 
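As a concrete companion to the AMSGrad docstring above, here is a minimal usage sketch; the `Chain` model and the random batch are placeholders, not part of the patch:

```julia
using Flux

m = Chain(Dense(10, 5, relu), Dense(5, 2))   # placeholder model
loss(x, y) = Flux.mse(m(x), y)
x, y = rand(10, 8), rand(2, 8)               # made-up batch

opt = AMSGrad()                      # defaults: η = 0.001, β = (0.9, 0.999)
# opt = AMSGrad(0.001, (0.9, 0.99))  # or pass η and β explicitly

ps = params(m)
gs = gradient(() -> loss(x, y), ps)
Flux.Optimise.update!(opt, ps, gs)
```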
@@ -228,7 +238,8 @@ function apply!(o::AMSGrad, x, Δ) end """ - NADAM(η = 0.001, β = (0.9, 0.999)) + Defaults: η = 0.001, β = (0.9, 0.999) + NADAM() => NADAM(η, β) [NADAM](http://cs229.stanford.edu/proj2015/054_report.pdf) optimiser. Parameters don't need tuning. @@ -252,7 +263,8 @@ function apply!(o::NADAM, x, Δ) end """ - ADAMW((η = 0.001, β = (0.9, 0.999), decay = 0) + Defaults: η = 0.001, β = (0.9, 0.999), decay = 0 + ADAMW() => ADAMW(η, β, decay) [ADAMW](https://arxiv.org/abs/1711.05101) fixing weight decay regularization in Adam. """ @@ -287,7 +299,8 @@ function apply!(o::Optimiser, x, Δ) end """ -`InvDecay(γ)` +Defaults: γ = 0.001 +`InvDecay() => InvDecay(γ)` Apply inverse time decay to an optimiser ```julia @@ -311,6 +324,7 @@ end """ `ExpDecay(eta, decay, decay_step, clip)` +Defaults: eta = 0.001, decay = 0.1, decay_step = 1000, clip = 1e-4 Schedule the learning rate `eta` by `decay` every `decay_step` till a minimum of `clip`. @@ -340,7 +354,8 @@ function apply!(o::ExpDecay, x, Δ) end """ -`WeightDecay(wd)` +`WeightDecay() => WeightDecay(wd)` +Defaults: wd = 0 Decay the weight parameter by `wd` """ From 0175485a80c71690aa6c1a95b562b54478226a2a Mon Sep 17 00:00:00 2001 From: Dhairya Gandhi Date: Fri, 27 Sep 2019 22:08:25 +0530 Subject: [PATCH 04/13] fixup --- src/optimise/optimisers.jl | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/src/optimise/optimisers.jl b/src/optimise/optimisers.jl index be400457..09a86174 100644 --- a/src/optimise/optimisers.jl +++ b/src/optimise/optimisers.jl @@ -7,7 +7,7 @@ const ϵ = 1e-8 # TODO: should use weak refs """ - Descent(η) + Descent() => Descent(η) Defaults: η = 0.1 Classic gradient descent optimiser with learning rate `η`. @@ -24,7 +24,7 @@ function apply!(o::Descent, x, Δ) end """ - Momentum(η, ρ) + Momentum() => Momentum(η, ρ) Defaults: η = 0.01, ρ = 0.9 Gradient descent with learning rate `η` and momentum `ρ`. @@ -45,7 +45,7 @@ function apply!(o::Momentum, x, Δ) end """ - Nesterov(η, ρ) + Nesterov() => Nesterov(η, ρ) Defaults: η = 0.001, ρ = 0.9 Gradient descent with learning rate `η` and Nesterov momentum `ρ`. @@ -67,7 +67,7 @@ function apply!(o::Nesterov, x, Δ) end """ - RMSProp(η, ρ) + RMSProp() => RMSProp(η, ρ) Defaults: η = 0.001, ρ = 0.9 [RMSProp](https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf) @@ -90,8 +90,8 @@ function apply!(o::RMSProp, x, Δ) end """ - Defaults: η = 0.001, β = (0.9, 0.999) ADAM() => ADAM(η, β) + Defaults: η = 0.001, β = (0.9, 0.999) [ADAM](https://arxiv.org/abs/1412.6980v8) optimiser. """ @@ -114,8 +114,8 @@ function apply!(o::ADAM, x, Δ) end """ - Defaults: η = 0.001, β = (0.9, 0.999) RADAM() => RADAM(η, β) + Defaults: η = 0.001, β = (0.9, 0.999) [RADAM](https://arxiv.org/pdf/1908.03265v1.pdf) optimiser (Rectified ADAM). """ @@ -145,8 +145,8 @@ function apply!(o::RADAM, x, Δ) end """ - Defaults: η = 0.001, β = (0.9, 0.999) AdaMax() => AdaMax(η, β) + Defaults: η = 0.001, β = (0.9, 0.999) [AdaMax](https://arxiv.org/abs/1412.6980v9) optimiser. Variant of ADAM based on the ∞-norm. @@ -170,8 +170,8 @@ function apply!(o::AdaMax, x, Δ) end """ - Defaults: η = 0.1 ADAGrad() => ADAGrad(η) + Defaults: η = 0.1 [ADAGrad](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf) optimiser. Parameters don't need tuning. @@ -191,8 +191,8 @@ function apply!(o::ADAGrad, x, Δ) end """ - Defaults: ρ = 0.9 ADADelta() => ADADelta(ρ) + Defaults: ρ = 0.9 [ADADelta](https://arxiv.org/abs/1212.5701) optimiser. Parameters don't need tuning. 
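The `ADADelta() => ADADelta(ρ)` notation being settled on in these docstrings just means that the zero-argument call falls back to the documented defaults; a quick illustrative check:

```julia
using Flux

opt = ADADelta()        # picks up the documented default ρ = 0.9
opt.rho                 # 0.9
opt = ADADelta(0.95)    # or override ρ explicitly; there is no learning rate to tune
```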
@@ -214,8 +214,8 @@ function apply!(o::ADADelta, x, Δ) end """ - Defaults: η = 0.001, β = (0.9, 0.999) AMSGrad() => AMSGrad(η, β) + Defaults: η = 0.001, β = (0.9, 0.999) [AMSGrad](https://openreview.net/forum?id=ryQu7f-RZ) optimiser. Parameters don't need tuning. @@ -238,8 +238,8 @@ function apply!(o::AMSGrad, x, Δ) end """ - Defaults: η = 0.001, β = (0.9, 0.999) NADAM() => NADAM(η, β) + Defaults: η = 0.001, β = (0.9, 0.999) [NADAM](http://cs229.stanford.edu/proj2015/054_report.pdf) optimiser. Parameters don't need tuning. @@ -299,8 +299,8 @@ function apply!(o::Optimiser, x, Δ) end """ +InvDecay() => InvDecay(γ) Defaults: γ = 0.001 -`InvDecay() => InvDecay(γ)` Apply inverse time decay to an optimiser ```julia @@ -323,7 +323,7 @@ function apply!(o::InvDecay, x, Δ) end """ -`ExpDecay(eta, decay, decay_step, clip)` +ExpDecay(eta, decay, decay_step, clip) Defaults: eta = 0.001, decay = 0.1, decay_step = 1000, clip = 1e-4 Schedule the learning rate `eta` by `decay` every `decay_step` till a minimum of `clip`. @@ -354,7 +354,7 @@ function apply!(o::ExpDecay, x, Δ) end """ -`WeightDecay() => WeightDecay(wd)` +WeightDecay() => WeightDecay(wd) Defaults: wd = 0 Decay the weight parameter by `wd` From 8013c728b112aec15d50c4b6e1470f24758b4c5f Mon Sep 17 00:00:00 2001 From: Dhairya Gandhi Date: Sat, 28 Sep 2019 16:09:00 +0530 Subject: [PATCH 05/13] clearer optimiser docstrings --- src/optimise/optimisers.jl | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/src/optimise/optimisers.jl b/src/optimise/optimisers.jl index 09a86174..aa5b7203 100644 --- a/src/optimise/optimisers.jl +++ b/src/optimise/optimisers.jl @@ -7,7 +7,7 @@ const ϵ = 1e-8 # TODO: should use weak refs """ - Descent() => Descent(η) + Descent(η) Defaults: η = 0.1 Classic gradient descent optimiser with learning rate `η`. @@ -24,7 +24,7 @@ function apply!(o::Descent, x, Δ) end """ - Momentum() => Momentum(η, ρ) + Momentum(η, ρ) Defaults: η = 0.01, ρ = 0.9 Gradient descent with learning rate `η` and momentum `ρ`. @@ -45,7 +45,7 @@ function apply!(o::Momentum, x, Δ) end """ - Nesterov() => Nesterov(η, ρ) + Nesterov(η, ρ) Defaults: η = 0.001, ρ = 0.9 Gradient descent with learning rate `η` and Nesterov momentum `ρ`. @@ -67,7 +67,7 @@ function apply!(o::Nesterov, x, Δ) end """ - RMSProp() => RMSProp(η, ρ) + RMSProp(η, ρ) Defaults: η = 0.001, ρ = 0.9 [RMSProp](https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf) @@ -90,7 +90,7 @@ function apply!(o::RMSProp, x, Δ) end """ - ADAM() => ADAM(η, β) + ADAM(η, β) Defaults: η = 0.001, β = (0.9, 0.999) [ADAM](https://arxiv.org/abs/1412.6980v8) optimiser. @@ -114,7 +114,7 @@ function apply!(o::ADAM, x, Δ) end """ - RADAM() => RADAM(η, β) + RADAM(η, β) Defaults: η = 0.001, β = (0.9, 0.999) [RADAM](https://arxiv.org/pdf/1908.03265v1.pdf) optimiser (Rectified ADAM). @@ -145,7 +145,7 @@ function apply!(o::RADAM, x, Δ) end """ - AdaMax() => AdaMax(η, β) + AdaMax(η, β) Defaults: η = 0.001, β = (0.9, 0.999) [AdaMax](https://arxiv.org/abs/1412.6980v9) optimiser. Variant of ADAM based on @@ -170,7 +170,7 @@ function apply!(o::AdaMax, x, Δ) end """ - ADAGrad() => ADAGrad(η) + ADAGrad(η) Defaults: η = 0.1 [ADAGrad](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf) optimiser. @@ -191,7 +191,7 @@ function apply!(o::ADAGrad, x, Δ) end """ - ADADelta() => ADADelta(ρ) + ADADelta(ρ) Defaults: ρ = 0.9 [ADADelta](https://arxiv.org/abs/1212.5701) optimiser. 
Parameters don't need @@ -214,7 +214,7 @@ function apply!(o::ADADelta, x, Δ) end """ - AMSGrad() => AMSGrad(η, β) + AMSGrad(η, β) Defaults: η = 0.001, β = (0.9, 0.999) [AMSGrad](https://openreview.net/forum?id=ryQu7f-RZ) optimiser. Parameters don't need @@ -238,7 +238,7 @@ function apply!(o::AMSGrad, x, Δ) end """ - NADAM() => NADAM(η, β) + NADAM(η, β) Defaults: η = 0.001, β = (0.9, 0.999) [NADAM](http://cs229.stanford.edu/proj2015/054_report.pdf) optimiser. Parameters don't need @@ -263,8 +263,8 @@ function apply!(o::NADAM, x, Δ) end """ + ADAMW(η, β, decay) Defaults: η = 0.001, β = (0.9, 0.999), decay = 0 - ADAMW() => ADAMW(η, β, decay) [ADAMW](https://arxiv.org/abs/1711.05101) fixing weight decay regularization in Adam. """ @@ -299,7 +299,7 @@ function apply!(o::Optimiser, x, Δ) end """ -InvDecay() => InvDecay(γ) +InvDecay(γ) Defaults: γ = 0.001 Apply inverse time decay to an optimiser @@ -354,7 +354,7 @@ function apply!(o::ExpDecay, x, Δ) end """ -WeightDecay() => WeightDecay(wd) +WeightDecay(wd) Defaults: wd = 0 Decay the weight parameter by `wd` From b503741651c4c89605aa2ffacb0168d47364405c Mon Sep 17 00:00:00 2001 From: Dhairya Gandhi Date: Fri, 4 Oct 2019 14:46:03 +0530 Subject: [PATCH 06/13] expanded docstrings --- src/optimise/optimisers.jl | 92 +++++++++++++++++++++++++++----------- 1 file changed, 67 insertions(+), 25 deletions(-) diff --git a/src/optimise/optimisers.jl b/src/optimise/optimisers.jl index aa5b7203..bf2122a5 100644 --- a/src/optimise/optimisers.jl +++ b/src/optimise/optimisers.jl @@ -8,7 +8,9 @@ const ϵ = 1e-8 """ Descent(η) - Defaults: η = 0.1 + + Calls to `Descent()` default with: + - learning rate (η): 0.1 Classic gradient descent optimiser with learning rate `η`. For each parameter `p` and its gradient `δp`, this runs `p -= η*δp`. @@ -25,7 +27,10 @@ end """ Momentum(η, ρ) - Defaults: η = 0.01, ρ = 0.9 + + Calls to `Momentum()` default to: + - learning rate (η): 0.01 + - decay (ρ): 0.9 Gradient descent with learning rate `η` and momentum `ρ`. """ @@ -46,7 +51,10 @@ end """ Nesterov(η, ρ) - Defaults: η = 0.001, ρ = 0.9 + + Calls to `Nesterov()` default to: + - learning rate (η): 0.001 + - nesterov momentum (ρ): 0.9 Gradient descent with learning rate `η` and Nesterov momentum `ρ`. """ @@ -68,7 +76,10 @@ end """ RMSProp(η, ρ) - Defaults: η = 0.001, ρ = 0.9 + + Calls to `RMSProp()` default to: + - learning rate (η): 0.001 + - rho (ρ): 0.9 [RMSProp](https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf) optimiser. Parameters other than learning rate don't need tuning. Often a good @@ -90,8 +101,11 @@ function apply!(o::RMSProp, x, Δ) end """ - ADAM(η, β) - Defaults: η = 0.001, β = (0.9, 0.999) + ADAM(η, β::Tuple) + + Calls to `ADAM()` default to: + - learning rate (η): 0.001 + - (beta1, beta2) (β): (0.9, 0.999) [ADAM](https://arxiv.org/abs/1412.6980v8) optimiser. """ @@ -114,8 +128,11 @@ function apply!(o::ADAM, x, Δ) end """ - RADAM(η, β) - Defaults: η = 0.001, β = (0.9, 0.999) + RADAM(η, β::Tuple) + + Calls to `RADAM()` default to: + - learning rate (η): 0.001 + - (beta1, beta2) (β): (0.9, 0.999) [RADAM](https://arxiv.org/pdf/1908.03265v1.pdf) optimiser (Rectified ADAM). """ @@ -145,8 +162,11 @@ function apply!(o::RADAM, x, Δ) end """ - AdaMax(η, β) - Defaults: η = 0.001, β = (0.9, 0.999) + AdaMax(η, β::Tuple) + + Calls to `AdaMax()` default to: + - learning rate (η): 0.001 + - (beta1, beta2) (β): (0.9, 0.999) [AdaMax](https://arxiv.org/abs/1412.6980v9) optimiser. Variant of ADAM based on the ∞-norm. 
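The signature documented here takes `η` and a `β` tuple positionally; a short sketch of a call in that form (the model and data are placeholders):

```julia
using Flux

m = Dense(3, 1)                       # placeholder model
loss(x, y) = Flux.mse(m(x), y)
data = [(rand(3, 10), rand(1, 10))]   # single made-up batch

opt = AdaMax(0.002, (0.9, 0.995))     # η and (β1, β2) passed positionally
Flux.train!(loss, params(m), data, opt)
```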
@@ -171,7 +191,9 @@ end """ ADAGrad(η) - Defaults: η = 0.1 + + Calls to `AdaGrad()` default to: + - learning rate (η): 0.1 [ADAGrad](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf) optimiser. Parameters don't need tuning. @@ -192,7 +214,9 @@ end """ ADADelta(ρ) - Defaults: ρ = 0.9 + + Calls to `ADADelta()` default to: + rho (ρ): 0.9 [ADADelta](https://arxiv.org/abs/1212.5701) optimiser. Parameters don't need tuning. @@ -214,8 +238,11 @@ function apply!(o::ADADelta, x, Δ) end """ - AMSGrad(η, β) - Defaults: η = 0.001, β = (0.9, 0.999) + AMSGrad(η, β::Tuple) + + Calls to `AMSGrad()` default to: + - learning rate (η): 0.001 + - (beta1, beta2) (β): (0.9, 0.999) [AMSGrad](https://openreview.net/forum?id=ryQu7f-RZ) optimiser. Parameters don't need tuning. @@ -238,8 +265,11 @@ function apply!(o::AMSGrad, x, Δ) end """ - NADAM(η, β) - Defaults: η = 0.001, β = (0.9, 0.999) + NADAM(η, β::Tuple) + + Calls to `NADAM()` default to: + - learning rate (η): 0.001 + - (beta1, beta2) (β): (0.9, 0.999) [NADAM](http://cs229.stanford.edu/proj2015/054_report.pdf) optimiser. Parameters don't need tuning. @@ -263,8 +293,11 @@ function apply!(o::NADAM, x, Δ) end """ - ADAMW(η, β, decay) - Defaults: η = 0.001, β = (0.9, 0.999), decay = 0 + ADAMW(η, β::Tuple, decay) + + Calls to `ADAMW()` default to: + - learning rate (η) 0.001 + - (beta1, beta2) (β): (0.9, 0.999) [ADAMW](https://arxiv.org/abs/1711.05101) fixing weight decay regularization in Adam. """ @@ -299,8 +332,10 @@ function apply!(o::Optimiser, x, Δ) end """ -InvDecay(γ) -Defaults: γ = 0.001 + InvDecay(γ) + + Calls to `InvDecay()` default to: + - gamma (γ): 0.001 Apply inverse time decay to an optimiser ```julia @@ -323,10 +358,15 @@ function apply!(o::InvDecay, x, Δ) end """ -ExpDecay(eta, decay, decay_step, clip) -Defaults: eta = 0.001, decay = 0.1, decay_step = 1000, clip = 1e-4 + ExpDecay(eta, decay, decay_step, clip) -Schedule the learning rate `eta` by `decay` every `decay_step` till a minimum of `clip`. + Calls to `ExpDecay()` default to: + - learning rate (eta): 0.001 + - decay: 0.1 + - decay_step: 1000 + - clip: 1e-4 + +Discount the learning rate `eta` by `decay` every `decay_step` till a minimum of `clip`. To apply exponential decay to an optimiser: ```julia @@ -354,8 +394,10 @@ function apply!(o::ExpDecay, x, Δ) end """ -WeightDecay(wd) -Defaults: wd = 0 + WeightDecay(wd) + + Calls to `WeightDecay()` default to: + - weight decay (wd): 0 Decay the weight parameter by `wd` """ From fe52689cfe9b2b3a85e7172f5417a65b6a718d66 Mon Sep 17 00:00:00 2001 From: Dhairya Gandhi Date: Wed, 9 Oct 2019 16:16:11 +0530 Subject: [PATCH 07/13] in depth docstrings --- src/optimise/optimisers.jl | 29 ++++++++++++++++++++++++----- 1 file changed, 24 insertions(+), 5 deletions(-) diff --git a/src/optimise/optimisers.jl b/src/optimise/optimisers.jl index bf2122a5..14cc3fec 100644 --- a/src/optimise/optimisers.jl +++ b/src/optimise/optimisers.jl @@ -7,13 +7,32 @@ const ϵ = 1e-8 # TODO: should use weak refs """ - Descent(η) - - Calls to `Descent()` default with: - - learning rate (η): 0.1 +# Descent +## Description Classic gradient descent optimiser with learning rate `η`. -For each parameter `p` and its gradient `δp`, this runs `p -= η*δp`. +For each parameter `p` and its gradient `δp`, this runs `p -= η*δp` + +## Constructors + - `Descent()`: Use the default learning rate (η), as described in the parameters section. + + - `Descent(η)`: Provide a custom learning rate (η) to the Descent optimiser. 
+ +## Parameters + - Learning rate (η): The amount by which the gradients are discounted before updating the weights. Defaults to `0.1`. + +## Example +```julia-repl +opt = Descent() + +ps = params(model) + +gs = gradient(ps) do + loss(x, y) +end + +Flux.Optimise.update(opt, ps, gs) +``` """ mutable struct Descent eta::Float64 From f19066ee29afaf064579f3b3cb330dc00812324a Mon Sep 17 00:00:00 2001 From: Dhairya Gandhi Date: Thu, 10 Oct 2019 16:48:12 +0530 Subject: [PATCH 08/13] more docstrings --- src/optimise/optimisers.jl | 225 ++++++++++++++++++++++++++----------- 1 file changed, 161 insertions(+), 64 deletions(-) diff --git a/src/optimise/optimisers.jl b/src/optimise/optimisers.jl index 14cc3fec..64eee42a 100644 --- a/src/optimise/optimisers.jl +++ b/src/optimise/optimisers.jl @@ -7,23 +7,20 @@ const ϵ = 1e-8 # TODO: should use weak refs """ -# Descent + Descent(η) ## Description Classic gradient descent optimiser with learning rate `η`. For each parameter `p` and its gradient `δp`, this runs `p -= η*δp` -## Constructors - - `Descent()`: Use the default learning rate (η), as described in the parameters section. - - - `Descent(η)`: Provide a custom learning rate (η) to the Descent optimiser. - ## Parameters - - Learning rate (η): The amount by which the gradients are discounted before updating the weights. Defaults to `0.1`. + - Learning Rate (η): The amount by which the gradients are discounted before updating the weights. Defaults to `0.1`. ## Example ```julia-repl -opt = Descent() +opt = Descent() # uses default η (0.1) + +opt = Descent(0.3) # use provided η ps = params(model) @@ -47,11 +44,18 @@ end """ Momentum(η, ρ) - Calls to `Momentum()` default to: - - learning rate (η): 0.01 - - decay (ρ): 0.9 - Gradient descent with learning rate `η` and momentum `ρ`. + +## Parameters + - Learning Rate (`η`): Amount by which gradients are discounted before updating the weights. Defaults to `0.01`. + - Momentum (`ρ`): Parameter that accelerates descent in the relevant direction and dampens oscillations. Defaults to `0.9`. + +## Examples +```julia +opt = Momentum() # uses defaults of η = 0.01 and ρ = 0.9 + +opt = Momentum(0.01, 0.99) +``` """ mutable struct Momentum eta::Float64 @@ -71,11 +75,18 @@ end """ Nesterov(η, ρ) - Calls to `Nesterov()` default to: - - learning rate (η): 0.001 - - nesterov momentum (ρ): 0.9 - Gradient descent with learning rate `η` and Nesterov momentum `ρ`. + +## Parameters + - Learning Rate (η): Amount by which the gradients are dicsounted berfore updating the weights. Defaults to `0.001`. + - Nesterov Momentum (ρ): Paramters controlling the amount of nesterov momentum to be applied. Defaults to `0.9`. + +## Examples +```julia +opt = Nesterov() # uses defaults η = 0.001 and ρ = 0.9 + +opt = Nesterov(0.003, 0.95) +``` """ mutable struct Nesterov eta::Float64 @@ -96,13 +107,21 @@ end """ RMSProp(η, ρ) - Calls to `RMSProp()` default to: - - learning rate (η): 0.001 - - rho (ρ): 0.9 +Implements the RMSProp algortihm. Often a good choice for recurrent networks. Paramters other than learning rate generally don't need tuning. +## Parameters + - Learning Rate (η): Defaults to `0.001`. + - Rho (ρ): Defaults to `0.9`. + +## Examples +```julia +opt = RMSProp() # uses default η = 0.001 and ρ = 0.9 + +opt = RMSProp(0.002, 0.95) +``` + +## References [RMSProp](https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf) -optimiser. Parameters other than learning rate don't need tuning. Often a good -choice for recurrent networks. 
""" mutable struct RMSProp eta::Float64 @@ -122,10 +141,20 @@ end """ ADAM(η, β::Tuple) - Calls to `ADAM()` default to: - - learning rate (η): 0.001 - - (beta1, beta2) (β): (0.9, 0.999) +Implements the ADAM optimiser. +## Paramters + - Learning Rate (`η`): Defaults to `0.001`. + - Beta (`β::Tuple`): The first element refers to β1 and the second to β2. Defaults to `(0.9, 0.999)`. + +## Examples + +```julia +opt = ADAM() # uses the default η = 0.001 and β = (0.9, 0.999) + +opt = ADAM(0.001, (0.9, 0.8)) +``` +## References [ADAM](https://arxiv.org/abs/1412.6980v8) optimiser. """ mutable struct ADAM @@ -149,10 +178,21 @@ end """ RADAM(η, β::Tuple) - Calls to `RADAM()` default to: - - learning rate (η): 0.001 - - (beta1, beta2) (β): (0.9, 0.999) +Implements the rectified ADAM optimizer. +## Parameters + - Learning Rate (η): Defaults to `0.001` + - Beta (β::Tuple): The first element refers to β1 and the second to β2. Defaults to `(0.9, 0.999)`. + +## Examples + +```julia +opt = RADAM() # uses the default η = 0.001 and β = (0.9, 0.999) + +opt = RADAM(0.001, (0.9, 0.8)) +``` + +## References [RADAM](https://arxiv.org/pdf/1908.03265v1.pdf) optimiser (Rectified ADAM). """ mutable struct RADAM @@ -183,12 +223,20 @@ end """ AdaMax(η, β::Tuple) - Calls to `AdaMax()` default to: - - learning rate (η): 0.001 - - (beta1, beta2) (β): (0.9, 0.999) +Variant of ADAM based on ∞-norm. -[AdaMax](https://arxiv.org/abs/1412.6980v9) optimiser. Variant of ADAM based on -the ∞-norm. +## Parameters + - Learning Rate (η): Defaults to `0.001` + - Beta (β::Tuple): The first element refers to β1 and the second to β2. Defaults to `(0.9, 0.999)`. + +## Examples +```julia +opt = AdaMax() # uses default η and β + +opt = AdaMax(0.001, (0.9, 0.995)) +``` +## References +[AdaMax](https://arxiv.org/abs/1412.6980v9) optimiser. """ mutable struct AdaMax eta::Float64 @@ -211,9 +259,19 @@ end """ ADAGrad(η) - Calls to `AdaGrad()` default to: - - learning rate (η): 0.1 +Implements AdaGrad. It has parameter specific learning rates based on how frequently it is updated. +## Parameters + - Learning Rate (η): Defaults to `0.1` + +## Examples +```julia +opt = ADAGrad() # uses default η = 0.1 + +opt = ADAGrad(0.001) +``` + +## References [ADAGrad](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf) optimiser. Parameters don't need tuning. """ @@ -234,11 +292,19 @@ end """ ADADelta(ρ) - Calls to `ADADelta()` default to: - rho (ρ): 0.9 +Version of ADAGrad that adapts learning rate based on a window of past gradient updates. Parameters don't need tuning. -[ADADelta](https://arxiv.org/abs/1212.5701) optimiser. Parameters don't need -tuning. +## Parameters + - Rho (ρ): Factor by which gradient is decayed at each time step. Defaults to `0.9`. + +## Examples +```julia +opt = ADADelta() # uses default ρ = 0.9 +opt = ADADelta(0.89) +``` + +## References +[ADADelta](https://arxiv.org/abs/1212.5701) optimiser. """ mutable struct ADADelta rho::Float64 @@ -259,12 +325,20 @@ end """ AMSGrad(η, β::Tuple) - Calls to `AMSGrad()` default to: - - learning rate (η): 0.001 - - (beta1, beta2) (β): (0.9, 0.999) +Implements AMSGrad version of the ADAM optimiser. Parameters don't need tuning. -[AMSGrad](https://openreview.net/forum?id=ryQu7f-RZ) optimiser. Parameters don't need -tuning. +## Parameters + - Learning Rate (η): Defaults to `0.001`. + - Beta (β::Tuple): The first element refers to β1 and the second to β2. Defaults to `(0.9, 0.999)`. 
+ +## Examples +```julia +opt = AMSGrad() # uses default η and β +opt = AMSGrad(0.001, (0.89, 0.995)) +``` + +## References +[AMSGrad](https://openreview.net/forum?id=ryQu7f-RZ) optimiser. """ mutable struct AMSGrad eta::Float64 @@ -286,12 +360,20 @@ end """ NADAM(η, β::Tuple) - Calls to `NADAM()` default to: - - learning rate (η): 0.001 - - (beta1, beta2) (β): (0.9, 0.999) +Nesterov variant of ADAM. Parameters don't need tuning. -[NADAM](http://cs229.stanford.edu/proj2015/054_report.pdf) optimiser. Parameters don't need -tuning. +## Parameters + - Learning Rate (η): Defaults to `0.001`. + - Beta (β::Tuple): The first element refers to β1 and the second to β2. Defaults to `(0.9, 0.999)`. + +## Examples +```julia +opt = NADAM() # uses default η and β +opt = NADAM(0.002, (0.89, 0.995)) +``` + +## References +[NADAM](http://cs229.stanford.edu/proj2015/054_report.pdf) optimiser. """ mutable struct NADAM eta::Float64 @@ -314,11 +396,21 @@ end """ ADAMW(η, β::Tuple, decay) - Calls to `ADAMW()` default to: - - learning rate (η) 0.001 - - (beta1, beta2) (β): (0.9, 0.999) +Variant of ADAM defined by fixing weight decay regularization. -[ADAMW](https://arxiv.org/abs/1711.05101) fixing weight decay regularization in Adam. +## Parameters + - Learning Rate (η): Defaults to `0.001`. + - Beta (β::Tuple): The first element refers to β1 and the second to β2. Defaults to (0.9, 0.999). + - decay: Decay applied to weights during optimisation. Defaults to 0. + +## Examples +```julia +opt = ADAMW() # uses default η, β and decay +opt = ADAMW(0.001, (0.89, 0.995), 0.1) +``` + +## References +[ADAMW](https://arxiv.org/abs/1711.05101) """ ADAMW(η = 0.001, β = (0.9, 0.999), decay = 0) = Optimiser(ADAM(η, β), WeightDecay(decay)) @@ -353,10 +445,12 @@ end """ InvDecay(γ) - Calls to `InvDecay()` default to: - - gamma (γ): 0.001 +Applies inverse time decay to an optimiser -Apply inverse time decay to an optimiser +## Parameters + - gamma (γ): Defaults to `0.001` + +## Example ```julia Optimiser(InvDecay(..), Opt(..)) ``` @@ -379,17 +473,20 @@ end """ ExpDecay(eta, decay, decay_step, clip) - Calls to `ExpDecay()` default to: - - learning rate (eta): 0.001 - - decay: 0.1 - - decay_step: 1000 - - clip: 1e-4 - Discount the learning rate `eta` by `decay` every `decay_step` till a minimum of `clip`. +## Parameters + - Learning Rate (eta): Defaults to `0.001`. + - decay: Factor by which the learning rate is discounted. Defaults to `0.1`. + - decay_step: Schedules decay operations by setting number of steps between two decay operations. Defaults to `1000`. + - clip: Minimum value of learning rate. Defaults to `1e-4`. 
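A back-of-the-envelope sketch of the schedule these parameters describe, with made-up values (this is only the arithmetic, not the Flux internals); the composition example follows below:

```julia
# After every `decay_step` updates the rate is multiplied by `decay`,
# but it is never allowed to drop below `clip`.
eta, decay, decay_step, clip = 0.01, 0.5, 1000, 1e-4
rate(step) = max(eta * decay^fld(step, decay_step), clip)

rate.(0:1000:4000)   # 0.01, 0.005, 0.0025, 0.00125, 0.000625
```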
+ +## Example To apply exponential decay to an optimiser: ```julia Optimiser(ExpDecay(..), Opt(..)) + + opt = Optimiser(ExpDecay(), ADAM()) ``` """ mutable struct ExpDecay @@ -415,10 +512,10 @@ end """ WeightDecay(wd) - Calls to `WeightDecay()` default to: - - weight decay (wd): 0 +Decays the weight by `wd` -Decay the weight parameter by `wd` +## Parameters + - weight decay (wd): 0 """ mutable struct WeightDecay wd::Real From 623ee2c29c40ddd59c69fd2b55a6eb1f7f0b2afa Mon Sep 17 00:00:00 2001 From: Dhairya Gandhi Date: Thu, 10 Oct 2019 20:16:00 +0530 Subject: [PATCH 09/13] typo Co-Authored-By: Mike J Innes --- docs/src/training/optimisers.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/src/training/optimisers.md b/docs/src/training/optimisers.md index 47f2e9e6..2d195191 100644 --- a/docs/src/training/optimisers.md +++ b/docs/src/training/optimisers.md @@ -88,7 +88,7 @@ This is the basic definition of a Momentum update rule given by: $v = ρ * v - η * Δ$ $w = w - v$ -The `apply!` defines the update rules for an optimsier `opt`, given the parameters and gradients. It returns the updated gradients usually. Here, every parameter `x` is retrieved from the running state `v` and subsequently updates the state of the optimiser. +The `apply!` defines the update rules for an optimiser `opt`, given the parameters and gradients. It returns the updated gradients usually. Here, every parameter `x` is retrieved from the running state `v` and subsequently updates the state of the optimiser. Flux internally calls on this function via the `update!` function. It shares the API with `apply!` but ensures that multiple parameters are handled gracefully. In the future, it will also be delegating immutable update operations. From a55878453c9dfb499411872f4313facbe0b613cd Mon Sep 17 00:00:00 2001 From: Dhairya Gandhi Date: Thu, 10 Oct 2019 20:16:29 +0530 Subject: [PATCH 10/13] typo Co-Authored-By: Mike J Innes --- docs/src/training/optimisers.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/src/training/optimisers.md b/docs/src/training/optimisers.md index 2d195191..e3178504 100644 --- a/docs/src/training/optimisers.md +++ b/docs/src/training/optimisers.md @@ -94,7 +94,7 @@ Flux internally calls on this function via the `update!` function. It shares the ## Composing Optimisers -Flux defines a special kind of optimiser called simply as `Optimiser` which takes in a arbitrary optimisers as input. Its behaviour is similar to the usual optimisers, but differs in that it acts by calling the optimsers listed in it sequentially. Each optimiser produces a modified gradient +Flux defines a special kind of optimiser called simply as `Optimiser` which takes in a arbitrary optimisers as input. Its behaviour is similar to the usual optimisers, but differs in that it acts by calling the optimisers listed in it sequentially. Each optimiser produces a modified gradient that will be fed into the next, and the resultant update will be applied to the parameter as usual. A classic use case is where adding decays is desirable. Flux defines some basic decays including `ExpDecay`, `InvDecay` etc. 
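In the same spirit as the `ExpDecay` composition shown in the surrounding hunk, `WeightDecay` composes with any rule to give a simple L2-style penalty; a hypothetical one-liner (the `1e-4` is an arbitrary illustrative value):

```julia
using Flux

# WeightDecay adds wd .* p to each parameter's gradient before ADAM
# transforms it, i.e. the usual L2 weight penalty.
opt = Optimiser(WeightDecay(1e-4), ADAM())
```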
``julia From 4477dd8d544c53c1f74f3d2e638e90df8895f8a6 Mon Sep 17 00:00:00 2001 From: Dhairya Gandhi Date: Thu, 10 Oct 2019 20:27:11 +0530 Subject: [PATCH 11/13] reviews --- docs/src/training/optimisers.md | 23 ++++++++++++++--------- src/optimise/optimisers.jl | 1 - 2 files changed, 14 insertions(+), 10 deletions(-) diff --git a/docs/src/training/optimisers.md b/docs/src/training/optimisers.md index e3178504..c5f44a95 100644 --- a/docs/src/training/optimisers.md +++ b/docs/src/training/optimisers.md @@ -66,14 +66,16 @@ Flux's optimsers are built around a `struct` that holds all the optimiser parame In this manner Flux also allows one to create custom optimisers to be used seamlessly. Let's work this with a simple example. ```julia -mutable struct Momentum{T,S,D} - eta::T - rho::S - velocity::D +mutable struct Momentum + eta + rho + velocity end + +Momentum(eta::Real, rho::Real) = Momentum(eta, rho, IdDict()) ``` -The `Momentum` type will act as our optimiser in this case. Notice that we have added all the parameters as fields, along with the velocity which we will use as our state. **Note that this behaviour is set to change in consequent versions of Flux**. We can now define the rule applied when this optimiser is invoked. +The `Momentum` type will act as our optimiser in this case. Notice that we have added all the parameters as fields, along with the velocity which we will use as our state dictionary. Each parameter in our models will get an entry in there. We can now define the rule applied when this optimiser is invoked. ```julia function apply!(o::Momentum, x, Δ) @@ -85,19 +87,22 @@ end ``` This is the basic definition of a Momentum update rule given by: -$v = ρ * v - η * Δ$ -$w = w - v$ + +```math +v = ρ * v - η * Δ +w = w - v +``` The `apply!` defines the update rules for an optimiser `opt`, given the parameters and gradients. It returns the updated gradients usually. Here, every parameter `x` is retrieved from the running state `v` and subsequently updates the state of the optimiser. -Flux internally calls on this function via the `update!` function. It shares the API with `apply!` but ensures that multiple parameters are handled gracefully. In the future, it will also be delegating immutable update operations. +Flux internally calls on this function via the `update!` function. It shares the API with `apply!` but ensures that multiple parameters are handled gracefully. ## Composing Optimisers Flux defines a special kind of optimiser called simply as `Optimiser` which takes in a arbitrary optimisers as input. Its behaviour is similar to the usual optimisers, but differs in that it acts by calling the optimisers listed in it sequentially. Each optimiser produces a modified gradient that will be fed into the next, and the resultant update will be applied to the parameter as usual. A classic use case is where adding decays is desirable. Flux defines some basic decays including `ExpDecay`, `InvDecay` etc. -``julia +```julia opt = Optimiser(ExpDecay(0.001, 0.1, 1000, 1e-4), Descent()) ``` diff --git a/src/optimise/optimisers.jl b/src/optimise/optimisers.jl index 64eee42a..8567c7da 100644 --- a/src/optimise/optimisers.jl +++ b/src/optimise/optimisers.jl @@ -9,7 +9,6 @@ const ϵ = 1e-8 """ Descent(η) -## Description Classic gradient descent optimiser with learning rate `η`. 
For each parameter `p` and its gradient `δp`, this runs `p -= η*δp` From 776023ddad9ffa45d5de0838a4fbad9b9a43c390 Mon Sep 17 00:00:00 2001 From: Dhairya Gandhi Date: Thu, 10 Oct 2019 20:35:28 +0530 Subject: [PATCH 12/13] fixes --- docs/src/training/optimisers.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/src/training/optimisers.md b/docs/src/training/optimisers.md index c5f44a95..5e8b95de 100644 --- a/docs/src/training/optimisers.md +++ b/docs/src/training/optimisers.md @@ -93,7 +93,7 @@ v = ρ * v - η * Δ w = w - v ``` -The `apply!` defines the update rules for an optimiser `opt`, given the parameters and gradients. It returns the updated gradients usually. Here, every parameter `x` is retrieved from the running state `v` and subsequently updates the state of the optimiser. +The `apply!` defines the update rules for an optimiser `opt`, given the parameters and gradients. It returns the updated gradients. Here, every parameter `x` is retrieved from the running state `v` and subsequently updates the state of the optimiser. Flux internally calls on this function via the `update!` function. It shares the API with `apply!` but ensures that multiple parameters are handled gracefully. From 7ead2d6c7b4054d862e4919c2e8c8e9159d2839f Mon Sep 17 00:00:00 2001 From: Mike Innes Date: Tue, 22 Oct 2019 13:36:39 +0100 Subject: [PATCH 13/13] typo --- src/optimise/optimisers.jl | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/src/optimise/optimisers.jl b/src/optimise/optimisers.jl index 8567c7da..ea2ef067 100644 --- a/src/optimise/optimisers.jl +++ b/src/optimise/optimisers.jl @@ -27,7 +27,7 @@ gs = gradient(ps) do loss(x, y) end -Flux.Optimise.update(opt, ps, gs) +Flux.Optimise.update!(opt, ps, gs) ``` """ mutable struct Descent @@ -230,7 +230,7 @@ Variant of ADAM based on ∞-norm. ## Examples ```julia -opt = AdaMax() # uses default η and β +opt = AdaMax() # uses default η and β opt = AdaMax(0.001, (0.9, 0.995)) ``` @@ -405,7 +405,7 @@ Variant of ADAM defined by fixing weight decay regularization. ## Examples ```julia opt = ADAMW() # uses default η, β and decay -opt = ADAMW(0.001, (0.89, 0.995), 0.1) +opt = ADAMW(0.001, (0.89, 0.995), 0.1) ``` ## References