# Optimisers
Consider a [simple linear regression](../models/basics.md). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.

```julia
using Flux

W = rand(2, 5)
b = rand(2)

predict(x) = (W * x) .+ b

loss(x, y) = sum((predict(x) .- y).^2)

x, y = rand(5), rand(2) # Dummy data
l = loss(x, y) # ~ 3

θ = Params([W, b])
grads = gradient(() -> loss(x, y), θ)
```
We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:

```julia
using Flux.Optimise: update!

η = 0.1 # Learning Rate
for p in (W, b)
  update!(p, -η * grads[p])
end
```
Running this will alter the parameters `W` and `b` and our loss should go down. Flux provides a more general way to do optimiser updates like this.

```julia
opt = Descent(0.1) # Gradient descent with learning rate 0.1

for p in (W, b)
  update!(opt, p, grads[p])
end
```
An optimiser's `update!` accepts a parameter and a gradient, and updates the parameter according to the chosen rule. We can also pass `opt` to our [training loop](training.md), which will update all parameters of the model in a loop. Either way, we can easily replace `Descent` with a more advanced optimiser such as `ADAM`.
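For instance, continuing the snippet above, swapping in `ADAM` might look like the sketch below; the learning rate and the number of repeated dummy batches are arbitrary choices for illustration.

```julia
opt = ADAM(0.001)             # only the optimiser object changes

for p in (W, b)
  update!(opt, p, grads[p])
end

# Or hand the work to the training loop: `data` is any iterable of
# (x, y) batches, here just our single dummy sample repeated 100 times.
data = Iterators.repeated((x, y), 100)
Flux.train!(loss, θ, data, opt)
```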
## Optimiser Reference
All optimisers return an object that, when passed to `train!`, will update the parameters passed to it.
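Most of the rules below are constructed in the same spirit, with their hyperparameters passed as positional arguments. For example (the values shown are just the usual defaults, for illustration):

```julia
Descent(0.1)              # plain gradient descent with step size 0.1
Momentum(0.01, 0.9)       # step size and momentum coefficient
RMSProp(0.001, 0.9)       # step size and decay of the running squared-gradient average
ADAM(0.001, (0.9, 0.999)) # step size and the two moment decay rates
```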
```@docs
Flux.Optimise.update!
Descent
Momentum
Nesterov
RMSProp
ADAM
RADAM
AdaMax
ADAGrad
ADADelta
AMSGrad
NADAM
ADAMW
```
## Optimiser Interface
Flux's optimisers are built around a `struct` that holds all the optimiser's parameters, along with a definition of how to apply the update rule associated with it. We do this via the `apply!` function, which takes the optimiser as its first argument, followed by the parameter and its corresponding gradient.

In this manner Flux also allows one to create custom optimisers to be used seamlessly. Let's work through a simple example.

```julia
mutable struct Momentum
  eta
  rho
  velocity
end

Momentum(eta::Real, rho::Real) = Momentum(eta, rho, IdDict())
```
The `Momentum` type will act as our optimiser in this case. Notice that we have stored all the parameters as fields, along with `velocity`, which we will use as our state dictionary: each parameter in our models will get its own entry there. We can now define the rule applied when this optimiser is invoked.

```julia
function apply!(o::Momentum, x, Δ)
  η, ρ = o.eta, o.rho
  v = get!(o.velocity, x, zero(x))::typeof(x)
  @. v = ρ * v - η * Δ
  @. Δ = -v  # the mutated Δ is the return value: the step that will be subtracted from the parameter
end
```
This is the basic definition of a Momentum update rule given by:

```math
v = ρ * v - η * Δ
w = w + v
```
The `apply!` function defines the update rule for an optimiser, given a parameter `x` and its gradient `Δ`. It returns the modified gradient, which is then subtracted from the parameter by `update!`. Here, the velocity `v` associated with each parameter `x` is looked up in the optimiser's running state (and created on first use), and that state is updated in place.
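As a small usage sketch (the parameter `p` and stand-in gradient `Δp` below are made up for illustration), the value returned by `apply!` is the step taken away from the parameter:

```julia
opt = Momentum(0.01, 0.9)   # the custom optimiser defined above

p  = rand(3, 3)             # a parameter, made up for illustration
Δp = rand(3, 3)             # stand-in for the gradient of the loss w.r.t. p

p .-= apply!(opt, p, Δp)    # apply! returns the step; subtracting it performs the update
```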
Flux internally calls this function via `update!`, which shares the API of `apply!` but also handles a whole collection of parameters gracefully.
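In simplified form (a sketch of the idea, not Flux's exact source), that wiring looks roughly like this: the single-parameter method applies the step returned by `apply!`, and the `Params` method simply loops over every tracked parameter.

```julia
function update!(opt, x, x̄)
  x .-= apply!(opt, x, x̄)          # subtract the step computed by the rule
  return x
end

function update!(opt, xs::Params, gs)
  for x in xs
    gs[x] === nothing && continue  # skip parameters without a gradient
    update!(opt, x, gs[x])
  end
end
```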
## Composing Optimisers
Flux defines a special kind of optimiser simply called `Optimiser`, which takes in arbitrary optimisers as input. Its behaviour is similar to the usual optimisers, but it acts by calling the optimisers listed in it sequentially. Each optimiser produces a modified gradient that will be fed into the next, and the resultant update will be applied to the parameter as usual. A classic use case is adding decays: Flux defines some basic decays including `ExpDecay`, `InvDecay`, etc.

```julia
opt = Optimiser(ExpDecay(0.001, 0.1, 1000, 1e-4), Descent())
```
Here we apply exponential decay to the `Descent` optimiser. The arguments given to `ExpDecay` are its defaults, under which the learning rate is decayed every 1000 steps. The composed optimiser is then applied like any other optimiser.

```julia
w = randn(10, 10)
w1 = randn(10, 10)
ps = Params([w, w1])

loss(x) = Flux.mse(w * x, w1 * x)

loss(rand(10)) # around 9

for t = 1:10^5
  θ̄ = gradient(() -> loss(rand(10)), ps)
  Flux.Optimise.update!(opt, ps, θ̄)
end

loss(rand(10)) # around 0.9
```
In this manner it is possible to compose optimisers for some added flexibility.

## Decays
Similar to optimisers, Flux also defines some simple decays that can be used in conjunction with other optimisers, or standalone.
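For example, a decay is typically composed with another rule via `Optimiser`; the coefficients below are purely illustrative.

```julia
# An inverse decay schedule wrapped around plain gradient descent...
opt = Optimiser(InvDecay(0.001), Descent(0.1))

# ...or weight decay (L2 regularisation) combined with ADAM.
opt = Optimiser(WeightDecay(1e-4), ADAM(0.001))
```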
```@docs
ExpDecay
InvDecay
WeightDecay
```