diff --git a/latest/internals/tracker.html b/latest/internals/tracker.html
index 65ba7e58..ee903459 100644
--- a/latest/internals/tracker.html
+++ b/latest/internals/tracker.html
@@ -6,7 +6,23 @@

Backpropagation

Flux.Tracker

Backpropagation, or reverse-mode automatic differentiation, is handled by the Flux.Tracker module.

julia> using Flux.Tracker

The param function converts a normal Julia array into a new object that, while behaving like an array, tracks extra information that allows us to calculate derivatives. For example, say we multiply two parameters:

julia> W = param([1 2; 3 4])
+

Backpropagation

Flux.Tracker

Backpropagation, or reverse-mode automatic differentiation, is handled by the Flux.Tracker module.

julia> using Flux.Tracker

Here we discuss some more advanced uses of this module, as well as covering its internals.

Taking Gradients

In the basics section we covered basic usage of the gradient function.

using Flux.Tracker
+
+Tracker.gradient((a, b) -> a*b, 2, 3) # (3.0 (tracked), 2.0 (tracked))

gradient is actually just a thin wrapper around the backpropagator-based interface, forward.

using Flux.Tracker: forward
+
+y, back = forward((a, b) -> a*b, 2, 3) # (6.0 (tracked), Flux.Tracker.#9)
+
+back(1) # (3.0 (tracked), 2.0 (tracked))

The forward function returns two results. The first, y, is the original value of the function (perhaps with tracking applied). The second, back, is a new function which, given a sensitivity, returns the sensitivity of the inputs to forward (we call this a "backpropagator"). One use of this interface is to provide custom sensitivities when outputs are not scalar.

julia> y, back = forward((a, b) -> a.*b, [1,2,3],[4,5,6])
+(param([4.0, 10.0, 18.0]), Flux.Tracker.#9)
+
+julia> back([1,1,1])
+(param([4.0, 5.0, 6.0]), param([1.0, 2.0, 3.0]))

We can also take gradients in-place. This can be useful if you only care about first-order gradients.

a, b = param(2), param(3)
+
+c = a*b # 6.0 (tracked)
+
+Tracker.back!(c)
+
+Tracker.grad(a), Tracker.grad(b) # (3.0, 2.0)

Tracked Arrays

The param function converts a normal Julia array into a new object that, while behaving like an array, tracks extra information that allows us to calculate derivatives. For example, say we multiply two parameters:

julia> W = param([1 2; 3 4])
 Tracked 2×2 Array{Float64,2}:
  1.0  2.0
  3.0  4.0
@@ -29,40 +45,15 @@ julia> W.grad
 julia> x.grad
 2-element Array{Float64,1}:
  -2.0
- -2.0

Internals

All Tracked* objects (TrackedArray, TrackedReal) are light wrappers around the Tracked type, which you can access via the .tracker field.

julia> x.tracker
-Flux.Tracker.Tracked{Array{Float64,1}}(0x00000000, Flux.Tracker.Call{Void,Tuple{}}(nothing, ()), true, [5.0, 6.0], [-2.0, -2.0])

The Tracker stores the value and gradient of a given object, which we've seen before.

julia> x.tracker.data
-2-element Array{Float64,1}:
- 5.0
- 6.0
+ -2.0

You may sometimes want to drop derivative information and just get the plain value back. You can do this by calling Tracker.data(W).
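For example, assuming W is the tracked array defined above, a quick sketch:

Tracker.data(W)  # plain 2×2 Array{Float64,2}: [1.0 2.0; 3.0 4.0], no tracking attached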

Custom Gradients

We can hook in to the processes above to implement custom gradients for a function or kernel. For a toy example, imagine a custom implementation of minus:

minus(a, b) = a - b

Firstly, we must tell the tracker system to stop when it sees a call to minus, and record it. We can do this using dispatch:

using Flux.Tracker: TrackedReal, track, @grad
 
-julia> x.tracker.grad
+minus(a::TrackedArray, b::TrackedArray) = Tracker.track(minus, a, b)

track takes care of building a new Tracked object and recording the operation on the tape. We just need to provide a gradient definition.

@grad function minus(a, b)
+  return minus(data(a), data(b)), Δ -> (Δ, -Δ)
+end

This is essentially just a way of overloading the forward function we saw above. We strip tracking from a and b so that we are calling the original definition of minus (otherwise, we'd just try to track the call again and hit an infinite regress).

Note that in the backpropagator we don't call data(a); we do in fact want to track this, since nested AD will take a derivative through the backpropagator itself. For example, the gradient of * might look like this.

@grad a * b = data(a)*data(b), Δ -> (Δ*b, a*Δ)

For multi-argument functions with custom gradients, you likely want to catch not just minus(::TrackedArray, ::TrackedArray) but also minus(::Array, ::TrackedArray) and so on. To do so, just define those extra signatures as needed:

minus(a::AbstractArray, b::TrackedArray) = Tracker.track(minus, a, b)
+minus(a::TrackedArray, b::AbstractArray) = Tracker.track(minus, a, b)

Tracked Internals

All Tracked* objects (TrackedArray, TrackedReal) are light wrappers around the Tracked type, which you can access via the .tracker field.

julia> x.tracker
+Flux.Tracker.Tracked{Array{Float64,1}}(0x00000000, Flux.Tracker.Call{Void,Tuple{}}(nothing, ()), true, [5.0, 6.0], [-2.0, -2.0])

The Tracker stores the gradient of a given object, which we've seen before.

julia> x.tracker.grad
 2-element Array{Float64,1}:
  -2.0
  -2.0

The tracker also contains a Call object, which simply represents a function call that was made at some point during the forward pass. For example, the + call would look like this:

julia> Tracker.Call(+, 1, 2)
 Flux.Tracker.Call{Base.#+,Tuple{Int64,Int64}}(+, (1, 2))

In the case of the y we produced above, we can see that it stores the call that produced it – that is, W*x.

julia> y.tracker.f
-Flux.Tracker.Call{...}(*, (param([1.0 2.0; 3.0 4.0]), param([5.0, 6.0])))

Notice that, because the arguments to the call may themselves be tracked arrays storing their own calls, Tracker ends up forming a data structure that records everything that happened during the forward pass (often known as a tape).

When we call back!(y, [1, -1]), the sensitivities [1, -1] simply get forwarded to y's call (*), effectively calling

Tracker.back(*, [1, -1], W, x)

which in turn calculates the sensitivities of the arguments (W and x) and backpropagates through their calls. This is recursive, so it will walk the entire program graph and propagate gradients to the original model parameters.

Custom Gradients

We can hook in to the processes above to implement custom gradients for a function or kernel. For a toy example, imagine a custom implementation of minus:

julia> minus(a, b) = a - b

Firstly, we must tell the tracker system to stop when it sees a call to minus, and record it. We can do this using dispatch:

julia> minus(a::TrackedArray, b::TrackedArray) = Tracker.track(minus, a, b)
-minus (generic function with 2 methods)

Tracker.track does two things: (1) it makes sure minus is called with normal arrays, not tracked ones (you can use @show inside minus to verify this), and (2) it uses the result to add a minus node to the tape. Look inside the result of calling minus to see what happened:

julia> a, b = param([6,5,4]), param([1,2,3])
-(param([6.0, 5.0, 4.0]), param([1.0, 2.0, 3.0]))
-
-julia> c = minus(a, b)
-Tracked 3-element Array{Float64,1}:
- 5.0
- 3.0
- 1.0
-
-julia> c.tracker.f
-Flux.Tracker.Call{...}(minus, (param([6.0, 5.0, 4.0]), param([1.0, 2.0, 3.0])))

Finally, we have to specify the gradient of minus.

julia> Tracker.back(::typeof(minus), Δ, a, b) =
-        (Tracker.@back(a, Δ); Tracker.@back(b, -Δ))

@back(x, Δ) tells the tracker to continue propagating the sensitivity Δ through x. Now, AD will work with any program that calls minus.

julia> Flux.back!(c, 1)
-
-julia> a.grad
-3-element Array{Float64,1}:
- 1.0
- 1.0
- 1.0
-
-julia> b.grad
-3-element Array{Float64,1}:
- -1.0
- -1.0
- -1.0

Notes

For multi-argument functions with custom gradients, you likely want to catch not just minus(::TrackedArray, ::TrackedArray) but also minus(::Array, ::TrackedArray) and so on. To do so, just define those extra signatures as needed:

minus(a::AbstractArray, b::TrackedArray) = Tracker.track(minus, a, b)
-minus(a::TrackedArray, b::AbstractArray) = Tracker.track(minus, a, b)

@back must be called exactly once on each tracked input argument. You do not need to do any special handling if one of the arguments is not tracked, as @back will just become a no-op.

+Flux.Tracker.Call{...}(*, (param([1.0 2.0; 3.0 4.0]), param([5.0, 6.0])))

Notice that, because the arguments to the call may themselves be tracked arrays storing their own calls, Tracker ends up forming a data structure that records everything that happened during the forward pass (often known as a tape).

When we call back!(y, [1, -1]), the sensitivities [1, -1] simply get forwarded to y's call (*), effectively calling

Tracker.back(*, [1, -1], W, x)

which in turn calculates the sensitivities of the arguments (W and x) and back-propagates through their calls. This is recursive, so it will walk the entire program graph and propagate gradients to the original model parameters.

diff --git a/latest/models/basics.html b/latest/models/basics.html
index 39e531e5..9c63a2c9 100644
--- a/latest/models/basics.html
+++ b/latest/models/basics.html
@@ -6,28 +6,54 @@

Basics

Model-Building Basics

Taking Gradients

Consider a simple linear regression, which tries to predict an output array y from an input x. (It's a good idea to follow this example in the Julia REPL.)

W = rand(2, 5)
+

Basics

Model-Building Basics

Taking Gradients

Flux's core feature is taking gradients of Julia code. The gradient function takes another Julia function f and a set of arguments, and returns the gradient with respect to each argument. (It's a good idea to try pasting these examples in the Julia terminal.)

using Flux.Tracker
+
+f(x) = 3x^2 + 2x + 1
+
+# df/dx = 6x + 2
+f′(x) = Tracker.gradient(f, x)[1]
+
+f′(2) # 14.0 (tracked)
+
+# d²f/dx² = 6
+f′′(x) = Tracker.gradient(f′, x)[1]
+
+f′′(2) # 6.0 (tracked)

(We'll learn more about why these numbers show up as (tracked) below.)

When a function has many parameters, we can pass them all in explicitly:

f(W, b, x) = W * x + b
+
+Tracker.gradient(f, 2, 3, 4)
+(4.0 (tracked), 1.0, 2.0 (tracked))

But machine learning models can have hundreds of parameters! Flux offers a nice way to handle this. We can tell Flux to treat something as a parameter via param. Then we can collect these together and tell gradient to collect the gradients of all of them at once.

W = param(2) # 2.0 (tracked)
+b = param(3) # 3.0 (tracked)
+
+f(x) = W * x + b
+
+params = Params([W, b])
+grads = Tracker.gradient(() -> f(4), params)
+
+grads[W] # 4.0
+grads[b] # 1.0

There are a few things to notice here. Firstly, W and b now show up as tracked. Tracked things behave like normal numbers or arrays, but keep records of everything you do with them, allowing Flux to calculate their gradients. gradient takes a zero-argument function; no arguments are necessary because the Params tell it what to differentiate.

This will come in really handy when dealing with big, complicated models. For now, though, let's start with something simple.

Simple Models

Consider a simple linear regression, which tries to predict an output array y from an input x.

W = rand(2, 5)
 b = rand(2)
 
 predict(x) = W*x .+ b
-loss(x, y) = sum((predict(x) .- y).^2)
+
+function loss(x, y)
+  ŷ = predict(x)
+  sum((y .- ŷ).^2)
+end
 
 x, y = rand(5), rand(2) # Dummy data
-loss(x, y) # ~ 3

To improve the prediction we can take the gradients of W and b with respect to the loss function and perform gradient descent. We could calculate gradients by hand, but Flux will do it for us if we tell it that W and b are trainable parameters.

using Flux.Tracker
+loss(x, y) # ~ 3

To improve the prediction we can take the gradients of W and b with respect to the loss and perform gradient descent. Let's tell Flux that W and b are parameters, just like we did above.

using Flux.Tracker
 
 W = param(W)
 b = param(b)
 
-l = loss(x, y)
+gs = Tracker.gradient(() -> loss(x, y), Params([W, b]))

Now that we have gradients, we can pull them out and update W to train the model. The update!(W, Δ) function applies W = W + Δ, which we can use for gradient descent.

using Flux.Tracker: update!
 
-back!(l)

loss(x, y) returns the same number, but it's now a tracked value that records gradients as it goes along. Calling back! then accumulates the gradient of W and b. We can see what this gradient is, and modify W to train the model.

using Flux.Tracker: grad, update!
-
-Δ = grad(W)
+Δ = gs[W]
 
 # Update the parameter and reset the gradient
 update!(W, -0.1Δ)
 
-loss(x, y) # ~ 2.5

The loss has decreased a little, meaning that our prediction for x is closer to the target y. If we have some data we can already try training the model.

All deep learning in Flux, however complex, is a simple generalisation of this example. Of course, models can look very different – they might have millions of parameters or complex control flow, and there are ways to manage this complexity. Let's see what that looks like.

Building Layers

It's common to create more complex models than the linear regression above. For example, we might want to have two linear layers with a nonlinearity like sigmoid (σ) in between them. In the above style we could write this as:

W1 = param(rand(3, 5))
+loss(x, y) # ~ 2.5

The loss has decreased a little, meaning that our prediction for x is closer to the target y. If we have some data we can already try training the model.

All deep learning in Flux, however complex, is a simple generalisation of this example. Of course, models can look very different – they might have millions of parameters or complex control flow. Let's see how Flux handles more complex models.

Building Layers

It's common to create more complex models than the linear regression above. For example, we might want to have two linear layers with a nonlinearity like sigmoid (σ) in between them. In the above style we could write this as:

W1 = param(rand(3, 5))
 b1 = param(rand(3))
 layer1(x) = W1 * x .+ b1
 
diff --git a/latest/models/layers.html b/latest/models/layers.html
index 71ce8706..42310cd3 100644
--- a/latest/models/layers.html
+++ b/latest/models/layers.html
@@ -11,26 +11,26 @@ m(5) == 26
 
 m = Chain(Dense(10, 5), Dense(5, 2))
 x = rand(10)
-m(x) == m[2](m[1](x))

Chain also supports indexing and slicing, e.g. m[2] or m[1:end-1]. m[1:3](x) will calculate the output of the first three layers.

source
Flux.DenseType.
Dense(in::Integer, out::Integer, σ = identity)

Creates a traditional Dense layer with parameters W and b.

y = σ.(W * x .+ b)

The input x must be a vector of length in, or a batch of vectors represented as an in × N matrix. The output y will be a vector or batch of length out.

julia> d = Dense(5, 2)
+m(x) == m[2](m[1](x))

Chain also supports indexing and slicing, e.g. m[2] or m[1:end-1]. m[1:3](x) will calculate the output of the first three layers.
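For instance, a small sketch using a hypothetical three-layer chain:

m3 = Chain(Dense(10, 8, σ), Dense(8, 5, σ), Dense(5, 2))
x = rand(10)

m3[1:2](x)                  # output of the first two layers
m3[3](m3[1:2](x)) == m3(x)  # composing the slice with the last layer gives the full model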

source
Flux.DenseType.
Dense(in::Integer, out::Integer, σ = identity)

Creates a traditional Dense layer with parameters W and b.

y = σ.(W * x .+ b)

The input x must be a vector of length in, or a batch of vectors represented as an in × N matrix. The output y will be a vector or batch of length out.

julia> d = Dense(5, 2)
 Dense(5, 2)
 
 julia> d(rand(5))
 Tracked 2-element Array{Float64,1}:
   0.00257447
-  -0.00449443
source
Flux.ConvType.
Conv(size, in=>out)
-Conv(size, in=>out, relu)

Standard convolutional layer. size should be a tuple like (2, 2). in and out specify the number of input and output channels respectively.

Data should be stored in WHCN order. In other words, a 100×100 RGB image would be a 100×100×3 array, and a batch of 50 would be a 100×100×3×50 array.

Takes the keyword arguments pad, stride and dilation.

source

Recurrent Layers

Much like the core layers above, but can be used to process sequence data (as well as other kinds of structured data).

Flux.RNNFunction.
RNN(in::Integer, out::Integer, σ = tanh)

The most basic recurrent layer; essentially acts as a Dense layer, but with the output fed back into the input each time step.

source
Flux.LSTMFunction.
LSTM(in::Integer, out::Integer, σ = tanh)

Long Short Term Memory recurrent layer. Behaves like an RNN but generally exhibits a longer memory span over sequences.

See this article for a good overview of the internals.

source
Flux.GRUFunction.
GRU(in::Integer, out::Integer, σ = tanh)

Gated Recurrent Unit layer. Behaves like an RNN but generally exhibits a longer memory span over sequences.

See this article for a good overview of the internals.

source
Flux.RecurType.
Recur(cell)

Recur takes a recurrent cell and makes it stateful, managing the hidden state in the background. cell should be a model of the form:

h, y = cell(h, x...)

For example, here's a recurrent network that keeps a running total of its inputs.

accum(h, x) = (h+x, x)
+  -0.00449443
source
Flux.ConvType.
Conv(size, in=>out)
+Conv(size, in=>out, relu)

Standard convolutional layer. size should be a tuple like (2, 2). in and out specify the number of input and output channels respectively.

Data should be stored in WHCN order. In other words, a 100×100 RGB image would be a 100×100×3 array, and a batch of 50 would be a 100×100×3×50 array.

Takes the keyword arguments pad, stride and dilation.
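As a brief usage sketch (the kernel size, channel counts and batch size here are illustrative, not taken from the docs above):

c = Conv((2, 2), 3=>16, relu, stride = 1, pad = 0)
x = rand(100, 100, 3, 50)  # a batch of 50 100×100 RGB images in WHCN order
c(x)                       # 99×99×16×50 tracked output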

source

Recurrent Layers

Much like the core layers above, but can be used to process sequence data (as well as other kinds of structured data).

Flux.RNNFunction.
RNN(in::Integer, out::Integer, σ = tanh)

The most basic recurrent layer; essentially acts as a Dense layer, but with the output fed back into the input each time step.
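For example, a minimal sketch (the sizes are arbitrary):

r = RNN(10, 5)
r(rand(10))  # 5-element (tracked) output
r(rand(10))  # the hidden state from the first call is carried over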

source
Flux.LSTMFunction.
LSTM(in::Integer, out::Integer, σ = tanh)

Long Short Term Memory recurrent layer. Behaves like an RNN but generally exhibits a longer memory span over sequences.

See this article for a good overview of the internals.

source
Flux.GRUFunction.
GRU(in::Integer, out::Integer, σ = tanh)

Gated Recurrent Unit layer. Behaves like an RNN but generally exhibits a longer memory span over sequences.

See this article for a good overview of the internals.

source
Flux.RecurType.
Recur(cell)

Recur takes a recurrent cell and makes it stateful, managing the hidden state in the background. cell should be a model of the form:

h, y = cell(h, x...)

For example, here's a recurrent network that keeps a running total of its inputs.

accum(h, x) = (h+x, x)
 rnn = Flux.Recur(accum, 0)
 rnn(2) # 2
 rnn(3) # 3
 rnn.state # 5
 rnn.(1:10) # apply to a sequence
-rnn.state # 60
source

Activation Functions

Non-linearities that go between layers of your model. Most of these functions are defined in NNlib but are available by default in Flux.

Note that, unless otherwise stated, activation functions operate on scalars. To apply them to an array you can call σ.(xs), relu.(xs) and so on.

NNlib.σFunction.
σ(x) = 1 / (1 + exp(-x))

Classic sigmoid activation function.

source
NNlib.reluFunction.
relu(x) = max(0, x)

Rectified Linear Unit activation function.

source
NNlib.leakyreluFunction.
leakyrelu(x) = max(0.01x, x)

Leaky Rectified Linear Unit activation function. You can also specify the coefficient explicitly, e.g. leakyrelu(x, 0.01).

source
NNlib.eluFunction.
elu(x, α = 1) =
+rnn.state # 60
source

Activation Functions

Non-linearities that go between layers of your model. Most of these functions are defined in NNlib but are available by default in Flux.

Note that, unless otherwise stated, activation functions operate on scalars. To apply them to an array you can call σ.(xs), relu.(xs) and so on.
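For instance:

xs = [-2.0, 0.0, 3.0]
relu.(xs)  # [0.0, 0.0, 3.0]
σ.(xs)     # elementwise sigmoid of each entry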

NNlib.σFunction.
σ(x) = 1 / (1 + exp(-x))

Classic sigmoid activation function.

source
NNlib.reluFunction.
relu(x) = max(0, x)

Rectified Linear Unit activation function.

source
NNlib.leakyreluFunction.
leakyrelu(x) = max(0.01x, x)

Leaky Rectified Linear Unit activation function. You can also specify the coefficient explicitly, e.g. leakyrelu(x, 0.01).

source
NNlib.eluFunction.
elu(x, α = 1) =
   x > 0 ? x : α * (exp(x) - 1)

Exponential Linear Unit activation function. See Fast and Accurate Deep Network Learning by Exponential Linear Units. You can also specify the coefficient explicitly, e.g. elu(x, 1).

source
NNlib.swishFunction.
swish(x) = x * σ(x)

Self-gated activation function. See Swish: a Self-Gated Activation Function.

source

Normalisation & Regularisation

These layers don't affect the structure of the network but may improve training times or reduce overfitting.

Flux.testmode!Function.
testmode!(m)
-testmode!(m, false)

Put layers like Dropout and BatchNorm into testing mode (or back to training mode with false).

source
Flux.BatchNormType.
BatchNorm(channels::Integer, σ = identity;
+testmode!(m, false)

Put layers like Dropout and BatchNorm into testing mode (or back to training mode with false).

source
Flux.BatchNormType.
BatchNorm(channels::Integer, σ = identity;
           initβ = zeros, initγ = ones,
           ϵ = 1e-8, momentum = .1)

Batch Normalization layer. The channels input should be the size of the channel dimension in your data (see below).

Given an array with N dimensions, call the N-1th the channel dimension. (For a batch of feature vectors this is just the data dimension, for WHCN images it's the usual channel dimension.)

BatchNorm computes the mean and variance for each W×H×1×N slice and shifts them to have a new mean and variance (corresponding to the learnable, per-channel bias and scale parameters).

See Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.

Example:

m = Chain(
   Dense(28^2, 64),
   BatchNorm(64, relu),
   Dense(64, 10),
   BatchNorm(10),
-  softmax)
source
Flux.DropoutType.
Dropout(p)

A Dropout layer. For each input, either sets that input to 0 (with probability p) or scales it by 1/(1-p). This is used as a regularisation, i.e. it reduces overfitting during training.

Does nothing to the input once in testmode!.

source
Flux.LayerNormType.
LayerNorm(h::Integer)

A normalisation layer designed to be used with recurrent hidden states of size h. Normalises the mean/stddev of each input before applying a per-neuron gain/bias.

source
+ softmax)
source
Flux.DropoutType.
Dropout(p)

A Dropout layer. For each input, either sets that input to 0 (with probability p) or scales it by 1/(1-p). This is used as a regularisation, i.e. it reduces overfitting during training.

Does nothing to the input once in testmode!.
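As a sketch of how this is typically used (the layer sizes are just an example):

m = Chain(Dense(28^2, 64, relu), Dropout(0.5), Dense(64, 10), softmax)

testmode!(m)         # evaluation: Dropout passes inputs through unchanged
testmode!(m, false)  # back to training mode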

source
Flux.LayerNormType.
LayerNorm(h::Integer)

A normalisation layer designed to be used with recurrent hidden states of size h. Normalises the mean/stddev of each input before applying a per-neuron gain/bias.

source
diff --git a/latest/models/recurrence.html b/latest/models/recurrence.html
index 550b5c7d..d04b07c4 100644
--- a/latest/models/recurrence.html
+++ b/latest/models/recurrence.html
@@ -39,4 +39,4 @@ m = Flux.Recur(rnn, h)
y = m(x)

The Recur wrapper stores the state between runs in the m.state field.

If you use the RNN(10, 5) constructor – as opposed to RNNCell – you'll see that it's simply a wrapped cell.

julia> RNN(10, 5)
 Recur(RNNCell(Dense(15, 5)))

Sequences

Often we want to work with sequences of inputs, rather than individual xs.

seq = [rand(10) for i = 1:10]

With Recur, applying our model to each element of a sequence is trivial:

m.(seq) # returns a list of 5-element vectors

This works even when we've chained recurrent layers into a larger model.

m = Chain(LSTM(10, 15), Dense(15, 5))
-m.(seq)

Truncating Gradients

By default, calculating the gradients in a recurrent layer involves the entire history. For example, if we call the model on 100 inputs, calling back! will calculate the gradient for those 100 calls. If we then calculate another 10 inputs we have to calculate 110 gradients – this accumulates and quickly becomes expensive.

To avoid this we can truncate the gradient calculation, forgetting the history.

truncate!(m)

Calling truncate! wipes the slate clean, so we can call the model with more inputs without building up an expensive gradient computation.

truncate! makes sense when you are working with multiple chunks of a large sequence, but we may also want to work with a set of independent sequences. In this case the hidden state should be completely reset to its original value, throwing away any accumulated information. reset! does this for you.

+m.(seq)

Truncating Gradients

By default, calculating the gradients in a recurrent layer involves its entire history. For example, if we call the model on 100 inputs, we'll have to calculate the gradient for those 100 calls. If we then calculate another 10 inputs we have to calculate 110 gradients – this accumulates and quickly becomes expensive.

To avoid this we can truncate the gradient calculation, forgetting the history.

truncate!(m)

Calling truncate! wipes the slate clean, so we can call the model with more inputs without building up an expensive gradient computation.

truncate! makes sense when you are working with multiple chunks of a large sequence, but we may also want to work with a set of independent sequences. In this case the hidden state should be completely reset to its original value, throwing away any accumulated information. reset! does this for you.
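For example, a sketch of the difference, where chunk1 and chunk2 are hypothetical pieces of one long sequence and fresh is an unrelated sequence:

m = Chain(LSTM(10, 15), Dense(15, 5))

m.(chunk1)
truncate!(m)  # keep the hidden state, drop the gradient history
m.(chunk2)    # continue the same sequence without the old gradients

reset!(m)     # hidden state back to its initial value
m.(fresh)     # start an independent sequence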

diff --git a/latest/search_index.js b/latest/search_index.js index e51bea0d..abba565c 100644 --- a/latest/search_index.js +++ b/latest/search_index.js @@ -45,7 +45,15 @@ var documenterSearchIndex = {"docs": [ "page": "Basics", "title": "Taking Gradients", "category": "section", - "text": "Consider a simple linear regression, which tries to predict an output array y from an input x. (It\'s a good idea to follow this example in the Julia repl.)W = rand(2, 5)\nb = rand(2)\n\npredict(x) = W*x .+ b\nloss(x, y) = sum((predict(x) .- y).^2)\n\nx, y = rand(5), rand(2) # Dummy data\nloss(x, y) # ~ 3To improve the prediction we can take the gradients of W and b with respect to the loss function and perform gradient descent. We could calculate gradients by hand, but Flux will do it for us if we tell it that W and b are trainable parameters.using Flux.Tracker\n\nW = param(W)\nb = param(b)\n\nl = loss(x, y)\n\nback!(l)loss(x, y) returns the same number, but it\'s now a tracked value that records gradients as it goes along. Calling back! then accumulates the gradient of W and b. We can see what this gradient is, and modify W to train the model.using Flux.Tracker: grad, update!\n\nΔ = grad(W)\n\n# Update the parameter and reset the gradient\nupdate!(W, -0.1Δ)\n\nloss(x, y) # ~ 2.5The loss has decreased a little, meaning that our prediction x is closer to the target y. If we have some data we can already try training the model.All deep learning in Flux, however complex, is a simple generalisation of this example. Of course, models can look very different – they might have millions of parameters or complex control flow, and there are ways to manage this complexity. Let\'s see what that looks like." + "text": "Flux\'s core feature is taking gradients of Julia code. The gradient function takes another Julia function f and a set of arguments, and returns the gradient with respect to each argument. (It\'s a good idea to try pasting these examples in the Julia terminal.)using Flux.Tracker\n\nf(x) = 3x^2 + 2x + 1\n\n# df/dx = 6x + 2\nf′(x) = Tracker.gradient(f, x)[1]\n\nf′(2) # 14.0 (tracked)\n\n# d²f/dx² = 6\nf′′(x) = Tracker.gradient(f′, x)[1]\n\nf′′(2) # 6.0 (tracked)(We\'ll learn more about why these numbers show up as (tracked) below.)When a function has many parameters, we can pass them all in explicitly:f(W, b, x) = W * x + b\n\nTracker.gradient(f, 2, 3, 4)\n(4.0 (tracked), 1.0, 2.0 (tracked))But machine learning models can have hundreds of parameters! Flux offers a nice way to handle this. We can tell Flux to treat something as a parameter via param. Then we can collect these together and tell gradient to collect the gradients of all of them at once.W = param(2) # 2.0 (tracked)\nb = param(3) # 3.0 (tracked)\n\nf(x) = W * x + b\n\nparams = Params([W, b])\ngrads = Tracker.gradient(() -> f(4), params)\n\ngrads[W] # 4.0\ngrads[b] # 1.0There are a few things to notice here. Firstly, W and b now show up as tracked. Tracked things behave like normal numbers or arrays, but keep records of everything you do with them, allowing Flux to calculate their gradients. gradient takes a zero-argument function; no arguments are necessary because the Params tell it what to differentiate.This will come in really handy when dealing with big, complicated models. For now, though, let\'s start with something simple." 
+}, + +{ + "location": "models/basics.html#Simple-Models-1", + "page": "Basics", + "title": "Simple Models", + "category": "section", + "text": "Consider a simple linear regression, which tries to predict an output array y from an input x.W = rand(2, 5)\nb = rand(2)\n\npredict(x) = W*x .+ b\n\nfunction loss(x, y)\n ŷ = predict(x)\n sum((y .- ŷ).^2)\nend\n\nx, y = rand(5), rand(2) # Dummy data\nloss(x, y) # ~ 3To improve the prediction we can take the gradients of W and b with respect to the loss and perform gradient descent. Let\'s tell Flux that W and b are parameters, just like we did above.using Flux.Tracker\n\nW = param(W)\nb = param(b)\n\ngs = Tracker.gradient(() -> loss(x, y), Params([W, b]))Now that we have gradients, we can pull them out and update W to train the model. The update!(W, Δ) function applies W = W + Δ, which we can use for gradient descent.using Flux.Tracker: update!\n\nΔ = gs[W]\n\n# Update the parameter and reset the gradient\nupdate!(W, -0.1Δ)\n\nloss(x, y) # ~ 2.5The loss has decreased a little, meaning that our prediction x is closer to the target y. If we have some data we can already try training the model.All deep learning in Flux, however complex, is a simple generalisation of this example. Of course, models can look very different – they might have millions of parameters or complex control flow. Let\'s see how Flux handles more complex models." }, { @@ -117,7 +125,7 @@ var documenterSearchIndex = {"docs": [ "page": "Recurrence", "title": "Truncating Gradients", "category": "section", - "text": "By default, calculating the gradients in a recurrent layer involves the entire history. For example, if we call the model on 100 inputs, calling back! will calculate the gradient for those 100 calls. If we then calculate another 10 inputs we have to calculate 110 gradients – this accumulates and quickly becomes expensive.To avoid this we can truncate the gradient calculation, forgetting the history.truncate!(m)Calling truncate! wipes the slate clean, so we can call the model with more inputs without building up an expensive gradient computation.truncate! makes sense when you are working with multiple chunks of a large sequence, but we may also want to work with a set of independent sequences. In this case the hidden state should be completely reset to its original value, throwing away any accumulated information. reset! does this for you." + "text": "By default, calculating the gradients in a recurrent layer involves its entire history. For example, if we call the model on 100 inputs, we\'ll have to calculate the gradient for those 100 calls. If we then calculate another 10 inputs we have to calculate 110 gradients – this accumulates and quickly becomes expensive.To avoid this we can truncate the gradient calculation, forgetting the history.truncate!(m)Calling truncate! wipes the slate clean, so we can call the model with more inputs without building up an expensive gradient computation.truncate! makes sense when you are working with multiple chunks of a large sequence, but we may also want to work with a set of independent sequences. In this case the hidden state should be completely reset to its original value, throwing away any accumulated information. reset! does this for you." }, { @@ -317,7 +325,7 @@ var documenterSearchIndex = {"docs": [ "page": "Optimisers", "title": "Optimisers", "category": "section", - "text": "Consider a simple linear regression. 
We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters W and b.W = param(rand(2, 5))\nb = param(rand(2))\n\npredict(x) = W*x .+ b\nloss(x, y) = sum((predict(x) .- y).^2)\n\nx, y = rand(5), rand(2) # Dummy data\nl = loss(x, y) # ~ 3\nback!(l)We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here\'s one way to do that:using Flux.Tracker: grad, update!\n\nfunction sgd()\n η = 0.1 # Learning Rate\n for p in (W, b)\n update!(p, -η * grad(p))\n end\nendIf we call sgd, the parameters W and b will change and our loss should go down.There are two pieces here: one is that we need a list of trainable parameters for the model ([W, b] in this case), and the other is the update step. In this case the update is simply gradient descent (x .-= η .* Δ), but we might choose to do something more advanced, like adding momentum.In this case, getting the variables is trivial, but you can imagine it\'d be more of a pain with some complex stack of layers.m = Chain(\n Dense(10, 5, σ),\n Dense(5, 2), softmax)Instead of having to write [m[1].W, m[1].b, ...], Flux provides a params function params(m) that returns a list of all parameters in the model for you.For the update step, there\'s nothing whatsoever wrong with writing the loop above – it\'ll work just fine – but Flux provides various optimisers that make it more convenient.opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1\n\nopt() # Carry out the update, modifying `W` and `b`.An optimiser takes a parameter list and returns a function that does the same thing as update above. We can pass either opt or update to our training loop, which will then run the optimiser after every mini-batch of data." + "text": "Consider a simple linear regression. We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters W and b.using Flux.Tracker\n\nW = param(rand(2, 5))\nb = param(rand(2))\n\npredict(x) = W*x .+ b\nloss(x, y) = sum((predict(x) .- y).^2)\n\nx, y = rand(5), rand(2) # Dummy data\nl = loss(x, y) # ~ 3\n\nparams = Params([W, b])\ngrads = Tracker.gradient(() -> loss(x, y), params)We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here\'s one way to do that:using Flux.Tracker: grad, update!\n\nfunction sgd()\n η = 0.1 # Learning Rate\n for p in (W, b)\n update!(p, -η * grads[p])\n end\nendIf we call sgd, the parameters W and b will change and our loss should go down.There are two pieces here: one is that we need a list of trainable parameters for the model ([W, b] in this case), and the other is the update step. In this case the update is simply gradient descent (x .-= η .* Δ), but we might choose to do something more advanced, like adding momentum.In this case, getting the variables is trivial, but you can imagine it\'d be more of a pain with some complex stack of layers.m = Chain(\n Dense(10, 5, σ),\n Dense(5, 2), softmax)Instead of having to write [m[1].W, m[1].b, ...], Flux provides a params function params(m) that returns a list of all parameters in the model for you.For the update step, there\'s nothing whatsoever wrong with writing the loop above – it\'ll work just fine – but Flux provides various optimisers that make it more convenient.opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1\n\nopt() # Carry out the update, modifying `W` and `b`.An optimiser takes a parameter list and returns a function that does the same thing as update above. 
We can pass either opt or update to our training loop, which will then run the optimiser after every mini-batch of data." }, { @@ -485,15 +493,23 @@ var documenterSearchIndex = {"docs": [ "page": "Backpropagation", "title": "Flux.Tracker", "category": "section", - "text": "Backpropagation, or reverse-mode automatic differentiation, is handled by the Flux.Tracker module.julia> using Flux.TrackerThe param function converts a normal Julia array into a new object that, while behaving like an array, tracks extra information that allows us to calculate derivatives. For example, say we multiply two parameters:julia> W = param([1 2; 3 4])\nTracked 2×2 Array{Float64,2}:\n 1.0 2.0\n 3.0 4.0\n\njulia> x = param([5, 6])\nTracked 2-element Array{Float64,1}:\n 5.0\n 6.0\n\njulia> y = W*x\nTracked 2-element Array{Float64,1}:\n 17.0\n 39.0The output y is also a TrackedArray object. We can now backpropagate sensitivities to W and x via the back! function, and see the gradients accumulated in the W and x tracked arrays:julia> Tracker.back!(y, [1, -1])\n\njulia> W.grad\n2×2 Array{Float64,2}:\n 5.0 6.0\n-5.0 -6.0\n\njulia> x.grad\n2-element Array{Float64,1}:\n -2.0\n -2.0" + "text": "Backpropagation, or reverse-mode automatic differentiation, is handled by the Flux.Tracker module.julia> using Flux.TrackerHere we discuss some more advanced uses of this module, as well as covering its internals." }, { - "location": "internals/tracker.html#Internals-1", + "location": "internals/tracker.html#Taking-Gradients-1", "page": "Backpropagation", - "title": "Internals", + "title": "Taking Gradients", "category": "section", - "text": "All Tracked* objects (TrackedArray, TrackedReal) are light wrappers around the Tracked type, which you can access via the .tracker field.julia> x.tracker\nFlux.Tracker.Tracked{Array{Float64,1}}(0x00000000, Flux.Tracker.Call{Void,Tuple{}}(nothing, ()), true, [5.0, 6.0], [-2.0, -2.0])The Tracker stores the value and gradient of a given object, which we\'ve seen before.julia> x.tracker.data\n2-element Array{Float64,1}:\n 5.0\n 6.0\n\njulia> x.tracker.grad\n2-element Array{Float64,1}:\n -2.0\n -2.0The tracker also contains a Call object, which simply represents a function call that was made at some point during the forward pass. For example, the + call would look like this:julia> Tracker.Call(+, 1, 2)\nFlux.Tracker.Call{Base.#+,Tuple{Int64,Int64}}(+, (1, 2))In the case of the y we produced above, we can see that it stores the call that produced it – that is, W*x.julia> y.tracker.f\nFlux.Tracker.Call{...}(*, (param([1.0 2.0; 3.0 4.0]), param([5.0, 6.0])))Notice that because the arguments to the call may also be tracked arrays, storing their own calls, this means that Tracker ends up forming a data structure that records everything that happened during the forward pass (often known as a tape).When we call back!(y, [1, -1]), the sensitivities [1, -1] simply get forwarded to y\'s call (*), effectively callingTracker.back(*, [1, -1], W, x)which in turn calculates the sensitivities of the arguments (W and x) and backpropagates through their calls. This is recursive, so it will walk the entire program graph and propagate gradients to the original model parameters." 
+ "text": "In the basics section we covered basic usage of the gradient function.using Flux.Tracker\n\nTracker.gradient((a, b) -> a*b, 2, 3) # (3.0 (tracked), 2.0 (tracked))gradient is actually just a thin wrapper around the backpropagator-based interface, forward.using Flux.Tracker: forward\n\ny, back = forward((a, b) -> a*b, 2, 3) # (6.0 (tracked), Flux.Tracker.#9)\n\nback(1) # (3.0 (tracked), 2.0 (tracked))The forward function returns two results. The first, y, is the original value of the function (perhaps with tracking applied). The second, back, is a new function which, given a sensitivity, returns the sensitivity of the inputs to forward (we call this a \"backpropagator\"). One use of this interface is to provide custom sensitivities when outputs are not scalar.julia> y, back = forward((a, b) -> a.*b, [1,2,3],[4,5,6])\n(param([4.0, 10.0, 18.0]), Flux.Tracker.#9)\n\njulia> back([1,1,1])\n(param([4.0, 5.0, 6.0]), param([1.0, 2.0, 3.0]))We can also take gradients in-place. This can be useful if you only care about first-order gradients.a, b = param(2), param(3)\n\nc = a*b # 6.0 (tracked)\n\nTracker.back!(c)\n\nTracker.grad(a), Tracker.grad(b) # (3.0, 2.0)" +}, + +{ + "location": "internals/tracker.html#Tracked-Arrays-1", + "page": "Backpropagation", + "title": "Tracked Arrays", + "category": "section", + "text": "The param function converts a normal Julia array into a new object that, while behaving like an array, tracks extra information that allows us to calculate derivatives. For example, say we multiply two parameters:julia> W = param([1 2; 3 4])\nTracked 2×2 Array{Float64,2}:\n 1.0 2.0\n 3.0 4.0\n\njulia> x = param([5, 6])\nTracked 2-element Array{Float64,1}:\n 5.0\n 6.0\n\njulia> y = W*x\nTracked 2-element Array{Float64,1}:\n 17.0\n 39.0The output y is also a TrackedArray object. We can now backpropagate sensitivities to W and x via the back! function, and see the gradients accumulated in the W and x tracked arrays:julia> Tracker.back!(y, [1, -1])\n\njulia> W.grad\n2×2 Array{Float64,2}:\n 5.0 6.0\n-5.0 -6.0\n\njulia> x.grad\n2-element Array{Float64,1}:\n -2.0\n -2.0You may sometimes want to drop derivative information and just get the plain value back. You can do this by calling Tracker.data(W)." }, { @@ -501,15 +517,15 @@ var documenterSearchIndex = {"docs": [ "page": "Backpropagation", "title": "Custom Gradients", "category": "section", - "text": "We can hook in to the processes above to implement custom gradients for a function or kernel. For a toy example, imagine a custom implementation of minus:julia> minus(a, b) = a - bFirstly, we must tell the tracker system to stop when it sees a call to minus, and record it. We can do this using dispatch:julia> minus(a::TrackedArray, b::TrackedArray) = Tracker.track(minus, a, b)\nminus (generic function with 2 methods)Tracker.track does two things: (1) it makes sure minus is called with normal array, not tracked ones (you can use @show inside minus to verify this), and (2) it uses the result to add a minus node to the tape. 
Look inside the result of calling minus to see what happened:julia> a, b = param([6,5,4]), param([1,2,3])\n(param([6.0, 5.0, 4.0]), param([1.0, 2.0, 3.0]))\n\njulia> c = minus(a, b)\nTracked 3-element Array{Float64,1}:\n 5.0\n 3.0\n 1.0\n\njulia> c.tracker.f\nFlux.Tracker.Call{...}(minus, (param([6.0, 5.0, 4.0]), param([1.0, 2.0, 3.0])))Finally, we have to specify the gradient of minus.julia> Tracker.back(::typeof(minus), Δ, a, b) =\n (Tracker.@back(a, Δ); Tracker.@back(b, -Δ))@back(x, Δ) tells the tracker to continue propagating the sensitivity Δ through x. Now, AD will work with any program that calls minus.julia> Flux.back!(c, 1)\n\njulia> a.grad\n3-element Array{Float64,1}:\n 1.0\n 1.0\n 1.0\n\njulia> b.grad\n3-element Array{Float64,1}:\n -1.0\n -1.0\n -1.0" + "text": "We can hook in to the processes above to implement custom gradients for a function or kernel. For a toy example, imagine a custom implementation of minus:minus(a, b) = a - bFirstly, we must tell the tracker system to stop when it sees a call to minus, and record it. We can do this using dispatch:using Flux.Tracker: TrackedReal, track, @grad\n\nminus(a::TrackedArray, b::TrackedArray) = Tracker.track(minus, a, b)track takes care of building a new Tracked object and recording the operation on the tape. We just need to provide a gradient definition.@grad function minus(a, b)\n return minus(data(a),data(b)), Δ -> (Δ, -Δ)\nendThis is essentially just a way of overloading the forward function we saw above. We strip tracking from a and b so that we are calling the original definition of minus (otherwise, we\'d just try to track the call again and hit an infinite regress).Note that in the backpropagator we don\'t call data(a); we do in fact want to track this, since nest AD will take a derivative through the backpropagator itself. For example, the gradient of * might look like this.@grad a * b = data(a)*data(b), Δ -> (Δ*b, a*Δ)For multi-argument functions with custom gradients, you likely want to catch not just minus(::TrackedArray, ::TrackedArray) but also minus(::Array, TrackedArray) and so on. To do so, just define those extra signatures as needed:minus(a::AbstractArray, b::TrackedArray) = Tracker.track(minus, a, b)\nminus(a::TrackedArray, b::AbstractArray) = Tracker.track(minus, a, b)" }, { - "location": "internals/tracker.html#Notes-1", + "location": "internals/tracker.html#Tracked-Internals-1", "page": "Backpropagation", - "title": "Notes", + "title": "Tracked Internals", "category": "section", - "text": "For multi-argument functions with custom gradients, you likely want to catch not just minus(::TrackedArray, ::TrackedArray) but also minus(::Array, TrackedArray) and so on. To do so, just define those extra signatures as needed:minus(a::AbstractArray, b::TrackedArray) = Tracker.track(minus, a, b)\nminus(a::TrackedArray, b::AbstractArray) = Tracker.track(minus, a, b)@back must be called exactly once on each tracked input argument. You do not need to do any special handling if one of the arguments is not tracked, as @back will just become a no-op." 
+ "text": "All Tracked* objects (TrackedArray, TrackedReal) are light wrappers around the Tracked type, which you can access via the .tracker field.julia> x.tracker\nFlux.Tracker.Tracked{Array{Float64,1}}(0x00000000, Flux.Tracker.Call{Void,Tuple{}}(nothing, ()), true, [5.0, 6.0], [-2.0, -2.0])The Tracker stores the gradient of a given object, which we\'ve seen before.julia> x.tracker.grad\n2-element Array{Float64,1}:\n -2.0\n -2.0The tracker also contains a Call object, which simply represents a function call that was made at some point during the forward pass. For example, the + call would look like this:julia> Tracker.Call(+, 1, 2)\nFlux.Tracker.Call{Base.#+,Tuple{Int64,Int64}}(+, (1, 2))In the case of the y we produced above, we can see that it stores the call that produced it – that is, W*x.julia> y.tracker.f\nFlux.Tracker.Call{...}(*, (param([1.0 2.0; 3.0 4.0]), param([5.0, 6.0])))Notice that because the arguments to the call may also be tracked arrays, storing their own calls, this means that Tracker ends up forming a data structure that records everything that happened during the forward pass (often known as a tape).When we call back!(y, [1, -1]), the sensitivities [1, -1] simply get forwarded to y\'s call (*), effectively callingTracker.back(*, [1, -1], W, x)which in turn calculates the sensitivities of the arguments (W and x) and back-propagates through their calls. This is recursive, so it will walk the entire program graph and propagate gradients to the original model parameters." }, { diff --git a/latest/training/optimisers.html b/latest/training/optimisers.html index c77e07a9..3c6cb6db 100644 --- a/latest/training/optimisers.html +++ b/latest/training/optimisers.html @@ -6,7 +6,9 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) ga('create', 'UA-36890222-9', 'auto'); ga('send', 'pageview'); -

Optimisers

Optimisers

Consider a simple linear regression. We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters W and b.

W = param(rand(2, 5))
+

Optimisers

Optimisers

Consider a simple linear regression. We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters W and b.

using Flux.Tracker
+
+W = param(rand(2, 5))
 b = param(rand(2))
 
 predict(x) = W*x .+ b
@@ -14,15 +16,17 @@ loss(x, y) = sum((predict(x) .- y).^2)
 
 x, y = rand(5), rand(2) # Dummy data
 l = loss(x, y) # ~ 3
-back!(l)

We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:

using Flux.Tracker: grad, update!
+
+params = Params([W, b])
+grads = Tracker.gradient(() -> loss(x, y), params)

We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:

using Flux.Tracker: grad, update!
 
 function sgd()
   η = 0.1 # Learning Rate
   for p in (W, b)
-    update!(p, -η * grad(p))
+    update!(p, -η * grads[p])
   end
 end

If we call sgd, the parameters W and b will change and our loss should go down.

There are two pieces here: one is that we need a list of trainable parameters for the model ([W, b] in this case), and the other is the update step. In this case the update is simply gradient descent (x .-= η .* Δ), but we might choose to do something more advanced, like adding momentum.

In this case, getting the variables is trivial, but you can imagine it'd be more of a pain with some complex stack of layers.

m = Chain(
   Dense(10, 5, σ),
   Dense(5, 2), softmax)

Instead of having to write [m[1].W, m[1].b, ...], Flux provides a params function params(m) that returns a list of all parameters in the model for you.
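For instance, with the Chain defined just above:

ps = params(m)  # collects the W and b of both Dense layers
length(ps)      # 4 parameter arrays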

For the update step, there's nothing whatsoever wrong with writing the loop above – it'll work just fine – but Flux provides various optimisers that make it more convenient.

opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1
 
-opt() # Carry out the update, modifying `W` and `b`.

An optimiser takes a parameter list and returns a function that does the same thing as update above. We can pass either opt or update to our training loop, which will then run the optimiser after every mini-batch of data.

Optimiser Reference

All optimisers return a function that, when called, will update the parameters passed to it.

Flux.Optimise.SGDFunction.
SGD(params, η = 0.1; decay = 0)

Classic gradient descent optimiser with learning rate η. For each parameter p and its gradient δp, this runs p -= η*δp.

Supports inverse decaying learning rate if the decay argument is provided.

source
Momentum(params, η = 0.01; ρ = 0.9, decay = 0)

SGD with learning rate η, momentum ρ and optional learning rate inverse decay.

source
Nesterov(params, η = 0.01; ρ = 0.9, decay = 0)

SGD with learning rate η, Nesterov momentum ρ and optional learning rate inverse decay.

source
Flux.Optimise.ADAMFunction.
ADAM(params, η = 0.001; β1 = 0.9, β2 = 0.999, ϵ = 1e-08, decay = 0)

ADAM optimiser.

source
+opt() # Carry out the update, modifying `W` and `b`.

An optimiser takes a parameter list and returns a function that does the same thing as update above. We can pass either opt or update to our training loop, which will then run the optimiser after every mini-batch of data.

Optimiser Reference

All optimisers return a function that, when called, will update the parameters passed to it.

Flux.Optimise.SGDFunction.
SGD(params, η = 0.1; decay = 0)

Classic gradient descent optimiser with learning rate η. For each parameter p and its gradient δp, this runs p -= η*δp.

Supports inverse decaying learning rate if the decay argument is provided.

source
Momentum(params, η = 0.01; ρ = 0.9, decay = 0)

SGD with learning rate η, momentum ρ and optional learning rate inverse decay.

source
Nesterov(params, η = 0.01; ρ = 0.9, decay = 0)

SGD with learning rate η, Nesterov momentum ρ and optional learning rate inverse decay.

source
Flux.Optimise.ADAMFunction.
ADAM(params, η = 0.001; β1 = 0.9, β2 = 0.999, ϵ = 1e-08, decay = 0)

ADAM optimiser.

source
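As a brief illustration of this interface, here's a sketch that swaps ADAM in for the SGD example above (assuming W and b are defined as before):

opt = ADAM([W, b])  # default learning rate η = 0.001
opt()               # one update step, just like opt() with SGD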