From 647d6341bffd0509d2766f1260f0a90485da9ff2 Mon Sep 17 00:00:00 2001
From: autodocs
Date: Tue, 21 Nov 2017 11:32:59 +0000
Subject: [PATCH] build based on 187fddc

---
 latest/models/layers.html       |  7 +++---
 latest/search_index.js          | 44 +++++++++++++++++++++++++++++++--
 latest/training/optimisers.html |  8 +-----
 3 files changed, 47 insertions(+), 12 deletions(-)

diff --git a/latest/models/layers.html b/latest/models/layers.html
index 898d87a9..3f00a300 100644
--- a/latest/models/layers.html
+++ b/latest/models/layers.html
@@ -11,16 +11,17 @@
m(5) == 26
m = Chain(Dense(10, 5), Dense(5, 2))
x = rand(10)
m(x) == m[2](m[1](x))

Chain also supports indexing and slicing, e.g. m[2] or m[1:end-1]. m[1:3](x) will calculate the output of the first three layers.

source
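
To make the slicing behaviour concrete, here is a small sketch that continues the example above (m and x as defined there):

m[1](x)      # output of the first Dense layer, a length-5 tracked vector
m[1:1](x)    # the same result, obtained via slicing
m[1:end-1]   # a Chain containing every layer except the last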
Flux.DenseType.
Dense(in::Integer, out::Integer, σ = identity)

Creates a traditional Dense layer with parameters W and b.

y = σ.(W * x .+ b)

The input x must be a vector of length in, or a batch of vectors represented as an in × N matrix. The output y will be a vector or batch of length out.

julia> d = Dense(5, 2)
 Dense(5, 2)
 
 julia> d(rand(5))
 Tracked 2-element Array{Float64,1}:
   0.00257447
  -0.00449443
source
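
Dense layers also accept a batch of inputs as an in × N matrix, as noted above; a minimal sketch (sizes chosen arbitrarily):

d = Dense(5, 2)
xs = rand(5, 10)   # a batch of 10 inputs, one per column
d(xs)              # a 2×10 tracked matrix, one output column per input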

Recurrent Layers

Much like the core layers above, but can be used to process sequence data (as well as other kinds of structured data).

Flux.RNNFunction.
RNN(in::Integer, out::Integer, σ = tanh)

The most basic recurrent layer; essentially acts as a Dense layer, but with the output fed back into the input each time step.

source
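
For example, a small sketch of feeding a sequence through an RNN layer, using the same broadcasting style as the Recur example below:

m = RNN(10, 5)
seq = [rand(10) for i = 1:8]   # a sequence of eight length-10 inputs
ys = m.(seq)                   # eight length-5 outputs; the hidden state carries across steps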
Flux.LSTMFunction.
LSTM(in::Integer, out::Integer, σ = tanh)

Long Short Term Memory recurrent layer. Behaves like an RNN but generally exhibits a longer memory span over sequences.

See this article for a good overview of the internals.

source
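
Usage mirrors RNN; a minimal sketch:

m = LSTM(10, 5)
m(rand(10))   # a length-5 output; the cell and hidden state are kept internally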
Flux.RecurType.
Recur(cell)

Recur takes a recurrent cell and makes it stateful, managing the hidden state in the background. cell should be a model of the form:

h, y = cell(h, x...)

For example, here's a recurrent network that keeps a running total of its inputs.

accum(h, x) = (h+x, x)
 rnn = Flux.Recur(accum, 0)
 rnn(2) # 2
 rnn(3) # 3
 rnn.state # 5
 rnn.(1:10) # apply to a sequence
rnn.state # 60
source

Activation Functions

Non-linearities that go between layers of your model. Most of these functions are defined in NNlib but are available by default in Flux.

Note that, unless otherwise stated, activation functions operate on scalars. To apply them to an array you can call σ.(xs), relu.(xs) and so on.
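
For instance (a small sketch):

xs = randn(5)
σ.(xs)      # elementwise sigmoid
relu.(xs)   # elementwise rectifier
swish.(xs)  # elementwise swish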

NNlib.σFunction.
σ(x) = 1 / (1 + exp(-x))

Classic sigmoid activation function.

source
NNlib.reluFunction.
relu(x) = max(0, x)

Rectified Linear Unit activation function.

source
NNlib.leakyreluFunction.
leakyrelu(x) = max(0.01x, x)

Leaky Rectified Linear Unit activation function.

You can also specify the coefficient explicitly, e.g. leakyrelu(x, 0.01).

source
NNlib.eluFunction.
elu(x; α = 1) = x > 0 ? x : α * (exp(x) - one(x))

Exponential Linear Unit activation function. See Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs).

source
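
A quick numeric check of the definition above (values rounded):

elu(1.0)    # 1.0, positive inputs pass through unchanged
elu(-1.0)   # ≈ -0.632, i.e. α * (exp(-1) - 1) with the default α = 1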
NNlib.swishFunction.
swish(x) = x * σ(x)

Self-gated activation function.

See Swish: a Self-Gated Activation Function.

source

Normalisation & Regularisation

These layers don't affect the structure of the network but may improve training times or reduce overfitting.

Flux.testmode!Function.
testmode!(m)
+testmode!(m, false)

Put layers like Dropout and BatchNorm into testing mode (or back to training mode with false).

source
Flux.DropoutType.
Dropout(p)

A Dropout layer. For each input, either sets that input to 0 (with probability p) or scales it by 1/(1-p). This is used as a regularisation, i.e. it reduces overfitting during training.

Does nothing to the input once in testmode!.

source
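
Putting Dropout and testmode! together, a hedged sketch of switching dropout off for evaluation (if testmode! does not recurse through a Chain in your version, call it on the Dropout layer directly):

x = rand(10)
m = Chain(Dense(10, 5, relu), Dropout(0.5), Dense(5, 2))

m(x)                      # training mode: each hidden activation is dropped with probability 0.5
Flux.testmode!(m)         # put Dropout into testing mode
m(x)                      # now deterministic, Dropout passes its input through unchanged
Flux.testmode!(m, false)  # back to training mode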
diff --git a/latest/search_index.js b/latest/search_index.js
index d4b6ce75..837d68b6 100644
--- a/latest/search_index.js
+++ b/latest/search_index.js
@@ -224,6 +224,14 @@ var documenterSearchIndex = {"docs": [
    "text": "Non-linearities that go between layers of your model. Most of these functions are defined in NNlib but are available by default in Flux.Note that, unless otherwise stated, activation functions operate on scalars. To apply them to an array you can call σ.(xs), relu.(xs) and so on.σ\nrelu\nleakyrelu\nelu\nswish"
},

+{
+    "location": "models/layers.html#Flux.testmode!",
+    "page": "Model Reference",
+    "title": "Flux.testmode!",
+    "category": "Function",
+    "text": "testmode!(m)\ntestmode!(m, false)\n\nPut layers like Dropout and BatchNorm into testing mode (or back to training mode with false).\n\n\n\n"
+},
+
{
    "location": "models/layers.html#Flux.Dropout",
    "page": "Model Reference",
@@ -237,7 +245,7 @@ var documenterSearchIndex = {"docs": [
    "page": "Model Reference",
    "title": "Normalisation & Regularisation",
    "category": "section",
-    "text": "These layers don't affect the structure of the network but may improve training times or reduce overfitting.Dropout"
+    "text": "These layers don't affect the structure of the network but may improve training times or reduce overfitting.Flux.testmode!\nDropout"
},

{
@@ -256,12 +264,44 @@ var documenterSearchIndex = {"docs": [
    "text": "Consider a simple linear regression. We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters W and b.W = param(rand(2, 5))\nb = param(rand(2))\n\npredict(x) = W*x .+ b\nloss(x, y) = sum((predict(x) .- y).^2)\n\nx, y = rand(5), rand(2) # Dummy data\nl = loss(x, y) # ~ 3\nback!(l)We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:function update()\n η = 0.1 # Learning Rate\n for p in (W, b)\n p.data .-= η .* p.grad # Apply the update\n p.grad .= 0 # Clear the gradient\n end\nendIf we call update, the parameters W and b will change and our loss should go down.There are two pieces here: one is that we need a list of trainable parameters for the model ([W, b] in this case), and the other is the update step. In this case the update is simply gradient descent (x .-= η .* Δ), but we might choose to do something more advanced, like adding momentum.In this case, getting the variables is trivial, but you can imagine it'd be more of a pain with some complex stack of layers.m = Chain(\n Dense(10, 5, σ),\n Dense(5, 2), softmax)Instead of having to write [m[1].W, m[1].b, ...], Flux provides a params function params(m) that returns a list of all parameters in the model for you.For the update step, there's nothing whatsoever wrong with writing the loop above – it'll work just fine – but Flux provides various optimisers that make it more convenient.opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1\n\nopt() # Carry out the update, modifying `W` and `b`.An optimiser takes a parameter list and returns a function that does the same thing as update above. We can pass either opt or update to our training loop, which will then run the optimiser after every mini-batch of data."
},

+{
+    "location": "training/optimisers.html#Flux.Optimise.SGD",
+    "page": "Optimisers",
+    "title": "Flux.Optimise.SGD",
+    "category": "Function",
+    "text": "SGD(params, η = 1; decay = 0)\n\nClassic gradient descent optimiser. For each parameter p and its gradient δp, this runs p -= η*δp.\n\nSupports decayed learning rate decay if the decay argument is provided.\n\n\n\n"
+},
+
+{
+    "location": "training/optimisers.html#Flux.Optimise.Momentum",
+    "page": "Optimisers",
+    "title": "Flux.Optimise.Momentum",
+    "category": "Function",
+    "text": "Momentum(params, ρ, decay = 0)\n\nSGD with momentum ρ and optional learning rate decay.\n\n\n\n"
+},
+
+{
+    "location": "training/optimisers.html#Flux.Optimise.Nesterov",
+    "page": "Optimisers",
+    "title": "Flux.Optimise.Nesterov",
+    "category": "Function",
+    "text": "Nesterov(params, ρ, decay = 0)\n\nSGD with Nesterov momentum ρ and optional learning rate decay.\n\n\n\n"
+},
+
+{
+    "location": "training/optimisers.html#Flux.Optimise.ADAM",
+    "page": "Optimisers",
+    "title": "Flux.Optimise.ADAM",
+    "category": "Function",
+    "text": "ADAM(params; η = 0.001, β1 = 0.9, β2 = 0.999, ϵ = 1e-08, decay = 0)\n\nADAM optimiser.\n\n\n\n"
+},
+
{
    "location": "training/optimisers.html#Optimiser-Reference-1",
    "page": "Optimisers",
    "title": "Optimiser Reference",
    "category": "section",
-    "text": "All optimisers return a function that, when called, will update the parameters passed to it.SGD\nMomentum\nNesterov\nRMSProp\nADAM\nADAGrad\nADADelta"
+    "text": "All optimisers return a function that, when called, will update the parameters passed to it.SGD\nMomentum\nNesterov\nADAM"
},

{

diff --git a/latest/training/optimisers.html b/latest/training/optimisers.html
index 19e97ab8..b1a7430e 100644
--- a/latest/training/optimisers.html
+++ b/latest/training/optimisers.html
@@ -24,10 +24,4 @@ end

If we call update, the parameters W and b will change and our loss should go down.

There are two pieces here: one is that we need a list of trainable parameters for the model ([W, b] in this case), and the other is the update step. In this case the update is simply gradient descent (x .-= η .* Δ), but we might choose to do something more advanced, like adding momentum.

In this case, getting the variables is trivial, but you can imagine it'd be more of a pain with some complex stack of layers.

m = Chain(
  Dense(10, 5, σ),
  Dense(5, 2), softmax)

Instead of having to write [m[1].W, m[1].b, ...], Flux provides a params function params(m) that returns a list of all parameters in the model for you.
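
For the model m defined just above, a quick sketch:

ps = params(m)   # collects W and b from both Dense layers, four parameter arrays in all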

For the update step, there's nothing whatsoever wrong with writing the loop above – it'll work just fine – but Flux provides various optimisers that make it more convenient.

opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1
 
opt() # Carry out the update, modifying `W` and `b`.

An optimiser takes a parameter list and returns a function that does the same thing as update above. We can pass either opt or update to our training loop, which will then run the optimiser after every mini-batch of data.
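
A minimal sketch of such a loop, with data standing in for any iterator of (x, y) pairs and loss, W and b as defined earlier on this page:

opt = SGD([W, b], 0.1)

for (x, y) in data
  l = loss(x, y)
  back!(l)   # accumulate gradients into W.grad and b.grad
  opt()      # carry out the update, just as update() did above
end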

Optimiser Reference

All optimisers return a function that, when called, will update the parameters passed to it.

Flux.Optimise.SGDFunction.
SGD(params, η = 1; decay = 0)

Classic gradient descent optimiser. For each parameter p and its gradient δp, this runs p -= η*δp.

Supports a decayed learning rate if the decay argument is provided.

source
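
For example (a sketch; the decay value is arbitrary and W, b are the parameters from the example above):

opt = SGD([W, b], 0.1)                # fixed learning rate of 0.1
opt = SGD([W, b], 0.1, decay = 1e-4)  # additionally decay the learning rate over time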
Flux.Optimise.MomentumFunction.
Momentum(params, ρ, decay = 0)

SGD with momentum ρ and optional learning rate decay.

source
Flux.Optimise.NesterovFunction.
Nesterov(params, ρ, decay = 0)

SGD with Nesterov momentum ρ and optional learning rate decay.

source
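
Both are constructed the same way; a sketch with an arbitrary ρ, reusing the parameters from above:

opt = Momentum([W, b], 0.9)   # momentum ρ = 0.9
opt = Nesterov([W, b], 0.9)   # Nesterov variant with the same ρ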
Flux.Optimise.ADAMFunction.
ADAM(params; η = 0.001, β1 = 0.9, β2 = 0.999, ϵ = 1e-08, decay = 0)

ADAM optimiser.

source
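
For example, with the default hyperparameters (a sketch reusing the parameters from above):

opt = ADAM([W, b])   # η = 0.001, β1 = 0.9, β2 = 0.999 by default
opt()                # call after back! to apply the update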