build based on 187fddc

autodocs 2017-11-21 11:32:59 +00:00
parent 0e63d56ef8
commit 647d6341bf
3 changed files with 47 additions and 12 deletions

File diff suppressed because one or more lines are too long


@@ -224,6 +224,14 @@ var documenterSearchIndex = {"docs": [
"text": "Non-linearities that go between layers of your model. Most of these functions are defined in NNlib but are available by default in Flux.Note that, unless otherwise stated, activation functions operate on scalars. To apply them to an array you can call σ.(xs), relu.(xs) and so on.σ\nrelu\nleakyrelu\nelu\nswish"
},
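As a minimal, hedged sketch of the broadcasting call mentioned in the entry above (the input array is arbitrary):

    using Flux
    xs = randn(5)
    σ.(xs)          # sigmoid applied elementwise
    relu.(xs)       # relu applied elementwise
    leakyrelu.(xs)  # the same pattern works for the other scalar activations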
{
"location": "models/layers.html#Flux.testmode!",
"page": "Model Reference",
"title": "Flux.testmode!",
"category": "Function",
"text": "testmode!(m)\ntestmode!(m, false)\n\nPut layers like Dropout and BatchNorm into testing mode (or back to training mode with false).\n\n\n\n"
},
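A minimal sketch of the testmode! calls documented above, assuming a small Chain with a Dropout layer; the layer sizes and dropout probability are illustrative:

    using Flux
    m = Chain(Dense(10, 5, relu), Dropout(0.5), Dense(5, 2), softmax)
    testmode!(m)         # Dropout behaves as the identity during evaluation
    ŷ = m(rand(10))      # predictions without stochastic dropout
    testmode!(m, false)  # switch back to training behaviour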
{
"location": "models/layers.html#Flux.Dropout",
"page": "Model Reference",
@@ -237,7 +245,7 @@ var documenterSearchIndex = {"docs": [
"page": "Model Reference",
"title": "Normalisation & Regularisation",
"category": "section",
"text": "These layers don't affect the structure of the network but may improve training times or reduce overfitting.Dropout" "text": "These layers don't affect the structure of the network but may improve training times or reduce overfitting.Flux.testmode!\nDropout"
},
{
@@ -256,12 +264,44 @@ var documenterSearchIndex = {"docs": [
"text": "Consider a simple linear regression. We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters W and b.W = param(rand(2, 5))\nb = param(rand(2))\n\npredict(x) = W*x .+ b\nloss(x, y) = sum((predict(x) .- y).^2)\n\nx, y = rand(5), rand(2) # Dummy data\nl = loss(x, y) # ~ 3\nback!(l)We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:function update()\n η = 0.1 # Learning Rate\n for p in (W, b)\n p.data .-= η .* p.grad # Apply the update\n p.grad .= 0 # Clear the gradient\n end\nendIf we call update, the parameters W and b will change and our loss should go down.There are two pieces here: one is that we need a list of trainable parameters for the model ([W, b] in this case), and the other is the update step. In this case the update is simply gradient descent (x .-= η .* Δ), but we might choose to do something more advanced, like adding momentum.In this case, getting the variables is trivial, but you can imagine it'd be more of a pain with some complex stack of layers.m = Chain(\n Dense(10, 5, σ),\n Dense(5, 2), softmax)Instead of having to write [m[1].W, m[1].b, ...], Flux provides a params function params(m) that returns a list of all parameters in the model for you.For the update step, there's nothing whatsoever wrong with writing the loop above it'll work just fine but Flux provides various optimisers that make it more convenient.opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1\n\nopt() # Carry out the update, modifying `W` and `b`.An optimiser takes a parameter list and returns a function that does the same thing as update above. We can pass either opt or update to our training loop, which will then run the optimiser after every mini-batch of data." "text": "Consider a simple linear regression. We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters W and b.W = param(rand(2, 5))\nb = param(rand(2))\n\npredict(x) = W*x .+ b\nloss(x, y) = sum((predict(x) .- y).^2)\n\nx, y = rand(5), rand(2) # Dummy data\nl = loss(x, y) # ~ 3\nback!(l)We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:function update()\n η = 0.1 # Learning Rate\n for p in (W, b)\n p.data .-= η .* p.grad # Apply the update\n p.grad .= 0 # Clear the gradient\n end\nendIf we call update, the parameters W and b will change and our loss should go down.There are two pieces here: one is that we need a list of trainable parameters for the model ([W, b] in this case), and the other is the update step. In this case the update is simply gradient descent (x .-= η .* Δ), but we might choose to do something more advanced, like adding momentum.In this case, getting the variables is trivial, but you can imagine it'd be more of a pain with some complex stack of layers.m = Chain(\n Dense(10, 5, σ),\n Dense(5, 2), softmax)Instead of having to write [m[1].W, m[1].b, ...], Flux provides a params function params(m) that returns a list of all parameters in the model for you.For the update step, there's nothing whatsoever wrong with writing the loop above it'll work just fine but Flux provides various optimisers that make it more convenient.opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1\n\nopt() # Carry out the update, modifying `W` and `b`.An optimiser takes a parameter list and returns a function that does the same thing as update above. 
We can pass either opt or update to our training loop, which will then run the optimiser after every mini-batch of data."
}, },
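A short sketch of how the params and SGD pieces described in the entry above fit together; the model shape, dummy data and learning rate are placeholders:

    using Flux
    m = Chain(Dense(10, 5, σ), Dense(5, 2), softmax)
    opt = SGD(params(m), 0.1)       # plays the same role as the hand-written update()
    x, y = rand(10), [1.0, 0.0]     # dummy data
    l = sum((m(x) .- y).^2)         # loss in the same style as the entry above
    back!(l)                        # fill in the gradients
    opt()                           # apply one gradient-descent step to every parameter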
{
"location": "training/optimisers.html#Flux.Optimise.SGD",
"page": "Optimisers",
"title": "Flux.Optimise.SGD",
"category": "Function",
"text": "SGD(params, η = 1; decay = 0)\n\nClassic gradient descent optimiser. For each parameter p and its gradient δp, this runs p -= η*δp.\n\nSupports decayed learning rate decay if the decay argument is provided.\n\n\n\n"
},
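For instance, the signature above might be used like this; W, b and the numeric values are purely illustrative:

    using Flux
    W = param(rand(2, 5))
    b = param(rand(2))
    opt = SGD([W, b], 0.1; decay = 0.01)  # gradient descent with learning rate decay
    opt()                                 # carry out one update step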
{
"location": "training/optimisers.html#Flux.Optimise.Momentum",
"page": "Optimisers",
"title": "Flux.Optimise.Momentum",
"category": "Function",
"text": "Momentum(params, ρ, decay = 0)\n\nSGD with momentum ρ and optional learning rate decay.\n\n\n\n"
},
{
"location": "training/optimisers.html#Flux.Optimise.Nesterov",
"page": "Optimisers",
"title": "Flux.Optimise.Nesterov",
"category": "Function",
"text": "Nesterov(params, ρ, decay = 0)\n\nSGD with Nesterov momentum ρ and optional learning rate decay.\n\n\n\n"
},
{
"location": "training/optimisers.html#Flux.Optimise.ADAM",
"page": "Optimisers",
"title": "Flux.Optimise.ADAM",
"category": "Function",
"text": "ADAM(params; η = 0.001, β1 = 0.9, β2 = 0.999, ϵ = 1e-08, decay = 0)\n\nADAM optimiser.\n\n\n\n"
},
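A combined sketch of the Momentum, Nesterov and ADAM constructors documented above, following the signatures shown and using an illustrative parameter list (ρ and the keyword values are placeholders):

    using Flux
    ps = [param(rand(2, 5)), param(rand(2))]
    opt_m = Momentum(ps, 0.9)    # SGD with momentum ρ = 0.9
    opt_n = Nesterov(ps, 0.9)    # SGD with Nesterov momentum
    opt_a = ADAM(ps; η = 0.001)  # ADAM with the default step size written out
    opt_a()                      # each optimiser is callable and performs one update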
{
"location": "training/optimisers.html#Optimiser-Reference-1",
"page": "Optimisers",
"title": "Optimiser Reference",
"category": "section",
"text": "All optimisers return a function that, when called, will update the parameters passed to it.SGD\nMomentum\nNesterov\nRMSProp\nADAM\nADAGrad\nADADelta" "text": "All optimisers return a function that, when called, will update the parameters passed to it.SGD\nMomentum\nNesterov\nADAM"
},
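Putting the reference entry above into a loop, a hedged sketch of the call-the-optimiser pattern; m, loss and data are assumed to exist already:

    opt = ADAM(params(m))
    for (x, y) in data
        l = loss(x, y)
        back!(l)  # populate the gradients
        opt()     # the optimiser itself is the update function
    end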
{


@@ -24,10 +24,4 @@ end</code></pre><p>If we call <code>update</code>, the parameters <code>W</code>
Dense(10, 5, σ),
Dense(5, 2), softmax)</code></pre><p>Instead of having to write <code>[m[1].W, m[1].b, ...]</code>, Flux provides a params function <code>params(m)</code> that returns a list of all parameters in the model for you.</p><p>For the update step, there&#39;s nothing whatsoever wrong with writing the loop above (it&#39;ll work just fine), but Flux provides various <em>optimisers</em> that make it more convenient.</p><pre><code class="language-julia">opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1
-opt() # Carry out the update, modifying `W` and `b`.</code></pre><p>An optimiser takes a parameter list and returns a function that does the same thing as <code>update</code> above. We can pass either <code>opt</code> or <code>update</code> to our <a href="training.html">training loop</a>, which will then run the optimiser after every mini-batch of data.</p><h2><a class="nav-anchor" id="Optimiser-Reference-1" href="#Optimiser-Reference-1">Optimiser Reference</a></h2><p>All optimisers return a function that, when called, will update the parameters passed to it.</p><pre><code class="language-none">SGD
-Momentum
-Nesterov
-RMSProp
-ADAM
-ADAGrad
-ADADelta</code></pre><footer><hr/><a class="previous" href="../models/layers.html"><span class="direction">Previous</span><span class="title">Model Reference</span></a><a class="next" href="training.html"><span class="direction">Next</span><span class="title">Training</span></a></footer></article></body></html>
+opt() # Carry out the update, modifying `W` and `b`.</code></pre><p>An optimiser takes a parameter list and returns a function that does the same thing as <code>update</code> above. We can pass either <code>opt</code> or <code>update</code> to our <a href="training.html">training loop</a>, which will then run the optimiser after every mini-batch of data.</p><h2><a class="nav-anchor" id="Optimiser-Reference-1" href="#Optimiser-Reference-1">Optimiser Reference</a></h2><p>All optimisers return a function that, when called, will update the parameters passed to it.</p><section class="docstring"><div class="docstring-header"><a class="docstring-binding" id="Flux.Optimise.SGD" href="#Flux.Optimise.SGD"><code>Flux.Optimise.SGD</code></a><span class="docstring-category">Function</span>.</div><div><pre><code class="language-none">SGD(params, η = 1; decay = 0)</code></pre><p>Classic gradient descent optimiser. For each parameter <code>p</code> and its gradient <code>δp</code>, this runs <code>p -= η*δp</code>.</p><p>Supports learning rate decay if the <code>decay</code> argument is provided.</p></div><a class="source-link" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/187fddc11c2f0733d5e6a1644c2167d8bde590ab/src/optimise/interface.jl#L12-L19">source</a></section><section class="docstring"><div class="docstring-header"><a class="docstring-binding" id="Flux.Optimise.Momentum" href="#Flux.Optimise.Momentum"><code>Flux.Optimise.Momentum</code></a><span class="docstring-category">Function</span>.</div><div><pre><code class="language-none">Momentum(params, ρ, decay = 0)</code></pre><p>SGD with momentum <code>ρ</code> and optional learning rate decay.</p></div><a class="source-link" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/187fddc11c2f0733d5e6a1644c2167d8bde590ab/src/optimise/interface.jl#L23-L27">source</a></section><section class="docstring"><div class="docstring-header"><a class="docstring-binding" id="Flux.Optimise.Nesterov" href="#Flux.Optimise.Nesterov"><code>Flux.Optimise.Nesterov</code></a><span class="docstring-category">Function</span>.</div><div><pre><code class="language-none">Nesterov(params, ρ, decay = 0)</code></pre><p>SGD with Nesterov momentum <code>ρ</code> and optional learning rate decay.</p></div><a class="source-link" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/187fddc11c2f0733d5e6a1644c2167d8bde590ab/src/optimise/interface.jl#L31-L35">source</a></section><section class="docstring"><div class="docstring-header"><a class="docstring-binding" id="Flux.Optimise.ADAM" href="#Flux.Optimise.ADAM"><code>Flux.Optimise.ADAM</code></a><span class="docstring-category">Function</span>.</div><div><pre><code class="language-none">ADAM(params; η = 0.001, β1 = 0.9, β2 = 0.999, ϵ = 1e-08, decay = 0)</code></pre><p><a href="https://arxiv.org/abs/1412.6980v8">ADAM</a> optimiser.</p></div><a class="source-link" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/187fddc11c2f0733d5e6a1644c2167d8bde590ab/src/optimise/interface.jl#L49-L53">source</a></section><footer><hr/><a class="previous" href="../models/layers.html"><span class="direction">Previous</span><span class="title">Model Reference</span></a><a class="next" href="training.html"><span class="direction">Next</span><span class="title">Training</span></a></footer></article></body></html>