build based on 33ab22a

Author: zeptodoctor
Date: 2020-04-30 10:33:06 +00:00
parent cef2838f10
commit 460104c389
21 changed files with 75 additions and 68 deletions

File diff suppressed because one or more lines are too long

View File

@@ -29,4 +29,4 @@ end
# train for 10 epochs
using IterTools: ncycle
Flux.train!(loss, ps, ncycle(train_loader, 10), opt)
- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/data/dataloader.jl#L13-L54 (page generated with Documenter.jl on Wednesday 29 April 2020 10:54, using Julia 1.4.1)
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/data/dataloader.jl#L13-L54 (page generated with Documenter.jl on Thursday 30 April 2020 10:33, using Julia 1.4.1)
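The hunk above is the tail of the DataLoader docstring. For context, a minimal self-contained sketch of the same training pattern might look like the following; the model, loss and toy data are illustrative stand-ins rather than anything from this page, and it assumes both Flux and IterTools are installed:

using Flux
using Flux.Data: DataLoader
using IterTools: ncycle

# toy data: 100 samples with 10 features each, and 2-class one-hot targets
X = rand(Float32, 10, 100)
Y = Flux.onehotbatch(rand(1:2, 100), 1:2)
train_loader = DataLoader(X, Y, batchsize = 16, shuffle = true)

m = Chain(Dense(10, 5, relu), Dense(5, 2), softmax)
loss(x, y) = Flux.crossentropy(m(x), y)
ps = Flux.params(m)
opt = Descent(0.1)

# train for 10 epochs by cycling the loader 10 times
Flux.train!(loss, ps, ncycle(train_loader, 10), opt)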

View File

@@ -35,11 +35,11 @@ julia> Flux.onehot(:c, [:a, :b, :c])
3-element Flux.OneHotVector:
0
0
1
- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/onehot.jl#L45-L67
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/onehot.jl#L45-L67

Flux.onecold (Function)

    onecold(y[, labels = 1:length(y)])

Inverse operation of onehot.

Examples

julia> Flux.onecold([true, false, false], [:a, :b, :c])
:a
julia> Flux.onecold([0.3, 0.2, 0.5], [:a, :b, :c])
:c
- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/onehot.jl#L102-L115
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/onehot.jl#L102-L115

Batches

onehotbatch creates a batch (matrix) of one-hot vectors, and onecold treats matrices as batches.

julia> using Flux: onehotbatch
julia> onehotbatch([:b, :a, :b], [:a, :b, :c])
3×3 Flux.OneHotMatrix:
@@ -55,4 +55,4 @@ julia> onecold(ans, [:a, :b, :c])
3×3 Flux.OneHotMatrix{Array{Flux.OneHotVector,1}}:
0 1 0
1 0 1
0 0 0
- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/onehot.jl#L80-L96 (page generated with Documenter.jl on Wednesday 29 April 2020 10:54, using Julia 1.4.1)
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/onehot.jl#L80-L96 (page generated with Documenter.jl on Thursday 30 April 2020 10:33, using Julia 1.4.1)
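Tying onehot, onehotbatch and onecold together, here is a short hedged sketch; the label set and the random score matrix are purely illustrative:

using Flux
using Flux: onehotbatch, onecold

labels  = [:cat, :dog, :cat, :bird]
classes = [:cat, :dog, :bird]

Y = onehotbatch(labels, classes)        # 3×4 Flux.OneHotMatrix, one column per sample
onecold(Y, classes) == labels           # round trip back to the labels

scores = softmax(rand(Float32, 3, 4))   # stand-in for a model's batched output
onecold(scores, classes)                # 4-element Array of Symbols, decoded column by column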

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -47,4 +47,4 @@ julia> x |> cpu
10-element Array{Float32,1}:
0.235164
0.192538
- page generated with Documenter.jl on Wednesday 29 April 2020 10:54, using Julia 1.4.1
+ page generated with Documenter.jl on Thursday 30 April 2020 10:33, using Julia 1.4.1
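The hunk above only shows the tail of the GPU page, moving data back with cpu. As a rough, hedged sketch of the full round trip (gpu is a no-op unless a working CUDA backend such as CuArrays is loaded, so this also runs on a CPU-only machine):

using Flux

m = Chain(Dense(10, 5, relu), Dense(5, 2), softmax)
x = rand(Float32, 10)

m_gpu = gpu(m)     # move the model parameters to the GPU when one is available
x_gpu = gpu(x)     # move the input alongside the model

y = m_gpu(x_gpu)   # the forward pass runs on whichever device holds the data
y |> cpu           # copy the result back to an ordinary Array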

File diff suppressed because one or more lines are too long

View File

@@ -24,4 +24,4 @@ Params([[0.66722 0.774872 0.249809; 0.843321 0.403843 0.429232; 0.683525 0.66245
)
ps = Flux.params(m[3:end])

The Zygote.Params object ps now holds a reference to only the parameters of the layers passed to it.

During training, gradients will only be computed for (and applied to) the last Dense layer, so only that layer will have its parameters changed.

Flux.params also takes multiple inputs, making it easy to collect parameters from heterogeneous models with a single call. For example, to omit optimising the second Dense layer in the previous example:

Flux.params(m[1], m[3:end])

Sometimes finer-grained control is needed. We can freeze a specific parameter of a specific layer which has already entered a Params object ps by simply deleting it from ps:

ps = params(m)
delete!(ps, m[2].b)
- page generated with Documenter.jl on Wednesday 29 April 2020 10:54, using Julia 1.4.1
+ page generated with Documenter.jl on Thursday 30 April 2020 10:33, using Julia 1.4.1
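A minimal sketch of how this delete! trick plugs into an actual training step; the layer sizes, loss, optimiser and toy data below are illustrative, not taken from the page:

using Flux

m = Chain(Dense(10, 64, relu), Dense(64, 32, relu), Dense(32, 10))

ps = Flux.params(m)
delete!(ps, m[2].b)          # the bias of the second Dense layer is now frozen

loss(x, y) = Flux.mse(m(x), y)
opt = ADAM()
data = [(rand(Float32, 10, 8), rand(Float32, 10, 8))]

Flux.train!(loss, ps, data, opt)   # updates every parameter except m[2].b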

View File

@@ -111,8 +111,8 @@ model2(rand(10)) # => 2-element vector
This quickly starts to
m(rand(10))

Likewise, Chain will happily work with any Julia function.

m = Chain(x -> x^2, x -> x+1)
m(5) # => 26

Layer helpers

Flux provides a set of helpers for custom layers, which you can enable by calling

Flux.@functor Affine

This enables a useful extra set of functionality for our Affine layer, such as collecting its parameters or moving it to the GPU.

For some more helpful tricks, including parameter freezing, please check out the advanced usage guide.

Utility functions

Flux provides some utility functions to help you generate models in an automated fashion.

outdims enables you to calculate the spatial output dimensions of layers like Conv when applied to input images of a given size. It is currently limited to the following layers: Chain, Dense, Conv, Diagonal, Maxout, ConvTranspose, DepthwiseConv, CrossCor, MaxPool, MeanPool.

Flux.outdims (Function)

    outdims(c::Chain, isize)

Calculate the output dimensions given the input dimensions, isize.

m = Chain(Conv((3, 3), 3 => 16), Conv((3, 3), 16 => 32))
outdims(m, (10, 10)) == (6, 6)
- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/basic.jl#L50-L59
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/basic.jl#L50-L59

    outdims(l::Dense, isize)

Calculate the output dimensions given the input dimensions, isize.

m = Dense(10, 5)
outdims(m, (5, 2)) == (5,)
outdims(m, (10,)) == (5,)
- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/basic.jl#L139-L149
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/basic.jl#L139-L149

    outdims(l::Conv, isize::Tuple)

Calculate the output dimensions given the input dimensions isize. Batch size and channel size are ignored as per NNlib.jl (https://github.com/FluxML/NNlib.jl).

m = Conv((3, 3), 3 => 16)
outdims(m, (10, 10)) == (8, 8)
outdims(m, (10, 10, 1, 3)) == (8, 8)
- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/conv.jl#L101-L112 (page generated with Documenter.jl on Wednesday 29 April 2020 10:54, using Julia 1.4.1)
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/conv.jl#L101-L112 (page generated with Documenter.jl on Thursday 30 April 2020 10:33, using Julia 1.4.1)
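As a hedged illustration of why outdims is handy, the sketch below sizes the Dense layer of a small convolutional model automatically; the 28×28 input and the layer widths are made up for the example:

using Flux

convs = Chain(Conv((3, 3), 1 => 16, relu), MaxPool((2, 2)), Conv((3, 3), 16 => 32, relu))
h, w = Flux.outdims(convs, (28, 28))     # spatial size after the conv stack, (11, 11) here

model = Chain(convs, Flux.flatten, Dense(h * w * 32, 10), softmax)
model(rand(Float32, 28, 28, 1, 4))       # 10×4 output for a batch of 4 images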

View File

@@ -16,28 +16,28 @@ julia> m = Chain(Dense(10, 5), Dense(5, 2));
julia> x = rand(10);
julia> m(x) == m[2](m[1](x))
true
- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/basic.jl#L1-L24
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/basic.jl#L1-L24

Flux.Dense (Type)

    Dense(in::Integer, out::Integer, σ = identity)

Create a traditional Dense layer with parameters W and b.

    y = σ.(W * x .+ b)

The input x must be a vector of length in, or a batch of vectors represented as an in × N matrix. The output y will be a vector or batch of length out.

Examples

julia> d = Dense(5, 2)
Dense(5, 2)

julia> d(rand(5))
2-element Array{Float32,1}:
 -0.16210233
  0.12311903

- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/basic.jl#L85-L104
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/basic.jl#L85-L104
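The vector-versus-batch note in the Dense docstring is easy to check directly; a short hedged sketch (the sizes are arbitrary):

using Flux

d = Dense(5, 2, relu)

d(rand(Float32, 5))        # one sample: a 2-element output vector
d(rand(Float32, 5, 16))    # a batch of 16 samples as a 5×16 matrix: a 2×16 output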
Convolution and Pooling Layers

These layers are used to build convolutional neural networks (CNNs).

Flux.Conv (Type)

    Conv(size, in => out, σ = identity; init = glorot_uniform,
         stride = 1, pad = 0, dilation = 1)

Standard convolutional layer. size should be a tuple like (2, 2). in and out specify the number of input and output channels respectively.

Data should be stored in WHCN order (width, height, # channels, batch size). In other words, a 100×100 RGB image would be a 100×100×3×1 array, and a batch of 50 would be a 100×100×3×50 array.

Use pad=SamePad() to apply padding so that outputsize == inputsize / stride.

Examples

Apply a Conv layer to a 1-channel input using a 2×2 window size, giving us a 16-channel output. Output is activated with ReLU.

size = (2,2)
in = 1
out = 16
Conv(size, in => out, relu)
- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/conv.jl#L32-L55
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/conv.jl#L32-L55

Flux.MaxPool (Type)

    MaxPool(k; pad = 0, stride = k)

Max pooling layer. k is the size of the window for each dimension of the input.

Use pad=SamePad() to apply padding so that outputsize == inputsize / stride.

- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/conv.jl#L387-L394
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/conv.jl#L387-L394

Flux.GlobalMaxPool (Type)

    GlobalMaxPool()

Global max pooling layer.

Transforms (w,h,c,b)-shaped input into (1,1,c,b)-shaped output, by performing max pooling on the complete (w,h)-shaped feature maps.

- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/conv.jl#L337-L344
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/conv.jl#L337-L344

Flux.MeanPool (Type)

    MeanPool(k; pad = 0, stride = k)

Mean pooling layer. k is the size of the window for each dimension of the input.

Use pad=SamePad() to apply padding so that outputsize == inputsize / stride.

- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/conv.jl#L418-L424
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/conv.jl#L418-L424

Flux.GlobalMeanPool (Type)

    GlobalMeanPool()

Global mean pooling layer.

Transforms (w,h,c,b)-shaped input into (1,1,c,b)-shaped output, by performing mean pooling on the complete (w,h)-shaped feature maps.

- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/conv.jl#L362-L369
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/conv.jl#L362-L369

Flux.DepthwiseConv (Type)

    DepthwiseConv(size, in => out, σ = identity; init = glorot_uniform,
                  stride = 1, pad = 0, dilation = 1)

Depthwise convolutional layer. size should be a tuple like (2, 2). in and out specify the number of input and output channels respectively. Note that out must be an integer multiple of in.

Data should be stored in WHCN order (width, height, # channels, batch size). In other words, a 100×100 RGB image would be a 100×100×3×1 array, and a batch of 50 would be a 100×100×3×50 array.

Use pad=SamePad() to apply padding so that outputsize == inputsize / stride.

- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/conv.jl#L192-L205
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/conv.jl#L192-L205

Flux.ConvTranspose (Type)

    ConvTranspose(size, in => out, σ = identity; init = glorot_uniform,
                  stride = 1, pad = 0, dilation = 1)

Standard convolutional transpose layer. size should be a tuple like (2, 2). in and out specify the number of input and output channels respectively.

Data should be stored in WHCN order (width, height, # channels, batch size). In other words, a 100×100 RGB image would be a 100×100×3×1 array, and a batch of 50 would be a 100×100×3×50 array.

Use pad=SamePad() to apply padding so that outputsize == stride * inputsize - stride + 1.

- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/conv.jl#L116-L128
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/conv.jl#L116-L128
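To make the WHCN convention in the convolution and pooling docstrings concrete, here is a small hedged sketch; the image size and channel counts are illustrative:

using Flux

# a batch of 8 single-channel 28×28 images in WHCN order
x = rand(Float32, 28, 28, 1, 8)

stack = Chain(Conv((3, 3), 1 => 16, relu),   # -> 26×26×16×8
              MaxPool((2, 2)),               # -> 13×13×16×8
              Conv((3, 3), 16 => 32, relu),  # -> 11×11×32×8
              GlobalMeanPool())              # -> 1×1×32×8

size(stack(x)) == (1, 1, 32, 8)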
Flux.CrossCor (Type)

    CrossCor(size, in => out, σ = identity; init = glorot_uniform,
             stride = 1, pad = 0, dilation = 1)

Standard cross-correlation layer. size should be a tuple like (2, 2). in and out specify the number of input and output channels respectively.

Data should be stored in WHCN order (width, height, # channels, batch size). In other words, a 100×100 RGB image would be a 100×100×3×1 array, and a batch of 50 would be a 100×100×3×50 array.

Use pad=SamePad() to apply padding so that outputsize == inputsize / stride.

Examples

Apply a CrossCor layer to a 1-channel input using a 2×2 window size, giving us a 16-channel output. Output is activated with ReLU.

size = (2,2)
in = 1
out = 16
CrossCor((2, 2), 1=>16, relu)
- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/conv.jl#L260-L283
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/conv.jl#L260-L283

Flux.flatten (Function)

    flatten(x::AbstractArray)

Reshape arbitrarily-shaped input into a matrix-shaped output, preserving the size of the last dimension. Equivalent to reshape(x, :, size(x)[end]).

- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/stateless.jl#L36-L42
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/stateless.jl#L256-L262

Recurrent Layers

Much like the core layers above, but can be used to process sequence data (as well as other kinds of structured data).

Flux.RNN (Function)

    RNN(in::Integer, out::Integer, σ = tanh)

The most basic recurrent layer; essentially acts as a Dense layer, but with the output fed back into the input each time step.

- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/recurrent.jl#L91-L96
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/recurrent.jl#L91-L96

Flux.LSTM (Function)

    LSTM(in::Integer, out::Integer)

Long Short Term Memory recurrent layer (https://www.researchgate.net/publication/13853244_Long_Short-term_Memory). Behaves like an RNN but generally exhibits a longer memory span over sequences.

See this article (https://colah.github.io/posts/2015-08-Understanding-LSTMs/) for a good overview of the internals.

- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/recurrent.jl#L136-L144
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/recurrent.jl#L136-L144

Flux.GRU (Function)

    GRU(in::Integer, out::Integer)

Gated Recurrent Unit layer (https://arxiv.org/abs/1406.1078). Behaves like an RNN but generally exhibits a longer memory span over sequences.

See this article (https://colah.github.io/posts/2015-08-Understanding-LSTMs/) for a good overview of the internals.

- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/recurrent.jl#L177-L185
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/recurrent.jl#L177-L185

Flux.Recur (Type)

    Recur(cell)

Recur takes a recurrent cell and makes it stateful, managing the hidden state in the background. cell should be a model of the form:

    h, y = cell(h, x...)

For example, here's a recurrent network that keeps a running total of its inputs:

accum(h, x) = (h + x, x)
rnn = Flux.Recur(accum, 0)
rnn(2) # 2
rnn(3) # 3
rnn.state # 5
rnn.(1:10) # apply to a sequence
rnn.state # 60
- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/recurrent.jl#L7-L26
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/recurrent.jl#L7-L26

Flux.reset! (Function)

    reset!(rnn)

Reset the hidden state of a recurrent layer back to its original value.

Assuming you have a Recur layer rnn, this is roughly equivalent to:

    rnn.state = hidden(rnn.cell)

- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/recurrent.jl#L45-L54
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/recurrent.jl#L45-L54

Other General Purpose Layers

These are marginally more obscure than the basic layers. In contrast to the layers described in the other sections, they are not readily grouped around a particular purpose (e.g. CNNs or RNNs).

Flux.Maxout (Type)

    Maxout(over)

The Maxout layer (https://arxiv.org/pdf/1302.4389.pdf) has a number of internal layers which all receive the same input. It returns the elementwise maximum of the internal layers' outputs.

Maxout over linear dense layers satisfies the universal approximation theorem.

- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/basic.jl#L183-L191
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/basic.jl#L183-L191

Flux.SkipConnection (Type)

    SkipConnection(layer, connection)

Create a skip connection, which consists of a layer or Chain of consecutive layers and a shortcut connection linking the block's input to the output through a user-supplied 2-argument callable. The first argument to the callable will be propagated through the given layer, while the second is the unchanged, "skipped" input.

The simplest "ResNet"-type connection is just SkipConnection(layer, +), and requires the output of the layers to be the same shape as the input. Here is a more complicated example:

m = Conv((3,3), 4=>7, pad=(1,1))
x = ones(5,5,4,10);
size(m(x)) == (5, 5, 7, 10)
sm = SkipConnection(m, (mx, x) -> cat(mx, x, dims=3))
size(sm(x)) == (5, 5, 11, 10)
- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/basic.jl#L226-L246
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/basic.jl#L226-L246

Normalisation & Regularisation

These layers don't affect the structure of the network but may improve training times or reduce overfitting.

Flux.normalise (Function)

    normalise(x; dims=1)

Normalise x to mean 0 and standard deviation 1 across the dimensions given by dims. Defaults to normalising over columns.

julia> a = reshape(collect(1:9), 3, 3)
3×3 Array{Int64,2}:
1 4 7
2 5 8
@@ -53,22 +53,22 @@ julia> Flux.normalise(a, dims=2)
3×3 Array{Float64,2}:
-1.22474 0.0 1.22474
-1.22474 0.0 1.22474
-1.22474 0.0 1.22474
- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/stateless.jl#L3-L28
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/stateless.jl#L223-L248
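Since the hunk only shows part of the normalise example, here is a brief hedged sketch of the dims keyword on a small matrix (the values are illustrative):

using Flux, Statistics

a = Float32[1 4 7; 2 5 8; 3 6 9]

b = Flux.normalise(a)            # default dims=1 normalises each column
c = Flux.normalise(a, dims=2)    # dims=2 normalises each row

mean(b, dims=1)                  # every column mean is ≈ 0
mean(c, dims=2)                  # every row mean is ≈ 0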
Flux.BatchNorm (Type)

    BatchNorm(channels::Integer, σ = identity;
              initβ = zeros, initγ = ones,
              ϵ = 1e-8, momentum = .1)

Batch Normalization layer (https://arxiv.org/pdf/1502.03167.pdf). channels should be the size of the channel dimension in your data (see below).

Given an array with N dimensions, call the N-1th the channel dimension. (For a batch of feature vectors this is just the data dimension, for WHCN images it's the usual channel dimension.)

BatchNorm computes the mean and variance for each W×H×1×N slice and shifts them to have a new mean and variance (corresponding to the learnable, per-channel bias and scale parameters).

Use testmode! during inference.

Examples

m = Chain(
Dense(28^2, 64),
BatchNorm(64, relu),
Dense(64, 10),
BatchNorm(10),
softmax)
- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/normalise.jl#L122-L149
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/normalise.jl#L122-L149

Flux.dropout (Function)

    dropout(x, p; dims = :)

The dropout function. For each input, either sets that input to 0 (with probability p) or scales it by 1 / (1 - p). dims specifies the unbroadcasted dimensions, e.g. dims=1 applies dropout along columns and dims=2 along rows. This is used as a regularisation, i.e. it reduces overfitting during training.

See also the Dropout layer.

- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/normalise.jl#L12-L21
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/normalise.jl#L12-L21

Flux.Dropout (Type)

    Dropout(p, dims = :)

Dropout layer. In the forward pass, applies the Flux.dropout function to the input.

Does nothing to the input once Flux.testmode! is true.

- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/normalise.jl#L30-L36
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/normalise.jl#L30-L36

Flux.AlphaDropout (Type)

    AlphaDropout(p)

A dropout layer. Used in Self-Normalizing Neural Networks (https://papers.nips.cc/paper/6698-self-normalizing-neural-networks.pdf). The AlphaDropout layer ensures that the mean and variance of activations remain the same as before.

Does nothing to the input once testmode! is true.

- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/normalise.jl#L65-L74
+ source: https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/normalise.jl#L65-L74

Flux.LayerNorm (Type)

    LayerNorm(h::Integer)

A normalisation layer (https://arxiv.org/pdf/1607.06450.pdf) designed to be used with recurrent hidden states of size h. Normalises the mean and standard deviation of each input before applying a per-neuron gain/bias.

- source: https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/normalise.jl#L100-L106

Flux.InstanceNorm (Type)

    InstanceNorm(channels::Integer, σ = identity;
Normalises the mean and standard deviation of each input before applying a per-neuron gain/bias.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/normalise.jl#L100-L106">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.InstanceNorm" href="#Flux.InstanceNorm"><code>Flux.InstanceNorm</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">InstanceNorm(channels::Integer, σ = identity;
initβ = zeros, initγ = ones,
ϵ = 1e-8, momentum = .1)</code></pre><p><a href="https://arxiv.org/abs/1607.08022">Instance Normalization</a> layer. <code>channels</code> should be the size of the channel dimension in your data (see below).</p><p>Given an array with <code>N</code> dimensions, call the <code>N-1</code>th the channel dimension. (For a batch of feature vectors this is just the data dimension, for <code>WHCN</code> images it&#39;s the usual channel dimension.)</p><p><code>InstanceNorm</code> computes the mean and variance for each each <code>W×H×1×1</code> slice and shifts them to have a new mean and variance (corresponding to the learnable, per-channel <code>bias</code> and <code>scale</code> parameters).</p><p>Use <a href="#Flux.testmode!"><code>testmode!</code></a> during inference.</p><p><strong>Examples</strong></p><pre><code class="language-julia">m = Chain(
Dense(28^2, 64),
InstanceNorm(64, relu),
Dense(64, 10),
InstanceNorm(10),
softmax)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/normalise.jl#L228-L255">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.GroupNorm" href="#Flux.GroupNorm"><code>Flux.GroupNorm</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">GroupNorm(chs::Integer, G::Integer, λ = identity;
softmax)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/normalise.jl#L228-L255">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.GroupNorm" href="#Flux.GroupNorm"><code>Flux.GroupNorm</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">GroupNorm(chs::Integer, G::Integer, λ = identity;
initβ = (i) -&gt; zeros(Float32, i), initγ = (i) -&gt; ones(Float32, i),
ϵ = 1f-5, momentum = 0.1f0)</code></pre><p><a href="https://arxiv.org/pdf/1803.08494.pdf">Group Normalization</a> layer. This layer can outperform Batch Normalization and Instance Normalization.</p><p><code>chs</code> is the number of channels, the channel dimension of your input. For an array of N dimensions, the <code>N-1</code>th index is the channel dimension.</p><p><code>G</code> is the number of groups along which the statistics are computed. The number of channels must be an integer multiple of the number of groups.</p><p>Use <a href="#Flux.testmode!"><code>testmode!</code></a> during inference.</p><p><strong>Examples</strong></p><pre><code class="language-julia">m = Chain(Conv((3,3), 1=&gt;32, leakyrelu;pad = 1),
GroupNorm(32,16))
# 32 channels, 16 groups (G = 16), thus 2 channels per group used</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/normalise.jl#L313-L335">source</a></section></article><h3 id="Testmode-1"><a class="docs-heading-anchor" href="#Testmode-1">Testmode</a><a class="docs-heading-anchor-permalink" href="#Testmode-1" title="Permalink"></a></h3><p>Many normalisation layers behave differently under training and inference (testing). By default, Flux will automatically determine when a layer evaluation is part of training or inference. Still, depending on your use case, it may be helpful to manually specify when these layers should be treated as being trained or not. For this, Flux provides <code>Flux.testmode!</code>. When called on a model (e.g. a layer or chain of layers), this function will place the model into the mode specified.</p><article class="docstring"><header><a class="docstring-binding" id="Flux.testmode!" href="#Flux.testmode!"><code>Flux.testmode!</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">testmode!(m, mode = true)</code></pre><p>Set a layer or model&#39;s test mode (see below). Using <code>:auto</code> mode will treat any gradient computation as training.</p><p><em>Note</em>: if you manually set a model into test mode, you need to manually place it back into train mode during the training phase.</p><p>Possible values include:</p><ul><li><code>false</code> for training</li><li><code>true</code> for testing</li><li><code>:auto</code> or <code>nothing</code> for Flux to detect the mode automatically</li></ul></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/functor.jl#L42-L55">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.trainmode!" href="#Flux.trainmode!"><code>Flux.trainmode!</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">trainmode!(m, mode = true)</code></pre><p>Set a layer or model&#39;s train mode (see below). Symmetric to <a href="#Flux.testmode!"><code>testmode!</code></a> (i.e. 
<code>trainmode!(m, mode) == testmode!(m, !mode)</code>).</p><p><em>Note</em>: if you manually set a model into train mode, you need to manually place it into test mode during the testing phase.</p><p>Possible values include:</p><ul><li><code>true</code> for training</li><li><code>false</code> for testing</li><li><code>:auto</code> or <code>nothing</code> for Flux to detect the mode automatically</li></ul></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/functor.jl#L58-L71">source</a></section></article></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../recurrence/">« Recurrence</a><a class="docs-footer-nextpage" href="../losses/">Loss Functions »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Wednesday 29 April 2020 10:54">Wednesday 29 April 2020</span>. Using Julia version 1.4.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
# 32 channels, 16 groups (G = 16), thus 2 channels per group used</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/normalise.jl#L313-L335">source</a></section></article><h3 id="Testmode-1"><a class="docs-heading-anchor" href="#Testmode-1">Testmode</a><a class="docs-heading-anchor-permalink" href="#Testmode-1" title="Permalink"></a></h3><p>Many normalisation layers behave differently under training and inference (testing). By default, Flux will automatically determine when a layer evaluation is part of training or inference. Still, depending on your use case, it may be helpful to manually specify when these layers should be treated as being trained or not. For this, Flux provides <code>Flux.testmode!</code>. When called on a model (e.g. a layer or chain of layers), this function will place the model into the mode specified.</p><article class="docstring"><header><a class="docstring-binding" id="Flux.testmode!" href="#Flux.testmode!"><code>Flux.testmode!</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">testmode!(m, mode = true)</code></pre><p>Set a layer or model&#39;s test mode (see below). Using <code>:auto</code> mode will treat any gradient computation as training.</p><p><em>Note</em>: if you manually set a model into test mode, you need to manually place it back into train mode during the training phase.</p><p>Possible values include:</p><ul><li><code>false</code> for training</li><li><code>true</code> for testing</li><li><code>:auto</code> or <code>nothing</code> for Flux to detect the mode automatically</li></ul></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/functor.jl#L42-L55">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.trainmode!" href="#Flux.trainmode!"><code>Flux.trainmode!</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">trainmode!(m, mode = true)</code></pre><p>Set a layer or model&#39;s train mode (see below). Symmetric to <a href="#Flux.testmode!"><code>testmode!</code></a> (i.e. 
<code>trainmode!(m, mode) == testmode!(m, !mode)</code>).</p><p><em>Note</em>: if you manually set a model into train mode, you need to manually place it into test mode during the testing phase.</p><p>Possible values include:</p><ul><li><code>true</code> for training</li><li><code>false</code> for testing</li><li><code>:auto</code> or <code>nothing</code> for Flux to detect the mode automatically</li></ul></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/functor.jl#L58-L71">source</a></section></article></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../recurrence/">« Recurrence</a><a class="docs-footer-nextpage" href="../losses/">Loss Functions »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Thursday 30 April 2020 10:33">Thursday 30 April 2020</span>. Using Julia version 1.4.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
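<p>A minimal sketch of switching these modes by hand (the model, sizes, and input below are made up for illustration and are not taken from the docstrings above):</p><pre><code class="language-julia">using Flux

m = Chain(Dense(28^2, 64, relu), Dropout(0.5), Dense(64, 10), BatchNorm(10), softmax)
x = rand(Float32, 28^2)

Flux.testmode!(m)    # dropout is disabled, BatchNorm uses its stored statistics
y1 = m(x)
y2 = m(x)            # identical to y1, since nothing stochastic remains

Flux.trainmode!(m)   # restore training behaviour before further training</code></pre>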

File diff suppressed because one or more lines are too long

View File

@ -28,4 +28,4 @@ a = randomly sampled from uniform distribution U(l, u)</code></pre><p>Randomized
batched_adjoint(A)</code></pre><p>Equivalent to applying <code>transpose</code> or <code>adjoint</code> to each matrix <code>A[:,:,k]</code>.</p><p>These exist to control how <code>batched_mul</code> behaves, as it operates on such matrix slices of an array with <code>ndims(A)==3</code>.</p><pre><code class="language-none">BatchedTranspose{T, N, S} &lt;: AbstractBatchedMatrix{T, N}
BatchedAdjoint{T, N, S}</code></pre><p>Lazy wrappers analogous to <code>Transpose</code> and <code>Adjoint</code>, returned by <code>batched_transpose</code></p></div></section></article><article class="docstring"><header><a class="docstring-binding" id="NNlib.batched_transpose" href="#NNlib.batched_transpose"><code>NNlib.batched_transpose</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">batched_transpose(A::AbstractArray{T,3})
batched_adjoint(A)</code></pre><p>Equivalent to applying <code>transpose</code> or <code>adjoint</code> to each matrix <code>A[:,:,k]</code>.</p><p>These exist to control how <code>batched_mul</code> behaves, as it operates on such matrix slices of an array with <code>ndims(A)==3</code>.</p><pre><code class="language-none">BatchedTranspose{T, N, S} &lt;: AbstractBatchedMatrix{T, N}
BatchedAdjoint{T, N, S}</code></pre><p>Lazy wrappers analogous to <code>Transpose</code> and <code>Adjoint</code>, returned by <code>batched_transpose</code></p></div></section></article></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../advanced/">« Advanced Model Building</a><a class="docs-footer-nextpage" href="../../data/onehot/">One-Hot Encoding »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Wednesday 29 April 2020 10:54">Wednesday 29 April 2020</span>. Using Julia version 1.4.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
BatchedAdjoint{T, N, S}</code></pre><p>Lazy wrappers analogous to <code>Transpose</code> and <code>Adjoint</code>, returned by <code>batched_transpose</code></p></div></section></article></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../advanced/">« Advanced Model Building</a><a class="docs-footer-nextpage" href="../../data/onehot/">One-Hot Encoding »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Thursday 30 April 2020 10:33">Thursday 30 April 2020</span>. Using Julia version 1.4.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
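<p>A rough sketch of what these batched operations do (the array sizes here are arbitrary and only for illustration):</p><pre><code class="language-julia">using NNlib

A = rand(Float32, 2, 3, 4)          # a batch of four 2×3 matrices
B = rand(Float32, 3, 5, 4)          # a batch of four 3×5 matrices

C = NNlib.batched_mul(A, B)         # 2×5×4: one matrix product per slice k
At = NNlib.batched_transpose(A)     # lazy wrapper; size(At) == (3, 2, 4)
D = NNlib.batched_mul(At, C)        # 3×5×4</code></pre>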

View File

@ -39,4 +39,4 @@ m = Flux.Recur(rnn, h)
y = m(x)</code></pre><p>The <code>Recur</code> wrapper stores the state between runs in the <code>m.state</code> field.</p><p>If you use the <code>RNN(10, 5)</code> constructor as opposed to <code>RNNCell</code> you&#39;ll see that it&#39;s simply a wrapped cell.</p><pre><code class="language-julia">julia&gt; RNN(10, 5)
Recur(RNNCell(10, 5, tanh))</code></pre><h2 id="Sequences-1"><a class="docs-heading-anchor" href="#Sequences-1">Sequences</a><a class="docs-heading-anchor-permalink" href="#Sequences-1" title="Permalink"></a></h2><p>Often we want to work with sequences of inputs, rather than individual <code>x</code>s.</p><pre><code class="language-julia">seq = [rand(10) for i = 1:10]</code></pre><p>With <code>Recur</code>, applying our model to each element of a sequence is trivial:</p><pre><code class="language-julia">m.(seq) # returns a list of 5-element vectors</code></pre><p>This works even when we&#39;ve chained recurrent layers into a larger model.</p><pre><code class="language-julia">m = Chain(LSTM(10, 15), Dense(15, 5))
m.(seq)</code></pre><p>Finally, we can reset the hidden state of the cell back to its initial value using <code>reset!(m)</code>.</p></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../basics/">« Basics</a><a class="docs-footer-nextpage" href="../layers/">Model Reference »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Wednesday 29 April 2020 10:54">Wednesday 29 April 2020</span>. Using Julia version 1.4.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
m.(seq)</code></pre><p>Finally, we can reset the hidden state of the cell back to its initial value using <code>reset!(m)</code>.</p></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../basics/">« Basics</a><a class="docs-footer-nextpage" href="../layers/">Model Reference »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Thursday 30 April 2020 10:33">Thursday 30 April 2020</span>. Using Julia version 1.4.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
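<p>Putting the above together, a minimal sketch of running one sequence and then resetting the state (the sizes are arbitrary):</p><pre><code class="language-julia">using Flux

m = Chain(LSTM(10, 15), Dense(15, 5))
seq = [rand(Float32, 10) for i = 1:10]

ys = m.(seq)      # one 5-element output per timestep, state carried across calls
Flux.reset!(m)    # restore the initial hidden state before the next sequence</code></pre>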

View File

@ -37,4 +37,4 @@ julia&gt; activations(c, rand(10))
Float32[0.5192045, 0.48079553]
julia&gt; sum(norm, ans)
2.1166067f0</code></pre><article class="docstring"><header><a class="docstring-binding" id="Flux.activations" href="#Flux.activations"><code>Flux.activations</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">activations(c::Chain, input)</code></pre><p>Calculate the forward results of each layer in Chain <code>c</code> with <code>input</code> as model input.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/layers/basic.jl#L67-L71">source</a></section></article></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../losses/">« Loss Functions</a><a class="docs-footer-nextpage" href="../advanced/">Advanced Model Building »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Wednesday 29 April 2020 10:54">Wednesday 29 April 2020</span>. Using Julia version 1.4.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
2.1166067f0</code></pre><article class="docstring"><header><a class="docstring-binding" id="Flux.activations" href="#Flux.activations"><code>Flux.activations</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">activations(c::Chain, input)</code></pre><p>Calculate the forward results of each layer in Chain <code>c</code> with <code>input</code> as model input.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/layers/basic.jl#L67-L71">source</a></section></article></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../losses/">« Loss Functions</a><a class="docs-footer-nextpage" href="../advanced/">Advanced Model Building »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Thursday 30 April 2020 10:32">Thursday 30 April 2020</span>. Using Julia version 1.4.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
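<p>A short sketch of using <code>activations</code> for the kind of layer-wise penalty shown above (the chain and input here are illustrative):</p><pre><code class="language-julia">using Flux
using LinearAlgebra: norm

c = Chain(Dense(10, 5, σ), Dense(5, 2), softmax)
x = rand(Float32, 10)

acts = Flux.activations(c, x)   # the output of every layer, in order
penalty = sum(norm, acts)       # e.g. a sum of norms, as in the example above</code></pre>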

View File

@ -17,4 +17,4 @@ y_batch = reduce(hcat, ys)
function loss_total(x_batch::Matrix, y_batch::Matrix)
y_preds = model(x_batch)
sum(loss.(y_preds, y_batch))
end</code></pre><p>When doing this kind of concatenation use <code>reduce(hcat, xs)</code> rather than <code>hcat(xs...)</code>. This will avoid the splatting penalty, and will hit the optimised <code>reduce</code> method.</p></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../utilities/">« Utility Functions</a><a class="docs-footer-nextpage" href="../datasets/">Datasets »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Wednesday 29 April 2020 10:54">Wednesday 29 April 2020</span>. Using Julia version 1.4.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
end</code></pre><p>When doing this kind of concatenation use <code>reduce(hcat, xs)</code> rather than <code>hcat(xs...)</code>. This will avoid the splatting penalty, and will hit the optimised <code>reduce</code> method.</p></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../utilities/">« Utility Functions</a><a class="docs-footer-nextpage" href="../datasets/">Datasets »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Thursday 30 April 2020 10:33">Thursday 30 April 2020</span>. Using Julia version 1.4.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
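<p>For example, a small sketch of assembling a batch this way (the sizes are arbitrary):</p><pre><code class="language-julia">xs = [rand(Float32, 10) for i = 1:128]   # 128 feature vectors
x_batch = reduce(hcat, xs)               # 10×128 Matrix; avoids splatting 128 arguments</code></pre>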

View File

@ -47,4 +47,4 @@ evalcb = throttle(30) do
# Show loss
@save &quot;model-checkpoint.bson&quot; model
end</code></pre><p>This will update the <code>&quot;model-checkpoint.bson&quot;</code> file every thirty seconds.</p><p>You can get more advanced by saving a series of models throughout training, for example</p><pre><code class="language-julia">@save &quot;model-$(now()).bson&quot; model</code></pre><p>will produce a series of models like <code>&quot;model-2018-03-06T02:57:10.41.bson&quot;</code>. You could also store the current test set loss, so that it&#39;s easy to (for example) revert to an older copy of the model if it starts to overfit.</p><pre><code class="language-julia">@save &quot;model-$(now()).bson&quot; model loss = testloss()</code></pre><p>You can even store optimiser state alongside the model, to resume training exactly where you left off.</p><pre><code class="language-julia">opt = ADAM()
@save &quot;model-$(now()).bson&quot; model opt</code></pre></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../gpu/">« GPU Support</a><a class="docs-footer-nextpage" href="../ecosystem/">The Julia Ecosystem »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Wednesday 29 April 2020 10:54">Wednesday 29 April 2020</span>. Using Julia version 1.4.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
@save &quot;model-$(now()).bson&quot; model opt</code></pre></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../gpu/">« GPU Support</a><a class="docs-footer-nextpage" href="../ecosystem/">The Julia Ecosystem »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Thursday 30 April 2020 10:33">Thursday 30 April 2020</span>. Using Julia version 1.4.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
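<p>Loading a checkpoint back follows the same pattern; a sketch, assuming the file names used above:</p><pre><code class="language-julia">using Flux
using BSON: @load

@load &quot;model-checkpoint.bson&quot; model        # restores the model binding from the file
# for a checkpoint that also stored the optimiser state:
# @load &quot;model-2018-03-06T02:57:10.41.bson&quot; model opt</code></pre>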

View File

@ -6,4 +6,4 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
ga('create', 'UA-36890222-9', 'auto');
ga('send', 'pageview', {'page': location.pathname + location.search + location.hash});
</script><link href="https://fonts.googleapis.com/css?family=Lato|Roboto+Mono" rel="stylesheet" type="text/css"/><link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.11.2/css/fontawesome.min.css" rel="stylesheet" type="text/css"/><link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.11.2/css/solid.min.css" rel="stylesheet" type="text/css"/><link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.11.2/css/brands.min.css" rel="stylesheet" type="text/css"/><link href="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.11.1/katex.min.css" rel="stylesheet" type="text/css"/><script>documenterBaseURL=".."</script><script src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.6/require.min.js" data-main="../assets/documenter.js"></script><script src="../siteinfo.js"></script><script src="../../versions.js"></script><link href="../assets/flux.css" rel="stylesheet" type="text/css"/><link class="docs-theme-link" rel="stylesheet" type="text/css" href="../assets/themes/documenter-dark.css" data-theme-name="documenter-dark"/><link class="docs-theme-link" rel="stylesheet" type="text/css" href="../assets/themes/documenter-light.css" data-theme-name="documenter-light" data-theme-primary/><script src="../assets/themeswap.js"></script></head><body><div id="documenter"><nav class="docs-sidebar"><div class="docs-package-name"><span class="docs-autofit">Flux</span></div><form class="docs-search" action><input class="docs-search-query" id="documenter-search-query" name="q" type="text" placeholder="Search docs"/></form><ul class="docs-menu"><li><a class="tocitem" href="../">Home</a></li><li><span class="tocitem">Building Models</span><ul><li><a class="tocitem" href="../models/basics/">Basics</a></li><li><a class="tocitem" href="../models/recurrence/">Recurrence</a></li><li><a class="tocitem" href="../models/layers/">Model Reference</a></li><li><a class="tocitem" href="../models/losses/">Loss Functions</a></li><li><a class="tocitem" href="../models/regularisation/">Regularisation</a></li><li><a class="tocitem" href="../models/advanced/">Advanced Model Building</a></li><li><a class="tocitem" href="../models/nnlib/">NNlib</a></li></ul></li><li><span class="tocitem">Handling Data</span><ul><li><a class="tocitem" href="../data/onehot/">One-Hot Encoding</a></li><li><a class="tocitem" href="../data/dataloader/">DataLoader</a></li></ul></li><li><span class="tocitem">Training Models</span><ul><li><a class="tocitem" href="../training/optimisers/">Optimisers</a></li><li><a class="tocitem" href="../training/training/">Training</a></li></ul></li><li><a class="tocitem" href="../gpu/">GPU Support</a></li><li><a class="tocitem" href="../saving/">Saving &amp; Loading</a></li><li><a class="tocitem" href="../ecosystem/">The Julia Ecosystem</a></li><li><a class="tocitem" href="../utilities/">Utility Functions</a></li><li><a class="tocitem" href="../performance/">Performance Tips</a></li><li><a class="tocitem" href="../datasets/">Datasets</a></li><li><a class="tocitem" href="../community/">Community</a></li></ul><div class="docs-version-selector field has-addons"><div class="control"><span class="docs-label button is-static is-size-7">Version</span></div><div class="docs-selector control is-expanded"><div class="select is-fullwidth is-size-7"><select id="documenter-version-selector"></select></div></div></div></nav><div class="docs-main"><header class="docs-navbar"><nav class="breadcrumb"><ul class="is-hidden-mobile"><li class="is-active"><a href>Search</a></li></ul><ul class="is-hidden-tablet"><li 
class="is-active"><a href>Search</a></li></ul></nav><div class="docs-right"><a class="docs-settings-button fas fa-cog" id="documenter-settings-button" href="#" title="Settings"></a><a class="docs-sidebar-button fa fa-bars is-hidden-desktop" id="documenter-sidebar-button" href="#"></a></div></header><article><p id="documenter-search-info">Loading search...</p><ul id="documenter-search-results"></ul></article></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Wednesday 29 April 2020 10:54">Wednesday 29 April 2020</span>. Using Julia version 1.4.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body><script src="../search_index.js"></script><script src="../assets/search.js"></script></html>
</script><link href="https://fonts.googleapis.com/css?family=Lato|Roboto+Mono" rel="stylesheet" type="text/css"/><link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.11.2/css/fontawesome.min.css" rel="stylesheet" type="text/css"/><link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.11.2/css/solid.min.css" rel="stylesheet" type="text/css"/><link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.11.2/css/brands.min.css" rel="stylesheet" type="text/css"/><link href="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.11.1/katex.min.css" rel="stylesheet" type="text/css"/><script>documenterBaseURL=".."</script><script src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.6/require.min.js" data-main="../assets/documenter.js"></script><script src="../siteinfo.js"></script><script src="../../versions.js"></script><link href="../assets/flux.css" rel="stylesheet" type="text/css"/><link class="docs-theme-link" rel="stylesheet" type="text/css" href="../assets/themes/documenter-dark.css" data-theme-name="documenter-dark"/><link class="docs-theme-link" rel="stylesheet" type="text/css" href="../assets/themes/documenter-light.css" data-theme-name="documenter-light" data-theme-primary/><script src="../assets/themeswap.js"></script></head><body><div id="documenter"><nav class="docs-sidebar"><div class="docs-package-name"><span class="docs-autofit">Flux</span></div><form class="docs-search" action><input class="docs-search-query" id="documenter-search-query" name="q" type="text" placeholder="Search docs"/></form><ul class="docs-menu"><li><a class="tocitem" href="../">Home</a></li><li><span class="tocitem">Building Models</span><ul><li><a class="tocitem" href="../models/basics/">Basics</a></li><li><a class="tocitem" href="../models/recurrence/">Recurrence</a></li><li><a class="tocitem" href="../models/layers/">Model Reference</a></li><li><a class="tocitem" href="../models/losses/">Loss Functions</a></li><li><a class="tocitem" href="../models/regularisation/">Regularisation</a></li><li><a class="tocitem" href="../models/advanced/">Advanced Model Building</a></li><li><a class="tocitem" href="../models/nnlib/">NNlib</a></li></ul></li><li><span class="tocitem">Handling Data</span><ul><li><a class="tocitem" href="../data/onehot/">One-Hot Encoding</a></li><li><a class="tocitem" href="../data/dataloader/">DataLoader</a></li></ul></li><li><span class="tocitem">Training Models</span><ul><li><a class="tocitem" href="../training/optimisers/">Optimisers</a></li><li><a class="tocitem" href="../training/training/">Training</a></li></ul></li><li><a class="tocitem" href="../gpu/">GPU Support</a></li><li><a class="tocitem" href="../saving/">Saving &amp; Loading</a></li><li><a class="tocitem" href="../ecosystem/">The Julia Ecosystem</a></li><li><a class="tocitem" href="../utilities/">Utility Functions</a></li><li><a class="tocitem" href="../performance/">Performance Tips</a></li><li><a class="tocitem" href="../datasets/">Datasets</a></li><li><a class="tocitem" href="../community/">Community</a></li></ul><div class="docs-version-selector field has-addons"><div class="control"><span class="docs-label button is-static is-size-7">Version</span></div><div class="docs-selector control is-expanded"><div class="select is-fullwidth is-size-7"><select id="documenter-version-selector"></select></div></div></div></nav><div class="docs-main"><header class="docs-navbar"><nav class="breadcrumb"><ul class="is-hidden-mobile"><li class="is-active"><a href>Search</a></li></ul><ul class="is-hidden-tablet"><li 
class="is-active"><a href>Search</a></li></ul></nav><div class="docs-right"><a class="docs-settings-button fas fa-cog" id="documenter-settings-button" href="#" title="Settings"></a><a class="docs-sidebar-button fa fa-bars is-hidden-desktop" id="documenter-sidebar-button" href="#"></a></div></header><article><p id="documenter-search-info">Loading search...</p><ul id="documenter-search-results"></ul></article></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Thursday 30 April 2020 10:33">Thursday 30 April 2020</span>. Using Julia version 1.4.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body><script src="../search_index.js"></script><script src="../assets/search.js"></script></html>

File diff suppressed because one or more lines are too long

View File

@ -27,8 +27,8 @@ end</code></pre><p>Running this will alter the parameters <code>W</code> and <co
for p in (W, b)
update!(opt, p, grads[p])
end</code></pre><p>An optimiser <code>update!</code> accepts a parameter and a gradient, and updates the parameter according to the chosen rule. We can also pass <code>opt</code> to our <a href="../training/">training loop</a>, which will update all parameters of the model in a loop. However, we can now easily replace <code>Descent</code> with a more advanced optimiser such as <code>ADAM</code>.</p><h2 id="Optimiser-Reference-1"><a class="docs-heading-anchor" href="#Optimiser-Reference-1">Optimiser Reference</a><a class="docs-heading-anchor-permalink" href="#Optimiser-Reference-1" title="Permalink"></a></h2><p>All optimisers return an object that, when passed to <code>train!</code>, will update the parameters passed to it.</p><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.update!" href="#Flux.Optimise.update!"><code>Flux.Optimise.update!</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">update!(x, x̄)</code></pre><p>Update the array <code>x</code> according to <code>x .-= x̄</code>.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/optimise/train.jl#L6-L10">source</a></section><section><div><pre><code class="language-none">update!(opt, p, g)
update!(opt, ps::Params, gs)</code></pre><p>Perform an update step of the parameters <code>ps</code> (or the single parameter <code>p</code>) according to optimizer <code>opt</code> and the gradients <code>gs</code> (the gradient <code>g</code>).</p><p>As a result, the parameters are mutated and the optimizer&#39;s internal state may change.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/optimise/train.jl#L15-L23">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.Descent" href="#Flux.Optimise.Descent"><code>Flux.Optimise.Descent</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Descent(η = 0.1)</code></pre><p>Classic gradient descent optimiser with learning rate <code>η</code>. For each parameter <code>p</code> and its gradient <code>δp</code>, this runs <code>p -= η*δp</code></p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = Descent()
end</code></pre><p>An optimiser <code>update!</code> accepts a parameter and a gradient, and updates the parameter according to the chosen rule. We can also pass <code>opt</code> to our <a href="../training/">training loop</a>, which will update all parameters of the model in a loop. However, we can now easily replace <code>Descent</code> with a more advanced optimiser such as <code>ADAM</code>.</p><h2 id="Optimiser-Reference-1"><a class="docs-heading-anchor" href="#Optimiser-Reference-1">Optimiser Reference</a><a class="docs-heading-anchor-permalink" href="#Optimiser-Reference-1" title="Permalink"></a></h2><p>All optimisers return an object that, when passed to <code>train!</code>, will update the parameters passed to it.</p><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.update!" href="#Flux.Optimise.update!"><code>Flux.Optimise.update!</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">update!(x, x̄)</code></pre><p>Update the array <code>x</code> according to <code>x .-= x̄</code>.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/optimise/train.jl#L6-L10">source</a></section><section><div><pre><code class="language-none">update!(opt, p, g)
update!(opt, ps::Params, gs)</code></pre><p>Perform an update step of the parameters <code>ps</code> (or the single parameter <code>p</code>) according to optimizer <code>opt</code> and the gradients <code>gs</code> (the gradient <code>g</code>).</p><p>As a result, the parameters are mutated and the optimizer&#39;s internal state may change.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/optimise/train.jl#L15-L23">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.Descent" href="#Flux.Optimise.Descent"><code>Flux.Optimise.Descent</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Descent(η = 0.1)</code></pre><p>Classic gradient descent optimiser with learning rate <code>η</code>. For each parameter <code>p</code> and its gradient <code>δp</code>, this runs <code>p -= η*δp</code></p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = Descent()
opt = Descent(0.3)
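# (sketch) with η = 0.3, each update applies p .-= 0.3 .* δp for every parameter p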
@ -38,29 +38,29 @@ gs = gradient(ps) do
loss(x, y)
end
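# gs now maps each parameter in ps to its gradient; update! applies the optimiser's rule in place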
Flux.Optimise.update!(opt, ps, gs)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/optimise/optimisers.jl#L8-L32">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.Momentum" href="#Flux.Optimise.Momentum"><code>Flux.Optimise.Momentum</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Momentum(η = 0.01, ρ = 0.9)</code></pre><p>Gradient descent optimizer with learning rate <code>η</code> and momentum <code>ρ</code>.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Momentum (<code>ρ</code>): Controls the acceleration of gradient descent in the prominent direction, in effect dampening oscillations.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = Momentum()
Flux.Optimise.update!(opt, ps, gs)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/optimise/optimisers.jl#L8-L32">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.Momentum" href="#Flux.Optimise.Momentum"><code>Flux.Optimise.Momentum</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Momentum(η = 0.01, ρ = 0.9)</code></pre><p>Gradient descent optimizer with learning rate <code>η</code> and momentum <code>ρ</code>.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Momentum (<code>ρ</code>): Controls the acceleration of gradient descent in the prominent direction, in effect dampening oscillations.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = Momentum()
opt = Momentum(0.01, 0.99)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/optimise/optimisers.jl#L43-L60">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.Nesterov" href="#Flux.Optimise.Nesterov"><code>Flux.Optimise.Nesterov</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Nesterov(η = 0.001, ρ = 0.9)</code></pre><p>Gradient descent optimizer with learning rate <code>η</code> and Nesterov momentum <code>ρ</code>.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Nesterov momentum (<code>ρ</code>): Controls the acceleration of gradient descent in the prominent direction, in effect dampening oscillations.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = Nesterov()
opt = Momentum(0.01, 0.99)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/optimise/optimisers.jl#L43-L60">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.Nesterov" href="#Flux.Optimise.Nesterov"><code>Flux.Optimise.Nesterov</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Nesterov(η = 0.001, ρ = 0.9)</code></pre><p>Gradient descent optimizer with learning rate <code>η</code> and Nesterov momentum <code>ρ</code>.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Nesterov momentum (<code>ρ</code>): Controls the acceleration of gradient descent in the prominent direction, in effect dampening oscillations.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = Nesterov()
opt = Nesterov(0.003, 0.95)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/optimise/optimisers.jl#L76-L93">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.RMSProp" href="#Flux.Optimise.RMSProp"><code>Flux.Optimise.RMSProp</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">RMSProp(η = 0.001, ρ = 0.9)</code></pre><p>Optimizer using the <a href="https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf">RMSProp</a> algorithm. Often a good choice for recurrent networks. Parameters other than learning rate generally don&#39;t need tuning.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Momentum (<code>ρ</code>): Controls the acceleration of gradient descent in the prominent direction, in effect dampening oscillations.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = RMSProp()
opt = Nesterov(0.003, 0.95)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/optimise/optimisers.jl#L76-L93">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.RMSProp" href="#Flux.Optimise.RMSProp"><code>Flux.Optimise.RMSProp</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">RMSProp(η = 0.001, ρ = 0.9)</code></pre><p>Optimizer using the <a href="https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf">RMSProp</a> algorithm. Often a good choice for recurrent networks. Parameters other than learning rate generally don&#39;t need tuning.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Momentum (<code>ρ</code>): Controls the acceleration of gradient descent in the prominent direction, in effect dampening oscillations.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = RMSProp()
opt = RMSProp(0.002, 0.95)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/optimise/optimisers.jl#L110-L130">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ADAM" href="#Flux.Optimise.ADAM"><code>Flux.Optimise.ADAM</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">ADAM(η = 0.001, β::Tuple = (0.9, 0.999))</code></pre><p><a href="https://arxiv.org/abs/1412.6980v8">ADAM</a> optimiser.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Decay of momentums (<code>β::Tuple</code>): Exponential decay for the first (β1) and the second (β2) momentum estimate.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = ADAM()
opt = RMSProp(0.002, 0.95)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/optimise/optimisers.jl#L110-L130">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ADAM" href="#Flux.Optimise.ADAM"><code>Flux.Optimise.ADAM</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">ADAM(η = 0.001, β::Tuple = (0.9, 0.999))</code></pre><p><a href="https://arxiv.org/abs/1412.6980v8">ADAM</a> optimiser.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Decay of momentums (<code>β::Tuple</code>): Exponential decay for the first (β1) and the second (β2) momentum estimate.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = ADAM()
opt = ADAM(0.001, (0.9, 0.8))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/optimise/optimisers.jl#L146-L163">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.RADAM" href="#Flux.Optimise.RADAM"><code>Flux.Optimise.RADAM</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">RADAM(η = 0.001, β::Tuple = (0.9, 0.999))</code></pre><p><a href="https://arxiv.org/pdf/1908.03265v1.pdf">Rectified ADAM</a> optimizer.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Decay of momentums (<code>β::Tuple</code>): Exponential decay for the first (β1) and the second (β2) momentum estimate.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = RADAM()
opt = ADAM(0.001, (0.9, 0.8))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/optimise/optimisers.jl#L146-L163">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.RADAM" href="#Flux.Optimise.RADAM"><code>Flux.Optimise.RADAM</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">RADAM(η = 0.001, β::Tuple = (0.9, 0.999))</code></pre><p><a href="https://arxiv.org/pdf/1908.03265v1.pdf">Rectified ADAM</a> optimizer.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Decay of momentums (<code>β::Tuple</code>): Exponential decay for the first (β1) and the second (β2) momentum estimate.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = RADAM()
opt = RADAM(0.001, (0.9, 0.8))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/optimise/optimisers.jl#L182-L199">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.AdaMax" href="#Flux.Optimise.AdaMax"><code>Flux.Optimise.AdaMax</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">AdaMax(η = 0.001, β::Tuple = (0.9, 0.999))</code></pre><p><a href="https://arxiv.org/abs/1412.6980v9">AdaMax</a> is a variant of ADAM based on the ∞-norm.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Decay of momentums (<code>β::Tuple</code>): Exponential decay for the first (β1) and the second (β2) momentum estimate.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = AdaMax()
opt = RADAM(0.001, (0.9, 0.8))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/optimise/optimisers.jl#L182-L199">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.AdaMax" href="#Flux.Optimise.AdaMax"><code>Flux.Optimise.AdaMax</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">AdaMax(η = 0.001, β::Tuple = (0.9, 0.999))</code></pre><p><a href="https://arxiv.org/abs/1412.6980v9">AdaMax</a> is a variant of ADAM based on the ∞-norm.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Decay of momentums (<code>β::Tuple</code>): Exponential decay for the first (β1) and the second (β2) momentum estimate.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = AdaMax()
opt = AdaMax(0.001, (0.9, 0.995))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/optimise/optimisers.jl#L225-L242">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ADAGrad" href="#Flux.Optimise.ADAGrad"><code>Flux.Optimise.ADAGrad</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">ADAGrad(η = 0.1)</code></pre><p><a href="http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf">ADAGrad</a> optimizer. It has parameter-specific learning rates based on how frequently each parameter is updated. Parameters don&#39;t need tuning.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = ADAGrad()
opt = AdaMax(0.001, (0.9, 0.995))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/optimise/optimisers.jl#L225-L242">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ADAGrad" href="#Flux.Optimise.ADAGrad"><code>Flux.Optimise.ADAGrad</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">ADAGrad(η = 0.1)</code></pre><p><a href="http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf">ADAGrad</a> optimizer. It has parameter-specific learning rates based on how frequently each parameter is updated. Parameters don&#39;t need tuning.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = ADAGrad()
opt = ADAGrad(0.001)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/optimise/optimisers.jl#L261-L278">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ADADelta" href="#Flux.Optimise.ADADelta"><code>Flux.Optimise.ADADelta</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">ADADelta(ρ = 0.9)</code></pre><p><a href="https://arxiv.org/abs/1212.5701">ADADelta</a> is a version of ADAGrad adapting its learning rate based on a window of past gradient updates. Parameters don&#39;t need tuning.</p><p><strong>Parameters</strong></p><ul><li>Rho (<code>ρ</code>): Factor by which the gradient is decayed at each time step.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = ADADelta()
opt = ADAGrad(0.001)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/optimise/optimisers.jl#L261-L278">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ADADelta" href="#Flux.Optimise.ADADelta"><code>Flux.Optimise.ADADelta</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">ADADelta(ρ = 0.9)</code></pre><p><a href="https://arxiv.org/abs/1212.5701">ADADelta</a> is a version of ADAGrad adapting its learning rate based on a window of past gradient updates. Parameters don&#39;t need tuning.</p><p><strong>Parameters</strong></p><ul><li>Rho (<code>ρ</code>): Factor by which the gradient is decayed at each time step.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = ADADelta()
opt = ADADelta(0.89)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/optimise/optimisers.jl#L293-L309">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.AMSGrad" href="#Flux.Optimise.AMSGrad"><code>Flux.Optimise.AMSGrad</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">AMSGrad(η = 0.001, β::Tuple = (0.9, 0.999))</code></pre><p>The <a href="https://openreview.net/forum?id=ryQu7f-RZ">AMSGrad</a> version of the ADAM optimiser. Parameters don&#39;t need tuning.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Decay of momentums (<code>β::Tuple</code>): Exponential decay for the first (β1) and the second (β2) momentum estimate.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = AMSGrad()
opt = ADADelta(0.89)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/optimise/optimisers.jl#L293-L309">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.AMSGrad" href="#Flux.Optimise.AMSGrad"><code>Flux.Optimise.AMSGrad</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">AMSGrad(η = 0.001, β::Tuple = (0.9, 0.999))</code></pre><p>The <a href="https://openreview.net/forum?id=ryQu7f-RZ">AMSGrad</a> version of the ADAM optimiser. Parameters don&#39;t need tuning.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Decay of momentums (<code>β::Tuple</code>): Exponential decay for the first (β1) and the second (β2) momentum estimate.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = AMSGrad()
opt = AMSGrad(0.001, (0.89, 0.995))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/optimise/optimisers.jl#L326-L344">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.NADAM" href="#Flux.Optimise.NADAM"><code>Flux.Optimise.NADAM</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">NADAM(η = 0.001, β::Tuple = (0.9, 0.999))</code></pre><p><a href="http://cs229.stanford.edu/proj2015/054_report.pdf">NADAM</a> is a Nesterov variant of ADAM. Parameters don&#39;t need tuning.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Decay of momentums (<code>β::Tuple</code>): Exponential decay for the first (β1) and the second (β2) momentum estimate.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = NADAM()
opt = AMSGrad(0.001, (0.89, 0.995))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/optimise/optimisers.jl#L326-L344">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.NADAM" href="#Flux.Optimise.NADAM"><code>Flux.Optimise.NADAM</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">NADAM(η = 0.001, β::Tuple = (0.9, 0.999))</code></pre><p><a href="http://cs229.stanford.edu/proj2015/054_report.pdf">NADAM</a> is a Nesterov variant of ADAM. Parameters don&#39;t need tuning.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Decay of momentums (<code>β::Tuple</code>): Exponential decay for the first (β1) and the second (β2) momentum estimate.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = NADAM()
opt = NADAM(0.002, (0.89, 0.995))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/optimise/optimisers.jl#L362-L380">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ADAMW" href="#Flux.Optimise.ADAMW"><code>Flux.Optimise.ADAMW</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">ADAMW(η = 0.001, β::Tuple = (0.9, 0.999), decay = 0)</code></pre><p><a href="https://arxiv.org/abs/1711.05101">ADAMW</a> is a variant of ADAM fixing (as in repairing) its weight decay regularization.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Decay of momentums (<code>β::Tuple</code>): Exponential decay for the first (β1) and the second (β2) momentum estimate.</li><li><code>decay</code>: Decay applied to weights during optimisation.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = ADAMW()
opt = NADAM(0.002, (0.89, 0.995))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/optimise/optimisers.jl#L362-L380">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ADAMW" href="#Flux.Optimise.ADAMW"><code>Flux.Optimise.ADAMW</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">ADAMW(η = 0.001, β::Tuple = (0.9, 0.999), decay = 0)</code></pre><p><a href="https://arxiv.org/abs/1711.05101">ADAMW</a> is a variant of ADAM fixing (as in repairing) its weight decay regularization.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Decay of momentums (<code>β::Tuple</code>): Exponential decay for the first (β1) and the second (β2) momentum estimate.</li><li><code>decay</code>: Decay applied to weights during optimisation.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = ADAMW()
opt = ADAMW(0.001, (0.89, 0.995), 0.1)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/optimise/optimisers.jl#L399-L418">source</a></section></article><h2 id="Optimiser-Interface-1"><a class="docs-heading-anchor" href="#Optimiser-Interface-1">Optimiser Interface</a><a class="docs-heading-anchor-permalink" href="#Optimiser-Interface-1" title="Permalink"></a></h2><p>Flux&#39;s optimisers are built around a <code>struct</code> that holds all the optimiser parameters along with a definition of how to apply the update rule associated with it. We do this via the <code>apply!</code> function, which takes the optimiser as the first argument, followed by the parameter and its corresponding gradient.</p><p>In this manner Flux also allows one to create custom optimisers that can be used seamlessly. Let&#39;s work through this with a simple example.</p><pre><code class="language-julia">mutable struct Momentum
opt = ADAMW(0.001, (0.89, 0.995), 0.1)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/optimise/optimisers.jl#L399-L418">source</a></section></article><h2 id="Optimiser-Interface-1"><a class="docs-heading-anchor" href="#Optimiser-Interface-1">Optimiser Interface</a><a class="docs-heading-anchor-permalink" href="#Optimiser-Interface-1" title="Permalink"></a></h2><p>Flux&#39;s optimisers are built around a <code>struct</code> that holds all the optimiser parameters along with a definition of how to apply the update rule associated with it. We do this via the <code>apply!</code> function, which takes the optimiser as the first argument, followed by the parameter and its corresponding gradient.</p><p>In this manner Flux also allows one to create custom optimisers that can be used seamlessly. Let&#39;s work through this with a simple example.</p><pre><code class="language-julia">mutable struct Momentum
eta
rho
velocity
@@ -88,4 +88,4 @@ end
loss(rand(10)) # around 0.9</code></pre><p>In this manner it is possible to compose optimisers for some added flexibility.</p><h2 id="Decays-1"><a class="docs-heading-anchor" href="#Decays-1">Decays</a><a class="docs-heading-anchor-permalink" href="#Decays-1" title="Permalink"></a></h2><p>Similar to optimisers, Flux also defines some simple decays that can be used in conjunction with other optimisers, or standalone.</p><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ExpDecay" href="#Flux.Optimise.ExpDecay"><code>Flux.Optimise.ExpDecay</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">ExpDecay(η = 0.001, decay = 0.1, decay_step = 1000, clip = 1e-4)</code></pre><p>Discount the learning rate <code>η</code> by the factor <code>decay</code> every <code>decay_step</code> steps till a minimum of <code>clip</code>.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li><code>decay</code>: Factor by which the learning rate is discounted.</li><li><code>decay_step</code>: Schedule decay operations by setting the number of steps between two decay operations.</li><li><code>clip</code>: Minimum value of learning rate.</li></ul><p><strong>Examples</strong></p><p>To apply exponential decay to an optimiser:</p><pre><code class="language-julia">Optimiser(ExpDecay(..), Opt(..))
opt = Optimiser(ExpDecay(), ADAM())</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/optimise/optimisers.jl#L476-L497">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.InvDecay" href="#Flux.Optimise.InvDecay"><code>Flux.Optimise.InvDecay</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">InvDecay(γ = 0.001)</code></pre><p>Apply inverse time decay to an optimiser, so that the effective step size at iteration <code>n</code> is <code>eta / (1 + γ * n)</code> where <code>eta</code> is the initial step size. The wrapped optimiser&#39;s step size is not modified.</p><p><strong>Examples</strong></p><pre><code class="language-julia">Optimiser(InvDecay(..), Opt(..))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/optimise/optimisers.jl#L449-L460">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.WeightDecay" href="#Flux.Optimise.WeightDecay"><code>Flux.Optimise.WeightDecay</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">WeightDecay(wd = 0)</code></pre><p>Decay weights by <code>wd</code>.</p><p><strong>Parameters</strong></p><ul><li>Weight decay (<code>wd</code>)</li></ul></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/optimise/optimisers.jl#L518-L525">source</a></section></article></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../../data/dataloader/">« DataLoader</a><a class="docs-footer-nextpage" href="../training/">Training »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Wednesday 29 April 2020 10:54">Wednesday 29 April 2020</span>. Using Julia version 1.4.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
opt = Optimiser(ExpDecay(), ADAM())</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/optimise/optimisers.jl#L476-L497">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.InvDecay" href="#Flux.Optimise.InvDecay"><code>Flux.Optimise.InvDecay</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">InvDecay(γ = 0.001)</code></pre><p>Apply inverse time decay to an optimiser, so that the effective step size at iteration <code>n</code> is <code>eta / (1 + γ * n)</code> where <code>eta</code> is the initial step size. The wrapped optimiser&#39;s step size is not modified.</p><p><strong>Examples</strong></p><pre><code class="language-julia">Optimiser(InvDecay(..), Opt(..))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/optimise/optimisers.jl#L449-L460">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.WeightDecay" href="#Flux.Optimise.WeightDecay"><code>Flux.Optimise.WeightDecay</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">WeightDecay(wd = 0)</code></pre><p>Decay weights by <code>wd</code>.</p><p><strong>Parameters</strong></p><ul><li>Weight decay (<code>wd</code>)</li></ul></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/optimise/optimisers.jl#L518-L525">source</a></section></article></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../../data/dataloader/">« DataLoader</a><a class="docs-footer-nextpage" href="../training/">Training »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Thursday 30 April 2020 10:32">Thursday 30 April 2020</span>. Using Julia version 1.4.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
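The hunk above elides the body of the custom optimiser example, so here is a rough, self-contained sketch of what an update rule written against that <code>apply!</code> interface can look like. The name <code>CustomMomentum</code>, the <code>IdDict</code> velocity store and the exact update formula are illustrative assumptions, not content from the commit:
<pre><code class="language-julia">using Flux

# Illustrative update rule; the field names mirror the Momentum example above.
mutable struct CustomMomentum
  eta
  rho
  velocity::IdDict
end

CustomMomentum(eta = 0.01, rho = 0.9) = CustomMomentum(eta, rho, IdDict())

# apply! receives the optimiser, a parameter array and its gradient, and
# returns the step that the training loop subtracts from the parameter.
function Flux.Optimise.apply!(o::CustomMomentum, x, grad)
  v = get!(o.velocity, x, zero(x))
  @. v = o.rho * v + o.eta * grad
  @. grad = v
  return grad
end

opt = CustomMomentum()</code></pre>
Because <code>apply!</code> returns the step that gets subtracted from the parameter, an optimiser defined this way can be passed to <code>Flux.train!</code> directly or composed with the decays documented above via <code>Optimiser</code>.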

File diff suppressed because one or more lines are too long

View File

@@ -24,7 +24,7 @@ julia&gt; Flux.unsqueeze([1 2; 3 4], 2)
[:, :, 2] =
2
4</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/utils.jl#L50-L78">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.stack" href="#Flux.stack"><code>Flux.stack</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">stack(xs, dim)</code></pre><p>Concatenate the given <code>Array</code> of <code>Array</code>s <code>xs</code> into a single <code>Array</code> along the given dimension <code>dim</code>.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia&gt; xs = [[1, 2], [3, 4], [5, 6]]
4</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/utils.jl#L49-L77">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.stack" href="#Flux.stack"><code>Flux.stack</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">stack(xs, dim)</code></pre><p>Concatenate the given <code>Array</code> of <code>Array</code>s <code>xs</code> into a single <code>Array</code> along the given dimension <code>dim</code>.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia&gt; xs = [[1, 2], [3, 4], [5, 6]]
3-element Array{Array{Int64,1},1}:
[1, 2]
[3, 4]
@@ -40,12 +40,12 @@ julia&gt; cat(xs, dims=1)
3-element Array{Array{Int64,1},1}:
[1, 2]
[3, 4]
[5, 6]</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/utils.jl#L81-L107">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.unstack" href="#Flux.unstack"><code>Flux.unstack</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">unstack(xs, dim)</code></pre><p>Unroll the given <code>xs</code> into an <code>Array</code> of <code>Array</code>s along the given dimension <code>dim</code>.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia&gt; Flux.unstack([1 3 5 7; 2 4 6 8], 2)
[5, 6]</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/utils.jl#L80-L106">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.unstack" href="#Flux.unstack"><code>Flux.unstack</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">unstack(xs, dim)</code></pre><p>Unroll the given <code>xs</code> into an <code>Array</code> of <code>Array</code>s along the given dimension <code>dim</code>.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia&gt; Flux.unstack([1 3 5 7; 2 4 6 8], 2)
4-element Array{Array{Int64,1},1}:
[1, 2]
[3, 4]
[5, 6]
[7, 8]</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/utils.jl#L110-L124">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.chunk" href="#Flux.chunk"><code>Flux.chunk</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">chunk(xs, n)</code></pre><p>Split <code>xs</code> into <code>n</code> parts.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia&gt; Flux.chunk(1:10, 3)
[7, 8]</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/utils.jl#L109-L123">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.chunk" href="#Flux.chunk"><code>Flux.chunk</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">chunk(xs, n)</code></pre><p>Split <code>xs</code> into <code>n</code> parts.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia&gt; Flux.chunk(1:10, 3)
3-element Array{UnitRange{Int64},1}:
1:4
5:8
@@ -55,18 +55,18 @@ julia&gt; Flux.chunk(collect(1:10), 3)
3-element Array{SubArray{Int64,1,Array{Int64,1},Tuple{UnitRange{Int64}},true},1}:
[1, 2, 3, 4]
[5, 6, 7, 8]
[9, 10]</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/utils.jl#L127-L146">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.frequencies" href="#Flux.frequencies"><code>Flux.frequencies</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">frequencies(xs)</code></pre><p>Count the number of times that each element of <code>xs</code> appears.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia&gt; Flux.frequencies([&#39;a&#39;,&#39;b&#39;,&#39;b&#39;])
[9, 10]</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/utils.jl#L126-L145">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.frequencies" href="#Flux.frequencies"><code>Flux.frequencies</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">frequencies(xs)</code></pre><p>Count the number of times that each element of <code>xs</code> appears.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia&gt; Flux.frequencies([&#39;a&#39;,&#39;b&#39;,&#39;b&#39;])
Dict{Char,Int64} with 2 entries:
&#39;a&#39; =&gt; 1
&#39;b&#39; =&gt; 2</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/utils.jl#L151-L163">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.batch" href="#Flux.batch"><code>Flux.batch</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">batch(xs)</code></pre><p>Batch the arrays in <code>xs</code> into a single array.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia&gt; Flux.batch([[1,2,3],[4,5,6]])
&#39;b&#39; =&gt; 2</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/utils.jl#L150-L162">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.batch" href="#Flux.batch"><code>Flux.batch</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">batch(xs)</code></pre><p>Batch the arrays in <code>xs</code> into a single array.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia&gt; Flux.batch([[1,2,3],[4,5,6]])
3×2 Array{Int64,2}:
1 4
2 5
3 6</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/utils.jl#L176-L189">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.batchseq" href="#Flux.batchseq"><code>Flux.batchseq</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">batchseq(seqs, pad)</code></pre><p>Take a list of <code>N</code> sequences, and turn them into a single sequence where each item is a batch of <code>N</code>. Short sequences will be padded by <code>pad</code>.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia&gt; Flux.batchseq([[1, 2, 3], [4, 5]], 0)
3 6</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/utils.jl#L175-L188">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.batchseq" href="#Flux.batchseq"><code>Flux.batchseq</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">batchseq(seqs, pad)</code></pre><p>Take a list of <code>N</code> sequences, and turn them into a single sequence where each item is a batch of <code>N</code>. Short sequences will be padded by <code>pad</code>.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia&gt; Flux.batchseq([[1, 2, 3], [4, 5]], 0)
3-element Array{Array{Int64,1},1}:
[1, 4]
[2, 5]
[3, 0]</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/utils.jl#L221-L235">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Base.rpad-Tuple{AbstractArray{T,1} where T,Integer,Any}" href="#Base.rpad-Tuple{AbstractArray{T,1} where T,Integer,Any}"><code>Base.rpad</code></a><span class="docstring-category">Method</span></header><section><div><p>Return the given sequence padded with <code>p</code> up to a maximum length of <code>n</code>.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia&gt; rpad([1, 2], 4, 0)
[3, 0]</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/utils.jl#L220-L234">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Base.rpad-Tuple{AbstractArray{T,1} where T,Integer,Any}" href="#Base.rpad-Tuple{AbstractArray{T,1} where T,Integer,Any}"><code>Base.rpad</code></a><span class="docstring-category">Method</span></header><section><div><p>Return the given sequence padded with <code>p</code> up to a maximum length of <code>n</code>.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia&gt; rpad([1, 2], 4, 0)
4-element Array{Int64,1}:
1
2
@@ -77,15 +77,15 @@ julia&gt; rpad([1, 2, 3], 2, 0)
3-element Array{Int64,1}:
1
2
3</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/utils.jl#L200-L218">source</a></section></article><h2 id="Layer-Initialization-1"><a class="docs-heading-anchor" href="#Layer-Initialization-1">Layer Initialization</a><a class="docs-heading-anchor-permalink" href="#Layer-Initialization-1" title="Permalink"></a></h2><p>These are primarily useful if you are planning to write your own layers. Flux initializes convolutional layers and recurrent cells with <code>glorot_uniform</code> by default. To change the default on an applicable layer, pass the desired function with the <code>init</code> keyword. For example:</p><pre><code class="language-julia-repl">julia&gt; conv = Conv((3, 3), 1 =&gt; 8, relu; init=Flux.glorot_normal)
3</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/utils.jl#L199-L217">source</a></section></article><h2 id="Layer-Initialization-1"><a class="docs-heading-anchor" href="#Layer-Initialization-1">Layer Initialization</a><a class="docs-heading-anchor-permalink" href="#Layer-Initialization-1" title="Permalink"></a></h2><p>These are primarily useful if you are planning to write your own layers. Flux initializes convolutional layers and recurrent cells with <code>glorot_uniform</code> by default. To change the default on an applicable layer, pass the desired function with the <code>init</code> keyword. For example:</p><pre><code class="language-julia-repl">julia&gt; conv = Conv((3, 3), 1 =&gt; 8, relu; init=Flux.glorot_normal)
Conv((3, 3), 1=&gt;8, relu)</code></pre><article class="docstring"><header><a class="docstring-binding" id="Flux.glorot_uniform" href="#Flux.glorot_uniform"><code>Flux.glorot_uniform</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">glorot_uniform(dims...)</code></pre><p>Return an <code>Array</code> of size <code>dims</code> containing random variables taken from a uniform distribution in the interval <span>$[-x, x]$</span>, where <code>x = sqrt(24 / sum(dims)) / 2</code>.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia&gt; Flux.glorot_uniform(2, 3)
2×3 Array{Float32,2}:
0.601094 -0.57414 -0.814925
0.900868 0.805994 0.057514</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/utils.jl#L11-L24">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.glorot_normal" href="#Flux.glorot_normal"><code>Flux.glorot_normal</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">glorot_normal(dims...)</code></pre><p>Return an <code>Array</code> of size <code>dims</code> containing random variables taken from a normal distribution with mean 0 and standard deviation <code>sqrt(2 / sum(dims))</code>.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia&gt; Flux.glorot_normal(3, 2)
0.900868 0.805994 0.057514</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/utils.jl#L10-L23">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.glorot_normal" href="#Flux.glorot_normal"><code>Flux.glorot_normal</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">glorot_normal(dims...)</code></pre><p>Return an <code>Array</code> of size <code>dims</code> containing random variables taken from a normal distribution with mean 0 and standard deviation <code>sqrt(2 / sum(dims))</code>.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia&gt; Flux.glorot_normal(3, 2)
3×2 Array{Float32,2}:
0.429505 -0.0852891
0.523935 0.371009
-0.223261 0.188052</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/utils.jl#L27-L41">source</a></section></article><h2 id="Model-Abstraction-1"><a class="docs-heading-anchor" href="#Model-Abstraction-1">Model Abstraction</a><a class="docs-heading-anchor-permalink" href="#Model-Abstraction-1" title="Permalink"></a></h2><article class="docstring"><header><a class="docstring-binding" id="Flux.destructure" href="#Flux.destructure"><code>Flux.destructure</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">destructure(m)</code></pre><p>Flatten a model&#39;s parameters into a single weight vector.</p><pre><code class="language-none">julia&gt; m = Chain(Dense(10, 5, σ), Dense(5, 2), softmax)
-0.223261 0.188052</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/utils.jl#L26-L40">source</a></section></article><h2 id="Model-Abstraction-1"><a class="docs-heading-anchor" href="#Model-Abstraction-1">Model Abstraction</a><a class="docs-heading-anchor-permalink" href="#Model-Abstraction-1" title="Permalink"></a></h2><article class="docstring"><header><a class="docstring-binding" id="Flux.destructure" href="#Flux.destructure"><code>Flux.destructure</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">destructure(m)</code></pre><p>Flatten a model&#39;s parameters into a single weight vector.</p><pre><code class="language-none">julia&gt; m = Chain(Dense(10, 5, σ), Dense(5, 2), softmax)
Chain(Dense(10, 5, σ), Dense(5, 2), softmax)
julia&gt; θ, re = destructure(m);
@@ -94,6 +94,6 @@ julia&gt; θ
67-element Array{Float32,1}:
-0.1407104
...</code></pre><p>The second return value <code>re</code> allows you to reconstruct the original network after making modifications to the weight vector (for example, with a hypernetwork).</p><pre><code class="language-none">julia&gt; re(θ .* 2)
Chain(Dense(10, 5, σ), Dense(5, 2), softmax)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/utils.jl#L253-L273">source</a></section></article><h2 id="Callback-Helpers-1"><a class="docs-heading-anchor" href="#Callback-Helpers-1">Callback Helpers</a><a class="docs-heading-anchor-permalink" href="#Callback-Helpers-1" title="Permalink"></a></h2><article class="docstring"><header><a class="docstring-binding" id="Flux.throttle" href="#Flux.throttle"><code>Flux.throttle</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">throttle(f, timeout; leading=true, trailing=false)</code></pre><p>Return a function that, when invoked, will be triggered at most once during <code>timeout</code> seconds.</p><p>Normally, the throttled function will run as much as it can, without ever going more than once per <code>timeout</code> duration; but if you&#39;d like to disable the execution on the leading edge, pass <code>leading=false</code>. To enable execution on the trailing edge, pass <code>trailing=true</code>.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/utils.jl#L285-L295">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.stop" href="#Flux.Optimise.stop"><code>Flux.Optimise.stop</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">stop()</code></pre><p>Call <code>Flux.stop()</code> in a callback to indicate that a stopping condition has been met. This will trigger the train loop to stop and exit.</p><p><strong>Examples</strong></p><pre><code class="language-julia">cb = function ()
Chain(Dense(10, 5, σ), Dense(5, 2), softmax)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/utils.jl#L252-L272">source</a></section></article><h2 id="Callback-Helpers-1"><a class="docs-heading-anchor" href="#Callback-Helpers-1">Callback Helpers</a><a class="docs-heading-anchor-permalink" href="#Callback-Helpers-1" title="Permalink"></a></h2><article class="docstring"><header><a class="docstring-binding" id="Flux.throttle" href="#Flux.throttle"><code>Flux.throttle</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">throttle(f, timeout; leading=true, trailing=false)</code></pre><p>Return a function that, when invoked, will be triggered at most once during <code>timeout</code> seconds.</p><p>Normally, the throttled function will run as much as it can, without ever going more than once per <code>timeout</code> duration; but if you&#39;d like to disable the execution on the leading edge, pass <code>leading=false</code>. To enable execution on the trailing edge, pass <code>trailing=true</code>.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/utils.jl#L284-L294">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.stop" href="#Flux.Optimise.stop"><code>Flux.Optimise.stop</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">stop()</code></pre><p>Call <code>Flux.stop()</code> in a callback to indicate that a stopping condition has been met. This will trigger the train loop to stop and exit.</p><p><strong>Examples</strong></p><pre><code class="language-julia">cb = function ()
accuracy() &gt; 0.9 &amp;&amp; Flux.stop()
end</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/bf9fe18c47e89df1f0f09df06be3b7f2c7925a3e/src/optimise/train.jl#L42-L54">source</a></section></article></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../ecosystem/">« The Julia Ecosystem</a><a class="docs-footer-nextpage" href="../performance/">Performance Tips »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Wednesday 29 April 2020 10:54">Wednesday 29 April 2020</span>. Using Julia version 1.4.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
end</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/33ab22a592e3cd914a5854f057d922c3ba0db5db/src/optimise/train.jl#L42-L54">source</a></section></article></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../ecosystem/">« The Julia Ecosystem</a><a class="docs-footer-nextpage" href="../performance/">Performance Tips »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Thursday 30 April 2020 10:33">Thursday 30 April 2020</span>. Using Julia version 1.4.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
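As a rough end-to-end sketch of how the two callback helpers above fit into a training loop: the model, loss, data and stopping threshold below are made-up placeholders rather than content from the commit; only <code>Flux.throttle</code>, <code>Flux.stop</code> and <code>Flux.train!</code> are the documented calls.
<pre><code class="language-julia">using Flux

# Placeholder model, loss, parameters, data and optimiser for illustration.
model = Dense(10, 1)
loss(x, y) = Flux.mse(model(x), y)
ps = Flux.params(model)
data = [(rand(Float32, 10, 16), rand(Float32, 1, 16)) for _ in 1:200]
opt = ADAM()

# Log the loss at most once every 10 seconds, and stop early once it is
# small enough; Flux.stop() makes train! exit its loop.
evalcb = Flux.throttle(10) do
  l = loss(data[1]...)
  @show l
  l &lt; 1f-3 &amp;&amp; Flux.stop()
end

Flux.train!(loss, ps, data, opt; cb = evalcb)</code></pre>
Throttling keeps the logging from dominating the loop; without it the callback would run on every batch.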