<!DOCTYPE html>
<html lang="en"><head><meta charset="UTF-8"/><meta name="viewport" content="width=device-width, initial-scale=1.0"/><title>Model Reference · Flux</title><script>(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-36890222-9', 'auto');
ga('send', 'pageview', {'page': location.pathname + location.search + location.hash});
</script><link href="https://fonts.googleapis.com/css?family=Lato|Roboto+Mono" rel="stylesheet" type="text/css"/><link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.11.2/css/fontawesome.min.css" rel="stylesheet" type="text/css"/><link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.11.2/css/solid.min.css" rel="stylesheet" type="text/css"/><link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.11.2/css/brands.min.css" rel="stylesheet" type="text/css"/><link href="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.11.1/katex.min.css" rel="stylesheet" type="text/css"/><script>documenterBaseURL="../.."</script><script src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.6/require.min.js" data-main="../../assets/documenter.js"></script><script src="../../siteinfo.js"></script><script src="../../../versions.js"></script><link href="../../assets/flux.css" rel="stylesheet" type="text/css"/><link class="docs-theme-link" rel="stylesheet" type="text/css" href="../../assets/themes/documenter-dark.css" data-theme-name="documenter-dark"/><link class="docs-theme-link" rel="stylesheet" type="text/css" href="../../assets/themes/documenter-light.css" data-theme-name="documenter-light" data-theme-primary/><script src="../../assets/themeswap.js"></script></head><body><div id="documenter"><nav class="docs-sidebar"><div class="docs-package-name"><span class="docs-autofit">Flux</span></div><form class="docs-search" action="../../search/"><input class="docs-search-query" id="documenter-search-query" name="q" type="text" placeholder="Search docs"/></form><ul class="docs-menu"><li><a class="tocitem" href="../../">Home</a></li><li><span class="tocitem">Building Models</span><ul><li><a class="tocitem" href="../basics/">Basics</a></li><li><a class="tocitem" href="../recurrence/">Recurrence</a></li><li class="is-active"><a class="tocitem" href>Model Reference</a><ul class="internal"><li><a class="tocitem" href="#Basic-Layers-1"><span>Basic Layers</span></a></li><li><a class="tocitem" href="#Convolution-and-Pooling-Layers-1"><span>Convolution and Pooling Layers</span></a></li><li><a class="tocitem" href="#Recurrent-Layers-1"><span>Recurrent Layers</span></a></li><li><a class="tocitem" href="#Other-General-Purpose-Layers-1"><span>Other General Purpose Layers</span></a></li><li><a class="tocitem" href="#Normalisation-and-Regularisation-1"><span>Normalisation &amp; Regularisation</span></a></li></ul></li><li><a class="tocitem" href="../losses/">Loss Functions</a></li><li><a class="tocitem" href="../regularisation/">Regularisation</a></li><li><a class="tocitem" href="../advanced/">Advanced Model Building</a></li><li><a class="tocitem" href="../nnlib/">NNlib</a></li></ul></li><li><span class="tocitem">Handling Data</span><ul><li><a class="tocitem" href="../../data/onehot/">One-Hot Encoding</a></li><li><a class="tocitem" href="../../data/dataloader/">DataLoader</a></li></ul></li><li><span class="tocitem">Training Models</span><ul><li><a class="tocitem" href="../../training/optimisers/">Optimisers</a></li><li><a class="tocitem" href="../../training/training/">Training</a></li></ul></li><li><a class="tocitem" href="../../gpu/">GPU Support</a></li><li><a class="tocitem" href="../../saving/">Saving &amp; Loading</a></li><li><a class="tocitem" href="../../ecosystem/">The Julia Ecosystem</a></li><li><a class="tocitem" href="../../utilities/">Utility Functions</a></li><li><a class="tocitem" href="../../performance/">Performance Tips</a></li><li><a class="tocitem" 
href="../../datasets/">Datasets</a></li><li><a class="tocitem" href="../../community/">Community</a></li></ul><div class="docs-version-selector field has-addons"><div class="control"><span class="docs-label button is-static is-size-7">Version</span></div><div class="docs-selector control is-expanded"><div class="select is-fullwidth is-size-7"><select id="documenter-version-selector"></select></div></div></div></nav><div class="docs-main"><header class="docs-navbar"><nav class="breadcrumb"><ul class="is-hidden-mobile"><li><a class="is-disabled">Building Models</a></li><li class="is-active"><a href>Model Reference</a></li></ul><ul class="is-hidden-tablet"><li class="is-active"><a href>Model Reference</a></li></ul></nav><div class="docs-right"><a class="docs-edit-link" href="https://github.com/FluxML/Flux.jl/blob/master/docs/src/models/layers.md" title="Edit on GitHub"><span class="docs-icon fab"></span><span class="docs-label is-hidden-touch">Edit on GitHub</span></a><a class="docs-settings-button fas fa-cog" id="documenter-settings-button" href="#" title="Settings"></a><a class="docs-sidebar-button fa fa-bars is-hidden-desktop" id="documenter-sidebar-button" href="#"></a></div></header><article class="content" id="documenter-page"><h2 id="Basic-Layers-1"><a class="docs-heading-anchor" href="#Basic-Layers-1">Basic Layers</a><a class="docs-heading-anchor-permalink" href="#Basic-Layers-1" title="Permalink"></a></h2><p>These core layers form the foundation of almost all neural networks.</p><article class="docstring"><header><a class="docstring-binding" id="Flux.Chain" href="#Flux.Chain"><code>Flux.Chain</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Chain(layers...)</code></pre><p>Chain multiple layers / functions together, so that they are called in sequence on a given input.</p><p><code>Chain</code> also supports indexing and slicing, e.g. <code>m[2]</code> or <code>m[1:end-1]</code>. <code>m[1:3](x)</code> will calculate the output of the first three layers.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia&gt; m = Chain(x -&gt; x^2, x -&gt; x+1);

julia&gt; m(5) == 26
true

julia&gt; m = Chain(Dense(10, 5), Dense(5, 2));

julia&gt; x = rand(10);

julia&gt; m(x) == m[2](m[1](x))
true
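
julia&gt; m[1:1](x) == m[1](x) # illustrative addition: slicing returns another Chain
true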
</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/basic.jl#L1-L24">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Dense" href="#Flux.Dense"><code>Flux.Dense</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Dense(in::Integer, out::Integer, σ = identity)</code></pre><p>Create a traditional <code>Dense</code> layer with parameters <code>W</code> and <code>b</code>.</p><pre><code class="language-none">y = σ.(W * x .+ b)</code></pre><p>The input <code>x</code> must be a vector of length <code>in</code>, or a batch of vectors represented as an <code>in × N</code> matrix. The output <code>y</code> will be a vector of length <code>out</code>, or a batch of vectors represented as an <code>out × N</code> matrix.</p><p><strong>Examples</strong></p><p>The exact output values below depend on the random initialisation of the weights:</p><pre><code class="language-julia-repl">julia&gt; d = Dense(5, 2)
Dense(5, 2)

julia&gt; d(rand(5))
2-element Array{Float32,1}:
 -0.16210233
  0.12311903</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/basic.jl#L85-L104">source</a></section></article><h2 id="Convolution-and-Pooling-Layers-1"><a class="docs-heading-anchor" href="#Convolution-and-Pooling-Layers-1">Convolution and Pooling Layers</a><a class="docs-heading-anchor-permalink" href="#Convolution-and-Pooling-Layers-1" title="Permalink"></a></h2><p>These layers are used to build convolutional neural networks (CNNs).</p><article class="docstring"><header><a class="docstring-binding" id="Flux.Conv" href="#Flux.Conv"><code>Flux.Conv</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Conv(filter, in =&gt; out, σ = identity; init = glorot_uniform,
     stride = 1, pad = 0, dilation = 1)</code></pre><p>Standard convolutional layer. <code>filter</code> should be a tuple like <code>(2, 2)</code>. <code>in</code> and <code>out</code> specify the number of input and output channels respectively.</p><p>Data should be stored in WHCN order (width, height, # channels, batch size). In other words, a 100×100 RGB image would be a <code>100×100×3×1</code> array, and a batch of 50 would be a <code>100×100×3×50</code> array.</p><p>Accepts keyword arguments <code>weight</code> and <code>bias</code> to set the corresponding fields. Setting <code>bias</code> to <code>Flux.Zeros()</code> will switch bias off for the layer.</p><p>Takes the keyword arguments <code>pad</code>, <code>stride</code> and <code>dilation</code>. Use <code>pad=SamePad()</code> to apply padding so that outputsize == inputsize / stride.</p><p><strong>Examples</strong></p><p>Apply a <code>Conv</code> layer to a 1-channel input using a 2×2 window filter size, giving us a 16-channel output. The output is activated with ReLU.</p><pre><code class="language-julia">filter = (2,2)
in = 1
out = 16
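# illustrative: with the default stride = 1 and pad = 0, a hypothetical
# 28×28×1×5 input batch (WHCN order) maps to a 27×27×16×5 output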
Conv(filter, in =&gt; out, relu)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/conv.jl#L32-L64">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.MaxPool" href="#Flux.MaxPool"><code>Flux.MaxPool</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">MaxPool(k; pad = 0, stride = k)</code></pre><p>Max pooling layer. <code>k</code> is the size of the window for each dimension of the input.</p><p><strong>Use <code>pad=SamePad()</code> to apply padding so that outputsize == inputsize / stride.</strong></p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/conv.jl#L533-L540">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.GlobalMaxPool" href="#Flux.GlobalMaxPool"><code>Flux.GlobalMaxPool</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">GlobalMaxPool()</code></pre><p>Global max pooling layer.</p><p>Transforms (w,h,c,b)-shaped input into (1,1,c,b)-shaped output, by performing max pooling on the complete (w,h)-shaped feature maps.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/conv.jl#L483-L490">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.MeanPool" href="#Flux.MeanPool"><code>Flux.MeanPool</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">MeanPool(k; pad = 0, stride = k)</code></pre><p>Mean pooling layer. <code>k</code> is the size of the window for each dimension of the input.</p><p>Use <code>pad=SamePad()</code> to apply padding so that outputsize == inputsize / stride.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/conv.jl#L564-L570">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.GlobalMeanPool" href="#Flux.GlobalMeanPool"><code>Flux.GlobalMeanPool</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">GlobalMeanPool()</code></pre><p>Global mean pooling layer.</p><p>Transforms (w,h,c,b)-shaped input into (1,1,c,b)-shaped output, by performing mean pooling on the complete (w,h)-shaped feature maps.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/conv.jl#L508-L515">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.DepthwiseConv" href="#Flux.DepthwiseConv"><code>Flux.DepthwiseConv</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">DepthwiseConv(filter::Tuple, in=&gt;out)
DepthwiseConv(filter::Tuple, in=&gt;out, activation)
DepthwiseConv(filter, in =&gt; out, σ = identity; init = glorot_uniform,
              stride = 1, pad = 0, dilation = 1)</code></pre><p>Depthwise convolutional layer. <code>filter</code> should be a tuple like <code>(2, 2)</code>. <code>in</code> and <code>out</code> specify the number of input and output channels respectively. Note that <code>out</code> must be an integer multiple of <code>in</code>.</p><p>Data should be stored in WHCN order (width, height, # channels, batch size). In other words, a 100×100 RGB image would be a <code>100×100×3×1</code> array, and a batch of 50 would be a <code>100×100×3×50</code> array.</p><p>Accepts keyword arguments <code>weight</code> and <code>bias</code> to set the corresponding fields. Setting <code>bias</code> to <code>Flux.Zeros()</code> will switch bias off for the layer.</p><p>Takes the keyword arguments <code>pad</code>, <code>stride</code> and <code>dilation</code>. Use <code>pad=SamePad()</code> to apply padding so that outputsize == inputsize / stride.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/conv.jl#L271-L290">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.ConvTranspose" href="#Flux.ConvTranspose"><code>Flux.ConvTranspose</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">ConvTranspose(filter, in=&gt;out)
ConvTranspose(filter, in=&gt;out, activation)
ConvTranspose(filter, in =&gt; out, σ = identity; init = glorot_uniform,
              stride = 1, pad = 0, dilation = 1)</code></pre><p>Standard convolutional transpose layer. <code>filter</code> should be a tuple like <code>(2, 2)</code>. <code>in</code> and <code>out</code> specify the number of input and output channels respectively.</p><p>Data should be stored in WHCN order (width, height, # channels, batch size). In other words, a 100×100 RGB image would be a <code>100×100×3×1</code> array, and a batch of 50 would be a <code>100×100×3×50</code> array.</p><p>Accepts keyword arguments <code>weight</code> and <code>bias</code> to set the corresponding fields. Setting <code>bias</code> to <code>Flux.Zeros()</code> will switch bias off for the layer.</p><p>Takes the keyword arguments <code>pad</code>, <code>stride</code> and <code>dilation</code>. Use <code>pad=SamePad()</code> to apply padding so that outputsize == stride * inputsize - stride + 1.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/conv.jl#L168-L186">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.CrossCor" href="#Flux.CrossCor"><code>Flux.CrossCor</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">CrossCor(filter, in=&gt;out)
CrossCor(filter, in=&gt;out, activation)
CrossCor(filter, in =&gt; out, σ = identity; init = glorot_uniform,
         stride = 1, pad = 0, dilation = 1)</code></pre><p>Standard cross-correlation layer. <code>filter</code> should be a tuple like <code>(2, 2)</code>. <code>in</code> and <code>out</code> specify the number of input and output channels respectively.</p><p>Data should be stored in WHCN order (width, height, # channels, batch size). In other words, a 100×100 RGB image would be a <code>100×100×3×1</code> array, and a batch of 50 would be a <code>100×100×3×50</code> array.</p><p>Accepts keyword arguments <code>weight</code> and <code>bias</code> to set the corresponding fields. Setting <code>bias</code> to <code>Flux.Zeros()</code> will switch bias off for the layer.</p><p>Takes the keyword arguments <code>pad</code>, <code>stride</code> and <code>dilation</code>. Use <code>pad=SamePad()</code> to apply padding so that outputsize == inputsize / stride.</p><p><strong>Examples</strong></p><p>Apply a <code>CrossCor</code> layer to a 1-channel input using a 2×2 window filter size, giving us a 16-channel output. The output is activated with ReLU.</p><pre><code class="language-julia">filter = (2,2)
in = 1
out = 16
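# illustrative: CrossCor slides the filter without flipping it, so output
# shapes match those of the equivalent Conv layer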
CrossCor(filter, in =&gt; out, relu)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/conv.jl#L379-L408">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.flatten" href="#Flux.flatten"><code>Flux.flatten</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">flatten(x::AbstractArray)</code></pre><p>Reshape arbitrarily-shaped input into a matrix-shaped output, preserving the size of the last dimension. Equivalent to <code>reshape(x, :, size(x)[end])</code>.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/stateless.jl#L256-L262">source</a></section></article><h2 id="Recurrent-Layers-1"><a class="docs-heading-anchor" href="#Recurrent-Layers-1">Recurrent Layers</a><a class="docs-heading-anchor-permalink" href="#Recurrent-Layers-1" title="Permalink"></a></h2><p>Much like the core layers above, but can be used to process sequence data (as well as other kinds of structured data).</p><article class="docstring"><header><a class="docstring-binding" id="Flux.RNN" href="#Flux.RNN"><code>Flux.RNN</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">RNN(in::Integer, out::Integer, σ = tanh)</code></pre><p>The most basic recurrent layer; essentially acts as a <code>Dense</code> layer, but with the output fed back into the input each time step.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/recurrent.jl#L91-L96">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.LSTM" href="#Flux.LSTM"><code>Flux.LSTM</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">LSTM(in::Integer, out::Integer)</code></pre><p><a href="https://www.researchgate.net/publication/13853244_Long_Short-term_Memory">Long Short Term Memory</a> recurrent layer. Behaves like an RNN but generally exhibits a longer memory span over sequences.</p><p>See <a href="https://colah.github.io/posts/2015-08-Understanding-LSTMs/">this article</a> for a good overview of the internals.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/recurrent.jl#L136-L144">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.GRU" href="#Flux.GRU"><code>Flux.GRU</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">GRU(in::Integer, out::Integer)</code></pre><p><a href="https://arxiv.org/abs/1406.1078">Gated Recurrent Unit</a> layer. 
Behaves like an RNN but generally exhibits a longer memory span over sequences.</p><p>See <a href="https://colah.github.io/posts/2015-08-Understanding-LSTMs/">this article</a> for a good overview of the internals.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/recurrent.jl#L177-L185">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Recur" href="#Flux.Recur"><code>Flux.Recur</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Recur(cell)</code></pre><p><code>Recur</code> takes a recurrent cell and makes it stateful, managing the hidden state in the background. <code>cell</code> should be a model of the form:</p><pre><code class="language-none">h, y = cell(h, x...)</code></pre><p>For example, here&#39;s a recurrent network that keeps a running total of its inputs:</p><pre><code class="language-julia">accum(h, x) = (h + x, x)
rnn = Flux.Recur(accum, 0)
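# 0 is the initial hidden state; each call returns the current input,
# while the state keeps the running total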
rnn(2) # 2
rnn(3) # 3
rnn.state # 5
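# broadcasting feeds each element in turn; the state grows by sum(1:10) == 55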
rnn.(1:10) # apply to a sequence
rnn.state # 60</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/recurrent.jl#L7-L26">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.reset!" href="#Flux.reset!"><code>Flux.reset!</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">reset!(rnn)</code></pre><p>Reset the hidden state of a recurrent layer back to its original value.</p><p>Assuming you have a <code>Recur</code> layer <code>rnn</code>, this is roughly equivalent to:</p><pre><code class="language-julia">rnn.state = hidden(rnn.cell)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/recurrent.jl#L45-L54">source</a></section></article><h2 id="Other-General-Purpose-Layers-1"><a class="docs-heading-anchor" href="#Other-General-Purpose-Layers-1">Other General Purpose Layers</a><a class="docs-heading-anchor-permalink" href="#Other-General-Purpose-Layers-1" title="Permalink"></a></h2><p>These are marginally more obscure than the Basic Layers. But in contrast to the layers described in the other sections, they are not readily grouped around a particular purpose (e.g. CNNs or RNNs).</p><article class="docstring"><header><a class="docstring-binding" id="Flux.Maxout" href="#Flux.Maxout"><code>Flux.Maxout</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Maxout(over)</code></pre><p>The <a href="https://arxiv.org/pdf/1302.4389.pdf">Maxout</a> layer has a number of internal layers which all receive the same input. It returns the elementwise maximum of the internal layers&#39; outputs.</p><p>Maxout over linear dense layers satisfies the universal approximation theorem.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/basic.jl#L183-L191">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.SkipConnection" href="#Flux.SkipConnection"><code>Flux.SkipConnection</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">SkipConnection(layer, connection)</code></pre><p>Create a skip connection which consists of a layer or <code>Chain</code> of consecutive layers and a shortcut connection linking the block&#39;s input to the output through a user-supplied 2-argument callable. The first argument to the callable will be propagated through the given <code>layer</code> while the second is the unchanged, &quot;skipped&quot; input.</p><p>The simplest &quot;ResNet&quot;-type connection is just <code>SkipConnection(layer, +)</code>, and requires the output of the layers to be the same shape as the input. Here is a more complicated example:</p><pre><code class="language-julia">m = Conv((3,3), 4=&gt;7, pad=(1,1))
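# pad=(1,1) with a 3×3 kernel preserves W and H, so only the channel count changes (4 to 7)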
x = ones(5,5,4,10);
size(m(x)) == (5, 5, 7, 10)
sm = SkipConnection(m, (mx, x) -&gt; cat(mx, x, dims=3))
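# the connection concatenates the 7 conv output channels with the 4 input channels: 7 + 4 == 11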
size(sm(x)) == (5, 5, 11, 10)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/basic.jl#L226-L246">source</a></section></article><h2 id="Normalisation-and-Regularisation-1"><a class="docs-heading-anchor" href="#Normalisation-and-Regularisation-1">Normalisation &amp; Regularisation</a><a class="docs-heading-anchor-permalink" href="#Normalisation-and-Regularisation-1" title="Permalink"></a></h2><p>These layers don&#39;t affect the structure of the network but may improve training times or reduce overfitting.</p><article class="docstring"><header><a class="docstring-binding" id="Flux.normalise" href="#Flux.normalise"><code>Flux.normalise</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">normalise(x; dims=1)</code></pre><p>Normalise <code>x</code> to mean 0 and standard deviation 1 across the dimensions given by <code>dims</code>. Defaults to normalising over columns.</p><pre><code class="language-julia-repl">julia&gt; a = reshape(collect(1:9), 3, 3)
3×3 Array{Int64,2}:
 1  4  7
 2  5  8
 3  6  9

julia&gt; Flux.normalise(a)
3×3 Array{Float64,2}:
 -1.22474  -1.22474  -1.22474
  0.0       0.0       0.0
  1.22474   1.22474   1.22474

julia&gt; Flux.normalise(a, dims=2)
3×3 Array{Float64,2}:
 -1.22474  0.0  1.22474
 -1.22474  0.0  1.22474
 -1.22474  0.0  1.22474</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/stateless.jl#L223-L248">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.BatchNorm" href="#Flux.BatchNorm"><code>Flux.BatchNorm</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">BatchNorm(channels::Integer, σ = identity;
          initβ = zeros, initγ = ones,
          ϵ = 1e-8, momentum = .1)</code></pre><p><a href="https://arxiv.org/pdf/1502.03167.pdf">Batch Normalization</a> layer. <code>channels</code> should be the size of the channel dimension in your data (see below).</p><p>Given an array with <code>N</code> dimensions, call the <code>N-1</code>th dimension the channel dimension. (For a batch of feature vectors this is just the data dimension, for <code>WHCN</code> images it&#39;s the usual channel dimension.)</p><p><code>BatchNorm</code> computes the mean and variance for each <code>W×H×1×N</code> slice and shifts them to have a new mean and variance (corresponding to the learnable, per-channel <code>bias</code> and <code>scale</code> parameters).</p><p>Use <a href="#Flux.testmode!"><code>testmode!</code></a> during inference.</p><p><strong>Examples</strong></p><pre><code class="language-julia">m = Chain(
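  # hypothetical MNIST-style classifier; BatchNorm(64, relu) normalises the
  # Dense output and then applies the relu activation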
  Dense(28^2, 64),
  BatchNorm(64, relu),
  Dense(64, 10),
  BatchNorm(10),
  softmax)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/normalise.jl#L122-L149">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.dropout" href="#Flux.dropout"><code>Flux.dropout</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">dropout(x, p; dims = :)</code></pre><p>The dropout function. For each input, either sets that input to <code>0</code> (with probability <code>p</code>) or scales it by <code>1 / (1 - p)</code>. <code>dims</code> specifies the unbroadcasted dimensions, e.g. <code>dims=1</code> applies dropout along columns and <code>dims=2</code> along rows. This is used as a regularisation, i.e. it reduces overfitting during training.</p><p>See also the <a href="#Flux.Dropout"><code>Dropout</code></a> layer.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/normalise.jl#L12-L21">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Dropout" href="#Flux.Dropout"><code>Flux.Dropout</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Dropout(p, dims = :)</code></pre><p>Dropout layer. In the forward pass, applies the <a href="#Flux.dropout"><code>Flux.dropout</code></a> function to the input.</p><p>Does nothing to the input once <a href="#Flux.testmode!"><code>Flux.testmode!</code></a> is <code>true</code>.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/normalise.jl#L30-L36">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.AlphaDropout" href="#Flux.AlphaDropout"><code>Flux.AlphaDropout</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">AlphaDropout(p)</code></pre><p>A dropout layer. Used in <a href="https://papers.nips.cc/paper/6698-self-normalizing-neural-networks.pdf">Self-Normalizing Neural Networks</a>. The AlphaDropout layer ensures that the mean and variance of the activations remain the same as before.</p><p>Does nothing to the input once <a href="#Flux.testmode!"><code>testmode!</code></a> is true.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/normalise.jl#L65-L74">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.LayerNorm" href="#Flux.LayerNorm"><code>Flux.LayerNorm</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">LayerNorm(h::Integer)</code></pre><p>A <a href="https://arxiv.org/pdf/1607.06450.pdf">normalisation layer</a> designed to be used with recurrent hidden states of size <code>h</code>. 
Normalises each input to have zero mean and unit standard deviation before applying a per-neuron gain and bias.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/normalise.jl#L100-L106">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.InstanceNorm" href="#Flux.InstanceNorm"><code>Flux.InstanceNorm</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">InstanceNorm(channels::Integer, σ = identity;
             initβ = zeros, initγ = ones,
             ϵ = 1e-8, momentum = .1)</code></pre><p><a href="https://arxiv.org/abs/1607.08022">Instance Normalization</a> layer. <code>channels</code> should be the size of the channel dimension in your data (see below).</p><p>Given an array with <code>N</code> dimensions, call the <code>N-1</code>th dimension the channel dimension. (For a batch of feature vectors this is just the data dimension, for <code>WHCN</code> images it&#39;s the usual channel dimension.)</p><p><code>InstanceNorm</code> computes the mean and variance for each <code>W×H×1×1</code> slice and shifts them to have a new mean and variance (corresponding to the learnable, per-channel <code>bias</code> and <code>scale</code> parameters).</p><p>Use <a href="#Flux.testmode!"><code>testmode!</code></a> during inference.</p><p><strong>Examples</strong></p><pre><code class="language-julia">m = Chain(
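  # same hypothetical model as the BatchNorm example, but each sample in the
  # batch is normalised independently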
  Dense(28^2, 64),
  InstanceNorm(64, relu),
  Dense(64, 10),
  InstanceNorm(10),
  softmax)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/normalise.jl#L228-L255">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.GroupNorm" href="#Flux.GroupNorm"><code>Flux.GroupNorm</code></a><span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">GroupNorm(chs::Integer, G::Integer, λ = identity;
          initβ = (i) -&gt; zeros(Float32, i), initγ = (i) -&gt; ones(Float32, i),
          ϵ = 1f-5, momentum = 0.1f0)</code></pre><p><a href="https://arxiv.org/pdf/1803.08494.pdf">Group Normalization</a> layer. This layer can outperform Batch Normalization and Instance Normalization.</p><p><code>chs</code> is the number of channels, the channel dimension of your input. For an array of N dimensions, the <code>N-1</code>th index is the channel dimension.</p><p><code>G</code> is the number of groups along which the statistics are computed. The number of channels must be an integer multiple of the number of groups.</p><p>Use <a href="#Flux.testmode!"><code>testmode!</code></a> during inference.</p><p><strong>Examples</strong></p><pre><code class="language-julia">m = Chain(Conv((3, 3), 1=&gt;32, leakyrelu; pad = 1),
          GroupNorm(32, 16))
# 32 channels, 16 groups (G = 16), thus 2 channels per group used</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/layers/normalise.jl#L313-L335">source</a></section></article><h3 id="Testmode-1"><a class="docs-heading-anchor" href="#Testmode-1">Testmode</a><a class="docs-heading-anchor-permalink" href="#Testmode-1" title="Permalink"></a></h3><p>Many normalisation layers behave differently under training and inference (testing). By default, Flux will automatically determine when a layer evaluation is part of training or inference. Still, depending on your use case, it may be helpful to manually specify when these layers should be treated as being trained or not. For this, Flux provides <code>Flux.testmode!</code>. When called on a model (e.g. a layer or chain of layers), this function will place the model into the mode specified.</p><article class="docstring"><header><a class="docstring-binding" id="Flux.testmode!" href="#Flux.testmode!"><code>Flux.testmode!</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">testmode!(m, mode = true)</code></pre><p>Set a layer or model&#39;s test mode (see below). Using <code>:auto</code> mode will treat any gradient computation as training.</p><p><em>Note</em>: if you manually set a model into test mode, you need to manually place it back into train mode during the training phase.</p><p>Possible values include:</p><ul><li><code>false</code> for training</li><li><code>true</code> for testing</li><li><code>:auto</code> or <code>nothing</code> for Flux to detect the mode automatically</li></ul></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/functor.jl#L42-L55">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.trainmode!" href="#Flux.trainmode!"><code>Flux.trainmode!</code></a><span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">trainmode!(m, mode = true)</code></pre><p>Set a layer or model&#39;s train mode (see below). Symmetric to <a href="#Flux.testmode!"><code>testmode!</code></a> (i.e. 
<code>trainmode!(m, mode) == testmode!(m, !mode)</code>).</p><p><em>Note</em>: if you manually set a model into train mode, you need to manually place it into test mode during the testing phase.</p><p>Possible values include:</p><ul><li><code>true</code> for training</li><li><code>false</code> for testing</li><li><code>:auto</code> or <code>nothing</code> for Flux to detect the mode automatically</li></ul></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/8e9cce94e9496c0de96ef85d7da676a1a5387565/src/functor.jl#L58-L71">source</a></section></article></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../recurrence/">« Recurrence</a><a class="docs-footer-nextpage" href="../losses/">Loss Functions »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Tuesday 5 May 2020 14:57">Tuesday 5 May 2020</span>. Using Julia version 1.3.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>