build based on ddd0f4e
This commit is contained in:
parent 9a02a93384
commit 4cff4bfb52
File diff suppressed because one or more lines are too long
@@ -29,4 +29,4 @@ end
 # train for 10 epochs
 using IterTools: ncycle
 Flux.train!(loss, ps, ncycle(train_loader, 10), opt)
-[source: https://github.com/FluxML/Flux.jl/blob/85c39e2309bf5004c7c15ece8ce8e313417587da/src/data/dataloader.jl#L13-L54] (generated with Documenter.jl on Monday 25 May 2020 15:19, Julia 1.3.1)
+[source: https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/data/dataloader.jl#L13-L54] (generated with Documenter.jl on Wednesday 27 May 2020 11:52, Julia 1.3.1)
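The docstring this hunk closes ends with the multi-epoch pattern shown above. A minimal runnable sketch of it, with the model, data shapes, and optimiser invented here purely for illustration:

```julia
using Flux
using Flux.Data: DataLoader
using IterTools: ncycle

# Toy data: 100 samples with 10 features, 2-dimensional targets.
X = rand(Float32, 10, 100)
Y = rand(Float32, 2, 100)
train_loader = DataLoader(X, Y, batchsize = 16, shuffle = true)

m = Chain(Dense(10, 5, relu), Dense(5, 2))
loss(x, y) = Flux.mse(m(x), y)
ps = Flux.params(m)
opt = Descent(0.1)

# ncycle repeats the loader 10 times, i.e. trains for 10 epochs.
Flux.train!(loss, ps, ncycle(train_loader, 10), opt)
```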
@@ -35,11 +35,11 @@ julia> Flux.onehot(:c, [:a, :b, :c])
 3-element Flux.OneHotVector:
 0
 0
 1
-[source: https://github.com/FluxML/Flux.jl/blob/85c39e2309bf5004c7c15ece8ce8e313417587da/src/onehot.jl#L45-L67]
+[source: https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/onehot.jl#L45-L67]
 Flux.onecold — Function
 onecold(y[, labels = 1:length(y)])
 Inverse operation of onehot.
 Examples:
 julia> Flux.onecold([true, false, false], [:a, :b, :c])
 :a
 julia> Flux.onecold([0.3, 0.2, 0.5], [:a, :b, :c])
 :c
-[source: https://github.com/FluxML/Flux.jl/blob/85c39e2309bf5004c7c15ece8ce8e313417587da/src/onehot.jl#L102-L115]
+[source: https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/onehot.jl#L102-L115]
 Batches
 onehotbatch creates a batch (matrix) of one-hot vectors, and onecold treats matrices as batches.
 julia> using Flux: onehotbatch
 julia> onehotbatch([:b, :a, :b], [:a, :b, :c])
 3×3 Flux.OneHotMatrix:
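Since this hunk only touches the source links of the onehot/onecold docstrings, the behaviour is unchanged; a small round-trip sketch of the two functions as documented:

```julia
using Flux: onehot, onecold

labels = [:a, :b, :c]

v = onehot(:b, labels)            # one-hot vector [0, 1, 0]
onecold(v, labels)                # :b, inverting the encoding

# onecold also decodes "soft" scores by taking the highest entry:
onecold([0.3, 0.2, 0.5], labels)  # :c
```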
@@ -55,4 +55,4 @@ julia> onecold(ans, [:a, :b, :c])
 3×3 Flux.OneHotMatrix{Array{Flux.OneHotVector,1}}:
 0 1 0
 1 0 1
 0 0 0
-[source: https://github.com/FluxML/Flux.jl/blob/85c39e2309bf5004c7c15ece8ce8e313417587da/src/onehot.jl#L80-L96] (generated with Documenter.jl on Monday 25 May 2020 15:19, Julia 1.3.1)
+[source: https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/onehot.jl#L80-L96] (generated with Documenter.jl on Wednesday 27 May 2020 11:52, Julia 1.3.1)
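The matrix shown above is the batch encoding of [:b, :a, :b]; a sketch of producing and decoding it, following the docstrings in this hunk:

```julia
using Flux: onehotbatch, onecold

labels = [:a, :b, :c]

# Each column one-hot encodes one element of the input.
M = onehotbatch([:b, :a, :b], labels)   # 3×3 OneHotMatrix

# onecold treats each column as one sample, decoding the whole batch.
onecold(M, labels)                      # [:b, :a, :b]
```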
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -47,4 +47,4 @@ julia> x |> cpu
 10-element Array{Float32,1}:
 0.235164
 ⋮
 0.192538
-(generated with Documenter.jl on Monday 25 May 2020 15:19, Julia 1.3.1)
+(generated with Documenter.jl on Wednesday 27 May 2020 11:52, Julia 1.3.1)
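The page this hunk ends documents moving data between host and device. A round-trip sketch, assuming this era's Flux API in which gpu falls back to the identity when no CUDA backend is active:

```julia
using Flux: gpu, cpu

x = rand(Float32, 10)

xg = gpu(x)   # moves to the GPU when one is available, else a no-op
xc = cpu(xg)  # always returns a plain host Array, as shown above
```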
File diff suppressed because one or more lines are too long
@@ -24,4 +24,4 @@ Params([[0.66722 0.774872 0.249809; 0.843321 0.403843 0.429232; 0.683525 0.66245
 )
 ps = Flux.params(m[3:end])
 The Zygote.Params object ps now holds a reference to only the parameters of the layers passed to it. During training, gradients will only be computed for (and applied to) the last Dense layer, so only its parameters change.
 Flux.params also takes multiple inputs, making it easy to collect parameters from heterogeneous models with a single call. For example, to omit optimising the second Dense layer in the previous example:
 Flux.params(m[1], m[3:end])
 Sometimes finer control is needed. We can freeze a specific parameter of a specific layer that has already entered a Params object ps by simply deleting it from ps:
 ps = params(m)
 delete!(ps, m[2].b)
-(generated with Documenter.jl on Monday 25 May 2020 15:19, Julia 1.3.1)
+(generated with Documenter.jl on Wednesday 27 May 2020 11:52, Julia 1.3.1)
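The freezing recipe in this hunk can be exercised end to end; a sketch assuming the Dense field names of the documented Flux version (weights W, bias b):

```julia
using Flux

m = Chain(Dense(10, 5, relu), Dense(5, 5, relu), Dense(5, 2))

# Train everything except the second layer's bias:
ps = Flux.params(m)
delete!(ps, m[2].b)   # m[2].b keeps its current value during training

# Or only collect the tail of the chain in the first place:
ps_tail = Flux.params(m[3:end])
```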
@@ -109,8 +109,8 @@ model2(rand(10)) # => 2-element vector … This quickly starts to
 m(rand(10))
 Likewise, Chain will happily work with any Julia function.
 m = Chain(x -> x^2, x -> x+1)
 m(5) # => 26
 Layer helpers: Flux provides a set of helpers for custom layers, which you can enable by calling Flux.@functor Affine. This enables a useful extra set of functionality for our Affine layer, such as collecting its parameters or moving it to the GPU. For more helpful tricks, including parameter freezing, see the advanced usage guide.
 Utility functions: Flux provides some utility functions to help you generate models in an automated fashion. outdims enables you to calculate the spatial output dimensions of layers like Conv when applied to input images of a given size. Currently limited to the following layers: Chain, Dense, Conv, Diagonal, Maxout, ConvTranspose, DepthwiseConv, CrossCor, MaxPool, MeanPool.
 Flux.outdims — Function
 outdims(c::Chain, isize)
 Calculate the output dimensions given the input dimensions, isize.
 m = Chain(Conv((3, 3), 3 => 16), Conv((3, 3), 16 => 32))
 outdims(m, (10, 10)) == (6, 6)
-[source: https://github.com/FluxML/Flux.jl/blob/85c39e2309bf5004c7c15ece8ce8e313417587da/src/layers/basic.jl#L50-L59]
+[source: https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/layers/basic.jl#L50-L59]
 outdims(l::Dense, isize)
 Calculate the output dimensions given the input dimensions, isize.
 m = Dense(10, 5)
 outdims(m, (5, 2)) == (5,)
 outdims(m, (10,)) == (5,)
-[source: https://github.com/FluxML/Flux.jl/blob/85c39e2309bf5004c7c15ece8ce8e313417587da/src/layers/basic.jl#L139-L149]
+[source: https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/layers/basic.jl#L139-L149]
 outdims(l::Conv, isize::Tuple)
 Calculate the output dimensions given the input dimensions isize. Batch size and channel size are ignored as per NNlib.jl.
 m = Conv((3, 3), 3 => 16)
 outdims(m, (10, 10)) == (8, 8)
 outdims(m, (10, 10, 1, 3)) == (8, 8)
-[source: https://github.com/FluxML/Flux.jl/blob/85c39e2309bf5004c7c15ece8ce8e313417587da/src/layers/conv.jl#L153-L164] (generated with Documenter.jl on Monday 25 May 2020 15:19, Julia 1.3.1)
+[source: https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/layers/conv.jl#L153-L164] (generated with Documenter.jl on Wednesday 27 May 2020 11:52, Julia 1.3.1)
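A sketch exercising the outdims examples from this hunk; note that later Flux versions renamed this helper to outputsize, so the name here tracks the documented version:

```julia
using Flux
using Flux: outdims

# Two 3×3 convolutions each shrink a spatial dimension by 2:
m = Chain(Conv((3, 3), 3 => 16), Conv((3, 3), 16 => 32))
outdims(m, (10, 10))   # (6, 6)

# For Dense layers the result is simply the output length:
d = Dense(10, 5)
outdims(d, (10,))      # (5,)
```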
File diff suppressed because one or more lines are too long
@@ -28,4 +28,4 @@ a = randomly sampled from uniform distribution U(l, u) … Randomized
 NNlib.batched_transpose — Function
 batched_transpose(A::AbstractArray{T,3})
 batched_adjoint(A)
 Equivalent to applying transpose or adjoint to each matrix A[:,:,k]. These exist to control how batched_mul behaves, as it operates on such matrix slices of an array with ndims(A)==3.
 BatchedTranspose{T, N, S} <: AbstractBatchedMatrix{T, N}
 BatchedAdjoint{T, N, S}
 Lazy wrappers analogous to Transpose and Adjoint, returned by batched_transpose.
-(generated with Documenter.jl on Monday 25 May 2020 15:19, Julia 1.3.1)
+(generated with Documenter.jl on Wednesday 27 May 2020 11:52, Julia 1.3.1)
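A sketch of the batched operations this docstring describes, assuming the NNlib batched_mul/batched_transpose API the page refers to:

```julia
using NNlib: batched_mul, batched_transpose

# A batch of four 2×3 matrices stored as slices A[:,:,k].
A = rand(2, 3, 4)
B = rand(2, 3, 4)

# Computes transpose(A[:,:,k]) * B[:,:,k] for every k; the transpose
# is a lazy wrapper, so nothing is copied.
C = batched_mul(batched_transpose(A), B)
size(C)   # (3, 3, 4)
```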
@@ -39,4 +39,4 @@ m = Flux.Recur(rnn, h)
 y = m(x)
 The Recur wrapper stores the state between runs in the m.state field. If you use the RNN(10, 5) constructor (as opposed to RNNCell) you'll see that it's simply a wrapped cell:
 julia> RNN(10, 5)
 Recur(RNNCell(10, 5, tanh))
 Sequences: often we want to work with sequences of inputs, rather than individual xs.
 seq = [rand(10) for i = 1:10]
 With Recur, applying our model to each element of a sequence is trivial:
 m.(seq) # returns a list of 5-element vectors
 This works even when we chain recurrent layers into a larger model:
 m = Chain(LSTM(10, 15), Dense(15, 5))
 m.(seq)
 Finally, we can reset the hidden state of the cell back to its initial value using reset!(m).
-(generated with Documenter.jl on Monday 25 May 2020 15:19, Julia 1.3.1)
+(generated with Documenter.jl on Wednesday 27 May 2020 11:52, Julia 1.3.1)
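The sequence idiom from this hunk, as a self-contained sketch with an invented input size and sequence length:

```julia
using Flux

m = Chain(RNN(10, 5), Dense(5, 2))
seq = [rand(Float32, 10) for _ in 1:4]

# Broadcasting feeds the sequence element by element; the hidden
# state carries over between calls.
ys = m.(seq)      # four 2-element output vectors

# Clear the hidden state before feeding an unrelated sequence.
Flux.reset!(m)
```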
@@ -36,4 +36,4 @@ julia> activations(c, rand(10))
 Float32[0.5192045, 0.48079553]
 julia> sum(norm, ans)
 2.1166067f0
 Flux.activations — Function
 activations(c::Chain, input)
 Calculate the forward result of each layer in Chain c with input as the model input.
-[source: https://github.com/FluxML/Flux.jl/blob/85c39e2309bf5004c7c15ece8ce8e313417587da/src/layers/basic.jl#L67-L71] (generated with Documenter.jl on Monday 25 May 2020 15:19, Julia 1.3.1)
+[source: https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/layers/basic.jl#L67-L71] (generated with Documenter.jl on Wednesday 27 May 2020 11:52, Julia 1.3.1)
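A sketch reproducing the activation-norm penalty computed above, with an invented chain:

```julia
using Flux
using LinearAlgebra: norm

c = Chain(Dense(10, 5, σ), Dense(5, 2), softmax)

# One output per layer of the chain; summing their norms gives a
# simple layer-wise regularisation penalty.
acts = Flux.activations(c, rand(Float32, 10))
penalty = sum(norm, acts)
```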
@@ -17,4 +17,4 @@ y_batch = reduce(hcat, ys)
 function loss_total(x_batch::Matrix, y_batch::Matrix)
     y_preds = model(x_batch)
     sum(loss.(y_preds, y_batch))
 end
 When doing this kind of concatenation use reduce(hcat, xs) rather than hcat(xs...). This will avoid the splatting penalty, and will hit the optimised reduce method.
-(generated with Documenter.jl on Monday 25 May 2020 15:19, Julia 1.3.1)
+(generated with Documenter.jl on Wednesday 27 May 2020 11:52, Julia 1.3.1)
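A sketch of the column-wise batching this page recommends, with invented shapes:

```julia
# Per-sample vectors, batched column-wise into matrices:
xs = [rand(Float32, 10) for _ in 1:32]
ys = [rand(Float32, 2)  for _ in 1:32]

x_batch = reduce(hcat, xs)   # 10×32 Matrix, via the optimised method
y_batch = reduce(hcat, ys)   # 2×32 Matrix
# hcat(xs...) would splat all 32 arguments, paying the splatting
# penalty the page warns about.
```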
@@ -47,4 +47,4 @@ evalcb = throttle(30) do
   # Show loss
   @save "model-checkpoint.bson" model
 end
 This will update the "model-checkpoint.bson" file every thirty seconds. You can get more advanced by saving a series of models throughout training, for example
 @save "model-$(now()).bson" model
 will produce a series of models like "model-2018-03-06T02:57:10.41.bson". You could also store the current test set loss, so that it's easy to (for example) revert to an older copy of the model if it starts to overfit.
 @save "model-$(now()).bson" model loss = testloss()
 You can even store optimiser state alongside the model, to resume training exactly where you left off.
 opt = ADAM()
 @save "model-$(now()).bson" model opt
-(generated with Documenter.jl on Monday 25 May 2020 15:19, Julia 1.3.1)
+(generated with Documenter.jl on Wednesday 27 May 2020 11:52, Julia 1.3.1)
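The checkpointing pattern above as one runnable sketch; BSON and the 30-second throttle follow the page, while the model itself is invented:

```julia
using Flux
using Flux: throttle
using BSON: @save
using Dates: now

model = Chain(Dense(10, 5, relu), Dense(5, 2))
opt = ADAM()

# Saves at most one timestamped checkpoint (with optimiser state)
# every 30 seconds; pass this as cb = evalcb to Flux.train!.
evalcb = throttle(30) do
    @save "model-$(now()).bson" model opt
end
```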
@@ -6,4 +6,4 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview', {'page': location.pathname + location.search + location.hash});
 </script> (search page head, asset includes, and sidebar navigation, unchanged)
-(generated with Documenter.jl on Monday 25 May 2020 15:19, Julia 1.3.1)
+(generated with Documenter.jl on Wednesday 27 May 2020 11:52, Julia 1.3.1)
@@ -27,8 +27,8 @@ end … Running this will alter the parameters W and b
 for p in (W, b)
   update!(opt, p, grads[p])
 end
 An optimiser update! accepts a parameter and a gradient, and updates the parameter according to the chosen rule. We can also pass opt to our training loop, which will update all parameters of the model in a loop. However, we can now easily replace Descent with a more advanced optimiser such as ADAM.
 Optimiser Reference: all optimisers return an object that, when passed to train!, will update the parameters passed to it.
 Flux.Optimise.update! — Function
 update!(x, x̄)
 Update the array x according to x .-= x̄.
-[source: https://github.com/FluxML/Flux.jl/blob/85c39e2309bf5004c7c15ece8ce8e313417587da/src/optimise/train.jl#L6-L10]
+[source: https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/train.jl#L6-L10]
 update!(opt, p, g)
 update!(opt, ps::Params, gs)
 Perform an update step of the parameters ps (or the single parameter p) according to optimizer opt and the gradients gs (the gradient g). As a result, the parameters are mutated and the optimizer's internal state may change.
-[source: https://github.com/FluxML/Flux.jl/blob/85c39e2309bf5004c7c15ece8ce8e313417587da/src/optimise/train.jl#L15-L23]
+[source: https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/train.jl#L15-L23]
 Flux.Optimise.Descent — Type
 Descent(η = 0.1)
 Classic gradient descent optimiser with learning rate η. For each parameter p and its gradient δp, this runs p -= η*δp.
 Parameters: learning rate (η), the amount by which gradients are discounted before updating the weights.
 Examples:
 opt = Descent()
 opt = Descent(0.3)
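The manual update loop from this hunk, made self-contained with an invented linear model:

```julia
using Flux
using Flux.Optimise: update!

W = rand(2, 5)
b = rand(2)
predict(x) = W * x .+ b
loss(x, y) = sum((predict(x) .- y) .^ 2)

x, y = rand(5), rand(2)
opt = Descent(0.1)

grads = gradient(() -> loss(x, y), Flux.params(W, b))

# One optimiser step per parameter, as in the loop above.
for p in (W, b)
    update!(opt, p, grads[p])
end
```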
@ -38,29 +38,29 @@ gs = gradient(ps) do
|
|||
loss(x, y)
|
||||
end
|
||||
|
||||
Flux.Optimise.update!(opt, ps, gs)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/85c39e2309bf5004c7c15ece8ce8e313417587da/src/optimise/optimisers.jl#L8-L32">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.Momentum" href="#Flux.Optimise.Momentum"><code>Flux.Optimise.Momentum</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Momentum(η = 0.01, ρ = 0.9)</code></pre><p>Gradient descent optimizer with learning rate <code>η</code> and momentum <code>ρ</code>.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Momentum (<code>ρ</code>): Controls the acceleration of gradient descent in the prominent direction, in effect dampening oscillations.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = Momentum()
|
||||
Flux.Optimise.update!(opt, ps, gs)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/optimisers.jl#L8-L32">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.Momentum" href="#Flux.Optimise.Momentum"><code>Flux.Optimise.Momentum</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Momentum(η = 0.01, ρ = 0.9)</code></pre><p>Gradient descent optimizer with learning rate <code>η</code> and momentum <code>ρ</code>.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Momentum (<code>ρ</code>): Controls the acceleration of gradient descent in the prominent direction, in effect dampening oscillations.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = Momentum()
|
||||
|
||||
opt = Momentum(0.01, 0.99)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/85c39e2309bf5004c7c15ece8ce8e313417587da/src/optimise/optimisers.jl#L43-L60">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.Nesterov" href="#Flux.Optimise.Nesterov"><code>Flux.Optimise.Nesterov</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Nesterov(η = 0.001, ρ = 0.9)</code></pre><p>Gradient descent optimizer with learning rate <code>η</code> and Nesterov momentum <code>ρ</code>.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Nesterov momentum (<code>ρ</code>): Controls the acceleration of gradient descent in the prominent direction, in effect dampening oscillations.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = Nesterov()
|
||||
opt = Momentum(0.01, 0.99)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/optimisers.jl#L43-L60">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.Nesterov" href="#Flux.Optimise.Nesterov"><code>Flux.Optimise.Nesterov</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Nesterov(η = 0.001, ρ = 0.9)</code></pre><p>Gradient descent optimizer with learning rate <code>η</code> and Nesterov momentum <code>ρ</code>.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Nesterov momentum (<code>ρ</code>): Controls the acceleration of gradient descent in the prominent direction, in effect dampening oscillations.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = Nesterov()
opt = Nesterov(0.003, 0.95)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/optimisers.jl#L76-L93">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.RMSProp" href="#Flux.Optimise.RMSProp"><code>Flux.Optimise.RMSProp</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">RMSProp(η = 0.001, ρ = 0.9)</code></pre><p>Optimizer using the <a href="https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf">RMSProp</a> algorithm. Often a good choice for recurrent networks. Parameters other than learning rate generally don't need tuning.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Momentum (<code>ρ</code>): Controls the acceleration of gradient descent in the prominent direction, in effect dampening oscillations.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = RMSProp()
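
# Sketch of the RMSProp update (illustrative): each step is scaled by a
# running average of recent squared gradients,
#   acc = ρ * acc + (1 - ρ) * g^2
#   w   = w - η * g / (sqrt(acc) + ϵ)
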
opt = RMSProp(0.002, 0.95)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/optimisers.jl#L110-L130">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ADAM" href="#Flux.Optimise.ADAM"><code>Flux.Optimise.ADAM</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">ADAM(η = 0.001, β::Tuple = (0.9, 0.999))</code></pre><p><a href="https://arxiv.org/abs/1412.6980v8">ADAM</a> optimiser.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Decay of momentums (<code>β::Tuple</code>): Exponential decay for the first (β1) and the second (β2) momentum estimate.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = ADAM()
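
# Sketch of the ADAM update (illustrative; bias-correction factors omitted):
#   m = β1 * m + (1 - β1) * g      # first-moment estimate
#   v = β2 * v + (1 - β2) * g^2    # second-moment estimate
#   w = w - η * m / (sqrt(v) + ϵ)
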
opt = ADAM(0.001, (0.9, 0.8))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/optimisers.jl#L146-L163">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.RADAM" href="#Flux.Optimise.RADAM"><code>Flux.Optimise.RADAM</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">RADAM(η = 0.001, β::Tuple = (0.9, 0.999))</code></pre><p><a href="https://arxiv.org/pdf/1908.03265v1.pdf">Rectified ADAM</a> optimizer.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Decay of momentums (<code>β::Tuple</code>): Exponential decay for the first (β1) and the second (β2) momentum estimate.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = RADAM()
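
# RADAM follows the ADAM update but rectifies the variance of the adaptive
# learning rate in the early steps, which removes the need for a warmup phase.
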
opt = RADAM(0.001, (0.9, 0.8))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/optimisers.jl#L182-L199">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.AdaMax" href="#Flux.Optimise.AdaMax"><code>Flux.Optimise.AdaMax</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">AdaMax(η = 0.001, β::Tuple = (0.9, 0.999))</code></pre><p><a href="https://arxiv.org/abs/1412.6980v9">AdaMax</a> is a variant of ADAM based on the ∞-norm.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Decay of momentums (<code>β::Tuple</code>): Exponential decay for the first (β1) and the second (β2) momentum estimate.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = AdaMax()
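
# Sketch of the AdaMax update (illustrative): ADAM's second moment is replaced
# by an exponentially weighted infinity norm,
#   m = β1 * m + (1 - β1) * g
#   u = max(β2 * u, abs(g))
#   w = w - η * m / u
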
opt = AdaMax(0.001, (0.9, 0.995))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/optimisers.jl#L225-L242">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ADAGrad" href="#Flux.Optimise.ADAGrad"><code>Flux.Optimise.ADAGrad</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">ADAGrad(η = 0.1)</code></pre><p><a href="http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf">ADAGrad</a> optimizer. It has parameter-specific learning rates based on how frequently each parameter is updated. Parameters don't need tuning.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = ADAGrad()
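
# Sketch of the ADAGrad update (illustrative): each weight's step size shrinks
# as its squared gradients accumulate,
#   acc = acc + g^2
#   w   = w - η * g / (sqrt(acc) + ϵ)
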
opt = ADAGrad(0.001)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/optimisers.jl#L261-L278">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ADADelta" href="#Flux.Optimise.ADADelta"><code>Flux.Optimise.ADADelta</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">ADADelta(ρ = 0.9)</code></pre><p><a href="https://arxiv.org/abs/1212.5701">ADADelta</a> is a version of ADAGrad adapting its learning rate based on a window of past gradient updates. Parameters don't need tuning.</p><p><strong>Parameters</strong></p><ul><li>Rho (<code>ρ</code>): Factor by which the gradient is decayed at each time step.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = ADADelta()
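
# Sketch of the ADADelta update (illustrative): steps are rescaled by a running
# average of previous updates, so no global learning rate is required,
#   acc  = ρ * acc + (1 - ρ) * g^2
#   Δw   = g * sqrt(Δacc + ϵ) / sqrt(acc + ϵ)
#   Δacc = ρ * Δacc + (1 - ρ) * Δw^2
#   w    = w - Δw
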
opt = ADADelta(0.89)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/optimisers.jl#L293-L309">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.AMSGrad" href="#Flux.Optimise.AMSGrad"><code>Flux.Optimise.AMSGrad</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">AMSGrad(η = 0.001, β::Tuple = (0.9, 0.999))</code></pre><p>The <a href="https://openreview.net/forum?id=ryQu7f-RZ">AMSGrad</a> version of the ADAM optimiser. Parameters don't need tuning.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Decay of momentums (<code>β::Tuple</code>): Exponential decay for the first (β1) and the second (β2) momentum estimate.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = AMSGrad()
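
# Sketch of the AMSGrad variant (illustrative): like ADAM, but the second-moment
# estimate used for scaling is never allowed to decrease,
#   v̂ = max(v̂, v)
#   w = w - η * m / (sqrt(v̂) + ϵ)
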
opt = AMSGrad(0.001, (0.89, 0.995))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/optimisers.jl#L326-L344">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.NADAM" href="#Flux.Optimise.NADAM"><code>Flux.Optimise.NADAM</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">NADAM(η = 0.001, β::Tuple = (0.9, 0.999))</code></pre><p><a href="http://cs229.stanford.edu/proj2015/054_report.pdf">NADAM</a> is a Nesterov variant of ADAM. Parameters don't need tuning.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Decay of momentums (<code>β::Tuple</code>): Exponential decay for the first (β1) and the second (β2) momentum estimate.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = NADAM()
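
# NADAM combines the ADAM update with Nesterov-style lookahead on the
# first-moment estimate; construction mirrors ADAM.
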
opt = NADAM(0.002, (0.89, 0.995))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/optimisers.jl#L362-L380">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ADAMW" href="#Flux.Optimise.ADAMW"><code>Flux.Optimise.ADAMW</code></a> — <span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">ADAMW(η = 0.001, β::Tuple = (0.9, 0.999), decay = 0)</code></pre><p><a href="https://arxiv.org/abs/1711.05101">ADAMW</a> is a variant of ADAM that fixes its weight decay regularization by decoupling it from the gradient update.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li>Decay of momentums (<code>β::Tuple</code>): Exponential decay for the first (β1) and the second (β2) momentum estimate.</li><li><code>decay</code>: Decay applied to weights during optimisation.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = ADAMW()
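
# ADAMW decouples weight decay from the gradient-based step; conceptually it
# behaves like composing ADAM with a weight-decay pass (a sketch, not
# necessarily the exact definition):
#   Optimiser(ADAM(η, β), WeightDecay(decay))
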
opt = ADAMW(0.001, (0.89, 0.995), 0.1)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/optimisers.jl#L399-L418">source</a></section></article><h2 id="Optimiser-Interface-1"><a class="docs-heading-anchor" href="#Optimiser-Interface-1">Optimiser Interface</a><a class="docs-heading-anchor-permalink" href="#Optimiser-Interface-1" title="Permalink"></a></h2><p>Flux's optimisers are built around a <code>struct</code> that holds all the optimiser parameters along with a definition of how to apply the update rule associated with it. We do this via the <code>apply!</code> function, which takes the optimiser as the first argument, followed by the parameter and its corresponding gradient.</p><p>In this manner Flux also allows one to create custom optimisers to be used seamlessly. Let's work through a simple example.</p><pre><code class="language-julia">mutable struct Momentum
  eta      # learning rate (η)
  rho      # momentum (ρ)
  velocity # per-parameter velocity state (e.g. an IdDict)
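end

# A minimal sketch of the matching constructor and `apply!` method, assumed for
# illustration, following the apply!(optimiser, parameter, gradient) signature
# described above:
#
#   Momentum(eta::Real, rho::Real) = Momentum(eta, rho, IdDict())
#
#   function Flux.Optimise.apply!(o::Momentum, x, Δ)
#     η, ρ = o.eta, o.rho
#     v = get!(o.velocity, x, zero(x))::typeof(x)
#     @. v = ρ * v - η * Δ
#     @. Δ = -v          # update! then applies x .-= Δ
#   end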
@@ -88,4 +88,4 @@ end
loss(rand(10)) # around 0.9</code></pre><p>In this manner it is possible to compose optimisers for some added flexibility.</p><h2 id="Decays-1"><a class="docs-heading-anchor" href="#Decays-1">Decays</a><a class="docs-heading-anchor-permalink" href="#Decays-1" title="Permalink"></a></h2><p>Similar to optimisers, Flux also defines some simple decays that can be used in conjunction with other optimisers, or standalone.</p><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ExpDecay" href="#Flux.Optimise.ExpDecay"><code>Flux.Optimise.ExpDecay</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">ExpDecay(η = 0.001, decay = 0.1, decay_step = 1000, clip = 1e-4)</code></pre><p>Discount the learning rate <code>η</code> by the factor <code>decay</code> every <code>decay_step</code> steps till a minimum of <code>clip</code>.</p><p><strong>Parameters</strong></p><ul><li>Learning rate (<code>η</code>): Amount by which gradients are discounted before updating the weights.</li><li><code>decay</code>: Factor by which the learning rate is discounted.</li><li><code>decay_step</code>: Schedule decay operations by setting the number of steps between two decay operations.</li><li><code>clip</code>: Minimum value of learning rate.</li></ul><p><strong>Examples</strong></p><p>To apply exponential decay to an optimiser:</p><pre><code class="language-julia">Optimiser(ExpDecay(..), Opt(..))
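# With explicit arguments (the docstring's defaults, shown for illustration):
# start at η = 0.001, multiply it by 0.1 every 1000 steps, floor at 1e-4.
# Optimiser(ExpDecay(0.001, 0.1, 1000, 1e-4), Descent())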
opt = Optimiser(ExpDecay(), ADAM())</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/optimisers.jl#L476-L497">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.InvDecay" href="#Flux.Optimise.InvDecay"><code>Flux.Optimise.InvDecay</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">InvDecay(γ = 0.001)</code></pre><p>Apply inverse time decay to an optimiser, so that the effective step size at iteration <code>n</code> is <code>eta / (1 + γ * n)</code> where <code>eta</code> is the initial step size. The wrapped optimiser's step size is not modified.</p><p><strong>Examples</strong></p><pre><code class="language-julia">Optimiser(InvDecay(..), Opt(..))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/optimisers.jl#L449-L460">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.WeightDecay" href="#Flux.Optimise.WeightDecay"><code>Flux.Optimise.WeightDecay</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">WeightDecay(wd = 0)</code></pre><p>Decay weights by <code>wd</code>.</p><p><strong>Parameters</strong></p><ul><li>Weight decay (<code>wd</code>)</li></ul></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/optimisers.jl#L518-L525">source</a></section></article><h2 id="Gradient-Clipping-1"><a class="docs-heading-anchor" href="#Gradient-Clipping-1">Gradient Clipping</a><a class="docs-heading-anchor-permalink" href="#Gradient-Clipping-1" title="Permalink"></a></h2><p>Gradient clipping is useful for training recurrent neural networks, which have a tendency to suffer from the exploding gradient problem. 
An example usage is</p><pre><code class="language-julia">opt = Optimiser(ClipValue(1e-3), ADAM(1e-3))</code></pre><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ClipValue" href="#Flux.Optimise.ClipValue"><code>Flux.Optimise.ClipValue</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">ClipValue(thresh)</code></pre><p>Clip gradients when their absolute value exceeds <code>thresh</code>.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/optimisers.jl#L537-L541">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ClipNorm" href="#Flux.Optimise.ClipNorm"><code>Flux.Optimise.ClipNorm</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">ClipNorm(thresh)</code></pre><p>Clip gradients when their L2 norm exceeds <code>thresh</code>.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/optimisers.jl#L548-L552">source</a></section></article></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../../data/dataloader/">« DataLoader</a><a class="docs-footer-nextpage" href="../training/">Training »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Wednesday 27 May 2020 11:52">Wednesday 27 May 2020</span>. Using Julia version 1.3.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
File diff suppressed because one or more lines are too long
@@ -24,7 +24,7 @@ julia> Flux.unsqueeze([1 2; 3 4], 2)
[:, :, 2] =
2
4</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/utils.jl#L46-L74">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.stack" href="#Flux.stack"><code>Flux.stack</code></a> — <span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">stack(xs, dim)</code></pre><p>Concatenate the given <code>Array</code> of <code>Array</code>s <code>xs</code> into a single <code>Array</code> along the given dimension <code>dim</code>.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia> xs = [[1, 2], [3, 4], [5, 6]]
3-element Array{Array{Int64,1},1}:
[1, 2]
[3, 4]
@@ -40,12 +40,12 @@ julia> cat(xs, dims=1)
3-element Array{Array{Int64,1},1}:
[1, 2]
[3, 4]
[5, 6]</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/utils.jl#L77-L103">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.unstack" href="#Flux.unstack"><code>Flux.unstack</code></a> — <span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">unstack(xs, dim)</code></pre><p>Unroll the given <code>xs</code> into an <code>Array</code> of <code>Array</code>s along the given dimension <code>dim</code>.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia> Flux.unstack([1 3 5 7; 2 4 6 8], 2)
4-element Array{Array{Int64,1},1}:
[1, 2]
[3, 4]
[5, 6]
[7, 8]</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/utils.jl#L106-L120">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.chunk" href="#Flux.chunk"><code>Flux.chunk</code></a> — <span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">chunk(xs, n)</code></pre><p>Split <code>xs</code> into <code>n</code> parts.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia> Flux.chunk(1:10, 3)
3-element Array{UnitRange{Int64},1}:
1:4
5:8
@@ -55,18 +55,18 @@ julia> Flux.chunk(collect(1:10), 3)
3-element Array{SubArray{Int64,1,Array{Int64,1},Tuple{UnitRange{Int64}},true},1}:
[1, 2, 3, 4]
[5, 6, 7, 8]
[9, 10]</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/utils.jl#L123-L142">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.frequencies" href="#Flux.frequencies"><code>Flux.frequencies</code></a> — <span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">frequencies(xs)</code></pre><p>Count the number of times that each element of <code>xs</code> appears.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia> Flux.frequencies(['a','b','b'])
Dict{Char,Int64} with 2 entries:
'a' => 1
'b' => 2</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/utils.jl#L147-L159">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.batch" href="#Flux.batch"><code>Flux.batch</code></a> — <span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">batch(xs)</code></pre><p>Batch the arrays in <code>xs</code> into a single array.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia> Flux.batch([[1,2,3],[4,5,6]])
3×2 Array{Int64,2}:
1 4
2 5
3 6</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/utils.jl#L172-L185">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.batchseq" href="#Flux.batchseq"><code>Flux.batchseq</code></a> — <span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">batchseq(seqs, pad)</code></pre><p>Take a list of <code>N</code> sequences, and turn them into a single sequence where each item is a batch of <code>N</code>. Short sequences will be padded by <code>pad</code>.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia> Flux.batchseq([[1, 2, 3], [4, 5]], 0)
3-element Array{Array{Int64,1},1}:
[1, 4]
[2, 5]
[3, 0]</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/utils.jl#L217-L231">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Base.rpad-Tuple{AbstractArray{T,1} where T,Integer,Any}" href="#Base.rpad-Tuple{AbstractArray{T,1} where T,Integer,Any}"><code>Base.rpad</code></a> — <span class="docstring-category">Method</span></header><section><div><p>Return the given sequence padded with <code>p</code> up to a maximum length of <code>n</code>.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia> rpad([1, 2], 4, 0)
4-element Array{Int64,1}:
1
2
@@ -77,15 +77,15 @@ julia> rpad([1, 2, 3], 2, 0)
3-element Array{Int64,1}:
1
2
3</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/utils.jl#L196-L214">source</a></section></article><h2 id="Layer-Initialization-1"><a class="docs-heading-anchor" href="#Layer-Initialization-1">Layer Initialization</a><a class="docs-heading-anchor-permalink" href="#Layer-Initialization-1" title="Permalink"></a></h2><p>These are primarily useful if you are planning to write your own layers. Flux initializes convolutional layers and recurrent cells with <code>glorot_uniform</code> by default. To change the default on an applicable layer, pass the desired function with the <code>init</code> keyword. For example:</p><pre><code class="language-julia-repl">julia> conv = Conv((3, 3), 1 => 8, relu; init=Flux.glorot_normal)
Conv((3, 3), 1=>8, relu)</code></pre><article class="docstring"><header><a class="docstring-binding" id="Flux.glorot_uniform" href="#Flux.glorot_uniform"><code>Flux.glorot_uniform</code></a> — <span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">glorot_uniform(dims...)</code></pre><p>Return an <code>Array</code> of size <code>dims</code> containing random variables taken from a uniform distribution in the interval <span>$[-x, x]$</span>, where <code>x = sqrt(24 / sum(dims)) / 2</code>.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia> Flux.glorot_uniform(2, 3)
2×3 Array{Float32,2}:
0.601094 -0.57414 -0.814925
0.900868 0.805994 0.057514</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/utils.jl#L7-L20">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.glorot_normal" href="#Flux.glorot_normal"><code>Flux.glorot_normal</code></a> — <span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">glorot_normal(dims...)</code></pre><p>Return an <code>Array</code> of size <code>dims</code> containing random variables taken from a normal distribution with mean 0 and standard deviation <code>sqrt(2 / sum(dims))</code>.</p><p><strong>Examples</strong></p><pre><code class="language-julia-repl">julia> Flux.glorot_normal(3, 2)
3×2 Array{Float32,2}:
0.429505 -0.0852891
0.523935 0.371009
-0.223261 0.188052</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/utils.jl#L23-L37">source</a></section></article><h2 id="Model-Abstraction-1"><a class="docs-heading-anchor" href="#Model-Abstraction-1">Model Abstraction</a><a class="docs-heading-anchor-permalink" href="#Model-Abstraction-1" title="Permalink"></a></h2><article class="docstring"><header><a class="docstring-binding" id="Flux.destructure" href="#Flux.destructure"><code>Flux.destructure</code></a> — <span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">destructure(m)</code></pre><p>Flatten a model's parameters into a single weight vector.</p><pre><code class="language-none">julia> m = Chain(Dense(10, 5, σ), Dense(5, 2), softmax)
Chain(Dense(10, 5, σ), Dense(5, 2), softmax)
julia> θ, re = destructure(m);
@@ -94,6 +94,6 @@ julia> θ
67-element Array{Float32,1}:
-0.1407104
...</code></pre><p>The second return value <code>re</code> allows you to reconstruct the original network after making modifications to the weight vector (for example, with a hypernetwork).</p><pre><code class="language-none">julia> re(θ .* 2)
Chain(Dense(10, 5, σ), Dense(5, 2), softmax)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/utils.jl#L249-L269">source</a></section></article><h2 id="Callback-Helpers-1"><a class="docs-heading-anchor" href="#Callback-Helpers-1">Callback Helpers</a><a class="docs-heading-anchor-permalink" href="#Callback-Helpers-1" title="Permalink"></a></h2><article class="docstring"><header><a class="docstring-binding" id="Flux.throttle" href="#Flux.throttle"><code>Flux.throttle</code></a> — <span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">throttle(f, timeout; leading=true, trailing=false)</code></pre><p>Return a function that, when invoked, will be triggered at most once during <code>timeout</code> seconds.</p><p>Normally, the throttled function will run as much as it can, without ever going more than once per <code>timeout</code> duration; but if you'd like to disable the execution on the leading edge, pass <code>leading=false</code>. To enable execution on the trailing edge, pass <code>trailing=true</code>.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/utils.jl#L281-L291">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.stop" href="#Flux.Optimise.stop"><code>Flux.Optimise.stop</code></a> — <span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">stop()</code></pre><p>Call <code>Flux.stop()</code> in a callback to signal that a stopping condition has been met. This will trigger the train loop to stop and exit.</p><p><strong>Examples</strong></p><pre><code class="language-julia">cb = function ()
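# `accuracy()` is a user-supplied helper (assumed here); once it exceeds 0.9,
# stop() signals the enclosing Flux.train! loop to exit early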
accuracy() > 0.9 && Flux.stop()
end</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/ddd0f4e747347555894f71ae275ac3906fc87b9e/src/optimise/train.jl#L42-L54">source</a></section></article></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../ecosystem/">« The Julia Ecosystem</a><a class="docs-footer-nextpage" href="../performance/">Performance Tips »</a></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> on <span class="colophon-date" title="Wednesday 27 May 2020 11:52">Wednesday 27 May 2020</span>. Using Julia version 1.3.1.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>