# Optimisers

Consider a [simple linear regression](../../models/basics/). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.

```julia
using Flux

W = rand(2, 5)
b = rand(2)

predict(x) = (W * x) .+ b
loss(x, y) = sum((predict(x) .- y).^2)

x, y = rand(5), rand(2) # Dummy data
l = loss(x, y) # ~ 3

θ = Params([W, b])
grads = gradient(() -> loss(x, y), θ)
```

We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:

```julia
using Flux.Optimise: update!

η = 0.1 # Learning Rate
for p in (W, b)
  update!(p, -η * grads[p])
end
```

Running this will alter the parameters `W` and `b` and our loss should go down. Flux provides a more general way to do optimiser updates like this.

```julia
opt = Descent(0.1) # Gradient descent with learning rate 0.1

for p in (W, b)
  update!(opt, p, grads[p])
end
```

An optimiser `update!` accepts a parameter and a gradient, and updates the parameter according to the chosen rule. We can also pass `opt` to our [training loop](../training/), which will update all parameters of the model in a loop. However, we can now easily replace `Descent` with a more advanced optimiser such as `ADAM`.
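For instance, the manual loop above works unchanged with a stateful optimiser. A minimal sketch, assuming the `W`, `b` and `grads` defined earlier are still in scope:

```julia
opt = ADAM(0.001)            # adaptive optimiser in place of plain Descent

for p in (W, b)
  update!(opt, p, grads[p])  # ADAM keeps per-parameter state between calls
end
```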
## Optimiser Reference

All optimisers return an object that, when passed to `train!`, will update the parameters passed to it.

### `Flux.Optimise.update!` — Function

```julia
update!(x, x̄)
```

Update the array `x` according to `x .-= x̄`.

```julia
update!(opt, p, g)
update!(opt, ps::Params, gs)
```

Perform an update step of the parameters `ps` (or the single parameter `p`) according to optimizer `opt` and the gradients `gs` (the gradient `g`).

As a result, the parameters are mutated and the optimizer's internal state may change.
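The two call forms are sketched below; this assumes `opt`, the parameters `W` and `b`, and the loss and data from the introduction (the lines are alternatives, not a sequence):

```julia
# Single parameter and its gradient:
update!(opt, W, grads[W])

# All parameters at once, via Params:
θ = Params([W, b])
update!(opt, θ, gradient(() -> loss(x, y), θ))
```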
### `Flux.Optimise.Descent` — Type

```julia
Descent(η = 0.1)
```

Classic gradient descent optimiser with learning rate `η`. For each parameter `p` and its gradient `δp`, this runs `p -= η*δp`.

**Parameters**

- Learning rate (`η`): Amount by which gradients are discounted before updating the weights.

**Examples**

```julia
opt = Descent()

opt = Descent(0.3)

ps = params(model)

gs = gradient(ps) do
  loss(x, y)
end

Flux.Optimise.update!(opt, ps, gs)
```

### `Flux.Optimise.Momentum` — Type

```julia
Momentum(η = 0.01, ρ = 0.9)
```

Gradient descent optimizer with learning rate `η` and momentum `ρ`.

**Parameters**

- Learning rate (`η`): Amount by which gradients are discounted before updating the weights.
- Momentum (`ρ`): Controls the acceleration of gradient descent in the prominent direction, in effect dampening oscillations.

**Examples**

```julia
opt = Momentum()

opt = Momentum(0.01, 0.99)
```

### `Flux.Optimise.Nesterov` — Type

```julia
Nesterov(η = 0.001, ρ = 0.9)
```

Gradient descent optimizer with learning rate `η` and Nesterov momentum `ρ`.

**Parameters**

- Learning rate (`η`): Amount by which gradients are discounted before updating the weights.
- Nesterov momentum (`ρ`): Controls the acceleration of gradient descent in the prominent direction, in effect dampening oscillations.

**Examples**

```julia
opt = Nesterov()

opt = Nesterov(0.003, 0.95)
```

### `Flux.Optimise.RMSProp` — Type

```julia
RMSProp(η = 0.001, ρ = 0.9)
```

Optimizer using the [RMSProp](https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf) algorithm. Often a good choice for recurrent networks. Parameters other than learning rate generally don't need tuning.

**Parameters**

- Learning rate (`η`): Amount by which gradients are discounted before updating the weights.
- Momentum (`ρ`): Controls the acceleration of gradient descent in the prominent direction, in effect dampening oscillations.

**Examples**

```julia
opt = RMSProp()

opt = RMSProp(0.002, 0.95)
```

### `Flux.Optimise.ADAM` — Type

```julia
ADAM(η = 0.001, β::Tuple = (0.9, 0.999))
```

[ADAM](https://arxiv.org/abs/1412.6980v8) optimiser.

**Parameters**

- Learning rate (`η`): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (`β::Tuple`): Exponential decay for the first (β1) and the second (β2) momentum estimate.

**Examples**

```julia
opt = ADAM()

opt = ADAM(0.001, (0.9, 0.8))
```

### `Flux.Optimise.RADAM` — Type

```julia
RADAM(η = 0.001, β::Tuple = (0.9, 0.999))
```

[Rectified ADAM](https://arxiv.org/pdf/1908.03265v1.pdf) optimizer.

**Parameters**

- Learning rate (`η`): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (`β::Tuple`): Exponential decay for the first (β1) and the second (β2) momentum estimate.

**Examples**

```julia
opt = RADAM()

opt = RADAM(0.001, (0.9, 0.8))
```

### `Flux.Optimise.AdaMax` — Type

```julia
AdaMax(η = 0.001, β::Tuple = (0.9, 0.999))
```

[AdaMax](https://arxiv.org/abs/1412.6980v9) is a variant of ADAM based on the ∞-norm.

**Parameters**

- Learning rate (`η`): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (`β::Tuple`): Exponential decay for the first (β1) and the second (β2) momentum estimate.

**Examples**

```julia
opt = AdaMax()

opt = AdaMax(0.001, (0.9, 0.995))
```

### `Flux.Optimise.ADAGrad` — Type

```julia
ADAGrad(η = 0.1)
```

[ADAGrad](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf) optimizer. It has parameter-specific learning rates based on how frequently each parameter is updated. Parameters don't need tuning.

**Parameters**

- Learning rate (`η`): Amount by which gradients are discounted before updating the weights.

**Examples**

```julia
opt = ADAGrad()

opt = ADAGrad(0.001)
```

### `Flux.Optimise.ADADelta` — Type

```julia
ADADelta(ρ = 0.9)
```

[ADADelta](https://arxiv.org/abs/1212.5701) is a version of ADAGrad adapting its learning rate based on a window of past gradient updates. Parameters don't need tuning.

**Parameters**

- Rho (`ρ`): Factor by which the gradient is decayed at each time step.

**Examples**

```julia
opt = ADADelta()

opt = ADADelta(0.89)
```

### `Flux.Optimise.AMSGrad` — Type

```julia
AMSGrad(η = 0.001, β::Tuple = (0.9, 0.999))
```

The [AMSGrad](https://openreview.net/forum?id=ryQu7f-RZ) version of the ADAM optimiser. Parameters don't need tuning.

**Parameters**

- Learning rate (`η`): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (`β::Tuple`): Exponential decay for the first (β1) and the second (β2) momentum estimate.

**Examples**

```julia
opt = AMSGrad()

opt = AMSGrad(0.001, (0.89, 0.995))
```

### `Flux.Optimise.NADAM` — Type

```julia
NADAM(η = 0.001, β::Tuple = (0.9, 0.999))
```

[NADAM](http://cs229.stanford.edu/proj2015/054_report.pdf) is a Nesterov variant of ADAM. Parameters don't need tuning.

**Parameters**

- Learning rate (`η`): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (`β::Tuple`): Exponential decay for the first (β1) and the second (β2) momentum estimate.

**Examples**

```julia
opt = NADAM()

opt = NADAM(0.002, (0.89, 0.995))
```

### `Flux.Optimise.ADAMW` — Function

```julia
ADAMW(η = 0.001, β::Tuple = (0.9, 0.999), decay = 0)
```

[ADAMW](https://arxiv.org/abs/1711.05101) is a variant of ADAM fixing (as in repairing) its weight decay regularization.

**Parameters**

- Learning rate (`η`): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (`β::Tuple`): Exponential decay for the first (β1) and the second (β2) momentum estimate.
- `decay`: Decay applied to weights during optimisation.

**Examples**

```julia
opt = ADAMW()

opt = ADAMW(0.001, (0.89, 0.995), 0.1)
```
## Optimiser Interface

Flux's optimisers are built around a `struct` that holds all the optimiser parameters along with a definition of how to apply the update rule associated with it. We do this via the `apply!` function, which takes the optimiser as the first argument followed by the parameter and its corresponding gradient.

In this manner Flux also allows one to create custom optimisers to be used seamlessly. Let's work through a simple example.

```julia
mutable struct Momentum
  eta
  rho
  velocity
end

Momentum(eta::Real, rho::Real) = Momentum(eta, rho, IdDict())
```

The `Momentum` type will act as our optimiser in this case. Notice that we have added all the parameters as fields, along with the velocity which we will use as our state dictionary. Each parameter in our models will get an entry in there. We can now define the rule applied when this optimiser is invoked.

```julia
function Flux.Optimise.apply!(o::Momentum, x, Δ)
  η, ρ = o.eta, o.rho
  v = get!(o.velocity, x, zero(x))::typeof(x)
  @. v = ρ * v - η * Δ
  @. Δ = -v
end
```

This is the basic definition of a Momentum update rule, given by:

```math
v = ρ * v - η * Δ
w = w - v
```

`apply!` defines the update rule for an optimiser `opt`, given the parameters and gradients, and returns the updated gradients. Here, the velocity `v` for every parameter `x` is retrieved from (or initialised in) the optimiser's running state, and updating it in turn updates the state of the optimiser.

Flux internally calls on this function via the `update!` function. It shares the API with `apply!` but ensures that multiple parameters are handled gracefully.
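As a quick check of the custom rule above — a minimal sketch, assuming the `W`, `b` and `grads` from the introduction are still in scope:

```julia
opt = Momentum(0.01, 0.9)    # our custom type defined above, not Flux's built-in Momentum

for p in (W, b)
  Flux.Optimise.update!(opt, p, grads[p])  # dispatches to our apply! under the hood
end
```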
## Composing Optimisers

Flux defines a special kind of optimiser simply called `Optimiser` which takes in arbitrary optimisers as input. Its behaviour is similar to the usual optimisers, but differs in that it acts by calling the optimisers listed in it sequentially. Each optimiser produces a modified gradient that will be fed into the next, and the resultant update will be applied to the parameter as usual. A classic use case is adding decays: Flux defines some basic decays including `ExpDecay`, `InvDecay`, etc.

```julia
opt = Optimiser(ExpDecay(0.001, 0.1, 1000, 1e-4), Descent())
```

Here we apply exponential decay to the `Descent` optimiser. With the defaults of `ExpDecay`, its learning rate will be decayed every 1000 steps. It is then applied like any optimiser.

```julia
w = randn(10, 10)
w1 = randn(10, 10)
ps = Params([w, w1])

loss(x) = Flux.mse(w * x, w1 * x)

loss(rand(10)) # around 9

for t = 1:10^5
  θ = Params([w, w1])
  θ̄ = gradient(() -> loss(rand(10)), θ)
  Flux.Optimise.update!(opt, θ, θ̄)
end

loss(rand(10)) # around 0.9
```

In this manner it is possible to compose optimisers for some added flexibility.
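For example, several of the wrappers documented below can be chained inside a single `Optimiser`; a sketch, where the exact rates are only placeholders:

```julia
# Weight decay, then a learning-rate schedule, then the ADAM update itself.
opt = Optimiser(WeightDecay(1e-4), ExpDecay(0.001, 0.5, 1000, 1e-5), ADAM(0.001))
```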
## Decays

Similar to optimisers, Flux also defines some simple decays that can be used in conjunction with other optimisers, or standalone.

### `Flux.Optimise.ExpDecay` — Type

```julia
ExpDecay(η = 0.001, decay = 0.1, decay_step = 1000, clip = 1e-4)
```

Discount the learning rate `η` by the factor `decay` every `decay_step` steps until a minimum of `clip`.

**Parameters**

- Learning rate (`η`): Amount by which gradients are discounted before updating the weights.
- `decay`: Factor by which the learning rate is discounted.
- `decay_step`: Schedule decay operations by setting the number of steps between two decay operations.
- `clip`: Minimum value of learning rate.

**Examples**

To apply exponential decay to an optimiser:

```julia
Optimiser(ExpDecay(..), Opt(..))

opt = Optimiser(ExpDecay(), ADAM())
```

### `Flux.Optimise.InvDecay` — Type

```julia
InvDecay(γ = 0.001)
```

Apply inverse time decay to an optimiser, so that the effective step size at iteration `n` is `eta / (1 + γ * n)` where `eta` is the initial step size. The wrapped optimiser's step size is not modified.

**Examples**

```julia
Optimiser(InvDecay(..), Opt(..))
```

### `Flux.Optimise.WeightDecay` — Type

```julia
WeightDecay(wd = 0)
```

Decay weights by `wd`.

**Parameters**

- Weight decay (`wd`)
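`WeightDecay` is typically combined with another optimiser rather than used on its own; a minimal sketch, with the coefficient as a placeholder:

```julia
# Adds a weight-decay term to the gradient before the ADAM update is applied.
opt = Optimiser(WeightDecay(1e-4), ADAM())
```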
## Gradient Clipping

Gradient clipping is useful for training recurrent neural networks, which have a tendency to suffer from the exploding gradient problem. An example usage is

```julia
opt = Optimiser(ClipValue(1e-3), ADAM(1e-3))
```

### `Flux.Optimise.ClipValue` — Type

```julia
ClipValue(thresh)
```

Clip gradients when their absolute value exceeds `thresh`.

### `Flux.Optimise.ClipNorm` — Type

```julia
ClipNorm(thresh)
```

Clip gradients when their L2 norm exceeds `thresh`.
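`ClipNorm` composes the same way; a sketch, with the threshold and learning rate as placeholders:

```julia
# Clip each gradient's L2 norm at 1 before the ADAM update is applied.
opt = Optimiser(ClipNorm(1.0), ADAM(1e-3))
```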