grads = gradient(() -> loss(x, y), θ)</code></pre><p>We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:</p><pre><code class="language-julia">using Flux.Optimise: update!
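
# The lines below sketch the manual update step; W and b are assumed to be the
# parameters captured in θ above. Note that update!(x, x̄) subtracts x̄ from x
# (see its docstring below), so we pass η * grads[p] to descend.
η = 0.1 # learning rate

for p in (W, b)
  update!(p, η * grads[p])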
end</code></pre><p>Running this will alter the parameters <code>W</code> and <code>b</code> and our loss should go down. Flux provides a more general way to do optimiser updates like this.</p><pre><code class="language-julia">opt = Descent(0.1) # Gradient descent with learning rate 0.1
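
# Sketch of the same loop using the optimiser object; W, b and grads are
# assumed to come from the example above.
for p in (W, b)
  update!(opt, p, grads[p])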
end</code></pre><p>An optimiser <code>update!</code> accepts a parameter and a gradient, and updates the parameter according to the chosen rule. We can also pass <code>opt</code> to our <a href="../training/">training loop</a>, which will update all parameters of the model in a loop. Moreover, we can now easily replace <code>Descent</code> with a more advanced optimiser such as <code>ADAM</code>.</p><h2 id="Optimiser-Reference-1"><a class="docs-heading-anchor" href="#Optimiser-Reference-1">Optimiser Reference</a><a class="docs-heading-anchor-permalink" href="#Optimiser-Reference-1" title="Permalink"></a></h2><p>All optimisers return an object that, when passed to <code>train!</code>, will update the parameters passed to it.</p><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.update!" href="#Flux.Optimise.update!"><code>Flux.Optimise.update!</code></a> — <span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">update!(opt, p, g)
update!(opt, ps::Params, gs)</code></pre><p>Perform an update step of the parameters <code>ps</code> (or the single parameter <code>p</code>) according to optimiser <code>opt</code> and the gradients <code>gs</code> (the gradient <code>g</code>).</p><p>As a result, the parameters are mutated and the optimiser's internal state may change.</p><p><code>update!(x, x̄)</code></p><p>Update the array <code>x</code> according to <code>x .-= x̄</code>.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/a874bef6f9d8eb1ab2fe376cb8ad068eb36baf33/src/optimise/train.jl#L5-L17">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.Descent" href="#Flux.Optimise.Descent"><code>Flux.Optimise.Descent</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Descent(η)</code></pre><p>Classic gradient descent optimiser with learning rate <code>η</code>. For each parameter <code>p</code> and its gradient <code>δp</code>, this runs <code>p -= η*δp</code>.</p><p><strong>Parameters</strong></p><ul><li>Learning Rate (η): The amount by which the gradients are discounted before updating the weights. Defaults to <code>0.1</code>.</li></ul><p><strong>Example</strong></p><pre><code class="language-julia-repl">opt = Descent() # uses default η (0.1)
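
opt = Descent(0.3) # explicitly set a learning rate of 0.3

# Illustrative setup for the update call below (not from the original text):
# `model`, `loss`, `x` and `y` are assumed to be defined elsewhere.
ps = Flux.params(model)
gs = gradient(ps) do
  loss(x, y)
end
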
Flux.Optimise.update!(opt, ps, gs)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/a874bef6f9d8eb1ab2fe376cb8ad068eb36baf33/src/optimise/optimisers.jl#L8-L31">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.Momentum" href="#Flux.Optimise.Momentum"><code>Flux.Optimise.Momentum</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Momentum(η, ρ)</code></pre><p>Gradient descent with learning rate <code>η</code> and momentum <code>ρ</code>.</p><p><strong>Parameters</strong></p><ul><li>Learning Rate (<code>η</code>): Amount by which gradients are discounted before updating the weights. Defaults to <code>0.01</code>.</li><li>Momentum (<code>ρ</code>): Parameter that accelerates descent in the relevant direction and dampens oscillations. Defaults to <code>0.9</code>.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = Momentum() # uses defaults of η = 0.01 and ρ = 0.9
opt = Momentum(0.01, 0.99)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/a874bef6f9d8eb1ab2fe376cb8ad068eb36baf33/src/optimise/optimisers.jl#L42-L57">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.Nesterov" href="#Flux.Optimise.Nesterov"><code>Flux.Optimise.Nesterov</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">Nesterov(η, ρ)</code></pre><p>Gradient descent with learning rate <code>η</code> and Nesterov momentum <code>ρ</code>.</p><p><strong>Parameters</strong></p><ul><li>Learning Rate (η): Amount by which the gradients are discounted before updating the weights. Defaults to <code>0.001</code>.</li><li>Nesterov Momentum (ρ): Parameter controlling the amount of Nesterov momentum applied. Defaults to <code>0.9</code>.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = Nesterov() # uses defaults η = 0.001 and ρ = 0.9
opt = Nesterov(0.003, 0.95)</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/a874bef6f9d8eb1ab2fe376cb8ad068eb36baf33/src/optimise/optimisers.jl#L73-L88">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.RMSProp" href="#Flux.Optimise.RMSProp"><code>Flux.Optimise.RMSProp</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">RMSProp(η, ρ)</code></pre><p>Implements the RMSProp algorithm. Often a good choice for recurrent networks. Parameters other than the learning rate generally don't need tuning.</p><p><strong>Parameters</strong></p><ul><li>Learning Rate (η): Defaults to <code>0.001</code>.</li><li>Rho (ρ): Defaults to <code>0.9</code>.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = RMSProp() # uses default η = 0.001 and ρ = 0.9
opt = RMSProp(0.002, 0.95)</code></pre><p><strong>References</strong></p><p><a href="https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf">RMSProp</a></p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/a874bef6f9d8eb1ab2fe376cb8ad068eb36baf33/src/optimise/optimisers.jl#L105-L123">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ADAM" href="#Flux.Optimise.ADAM"><code>Flux.Optimise.ADAM</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">ADAM(η, β::Tuple)</code></pre><p>Implements the ADAM optimiser.</p><p><strong>Parameters</strong></p><ul><li>Learning Rate (<code>η</code>): Defaults to <code>0.001</code>.</li><li>Beta (<code>β::Tuple</code>): The first element refers to β1 and the second to β2. Defaults to <code>(0.9, 0.999)</code>.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = ADAM() # uses the default η = 0.001 and β = (0.9, 0.999)
opt = ADAM(0.001, (0.9, 0.8))</code></pre><p><strong>References</strong></p><p><a href="https://arxiv.org/abs/1412.6980v8">ADAM</a> optimiser.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/a874bef6f9d8eb1ab2fe376cb8ad068eb36baf33/src/optimise/optimisers.jl#L139-L157">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.AdaMax" href="#Flux.Optimise.AdaMax"><code>Flux.Optimise.AdaMax</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">AdaMax(η, β::Tuple)</code></pre><p>Variant of ADAM based on the ∞-norm.</p><p><strong>Parameters</strong></p><ul><li>Learning Rate (η): Defaults to <code>0.001</code>.</li><li>Beta (β::Tuple): The first element refers to β1 and the second to β2. Defaults to <code>(0.9, 0.999)</code>.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = AdaMax() # uses default η and β
opt = AdaMax(0.001, (0.9, 0.995))</code></pre><p><strong>References</strong></p><p><a href="https://arxiv.org/abs/1412.6980v9">AdaMax</a> optimiser.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/a874bef6f9d8eb1ab2fe376cb8ad068eb36baf33/src/optimise/optimisers.jl#L221-L238">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ADAGrad" href="#Flux.Optimise.ADAGrad"><code>Flux.Optimise.ADAGrad</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">ADAGrad(η)</code></pre><p>Implements AdaGrad. It has parameter-specific learning rates based on how frequently each parameter is updated.</p><p><strong>Parameters</strong></p><ul><li>Learning Rate (η): Defaults to <code>0.1</code>.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = ADAGrad() # uses default η = 0.1
opt = ADAGrad(0.001)</code></pre><p><strong>References</strong></p><p><a href="http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf">ADAGrad</a> optimiser. Parameters don't need tuning.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/a874bef6f9d8eb1ab2fe376cb8ad068eb36baf33/src/optimise/optimisers.jl#L257-L275">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ADADelta" href="#Flux.Optimise.ADADelta"><code>Flux.Optimise.ADADelta</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">ADADelta(ρ)</code></pre><p>Version of ADAGrad that adapts its learning rate based on a window of past gradient updates. Parameters don't need tuning.</p><p><strong>Parameters</strong></p><ul><li>Rho (ρ): Factor by which the gradient is decayed at each time step. Defaults to <code>0.9</code>.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = ADADelta() # uses default ρ = 0.9
opt = ADADelta(0.89)</code></pre><p><strong>References</strong></p><p><a href="https://arxiv.org/abs/1212.5701">ADADelta</a> optimiser.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/a874bef6f9d8eb1ab2fe376cb8ad068eb36baf33/src/optimise/optimisers.jl#L290-L306">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.AMSGrad" href="#Flux.Optimise.AMSGrad"><code>Flux.Optimise.AMSGrad</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">AMSGrad(η, β::Tuple)</code></pre><p>Implements the AMSGrad version of the ADAM optimiser. Parameters don't need tuning.</p><p><strong>Parameters</strong></p><ul><li>Learning Rate (η): Defaults to <code>0.001</code>.</li><li>Beta (β::Tuple): The first element refers to β1 and the second to β2. Defaults to <code>(0.9, 0.999)</code>.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = AMSGrad() # uses default η and β
opt = AMSGrad(0.001, (0.89, 0.995))</code></pre><p><strong>References</strong></p><p><a href="https://openreview.net/forum?id=ryQu7f-RZ">AMSGrad</a> optimiser.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/a874bef6f9d8eb1ab2fe376cb8ad068eb36baf33/src/optimise/optimisers.jl#L323-L340">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.NADAM" href="#Flux.Optimise.NADAM"><code>Flux.Optimise.NADAM</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">NADAM(η, β::Tuple)</code></pre><p>Nesterov variant of ADAM. Parameters don't need tuning.</p><p><strong>Parameters</strong></p><ul><li>Learning Rate (η): Defaults to <code>0.001</code>.</li><li>Beta (β::Tuple): The first element refers to β1 and the second to β2. Defaults to <code>(0.9, 0.999)</code>.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = NADAM() # uses default η and β
opt = NADAM(0.002, (0.89, 0.995))</code></pre><p><strong>References</strong></p><p><a href="http://cs229.stanford.edu/proj2015/054_report.pdf">NADAM</a> optimiser.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/a874bef6f9d8eb1ab2fe376cb8ad068eb36baf33/src/optimise/optimisers.jl#L358-L375">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ADAMW" href="#Flux.Optimise.ADAMW"><code>Flux.Optimise.ADAMW</code></a> — <span class="docstring-category">Function</span></header><section><div><pre><code class="language-julia">ADAMW(η, β::Tuple, decay)</code></pre><p>Variant of ADAM with decoupled weight decay regularisation.</p><p><strong>Parameters</strong></p><ul><li>Learning Rate (η): Defaults to <code>0.001</code>.</li><li>Beta (β::Tuple): The first element refers to β1 and the second to β2. Defaults to <code>(0.9, 0.999)</code>.</li><li>decay: Decay applied to weights during optimisation. Defaults to <code>0</code>.</li></ul><p><strong>Examples</strong></p><pre><code class="language-julia">opt = ADAMW() # uses default η, β and decay
opt = ADAMW(0.001, (0.89, 0.995), 0.1)</code></pre><p><strong>References</strong></p><p><a href="https://arxiv.org/abs/1711.05101">ADAMW</a></p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/a874bef6f9d8eb1ab2fe376cb8ad068eb36baf33/src/optimise/optimisers.jl#L394-L412">source</a></section></article><h2 id="Optimiser-Interface-1"><a class="docs-heading-anchor" href="#Optimiser-Interface-1">Optimiser Interface</a><a class="docs-heading-anchor-permalink" href="#Optimiser-Interface-1" title="Permalink"></a></h2><p>Flux's optimisers are built around a <code>struct</code> that holds all the optimiser's parameters along with a definition of how to apply the associated update rule. We do this via the <code>apply!</code> function, which takes the optimiser as its first argument, followed by the parameter and its corresponding gradient.</p><p>In this manner Flux also allows one to create custom optimisers that can be used seamlessly. Let's work through a simple example.</p><pre><code class="language-julia">mutable struct Momentum
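  # Fields referenced in the text below: the hyper-parameters, plus the
  # per-parameter velocity used as the state dictionary.
  eta
  rho
  velocity
end
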
Momentum(eta::Real, rho::Real) = Momentum(eta, rho, IdDict())</code></pre><p>The <code>Momentum</code> type will act as our optimiser in this case. Notice that we have added all the hyper-parameters as fields, along with the velocity, which we will use as our state dictionary. Each parameter in our model will get an entry in it. We can now define the rule applied when this optimiser is invoked.</p><pre><code class="language-julia">function apply!(o::Momentum, x, Δ)
  η, ρ = o.eta, o.rho                          # unpack the hyper-parameters
  v = get!(o.velocity, x, zero(x))::typeof(x)  # fetch (or initialise) the velocity for this parameter
  @. v = ρ * v - η * Δ                         # accumulate momentum and the scaled gradient
  @. Δ = -v                                    # return the modified gradient; update! subtracts this from x
end</code></pre><p>This implements the basic Momentum update rule:</p><div>\[v = ρ * v - η * Δ
w = w + v\]</div><p>The <code>apply!</code> function defines the update rule for an optimiser <code>opt</code>, given a parameter and its gradient. It returns the modified gradient. Here, the state <code>v</code> for each parameter <code>x</code> is looked up (and initialised if missing) in the velocity dictionary, so calling the optimiser also updates its internal state.</p><p>Flux internally calls this function via <code>update!</code>, which shares the API with <code>apply!</code> but ensures that multiple parameters are handled gracefully.</p><h2 id="Composing-Optimisers-1"><a class="docs-heading-anchor" href="#Composing-Optimisers-1">Composing Optimisers</a><a class="docs-heading-anchor-permalink" href="#Composing-Optimisers-1" title="Permalink"></a></h2><p>Flux defines a special kind of optimiser simply called <code>Optimiser</code>, which takes arbitrary optimisers as input. Its behaviour is similar to the usual optimisers, but it acts by calling the optimisers listed in it sequentially. Each optimiser produces a modified gradient that is fed into the next, and the resulting update is applied to the parameter as usual. A classic use case is adding a decay schedule: Flux defines some basic decays, including <code>ExpDecay</code> and <code>InvDecay</code>.</p><pre><code class="language-julia">opt = Optimiser(ExpDecay(0.001, 0.1, 1000, 1e-4), Descent())</code></pre><p>Here we apply exponential decay to the <code>Descent</code> optimiser. With the defaults of <code>ExpDecay</code>, the learning rate is decayed every 1000 steps. The composed optimiser is then applied like any other optimiser.</p><pre><code class="language-julia">w = randn(10, 10)
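
# The lines below sketch one way to complete this example; the helper names
# (w1, ps, loss) are illustrative and assume `using Flux` is in scope.
w1 = randn(10, 10)
ps = Flux.params(w, w1)

loss(x) = Flux.mse(w * x, w1 * x)

opt = Optimiser(ExpDecay(0.001, 0.1, 1000, 1e-4), Descent())

for t = 1:10^5
  gs = gradient(() -> loss(rand(10)), ps)
  Flux.Optimise.update!(opt, ps, gs)
end
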
loss(rand(10)) # around 0.9</code></pre><p>In this manner it is possible to compose optimisers for some added flexibility.</p><h2 id="Decays-1"><a class="docs-heading-anchor" href="#Decays-1">Decays</a><a class="docs-heading-anchor-permalink" href="#Decays-1" title="Permalink"></a></h2><p>Similar to optimisers, Flux also defines some simple decays that can be used in conjunction with other optimisers, or standalone.</p><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.ExpDecay" href="#Flux.Optimise.ExpDecay"><code>Flux.Optimise.ExpDecay</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">ExpDecay(eta, decay, decay_step, clip)</code></pre><p>Discount the learning rate <code>eta</code> by a multiplicative factor <code>decay</code> every <code>decay_step</code> steps, until a minimum of <code>clip</code> is reached.</p><p><strong>Parameters</strong></p><ul><li>Learning Rate (eta): Defaults to <code>0.001</code>.</li><li>decay: Factor by which the learning rate is discounted. Defaults to <code>0.1</code>.</li><li>decay_step: Number of steps between two decay operations. Defaults to <code>1000</code>.</li><li>clip: Minimum value of the learning rate. Defaults to <code>1e-4</code>.</li></ul><p><strong>Example</strong></p><p>To apply exponential decay to an optimiser:</p><pre><code class="language-julia">Optimiser(ExpDecay(..), Opt(..))
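
# For example, wrapping ADAM with the default decay schedule: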
opt = Optimiser(ExpDecay(), ADAM())</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/a874bef6f9d8eb1ab2fe376cb8ad068eb36baf33/src/optimise/optimisers.jl#L471-L488">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.InvDecay" href="#Flux.Optimise.InvDecay"><code>Flux.Optimise.InvDecay</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">InvDecay(γ)</code></pre><p>Applies inverse time decay to an optimiser, i.e., the effective step size at iteration <code>n</code> is <code>eta / (1 + γ * n)</code>, where <code>eta</code> is the initial step size. The wrapped optimiser's step size is not modified.</p><p><strong>Parameters</strong></p><ul><li>gamma (γ): Defaults to <code>0.001</code>.</li></ul><p><strong>Example</strong></p><pre><code class="language-julia">Optimiser(InvDecay(..), Opt(..))</code></pre></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/a874bef6f9d8eb1ab2fe376cb8ad068eb36baf33/src/optimise/optimisers.jl#L443-L455">source</a></section></article><article class="docstring"><header><a class="docstring-binding" id="Flux.Optimise.WeightDecay" href="#Flux.Optimise.WeightDecay"><code>Flux.Optimise.WeightDecay</code></a> — <span class="docstring-category">Type</span></header><section><div><pre><code class="language-julia">WeightDecay(wd)</code></pre><p>Decays the weights by <code>wd</code>.</p><p><strong>Parameters</strong></p><ul><li>Weight decay (wd): Defaults to <code>0</code>.</li></ul></div><a class="docs-sourcelink" target="_blank" href="https://github.com/FluxML/Flux.jl/blob/a874bef6f9d8eb1ab2fe376cb8ad068eb36baf33/src/optimise/optimisers.jl#L509-L516">source</a></section></article></article>