</script><link href="https://cdnjs.cloudflare.com/ajax/libs/normalize/4.2.0/normalize.min.css" rel="stylesheet" type="text/css"/><link href="https://fonts.googleapis.com/css?family=Lato|Roboto+Mono" rel="stylesheet" type="text/css"/><link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.6.3/css/font-awesome.min.css" rel="stylesheet" type="text/css"/><link href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/default.min.css" rel="stylesheet" type="text/css"/><script>documenterBaseURL="../.."</script><script src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.2.0/require.min.js" data-main="../../assets/documenter.js"></script><script src="../../siteinfo.js"></script><script src="../../../versions.js"></script><link href="../../assets/documenter.css" rel="stylesheet" type="text/css"/><link href="../../assets/flux.css" rel="stylesheet" type="text/css"/></head><body><nav class="toc"><h1>Flux</h1><select id="version-selector" onChange="window.location.href=this.value" style="visibility: hidden"></select><form class="search" id="search-form" action="../../search/"><input id="search-query" name="q" type="text" placeholder="Search docs"/></form><ul><li><a class="toctext" href="../../">Home</a></li><li><span class="toctext">Building Models</span><ul><li class="current"><a class="toctext" href>Basics</a><ul class="internal"><li><a class="toctext" href="#Taking-Gradients-1">Taking Gradients</a></li><li><a class="toctext" href="#Simple-Models-1">Simple Models</a></li><li><a class="toctext" href="#Building-Layers-1">Building Layers</a></li><li><a class="toctext" href="#Stacking-It-Up-1">Stacking It Up</a></li><li><a class="toctext" href="#Layer-helpers-1">Layer helpers</a></li><li><a class="toctext" href="#Utility-functions-1">Utility functions</a></li></ul></li><li><a class="toctext" href="../recurrence/">Recurrence</a></li><li><a class="toctext" href="../regularisation/">Regularisation</a></li><li><a class="toctext" href="../layers/">Model
Reference</a></li><li><a class="toctext" href="../nnlib/">NNlib</a></li></ul></li><li><span class="toctext">Handling Data</span><ul><li><a class="toctext" href="../../data/onehot/">One-Hot Encoding</a></li><li><a class="toctext" href="../../data/dataloader/">DataLoader</a></li></ul></li><li><span class="toctext">Training Models</span><ul><li><a class="toctext" href="../../training/optimisers/">Optimisers</a></li><li><a class="toctext" href="../../training/training/">Training</a></li></ul></li><li><a class="toctext" href="../../gpu/">GPU Support</a></li><li><a class="toctext" href="../../saving/">Saving &amp; Loading</a></li><li><a class="toctext" href="../../ecosystem/">The Julia Ecosystem</a></li><li><a class="toctext" href="../../performance/">Performance Tips</a></li><li><a class="toctext" href="../../community/">Community</a></li></ul></nav><article id="docs"><header><nav><ul><li>Building Models</li><li><a href>Basics</a></li></ul><a class="edit-page" href="https://github.com/FluxML/Flux.jl/blob/master/docs/src/models/basics.md"><span class="fa"></span> Edit on GitHub</a></nav><hr/><div id="topbar"><span>Basics</span><a class="fa fa-bars" href="#"></a></div></header><h1><a class="nav-anchor" id="Model-Building-Basics-1" href="#Model-Building-Basics-1">Model-Building Basics</a></h1><h2><a class="nav-anchor" id="Taking-Gradients-1" href="#Taking-Gradients-1">Taking Gradients</a></h2><p>Flux's core feature is taking gradients of Julia code. The <code>gradient</code> function takes another Julia function <code>f</code> and a set of arguments, and returns the gradient with respect to each argument. (It's a good idea to try pasting these examples in the Julia terminal.)</p><pre><code class="language-julia-repl">julia> using Flux

julia> f(x) = 3x^2 + 2x + 1;

julia> df(x) = gradient(f, x)[1]; # df/dx = 6x + 2

julia> df(2)
14

julia> d2f(x) = gradient(df, x)[1]; # d²f/dx² = 6

julia> d2f(2)
6</code></pre><p>When a function has many parameters, we can get the gradient with respect to each of them at the same time:</p><pre><code class="language-julia-repl">julia> f(x, y) = sum((x .- y).^2);

julia> gradient(f, [2, 1], [2, 0])
([0, 2], [0, -2])</code></pre><p>But machine learning models can have <em>hundreds</em> of parameters! To handle this, Flux lets you work with collections of parameters via <code>params</code>. You can get the gradient of all parameters used in a program without explicitly passing them in.</p><pre><code class="language-julia-repl">julia> using Flux

julia> x = [2, 1];

julia> y = [2, 0];

julia> gs = gradient(params(x, y)) do
         f(x, y)
       end
Grads(...)

julia> gs[x]
2-element Array{Int64,1}:
 0
 2

julia> gs[y]
2-element Array{Int64,1}:
  0
 -2</code></pre><p>Here, <code>gradient</code> takes a zero-argument function; no arguments are necessary because the <code>params</code> tell it what to differentiate.</p><p>This will come in really handy when dealing with big, complicated models. For now, though, let's start with something simple.</p><h2><a class="nav-anchor" id="Simple-Models-1" href="#Simple-Models-1">Simple Models</a></h2><p>Consider a simple linear regression, which tries to predict an output array <code>y</code> from an input <code>x</code>.</p><pre><code class="language-julia">W = rand(2, 5)
b = rand(2)

predict(x) = W*x .+ b

function loss(x, y)
  ŷ = predict(x)
  sum((y .- ŷ).^2)
end

x, y = rand(5), rand(2) # Dummy data
loss(x, y) # ~ 3</code></pre><p>To improve the prediction we can take the gradients of the loss with respect to <code>W</code> and <code>b</code> and perform gradient descent.</p><pre><code class="language-julia">using Flux

gs = gradient(() -> loss(x, y), params(W, b))</code></pre><p>Now that we have gradients, we can pull them out and update <code>W</code> to train the model.</p><pre><code class="language-julia">W̄ = gs[W]

W .-= 0.1 .* W̄

loss(x, y) # ~ 2.5</code></pre><p>The loss has decreased a little, meaning that our prediction is closer to the target <code>y</code>. If we have some data we can already try <a href="../../training/training/">training the model</a>.</p><p>All deep learning in Flux, however complex, is a simple generalisation of this example. Of course, models can <em>look</em> very different – they might have millions of parameters or complex control flow. Let's see how Flux handles more complex models.</p><h2><a class="nav-anchor" id="Building-Layers-1" href="#Building-Layers-1">Building Layers</a></h2><p>It's common to create more complex models than the linear regression above. For example, we might want two linear layers with a nonlinearity like <a href="https://en.wikipedia.org/wiki/Sigmoid_function">sigmoid</a> (<code>σ</code>) between them. In the style above we could write this as:</p><pre><code class="language-julia">using Flux

W1 = rand(3, 5)
b1 = rand(3)
layer1(x) = W1 * x .+ b1

W2 = rand(2, 3)
b2 = rand(2)
layer2(x) = W2 * x .+ b2

model(x) = layer2(σ.(layer1(x)))

model(rand(5)) # => 2-element vector</code></pre><p>This works but is fairly unwieldy, with a lot of repetition – especially as we add more layers. One way to factor this out is to create a function that returns linear layers.</p><pre><code class="language-julia">function linear(in, out)
  W = randn(out, in)
  b = randn(out)
  x -> W * x .+ b
end

linear1 = linear(5, 3) # we can access linear1.W etc
linear2 = linear(3, 2)

model(x) = linear2(σ.(linear1(x)))

model(rand(5)) # => 2-element vector</code></pre><p>Another (equivalent) way is to create a struct that explicitly represents the affine layer.</p><pre><code class="language-julia">struct Affine
  W
  b
end

Affine(in::Integer, out::Integer) =
  Affine(randn(out, in), randn(out))

# Overload call, so the object can be used as a function
(m::Affine)(x) = m.W * x .+ m.b

a = Affine(10, 5)

a(rand(10)) # => 5-element vector</code></pre><p>Congratulations! You just built the <code>Dense</code> layer that comes with Flux. Flux has many interesting layers available, but they're all things you could have built yourself very easily.</p><p>(There is one small difference with <code>Dense</code>: for convenience it also takes an activation function, like <code>Dense(10, 5, σ)</code>.)</p><h2><a class="nav-anchor" id="Stacking-It-Up-1" href="#Stacking-It-Up-1">Stacking It Up</a></h2><p>It's pretty common to write models that look something like:</p><pre><code class="language-julia">layer1 = Dense(10, 5, σ)
# ...
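# (Hypothetical definitions, for illustration only; any layer sizes
# that compose, e.g. 5 -> 4 -> 2 here, would work.)
layer2 = Dense(5, 4, σ)
layer3 = Dense(4, 2)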
model(x) = layer3(layer2(layer1(x)))</code></pre><p>For long chains, it might be a bit more intuitive to have a list of layers, like this:</p><pre><code class="language-julia">using Flux

layers = [Dense(10, 5, σ), Dense(5, 2), softmax]

model(x) = foldl((x, m) -> m(x), layers, init = x)

model(rand(10)) # => 2-element vector</code></pre><p>Handily, this is also provided for in Flux:</p><pre><code class="language-julia">model2 = Chain(
  Dense(10, 5, σ),
  Dense(5, 2),
  softmax)
model2(rand(10)) # => 2-element vector</code></pre><p>This quickly starts to look like a high-level deep learning library; yet you can see how it falls out of simple abstractions, and we lose none of the power of Julia code.</p><p>A nice property of this approach is that because "models" are just functions (possibly with trainable parameters), you can also see this as simple function composition.</p><pre><code class="language-julia">m = Dense(5, 2) ∘ Dense(10, 5, σ)
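# Note that ∘ composes right to left: the input passes through
# Dense(10, 5, σ) first, then through Dense(5, 2).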

m(rand(10))</code></pre><p>Likewise, <code>Chain</code> will happily work with any Julia function.</p><pre><code class="language-julia">m = Chain(x -> x^2, x -> x+1)
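# Chain applies its functions left to right:
# here 5^2 == 25, then 25 + 1 == 26.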

m(5) # => 26</code></pre><h2><a class="nav-anchor" id="Layer-helpers-1" href="#Layer-helpers-1">Layer helpers</a></h2><p>Flux provides a set of helpers for custom layers, which you can enable by calling:</p><pre><code class="language-julia">Flux.@functor Affine</code></pre><p>This enables a useful extra set of functionality for our <code>Affine</code> layer, such as <a href="../../training/optimisers/">collecting its parameters</a> or <a href="../../gpu/">moving it to the GPU</a>.</p><h2><a class="nav-anchor" id="Utility-functions-1" href="#Utility-functions-1">Utility functions</a></h2><p>Flux provides some utility functions to help you build models programmatically.</p><p><code>outdims</code> enables you to calculate the spatial output dimensions of layers like <code>Conv</code> when applied to input images of a given size. It is currently limited to the following layers:</p><ul><li><code>Chain</code></li><li><code>Dense</code></li><li><code>Conv</code></li><li><code>Diagonal</code></li><li><code>Maxout</code></li><li><code>ConvTranspose</code></li><li><code>DepthwiseConv</code></li><li><code>CrossCor</code></li><li><code>MaxPool</code></li><li><code>MeanPool</code></li></ul><footer><hr/><a class="previous" href="../../"><span class="direction">Previous</span><span class="title">Home</span></a><a class="next" href="../recurrence/"><span class="direction">Next</span><span class="title">Recurrence</span></a></footer></article></body></html>