From 6b5842954d837e88340bcf2955ef996277f046d0 Mon Sep 17 00:00:00 2001
From: autodocs
Date: Tue, 12 Sep 2017 13:22:13 +0000
Subject: [PATCH] build based on f205273

---
 release-0.3/contributing.html        |   2 +-
 release-0.3/data/onehot.html         |   4 +-
 release-0.3/index.html               |   4 +-
 release-0.3/models/basics.html       |   4 +-
 release-0.3/models/layers.html       |   4 +-
 release-0.3/models/recurrence.html   |  13 +-
 release-0.3/search_index.js          |   6 +-
 release-0.3/training/optimisers.html |   4 +-
 release-0.3/training/training.html   |   2 +-
 stable/contributing.html             |   2 +-
 stable/data/onehot.html              |   4 +-
 stable/index.html                    |   4 +-
 stable/models/basics.html            |   4 +-
 stable/models/layers.html            |   4 +-
 stable/models/recurrence.html        |  13 +-
 stable/search_index.js               |   6 +-
 stable/training/optimisers.html      |   4 +-
 stable/training/training.html        |   2 +-
 v0.3.1/assets/arrow.svg              |  63 ++++
 v0.3.1/assets/documenter.css         | 541 +++++++++++++++++++++++++++
 v0.3.1/assets/documenter.js          | 129 +++++++
 v0.3.1/assets/search.js              |  91 +++++
 v0.3.1/contributing.html             |   9 +
 v0.3.1/data/onehot.html              |  40 ++
 v0.3.1/index.html                    |  10 +
 v0.3.1/models/basics.html            |  78 ++++
 v0.3.1/models/layers.html            |  14 +
 v0.3.1/models/recurrence.html        |  42 +++
 v0.3.1/search.html                   |   9 +
 v0.3.1/search_index.js               | 235 ++++++++++++
 v0.3.1/siteinfo.js                   |   1 +
 v0.3.1/training/optimisers.html      |  30 ++
 v0.3.1/training/training.html        |  17 +
 versions.js                          |   1 +
 34 files changed, 1352 insertions(+), 44 deletions(-)
 create mode 100644 v0.3.1/assets/arrow.svg
 create mode 100644 v0.3.1/assets/documenter.css
 create mode 100644 v0.3.1/assets/documenter.js
 create mode 100644 v0.3.1/assets/search.js
 create mode 100644 v0.3.1/contributing.html
 create mode 100644 v0.3.1/data/onehot.html
 create mode 100644 v0.3.1/index.html
 create mode 100644 v0.3.1/models/basics.html
 create mode 100644 v0.3.1/models/layers.html
 create mode 100644 v0.3.1/models/recurrence.html
 create mode 100644 v0.3.1/search.html
 create mode 100644 v0.3.1/search_index.js
 create mode 100644 v0.3.1/siteinfo.js
 create mode 100644 v0.3.1/training/optimisers.html
 create mode 100644 v0.3.1/training/training.html

diff --git a/release-0.3/contributing.html b/release-0.3/contributing.html
index 13faf7fd..698722ce 100644
--- a/release-0.3/contributing.html
+++ b/release-0.3/contributing.html
@@ -6,4 +6,4 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');
-

Contributing & Help

Contributing & Help

If you need help, please ask on the Julia forum, the slack (channel #machine-learning), or Flux's Gitter.

Right now, the best way to help out is to try out the examples and report any issues or missing features as you find them. The second best way is to help us spread the word, perhaps by starring the repo.

If you're interested in hacking on Flux, most of the code is pretty straightforward. Adding new layer definitions or cost functions is simple using the Flux DSL itself, and things like data utilities and training processes are all plain Julia code.

If you get stuck or need anything, let us know!

+

Contributing & Help

Contributing & Help

If you need help, please ask on the Julia forum, the Slack (channel #machine-learning), or Flux's Gitter.

Right now, the best way to help out is to try out the examples and report any issues or missing features as you find them. The second best way is to help us spread the word, perhaps by starring the repo.

If you're interested in hacking on Flux, most of the code is pretty straightforward. Adding new layer definitions or cost functions is simple using the Flux DSL itself, and things like data utilities and training processes are all plain Julia code.

If you get stuck or need anything, let us know!

diff --git a/release-0.3/data/onehot.html b/release-0.3/data/onehot.html
index a1f3f661..4e840987 100644
--- a/release-0.3/data/onehot.html
+++ b/release-0.3/data/onehot.html
@@ -6,7 +6,7 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');
-

One-Hot Encoding

One-Hot Encoding

It's common to encode categorical variables (like true, false or cat, dog) in "one-of-k" or "one-hot" form. Flux provides the onehot function to make this easy.

julia> using Flux: onehot
+

One-Hot Encoding

One-Hot Encoding

It's common to encode categorical variables (like true, false or cat, dog) in "one-of-k" or "one-hot" form. Flux provides the onehot function to make this easy.

julia> using Flux: onehot
 
 julia> onehot(:b, [:a, :b, :c])
 3-element Flux.OneHotVector:
@@ -37,4 +37,4 @@ julia> onecold(ans, [:a, :b, :c])
 3-element Array{Symbol,1}:
   :b
   :a
-  :b

Note that these operations returned OneHotVector and OneHotMatrix rather than Arrays. OneHotVectors behave like normal vectors but avoid any unnecessary cost compared to using an integer index directly.. For example, multiplying a matrix with a one-hot vector simply slices out the relevant row of the matrix under the hood.

+ :b

Note that these operations returned OneHotVector and OneHotMatrix rather than Arrays. OneHotVectors behave like normal vectors but avoid any unnecessary cost compared to using an integer index directly. For example, multiplying a matrix with a one-hot vector simply slices out the relevant column of the matrix under the hood.
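A quick way to check this for yourself (the matrix W here is just dummy data for illustration):

julia> using Flux: onehot

julia> W = rand(2, 3);

julia> W * onehot(:b, [:a, :b, :c]) == W[:, 2]  # picks out the :b column
true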

diff --git a/release-0.3/index.html b/release-0.3/index.html
index bfb2d319..b3fb6887 100644
--- a/release-0.3/index.html
+++ b/release-0.3/index.html
@@ -6,5 +6,5 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');
-

Home

Flux: The Julia Machine Learning Library

Flux is a library for machine learning. It comes "batteries-included" with many useful tools built in, but also lets you use the full power of the Julia language where you need it. The whole stack is implemented in clean Julia code (right down to the GPU kernels) and any part can be tweaked to your liking.

Installation

Install Julia 0.6.0 or later, if you haven't already.

Pkg.add("Flux")
-Pkg.test("Flux") # Check things installed correctly

Start with the basics. The model zoo is also a good starting point for many common kinds of models.

+

Home

Flux: The Julia Machine Learning Library

Flux is a library for machine learning. It comes "batteries-included" with many useful tools built in, but also lets you use the full power of the Julia language where you need it. The whole stack is implemented in clean Julia code (right down to the GPU kernels) and any part can be tweaked to your liking.

Installation

Install Julia 0.6.0 or later, if you haven't already.

Pkg.add("Flux")
+Pkg.test("Flux") # Check things installed correctly

Start with the basics. The model zoo is also a good starting point for many common kinds of models.

diff --git a/release-0.3/models/basics.html b/release-0.3/models/basics.html
index bb829b49..b6b34a4a 100644
--- a/release-0.3/models/basics.html
+++ b/release-0.3/models/basics.html
@@ -6,7 +6,7 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');
-

Basics

Model-Building Basics

Taking Gradients

Consider a simple linear regression, which tries to predict an output array y from an input x. (It's a good idea to follow this example in the Julia repl.)

W = rand(2, 5)
+

Basics

Model-Building Basics

Taking Gradients

Consider a simple linear regression, which tries to predict an output array y from an input x. (It's a good idea to follow this example in the Julia REPL.)

W = rand(2, 5)
 b = rand(2)
 
 predict(x) = W*x .+ b
@@ -24,7 +24,7 @@ back!(l)

loss(x, y) returns the same number, but it& W.data .-= 0.1grad(W) -loss(x, y) # ~ 2.5

The loss has decreased a little, meaning that our prediction x is closer to the target y. If we have some data we can already try training the model.

All deep learning in Flux, however complex, is a simple generalisation of this example. Of course, models can look very different – they might have millions of parameters or complex control flow, and there are ways to manage this complexity. Let's see what that looks like.

Building Layers

It's common to create more complex models than the linear regression above. For example, we might want to have two linear layers with a nonlinearity like sigmoid (σ) in between them. In the above style we could write this as:

W1 = param(rand(3, 5))
+loss(x, y) # ~ 2.5

The loss has decreased a little, meaning that our prediction x is closer to the target y. If we have some data we can already try training the model.

All deep learning in Flux, however complex, is a simple generalisation of this example. Of course, models can look very different – they might have millions of parameters or complex control flow, and there are ways to manage this complexity. Let's see what that looks like.

Building Layers

It's common to create more complex models than the linear regression above. For example, we might want to have two linear layers with a nonlinearity like sigmoid (σ) in between them. In the above style we could write this as:

W1 = param(rand(3, 5))
 b1 = param(rand(3))
 layer1(x) = W1 * x .+ b1
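One way this example might continue is with a second layer and a composed model (the layer2 and model names below are purely illustrative):

W2 = param(rand(2, 3))
b2 = param(rand(2))
layer2(x) = W2 * x .+ b2

model(x) = layer2(σ.(layer1(x)))

model(rand(5)) # sketch: returns a 2-element output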
 
diff --git a/release-0.3/models/layers.html b/release-0.3/models/layers.html
index aa23c806..01f8e87b 100644
--- a/release-0.3/models/layers.html
+++ b/release-0.3/models/layers.html
@@ -6,9 +6,9 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');
-

Layer Reference

Model Layers

Flux.ChainType.
Chain(layers...)

Chain multiple layers / functions together, so that they are called in sequence on a given input.

m = Chain(x -> x^2, x -> x+1)
+

Layer Reference

Model Layers

Flux.ChainType.
Chain(layers...)

Chain multiple layers / functions together, so that they are called in sequence on a given input.

m = Chain(x -> x^2, x -> x+1)
 m(5) == 26
 
 m = Chain(Dense(10, 5), Dense(5, 2))
 x = rand(10)
-m(x) == m[2](m[1](x))

Chain also supports indexing and slicing, e.g. m[2] or m[1:end-1]. m[1:3](x) will calculate the output of the first three layers.

source
Flux.DenseType.
Dense(in::Integer, out::Integer, σ = identity)

Creates a traditional Dense layer with parameters W and b.

y = σ.(W * x .+ b)

The input x must be a vector of length in, or a batch of vectors represented as an in × N matrix. The out y will be a vector or batch of length in.

source
+m(x) == m[2](m[1](x))

Chain also supports indexing and slicing, e.g. m[2] or m[1:end-1]. m[1:3](x) will calculate the output of the first three layers.

source
Flux.DenseType.
Dense(in::Integer, out::Integer, σ = identity)

Creates a traditional Dense layer with parameters W and b.

y = σ.(W * x .+ b)

The input x must be a vector of length in, or a batch of vectors represented as an in × N matrix. The output y will be a vector or batch of length out.

source
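For example, a minimal sketch of Dense on a single sample and on a batch (the sizes are arbitrary):

d = Dense(5, 2)  # 5 inputs, 2 outputs
d(rand(5))       # a single sample: 2-element output
d(rand(5, 64))   # a batch of 64 samples: 2×64 output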
diff --git a/release-0.3/models/recurrence.html b/release-0.3/models/recurrence.html
index 40fcc25e..17e8abc2 100644
--- a/release-0.3/models/recurrence.html
+++ b/release-0.3/models/recurrence.html
@@ -6,7 +6,7 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');
-

Recurrence

Recurrent Models

Recurrent Cells

In the simple feedforward case, our model m is a simple function from various inputs xᵢ to predictions yᵢ. (For example, each x might be an MNIST digit and each y a digit label.) Each prediction is completely independent of any others, and using the same x will always produce the same y.

y₁ = f(x₁)
+

Recurrence

Recurrent Models

Recurrent Cells

In the simple feedforward case, our model m is a simple function from various inputs xᵢ to predictions yᵢ. (For example, each x might be an MNIST digit and each y a digit label.) Each prediction is completely independent of any others, and using the same x will always produce the same y.

y₁ = f(x₁)
 y₂ = f(x₂)
 y₃ = f(x₃)
 # ...

Recurrent networks introduce a hidden state that gets carried over each time we run the model. The model now takes the old h as an input, and produces a new h as output, each time we run it.

h = # ... initial state ...
@@ -25,19 +25,18 @@ end
 x = rand(10) # dummy data
 h = rand(5)  # initial hidden state
 
-h, y = rnn(h, x)

If you run the last line a few times, you'll notice the output y changing slightly even though the input x is the same.

We sometimes refer to functions like rnn above, which explicitly manage state, as recurrent cells. There are various recurrent cells available, which are documented in the layer reference. The hand-written example above can be replaced with:

using Flux
+h, y = rnn(h, x)

If you run the last line a few times, you'll notice the output y changing slightly even though the input x is the same.

We sometimes refer to functions like rnn above, which explicitly manage state, as recurrent cells. There are various recurrent cells available, which are documented in the layer reference. The hand-written example above can be replaced with:

using Flux
 
-m = Flux.RNNCell(10, 5)
+rnn2 = Flux.RNNCell(10, 5)
 
 x = rand(10) # dummy data
 h = rand(5)  # initial hidden state
 
-h, y = rnn(h, x)

Stateful Models

For the most part, we don't want to manage hidden states ourselves, but to treat our models as being stateful. Flux provides the Recur wrapper to do this.

x = rand(10)
+h, y = rnn2(h, x)

Stateful Models

For the most part, we don't want to manage hidden states ourselves, but to treat our models as being stateful. Flux provides the Recur wrapper to do this.

x = rand(10)
 h = rand(5)
 
 m = Flux.Recur(rnn, h)
 
 y = m(x)

The Recur wrapper stores the state between runs in the m.state field.
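A small sketch of how the stored state behaves, continuing from the snippet above (y2 is just an illustrative name):

m.state    # the hidden state left by the call above
y2 = m(x)  # each call writes the new hidden state back into m.state
m.state    # now holds the updated hidden state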

If you use the RNN(10, 5) constructor – as opposed to RNNCell – you'll see that it's simply a wrapped cell.

julia> RNN(10, 5)
-Recur(RNNCell(Dense(15, 5)))

Sequences

Often we want to work with sequences of inputs, rather than individual xs.

seq = [rand(10) for i = 1:10]

With Recur, applying our model to each element of a sequence is trivial:

map(m, seq) # returns a list of 5-element vectors

To make this a bit more convenient, Flux has the Seq type. This is just a list, but tagged so that we know it's meant to be used as a sequence of data points.

seq = Seq([rand(10) for i = 1:10])
-m(seq) # returns a new Seq of length 10

When we apply the model m to a seq, it gets mapped over every item in the sequence in order. This is just like the code above, but often more convenient.

You can get this behaviour more generally with the Over wrapper.

m = Over(Dense(10,5))
-m(seq) # returns a new Seq of length 10

Truncating Gradients

By default, calculating the gradients in a recurrent layer involves the entire history. For example, if we call the model on 100 inputs, calling back! will calculate the gradient for those 100 calls. If we then calculate another 10 inputs we have to calculate 110 gradients – this accumulates and quickly becomes expensive.

To avoid this we can truncate the gradient calculation, forgetting the history.

truncate!(m)

Calling truncate! wipes the slate clean, so we can call the model with more inputs without building up an expensive gradient computation.

+Recur(RNNCell(Dense(15, 5)))

Sequences

Often we want to work with sequences of inputs, rather than individual xs.

seq = [rand(10) for i = 1:10]

With Recur, applying our model to each element of a sequence is trivial:

m.(seq) # returns a list of 5-element vectors

This works even when we've chained recurrent layers into a larger model.

m = Chain(LSTM(10, 15), Dense(15, 5))
+m.(seq)

Truncating Gradients

By default, calculating the gradients in a recurrent layer involves the entire history. For example, if we call the model on 100 inputs, calling back! will calculate the gradient for those 100 calls. If we then calculate another 10 inputs we have to calculate 110 gradients – this accumulates and quickly becomes expensive.

To avoid this we can truncate the gradient calculation, forgetting the history.

truncate!(m)

Calling truncate! wipes the slate clean, so we can call the model with more inputs without building up an expensive gradient computation.
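For instance, a sketch of truncating inside a loop (the every-10-steps schedule is purely illustrative):

for (i, x) in enumerate(seq)
    y = m(x)
    # ... compute a loss and call back! here ...
    i % 10 == 0 && truncate!(m)  # periodically forget the history
end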

diff --git a/release-0.3/search_index.js b/release-0.3/search_index.js index b688d75a..ac4ef48e 100644 --- a/release-0.3/search_index.js +++ b/release-0.3/search_index.js @@ -85,7 +85,7 @@ var documenterSearchIndex = {"docs": [ "page": "Recurrence", "title": "Recurrent Cells", "category": "section", - "text": "In the simple feedforward case, our model m is a simple function from various inputs xᵢ to predictions yᵢ. (For example, each x might be an MNIST digit and each y a digit label.) Each prediction is completely independent of any others, and using the same x will always produce the same y.y₁ = f(x₁)\ny₂ = f(x₂)\ny₃ = f(x₃)\n# ...Recurrent networks introduce a hidden state that gets carried over each time we run the model. The model now takes the old h as an input, and produces a new h as output, each time we run it.h = # ... initial state ...\ny₁, h = f(x₁, h)\ny₂, h = f(x₂, h)\ny₃, h = f(x₃, h)\n# ...Information stored in h is preserved for the next prediction, allowing it to function as a kind of memory. This also means that the prediction made for a given x depends on all the inputs previously fed into the model.(This might be important if, for example, each x represents one word of a sentence; the model's interpretation of the word \"bank\" should change if the previous input was \"river\" rather than \"investment\".)Flux's RNN support closely follows this mathematical perspective. The most basic RNN is as close as possible to a standard Dense layer, and the output and hidden state are the same. By convention, the hidden state is the first input and output.Wxh = randn(5, 10)\nWhh = randn(5, 5)\nb = randn(5)\n\nfunction rnn(h, x)\n h = tanh.(Wxh * x .+ Whh * h .+ b)\n return h, h\nend\n\nx = rand(10) # dummy data\nh = rand(5) # initial hidden state\n\nh, y = rnn(h, x)If you run the last line a few times, you'll notice the output y changing slightly even though the input x is the same.We sometimes refer to functions like rnn above, which explicitly manage state, as recurrent cells. There are various recurrent cells available, which are documented in the layer reference. The hand-written example above can be replaced with:using Flux\n\nm = Flux.RNNCell(10, 5)\n\nx = rand(10) # dummy data\nh = rand(5) # initial hidden state\n\nh, y = rnn(h, x)" + "text": "In the simple feedforward case, our model m is a simple function from various inputs xᵢ to predictions yᵢ. (For example, each x might be an MNIST digit and each y a digit label.) Each prediction is completely independent of any others, and using the same x will always produce the same y.y₁ = f(x₁)\ny₂ = f(x₂)\ny₃ = f(x₃)\n# ...Recurrent networks introduce a hidden state that gets carried over each time we run the model. The model now takes the old h as an input, and produces a new h as output, each time we run it.h = # ... initial state ...\ny₁, h = f(x₁, h)\ny₂, h = f(x₂, h)\ny₃, h = f(x₃, h)\n# ...Information stored in h is preserved for the next prediction, allowing it to function as a kind of memory. This also means that the prediction made for a given x depends on all the inputs previously fed into the model.(This might be important if, for example, each x represents one word of a sentence; the model's interpretation of the word \"bank\" should change if the previous input was \"river\" rather than \"investment\".)Flux's RNN support closely follows this mathematical perspective. The most basic RNN is as close as possible to a standard Dense layer, and the output and hidden state are the same. 
By convention, the hidden state is the first input and output.Wxh = randn(5, 10)\nWhh = randn(5, 5)\nb = randn(5)\n\nfunction rnn(h, x)\n  h = tanh.(Wxh * x .+ Whh * h .+ b)\n  return h, h\nend\n\nx = rand(10) # dummy data\nh = rand(5)  # initial hidden state\n\nh, y = rnn(h, x)If you run the last line a few times, you'll notice the output y changing slightly even though the input x is the same.We sometimes refer to functions like rnn above, which explicitly manage state, as recurrent cells. There are various recurrent cells available, which are documented in the layer reference. The hand-written example above can be replaced with:using Flux\n\nrnn2 = Flux.RNNCell(10, 5)\n\nx = rand(10) # dummy data\nh = rand(5)  # initial hidden state\n\nh, y = rnn2(h, x)"
},

{
@@ -101,7 +101,7 @@ var documenterSearchIndex = {"docs": [
    "page": "Recurrence",
    "title": "Sequences",
    "category": "section",
-    "text": "Often we want to work with sequences of inputs, rather than individual xs.seq = [rand(10) for i = 1:10]With Recur, applying our model to each element of a sequence is trivial:map(m, seq) # returns a list of 5-element vectorsTo make this a bit more convenient, Flux has the Seq type. This is just a list, but tagged so that we know it's meant to be used as a sequence of data points.seq = Seq([rand(10) for i = 1:10])\nm(seq) # returns a new Seq of length 10When we apply the model m to a seq, it gets mapped over every item in the sequence in order. This is just like the code above, but often more convenient.You can get this behaviour more generally with the Over wrapper.m = Over(Dense(10,5))\nm(seq) # returns a new Seq of length 10"
+    "text": "Often we want to work with sequences of inputs, rather than individual xs.seq = [rand(10) for i = 1:10]With Recur, applying our model to each element of a sequence is trivial:m.(seq) # returns a list of 5-element vectorsThis works even when we've chained recurrent layers into a larger model.m = Chain(LSTM(10, 15), Dense(15, 5))\nm.(seq)"
},

{
@@ -213,7 +213,7 @@ var documenterSearchIndex = {"docs": [
    "page": "One-Hot Encoding",
    "title": "Batches",
    "category": "section",
-    "text": "onehotbatch creates a batch (matrix) of one-hot vectors, and argmax treats matrices as batches.julia> using Flux: onehotbatch\n\njulia> onehotbatch([:b, :a, :b], [:a, :b, :c])\n3×3 Flux.OneHotMatrix:\n false   true  false\n  true  false   true\n false  false  false\n\njulia> onecold(ans, [:a, :b, :c])\n3-element Array{Symbol,1}:\n :b\n :a\n :bNote that these operations returned OneHotVector and OneHotMatrix rather than Arrays. OneHotVectors behave like normal vectors but avoid any unnecessary cost compared to using an integer index directly.. For example, multiplying a matrix with a one-hot vector simply slices out the relevant row of the matrix under the hood."
+    "text": "onehotbatch creates a batch (matrix) of one-hot vectors, and argmax treats matrices as batches.julia> using Flux: onehotbatch\n\njulia> onehotbatch([:b, :a, :b], [:a, :b, :c])\n3×3 Flux.OneHotMatrix:\n false   true  false\n  true  false   true\n false  false  false\n\njulia> onecold(ans, [:a, :b, :c])\n3-element Array{Symbol,1}:\n :b\n :a\n :bNote that these operations returned OneHotVector and OneHotMatrix rather than Arrays. OneHotVectors behave like normal vectors but avoid any unnecessary cost compared to using an integer index directly. For example, multiplying a matrix with a one-hot vector simply slices out the relevant column of the matrix under the hood."
},

{
diff --git a/release-0.3/training/optimisers.html b/release-0.3/training/optimisers.html
index be8f85ae..93615408 100644
--- a/release-0.3/training/optimisers.html
+++ b/release-0.3/training/optimisers.html
@@ -6,7 +6,7 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');
-

Optimisers

Optimisers

Consider a simple linear regression. We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters W and b.

W = param(rand(2, 5))
+

Optimisers

Optimisers

Consider a simple linear regression. We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters W and b.

W = param(rand(2, 5))
 b = param(rand(2))
 
 predict(x) = W*x .+ b
@@ -27,4 +27,4 @@ end

If we call update, the parameters W Dense(10, 5, σ), Dense(5, 2), softmax)

Instead of having to write [m[1].W, m[1].b, ...], Flux provides a params function; params(m) returns a list of all parameters in the model for you.

For the update step, there's nothing whatsoever wrong with writing the loop above – it'll work just fine – but Flux provides various optimisers that make it more convenient.

opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1
 
-opt()

An optimiser takes a parameter list and returns a function that does the same thing as update above. We can pass either opt or update to our training loop, which will then run the optimiser after every mini-batch of data.

+opt()

An optimiser takes a parameter list and returns a function that does the same thing as update above. We can pass either opt or update to our training loop, which will then run the optimiser after every mini-batch of data.
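Putting this together, a minimal sketch of such a loop, assuming data yields (x, y) pairs and loss is defined as above:

for (x, y) in data
    back!(loss(x, y))  # accumulate gradients for W and b
    opt()              # run the update step
end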

diff --git a/release-0.3/training/training.html b/release-0.3/training/training.html
index e1c5e1c9..252ce1bf 100644
--- a/release-0.3/training/training.html
+++ b/release-0.3/training/training.html
@@ -6,7 +6,7 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');
-

Training

Training

To actually train a model we need three things:

With these we can call Flux.train!:

Flux.train!(loss, data, opt)

There are plenty of examples in the model zoo.

Loss Functions

The loss that we defined in basics is completely valid for training. We can also define a loss in terms of some model:

m = Chain(
+

Training

Training

To actually train a model we need three things:

  • A loss function that evaluates how well a model is doing given some input data.

  • A collection of data points that will be provided to the loss function.

  • An optimiser that will update the model parameters appropriately.

With these we can call Flux.train!:

Flux.train!(loss, data, opt)

There are plenty of examples in the model zoo.

Loss Functions

The loss that we defined in basics is completely valid for training. We can also define a loss in terms of some model:

m = Chain(
   Dense(784, 32, σ),
   Dense(32, 10), softmax)
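One plausible way to complete this is to define the loss through the model, for example with mean squared error (Flux.mse is assumed here for illustration):

loss(x, y) = Flux.mse(m(x), y)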
 
diff --git a/stable/contributing.html b/stable/contributing.html
index 13faf7fd..698722ce 100644
--- a/stable/contributing.html
+++ b/stable/contributing.html
@@ -6,4 +6,4 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');
-

Contributing & Help

Contributing & Help

If you need help, please ask on the Julia forum, the slack (channel #machine-learning), or Flux's Gitter.

Right now, the best way to help out is to try out the examples and report any issues or missing features as you find them. The second best way is to help us spread the word, perhaps by starring the repo.

If you're interested in hacking on Flux, most of the code is pretty straightforward. Adding new layer definitions or cost functions is simple using the Flux DSL itself, and things like data utilities and training processes are all plain Julia code.

If you get stuck or need anything, let us know!

+

Contributing & Help

Contributing & Help

If you need help, please ask on the Julia forum, the Slack (channel #machine-learning), or Flux's Gitter.

Right now, the best way to help out is to try out the examples and report any issues or missing features as you find them. The second best way is to help us spread the word, perhaps by starring the repo.

If you're interested in hacking on Flux, most of the code is pretty straightforward. Adding new layer definitions or cost functions is simple using the Flux DSL itself, and things like data utilities and training processes are all plain Julia code.

If you get stuck or need anything, let us know!

diff --git a/stable/data/onehot.html b/stable/data/onehot.html
index a1f3f661..4e840987 100644
--- a/stable/data/onehot.html
+++ b/stable/data/onehot.html
@@ -6,7 +6,7 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');
-

One-Hot Encoding

One-Hot Encoding

It's common to encode categorical variables (like true, false or cat, dog) in "one-of-k" or "one-hot" form. Flux provides the onehot function to make this easy.

julia> using Flux: onehot
+

One-Hot Encoding

One-Hot Encoding

It's common to encode categorical variables (like true, false or cat, dog) in "one-of-k" or "one-hot" form. Flux provides the onehot function to make this easy.

julia> using Flux: onehot
 
 julia> onehot(:b, [:a, :b, :c])
 3-element Flux.OneHotVector:
@@ -37,4 +37,4 @@ julia> onecold(ans, [:a, :b, :c])
 3-element Array{Symbol,1}:
   :b
   :a
-  :b

Note that these operations returned OneHotVector and OneHotMatrix rather than Arrays. OneHotVectors behave like normal vectors but avoid any unnecessary cost compared to using an integer index directly.. For example, multiplying a matrix with a one-hot vector simply slices out the relevant row of the matrix under the hood.

+ :b

Note that these operations returned OneHotVector and OneHotMatrix rather than Arrays. OneHotVectors behave like normal vectors but avoid any unnecessary cost compared to using an integer index directly. For example, multiplying a matrix with a one-hot vector simply slices out the relevant column of the matrix under the hood.
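A quick way to check this for yourself (the matrix W here is just dummy data for illustration):

julia> using Flux: onehot

julia> W = rand(2, 3);

julia> W * onehot(:b, [:a, :b, :c]) == W[:, 2]  # picks out the :b column
true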

diff --git a/stable/index.html b/stable/index.html
index bfb2d319..b3fb6887 100644
--- a/stable/index.html
+++ b/stable/index.html
@@ -6,5 +6,5 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');
-

Home

Flux: The Julia Machine Learning Library

Flux is a library for machine learning. It comes "batteries-included" with many useful tools built in, but also lets you use the full power of the Julia language where you need it. The whole stack is implemented in clean Julia code (right down to the GPU kernels) and any part can be tweaked to your liking.

Installation

Install Julia 0.6.0 or later, if you haven't already.

Pkg.add("Flux")
-Pkg.test("Flux") # Check things installed correctly

Start with the basics. The model zoo is also a good starting point for many common kinds of models.

+

Home

Flux: The Julia Machine Learning Library

Flux is a library for machine learning. It comes "batteries-included" with many useful tools built in, but also lets you use the full power of the Julia language where you need it. The whole stack is implemented in clean Julia code (right down to the GPU kernels) and any part can be tweaked to your liking.

Installation

Install Julia 0.6.0 or later, if you haven't already.

Pkg.add("Flux")
+Pkg.test("Flux") # Check things installed correctly

Start with the basics. The model zoo is also a good starting point for many common kinds of models.

diff --git a/stable/models/basics.html b/stable/models/basics.html
index bb829b49..b6b34a4a 100644
--- a/stable/models/basics.html
+++ b/stable/models/basics.html
@@ -6,7 +6,7 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');
-

Basics

Model-Building Basics

Taking Gradients

Consider a simple linear regression, which tries to predict an output array y from an input x. (It's a good idea to follow this example in the Julia repl.)

W = rand(2, 5)
+

Basics

Model-Building Basics

Taking Gradients

Consider a simple linear regression, which tries to predict an output array y from an input x. (It's a good idea to follow this example in the Julia REPL.)

W = rand(2, 5)
 b = rand(2)
 
 predict(x) = W*x .+ b
@@ -24,7 +24,7 @@ back!(l)

loss(x, y) returns the same number, but it& W.data .-= 0.1grad(W) -loss(x, y) # ~ 2.5

The loss has decreased a little, meaning that our prediction x is closer to the target y. If we have some data we can already try training the model.

All deep learning in Flux, however complex, is a simple generalisation of this example. Of course, models can look very different – they might have millions of parameters or complex control flow, and there are ways to manage this complexity. Let's see what that looks like.

Building Layers

It's common to create more complex models than the linear regression above. For example, we might want to have two linear layers with a nonlinearity like sigmoid (σ) in between them. In the above style we could write this as:

W1 = param(rand(3, 5))
+loss(x, y) # ~ 2.5

The loss has decreased a little, meaning that our prediction x is closer to the target y. If we have some data we can already try training the model.

All deep learning in Flux, however complex, is a simple generalisation of this example. Of course, models can look very different – they might have millions of parameters or complex control flow, and there are ways to manage this complexity. Let's see what that looks like.

Building Layers

It's common to create more complex models than the linear regression above. For example, we might want to have two linear layers with a nonlinearity like sigmoid (σ) in between them. In the above style we could write this as:

W1 = param(rand(3, 5))
 b1 = param(rand(3))
 layer1(x) = W1 * x .+ b1
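One way this example might continue is with a second layer and a composed model (the layer2 and model names below are purely illustrative):

W2 = param(rand(2, 3))
b2 = param(rand(2))
layer2(x) = W2 * x .+ b2

model(x) = layer2(σ.(layer1(x)))

model(rand(5)) # sketch: returns a 2-element output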
 
diff --git a/stable/models/layers.html b/stable/models/layers.html
index aa23c806..01f8e87b 100644
--- a/stable/models/layers.html
+++ b/stable/models/layers.html
@@ -6,9 +6,9 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');
-

Layer Reference

Model Layers

Flux.ChainType.
Chain(layers...)

Chain multiple layers / functions together, so that they are called in sequence on a given input.

m = Chain(x -> x^2, x -> x+1)
+

Layer Reference

Model Layers

Flux.ChainType.
Chain(layers...)

Chain multiple layers / functions together, so that they are called in sequence on a given input.

m = Chain(x -> x^2, x -> x+1)
 m(5) == 26
 
 m = Chain(Dense(10, 5), Dense(5, 2))
 x = rand(10)
-m(x) == m[2](m[1](x))

Chain also supports indexing and slicing, e.g. m[2] or m[1:end-1]. m[1:3](x) will calculate the output of the first three layers.

source
Flux.DenseType.
Dense(in::Integer, out::Integer, σ = identity)

Creates a traditional Dense layer with parameters W and b.

y = σ.(W * x .+ b)

The input x must be a vector of length in, or a batch of vectors represented as an in × N matrix. The out y will be a vector or batch of length in.

source
+m(x) == m[2](m[1](x))

Chain also supports indexing and slicing, e.g. m[2] or m[1:end-1]. m[1:3](x) will calculate the output of the first three layers.

source
Flux.DenseType.
Dense(in::Integer, out::Integer, σ = identity)

Creates a traditional Dense layer with parameters W and b.

y = σ.(W * x .+ b)

The input x must be a vector of length in, or a batch of vectors represented as an in × N matrix. The output y will be a vector or batch of length out.

source
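For example, a minimal sketch of Dense on a single sample and on a batch (the sizes are arbitrary):

d = Dense(5, 2)  # 5 inputs, 2 outputs
d(rand(5))       # a single sample: 2-element output
d(rand(5, 64))   # a batch of 64 samples: 2×64 output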
diff --git a/stable/models/recurrence.html b/stable/models/recurrence.html
index 40fcc25e..17e8abc2 100644
--- a/stable/models/recurrence.html
+++ b/stable/models/recurrence.html
@@ -6,7 +6,7 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');
-

Recurrence

Recurrent Models

Recurrent Cells

In the simple feedforward case, our model m is a simple function from various inputs xᵢ to predictions yᵢ. (For example, each x might be an MNIST digit and each y a digit label.) Each prediction is completely independent of any others, and using the same x will always produce the same y.

y₁ = f(x₁)
+

Recurrence

Recurrent Models

Recurrent Cells

In the simple feedforward case, our model m is a simple function from various inputs xᵢ to predictions yᵢ. (For example, each x might be an MNIST digit and each y a digit label.) Each prediction is completely independent of any others, and using the same x will always produce the same y.

y₁ = f(x₁)
 y₂ = f(x₂)
 y₃ = f(x₃)
 # ...

Recurrent networks introduce a hidden state that gets carried over each time we run the model. The model now takes the old h as an input, and produces a new h as output, each time we run it.

h = # ... initial state ...
@@ -25,19 +25,18 @@ end
 x = rand(10) # dummy data
 h = rand(5)  # initial hidden state
 
-h, y = rnn(h, x)

If you run the last line a few times, you'll notice the output y changing slightly even though the input x is the same.

We sometimes refer to functions like rnn above, which explicitly manage state, as recurrent cells. There are various recurrent cells available, which are documented in the layer reference. The hand-written example above can be replaced with:

using Flux
+h, y = rnn(h, x)

If you run the last line a few times, you'll notice the output y changing slightly even though the input x is the same.

We sometimes refer to functions like rnn above, which explicitly manage state, as recurrent cells. There are various recurrent cells available, which are documented in the layer reference. The hand-written example above can be replaced with:

using Flux
 
-m = Flux.RNNCell(10, 5)
+rnn2 = Flux.RNNCell(10, 5)
 
 x = rand(10) # dummy data
 h = rand(5)  # initial hidden state
 
-h, y = rnn(h, x)

Stateful Models

For the most part, we don't want to manage hidden states ourselves, but to treat our models as being stateful. Flux provides the Recur wrapper to do this.

x = rand(10)
+h, y = rnn2(h, x)

Stateful Models

For the most part, we don't want to manage hidden states ourselves, but to treat our models as being stateful. Flux provides the Recur wrapper to do this.

x = rand(10)
 h = rand(5)
 
 m = Flux.Recur(rnn, h)
 
 y = m(x)

The Recur wrapper stores the state between runs in the m.state field.
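A small sketch of how the stored state behaves, continuing from the snippet above (y2 is just an illustrative name):

m.state    # the hidden state left by the call above
y2 = m(x)  # each call writes the new hidden state back into m.state
m.state    # now holds the updated hidden state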

If you use the RNN(10, 5) constructor – as opposed to RNNCell – you'll see that it's simply a wrapped cell.

julia> RNN(10, 5)
-Recur(RNNCell(Dense(15, 5)))

Sequences

Often we want to work with sequences of inputs, rather than individual xs.

seq = [rand(10) for i = 1:10]

With Recur, applying our model to each element of a sequence is trivial:

map(m, seq) # returns a list of 5-element vectors

To make this a bit more convenient, Flux has the Seq type. This is just a list, but tagged so that we know it's meant to be used as a sequence of data points.

seq = Seq([rand(10) for i = 1:10])
-m(seq) # returns a new Seq of length 10

When we apply the model m to a seq, it gets mapped over every item in the sequence in order. This is just like the code above, but often more convenient.

You can get this behaviour more generally with the Over wrapper.

m = Over(Dense(10,5))
-m(seq) # returns a new Seq of length 10

Truncating Gradients

By default, calculating the gradients in a recurrent layer involves the entire history. For example, if we call the model on 100 inputs, calling back! will calculate the gradient for those 100 calls. If we then calculate another 10 inputs we have to calculate 110 gradients – this accumulates and quickly becomes expensive.

To avoid this we can truncate the gradient calculation, forgetting the history.

truncate!(m)

Calling truncate! wipes the slate clean, so we can call the model with more inputs without building up an expensive gradient computation.

+Recur(RNNCell(Dense(15, 5)))

Sequences

Often we want to work with sequences of inputs, rather than individual xs.

seq = [rand(10) for i = 1:10]

With Recur, applying our model to each element of a sequence is trivial:

m.(seq) # returns a list of 5-element vectors

This works even when we've chained recurrent layers into a larger model.

m = Chain(LSTM(10, 15), Dense(15, 5))
+m.(seq)

Truncating Gradients

By default, calculating the gradients in a recurrent layer involves the entire history. For example, if we call the model on 100 inputs, calling back! will calculate the gradient for those 100 calls. If we then calculate another 10 inputs we have to calculate 110 gradients – this accumulates and quickly becomes expensive.

To avoid this we can truncate the gradient calculation, forgetting the history.

truncate!(m)

Calling truncate! wipes the slate clean, so we can call the model with more inputs without building up an expensive gradient computation.
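For instance, a sketch of truncating inside a loop (the every-10-steps schedule is purely illustrative):

for (i, x) in enumerate(seq)
    y = m(x)
    # ... compute a loss and call back! here ...
    i % 10 == 0 && truncate!(m)  # periodically forget the history
end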

diff --git a/stable/search_index.js b/stable/search_index.js index b688d75a..ac4ef48e 100644 --- a/stable/search_index.js +++ b/stable/search_index.js @@ -85,7 +85,7 @@ var documenterSearchIndex = {"docs": [ "page": "Recurrence", "title": "Recurrent Cells", "category": "section", - "text": "In the simple feedforward case, our model m is a simple function from various inputs xᵢ to predictions yᵢ. (For example, each x might be an MNIST digit and each y a digit label.) Each prediction is completely independent of any others, and using the same x will always produce the same y.y₁ = f(x₁)\ny₂ = f(x₂)\ny₃ = f(x₃)\n# ...Recurrent networks introduce a hidden state that gets carried over each time we run the model. The model now takes the old h as an input, and produces a new h as output, each time we run it.h = # ... initial state ...\ny₁, h = f(x₁, h)\ny₂, h = f(x₂, h)\ny₃, h = f(x₃, h)\n# ...Information stored in h is preserved for the next prediction, allowing it to function as a kind of memory. This also means that the prediction made for a given x depends on all the inputs previously fed into the model.(This might be important if, for example, each x represents one word of a sentence; the model's interpretation of the word \"bank\" should change if the previous input was \"river\" rather than \"investment\".)Flux's RNN support closely follows this mathematical perspective. The most basic RNN is as close as possible to a standard Dense layer, and the output and hidden state are the same. By convention, the hidden state is the first input and output.Wxh = randn(5, 10)\nWhh = randn(5, 5)\nb = randn(5)\n\nfunction rnn(h, x)\n h = tanh.(Wxh * x .+ Whh * h .+ b)\n return h, h\nend\n\nx = rand(10) # dummy data\nh = rand(5) # initial hidden state\n\nh, y = rnn(h, x)If you run the last line a few times, you'll notice the output y changing slightly even though the input x is the same.We sometimes refer to functions like rnn above, which explicitly manage state, as recurrent cells. There are various recurrent cells available, which are documented in the layer reference. The hand-written example above can be replaced with:using Flux\n\nm = Flux.RNNCell(10, 5)\n\nx = rand(10) # dummy data\nh = rand(5) # initial hidden state\n\nh, y = rnn(h, x)" + "text": "In the simple feedforward case, our model m is a simple function from various inputs xᵢ to predictions yᵢ. (For example, each x might be an MNIST digit and each y a digit label.) Each prediction is completely independent of any others, and using the same x will always produce the same y.y₁ = f(x₁)\ny₂ = f(x₂)\ny₃ = f(x₃)\n# ...Recurrent networks introduce a hidden state that gets carried over each time we run the model. The model now takes the old h as an input, and produces a new h as output, each time we run it.h = # ... initial state ...\ny₁, h = f(x₁, h)\ny₂, h = f(x₂, h)\ny₃, h = f(x₃, h)\n# ...Information stored in h is preserved for the next prediction, allowing it to function as a kind of memory. This also means that the prediction made for a given x depends on all the inputs previously fed into the model.(This might be important if, for example, each x represents one word of a sentence; the model's interpretation of the word \"bank\" should change if the previous input was \"river\" rather than \"investment\".)Flux's RNN support closely follows this mathematical perspective. The most basic RNN is as close as possible to a standard Dense layer, and the output and hidden state are the same. 
By convention, the hidden state is the first input and output.Wxh = randn(5, 10)\nWhh = randn(5, 5)\nb = randn(5)\n\nfunction rnn(h, x)\n  h = tanh.(Wxh * x .+ Whh * h .+ b)\n  return h, h\nend\n\nx = rand(10) # dummy data\nh = rand(5)  # initial hidden state\n\nh, y = rnn(h, x)If you run the last line a few times, you'll notice the output y changing slightly even though the input x is the same.We sometimes refer to functions like rnn above, which explicitly manage state, as recurrent cells. There are various recurrent cells available, which are documented in the layer reference. The hand-written example above can be replaced with:using Flux\n\nrnn2 = Flux.RNNCell(10, 5)\n\nx = rand(10) # dummy data\nh = rand(5)  # initial hidden state\n\nh, y = rnn2(h, x)"
},

{
@@ -101,7 +101,7 @@ var documenterSearchIndex = {"docs": [
    "page": "Recurrence",
    "title": "Sequences",
    "category": "section",
-    "text": "Often we want to work with sequences of inputs, rather than individual xs.seq = [rand(10) for i = 1:10]With Recur, applying our model to each element of a sequence is trivial:map(m, seq) # returns a list of 5-element vectorsTo make this a bit more convenient, Flux has the Seq type. This is just a list, but tagged so that we know it's meant to be used as a sequence of data points.seq = Seq([rand(10) for i = 1:10])\nm(seq) # returns a new Seq of length 10When we apply the model m to a seq, it gets mapped over every item in the sequence in order. This is just like the code above, but often more convenient.You can get this behaviour more generally with the Over wrapper.m = Over(Dense(10,5))\nm(seq) # returns a new Seq of length 10"
+    "text": "Often we want to work with sequences of inputs, rather than individual xs.seq = [rand(10) for i = 1:10]With Recur, applying our model to each element of a sequence is trivial:m.(seq) # returns a list of 5-element vectorsThis works even when we've chained recurrent layers into a larger model.m = Chain(LSTM(10, 15), Dense(15, 5))\nm.(seq)"
},

{
@@ -213,7 +213,7 @@ var documenterSearchIndex = {"docs": [
    "page": "One-Hot Encoding",
    "title": "Batches",
    "category": "section",
-    "text": "onehotbatch creates a batch (matrix) of one-hot vectors, and argmax treats matrices as batches.julia> using Flux: onehotbatch\n\njulia> onehotbatch([:b, :a, :b], [:a, :b, :c])\n3×3 Flux.OneHotMatrix:\n false   true  false\n  true  false   true\n false  false  false\n\njulia> onecold(ans, [:a, :b, :c])\n3-element Array{Symbol,1}:\n :b\n :a\n :bNote that these operations returned OneHotVector and OneHotMatrix rather than Arrays. OneHotVectors behave like normal vectors but avoid any unnecessary cost compared to using an integer index directly.. For example, multiplying a matrix with a one-hot vector simply slices out the relevant row of the matrix under the hood."
+    "text": "onehotbatch creates a batch (matrix) of one-hot vectors, and argmax treats matrices as batches.julia> using Flux: onehotbatch\n\njulia> onehotbatch([:b, :a, :b], [:a, :b, :c])\n3×3 Flux.OneHotMatrix:\n false   true  false\n  true  false   true\n false  false  false\n\njulia> onecold(ans, [:a, :b, :c])\n3-element Array{Symbol,1}:\n :b\n :a\n :bNote that these operations returned OneHotVector and OneHotMatrix rather than Arrays. OneHotVectors behave like normal vectors but avoid any unnecessary cost compared to using an integer index directly. For example, multiplying a matrix with a one-hot vector simply slices out the relevant column of the matrix under the hood."
},

{
diff --git a/stable/training/optimisers.html b/stable/training/optimisers.html
index be8f85ae..93615408 100644
--- a/stable/training/optimisers.html
+++ b/stable/training/optimisers.html
@@ -6,7 +6,7 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');
-

Optimisers

Optimisers

Consider a simple linear regression. We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters W and b.

W = param(rand(2, 5))
+

Optimisers

Optimisers

Consider a simple linear regression. We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters W and b.

W = param(rand(2, 5))
 b = param(rand(2))
 
 predict(x) = W*x .+ b
@@ -27,4 +27,4 @@ end

If we call update, the parameters W Dense(10, 5, σ), Dense(5, 2), softmax)

Instead of having to write [m[1].W, m[1].b, ...], Flux provides a params function; params(m) returns a list of all parameters in the model for you.

For the update step, there's nothing whatsoever wrong with writing the loop above – it'll work just fine – but Flux provides various optimisers that make it more convenient.

opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1
 
-opt()

An optimiser takes a parameter list and returns a function that does the same thing as update above. We can pass either opt or update to our training loop, which will then run the optimiser after every mini-batch of data.

+opt()

An optimiser takes a parameter list and returns a function that does the same thing as update above. We can pass either opt or update to our training loop, which will then run the optimiser after every mini-batch of data.
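Putting this together, a minimal sketch of such a loop, assuming data yields (x, y) pairs and loss is defined as above:

for (x, y) in data
    back!(loss(x, y))  # accumulate gradients for W and b
    opt()              # run the update step
end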

diff --git a/stable/training/training.html b/stable/training/training.html
index e1c5e1c9..252ce1bf 100644
--- a/stable/training/training.html
+++ b/stable/training/training.html
@@ -6,7 +6,7 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');
-

Training

Training

To actually train a model we need three things:

  • A loss function, that evaluates how well a model is doing given some input data.

  • A collection of data points that will be provided to the loss function.

  • An optimiser that will update the model parameters appropriately.

With these we can call Flux.train!:

Flux.train!(loss, data, opt)

There are plenty of examples in the model zoo.

Loss Functions

The loss that we defined in basics is completely valid for training. We can also define a loss in terms of some model:

m = Chain(
+

Training

Training

To actually train a model we need three things:

  • A loss function that evaluates how well a model is doing given some input data.

  • A collection of data points that will be provided to the loss function.

  • An optimiser that will update the model parameters appropriately.

With these we can call Flux.train!:

Flux.train!(loss, data, opt)

There are plenty of examples in the model zoo.

Loss Functions

The loss that we defined in basics is completely valid for training. We can also define a loss in terms of some model:

m = Chain(
   Dense(784, 32, σ),
   Dense(32, 10), softmax)
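One plausible way to complete this is to define the loss through the model, for example with mean squared error (Flux.mse is assumed here for illustration):

loss(x, y) = Flux.mse(m(x), y)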
 
diff --git a/v0.3.1/assets/arrow.svg b/v0.3.1/assets/arrow.svg
new file mode 100644
index 00000000..ee2798d3
--- /dev/null
+++ b/v0.3.1/assets/arrow.svg
@@ -0,0 +1,63 @@
+
+
+
+
+  
+  
+  
+    
+      
+        image/svg+xml
+        
+        
+      
+    
+  
+  
+    
+  
+
diff --git a/v0.3.1/assets/documenter.css b/v0.3.1/assets/documenter.css
new file mode 100644
index 00000000..b8514efd
--- /dev/null
+++ b/v0.3.1/assets/documenter.css
@@ -0,0 +1,541 @@
+/*
+ * The default CSS style for Documenter.jl generated sites
+ *
+ * Heavily inspired by the Julia Sphinx theme
+ *     https://github.com/JuliaLang/JuliaDoc
+ * which extends the sphinx_rtd_theme
+ *     https://github.com/snide/sphinx_rtd_theme
+ *
+ * Part of Documenter.jl
+ *     https://github.com/JuliaDocs/Documenter.jl
+ *
+ * License: MIT
+ */
+
+/* fonts */
+body, input {
+  font-family: 'Lato', 'Helvetica Neue', Arial, sans-serif;
+  font-size: 16px;
+  color: #222;
+  text-rendering: optimizeLegibility;
+}
+
+pre, code {
+  font-family: 'Roboto Mono', Monaco, courier, monospace;
+  font-size: 0.90em;
+}
+
+pre code {
+  font-size: 1em;
+}
+
+a {
+    color: #2980b9;
+    text-decoration: none;
+}
+
+a:hover {
+    color: #3091d1;
+}
+
+a:visited {
+    color: #9b59b6;
+}
+
+body {
+    line-height: 1.5;
+}
+
+h1 { font-size: 1.75em; }
+h2 { font-size: 1.50em; }
+h3 { font-size: 1.25em; }
+h4 { font-size: 1.15em; }
+h5 { font-size: 1.10em; }
+h6 { font-size: 1em; }
+
+h4, h5, h6 {
+    margin: 1em 0;
+}
+
+img {
+    max-width: 100%;
+}
+
+table {
+    border-collapse: collapse;
+    margin: 1em 0;
+}
+
+th, td {
+    border: 1px solid #e1e4e5;
+    padding: 0.5em 1em;
+}
+
+th {
+    border-bottom-width: 2px;
+}
+
+tr:nth-child(even) {
+    background-color: #f3f6f6;
+}
+
+hr {
+    border: 0;
+    border-top: 1px solid #e5e5e5;
+}
+
+/* Inline code and code blocks */
+
+code {
+    padding: 0.1em;
+    background-color: rgba(0,0,0,.04);
+    border-radius: 3px;
+}
+
+pre {
+    background-color: #f5f5f5;
+    border: 1px solid #dddddd;
+    border-radius: 3px;
+    padding: 0.5em;
+    overflow: auto;
+}
+
+pre code {
+    padding: 0;
+    background-color: initial;
+}
+
+/* Headers in admonitions and docstrings */
+.admonition h1,
+article section.docstring h1 {
+    font-size: 1.25em;
+}
+
+.admonition h2,
+article section.docstring h2 {
+    font-size: 1.10em;
+}
+
+.admonition h3,
+.admonition h4,
+.admonition h5,
+.admonition h6,
+article section.docstring h3,
+article section.docstring h4,
+article section.docstring h5,
+article section.docstring h6 {
+    font-size: 1em;
+}
+
+/* Navigation */
+nav.toc {
+    position: fixed;
+    top: 0;
+    left: 0;
+    bottom: 0;
+    width: 20em;
+    overflow-y: auto;
+    padding: 1em 0;
+    background-color: #fcfcfc;
+    box-shadow: inset -14px 0px 5px -12px rgb(210,210,210);
+}
+
+nav.toc .logo {
+    margin: 0 auto;
+    display: block;
+    max-height: 6em;
+    max-width: 18em;
+}
+
+nav.toc h1 {
+    text-align: center;
+    margin-top: .57em;
+    margin-bottom: 0;
+}
+
+nav.toc select {
+    display: block;
+    height: 2em;
+    padding: 0 1.6em 0 1em;
+    min-width: 7em;
+    max-width: 90%;
+    max-width: calc(100% - 5em);
+    margin: 0 auto;
+    font-size: .83em;
+    border: 1px solid #c9c9c9;
+    border-radius: 1em;
+
+    /* TODO: doesn't seem to be centered on Safari */
+    text-align: center;
+    text-align-last: center;
+
+    appearance: none;
+    -moz-appearance: none;
+    -webkit-appearance: none;
+
+    background: white url("arrow.svg");
+    background-size: 1.155em;
+    background-repeat: no-repeat;
+    background-position: right;
+}
+
+nav.toc select:hover {
+    border: 1px solid #a0a0a0;
+}
+
+nav.toc select option {
+    text-align: center;
+}
+
+nav.toc input {
+    display: block;
+    height: 2em;
+    width: 90%;
+    width: calc(100% - 5em);
+    margin: 1.2em auto;
+    padding: 0 1em;
+    border: 1px solid #c9c9c9;
+    border-radius: 1em;
+    font-size: .83em;
+}
+
+nav.toc > ul * {
+    margin: 0;
+}
+
+nav.toc ul {
+    color: #404040;
+    padding: 0;
+    list-style: none;
+}
+
+nav.toc ul .toctext {
+    color: inherit;
+    display: block;
+}
+
+nav.toc ul a:hover {
+    color: #fcfcfc;
+    background-color: #4e4a4a;
+}
+
+nav.toc ul.internal a {
+    color: inherit;
+    display: block;
+}
+
+nav.toc ul.internal a:hover {
+    background-color: #d6d6d6;
+}
+
+nav.toc ul.internal {
+    background-color: #e3e3e3;
+    box-shadow: inset -14px 0px 5px -12px rgb(210,210,210);
+    list-style: none;
+}
+
+nav.toc ul.internal li.toplevel {
+    border-top: 1px solid #c9c9c9;
+    font-weight: bold;
+}
+
+nav.toc ul.internal li.toplevel:first-child {
+    border-top: none;
+}
+
+nav.toc .toctext {
+    padding-top: 0.3em;
+    padding-bottom: 0.3em;
+    padding-right: 1em;
+}
+
+nav.toc ul .toctext {
+    padding-left: 1em;
+}
+
+nav.toc ul ul .toctext {
+    padding-left: 2em;
+}
+
+nav.toc ul ul ul .toctext {
+    padding-left: 3em;
+}
+
+nav.toc li.current > .toctext {
+    border-top: 1px solid #c9c9c9;
+    border-bottom: 1px solid #c9c9c9;
+    color: #404040;
+    font-weight: bold;
+    background-color: white;
+}
+
+article {
+    margin-left: 20em;
+    min-width: 20em;
+    max-width: 48em;
+    padding: 2em;
+}
+
+article > header {}
+
+article > header div#topbar {
+    display: none;
+}
+
+article > header nav ul {
+    display: inline-block;
+    list-style: none;
+    margin: 0;
+    padding: 0;
+}
+
+article > header nav li {
+    display: inline-block;
+    padding-right: 0.2em;
+}
+
+article > header nav li:before {
+    content: "»";
+    padding-right: 0.2em;
+}
+
+article > header .edit-page {
+    float: right;
+}
+
+article > footer {}
+
+article > footer a.prev {
+    float: left;
+}
+article > footer a.next {
+    float: right;
+}
+
+article > footer a .direction:after {
+    content: ": ";
+}
+
+article hr {
+    margin: 1em 0;
+}
+
+article section.docstring {
+    border: 1px solid #ddd;
+    margin: 0.5em 0;
+    padding: 0.5em;
+    border-radius: 3px;
+}
+
+article section.docstring .docstring-header {
+    margin-bottom: 1em;
+}
+
+article section.docstring .docstring-binding {
+    color: #333;
+    font-weight: bold;
+}
+
+article section.docstring .docstring-category {
+    font-style: italic;
+}
+
+article section.docstring a.source-link {
+  float: left;
+  font-weight: bold;
+}
+
+.nav-anchor,
+.nav-anchor:hover,
+.nav-anchor:visited {
+    color: #333;
+}
+
+/*
+ * Admonitions
+ *
+ * Colors (title, body)
+ * warning: #f0b37e #ffedcc (orange)
+ * note:    #6ab0de #e7f2fa (blue)
+ * tip:     #1abc9c #dbfaf4 (green)
+*/
+.admonition {
+    border-radius: 3px;
+    background-color: #eeeeee;
+}
+
+.admonition-title {
+    border-radius: 3px 3px 0 0;
+    background-color: #9b9b9b;
+    padding: 0.15em 0.5em;
+}
+
+.admonition-text {
+    padding: 0.5em;
+}
+
+.admonition-text > :first-child {
+    margin-top: 0;
+}
+
+.admonition-text > :last-child {
+    margin-bottom: 0;
+}
+
+.admonition > .admonition-title:before {
+    font-family: "FontAwesome";
+    margin-right: 5px;
+    content: "\f06a";
+}
+
+.admonition.warning > .admonition-title {
+    background-color: #f0b37e;
+}
+
+.admonition.warning {
+    background-color: #ffedcc;
+}
+
+.admonition.note > .admonition-title {
+    background-color: #6ab0de;
+}
+
+.admonition.note {
+    background-color: #e7f2fa;
+}
+
+.admonition.tip > .admonition-title {
+    background-color: #1abc9c;
+}
+
+.admonition.tip {
+    background-color: #dbfaf4;
+}
+
+
+/* footnotes */
+.footnote {
+    padding-left: 0.8em;
+    border-left: 2px solid #ccc;
+}
+
+/* Search page */
+#search-results .category {
+    font-size: smaller;
+}
+
+#search-results .category:before {
+    content: " ";
+}
+
+/* Overriding the <code> block style of highlight.js.
+ * We have to override the padding and the background-color, since we style this
+ * part ourselves. Specifically, we style the <pre> surrounding the <code>, while
+ * highlight.js applies the .hljs style directly to the <code> tag.
+ */
+.hljs {
+    background-color: transparent;
+    padding: 0;
+}
+
+@media only screen and (max-width: 768px) {
+    nav.toc {
+        position: fixed;
+        overflow-y: scroll;
+        width: 16em;
+        left: -16em;
+        -webkit-overflow-scrolling: touch;
+        -webkit-transition-property: left; /* Safari */
+        -webkit-transition-duration: 0.3s; /* Safari */
+        transition-property: left;
+        transition-duration: 0.3s;
+        -webkit-transition-timing-function: ease-out; /* Safari */
+        transition-timing-function: ease-out;
+        z-index: 2;
+    }
+
+    nav.toc.show {
+        left: 0;
+    }
+
+    article {
+        margin-left: 0;
+        padding: 3em 0.9em 0 0.9em; /* top right bottom left */
+        overflow-wrap: break-word;
+    }
+
+    article > header {
+        position: fixed;
+        left: 0;
+        z-index: 1;
+    }
+
+    article > header nav, hr {
+        display: none;
+    }
+
+    article > header div#topbar {
+        display: block; /* is mobile */
+        position: fixed;
+        width: 100%;
+        height: 1.5em;
+        padding-top: 1em;
+        padding-bottom: 1em;
+        background-color: #fcfcfc;
+        box-shadow: 0 1px 3px rgba(0,0,0,.26);
+        top: 0;
+        -webkit-transition-property: top; /* Safari */
+        -webkit-transition-duration: 0.3s; /* Safari */
+        transition-property: top;
+        transition-duration: 0.3s;
+    }
+
+    article > header div#topbar.headroom--unpinned.headroom--not-top.headroom--not-bottom {
+        top: -4em;
+        -webkit-transition-property: top; /* Safari */
+        -webkit-transition-duration: 0.7s; /* Safari */
+        transition-property: top;
+        transition-duration: 0.7s;
+    }
+
+    article > header div#topbar span {
+        position: fixed;
+        width: 80%;
+        height: 1.5em;
+        margin-top: -0.1em;
+        margin-left: 0.9em;
+        font-size: 1.2em;
+        overflow: hidden;
+    }
+
+    article > header div#topbar a.fa-bars {
+        float: right;
+        padding: 0.6em;
+        margin-top: -0.6em;
+        margin-right: 0.3em;
+        font-size: 1.5em;
+    }
+
+    article > header div#topbar a.fa-bars:visited {
+        color: #3091d1;
+    }
+
+    article table {
+        overflow-x: auto;
+        display: block;
+    }
+
+    article div.MathJax_Display {
+        overflow: scroll;
+    }
+
+    article span.MathJax {
+        overflow: hidden;
+    }
+}
+
+@media only screen and (max-width: 320px) {
+    body {
+        font-size: 15px;
+    }
+}
diff --git a/v0.3.1/assets/documenter.js b/v0.3.1/assets/documenter.js
new file mode 100644
index 00000000..5d31622f
--- /dev/null
+++ b/v0.3.1/assets/documenter.js
@@ -0,0 +1,129 @@
+/*
+ * Part of Documenter.jl
+ *     https://github.com/JuliaDocs/Documenter.jl
+ *
+ * License: MIT
+ */
+
+requirejs.config({
+    paths: {
+        'jquery': 'https://cdnjs.cloudflare.com/ajax/libs/jquery/3.1.1/jquery.min',
+        'jqueryui': 'https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.12.0/jquery-ui.min',
+        'headroom': 'https://cdnjs.cloudflare.com/ajax/libs/headroom/0.9.3/headroom.min',
+        'mathjax': 'https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_HTML',
+        'highlight': 'https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/highlight.min',
+        'highlight-julia': 'https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/languages/julia.min',
+        'highlight-julia-repl': 'https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/languages/julia-repl.min',
+    },
+    shim: {
+        'mathjax' : {
+            exports: "MathJax"
+        },
+        'highlight-julia': ['highlight'],
+        'highlight-julia-repl': ['highlight'],
+    }
+});
+
+// Load MathJax
+require(['mathjax'], function(MathJax) {
+    MathJax.Hub.Config({
+      "tex2jax": {
+        inlineMath: [['$','$'], ['\\(','\\)']],
+        processEscapes: true
+      }
+    });
+    MathJax.Hub.Config({
+      config: ["MMLorHTML.js"],
+      jax: [
+        "input/TeX",
+        "output/HTML-CSS",
+        "output/NativeMML"
+      ],
+      extensions: [
+        "MathMenu.js",
+        "MathZoom.js",
+        "TeX/AMSmath.js",
+        "TeX/AMSsymbols.js",
+        "TeX/autobold.js",
+        "TeX/autoload-all.js"
+      ]
+    });
+    MathJax.Hub.Config({
+      TeX: { equationNumbers: { autoNumber: "AMS" } }
+    });
+})
+
+require(['jquery', 'highlight', 'highlight-julia', 'highlight-julia-repl'], function($, hljs) {
+    $(document).ready(function() {
+        hljs.initHighlighting();
+    })
+
+})
+
+// update the version selector with info from the siteinfo.js and ../versions.js files
+require(['jquery'], function($) {
+    $(document).ready(function() {
+        var version_selector = $("#version-selector");
+
+        // add the current version to the selector based on siteinfo.js, but only if the selector is empty
+        if (typeof DOCUMENTER_CURRENT_VERSION !== 'undefined' && $('#version-selector > option').length == 0) {
+            var option = $("<option value='#' selected='selected'>" + DOCUMENTER_CURRENT_VERSION + "</option>");
+            version_selector.append(option);
+        }
+
+        if (typeof DOC_VERSIONS !== 'undefined') {
+            var existing_versions = $('#version-selector > option');
+            var existing_versions_texts = existing_versions.map(function(i,x){return x.text});
+            DOC_VERSIONS.forEach(function(each) {
+                var version_url = documenterBaseURL + "/../" + each;
+                var existing_id = $.inArray(each, existing_versions_texts);
+                // if not already in the version selector, add it as a new option,
+                // otherwise update the old option with the URL and enable it
+                if (existing_id == -1) {
+                    var option = $("<option value='" + version_url + "'>" + each + "</option>");
+                    version_selector.append(option);
+                } else {
+                    var option = existing_versions[existing_id];
+                    option.value = version_url;
+                    option.disabled = false;
+                }
+            });
+        }
+
+        // only show the version selector if the selector has been populated
+        if ($('#version-selector > option').length > 0) {
+            version_selector.css("visibility", "visible");
+        }
+    })
+
+})
+
+// mobile
+require(['jquery', 'headroom'], function($, Headroom) {
+    $(document).ready(function() {
+        var navtoc = $("nav.toc");
+        $("nav.toc li.current a.toctext").click(function() {
+            navtoc.toggleClass('show');
+        });
+        $("article > header div#topbar a.fa-bars").click(function(ev) {
+            ev.preventDefault();
+            navtoc.toggleClass('show');
+            if (navtoc.hasClass('show')) {
+                var title = $("article > header div#topbar span").text();
+                $("nav.toc ul li a:contains('" + title + "')").focus();
+            }
+        });
+        $("article#docs").bind('click', function(ev) {
+            if ($(ev.target).is('div#topbar a.fa-bars')) {
+                return;
+            }
+            if (navtoc.hasClass('show')) {
+                navtoc.removeClass('show');
+            }
+        });
+        if ($("article > header div#topbar").css('display') == 'block') {
+            var headroom = new Headroom(document.querySelector("article > header div#topbar"), {"tolerance": {"up": 10, "down": 10}});
+            headroom.init();
+        }
+    })
+})
diff --git a/v0.3.1/assets/search.js b/v0.3.1/assets/search.js
new file mode 100644
index 00000000..4e3e9a4a
--- /dev/null
+++ b/v0.3.1/assets/search.js
@@ -0,0 +1,91 @@
+/*
+ * Part of Documenter.jl
+ *     https://github.com/JuliaDocs/Documenter.jl
+ *
+ * License: MIT
+ */
+
+// parseUri 1.2.2
+// (c) Steven Levithan <stevenlevithan.com>
+// MIT License
+function parseUri (str) {
+	var	o   = parseUri.options,
+		m   = o.parser[o.strictMode ? "strict" : "loose"].exec(str),
+		uri = {},
+		i   = 14;
+
+	while (i--) uri[o.key[i]] = m[i] || "";
+
+	uri[o.q.name] = {};
+	uri[o.key[12]].replace(o.q.parser, function ($0, $1, $2) {
+		if ($1) uri[o.q.name][$1] = $2;
+	});
+
+	return uri;
+};
+parseUri.options = {
+	strictMode: false,
+	key: ["source","protocol","authority","userInfo","user","password","host","port","relative","path","directory","file","query","anchor"],
+	q:   {
+		name:   "queryKey",
+		parser: /(?:^|&)([^&=]*)=?([^&]*)/g
+	},
+	parser: {
+		strict: /^(?:([^:\/?#]+):)?(?:\/\/((?:(([^:@]*)(?::([^:@]*))?)?@)?([^:\/?#]*)(?::(\d*))?))?((((?:[^?#\/]*\/)*)([^?#]*))(?:\?([^#]*))?(?:#(.*))?)/,
+		loose:  /^(?:(?![^:@]+:[^:@\/]*@)([^:\/?#.]+):)?(?:\/\/)?((?:(([^:@]*)(?::([^:@]*))?)?@)?([^:\/?#]*)(?::(\d*))?)(((\/(?:[^?#](?![^?#\/]*\.[^?#\/.]+(?:[?#]|$)))*\/?)?([^?#\/]*))(?:\?([^#]*))?(?:#(.*))?)/
+	}
+};
+
+requirejs.config({
+    paths: {
+        'jquery': 'https://code.jquery.com/jquery-3.1.0.js?',
+        'lunr': 'https://cdnjs.cloudflare.com/ajax/libs/lunr.js/0.7.1/lunr.min',
+    }
+});
+
+var currentScript = document.currentScript;
+
+require(["jquery", "lunr"], function($, lunr) {
+    var index = lunr(function () {
+        this.ref('location')
+        this.field('title', {boost: 10})
+        this.field('text')
+    })
+    var store = {}
+
+    documenterSearchIndex['docs'].forEach(function(e) {
+        index.add(e)
+        store[e.location] = e
+    })
+
+    $(function(){
+        function update_search(query) {
+            results = index.search(query)
+            $('#search-info').text("Number of results: " + results.length)
+            $('#search-results').empty()
+            results.forEach(function(result) {
+                data = store[result.ref]
+                link = $('<a>')
+                link.text(data.title)
+                link.attr('href', documenterBaseURL+'/'+result.ref)
+                cat = $('<span class="category">('+data.category+')</span>')
+                li = $('<li>').append(link).append(cat)
+                $('#search-results').append(li)
+            })
+        }
+
+        function update_search_box() {
+            query = $('#search-query').val()
+            update_search(query)
+        }
+
+        $('#search-query').keyup(update_search_box)
+        $('#search-query').change(update_search_box)
+
+        search_query = parseUri(window.location).queryKey["q"]
+        if(search_query !== undefined) {
+            $("#search-query").val(search_query)
+        }
+        update_search_box();
+    })
+})
diff --git a/v0.3.1/contributing.html b/v0.3.1/contributing.html
new file mode 100644
index 00000000..698722ce
--- /dev/null
+++ b/v0.3.1/contributing.html
@@ -0,0 +1,9 @@
+
+Contributing & Help · Flux

    Contributing & Help

    Contributing & Help

    If you need help, please ask on the Julia forum, the slack (channel #machine-learning), or Flux's Gitter.

    Right now, the best way to help out is to try out the examples and report any issues or missing features as you find them. The second best way is to help us spread the word, perhaps by starring the repo.

    If you're interested in hacking on Flux, most of the code is pretty straightforward. Adding new layer definitions or cost functions is simple using the Flux DSL itself, and things like data utilities and training processes are all plain Julia code.

    If you get stuck or need anything, let us know!

diff --git a/v0.3.1/data/onehot.html b/v0.3.1/data/onehot.html
new file mode 100644
index 00000000..4e840987
--- /dev/null
+++ b/v0.3.1/data/onehot.html
@@ -0,0 +1,40 @@
+
+One-Hot Encoding · Flux

    One-Hot Encoding

    One-Hot Encoding

    It's common to encode categorical variables (like true, false or cat, dog) in "one-of-k" or "one-hot" form. Flux provides the onehot function to make this easy.

    julia> using Flux: onehot
    +
    +julia> onehot(:b, [:a, :b, :c])
    +3-element Flux.OneHotVector:
    + false
    +  true
    + false
    +
    +julia> onehot(:c, [:a, :b, :c])
    +3-element Flux.OneHotVector:
    + false
    + false
    +  true

    The inverse is argmax (which can take a general probability distribution, as well as just booleans).

    julia> argmax(ans, [:a, :b, :c])
    +:c
    +
    +julia> argmax([true, false, false], [:a, :b, :c])
    +:a
    +
    +julia> argmax([0.3, 0.2, 0.5], [:a, :b, :c])
    +:c

    Batches

    onehotbatch creates a batch (matrix) of one-hot vectors, and argmax treats matrices as batches.

    julia> using Flux: onehotbatch
    +
    +julia> onehotbatch([:b, :a, :b], [:a, :b, :c])
    +3×3 Flux.OneHotMatrix:
    + false   true  false
    +  true  false   true
    + false  false  false
    +
+julia> argmax(ans, [:a, :b, :c])
    +3-element Array{Symbol,1}:
    +  :b
    +  :a
    +  :b

Note that these operations returned OneHotVector and OneHotMatrix rather than Arrays. OneHotVectors behave like normal vectors but avoid any unnecessary cost compared to using an integer index directly. For example, multiplying a matrix with a one-hot vector simply slices out the relevant column of the matrix under the hood.
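
To see that slicing behaviour concretely, here is a small sketch (the 2×3 matrix and the vocabulary [:a, :b, :c] are made up for this example):

using Flux: onehot

W = rand(2, 3)
v = onehot(:b, [:a, :b, :c])
W * v == W[:, 2] # => true; the product is just an indexing operation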

diff --git a/v0.3.1/index.html b/v0.3.1/index.html
new file mode 100644
index 00000000..b3fb6887
--- /dev/null
+++ b/v0.3.1/index.html
@@ -0,0 +1,10 @@
+
+Home · Flux

    Home

    Flux: The Julia Machine Learning Library

    Flux is a library for machine learning. It comes "batteries-included" with many useful tools built in, but also lets you use the full power of the Julia language where you need it. The whole stack is implemented in clean Julia code (right down to the GPU kernels) and any part can be tweaked to your liking.

    Installation

    Install Julia 0.6.0 or later, if you haven't already.

    Pkg.add("Flux")
    +Pkg.test("Flux") # Check things installed correctly

    Start with the basics. The model zoo is also a good starting point for many common kinds of models.

diff --git a/v0.3.1/models/basics.html b/v0.3.1/models/basics.html
new file mode 100644
index 00000000..b6b34a4a
--- /dev/null
+++ b/v0.3.1/models/basics.html
@@ -0,0 +1,78 @@
+
+Basics · Flux

    Basics

    Model-Building Basics

    Taking Gradients

    Consider a simple linear regression, which tries to predict an output array y from an input x. (It's a good idea to follow this example in the Julia repl.)

    W = rand(2, 5)
    +b = rand(2)
    +
    +predict(x) = W*x .+ b
    +loss(x, y) = sum((predict(x) .- y).^2)
    +
    +x, y = rand(5), rand(2) # Dummy data
    +loss(x, y) # ~ 3

    To improve the prediction we can take the gradients of W and b with respect to the loss function and perform gradient descent. We could calculate gradients by hand, but Flux will do it for us if we tell it that W and b are trainable parameters.

    using Flux.Tracker: param, back!, data, grad
    +
    +W = param(W)
    +b = param(b)
    +
    +l = loss(x, y)
    +
    +back!(l)

    loss(x, y) returns the same number, but it's now a tracked value that records gradients as it goes along. Calling back! then calculates the gradient of W and b. We can see what this gradient is, and modify W to train the model.

    grad(W)
    +
    +W.data .-= 0.1grad(W)
    +
    +loss(x, y) # ~ 2.5

The loss has decreased a little, meaning that our prediction for x is closer to the target y. If we have some data we can already try training the model.
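
To make that concrete, here is a minimal hand-rolled training loop in the same style (a sketch; the 0.1 learning rate and 20 steps are arbitrary choices):

grad(W) .= 0; grad(b) .= 0 # discard the gradient left over from the step above

for i = 1:20
  back!(loss(x, y))
  for p in (W, b)
    θ, Δ = data(p), grad(p)
    θ .-= 0.1 .* Δ # gradient descent step
    Δ .= 0         # clear the gradient before the next pass
  end
end

loss(x, y) # smaller again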

    All deep learning in Flux, however complex, is a simple generalisation of this example. Of course, models can look very different – they might have millions of parameters or complex control flow, and there are ways to manage this complexity. Let's see what that looks like.

    Building Layers

    It's common to create more complex models than the linear regression above. For example, we might want to have two linear layers with a nonlinearity like sigmoid (σ) in between them. In the above style we could write this as:

    W1 = param(rand(3, 5))
    +b1 = param(rand(3))
    +layer1(x) = W1 * x .+ b1
    +
    +W2 = param(rand(2, 3))
    +b2 = param(rand(2))
    +layer2(x) = W2 * x .+ b2
    +
    +model(x) = layer2(σ.(layer1(x)))
    +
    +model(rand(5)) # => 2-element vector

    This works but is fairly unwieldy, with a lot of repetition – especially as we add more layers. One way to factor this out is to create a function that returns linear layers.

    function linear(in, out)
    +  W = param(randn(out, in))
    +  b = param(randn(out))
    +  x -> W * x .+ b
    +end
    +
    +linear1 = linear(5, 3) # we can access linear1.W etc
    +linear2 = linear(3, 2)
    +
    +model(x) = linear2(σ.(linear1(x)))
    +
    +model(x) # => 2-element vector

    Another (equivalent) way is to create a struct that explicitly represents the affine layer.

    struct Affine
    +  W
    +  b
    +end
    +
    +Affine(in::Integer, out::Integer) =
    +  Affine(param(randn(out, in)), param(randn(out)))
    +
    +# Overload call, so the object can be used as a function
    +(m::Affine)(x) = m.W * x .+ m.b
    +
    +a = Affine(10, 5)
    +
    +a(rand(10)) # => 5-element vector

    Congratulations! You just built the Dense layer that comes with Flux. Flux has many interesting layers available, but they're all things you could have built yourself very easily.

    (There is one small difference with Dense – for convenience it also takes an activation function, like Dense(10, 5, σ).)
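
For example (a quick sketch):

a = Dense(10, 5, σ)

a(rand(10)) # => 5-element vector, squashed elementwise through σ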

    Stacking It Up

    It's pretty common to write models that look something like:

    layer1 = Dense(10, 5, σ)
    +# ...
    +model(x) = layer3(layer2(layer1(x)))

    For long chains, it might be a bit more intuitive to have a list of layers, like this:

    using Flux
    +
    +layers = [Dense(10, 5, σ), Dense(5, 2), softmax]
    +
    +model(x) = foldl((x, m) -> m(x), x, layers)
    +
    +model(rand(10)) # => 2-element vector

    Handily, this is also provided for in Flux:

    model2 = Chain(
    +  Dense(10, 5, σ),
    +  Dense(5, 2),
    +  softmax)
    +
    +model2(rand(10)) # => 2-element vector

    This quickly starts to look like a high-level deep learning library; yet you can see how it falls out of simple abstractions, and we lose none of the power of Julia code.

    A nice property of this approach is that because "models" are just functions (possibly with trainable parameters), you can also see this as simple function composition.

    m = Dense(5, 2) ∘ Dense(10, 5, σ)
    +
    +m(rand(10))

    Likewise, Chain will happily work with any Julia function.

    m = Chain(x -> x^2, x -> x+1)
    +
    +m(5) # => 26
diff --git a/v0.3.1/models/layers.html b/v0.3.1/models/layers.html
new file mode 100644
index 00000000..01f8e87b
--- /dev/null
+++ b/v0.3.1/models/layers.html
@@ -0,0 +1,14 @@
+
+Layer Reference · Flux

    Layer Reference

    Model Layers

    Flux.ChainType.
    Chain(layers...)

    Chain multiple layers / functions together, so that they are called in sequence on a given input.

    m = Chain(x -> x^2, x -> x+1)
    +m(5) == 26
    +
    +m = Chain(Dense(10, 5), Dense(5, 2))
    +x = rand(10)
    +m(x) == m[2](m[1](x))

    Chain also supports indexing and slicing, e.g. m[2] or m[1:end-1]. m[1:3](x) will calculate the output of the first three layers.
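
For instance, slicing gives back a smaller, still-callable model (a sketch with made-up layer sizes):

m = Chain(Dense(10, 5), Dense(5, 5), Dense(5, 2), softmax)

m[1:3](rand(10)) # => 2-element vector, everything up to the softmax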

    source
    Flux.DenseType.
    Dense(in::Integer, out::Integer, σ = identity)

    Creates a traditional Dense layer with parameters W and b.

    y = σ.(W * x .+ b)

The input x must be a vector of length in, or a batch of vectors represented as an in × N matrix. The output y will be a vector of length out, or a batch of vectors represented as an out × N matrix.
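
For example (a sketch; the batch size 64 is arbitrary):

d = Dense(5, 2)

d(rand(5))     # => 2-element vector
d(rand(5, 64)) # => 2×64 matrix, one column per input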

    source
diff --git a/v0.3.1/models/recurrence.html b/v0.3.1/models/recurrence.html
new file mode 100644
index 00000000..17e8abc2
--- /dev/null
+++ b/v0.3.1/models/recurrence.html
@@ -0,0 +1,42 @@
+
+Recurrence · Flux

    Recurrence

    Recurrent Models

    Recurrent Cells

    In the simple feedforward case, our model m is a simple function from various inputs xᵢ to predictions yᵢ. (For example, each x might be an MNIST digit and each y a digit label.) Each prediction is completely independent of any others, and using the same x will always produce the same y.

    y₁ = f(x₁)
    +y₂ = f(x₂)
    +y₃ = f(x₃)
    +# ...

    Recurrent networks introduce a hidden state that gets carried over each time we run the model. The model now takes the old h as an input, and produces a new h as output, each time we run it.

    h = # ... initial state ...
    +y₁, h = f(x₁, h)
    +y₂, h = f(x₂, h)
    +y₃, h = f(x₃, h)
    +# ...

    Information stored in h is preserved for the next prediction, allowing it to function as a kind of memory. This also means that the prediction made for a given x depends on all the inputs previously fed into the model.

    (This might be important if, for example, each x represents one word of a sentence; the model's interpretation of the word "bank" should change if the previous input was "river" rather than "investment".)

    Flux's RNN support closely follows this mathematical perspective. The most basic RNN is as close as possible to a standard Dense layer, and the output and hidden state are the same. By convention, the hidden state is the first input and output.

    Wxh = randn(5, 10)
    +Whh = randn(5, 5)
    +b   = randn(5)
    +
    +function rnn(h, x)
    +  h = tanh.(Wxh * x .+ Whh * h .+ b)
    +  return h, h
    +end
    +
    +x = rand(10) # dummy data
    +h = rand(5)  # initial hidden state
    +
    +h, y = rnn(h, x)

If you run the last line a few times, you'll notice the output y changing slightly even though the input x is the same, because the hidden state h carries over between calls.

    We sometimes refer to functions like rnn above, which explicitly manage state, as recurrent cells. There are various recurrent cells available, which are documented in the layer reference. The hand-written example above can be replaced with:

    using Flux
    +
    +rnn2 = Flux.RNNCell(10, 5)
    +
    +x = rand(10) # dummy data
    +h = rand(5)  # initial hidden state
    +
    +h, y = rnn2(h, x)

    Stateful Models

    For the most part, we don't want to manage hidden states ourselves, but to treat our models as being stateful. Flux provides the Recur wrapper to do this.

    x = rand(10)
    +h = rand(5)
    +
    +m = Flux.Recur(rnn, h)
    +
    +y = m(x)

    The Recur wrapper stores the state between runs in the m.state field.
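
For example, you can watch the state advance as the model is called (continuing the snippet above):

m.state # the initial h we passed in

y = m(x)

m.state # the updated hidden state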

    If you use the RNN(10, 5) constructor – as opposed to RNNCell – you'll see that it's simply a wrapped cell.

    julia> RNN(10, 5)
    +Recur(RNNCell(Dense(15, 5)))

    Sequences

    Often we want to work with sequences of inputs, rather than individual xs.

    seq = [rand(10) for i = 1:10]

    With Recur, applying our model to each element of a sequence is trivial:

    m.(seq) # returns a list of 5-element vectors

This works even when we've chained recurrent layers into a larger model.

    m = Chain(LSTM(10, 15), Dense(15, 5))
    +m.(seq)

    Truncating Gradients

    By default, calculating the gradients in a recurrent layer involves the entire history. For example, if we call the model on 100 inputs, calling back! will calculate the gradient for those 100 calls. If we then calculate another 10 inputs we have to calculate 110 gradients – this accumulates and quickly becomes expensive.

    To avoid this we can truncate the gradient calculation, forgetting the history.

    truncate!(m)

    Calling truncate! wipes the slate clean, so we can call the model with more inputs without building up an expensive gradient computation.
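
A typical pattern is to truncate between independent sequences, e.g. (a sketch, reusing the m and seq defined above):

m.(seq)      # work through one sequence
truncate!(m) # drop the history before starting on the next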

diff --git a/v0.3.1/search.html b/v0.3.1/search.html
new file mode 100644
index 00000000..86fc913d
--- /dev/null
+++ b/v0.3.1/search.html
@@ -0,0 +1,9 @@
+
+Search · Flux

    Search

    Search

    Number of results: loading...

diff --git a/v0.3.1/search_index.js b/v0.3.1/search_index.js
new file mode 100644
index 00000000..ac4ef48e
--- /dev/null
+++ b/v0.3.1/search_index.js
@@ -0,0 +1,235 @@
+var documenterSearchIndex = {"docs": [
+
+{
+    "location": "index.html#",
+    "page": "Home",
+    "title": "Home",
+    "category": "page",
+    "text": ""
+},
+
+{
+    "location": "index.html#Flux:-The-Julia-Machine-Learning-Library-1",
+    "page": "Home",
+    "title": "Flux: The Julia Machine Learning Library",
+    "category": "section",
+    "text": "Flux is a library for machine learning. It comes \"batteries-included\" with many useful tools built in, but also lets you use the full power of the Julia language where you need it. The whole stack is implemented in clean Julia code (right down to the GPU kernels) and any part can be tweaked to your liking."
+},
+
+{
+    "location": "index.html#Installation-1",
+    "page": "Home",
+    "title": "Installation",
+    "category": "section",
+    "text": "Install Julia 0.6.0 or later, if you haven't already.Pkg.add(\"Flux\")\nPkg.test(\"Flux\") # Check things installed correctlyStart with the basics. The model zoo is also a good starting point for many common kinds of models."
+},
+
+{
+    "location": "models/basics.html#",
+    "page": "Basics",
+    "title": "Basics",
+    "category": "page",
+    "text": ""
+},
+
+{
+    "location": "models/basics.html#Model-Building-Basics-1",
+    "page": "Basics",
+    "title": "Model-Building Basics",
+    "category": "section",
+    "text": ""
+},
+
+{
+    "location": "models/basics.html#Taking-Gradients-1",
+    "page": "Basics",
+    "title": "Taking Gradients",
+    "category": "section",
+    "text": "Consider a simple linear regression, which tries to predict an output array y from an input x. (It's a good idea to follow this example in the Julia repl.)W = rand(2, 5)\nb = rand(2)\n\npredict(x) = W*x .+ b\nloss(x, y) = sum((predict(x) .- y).^2)\n\nx, y = rand(5), rand(2) # Dummy data\nloss(x, y) # ~ 3To improve the prediction we can take the gradients of W and b with respect to the loss function and perform gradient descent. We could calculate gradients by hand, but Flux will do it for us if we tell it that W and b are trainable parameters.using Flux.Tracker: param, back!, data, grad\n\nW = param(W)\nb = param(b)\n\nl = loss(x, y)\n\nback!(l)loss(x, y) returns the same number, but it's now a tracked value that records gradients as it goes along. Calling back! then calculates the gradient of W and b. We can see what this gradient is, and modify W to train the model.grad(W)\n\nW.data .-= 0.1grad(W)\n\nloss(x, y) # ~ 2.5The loss has decreased a little, meaning that our prediction for x is closer to the target y. If we have some data we can already try training the model.All deep learning in Flux, however complex, is a simple generalisation of this example. Of course, models can look very different – they might have millions of parameters or complex control flow, and there are ways to manage this complexity. Let's see what that looks like."
+},
+
+{
+    "location": "models/basics.html#Building-Layers-1",
+    "page": "Basics",
+    "title": "Building Layers",
+    "category": "section",
+    "text": "It's common to create more complex models than the linear regression above. For example, we might want to have two linear layers with a nonlinearity like sigmoid (σ) in between them. In the above style we could write this as:W1 = param(rand(3, 5))\nb1 = param(rand(3))\nlayer1(x) = W1 * x .+ b1\n\nW2 = param(rand(2, 3))\nb2 = param(rand(2))\nlayer2(x) = W2 * x .+ b2\n\nmodel(x) = layer2(σ.(layer1(x)))\n\nmodel(rand(5)) # => 2-element vectorThis works but is fairly unwieldy, with a lot of repetition – especially as we add more layers. One way to factor this out is to create a function that returns linear layers.function linear(in, out)\n  W = param(randn(out, in))\n  b = param(randn(out))\n  x -> W * x .+ b\nend\n\nlinear1 = linear(5, 3) # we can access linear1.W etc\nlinear2 = linear(3, 2)\n\nmodel(x) = linear2(σ.(linear1(x)))\n\nmodel(x) # => 2-element vectorAnother (equivalent) way is to create a struct that explicitly represents the affine layer.struct Affine\n  W\n  b\nend\n\nAffine(in::Integer, out::Integer) =\n  Affine(param(randn(out, in)), param(randn(out)))\n\n# Overload call, so the object can be used as a function\n(m::Affine)(x) = m.W * x .+ m.b\n\na = Affine(10, 5)\n\na(rand(10)) # => 5-element vectorCongratulations! You just built the Dense layer that comes with Flux. Flux has many interesting layers available, but they're all things you could have built yourself very easily.(There is one small difference with Dense – for convenience it also takes an activation function, like Dense(10, 5, σ).)"
+},
+
+{
+    "location": "models/basics.html#Stacking-It-Up-1",
+    "page": "Basics",
+    "title": "Stacking It Up",
+    "category": "section",
+    "text": "It's pretty common to write models that look something like:layer1 = Dense(10, 5, σ)\n# ...\nmodel(x) = layer3(layer2(layer1(x)))For long chains, it might be a bit more intuitive to have a list of layers, like this:using Flux\n\nlayers = [Dense(10, 5, σ), Dense(5, 2), softmax]\n\nmodel(x) = foldl((x, m) -> m(x), x, layers)\n\nmodel(rand(10)) # => 2-element vectorHandily, this is also provided for in Flux:model2 = Chain(\n  Dense(10, 5, σ),\n  Dense(5, 2),\n  softmax)\n\nmodel2(rand(10)) # => 2-element vectorThis quickly starts to look like a high-level deep learning library; yet you can see how it falls out of simple abstractions, and we lose none of the power of Julia code.A nice property of this approach is that because \"models\" are just functions (possibly with trainable parameters), you can also see this as simple function composition.m = Dense(5, 2) ∘ Dense(10, 5, σ)\n\nm(rand(10))Likewise, Chain will happily work with any Julia function.m = Chain(x -> x^2, x -> x+1)\n\nm(5) # => 26"
+},
+
+{
+    "location": "models/recurrence.html#",
+    "page": "Recurrence",
+    "title": "Recurrence",
+    "category": "page",
+    "text": ""
+},
+
+{
+    "location": "models/recurrence.html#Recurrent-Models-1",
+    "page": "Recurrence",
+    "title": "Recurrent Models",
+    "category": "section",
+    "text": ""
+},
+
+{
+    "location": "models/recurrence.html#Recurrent-Cells-1",
+    "page": "Recurrence",
+    "title": "Recurrent Cells",
+    "category": "section",
+    "text": "In the simple feedforward case, our model m is a simple function from various inputs xᵢ to predictions yᵢ. (For example, each x might be an MNIST digit and each y a digit label.) Each prediction is completely independent of any others, and using the same x will always produce the same y.y₁ = f(x₁)\ny₂ = f(x₂)\ny₃ = f(x₃)\n# ...Recurrent networks introduce a hidden state that gets carried over each time we run the model. The model now takes the old h as an input, and produces a new h as output, each time we run it.h = # ... initial state ...\ny₁, h = f(x₁, h)\ny₂, h = f(x₂, h)\ny₃, h = f(x₃, h)\n# ...Information stored in h is preserved for the next prediction, allowing it to function as a kind of memory. This also means that the prediction made for a given x depends on all the inputs previously fed into the model.(This might be important if, for example, each x represents one word of a sentence; the model's interpretation of the word \"bank\" should change if the previous input was \"river\" rather than \"investment\".)Flux's RNN support closely follows this mathematical perspective. The most basic RNN is as close as possible to a standard Dense layer, and the output and hidden state are the same. By convention, the hidden state is the first input and output.Wxh = randn(5, 10)\nWhh = randn(5, 5)\nb   = randn(5)\n\nfunction rnn(h, x)\n  h = tanh.(Wxh * x .+ Whh * h .+ b)\n  return h, h\nend\n\nx = rand(10) # dummy data\nh = rand(5)  # initial hidden state\n\nh, y = rnn(h, x)If you run the last line a few times, you'll notice the output y changing slightly even though the input x is the same, because the hidden state h carries over between calls.We sometimes refer to functions like rnn above, which explicitly manage state, as recurrent cells. There are various recurrent cells available, which are documented in the layer reference. The hand-written example above can be replaced with:using Flux\n\nrnn2 = Flux.RNNCell(10, 5)\n\nx = rand(10) # dummy data\nh = rand(5)  # initial hidden state\n\nh, y = rnn2(h, x)"
+},
+
+{
+    "location": "models/recurrence.html#Stateful-Models-1",
+    "page": "Recurrence",
+    "title": "Stateful Models",
+    "category": "section",
+    "text": "For the most part, we don't want to manage hidden states ourselves, but to treat our models as being stateful. Flux provides the Recur wrapper to do this.x = rand(10)\nh = rand(5)\n\nm = Flux.Recur(rnn, h)\n\ny = m(x)The Recur wrapper stores the state between runs in the m.state field.If you use the RNN(10, 5) constructor – as opposed to RNNCell – you'll see that it's simply a wrapped cell.julia> RNN(10, 5)\nRecur(RNNCell(Dense(15, 5)))"
+},
+
+{
+    "location": "models/recurrence.html#Sequences-1",
+    "page": "Recurrence",
+    "title": "Sequences",
+    "category": "section",
+    "text": "Often we want to work with sequences of inputs, rather than individual xs.seq = [rand(10) for i = 1:10]With Recur, applying our model to each element of a sequence is trivial:m.(seq) # returns a list of 5-element vectorsThis works even when we've chained recurrent layers into a larger model.m = Chain(LSTM(10, 15), Dense(15, 5))\nm.(seq)"
+},
+
+{
+    "location": "models/recurrence.html#Truncating-Gradients-1",
+    "page": "Recurrence",
+    "title": "Truncating Gradients",
+    "category": "section",
+    "text": "By default, calculating the gradients in a recurrent layer involves the entire history. For example, if we call the model on 100 inputs, calling back! will calculate the gradient for those 100 calls. If we then calculate another 10 inputs we have to calculate 110 gradients – this accumulates and quickly becomes expensive.To avoid this we can truncate the gradient calculation, forgetting the history.truncate!(m)Calling truncate! wipes the slate clean, so we can call the model with more inputs without building up an expensive gradient computation."
+},
+
+{
+    "location": "models/layers.html#",
+    "page": "Layer Reference",
+    "title": "Layer Reference",
+    "category": "page",
+    "text": ""
+},
+
+{
+    "location": "models/layers.html#Flux.Chain",
+    "page": "Layer Reference",
+    "title": "Flux.Chain",
+    "category": "Type",
+    "text": "Chain(layers...)\n\nChain multiple layers / functions together, so that they are called in sequence on a given input.\n\nm = Chain(x -> x^2, x -> x+1)\nm(5) == 26\n\nm = Chain(Dense(10, 5), Dense(5, 2))\nx = rand(10)\nm(x) == m[2](m[1](x))\n\nChain also supports indexing and slicing, e.g. m[2] or m[1:end-1]. m[1:3](x) will calculate the output of the first three layers.\n\n\n\n"
+},
+
+{
+    "location": "models/layers.html#Flux.Dense",
+    "page": "Layer Reference",
+    "title": "Flux.Dense",
+    "category": "Type",
+    "text": "Dense(in::Integer, out::Integer, σ = identity)\n\nCreates a traditional Dense layer with parameters W and b.\n\ny = σ.(W * x .+ b)\n\nThe input x must be a vector of length in, or a batch of vectors represented as an in × N matrix. The output y will be a vector of length out, or a batch of vectors represented as an out × N matrix.\n\n\n\n"
+},
+
+{
+    "location": "models/layers.html#Model-Layers-1",
+    "page": "Layer Reference",
+    "title": "Model Layers",
+    "category": "section",
+    "text": "Chain\nDense"
+},
+
+{
+    "location": "training/optimisers.html#",
+    "page": "Optimisers",
+    "title": "Optimisers",
+    "category": "page",
+    "text": ""
+},
+
+{
+    "location": "training/optimisers.html#Optimisers-1",
+    "page": "Optimisers",
+    "title": "Optimisers",
+    "category": "section",
+    "text": "Consider a simple linear regression. We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters W and b.W = param(rand(2, 5))\nb = param(rand(2))\n\npredict(x) = W*x .+ b\nloss(x, y) = sum((predict(x) .- y).^2)\n\nx, y = rand(5), rand(2) # Dummy data\nl = loss(x, y) # ~ 3\nback!(l)We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:using Flux.Tracker: data, grad\n\nfunction update()\n  η = 0.1 # Learning Rate\n  for p in (W, b)\n    x, Δ = data(p), grad(p)\n    x .-= η .* Δ # Apply the update\n    Δ .= 0       # Clear the gradient\n  end\nendIf we call update, the parameters W and b will change and our loss should go down.There are two pieces here: one is that we need a list of trainable parameters for the model ([W, b] in this case), and the other is the update step. In this case the update is simply gradient descent (x .-= η .* Δ), but we might choose to do something more advanced, like adding momentum.In this case, getting the variables is trivial, but you can imagine it'd be more of a pain with some complex stack of layers.m = Chain(\n  Dense(10, 5, σ),\n  Dense(5, 2), softmax)Instead of having to write [m[1].W, m[1].b, ...], Flux provides a params function, params(m), that returns a list of all parameters in the model for you.For the update step, there's nothing whatsoever wrong with writing the loop above – it'll work just fine – but Flux provides various optimisers that make it more convenient.opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1\n\nopt()An optimiser takes a parameter list and returns a function that does the same thing as update above. We can pass either opt or update to our training loop, which will then run the optimiser after every mini-batch of data."
+},
+
+{
+    "location": "training/training.html#",
+    "page": "Training",
+    "title": "Training",
+    "category": "page",
+    "text": ""
+},
+
+{
+    "location": "training/training.html#Training-1",
+    "page": "Training",
+    "title": "Training",
+    "category": "section",
+    "text": "To actually train a model we need three things:A loss function that evaluates how well a model is doing given some input data.\nA collection of data points that will be provided to the loss function.\nAn optimiser that will update the model parameters appropriately.With these we can call Flux.train!:Flux.train!(loss, data, opt)There are plenty of examples in the model zoo."
+},
+
+{
+    "location": "training/training.html#Loss-Functions-1",
+    "page": "Training",
+    "title": "Loss Functions",
+    "category": "section",
+    "text": "The loss that we defined in basics is completely valid for training. We can also define a loss in terms of some model:m = Chain(\n  Dense(784, 32, σ),\n  Dense(32, 10), softmax)\n\nloss(x, y) = Flux.mse(m(x), y)The loss will almost always be defined in terms of some cost function that measures the distance of the prediction m(x) from the target y. Flux has several of these built in, like mse for mean squared error or logloss for cross entropy loss, but you can calculate it however you want."
+},
+
+{
+    "location": "training/training.html#Callbacks-1",
+    "page": "Training",
+    "title": "Callbacks",
+    "category": "section",
+    "text": "train! takes an additional argument, cb, that's used for callbacks so that you can observe the training process. For example:train!(loss, data, opt, cb = () -> println(\"training\"))Callbacks are called for every batch of training data. You can slow this down using Flux.throttle(f, timeout) which prevents f from being called more than once every timeout seconds.A more typical callback might look like this:test_x, test_y = # ... create single batch of test data ...\nevalcb() = @show(loss(test_x, test_y))\n\nFlux.train!(loss, data, opt,\n            cb = throttle(evalcb, 5))"
+},
+
+{
+    "location": "data/onehot.html#",
+    "page": "One-Hot Encoding",
+    "title": "One-Hot Encoding",
+    "category": "page",
+    "text": ""
+},
+
+{
+    "location": "data/onehot.html#One-Hot-Encoding-1",
+    "page": "One-Hot Encoding",
+    "title": "One-Hot Encoding",
+    "category": "section",
+    "text": "It's common to encode categorical variables (like true, false or cat, dog) in \"one-of-k\" or \"one-hot\" form. Flux provides the onehot function to make this easy.julia> using Flux: onehot\n\njulia> onehot(:b, [:a, :b, :c])\n3-element Flux.OneHotVector:\n false\n  true\n false\n\njulia> onehot(:c, [:a, :b, :c])\n3-element Flux.OneHotVector:\n false\n false\n  trueThe inverse is argmax (which can take a general probability distribution, as well as just booleans).julia> argmax(ans, [:a, :b, :c])\n:c\n\njulia> argmax([true, false, false], [:a, :b, :c])\n:a\n\njulia> argmax([0.3, 0.2, 0.5], [:a, :b, :c])\n:c"
+},
+
+{
+    "location": "data/onehot.html#Batches-1",
+    "page": "One-Hot Encoding",
+    "title": "Batches",
+    "category": "section",
+    "text": "onehotbatch creates a batch (matrix) of one-hot vectors, and argmax treats matrices as batches.julia> using Flux: onehotbatch\n\njulia> onehotbatch([:b, :a, :b], [:a, :b, :c])\n3×3 Flux.OneHotMatrix:\n false   true  false\n  true  false   true\n false  false  false\n\njulia> argmax(ans, [:a, :b, :c])\n3-element Array{Symbol,1}:\n  :b\n  :a\n  :bNote that these operations returned OneHotVector and OneHotMatrix rather than Arrays. OneHotVectors behave like normal vectors but avoid any unnecessary cost compared to using an integer index directly. For example, multiplying a matrix with a one-hot vector simply slices out the relevant column of the matrix under the hood."
+},
+
+{
+    "location": "contributing.html#",
+    "page": "Contributing & Help",
+    "title": "Contributing & Help",
+    "category": "page",
+    "text": ""
+},
+
+{
+    "location": "contributing.html#Contributing-and-Help-1",
+    "page": "Contributing & Help",
+    "title": "Contributing & Help",
+    "category": "section",
+    "text": "If you need help, please ask on the Julia forum, the slack (channel #machine-learning), or Flux's Gitter.Right now, the best way to help out is to try out the examples and report any issues or missing features as you find them. The second best way is to help us spread the word, perhaps by starring the repo.If you're interested in hacking on Flux, most of the code is pretty straightforward. Adding new layer definitions or cost functions is simple using the Flux DSL itself, and things like data utilities and training processes are all plain Julia code.If you get stuck or need anything, let us know!"
+},
+
+]}
diff --git a/v0.3.1/siteinfo.js b/v0.3.1/siteinfo.js
new file mode 100644
index 00000000..5057e969
--- /dev/null
+++ b/v0.3.1/siteinfo.js
@@ -0,0 +1 @@
+var DOCUMENTER_CURRENT_VERSION = "v0.3.1";
diff --git a/v0.3.1/training/optimisers.html b/v0.3.1/training/optimisers.html
new file mode 100644
index 00000000..93615408
--- /dev/null
+++ b/v0.3.1/training/optimisers.html
@@ -0,0 +1,30 @@
+
+Optimisers · Flux

      Optimisers

      Optimisers

      Consider a simple linear regression. We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters W and b.

      W = param(rand(2, 5))
      +b = param(rand(2))
      +
      +predict(x) = W*x .+ b
      +loss(x, y) = sum((predict(x) .- y).^2)
      +
      +x, y = rand(5), rand(2) # Dummy data
      +l = loss(x, y) # ~ 3
      +back!(l)

      We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:

      using Flux.Tracker: data, grad
      +
      +function update()
      +  η = 0.1 # Learning Rate
      +  for p in (W, b)
      +    x, Δ = data(p), grad(p)
      +    x .-= η .* Δ # Apply the update
      +    Δ .= 0       # Clear the gradient
      +  end
      +end

      If we call update, the parameters W and b will change and our loss should go down.

      There are two pieces here: one is that we need a list of trainable parameters for the model ([W, b] in this case), and the other is the update step. In this case the update is simply gradient descent (x .-= η .* Δ), but we might choose to do something more advanced, like adding momentum.
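
For instance, a momentum version of update might look like this (a sketch; η, ρ and the Dict of velocity buffers are choices made for this example):

using Flux.Tracker: data, grad

η, ρ = 0.1, 0.9
velocities = Dict(p => zeros(grad(p)) for p in (W, b))

function update_momentum()
  for p in (W, b)
    v, θ, Δ = velocities[p], data(p), grad(p)
    v .= ρ .* v .+ Δ # smooth the gradient with a running average
    θ .-= η .* v     # apply the update
    Δ .= 0           # clear the gradient
  end
end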

      In this case, getting the variables is trivial, but you can imagine it'd be more of a pain with some complex stack of layers.

      m = Chain(
      +  Dense(10, 5, σ),
      +  Dense(5, 2), softmax)

Instead of having to write [m[1].W, m[1].b, ...], Flux provides a params function, params(m), that returns a list of all parameters in the model for you.
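
With params, the loop above no longer needs a hand-written parameter list (a sketch, reusing data and grad from Flux.Tracker):

for p in params(m)
  θ, Δ = data(p), grad(p)
  θ .-= 0.1 .* Δ # gradient descent, as before
  Δ .= 0
end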

      For the update step, there's nothing whatsoever wrong with writing the loop above – it'll work just fine – but Flux provides various optimisers that make it more convenient.

      opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1
      +
      +opt()

      An optimiser takes a parameter list and returns a function that does the same thing as update above. We can pass either opt or update to our training loop, which will then run the optimiser after every mini-batch of data.
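
Putting the pieces together, a bare-bones training loop might look like this (a sketch built from the definitions above; 100 steps is an arbitrary choice):

opt = SGD([W, b], 0.1)

for i = 1:100
  back!(loss(x, y))
  opt() # apply the update
end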

diff --git a/v0.3.1/training/training.html b/v0.3.1/training/training.html
new file mode 100644
index 00000000..252ce1bf
--- /dev/null
+++ b/v0.3.1/training/training.html
@@ -0,0 +1,17 @@
+
+Training · Flux

      Training

      Training

      To actually train a model we need three things:

• A loss function that evaluates how well a model is doing given some input data.

      • A collection of data points that will be provided to the loss function.

      • An optimiser that will update the model parameters appropriately.

      With these we can call Flux.train!:

      Flux.train!(loss, data, opt)

      There are plenty of examples in the model zoo.
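
Concretely, with a loss like the one from basics, a complete call might look like this (a sketch; the random dataset and the SGD settings are made up for the example):

data = [(rand(5), rand(2)) for i = 1:100] # 100 (input, target) pairs
opt = SGD([W, b], 0.1)

Flux.train!(loss, data, opt)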

      Loss Functions

      The loss that we defined in basics is completely valid for training. We can also define a loss in terms of some model:

      m = Chain(
      +  Dense(784, 32, σ),
      +  Dense(32, 10), softmax)
      +
      +loss(x, y) = Flux.mse(m(x), y)

      The loss will almost always be defined in terms of some cost function that measures the distance of the prediction m(x) from the target y. Flux has several of these built in, like mse for mean squared error or logloss for cross entropy loss, but you can calculate it however you want.
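
For a softmax classifier like the model above, you would typically swap in the cross entropy cost instead (a one-line sketch):

loss(x, y) = Flux.logloss(m(x), y)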

      Callbacks

      train! takes an additional argument, cb, that's used for callbacks so that you can observe the training process. For example:

      train!(loss, data, opt, cb = () -> println("training"))

      Callbacks are called for every batch of training data. You can slow this down using Flux.throttle(f, timeout) which prevents f from being called more than once every timeout seconds.

      A more typical callback might look like this:

      test_x, test_y = # ... create single batch of test data ...
      +evalcb() = @show(loss(test_x, test_y))
      +
      +Flux.train!(loss, data, opt,
      +            cb = throttle(evalcb, 5))
diff --git a/versions.js b/versions.js
index 75b0abad..3eaff596 100644
--- a/versions.js
+++ b/versions.js
@@ -4,6 +4,7 @@ var DOC_VERSIONS = [
   "release-0.3",
   "release-0.2",
   "release-0.1",
+  "v0.3.1",
   "v0.3.0",
   "v0.2.1",
   "v0.2.0",