diff --git a/latest/contributing.html b/latest/contributing.html
index 203d9c39..8d05506d 100644
--- a/latest/contributing.html
+++ b/latest/contributing.html
@@ -6,4 +6,4 @@

Contributing & Help

If you need help, please ask on the Julia forum, the Julia Slack (channel #machine-learning), or Flux's Gitter.

Right now, the best way to help out is to try out the examples and report any issues or missing features as you find them. The second-best way is to help us spread the word, perhaps by starring the repo.

If you're interested in hacking on Flux, most of the code is pretty straightforward. Adding new layer definitions or cost functions is simple using the Flux DSL itself, and things like data utilities and training processes are all plain Julia code.

If you get stuck or need anything, let us know!

diff --git a/latest/data/onehot.html b/latest/data/onehot.html
index 5184c33e..d31bf049 100644
--- a/latest/data/onehot.html
+++ b/latest/data/onehot.html
@@ -6,7 +6,7 @@

One-Hot Encoding

It's common to encode categorical variables (like true, false or cat, dog) in "one-of-k" or "one-hot" form. Flux provides the onehot function to make this easy.

julia> using Flux: onehot
 
 julia> onehot(:b, [:a, :b, :c])
 3-element Flux.OneHotVector:
  false
   true
  false
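
Flux also provides onehotbatch, which encodes a whole sequence of labels at once as a matrix of one-hot columns. A quick sketch:

julia> using Flux: onehotbatch

julia> onehotbatch([:b, :a, :b], [:a, :b, :c])
3×3 Flux.OneHotMatrix:
 false   true  false
  true  false   true
 false  false  false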
diff --git a/latest/index.html b/latest/index.html
index f33faef2..7b308e99 100644
--- a/latest/index.html
+++ b/latest/index.html
@@ -6,5 +6,5 @@

Flux: The Julia Machine Learning Library

Flux is a library for machine learning. It comes "batteries-included" with many useful tools built in, but also lets you use the full power of the Julia language where you need it. The whole stack is implemented in clean Julia code (right down to the GPU kernels) and any part can be tweaked to your liking.

Installation

Install Julia 0.6.0 or later, if you haven't already.

Pkg.add("Flux")
 Pkg.test("Flux") # Check things installed correctly

Start with the basics. The model zoo is also a good starting point for many common kinds of models.

diff --git a/latest/models/basics.html b/latest/models/basics.html
index ddbaeb02..83275958 100644
--- a/latest/models/basics.html
+++ b/latest/models/basics.html
@@ -6,7 +6,7 @@

Model-Building Basics

Taking Gradients

Consider a simple linear regression, which tries to predict an output array y from an input x. (It's a good idea to follow this example in the Julia REPL.)

W = rand(2, 5)
 b = rand(2)
 
 predict(x) = W*x .+ b
@@ -24,7 +24,7 @@ back!(l)

loss(x, y) returns the same number, but it's now a tracked value that records how it was calculated. Calling back! then computes the gradients of W and b, which we can use to update the parameters:

W.data .-= 0.1grad(W)

loss(x, y) # ~ 2.5

The loss has decreased a little, meaning that our prediction predict(x) is closer to the target y. If we have some data we can already try training the model.

All deep learning in Flux, however complex, is a simple generalisation of this example. Of course, models can look very different – they might have millions of parameters or complex control flow, and there are ways to manage this complexity. Let's see what that looks like.

Building Layers

It's common to create more complex models than the linear regression above. For example, we might want to have two linear layers with a nonlinearity like sigmoid (σ) in between them. In the above style we could write this as:

W1 = param(rand(3, 5))
 b1 = param(rand(3))
 layer1(x) = W1 * x .+ b1
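
Following the pattern just described, a sketch of how this model completes (the 2-output second layer is an illustrative assumption; σ is Flux's sigmoid):

W2 = param(rand(2, 3))
b2 = param(rand(2))
layer2(x) = W2 * x .+ b2

model(x) = layer2(σ.(layer1(x)))

model(rand(5)) # => 2-element output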
 
diff --git a/latest/models/layers.html b/latest/models/layers.html
index 8eabf70b..ba07c630 100644
--- a/latest/models/layers.html
+++ b/latest/models/layers.html
@@ -6,9 +6,9 @@

Model Layers

Flux.Chain (Type)
Chain(layers...)

Chain multiple layers / functions together, so that they are called in sequence on a given input.

m = Chain(x -> x^2, x -> x+1)
 m(5) == 26
 
 m = Chain(Dense(10, 5), Dense(5, 2))
 x = rand(10)
m(x) == m[2](m[1](x))

Chain also supports indexing and slicing, e.g. m[2] or m[1:end-1]. m[1:3](x) will calculate the output of the first three layers.

Flux.Dense (Type)
Dense(in::Integer, out::Integer, σ = identity)

Creates a traditional Dense layer with parameters W and b.

y = σ.(W * x .+ b)

The input x must be a vector of length in, or a batch of vectors represented as an in × N matrix. The output y will be a vector or batch of length out.

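For illustration, a small sketch combining the two layer types documented above (the sizes are arbitrary):

using Flux

d = Dense(10, 5, σ) # 10 inputs, 5 outputs, sigmoid activation
x = rand(10)
d(x)                # 5-element output vector

m = Chain(Dense(10, 5, σ), Dense(5, 2), softmax)
m(x)                # 2-element vector, summing to 1
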
diff --git a/latest/models/recurrence.html b/latest/models/recurrence.html
index 9ae0984c..880095fe 100644
--- a/latest/models/recurrence.html
+++ b/latest/models/recurrence.html
@@ -6,7 +6,7 @@

Recurrent Models

Recurrent Cells

In the simple feedforward case, our model m is a simple function from various inputs xᵢ to predictions yᵢ. (For example, each x might be an MNIST digit and each y a digit label.) Each prediction is completely independent of any others, and using the same x will always produce the same y.

y₁ = f(x₁)
 y₂ = f(x₂)
 y₃ = f(x₃)
 # ...

Recurrent networks introduce a hidden state that gets carried over each time we run the model. The model now takes the old h as an input, and produces a new h as output, each time we run it.

h = # ... initial state ...
h, y₁ = rnn(h, x₁)
h, y₂ = rnn(h, x₂)
h, y₃ = rnn(h, x₃)
# ...
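
Concretely, such a cell can be written as below; a minimal sketch, where the names Wxh, Whh, b and the sizes (10-dimensional input, 5-dimensional hidden state) are assumptions chosen to match the dummy data that follows:

Wxh = randn(5, 10)
Whh = randn(5, 5)
b   = randn(5)

function rnn(h, x)
  h = tanh.(Wxh * x .+ Whh * h .+ b) # combine input with previous state
  return h, h                        # new state, and the output
end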
@@ -25,7 +25,7 @@ end
 x = rand(10) # dummy data
 h = rand(5)  # initial hidden state
 
h, y = rnn(h, x)

If you run the last line a few times, you'll notice the output y changing slightly even though the input x is the same.

We sometimes refer to functions like rnn above, which explicitly manage state, as recurrent cells. There are various recurrent cells available, which are documented in the layer reference. The hand-written example above can be replaced with:

using Flux
 
 m = Flux.RNNCell(10, 5)
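
The cell is then used just like the hand-written rnn above; a sketch with the same dummy shapes:

x = rand(10) # dummy data
h = rand(5)  # initial hidden state

h, y = m(h, x)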
 
diff --git a/latest/training/optimisers.html b/latest/training/optimisers.html
index 1a4734e2..058c851f 100644
--- a/latest/training/optimisers.html
+++ b/latest/training/optimisers.html
@@ -6,7 +6,7 @@

Optimisers

Consider a simple linear regression. We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters W and b.

W = param(rand(2, 5))
 b = param(rand(2))
 
 predict(x) = W*x .+ b
@@ -27,4 +27,4 @@ end
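For reference, an update step along the lines the text below describes; a minimal sketch, assuming the tracked parameters' .data and .grad fields from the basics section:

function update()
  η = 0.1 # learning rate
  for p in (W, b)
    p.data .-= η .* p.grad # apply the gradient step
    p.grad .= 0            # zero the gradient for the next pass
  end
end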

If we call update, the parameters W and b will change and our loss should go down. To do the same for a larger model, we need a way to get at all of its parameters. Consider:

m = Chain(
  Dense(10, 5, σ),
  Dense(5, 2), softmax)

Instead of having to write [m[1].W, m[1].b, ...], Flux provides params(m), which returns a list of all parameters in the model for you.

For the update step, there's nothing whatsoever wrong with writing the loop above – it'll work just fine – but Flux provides various optimisers that make it more convenient.

opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1
 
opt() # Carry out the update, modifying W and b

An optimiser takes a parameter list and returns a function that does the same thing as update above. We can pass either opt or update to our training loop, which will then run the optimiser after every mini-batch of data.

diff --git a/latest/training/training.html b/latest/training/training.html
index a77e9607..bd516d07 100644
--- a/latest/training/training.html
+++ b/latest/training/training.html
@@ -6,7 +6,7 @@

Training

To actually train a model we need three things:

  • A loss function that evaluates how well a model is doing given some input data.

  • A collection of data points that will be provided to the loss function.

  • An optimiser that will update the model parameters appropriately.

With these we can call Flux.train!:

Flux.train!(loss, data, opt)

There are plenty of examples in the model zoo.

Loss Functions

The loss that we defined in basics is completely valid for training. We can also define a loss in terms of some model:

m = Chain(
   Dense(784, 32, σ),
   Dense(32, 10), softmax)
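
A loss can then be defined in terms of m and everything passed to Flux.train!; a minimal sketch, where the dummy data and the SGD learning rate are illustrative assumptions:

loss(x, y) = Flux.mse(m(x), y) # mean squared error

x, y = rand(784), rand(10) # one dummy (input, target) pair
data = [(x, y)]

opt = SGD(params(m), 0.1)
Flux.train!(loss, data, opt)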