From 6eaac0b36394c1f602814640237e96c8ad83bdfd Mon Sep 17 00:00:00 2001
From: autodocs
Date: Sun, 10 Sep 2017 01:05:15 +0000
Subject: [PATCH] build based on 17e40b1

---
 latest/contributing.html        |  2 +-
 latest/index.html               |  2 +-
 latest/models/basics.html       |  2 +-
 latest/models/layers.html       |  4 ++--
 latest/models/recurrence.html   |  2 +-
 latest/search.html              |  2 +-
 latest/search_index.js          | 24 ++++++++++++++++++++++++
 latest/training/optimisers.html | 30 ++++++++++++++++++++++++++++++
 latest/training/training.html   | 10 ++++++++++
 9 files changed, 71 insertions(+), 7 deletions(-)
 create mode 100644 latest/training/optimisers.html
 create mode 100644 latest/training/training.html

diff --git a/latest/contributing.html b/latest/contributing.html
index bd122187..b470abe0 100644
--- a/latest/contributing.html
+++ b/latest/contributing.html
@@ -6,4 +6,4 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');


Contributing & Help


If you need help, please ask on the Julia forum, in the Julia Slack (#machine-learning channel), or on Flux's Gitter.

Right now, the best way to help out is to try out the examples and report any issues or missing features as you find them. The second best way is to help us spread the word, perhaps by starring the repo.

If you're interested in hacking on Flux, most of the code is pretty straightforward. Adding new layer definitions or cost functions is simple using the Flux DSL itself, and things like data utilities and training processes are all plain Julia code.

If you get stuck or need anything, let us know!

diff --git a/latest/index.html b/latest/index.html
index 88c0f7fd..2af093d3 100644
--- a/latest/index.html
+++ b/latest/index.html
@@ -6,5 +6,5 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');


Home

Flux: The Julia Machine Learning Library

Flux is a library for machine learning. It comes "batteries-included" with many useful tools built in, but also lets you use the full power of the Julia language where you need it. The whole stack is implemented in clean Julia code (right down to the GPU kernels) and any part can be tweaked to your liking.

Installation

Install Julia 0.6.0 or later, if you haven't already.

Pkg.add("Flux")
 Pkg.test("Flux") # Check things installed correctly

Start with the basics. The model zoo is also a good starting point for many common kinds of models.

diff --git a/latest/models/basics.html b/latest/models/basics.html
index eb7538af..6a6c68e1 100644
--- a/latest/models/basics.html
+++ b/latest/models/basics.html
@@ -6,7 +6,7 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');


Basics

Model-Building Basics

Taking Gradients

Consider a simple linear regression, which tries to predict an output array y from an input x. (It's a good idea to follow this example in the Julia REPL.)

W = rand(2, 5)
 b = rand(2)
 
 predict(x) = W*x .+ b
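
The diff cuts this example off here. It typically continues by defining a loss and taking gradients, along the lines of the param/back! code on the Optimisers page below (a sketch, not this page's verbatim text):

loss(x, y) = sum((predict(x) .- y).^2) # squared-error loss

W = param(W) # re-wrap W and b so Flux tracks their gradients
b = param(b)

x, y = rand(5), rand(2) # dummy data
back!(loss(x, y))       # backpropagate; gradients accumulate in grad(W), grad(b)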
diff --git a/latest/models/layers.html b/latest/models/layers.html
index 4296bcfc..01397200 100644
--- a/latest/models/layers.html
+++ b/latest/models/layers.html
@@ -6,9 +6,9 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');

Layer Reference

Model Layers

Flux.Chain (Type).
Chain(layers...)

Chain multiple layers / functions together, so that they are called in sequence on a given input.

m = Chain(x -> x^2, x -> x+1)
 m(5) == 26
 
 m = Chain(Dense(10, 5), Dense(5, 2))
 x = rand(10)
m(x) == m[2](m[1](x))

Chain also supports indexing and slicing, e.g. m[2] or m[1:end-1]. m[1:3](x) will calculate the output of the first three layers.
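
As a quick illustration (a sketch, reusing the two-layer m and x defined above):

m[1:end-1](x) == m[1](x) # with two layers, "all but the last" is just the first
m[2](m[1](x)) == m(x)    # composing the pieces recovers the full model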

source
Flux.Dense (Type).
Dense(in::Integer, out::Integer, σ = identity)

Creates a traditional Dense layer with parameters W and b.

y = σ.(W * x .+ b)

The input x must be a vector of length in, or a batch of vectors represented as an in × N matrix. The output y will be a vector or batch of length out.
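
For example (a sketch of the shapes involved; the batch size 64 is arbitrary):

d = Dense(5, 2)  # in = 5, out = 2
d(rand(5))       # a length-2 vector
d(rand(5, 64))   # a 5 × 64 batch gives a 2 × 64 result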

source
diff --git a/latest/models/recurrence.html b/latest/models/recurrence.html
index cf092235..779e26df 100644
--- a/latest/models/recurrence.html
+++ b/latest/models/recurrence.html
@@ -6,7 +6,7 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');


Recurrence

Recurrent Cells

In the simple feedforward case, our model m is just a function from various inputs xᵢ to predictions yᵢ. (For example, each x might be an MNIST digit and each y a digit label.) Each prediction is completely independent of any others, and using the same x will always produce the same y.

y₁ = f(x₁)
 y₂ = f(x₂)
 y₃ = f(x₃)
 # ...

Recurrent networks introduce a hidden state that gets carried over each time we run the model. The model now takes the old h as an input, and produces a new h as output, each time we run it.

h = # ... initial state ...
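# The hunk is truncated here; the pattern the text describes threads the state
# through each call, along these lines (a sketch, with f the stateful model):
h, y₁ = f(h, x₁)
h, y₂ = f(h, x₂)
h, y₃ = f(h, x₃)
# ...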
diff --git a/latest/search.html b/latest/search.html
index d78a1539..62135ff1 100644
--- a/latest/search.html
+++ b/latest/search.html
@@ -6,4 +6,4 @@ m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
 
 ga('create', 'UA-36890222-9', 'auto');
 ga('send', 'pageview');

    Search


diff --git a/latest/search_index.js b/latest/search_index.js
index 602fb3f8..40db56bf 100644
--- a/latest/search_index.js
+++ b/latest/search_index.js
@@ -136,6 +136,30 @@ var documenterSearchIndex = {"docs": [
     "text": "Chain\nDense"
 },
 
+{
+    "location": "training/optimisers.html#",
+    "page": "Optimisers",
+    "title": "Optimisers",
+    "category": "page",
+    "text": ""
+},
+
+{
+    "location": "training/optimisers.html#Optimisers-1",
+    "page": "Optimisers",
+    "title": "Optimisers",
+    "category": "section",
+    "text": "Consider a simple linear regression. We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters W and b.W = param(rand(2, 5))\nb = param(rand(2))\n\npredict(x) = W*x .+ b\nloss(x, y) = sum((predict(x) .- y).^2)\n\nx, y = rand(5), rand(2) # Dummy data\nl = loss(x, y) # ~ 3\nback!(l)We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:using Flux.Tracker: data, grad\n\nfunction update()\n  η = 0.1 # Learning Rate\n  for p in (W, b)\n    x, Δ = data(p), grad(p)\n    x .-= η .* Δ # Apply the update\n    Δ .= 0       # Clear the gradient\n  end\nendIf we call update, the parameters W and b will change and our loss should go down.There are two pieces here: one is that we need a list of trainable parameters for the model ([W, b] in this case), and the other is the update step. In this case the update is simply gradient descent (x .-= η .* Δ), but we might choose to do something more advanced, like adding momentum.In this case, getting the variables is trivial, but you can imagine it'd be more of a pain with some complex stack of layers.m = Chain(\n  Dense(10, 5, σ),\n  Dense(5, 2), softmax)Instead of having to write [m[1].W, m[1].b, ...], Flux provides a params function params(m) that returns a list of all parameters in the model for you.For the update step, there's nothing whatsoever wrong with writing the loop above – it'll work just fine – but Flux provides various optimisers that make it more convenient.opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1\n\nopt()An optimiser takes a parameter list and returns a function that does the same thing as update above. We can pass either opt or update to our training loop, which will then run the optimiser after every mini-batch of data."
+},
+
+{
+    "location": "training/training.html#",
+    "page": "Training",
+    "title": "Training",
+    "category": "page",
+    "text": "Flux.train!(loss, repeated((x,y), 1000), SGD(params(m), 0.1),\n            cb = throttle(() -> @show(loss(x, y)), 5))"
+},
+
 {
     "location": "contributing.html#",
     "page": "Contributing & Help",

diff --git a/latest/training/optimisers.html b/latest/training/optimisers.html
new file mode 100644
index 00000000..bfd5cce3
--- /dev/null
+++ b/latest/training/optimisers.html
@@ -0,0 +1,30 @@
+
+Optimisers · Flux

      Optimisers


      Consider a simple linear regression. We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters W and b.

W = param(rand(2, 5))
b = param(rand(2))

predict(x) = W*x .+ b
loss(x, y) = sum((predict(x) .- y).^2)

x, y = rand(5), rand(2) # Dummy data
l = loss(x, y) # ~ 3
back!(l)

      We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:

using Flux.Tracker: data, grad

function update()
  η = 0.1 # Learning Rate
  for p in (W, b)
    x, Δ = data(p), grad(p)
    x .-= η .* Δ # Apply the update
    Δ .= 0       # Clear the gradient
  end
end

      If we call update, the parameters W and b will change and our loss should go down.
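
Concretely (a sketch; the exact numbers depend on the random data):

loss(x, y) # ~ 3
update()
loss(x, y) # smaller, since W and b have taken one gradient-descent step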

      There are two pieces here: one is that we need a list of trainable parameters for the model ([W, b] in this case), and the other is the update step. In this case the update is simply gradient descent (x .-= η .* Δ), but we might choose to do something more advanced, like adding momentum.
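
Momentum, for instance, would keep a running velocity for each parameter and step along that instead of the raw gradient. A hypothetical sketch of the inner loop (v and ρ are not part of the code above):

v = zeros(Δ)          # per-parameter velocity buffer, kept between updates
v .= ρ .* v .+ η .* Δ # blend the old velocity with the new gradient
x .-= v               # step along the smoothed direction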

      In this case, getting the variables is trivial, but you can imagine it'd be more of a pain with some complex stack of layers.

m = Chain(
  Dense(10, 5, σ),
  Dense(5, 2), softmax)

Instead of having to write [m[1].W, m[1].b, ...], Flux provides params(m), which returns a list of all parameters in the model for you.
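
With that, the hand-written loop above generalises to any model (a sketch, using the same data and grad helpers):

η = 0.1
for p in params(m)
  data(p) .-= η .* grad(p) # apply the update
  grad(p) .= 0             # clear the gradient
end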

      For the update step, there's nothing whatsoever wrong with writing the loop above – it'll work just fine – but Flux provides various optimisers that make it more convenient.

opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1

opt() # Carry out the update, modifying W and b

      An optimiser takes a parameter list and returns a function that does the same thing as update above. We can pass either opt or update to our training loop, which will then run the optimiser after every mini-batch of data.
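
For example, the (truncated) Training page's snippet, quoted in the search index above, wires this together. A sketch, assuming repeated comes from Base.Iterators and throttle from Flux:

using Flux: throttle
using Base.Iterators: repeated

# A thousand copies of one data point stand in for a real dataset; the callback
# prints the loss at most once every five seconds.
Flux.train!(loss, repeated((x, y), 1000), SGD(params(m), 0.1),
            cb = throttle(() -> @show(loss(x, y)), 5))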

diff --git a/latest/training/training.html b/latest/training/training.html
new file mode 100644
index 00000000..7b509866
--- /dev/null
+++ b/latest/training/training.html
@@ -0,0 +1,10 @@
+
+Training · Flux