build based on 6c60c9e

autodocs 2017-03-04 01:31:46 +00:00
parent 8a28ef8eaf
commit b76a8e30ae
13 changed files with 14 additions and 14 deletions

@@ -150,7 +150,7 @@ Backends
 </a>
 </li>
 </ul>
-<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/a03898d24d2406f1cea78e527d6d8094ecba35e9/docs/src/apis/backends.md">
+<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6c60c9e2b9fc97438bea4894c114149a74429783/docs/src/apis/backends.md">
 <span class="fa">
 </span>

@@ -155,7 +155,7 @@ Batching
 </a>
 </li>
 </ul>
-<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/a03898d24d2406f1cea78e527d6d8094ecba35e9/docs/src/apis/batching.md">
+<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6c60c9e2b9fc97438bea4894c114149a74429783/docs/src/apis/batching.md">
 <span class="fa">
 </span>

@@ -139,7 +139,7 @@ Storing Models
 </a>
 </li>
 </ul>
-<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/a03898d24d2406f1cea78e527d6d8094ecba35e9/docs/src/apis/storage.md">
+<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6c60c9e2b9fc97438bea4894c114149a74429783/docs/src/apis/storage.md">
 <span class="fa">
 </span>

@@ -136,7 +136,7 @@ Contributing &amp; Help
 </a>
 </li>
 </ul>
-<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/a03898d24d2406f1cea78e527d6d8094ecba35e9/docs/src/contributing.md">
+<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6c60c9e2b9fc97438bea4894c114149a74429783/docs/src/contributing.md">
 <span class="fa">
 </span>

@@ -139,7 +139,7 @@ Char RNN
 </a>
 </li>
 </ul>
-<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/a03898d24d2406f1cea78e527d6d8094ecba35e9/docs/src/examples/char-rnn.md">
+<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6c60c9e2b9fc97438bea4894c114149a74429783/docs/src/examples/char-rnn.md">
 <span class="fa">
 </span>

@@ -139,7 +139,7 @@ Logistic Regression
 </a>
 </li>
 </ul>
-<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/a03898d24d2406f1cea78e527d6d8094ecba35e9/docs/src/examples/logreg.md">
+<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6c60c9e2b9fc97438bea4894c114149a74429783/docs/src/examples/logreg.md">
 <span class="fa">
 </span>
@@ -196,7 +196,7 @@ Now we define our model, which will simply be a function from one to the other.
 Affine( 64), relu,
 Affine( 10), softmax)
-model = tf(model)</code></pre>
+model = tf(m)</code></pre>
 <p>
 We can try this out on our data already:
 </p>
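Note: the hunk above is the one substantive fix in this build. The walkthrough binds the chain to m, but the compiled model was built with tf(model), which refers to model before it is assigned. A minimal sketch of the corrected snippet, assuming the 2017-era Flux API (Input, Affine, tf) that these docs use:

    using Flux

    # Multi-layer perceptron from the MNIST walkthrough.
    m = Chain(
      Input(784),
      Affine(128), relu,
      Affine( 64), relu,
      Affine( 10), softmax)

    # Compile the chain for the TensorFlow backend. Before this commit the
    # docs read model = tf(model), using model before it was assigned.
    model = tf(m)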

@@ -147,7 +147,7 @@ Home
 </a>
 </li>
 </ul>
-<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/a03898d24d2406f1cea78e527d6d8094ecba35e9/docs/src/index.md">
+<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6c60c9e2b9fc97438bea4894c114149a74429783/docs/src/index.md">
 <span class="fa">
 </span>

@@ -136,7 +136,7 @@ Internals
 </a>
 </li>
 </ul>
-<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/a03898d24d2406f1cea78e527d6d8094ecba35e9/docs/src/internals.md">
+<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6c60c9e2b9fc97438bea4894c114149a74429783/docs/src/internals.md">
 <span class="fa">
 </span>

@@ -155,7 +155,7 @@ Model Building Basics
 </a>
 </li>
 </ul>
-<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/a03898d24d2406f1cea78e527d6d8094ecba35e9/docs/src/models/basics.md">
+<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6c60c9e2b9fc97438bea4894c114149a74429783/docs/src/models/basics.md">
 <span class="fa">
 </span>

@@ -139,7 +139,7 @@ Debugging
 </a>
 </li>
 </ul>
-<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/a03898d24d2406f1cea78e527d6d8094ecba35e9/docs/src/models/debugging.md">
+<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6c60c9e2b9fc97438bea4894c114149a74429783/docs/src/models/debugging.md">
 <span class="fa">
 </span>

@@ -139,7 +139,7 @@ Recurrence
 </a>
 </li>
 </ul>
-<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/a03898d24d2406f1cea78e527d6d8094ecba35e9/docs/src/models/recurrent.md">
+<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6c60c9e2b9fc97438bea4894c114149a74429783/docs/src/models/recurrent.md">
 <span class="fa">
 </span>

@@ -155,7 +155,7 @@ Model Templates
 </a>
 </li>
 </ul>
-<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/a03898d24d2406f1cea78e527d6d8094ecba35e9/docs/src/models/templates.md">
+<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6c60c9e2b9fc97438bea4894c114149a74429783/docs/src/models/templates.md">
 <span class="fa">
 </span>

@@ -245,7 +245,7 @@ var documenterSearchIndex = {"docs": [
 "page": "Logistic Regression",
 "title": "Logistic Regression with MNIST",
 "category": "section",
-"text": "This walkthrough example will take you through writing a multi-layer perceptron that classifies MNIST digits with high accuracy.First, we load the data using the MNIST package:using Flux, MNIST\n\ndata = [(trainfeatures(i), onehot(trainlabel(i), 0:9)) for i = 1:60_000]\ntrain = data[1:50_000]\ntest = data[50_001:60_000]The only Flux-specific function here is onehot, which takes a class label and turns it into a one-hot-encoded vector that we can use for training. For example:julia> onehot(:b, [:a, :b, :c])\n3-element Array{Int64,1}:\n 0\n 1\n 0Otherwise, the format of the data is simple enough, it's just a list of tuples from input to output. For example:julia> data[1]\n([0.0,0.0,0.0, … 0.0,0.0,0.0],[0,0,0,0,0,1,0,0,0,0])data[1][1] is a 28*28 == 784 length vector (mostly zeros due to the black background) and data[1][2] is its classification.Now we define our model, which will simply be a function from one to the other.m = Chain(\n Input(784),\n Affine(128), relu,\n Affine( 64), relu,\n Affine( 10), softmax)\n\nmodel = tf(model)We can try this out on our data already:julia> model(data[1][1])\n10-element Array{Float64,1}:\n 0.10614 \n 0.0850447\n 0.101474\n ...The model gives a probability of about 0.1 to each class which is a way of saying, \"I have no idea\". This isn't too surprising as we haven't shown it any data yet. This is easy to fix:Flux.train!(model, train, test, η = 1e-4)The training step takes about 5 minutes (to make it faster we can do smarter things like batching). If you run this code in Juno, you'll see a progress meter, which you can hover over to see the remaining computation time.Towards the end of the training process, Flux will have reported that the accuracy of the model is now about 90%. We can try it on our data again:10-element Array{Float32,1}:\n ...\n 5.11423f-7\n 0.9354 \n 3.1033f-5 \n 0.000127077\n ...Notice the class at 93%, suggesting our model is very confident about this image. We can use onecold to compare the true and predicted classes:julia> onecold(data[1][2], 0:9)\n5\n\njulia> onecold(model(data[1][1]), 0:9)\n5Success!"
+"text": "This walkthrough example will take you through writing a multi-layer perceptron that classifies MNIST digits with high accuracy.First, we load the data using the MNIST package:using Flux, MNIST\n\ndata = [(trainfeatures(i), onehot(trainlabel(i), 0:9)) for i = 1:60_000]\ntrain = data[1:50_000]\ntest = data[50_001:60_000]The only Flux-specific function here is onehot, which takes a class label and turns it into a one-hot-encoded vector that we can use for training. For example:julia> onehot(:b, [:a, :b, :c])\n3-element Array{Int64,1}:\n 0\n 1\n 0Otherwise, the format of the data is simple enough, it's just a list of tuples from input to output. For example:julia> data[1]\n([0.0,0.0,0.0, … 0.0,0.0,0.0],[0,0,0,0,0,1,0,0,0,0])data[1][1] is a 28*28 == 784 length vector (mostly zeros due to the black background) and data[1][2] is its classification.Now we define our model, which will simply be a function from one to the other.m = Chain(\n Input(784),\n Affine(128), relu,\n Affine( 64), relu,\n Affine( 10), softmax)\n\nmodel = tf(m)We can try this out on our data already:julia> model(data[1][1])\n10-element Array{Float64,1}:\n 0.10614 \n 0.0850447\n 0.101474\n ...The model gives a probability of about 0.1 to each class which is a way of saying, \"I have no idea\". This isn't too surprising as we haven't shown it any data yet. This is easy to fix:Flux.train!(model, train, test, η = 1e-4)The training step takes about 5 minutes (to make it faster we can do smarter things like batching). If you run this code in Juno, you'll see a progress meter, which you can hover over to see the remaining computation time.Towards the end of the training process, Flux will have reported that the accuracy of the model is now about 90%. We can try it on our data again:10-element Array{Float32,1}:\n ...\n 5.11423f-7\n 0.9354 \n 3.1033f-5 \n 0.000127077\n ...Notice the class at 93%, suggesting our model is very confident about this image. We can use onecold to compare the true and predicted classes:julia> onecold(data[1][2], 0:9)\n5\n\njulia> onecold(model(data[1][1]), 0:9)\n5Success!"
 },
 {
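Note: the index entry above leans on onehot and onecold for label encoding. A short sketch of the round trip it describes, using only the calls quoted in the entry (the symbolic labels here are illustrative):

    using Flux

    # Encode a class label as a one-hot vector over its class list.
    v = onehot(:b, [:a, :b, :c])    # => [0, 1, 0]

    # onecold inverts the encoding, mapping the hot entry back to its label.
    onecold(v, [:a, :b, :c])        # => :b

    # In the walkthrough the classes are the digits 0:9, so predictions are
    # decoded with onecold(model(data[1][1]), 0:9).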