build based on 4a9517b

autodocs 2017-02-22 19:56:49 +00:00
parent ff3ea9b54d
commit b929091524
11 changed files with 92 additions and 12 deletions

@ -140,7 +140,7 @@ Backends
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6f563b6cb7547c069a5a934f19732576248b2e48/docs/src/apis/backends.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/4a9517b23d7909cd1b2427c2463de01b4e0da592/docs/src/apis/backends.md">
<span class="fa">
</span>

@ -145,7 +145,7 @@ Batching
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6f563b6cb7547c069a5a934f19732576248b2e48/docs/src/apis/batching.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/4a9517b23d7909cd1b2427c2463de01b4e0da592/docs/src/apis/batching.md">
<span class="fa">
</span>

@ -126,7 +126,7 @@ Contributing &amp; Help
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6f563b6cb7547c069a5a934f19732576248b2e48/docs/src/contributing.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/4a9517b23d7909cd1b2427c2463de01b4e0da592/docs/src/contributing.md">
<span class="fa">
</span>

@ -129,7 +129,7 @@ Logistic Regression
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6f563b6cb7547c069a5a934f19732576248b2e48/docs/src/examples/logreg.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/4a9517b23d7909cd1b2427c2463de01b4e0da592/docs/src/examples/logreg.md">
<span class="fa">
</span>
@ -144,7 +144,87 @@ Logistic Regression with MNIST
</a>
</h1>
<p>
This walkthrough takes you through writing a multi-layer perceptron that classifies MNIST digits with high accuracy.
</p>
<p>
First, we load the data using the MNIST package:
</p>
<pre><code class="language-julia">using Flux, MNIST
data = [(trainfeatures(i), onehot(trainlabel(i), 0:9)) for i = 1:60_000]
train = data[1:50_000]
test = data[50_001:60_000]</code></pre>
<p>
The only Flux-specific function here is
<code>onehot</code>
, which takes a class label and turns it into a one-hot-encoded vector that we can use for training. For example:
</p>
<pre><code class="language-julia">julia&gt; onehot(:b, [:a, :b, :c])
3-element Array{Int64,1}:
0
1
0</code></pre>
<p>
Otherwise, the format of the data is simple enough: it&#39;s just a list of tuples mapping inputs to outputs. For example:
</p>
<pre><code class="language-julia">julia&gt; data[1]
([0.0,0.0,0.0, … 0.0,0.0,0.0],[0,0,0,0,0,1,0,0,0,0])</code></pre>
<p>
<code>data[1][1]</code>
is a vector of length
<code>28*28 == 784</code>
(mostly zeros due to the black background), and
<code>data[1][2]</code>
is its one-hot-encoded classification.
</p>
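<p>
As a quick sanity check (a hypothetical REPL session, assuming the data loaded as above), the input really has 784 entries and the one-hot label sums to one:
</p>
<pre><code class="language-julia">julia&gt; length(data[1][1])
784

julia&gt; sum(data[1][2])
1</code></pre>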
<p>
Now we define our model, which is simply a function from inputs to outputs.
</p>
<pre><code class="language-julia">m = Chain(
  Input(784),
  Affine(128), relu,
  Affine( 64), relu,
  Affine( 10), softmax)

model = tf(m)</code></pre>
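<p>
<code>Chain</code>
simply feeds the output of each layer into the next. Conceptually it behaves like the following minimal sketch (not Flux&#39;s actual definition):
</p>
<pre><code class="language-julia"># A minimal sketch of layer chaining: apply each layer in turn,
# so that chain(f, g)(x) behaves like g(f(x)).
function chain(layers...)
    function (x)
        for f in layers
            x = f(x)
        end
        return x
    end
end</code></pre>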
<p>
We can try this out on our data already:
</p>
<pre><code class="language-julia">julia&gt; model(data[1][1])
10-element Array{Float64,1}:
0.10614
0.0850447
0.101474
...</code></pre>
<p>
The model assigns a probability of about 0.1 to each class, which is its way of saying &quot;I have no idea&quot;. This isn&#39;t too surprising, as we haven&#39;t shown it any data yet. That&#39;s easy to fix:
</p>
<pre><code class="language-julia">Flux.train!(model, train, test, η = 1e-4)</code></pre>
<p>
Here η is the learning rate. Training takes about five minutes; to make it faster we could do smarter things, like batching (see the sketch below). If you run this code in Juno, you&#39;ll see a progress meter, which you can hover over to see the remaining computation time.
</p>
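<p>
To illustrate the batching idea, here is a minimal sketch in plain Julia; the
<code>batch</code>
helper is hypothetical, not part of Flux&#39;s API:
</p>
<pre><code class="language-julia"># Hypothetical helper: split the training pairs into chunks of n,
# so that each training step can process many examples at once.
batch(data, n) = [data[i:min(i + n - 1, end)] for i = 1:n:length(data)]

batches = batch(train, 100)  # 500 mini-batches of 100 samples each</code></pre>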
<p>
By the end of training, Flux will report that the model&#39;s accuracy is around 90%. We can try it on our data again:
</p>
<pre><code class="language-julia">julia&gt; model(data[1][1])
10-element Array{Float32,1}:
...
5.11423f-7
0.9354
3.1033f-5
0.000127077
...</code></pre>
<p>
Notice the class with probability about 93%: the model is now very confident about this image. We can use
<code>onecold</code>
to compare the true and predicted classes:
</p>
<pre><code class="language-julia">julia&gt; onecold(data[1][2], 0:9)
5
julia&gt; onecold(model(data[1][1]), 0:9)
5</code></pre>
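<p>
To put a number on this over the whole test set, we can count matching predictions. A minimal sketch, where
<code>accuracy</code>
is a hypothetical helper rather than part of Flux&#39;s API:
</p>
<pre><code class="language-julia"># Hypothetical helper: the fraction of samples whose predicted class
# (via onecold) matches the true class.
accuracy(m, data) =
    mean(onecold(m(x), 0:9) == onecold(y, 0:9) for (x, y) in data)

accuracy(model, test)  # roughly 0.9 after training</code></pre>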
<p>
Success!
</p>
<footer>
<hr/>

@ -132,7 +132,7 @@ Home
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6f563b6cb7547c069a5a934f19732576248b2e48/docs/src/index.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/4a9517b23d7909cd1b2427c2463de01b4e0da592/docs/src/index.md">
<span class="fa">
</span>

@ -126,7 +126,7 @@ Internals
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6f563b6cb7547c069a5a934f19732576248b2e48/docs/src/internals.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/4a9517b23d7909cd1b2427c2463de01b4e0da592/docs/src/internals.md">
<span class="fa">
</span>

@ -145,7 +145,7 @@ Model Building Basics
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6f563b6cb7547c069a5a934f19732576248b2e48/docs/src/models/basics.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/4a9517b23d7909cd1b2427c2463de01b4e0da592/docs/src/models/basics.md">
<span class="fa">
</span>

@ -129,7 +129,7 @@ Debugging
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6f563b6cb7547c069a5a934f19732576248b2e48/docs/src/models/debugging.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/4a9517b23d7909cd1b2427c2463de01b4e0da592/docs/src/models/debugging.md">
<span class="fa">
</span>

@ -129,7 +129,7 @@ Recurrence
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6f563b6cb7547c069a5a934f19732576248b2e48/docs/src/models/recurrent.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/4a9517b23d7909cd1b2427c2463de01b4e0da592/docs/src/models/recurrent.md">
<span class="fa">
</span>

@ -145,7 +145,7 @@ Model Templates
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/6f563b6cb7547c069a5a934f19732576248b2e48/docs/src/models/templates.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/4a9517b23d7909cd1b2427c2463de01b4e0da592/docs/src/models/templates.md">
<span class="fa">
</span>

@ -221,7 +221,7 @@ var documenterSearchIndex = {"docs": [
"page": "Logistic Regression", "page": "Logistic Regression",
"title": "Logistic Regression with MNIST", "title": "Logistic Regression with MNIST",
"category": "section", "category": "section",
"text": "[WIP]" "text": "This walkthrough example will take you through writing a multi-layer perceptron that classifies MNIST digits with high accuracy.First, we load the data using the MNIST package:using Flux, MNIST\n\ndata = [(trainfeatures(i), onehot(trainlabel(i), 0:9)) for i = 1:60_000]\ntrain = data[1:50_000]\ntest = data[50_001:60_000]The only Flux-specific function here is onehot, which takes a class label and turns it into a one-hot-encoded vector that we can use for training. For example:julia> onehot(:b, [:a, :b, :c])\n3-element Array{Int64,1}:\n 0\n 1\n 0Otherwise, the format of the data is simple enough, it's just a list of tuples from input to output. For example:julia> data[1]\n([0.0,0.0,0.0, … 0.0,0.0,0.0],[0,0,0,0,0,1,0,0,0,0])data[1][1] is a 28*28 == 784 length vector (mostly zeros due to the black background) and data[1][2] is its classification.Now we define our model, which will simply be a function from one to the other.m = Chain(\n Input(784),\n Affine(128), relu,\n Affine( 64), relu,\n Affine( 10), softmax)\n\nmodel = tf(model)We can try this out on our data already:julia> model(data[1][1])\n10-element Array{Float64,1}:\n 0.10614 \n 0.0850447\n 0.101474\n ...The model gives a probability of about 0.1 to each class which is a way of saying, \"I have no idea\". This isn't too surprising as we haven't shown it any data yet. This is easy to fix:Flux.train!(model, train, test, η = 1e-4)The training step takes about 5 minutes (to make it faster we can do smarter things like batching). If you run this code in Juno, you'll see a progress meter, which you can hover over to see the remaining computation time.Towards the end of the training process, Flux will have reported that the accuracy of the model is now about 90%. We can try it on our data again:10-element Array{Float32,1}:\n ...\n 5.11423f-7\n 0.9354 \n 3.1033f-5 \n 0.000127077\n ...Notice the class at 93%, suggesting our model is very confident about this image. We can use onecold to compare the true and predicted classes:julia> onecold(data[1][2], 0:9)\n5\n\njulia> onecold(model(data[1][1]), 0:9)\n5Success!"
},
{