diff --git a/latest/apis/backends.html b/latest/apis/backends.html
index f2860068..001dca34 100644
--- a/latest/apis/backends.html
+++ b/latest/apis/backends.html
@@ -140,7 +140,7 @@ Backends
-
+
diff --git a/latest/apis/batching.html b/latest/apis/batching.html
index fbab2ab5..c48af39f 100644
--- a/latest/apis/batching.html
+++ b/latest/apis/batching.html
@@ -145,7 +145,7 @@ Batching
-
+
diff --git a/latest/contributing.html b/latest/contributing.html
index 9af8bc33..572e41dd 100644
--- a/latest/contributing.html
+++ b/latest/contributing.html
@@ -126,7 +126,7 @@ Contributing & Help
-
+
diff --git a/latest/examples/logreg.html b/latest/examples/logreg.html
index 4b146ced..9e92146d 100644
--- a/latest/examples/logreg.html
+++ b/latest/examples/logreg.html
@@ -129,7 +129,7 @@ Logistic Regression
-
+
@@ -144,7 +144,87 @@ Logistic Regression with MNIST
-[WIP]
+This walkthrough takes you through writing a multi-layer perceptron that classifies MNIST digits with high accuracy.
+
+
+First, we load the data using the MNIST package:
+
+using Flux, MNIST
+
+data = [(trainfeatures(i), onehot(trainlabel(i), 0:9)) for i = 1:60_000]
+train = data[1:50_000]
+test = data[50_001:60_000]
+
+The only Flux-specific function here is
+onehot
+, which takes a class label and turns it into a one-hot-encoded vector that we can use for training. For example:
+
+julia> onehot(:b, [:a, :b, :c])
+3-element Array{Int64,1}:
+ 0
+ 1
+ 0
+
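+Conceptually, onehot just marks the position of the label within the label set. Here's a minimal sketch of the idea (the helper name is illustrative, not Flux's actual implementation):
+
+```julia
+# Return a 0/1 vector with a 1 at the position of `label` in `labels`.
+onehot_sketch(label, labels) = Int.(labels .== label)
+
+onehot_sketch(:b, [:a, :b, :c])  # → [0, 1, 0]
+```
+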
+Otherwise, the format of the data is simple: it's just a list of tuples mapping inputs to outputs. For example:
+
+julia> data[1]
+([0.0,0.0,0.0, … 0.0,0.0,0.0],[0,0,0,0,0,1,0,0,0,0])
+
+data[1][1]
+ is a
+28*28 == 784
+ element vector (mostly zeros due to the black background) and
+data[1][2]
+ is its classification.
+
+
+Now we define our model, which will simply be a function from one to the other.
+
+m = Chain(
+ Input(784),
+ Affine(128), relu,
+ Affine( 64), relu,
+ Affine( 10), softmax)
+
+model = tf(m)
+
+We can try this out on our data already:
+
+julia> model(data[1][1])
+10-element Array{Float64,1}:
+ 0.10614
+ 0.0850447
+ 0.101474
+ ...
+
+The model gives a probability of about 0.1 to each class – which is a way of saying, "I have no idea". This isn't too surprising as we haven't shown it any data yet. This is easy to fix:
+
+Flux.train!(model, train, test, η = 1e-4)
+
+The training step takes about 5 minutes (to make it faster we can do smarter things like batching). If you run this code in Juno, you'll see a progress meter, which you can hover over to see the remaining computation time.
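+
+Batching just means grouping examples so that each training step processes several inputs at once, making better use of the hardware. A rough sketch of splitting the data into batches (plain Julia; the helper is illustrative and not part of Flux's API):
+
+```julia
+# Split `data` into batches of `n` examples each (the last batch may be smaller).
+batches(data, n) = [data[i:min(i + n - 1, end)] for i in 1:n:length(data)]
+
+length(batches(collect(1:10), 3))  # → 4
+```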
+
+
+Towards the end of the training process, Flux will have reported that the accuracy of the model is now about 90%. We can try it on our data again:
+
+julia> model(data[1][1])
+10-element Array{Float32,1}:
+ ...
+ 5.11423f-7
+ 0.9354
+ 3.1033f-5
+ 0.000127077
+ ...
+
+Notice the class with probability around 93%, suggesting our model is now very confident about this image. We can use
+onecold
+ to compare the true and predicted classes:
+
+julia> onecold(data[1][2], 0:9)
+5
+
+julia> onecold(model(data[1][1]), 0:9)
+5
+
+Success!
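+
+To go beyond spot-checking a single image, we can also estimate the accuracy over the whole test set. Here's a minimal sketch built on the same onecold trick (the helper names are illustrative, not part of Flux's API, and mean may need using Statistics on newer Julia versions):
+
+```julia
+using Statistics
+
+# Sketch of onecold: pick the label at the position with the highest score.
+onecold_sketch(v, labels) = labels[argmax(v)]
+
+# Fraction of examples whose predicted class matches the true class.
+accuracy(model, data) =
+    mean(onecold_sketch(model(x), 0:9) == onecold_sketch(y, 0:9) for (x, y) in data)
+```
+
+With the trained model above, accuracy(model, test) should come out close to 0.9.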
diff --git a/latest/index.html b/latest/index.html
index 641b4407..aa294fcb 100644
--- a/latest/index.html
+++ b/latest/index.html
@@ -132,7 +132,7 @@ Home
-
+
diff --git a/latest/internals.html b/latest/internals.html
index 8ef0a4fb..0f479711 100644
--- a/latest/internals.html
+++ b/latest/internals.html
@@ -126,7 +126,7 @@ Internals
-
+
diff --git a/latest/models/basics.html b/latest/models/basics.html
index 02483a41..50b41f7c 100644
--- a/latest/models/basics.html
+++ b/latest/models/basics.html
@@ -145,7 +145,7 @@ Model Building Basics
-
+
diff --git a/latest/models/debugging.html b/latest/models/debugging.html
index d8bdeae5..6fbaf19a 100644
--- a/latest/models/debugging.html
+++ b/latest/models/debugging.html
@@ -129,7 +129,7 @@ Debugging
-
+
diff --git a/latest/models/recurrent.html b/latest/models/recurrent.html
index 3fed70f7..aea9f178 100644
--- a/latest/models/recurrent.html
+++ b/latest/models/recurrent.html
@@ -129,7 +129,7 @@ Recurrence
-
+
diff --git a/latest/models/templates.html b/latest/models/templates.html
index 3f49af4a..3031bd13 100644
--- a/latest/models/templates.html
+++ b/latest/models/templates.html
@@ -145,7 +145,7 @@ Model Templates
-
+
diff --git a/latest/search_index.js b/latest/search_index.js
index 1749db76..cb241458 100644
--- a/latest/search_index.js
+++ b/latest/search_index.js
@@ -221,7 +221,7 @@ var documenterSearchIndex = {"docs": [
"page": "Logistic Regression",
"title": "Logistic Regression with MNIST",
"category": "section",
- "text": "[WIP]"
+  "text": "This walkthrough takes you through writing a multi-layer perceptron that classifies MNIST digits with high accuracy.First, we load the data using the MNIST package:using Flux, MNIST\n\ndata = [(trainfeatures(i), onehot(trainlabel(i), 0:9)) for i = 1:60_000]\ntrain = data[1:50_000]\ntest = data[50_001:60_000]The only Flux-specific function here is onehot, which takes a class label and turns it into a one-hot-encoded vector that we can use for training. For example:julia> onehot(:b, [:a, :b, :c])\n3-element Array{Int64,1}:\n 0\n 1\n 0Otherwise, the format of the data is simple: it's just a list of tuples mapping inputs to outputs. For example:julia> data[1]\n([0.0,0.0,0.0, … 0.0,0.0,0.0],[0,0,0,0,0,1,0,0,0,0])data[1][1] is a 28*28 == 784 element vector (mostly zeros due to the black background) and data[1][2] is its classification.Now we define our model, which will simply be a function from one to the other.m = Chain(\n  Input(784),\n  Affine(128), relu,\n  Affine( 64), relu,\n  Affine( 10), softmax)\n\nmodel = tf(m)We can try this out on our data already:julia> model(data[1][1])\n10-element Array{Float64,1}:\n 0.10614\n 0.0850447\n 0.101474\n ...The model gives a probability of about 0.1 to each class – which is a way of saying, \"I have no idea\". This isn't too surprising as we haven't shown it any data yet. This is easy to fix:Flux.train!(model, train, test, η = 1e-4)The training step takes about 5 minutes (to make it faster we can do smarter things like batching). If you run this code in Juno, you'll see a progress meter, which you can hover over to see the remaining computation time.Towards the end of the training process, Flux will have reported that the accuracy of the model is now about 90%. We can try it on our data again:julia> model(data[1][1])\n10-element Array{Float32,1}:\n ...\n 5.11423f-7\n 0.9354\n 3.1033f-5\n 0.000127077\n ...Notice the class with probability around 93%, suggesting our model is now very confident about this image. We can use onecold to compare the true and predicted classes:julia> onecold(data[1][2], 0:9)\n5\n\njulia> onecold(model(data[1][1]), 0:9)\n5Success!"
},
{