build based on e1cd688

autodocs 2017-02-21 20:16:07 +00:00
parent e3994f9066
commit 5a6479aaaa
11 changed files with 12 additions and 12 deletions

View File

@@ -140,7 +140,7 @@ Backends
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/85c3dac3b3342921e9ce692acbd0efc25d023f3a/docs/src/apis/backends.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/apis/backends.md">
<span class="fa">
</span>
@@ -166,7 +166,7 @@ Currently, Flux&#39;s pure-Julia backend has no optimisations. This means that c
</p>
<pre><code class="language-julia">model(rand(10)) #&gt; [0.0650, 0.0655, ...]</code></pre>
<p>
-directly won&#39;t have great performance. In order to support a computationally intensive training process, we really on a backend like MXNet or TensorFlow.
+directly won&#39;t have great performance. In order to run a computationally intensive training process, we rely on a backend like MXNet or TensorFlow.
</p>
<p>
This is easy to do. Just call either
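
The hunk is cut off here, but the full passage survives verbatim in the updated search index entry at the end of this diff: converting a model is a single call to mxnet or tf, and the result is used exactly like the original:

<pre><code class="language-julia">mxmodel = mxnet(model, (10, 1))
mxmodel(xs) #&gt; [0.0650, 0.0655, ...]
# or
tfmodel = tf(model)
tfmodel(xs) #&gt; [0.0650, 0.0655, ...]</code></pre>

Per the same search index text, the converted models look and feel like every other Flux model, return the same results when called, and can be trained as usual with Flux.train!(); the computation is carried out by the backend, which will usually give a large speedup. The extra (10, 1) argument to mxnet appears to declare the input shape up front, though the captured text does not say so explicitly.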

View File

@@ -145,7 +145,7 @@ Batching
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/85c3dac3b3342921e9ce692acbd0efc25d023f3a/docs/src/apis/batching.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/apis/batching.md">
<span class="fa">
</span>

View File

@@ -126,7 +126,7 @@ Contributing &amp; Help
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/85c3dac3b3342921e9ce692acbd0efc25d023f3a/docs/src/contributing.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/contributing.md">
<span class="fa">
</span>

View File

@@ -129,7 +129,7 @@ Logistic Regression
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/85c3dac3b3342921e9ce692acbd0efc25d023f3a/docs/src/examples/logreg.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/examples/logreg.md">
<span class="fa">
</span>

View File

@@ -132,7 +132,7 @@ Home
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/85c3dac3b3342921e9ce692acbd0efc25d023f3a/docs/src/index.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/index.md">
<span class="fa">
</span>

View File

@@ -126,7 +126,7 @@ Internals
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/85c3dac3b3342921e9ce692acbd0efc25d023f3a/docs/src/internals.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/internals.md">
<span class="fa">
</span>

View File

@@ -145,7 +145,7 @@ Model Building Basics
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/85c3dac3b3342921e9ce692acbd0efc25d023f3a/docs/src/models/basics.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/models/basics.md">
<span class="fa">
</span>

View File

@@ -129,7 +129,7 @@ Debugging
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/85c3dac3b3342921e9ce692acbd0efc25d023f3a/docs/src/models/debugging.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/models/debugging.md">
<span class="fa">
</span>

View File

@@ -129,7 +129,7 @@ Recurrence
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/85c3dac3b3342921e9ce692acbd0efc25d023f3a/docs/src/models/recurrent.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/models/recurrent.md">
<span class="fa">
</span>

View File

@@ -145,7 +145,7 @@ Model Templates
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/85c3dac3b3342921e9ce692acbd0efc25d023f3a/docs/src/models/templates.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/models/templates.md">
<span class="fa">
</span>

View File

@@ -197,7 +197,7 @@ var documenterSearchIndex = {"docs": [
"page": "Backends",
"title": "Basic Usage",
"category": "section",
"text": "model = Chain(Affine(10, 20), σ, Affine(20, 15), softmax)\nxs = rand(10)Currently, Flux's pure-Julia backend has no optimisations. This means that callingmodel(rand(10)) #> [0.0650, 0.0655, ...]directly won't have great performance. In order to support a computationally intensive training process, we really on a backend like MXNet or TensorFlow.This is easy to do. Just call either mxnet or tf on a model to convert it to a model of that kind:mxmodel = mxnet(model, (10, 1))\nmxmodel(xs) #> [0.0650, 0.0655, ...]\n# or\ntfmodel = tf(model)\ntfmodel(xs) #> [0.0650, 0.0655, ...]These new models look and feel exactly like every other model in Flux, including returning the same result when you call them, and can be trained as usual using Flux.train!(). The difference is that the computation is being carried out by a backend, which will usually give a large speedup."
"text": "model = Chain(Affine(10, 20), σ, Affine(20, 15), softmax)\nxs = rand(10)Currently, Flux's pure-Julia backend has no optimisations. This means that callingmodel(rand(10)) #> [0.0650, 0.0655, ...]directly won't have great performance. In order to run a computationally intensive training process, we rely on a backend like MXNet or TensorFlow.This is easy to do. Just call either mxnet or tf on a model to convert it to a model of that kind:mxmodel = mxnet(model, (10, 1))\nmxmodel(xs) #> [0.0650, 0.0655, ...]\n# or\ntfmodel = tf(model)\ntfmodel(xs) #> [0.0650, 0.0655, ...]These new models look and feel exactly like every other model in Flux, including returning the same result when you call them, and can be trained as usual using Flux.train!(). The difference is that the computation is being carried out by a backend, which will usually give a large speedup."
},
{