build based on c9dcc81

autodocs 2017-05-03 17:35:58 +00:00
parent 65919307e8
commit c64cc3ff29
13 changed files with 14 additions and 14 deletions


@@ -150,7 +150,7 @@ Backends
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/7eea918f99669cfb67a10a5f5fdbb16f2b3dc031/docs/src/apis/backends.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c9dcc815dc9ddce0981bf004453214e3d5aab929/docs/src/apis/backends.md">
<span class="fa">
</span>
@@ -176,7 +176,7 @@ Currently, Flux&#39;s pure-Julia backend has no optimisations. This means that c
</p>
<pre><code class="language-julia">model(rand(10)) #&gt; [0.0650, 0.0655, ...]</code></pre>
<p>
-directly won&#39;t have great performance. In order to run a computationally intensive training process, we rely on a backend like MXNet or TensorFlow.
+directly won&#39;t have great performance. In order to run a computationally intensive training process, we need to use a backend like MXNet or TensorFlow.
</p>
<p>
This is easy to do. Just call either
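
The hunk above ends mid-sentence; the complete passage, quoted in the search-index diff at the end of this commit, names the conversion calls. As a minimal sketch of that workflow, assuming the historical Flux 0.x API of this docs version (where Affine, mxnet, and tf are exported by Flux):

    using Flux

    # The model from the page's example, built on the pure-Julia backend.
    model = Chain(Affine(10, 20), σ, Affine(20, 15), softmax)
    xs = rand(10)

    mxmodel = mxnet(model)   # convert to an MXNet-backed model
    mxmodel(xs)  #> [0.0650, 0.0655, ...]

    tfmodel = tf(model)      # or convert to a TensorFlow-backed model
    tfmodel(xs)  #> [0.0650, 0.0655, ...]

Per the page text, the converted models return the same results as the original and can still be trained with Flux.train!(); only the execution backend changes.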


@@ -155,7 +155,7 @@ Batching
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/7eea918f99669cfb67a10a5f5fdbb16f2b3dc031/docs/src/apis/batching.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c9dcc815dc9ddce0981bf004453214e3d5aab929/docs/src/apis/batching.md">
<span class="fa">
</span>


@@ -139,7 +139,7 @@ Storing Models
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/7eea918f99669cfb67a10a5f5fdbb16f2b3dc031/docs/src/apis/storage.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c9dcc815dc9ddce0981bf004453214e3d5aab929/docs/src/apis/storage.md">
<span class="fa">
</span>


@@ -136,7 +136,7 @@ Contributing &amp; Help
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/7eea918f99669cfb67a10a5f5fdbb16f2b3dc031/docs/src/contributing.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c9dcc815dc9ddce0981bf004453214e3d5aab929/docs/src/contributing.md">
<span class="fa">
</span>


@@ -139,7 +139,7 @@ Char RNN
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/7eea918f99669cfb67a10a5f5fdbb16f2b3dc031/docs/src/examples/char-rnn.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c9dcc815dc9ddce0981bf004453214e3d5aab929/docs/src/examples/char-rnn.md">
<span class="fa">
</span>


@@ -139,7 +139,7 @@ Simple MNIST
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/7eea918f99669cfb67a10a5f5fdbb16f2b3dc031/docs/src/examples/logreg.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c9dcc815dc9ddce0981bf004453214e3d5aab929/docs/src/examples/logreg.md">
<span class="fa">
</span>


@@ -147,7 +147,7 @@ Home
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/7eea918f99669cfb67a10a5f5fdbb16f2b3dc031/docs/src/index.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c9dcc815dc9ddce0981bf004453214e3d5aab929/docs/src/index.md">
<span class="fa">
</span>


@@ -136,7 +136,7 @@ Internals
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/7eea918f99669cfb67a10a5f5fdbb16f2b3dc031/docs/src/internals.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c9dcc815dc9ddce0981bf004453214e3d5aab929/docs/src/internals.md">
<span class="fa">
</span>


@@ -155,7 +155,7 @@ Model Building Basics
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/7eea918f99669cfb67a10a5f5fdbb16f2b3dc031/docs/src/models/basics.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c9dcc815dc9ddce0981bf004453214e3d5aab929/docs/src/models/basics.md">
<span class="fa">
</span>


@@ -139,7 +139,7 @@ Debugging
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/7eea918f99669cfb67a10a5f5fdbb16f2b3dc031/docs/src/models/debugging.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c9dcc815dc9ddce0981bf004453214e3d5aab929/docs/src/models/debugging.md">
<span class="fa">
</span>


@@ -139,7 +139,7 @@ Recurrence
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/7eea918f99669cfb67a10a5f5fdbb16f2b3dc031/docs/src/models/recurrent.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c9dcc815dc9ddce0981bf004453214e3d5aab929/docs/src/models/recurrent.md">
<span class="fa">
</span>


@@ -155,7 +155,7 @@ Model Templates
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/7eea918f99669cfb67a10a5f5fdbb16f2b3dc031/docs/src/models/templates.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c9dcc815dc9ddce0981bf004453214e3d5aab929/docs/src/models/templates.md">
<span class="fa">
</span>


@@ -205,7 +205,7 @@ var documenterSearchIndex = {"docs": [
"page": "Backends",
"title": "Basic Usage",
"category": "section",
"text": "model = Chain(Affine(10, 20), σ, Affine(20, 15), softmax)\nxs = rand(10)Currently, Flux's pure-Julia backend has no optimisations. This means that callingmodel(rand(10)) #> [0.0650, 0.0655, ...]directly won't have great performance. In order to run a computationally intensive training process, we rely on a backend like MXNet or TensorFlow.This is easy to do. Just call either mxnet or tf on a model to convert it to a model of that kind:mxmodel = mxnet(model)\nmxmodel(xs) #> [0.0650, 0.0655, ...]\n# or\ntfmodel = tf(model)\ntfmodel(xs) #> [0.0650, 0.0655, ...]These new models look and feel exactly like every other model in Flux, including returning the same result when you call them, and can be trained as usual using Flux.train!(). The difference is that the computation is being carried out by a backend, which will usually give a large speedup."
"text": "model = Chain(Affine(10, 20), σ, Affine(20, 15), softmax)\nxs = rand(10)Currently, Flux's pure-Julia backend has no optimisations. This means that callingmodel(rand(10)) #> [0.0650, 0.0655, ...]directly won't have great performance. In order to run a computationally intensive training process, we need to use a backend like MXNet or TensorFlow.This is easy to do. Just call either mxnet or tf on a model to convert it to a model of that kind:mxmodel = mxnet(model)\nmxmodel(xs) #> [0.0650, 0.0655, ...]\n# or\ntfmodel = tf(model)\ntfmodel(xs) #> [0.0650, 0.0655, ...]These new models look and feel exactly like every other model in Flux, including returning the same result when you call them, and can be trained as usual using Flux.train!(). The difference is that the computation is being carried out by a backend, which will usually give a large speedup."
},
{