build based on c05ef62
parent 5a6479aaaa
commit 4d3cd105ec
@@ -140,7 +140,7 @@ Backends
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/apis/backends.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c05ef6286aa284f818bb0f3b90d4347efe8c29e6/docs/src/apis/backends.md">
<span class="fa">
</span>
@@ -145,7 +145,7 @@ Batching
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/apis/batching.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c05ef6286aa284f818bb0f3b90d4347efe8c29e6/docs/src/apis/batching.md">
<span class="fa">
</span>
@@ -298,17 +298,12 @@ Right now, the
<ul>
<li>
<p>
Code can be written in a batch-agnostic way or be generic across batching setups. Code works with a single data point, and batching happens independently.
Code can be written in a batch-agnostic way or be generic across batching strategies.
</p>
</li>
<li>
<p>
Automatic batching can be done with correctness assured, reducing programmer errors when manipulating dimensions.
</p>
</li>
<li>
<p>
Optimisations, like switching batch dimensions, can be expressed by the programmer with compiler support; fewer code changes are required and optimisations are guaranteed not to break the model.
Batching and optimisations, like switching batch dimensions, can be expressed by the programmer with compiler support; fewer code changes are required and optimisations are guaranteed not to break the model.
</p>
</li>
<li>
@@ -317,6 +312,52 @@ This also opens the door for more automatic optimisations, e.g. having the compi
</p>
</li>
</ul>
<p>
Here's a more detailed illustration of how it might look for code to be "generic across batching". Take for example a weight matrix
<code>W</code>
times a vector
<code>x</code>
, as used in a logistic regression or a simple neural network:
</p>
<pre><code class="language-julia"> W * x => y
(10×28) * (28) => (10)</code></pre>
<p>
If we want to work with a batch of 50
<code>x</code>
s, one option is to stack the data into a matrix of size
<code>28 × 50</code>
.
</p>
<pre><code class="language-julia"> W * x => y
(10×28) * (28×50) => (10×50)</code></pre>
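<p>
As a concrete sketch of the stacking step (plain Julia, with hypothetical variable names; no Flux types involved):
</p>
<pre><code class="language-julia"># Stack 50 length-28 samples column-wise into a 28 × 50 batch.
xs = [randn(28) for _ in 1:50]
X = hcat(xs...)    # 28 × 50
W = randn(10, 28)
Y = W * X          # 10 × 50: one output column per sample</code></pre>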
<p>
This works, but we may find that it's slow or doesn't fit well with the rest of the model, which batches on the first dimension. For that reason we may instead want to put the data in a
<code>50 × 28</code>
matrix and alter the code as follows:
</p>
<pre><code class="language-julia"> x * W' => y
(50×28) * (28×10) => (50×10)</code></pre>
<p>
to make the shapes work out. This code change is not ideal; in more complex cases it can become fiddly and error-prone, and it means that the code is less reusable, tied to a particular implementation strategy.
</p>
<p>
There's an alternative. We keep the same code, but represent the batched
<code>x</code>
s as either a
<code>Batch{Vector,1}</code>
or a
<code>Batch{Vector,2}</code>
, depending on how the data is stacked. Then we can simply overload
<code>*</code>
as follows:
</p>
<pre><code class="language-julia">*(W::Matrix, x::Batch{Vector,1}) = x * W'
*(W::Matrix, x::Batch{Vector,2}) = W * x</code></pre>
<p>
This means that we can always write
<code>W*x</code>
, and the code is reusable in a larger network regardless of the overall batching approach. Moreover, Julia's type system ensures there's no runtime cost to doing this, and we can compile the code appropriately for backends like TensorFlow as well.
</p>
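<p>
To make the dispatch idea concrete, here is a minimal, self-contained sketch with a toy
<code>Batch</code>
wrapper (Flux's real
<code>Batch</code>
type is richer; the
<code>data</code>
field is an assumption made for illustration):
</p>
<pre><code class="language-julia">import Base: *

# Toy stand-in: T tags the element type, N tags which way the batch is stacked.
struct Batch{T,N}
    data::Matrix{Float64}
end

# Overloads mirroring the ones above, unwrapping the stored array:
*(W::Matrix, x::Batch{Vector,1}) = x.data * W'   # samples stacked as rows (50 × 28)
*(W::Matrix, x::Batch{Vector,2}) = W * x.data    # samples stacked as columns (28 × 50)

W = randn(10, 28)
size(W * Batch{Vector,1}(randn(50, 28)))  # (50, 10)
size(W * Batch{Vector,2}(randn(28, 50)))  # (10, 50)</code></pre>
<p>
Either way the calling code stays
<code>W * x</code>
; only the wrapper type decides which kernel runs.
</p>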
<footer>
<hr/>
<a class="previous" href="../models/debugging.html">
@@ -126,7 +126,7 @@ Contributing & Help
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/contributing.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c05ef6286aa284f818bb0f3b90d4347efe8c29e6/docs/src/contributing.md">
<span class="fa">
</span>
@@ -129,7 +129,7 @@ Logistic Regression
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/examples/logreg.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c05ef6286aa284f818bb0f3b90d4347efe8c29e6/docs/src/examples/logreg.md">
<span class="fa">
</span>
@@ -132,7 +132,7 @@ Home
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/index.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c05ef6286aa284f818bb0f3b90d4347efe8c29e6/docs/src/index.md">
<span class="fa">
</span>
@@ -126,7 +126,7 @@ Internals
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/internals.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c05ef6286aa284f818bb0f3b90d4347efe8c29e6/docs/src/internals.md">
<span class="fa">
</span>
@@ -145,7 +145,7 @@ Model Building Basics
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/models/basics.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c05ef6286aa284f818bb0f3b90d4347efe8c29e6/docs/src/models/basics.md">
<span class="fa">
</span>
@@ -129,7 +129,7 @@ Debugging
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/models/debugging.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c05ef6286aa284f818bb0f3b90d4347efe8c29e6/docs/src/models/debugging.md">
<span class="fa">
</span>
@@ -129,7 +129,7 @@ Recurrence
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/models/recurrent.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c05ef6286aa284f818bb0f3b90d4347efe8c29e6/docs/src/models/recurrent.md">
<span class="fa">
</span>
@@ -145,7 +145,7 @@ Model Templates
</a>
</li>
</ul>
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/e1cd688917d90dd80ff59f926ae24e27e1c7635e/docs/src/models/templates.md">
<a class="edit-page" href="https://github.com/MikeInnes/Flux.jl/tree/c05ef6286aa284f818bb0f3b90d4347efe8c29e6/docs/src/models/templates.md">
<span class="fa">
</span>
@@ -173,7 +173,7 @@ var documenterSearchIndex = {"docs": [
"page": "Batching",
"title": "Future Work",
"category": "section",
"text": "The design of batching is still a fairly early work in progress, though it's used in a few places in the system. For example, all Flux models expect to be given Batch objects which are unwrapped into raw arrays for the computation. Models will convert their arguments if necessary, so it's convenient to call a model with a single data point like f([1,2,3]).Right now, the Batch or Seq types always stack along the left-most dimension. In future, this will be customisable, and Flux will provide implementations of common functions that are generic across the batch dimension. This brings the following benefits:Code can be written in a batch-agnostic way or be generic across batching setups. Code works with a single data point, and batching happens independently.\nAutomatic batching can be done with correctness assured, reducing programmer errors when manipulating dimensions.\nOptimisations, like switching batch dimensions, can be expressed by the programmer with compiler support; fewer code changes are required and optimisations are guaranteed not to break the model.\nThis also opens the door for more automatic optimisations, e.g. having the compiler explore the search space of possible batching combinations."
"text": "The design of batching is still a fairly early work in progress, though it's used in a few places in the system. For example, all Flux models expect to be given Batch objects which are unwrapped into raw arrays for the computation. Models will convert their arguments if necessary, so it's convenient to call a model with a single data point like f([1,2,3]).Right now, the Batch or Seq types always stack along the left-most dimension. In future, this will be customisable, and Flux will provide implementations of common functions that are generic across the batch dimension. This brings the following benefits:Code can be written in a batch-agnostic way or be generic across batching strategies.\nBatching and optimisations, like switching batch dimensions, can be expressed by the programmer with compiler support; fewer code changes are required and optimisations are guaranteed not to break the model.\nThis also opens the door for more automatic optimisations, e.g. having the compiler explore the search space of possible batching combinations.Here's a more detailed illustration of how it might look for code to be \"generic across batching\". Take for example a weight matrix W times a vector x, as used in a logistic regression or a simple neural network: W * x => y\n(10×28) * (28) => (10)If we want to work with a batch of 50 xs, one option is to stack the data into a matrix of size 28 × 50. W * x => y\n(10×28) * (28×50) => (10×50)This works, but we may find that it's slow or doesn't fit well with the rest of the model, which batches on the first dimension. For that reason we may instead want to put the data in a 50 × 28 matrix and alter the code as follows: x * W' => y\n(50×28) * (28×10) => (50×10)to make the shapes work out. This code change is not ideal; in more complex cases it can become fiddly and error-prone, and it means that the code is less reusable, tied to a particular implementation strategy.There's an alternative. We keep the same code, but represent the batched xs as either a Batch{Vector,1} or a Batch{Vector,2}, depending on how the data is stacked. Then we can simply overload * as follows:*(W::Matrix, x::Batch{Vector,1}) = x * W'\n*(W::Matrix, x::Batch{Vector,2}) = W * xThis means that we can always write W*x, and the code is reusable in a larger network regardless of the overall batching approach. Moreover, Julia's type system ensures there's no runtime cost to doing this, and we can compile the code appropriately for backends like TensorFlow as well."
},
{