diff --git a/latest/models/basics.html b/latest/models/basics.html
index b1d4a22f..45c898f2 100644
--- a/latest/models/basics.html
+++ b/latest/models/basics.html
@@ -340,7 +340,68 @@ The above code is almost exactly how
Affine
is defined in Flux itself! There's no difference between "library-level" and "user-level" models, so making your code reusable doesn't involve a lot of extra complexity. Moreover, much more complex models than
Affine
- are equally simple to define, and equally close to the mathematical notation; read on to find out how.
+ are equally simple to define.
+
+Sub-Templates
+
+@net models can contain sub-models as well as just array parameters:
+
+@net type TLP
+ first
+ second
+ function (x)
+ l1 = σ(first(x))
+ l2 = softmax(second(l1))
+ end
+end
+
+Just as above, this is roughly equivalent to writing:
+
+type TLP
+ first
+ second
+end
+
+function (self::TLP)(x)
+  l1 = σ(self.first(x))
+  l2 = softmax(self.second(l1))
+end
+
+Clearly, the first and second parameters are not arrays here; rather, they should themselves be models, which produce a result when called with an input array x. The Affine layer fits the bill, so we can instantiate TLP with two of them:
+
+model = TLP(Affine(10, 20),
+ Affine(20, 15))
+x1 = rand(10)
+model(x1) # [0.057852,0.0409741,0.0609625,0.0575354 ...
+
+You may recognise this as being equivalent to
+
+Chain(
+    Affine(10, 20), σ,
+    Affine(20, 15), softmax)
+
+given that it's just a sequence of calls. For simple networks Chain is completely fine, although the @net version is more powerful: we can, for example, reuse the output l1 more than once.
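+
+For instance, here is a minimal sketch of that idea (the TwoUse name and the layer sizes are hypothetical, not from Flux itself), feeding l1 both into the next layer and into a later sum:
+
+@net type TwoUse
+  first
+  second
+  third
+  function (x)
+    l1 = σ(first(x))         # l1 is computed once...
+    l2 = σ(second(l1))       # ...used here as an input...
+    softmax(third(l1 + l2))  # ...and reused here in the sum
+  end
+end
+
+m = TwoUse(Affine(10, 20), Affine(20, 20), Affine(20, 15))
+m(rand(10)) # 15-element output vector
+
+Both uses of l1 share a single evaluation of first(x), something a plain Chain of layers cannot express.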
diff --git a/latest/search_index.js b/latest/search_index.js
index e636a580..39fc8930 100644
--- a/latest/search_index.js
+++ b/latest/search_index.js
@@ -69,7 +69,15 @@ var documenterSearchIndex = {"docs": [
"page": "Model Building Basics",
"title": "The Template",
"category": "section",
- "text": "... Calculating Tax Expenses ...So how does the Affine template work? We don't want to duplicate the code above whenever we need more than one affine layer:W₁, b₁ = randn(...)\naffine₁(x) = W₁*x + b₁\nW₂, b₂ = randn(...)\naffine₂(x) = W₂*x + b₂\nmodel = Chain(affine₁, affine₂)Here's one way we could solve this: just keep the parameters in a Julia type, and define how that type acts as a function:type MyAffine\n W\n b\nend\n\n# Use the `MyAffine` layer as a model\n(l::MyAffine)(x) = l.W * x + l.b\n\n# Convenience constructor\nMyAffine(in::Integer, out::Integer) =\n MyAffine(randn(out, in), randn(out))\n\nmodel = Chain(MyAffine(5, 5), MyAffine(5, 5))\n\nmodel(x1) # [-1.54458,0.492025,0.88687,1.93834,-4.70062]This is much better: we can now make as many affine layers as we want. This is a very common pattern, so to make it more convenient we can use the @net macro:@net type MyAffine\n W\n b\n x -> W * x + b\nendThe function provided, x -> W * x + b, will be used when MyAffine is used as a model; it's just a shorter way of defining the (::MyAffine)(x) method above.However, @net does not simply save us some keystrokes; it's the secret sauce that makes everything else in Flux go. For example, it analyses the code for the forward function so that it can differentiate it or convert it to a TensorFlow graph.The above code is almost exactly how Affine is defined in Flux itself! There's no difference between \"library-level\" and \"user-level\" models, so making your code reusable doesn't involve a lot of extra complexity. Moreover, much more complex models than Affine are equally simple to define, and equally close to the mathematical notation; read on to find out how."
+ "text": "... Calculating Tax Expenses ...So how does the Affine template work? We don't want to duplicate the code above whenever we need more than one affine layer:W₁, b₁ = randn(...)\naffine₁(x) = W₁*x + b₁\nW₂, b₂ = randn(...)\naffine₂(x) = W₂*x + b₂\nmodel = Chain(affine₁, affine₂)Here's one way we could solve this: just keep the parameters in a Julia type, and define how that type acts as a function:type MyAffine\n W\n b\nend\n\n# Use the `MyAffine` layer as a model\n(l::MyAffine)(x) = l.W * x + l.b\n\n# Convenience constructor\nMyAffine(in::Integer, out::Integer) =\n MyAffine(randn(out, in), randn(out))\n\nmodel = Chain(MyAffine(5, 5), MyAffine(5, 5))\n\nmodel(x1) # [-1.54458,0.492025,0.88687,1.93834,-4.70062]This is much better: we can now make as many affine layers as we want. This is a very common pattern, so to make it more convenient we can use the @net macro:@net type MyAffine\n W\n b\n x -> W * x + b\nendThe function provided, x -> W * x + b, will be used when MyAffine is used as a model; it's just a shorter way of defining the (::MyAffine)(x) method above.However, @net does not simply save us some keystrokes; it's the secret sauce that makes everything else in Flux go. For example, it analyses the code for the forward function so that it can differentiate it or convert it to a TensorFlow graph.The above code is almost exactly how Affine is defined in Flux itself! There's no difference between \"library-level\" and \"user-level\" models, so making your code reusable doesn't involve a lot of extra complexity. Moreover, much more complex models than Affine are equally simple to define."
+},
+
+{
+ "location": "models/basics.html#Sub-Templates-1",
+ "page": "Model Building Basics",
+ "title": "Sub-Templates",
+ "category": "section",
+ "text": "@net models can contain sub-models as well as just array parameters:@net type TLP\n first\n second\n function (x)\n l1 = σ(first(x))\n l2 = softmax(second(l1))\n end\nendJust as above, this is roughly equivalent to writing:type TLP\n first\n second\nend\n\nfunction (self::TLP)(x)\n l1 = σ(self.first)\n l2 = softmax(self.second(l1))\nendClearly, the first and second parameters are not arrays here, but should be models themselves, and produce a result when called with an input array x. The Affine layer fits the bill so we can instantiate TLP with two of them:model = TLP(Affine(10, 20),\n Affine(20, 15))\nx1 = rand(20)\nmodel(x1) # [0.057852,0.0409741,0.0609625,0.0575354 ...You may recognise this as being equivalent toChain(\n Affine(10, 20), σ\n Affine(20, 15)), softmaxgiven that it's just a sequence of calls. For simple networks Chain is completely fine, although the @net version is more powerful as we can (for example) reuse the output l1 more than once."
},
{