From d6a75e1289488945ade1cfa8867717fe1ed19557 Mon Sep 17 00:00:00 2001
From: Mike J Innes
Date: Tue, 26 Jun 2018 14:35:03 +0100
Subject: [PATCH] add `activations` docs

---
 docs/src/models/regularisation.md | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/docs/src/models/regularisation.md b/docs/src/models/regularisation.md
index 70d06348..cd53544f 100644
--- a/docs/src/models/regularisation.md
+++ b/docs/src/models/regularisation.md
@@ -44,3 +44,19 @@ loss(x, y) = crossentropy(m(x), y) + sum(vecnorm, params(m))
 
 loss(rand(28^2), rand(10))
 ```
+
+One can also easily add per-layer regularisation via the `activations` function:
+
+```julia
+julia> c = Chain(Dense(10,5,σ),Dense(5,2),softmax)
+Chain(Dense(10, 5, NNlib.σ), Dense(5, 2), NNlib.softmax)
+
+julia> activations(c, rand(10))
+3-element Array{Any,1}:
+ param([0.71068, 0.831145, 0.751219, 0.227116, 0.553074])
+ param([0.0330606, -0.456104])
+ param([0.61991, 0.38009])
+
+julia> sum(vecnorm, ans)
+2.639678767773633 (tracked)
+```
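
Note (not part of the patch above): a minimal sketch of how the per-layer penalty shown in the added docs might be folded into a training loss, assuming the same Julia/Flux versions as the diff (`vecnorm` is the Julia 0.6-era name for `norm`, and `activations`/`crossentropy` are assumed accessible from `Flux` as in the doc example); the model `c` and the target shape are illustrative only.

```julia
using Flux
using Flux: crossentropy, activations  # assumed accessible, as in the doc example above

# Illustrative model, mirroring the Chain used in the patch.
c = Chain(Dense(10, 5, σ), Dense(5, 2), softmax)

# Per-layer penalty: sum the norm of every intermediate activation,
# not just the final output.
penalty(x) = sum(vecnorm, activations(c, x))

# Data term plus the activation penalty.
loss(x, y) = crossentropy(c(x), y) + penalty(x)

loss(rand(10), [0.0, 1.0])
```

Compared with the `sum(vecnorm, params(m))` term earlier in the same file, this penalises the activations produced at every layer rather than the weights.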