add activations docs
parent 0a04e3ba61
commit d6a75e1289
@@ -44,3 +44,19 @@ loss(x, y) = crossentropy(m(x), y) + sum(vecnorm, params(m))
loss(rand(28^2), rand(10))
```

One can also easily add per-layer regularisation via the `activations` function:
|
||||||
|
|
||||||
|
```julia
|
||||||
|
julia> c = Chain(Dense(10,5,σ),Dense(5,2),softmax)
|
||||||
|
Chain(Dense(10, 5, NNlib.σ), Dense(5, 2), NNlib.softmax)
|
||||||
|
|
||||||
|
julia> activations(c, rand(10))
|
||||||
|
3-element Array{Any,1}:
|
||||||
|
param([0.71068, 0.831145, 0.751219, 0.227116, 0.553074])
|
||||||
|
param([0.0330606, -0.456104])
|
||||||
|
param([0.61991, 0.38009])
|
||||||
|
|
||||||
|
julia> sum(vecnorm, ans)
|
||||||
|
2.639678767773633 (tracked)
|
||||||
|
```
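
This drops into a loss the same way as the `params` penalty above. The following is a minimal sketch: the model `m`, its layer sizes, and the dummy inputs are illustrative stand-ins, and `crossentropy`, `activations`, and `vecnorm` are used exactly as in the examples above.

```julia
using Flux

# Illustrative stand-in model; any Chain works the same way.
m = Chain(Dense(10, 5, σ), Dense(5, 2), softmax)

# Data loss plus a penalty on the norm of every layer's output.
loss(x, y) = crossentropy(m(x), y) + sum(vecnorm, activations(m, x))

loss(rand(10), [0.0, 1.0])
```

Compared with `sum(vecnorm, params(m))`, this penalises what each layer outputs rather than its weights.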