NNlib
Flux re-exports all of the functions exported by the NNlib package.
Activation Functions
Non-linearities that go between layers of your model. Note that, unless otherwise stated, activation functions operate on scalars. To apply them to an array you can call σ.(xs), relu.(xs) and so on.
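For example, broadcasting relu over a vector applies it elementwise (a minimal sketch; the values follow directly from relu(x) = max(0, x)):
julia> relu.([-1.0, 0.0, 2.0])
3-element Array{Float64,1}:
0.0
0.0
2.0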
NNlib.elu — Function
elu(x, α = 1) = x > 0 ? x : α * (exp(x) - 1)
Exponential Linear Unit activation function. See Fast and Accurate Deep Network Learning by Exponential Linear Units. You can also specify the coefficient explicitly, e.g. elu(x, 1).
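A quick sketch of the default and an explicit coefficient (the numbers follow from the formula above):
julia> elu(-1.0)
-0.6321205588285577
julia> elu(-1.0, 2.0)
-1.2642411176571153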
NNlib.gelu — Function
gelu(x) = 0.5x*(1 + tanh(√(2/π)*(x + 0.044715x^3)))
Gaussian Error Linear Unit activation function.
NNlib.leakyrelu — Function
leakyrelu(x) = max(0.01x, x)
Leaky Rectified Linear Unit activation function. You can also specify the coefficient explicitly, e.g. leakyrelu(x, 0.01).
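For example, with the default coefficient and an explicit one (a minimal sketch):
julia> leakyrelu(-2.0)
-0.02
julia> leakyrelu(-2.0, 0.1)
-0.2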
NNlib.logcosh — Function
logcosh(x)
Return log(cosh(x)) which is computed in a numerically stable way.
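To see why stability matters, compare the naive formula at a large argument (a minimal sketch; for large x, log(cosh(x)) ≈ x - log(2)):
julia> log(cosh(1000.0))                 # cosh overflows to Inf
Inf
julia> logcosh(1000.0) ≈ 1000 - log(2)   # stable evaluation avoids the overflow
true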
NNlib.logsigmoid — Function
logσ(x)
Return log(σ(x)) which is computed in a numerically stable way.
julia> logσ(0)
-0.6931471805599453
julia> logσ.([-100, -10, 100])
3-element Array{Float64,1}:
-100.0
-10.000045398899218
-3.720075976020836e-44
NNlib.sigmoid — Function
σ(x) = 1 / (1 + exp(-x))
Classic sigmoid activation function.
NNlib.relu — Function
relu(x) = max(0, x)
Rectified Linear Unit activation function.
NNlib.selu — Function
selu(x) = λ * (x ≥ 0 ? x : α * (exp(x) - 1))
λ ≈ 1.0507
α ≈ 1.6733
Scaled exponential linear units. See Self-Normalizing Neural Networks.
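As a quick check of the two branches (a minimal sketch, using the paper's constants quoted to more digits than above):
julia> selu(3.0) ≈ 1.0507009873554805 * 3                        # x ≥ 0: just λ * x
true
julia> selu(-50.0) ≈ -1.0507009873554805 * 1.6732632423543773    # large negative x saturates near -λ*α
true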
NNlib.softplus — Function
softplus(x) = log(exp(x) + 1)
NNlib.softsign — Function
softsign(x) = x / (1 + |x|)
NNlib.swish — Function
swish(x) = x * σ(x)
Self-gated activation function. See Swish: a Self-Gated Activation Function.
Softmax
NNlib.softmax — Function
softmax(xs) = exp.(xs) ./ sum(exp.(xs))
Softmax takes log-probabilities (any real vector) and returns a probability distribution that sums to 1.
If given a matrix it will by default (dims=1) treat it as a batch of vectors, with each column independent. Keyword dims=2 will instead treat rows independently, etc.
julia> softmax([1,2,3.])
3-element Array{Float64,1}:
0.0900306
0.244728
0.665241
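A sketch of the matrix case, where each column is normalised independently under the default dims=1:
julia> softmax([1 2; 3 4.])
2×2 Array{Float64,2}:
 0.119203  0.119203
 0.880797  0.880797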
NNlib.logsoftmax — Function
logsoftmax(xs) = log.(exp.(xs) ./ sum(exp.(xs)))
Computes the log of softmax in a more numerically stable way than directly taking log.(softmax(xs)). Commonly used in computing cross entropy loss.
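For instance, with one very large entry the small entries of log.(softmax(xs)) underflow to -Inf, while logsoftmax keeps their correct finite values (a minimal sketch):
julia> logsoftmax([1.0, 2.0, 3000.0])
3-element Array{Float64,1}:
-2999.0
-2998.0
0.0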
Pooling
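NNlib.maxpool and NNlib.meanpool slide a window over the spatial dimensions of an array and take the maximum or the mean of each window. A rough sketch, assuming the common maxpool(x, k) form where x is laid out as (spatial..., channels, batch) and k is the window size (see the NNlib docstrings for the exact signatures and keywords):
julia> x = reshape(Float64.(1:16), 4, 4, 1, 1);  # one 4×4 "image", 1 channel, batch of 1
julia> maxpool(x, (2, 2))[:, :, 1, 1]            # non-overlapping 2×2 windows
2×2 Array{Float64,2}:
 6.0  14.0
 8.0  16.0
julia> meanpool(x, (2, 2))[:, :, 1, 1]
2×2 Array{Float64,2}:
 3.5  11.5
 5.5  13.5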
Convolution
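NNlib.conv and NNlib.depthwiseconv convolve a filter over the spatial dimensions of an array. A minimal sketch of conv, assuming the common conv(x, w) form with the input laid out as (spatial..., in_channels, batch) and the filter as (spatial..., in_channels, out_channels); see the NNlib docstrings for the exact signatures and keywords such as stride and pad:
julia> x = reshape(Float64.(1:16), 4, 4, 1, 1);  # one 4×4 input, 1 channel, batch of 1
julia> w = ones(3, 3, 1, 1);                     # a single 3×3 summing filter
julia> conv(x, w)[:, :, 1, 1]                    # "valid" convolution, so the output is 2×2
2×2 Array{Float64,2}:
 54.0  90.0
 63.0  99.0
depthwiseconv works the same way but applies a separate filter to each input channel; its filter is typically laid out as (spatial..., channel_multiplier, in_channels).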