NNlib

Flux re-exports all of the functions exported by the NNlib package.

Activation Functions

Non-linearities that go between layers of your model. Note that, unless otherwise stated, activation functions operate on scalars. To apply them to an array you can call σ.(xs), relu.(xs) and so on.
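
For example, to apply an activation elementwise, broadcast it over the array (output shown for a Float64 vector; exact printing may differ between Julia versions):

julia> x = [-1.0, 0.0, 2.0];

julia> relu.(x)
3-element Array{Float64,1}:
 0.0
 0.0
 2.0

julia> σ.(x)
3-element Array{Float64,1}:
 0.2689414213699951
 0.5
 0.8807970779778823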

NNlib.elu (Function)
elu(x, α = 1) =
  x > 0 ? x : α * (exp(x) - 1)

Exponential Linear Unit activation function. See Fast and Accurate Deep Network Learning by Exponential Linear Units. You can also specify the coefficient explicitly, e.g. elu(x, 1).
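
A quick sanity check of the default and an explicit coefficient (approximate values in the comments; exact digits depend on floating-point rounding):

using NNlib  # elu is also re-exported by Flux

elu(-1.0)        # ≈ -0.632, i.e. exp(-1) - 1 with the default α = 1
elu(-1.0, 0.5)   # ≈ -0.316, i.e. 0.5 * (exp(-1) - 1)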

NNlib.gelu (Function)
gelu(x) = 0.5x*(1 + tanh(√(2/π)*(x + 0.044715x^3)))

Gaussian Error Linear Unit activation function.

NNlib.leakyrelu (Function)
leakyrelu(x) = max(0.01x, x)

Leaky Rectified Linear Unit activation function. You can also specify the coefficient explicitly, e.g. leakyrelu(x, 0.01).

NNlib.logcosh (Function)
logcosh(x)

Return log(cosh(x)), computed in a numerically stable way.
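
For large |x|, cosh(x) overflows even though log(cosh(x)) ≈ |x| - log(2) is perfectly representable; the stable form avoids the overflow. A rough sketch (approximate values in the comments):

using NNlib

log(cosh(1000.0))   # Inf, because cosh(1000.0) overflows to Inf
logcosh(1000.0)     # ≈ 999.307, i.e. 1000 - log(2), computed without overflow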

NNlib.logsigmoid (Function)
logσ(x)

Return log(σ(x)), computed in a numerically stable way.

julia> logσ(0)
-0.6931471805599453

julia> logσ.([-100, -10, 100])
3-element Array{Float64,1}:
 -100.0
  -10.000045398899218
   -3.720075976020836e-44

NNlib.sigmoid (Function)
σ(x) = 1 / (1 + exp(-x))

Classic sigmoid activation function.

NNlib.relu (Function)
relu(x) = max(0, x)

Rectified Linear Unit activation function.

NNlib.selu (Function)
selu(x) = λ * (x ≥ 0 ? x : α * (exp(x) - 1))

λ ≈ 1.0507
α ≈ 1.6733

Scaled exponential linear units. See Self-Normalizing Neural Networks.

NNlib.softplus (Function)
softplus(x) = log(exp(x) + 1)

See Deep Sparse Rectifier Neural Networks.

NNlib.softsign (Function)
softsign(x) = x / (1 + |x|)

See Quadratic Polynomials Learn Better Image Features.

NNlib.swish (Function)
swish(x) = x * σ(x)

Self-gated activation function. See Swish: a Self-Gated Activation Function.

Softmax

NNlib.softmax (Function)
softmax(xs) = exp.(xs) ./ sum(exp.(xs))

Softmax takes log-probabilities (any real vector) and returns a probability distribution that sums to 1.

If given a matrix it will by default (dims=1) treat it as a batch of vectors, with each column independent. Keyword dims=2 will instead treat rows independently, etc.

julia> softmax([1,2,3.])
3-element Array{Float64,1}:
  0.0900306
  0.244728
  0.665241
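
For a matrix, the dims keyword selects which slices are normalized (printed output may differ slightly between Julia versions):

julia> xs = [1.0 2.0; 3.0 4.0];

julia> softmax(xs)            # default dims=1: each column sums to 1
2×2 Array{Float64,2}:
 0.119203  0.119203
 0.880797  0.880797

julia> softmax(xs; dims=2)    # each row sums to 1
2×2 Array{Float64,2}:
 0.268941  0.731059
 0.268941  0.731059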

NNlib.logsoftmax (Function)
logsoftmax(xs) = log.(exp.(xs) ./ sum(exp.(xs)))

Computes the log of softmax in a more numerically stable way than directly taking log.(softmax(xs)). Commonly used in computing cross entropy loss.
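
For instance, when one entry is much larger than the rest, the probabilities of the other entries underflow to zero, so log.(softmax(xs)) cannot recover them, while logsoftmax keeps them finite (printing may vary by Julia version):

julia> logsoftmax([1.0, 2.0, 3000.0])
3-element Array{Float64,1}:
 -2999.0
 -2998.0
     0.0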

Pooling

The docstrings for NNlib.maxpool and NNlib.meanpool are missing from this build of the documentation.
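
As a rough sketch of typical usage, assuming the convenience methods maxpool(x, k) and meanpool(x, k) that take a window-size tuple and an array in WHCN (width × height × channels × batch) layout; printed output may differ:

julia> using NNlib

julia> x = reshape(Float64.(1:16), 4, 4, 1, 1);   # 4×4 input, 1 channel, batch of 1

julia> maxpool(x, (2, 2))                         # 2×2 window; stride defaults to the window size
2×2×1×1 Array{Float64,4}:
[:, :, 1, 1] =
 6.0  14.0
 8.0  16.0

julia> meanpool(x, (2, 2))
2×2×1×1 Array{Float64,4}:
[:, :, 1, 1] =
 3.5  11.5
 5.5  13.5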

Convolution

The docstrings for NNlib.conv and NNlib.depthwiseconv are missing from this build of the documentation.
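
As a rough sketch of typical usage, assuming the conv(x, w) method on 4D arrays, with the input in WHCN layout and the filter sized kernel width × kernel height × input channels × output channels (conv performs a true convolution, i.e. the kernel is flipped, which makes no difference for the symmetric all-ones filter below; printed output may differ):

julia> using NNlib

julia> x = reshape(Float64.(1:16), 4, 4, 1, 1);   # 4×4 input, 1 channel, batch of 1

julia> w = ones(3, 3, 1, 1);                      # 3×3 all-ones filter

julia> conv(x, w)                                 # "valid" convolution: each output entry is the sum of a 3×3 window
2×2×1×1 Array{Float64,4}:
[:, :, 1, 1] =
 54.0  90.0
 63.0  99.0

Keyword arguments such as stride, pad and dilation control how the window moves, and depthwiseconv uses the same input layout but convolves each input channel separately.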