From 2f05094068067ee2738adbe2e6e455909adfff0d Mon Sep 17 00:00:00 2001
From: Adarsh Kumar <45385384+AdarshKumar712@users.noreply.github.com>
Date: Mon, 2 Mar 2020 20:00:47 +0530
Subject: [PATCH] Added consistency with ŷ and unicode chars
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 src/layers/stateless.jl | 58 ++++++++++++++++++++++-------------------
 1 file changed, 31 insertions(+), 27 deletions(-)

diff --git a/src/layers/stateless.jl b/src/layers/stateless.jl
index 01b26a8a..5f457057 100644
--- a/src/layers/stateless.jl
+++ b/src/layers/stateless.jl
@@ -2,7 +2,7 @@
 """
     mae(ŷ, y)
 
-Return the mean of absolute error `sum(abs.(ŷ .- y)) * 1 / length(y)`
+Return the mean absolute error `sum(abs.(ŷ .- y)) / length(y)`
 """
 mae(ŷ, y) = sum(abs.(ŷ .- y)) * 1 // length(y)
 
@@ -16,23 +16,25 @@ mse(ŷ, y) = sum((ŷ .- y).^2) * 1 // length(y)
 
 
 """
-    msle(ŷ, y; ϵ1=eps.(Float64.(ŷ)))
+    msle(ŷ, y; ϵ = eps.(Float64.(ŷ)))
 
-Mean Squared Logarithmic Error. Returns the mean of the squared logarithmic errors `sum((log.(ŷ+ϵ1) .- log.(y+ϵ2)).^2) * 1 / length(y)`.
-The `ϵ` term provides numerical stability. This error penalizes an under-predicted estimate greater than an over-predicted estimate.
+Returns the mean of the squared logarithmic errors `sum((log.(ŷ + ϵ) .- log.(y + ϵ)).^2) / length(y)`.
+The `ϵ` term provides numerical stability.
+
+This error penalizes an under-predicted estimate more than an over-predicted estimate.
 """
-msle(ŷ, y; ϵ=eps.(ŷ)) = sum((log.(ŷ+ϵ).-log.(y+ϵ)).^2) * 1 // length(y)
+msle(ŷ, y; ϵ = eps.(ŷ)) = sum((log.(ŷ + ϵ) .- log.(y + ϵ)).^2) * 1 // length(y)
 
 """
-    huber_loss(ŷ, y; delta=1.0)
+    huber_loss(ŷ, y; delta = 1.0)
 
 Computes the mean of the Huber loss given the prediction `ŷ` and true values `y`. By default, delta is set to 1.0.
 
-    | 0.5*|(ŷ-y)|, for |ŷ-y|<=delta
-Hubber loss = |
-    | delta*(|ŷ-y| - 0.5*delta), otherwise
+    | 0.5*(ŷ - y)^2, for |ŷ - y| <= delta
+Huber loss = |
+    | delta*(|ŷ - y| - 0.5*delta), otherwise
 
 [`Huber Loss`](https://en.wikipedia.org/wiki/Huber_loss).
 """
@@ -151,6 +153,7 @@ end
     poisson(ŷ, y)
 
 Poisson loss function is a measure of how the predicted distribution diverges from the expected distribution.
+Returns `sum(ŷ .- y .* log.(ŷ)) / size(y, 2)`
 
 [Poisson Loss](https://peltarion.com/knowledge-center/documentation/modeling-view/build-an-ai-model/loss-functions/poisson).
 """
@@ -160,48 +163,49 @@ poisson(ŷ, y) = sum(ŷ .- y .* log.(ŷ)) *1 // size(y,2)
     hinge(ŷ, y)
 
 Measures the loss given the prediction `ŷ` and true labels `y` (containing 1 or -1).
-Returns `sum((max.(0,1 .-ŷ .* y))) *1 // size(y, 2)`
+Returns `sum((max.(0, 1 .- ŷ .* y))) / size(y, 2)`
 
 [Hinge Loss](https://en.wikipedia.org/wiki/Hinge_loss)
 See also [`squared_hinge`](@ref).
 """
-hinge(ŷ, y) = sum(max.(0, 1 .- ŷ .* y)) *1 // size(y,2)
+hinge(ŷ, y) = sum(max.(0, 1 .- ŷ .* y)) * 1 // size(y, 2)
 
 """
     squared_hinge(ŷ, y)
 
-Computes squared hinge loss given the prediction `ŷ` and true labels `y` (conatining 1 or -1).
-Returns `sum((max.(0,1 .-ŷ .* y)).^2) *1 // size(y, 2)`
+Computes squared hinge loss given the prediction `ŷ` and true labels `y` (containing 1 or -1).
+Returns `sum((max.(0, 1 .- ŷ .* y)).^2) / size(y, 2)`
 
 See also [`hinge`](@ref).
 """
-squared_hinge(ŷ, y) = sum((max.(0,1 .-ŷ .* y)).^2) *1//size(y,2)
+squared_hinge(ŷ, y) = sum((max.(0, 1 .- ŷ .* y)).^2) * 1 // size(y, 2)
 
 """
-    dice_coeff_loss(y_pred, y_true, smooth = 1)
+    dice_coeff_loss(ŷ, y, smooth = 1)
 
 Loss function used in Image Segmentation. Calculates loss based on dice coefficient. Similar to F1_score.
 
-    Dice_Coefficient(A,B) = 2 * sum( |A*B| + smooth) / (sum( A^2 ) + sum( B^2 )+ smooth)
+    Dice_Coefficient(ŷ, y) = 2 * sum(|ŷ .* y| + smooth) / (sum(ŷ.^2) + sum(y.^2) + smooth)
     Dice_loss = 1 - Dice_Coefficient
 
-Ref: [V-Net: Fully Convolutional Neural Networks forVolumetric Medical Image Segmentation](https://arxiv.org/pdf/1606.04797v1.pdf)
+[V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation](https://arxiv.org/pdf/1606.04797v1.pdf)
 """
-function dice_coeff_loss(y_pred, y_true; smooth=eltype(y_pred)(1.0))
-    intersection = sum(y_true.*y_pred)
-    return 1 - (2*intersection + smooth)/(sum(y_true.^2) + sum(y_pred.^2)+smooth)
+function dice_coeff_loss(ŷ, y; smooth = eltype(ŷ)(1.0))
+    intersection = sum(y .* ŷ)
+    return 1 - (2*intersection + smooth) / (sum(y.^2) + sum(ŷ.^2) + smooth)
 end
 
 """
-    tversky_loss(y_pred, y_true, beta = 0.7)
+    tversky_loss(ŷ, y, β = 0.7)
 
-Used with imbalanced data to give more weightage to False negatives. Larger β weigh recall higher than precision (by placing more emphasis on false negatives)
+Used with imbalanced data to give more weight to false negatives.
+A larger β weighs recall higher than precision (by placing more emphasis on false negatives).
 
-    tversky_loss(ŷ,y,beta) = 1 - sum(|y.*ŷ| + 1) / (sum(y.*ŷ + beta*(1 .- y).*ŷ + (1 .- beta)*y.*(1 .- ŷ))+ 1)
+    tversky_loss(ŷ, y, β) = 1 - sum(|y .* ŷ| + 1) / (sum(y .* ŷ + β*(1 .- y) .* ŷ + (1 - β)*y .* (1 .- ŷ)) + 1)
 
-Ref: [Tversky loss function for image segmentation using 3D fully convolutional deep networks](https://arxiv.org/pdf/1706.05721.pdf)
+[Tversky loss function for image segmentation using 3D fully convolutional deep networks](https://arxiv.org/pdf/1706.05721.pdf)
 """
-function tversky_loss(y_pred, y_true; beta = eltype(y_pred)(0.7))
-    intersection = sum(y_true.*y_pred)
-    return 1 - (intersection+1)/(sum(y_true.*y_pred + beta*(1 .- y_true).* y_pred + (1-beta).*y_true.*(1 .- y_pred))+1)
+function tversky_loss(ŷ, y; β = eltype(ŷ)(0.7))
+    intersection = sum(y .* ŷ)
+    return 1 - (intersection + 1) / (sum(y .* ŷ + β*(1 .- y) .* ŷ + (1 - β)*y .* (1 .- ŷ)) + 1)
 end
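
Note on the Huber docstring above: a minimal standalone sketch of the corrected piecewise formula, for readers who want to verify it. `huber_loss_sketch` and the sample vectors are illustrative only, not part of this patch or of Flux's implementation:

    # Piecewise Huber formula from the docstring: quadratic for small errors,
    # linear past `delta`. Illustrative sketch, not Flux's implementation.
    function huber_loss_sketch(ŷ, y; delta = 1.0)
        err = abs.(ŷ .- y)
        branch = ifelse.(err .<= delta, 0.5 .* err.^2, delta .* (err .- 0.5*delta))
        return sum(branch) / length(y)
    end

    huber_loss_sketch([0.1, 2.0], [0.0, 0.0])  # (0.5*0.01 + 1.0*(2.0 - 0.5)) / 2 = 0.7525

The small error (0.1) lands on the squared branch while the large one (2.0) is only penalized linearly, which is what makes the loss robust to outliers.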
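Likewise, the segmentation losses on the (+) side can be sanity-checked standalone; the definitions below are copied from the patch, and the mask values are made up for illustration:

    # Definitions as added by this patch, runnable on their own.
    function dice_coeff_loss(ŷ, y; smooth = eltype(ŷ)(1.0))
        intersection = sum(y .* ŷ)
        return 1 - (2*intersection + smooth) / (sum(y.^2) + sum(ŷ.^2) + smooth)
    end

    function tversky_loss(ŷ, y; β = eltype(ŷ)(0.7))
        intersection = sum(y .* ŷ)
        return 1 - (intersection + 1) /
                   (sum(y .* ŷ + β*(1 .- y) .* ŷ + (1 - β)*y .* (1 .- ŷ)) + 1)
    end

    y = [1.0, 1.0, 0.0, 0.0]  # ground-truth binary mask (example values)
    ŷ = [0.9, 0.4, 0.1, 0.2]  # predicted mask

    dice_coeff_loss(ŷ, y)  # ≈ 0.104 for this imperfect prediction
    dice_coeff_loss(y, y)  # 0.0 exactly for a perfect overlap
    tversky_loss(ŷ, y)     # ≈ 0.154 with the default β = 0.7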