1053: Added some loss functions with some doc improvements r=CarloLucibello a=AdarshKumar712
Added the following loss functions with tests:
1. mean absolute error (mae)
2. mean squared logarithmic error
3. Huber loss
4. squared hinge loss
5. Dice coefficient loss
6. Tversky loss
Also added some documentation improvements for a few other functions.
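For reference, here is a minimal sketch of two of these losses written from their standard definitions; the exact names and signatures added by the PR may differ:
```julia
using Statistics

# Mean absolute error: average of the absolute residuals.
mae(ŷ, y) = mean(abs.(ŷ .- y))

# Huber loss: quadratic for small residuals, linear in the tails,
# with the switch-over point controlled by δ.
function huber_loss(ŷ, y; δ=1.0)
    err = abs.(ŷ .- y)
    mean(ifelse.(err .<= δ, 0.5 .* err .^ 2, δ .* (err .- 0.5δ)))
end
```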
Co-authored-by: Adarsh Kumar <45385384+AdarshKumar712@users.noreply.github.com>
960: Added utility function outdims to compute output dimensions of a layer r=dhairyagandhi96 a=darsnack
Based on Slack chatter, I added a utility function, `outdims`, that computes a layer's output dimensions for given input dimensions.
Example
```julia
layer = Conv((3, 3), 3 => 16)
outdims(layer, (10, 10)) # returns (8, 8)
```
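One natural use, sketched here under the assumption that `outdims` composes per layer as in the example above, is sizing a downstream fully connected layer without running a forward pass:
```julia
using Flux

conv = Conv((3, 3), 3 => 16)
h, w = outdims(conv, (28, 28))   # spatial output size of the conv layer
dense = Dense(h * w * 16, 10)    # flatten the 16 feature maps into a classifier
```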
Co-authored-by: Kyle Daruwalla <daruwalla@wisc.edu>
680: Added new loss functions. r=thebhatman a=thebhatman
I have added the KL divergence loss, Poisson loss, log-cosh loss, and hinge loss functions.
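As a rough illustration, two of these losses written out from their textbook definitions (illustrative only; the names and argument conventions in the PR may differ):
```julia
using Statistics

# Log-cosh loss: a smooth approximation to mean absolute error.
logcosh(ŷ, y) = mean(log.(cosh.(ŷ .- y)))

# Hinge loss for labels y ∈ {-1, +1}: zero once the margin exceeds 1.
hinge(ŷ, y) = mean(max.(0, 1 .- ŷ .* y))
```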
Co-authored-by: Manjunath Bhat <manjunathbhat9920@gmail.com>
Co-authored-by: thebhatman <manjunathbhat9920@gmail.com>
865: Functor r=MikeInnes a=MikeInnes
This refactors our current `@treelike` infrastructure. It somewhat formalises what we're doing around the idea of a Flux model as a functor, i.e. something that can be mapped over.
This is much more flexible than what we had before, and avoids some issues. It allows layers to have state that isn't mappable; it allows for dispatch when walking the tree, which means layers like `BatchNorm` can have non-trainable parameters; and it also allows for zipped mapping like `fmap(+, xs, ys)`, which isn't implemented yet but will be useful for the new optimisers work.
The main downside is that the term `functor` has previously been used in the Julia community as a malapropism for "thing that behaves like a function"; but hopefully this can start to reduce that usage.
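To make the idea concrete, here is a minimal sketch of the functor pattern described above; it assumes the `@functor` macro and `fmap` from this refactor, and the `Affine` layer is purely illustrative:
```julia
using Flux
using Flux: @functor, fmap

# A toy layer; @functor declares its fields as mappable children.
struct Affine{W, B}
    weight::W
    bias::B
end
@functor Affine

a = Affine(rand(3, 3), rand(3))
fmap(x -> 2x, a)   # a new Affine with every parameter array doubled
```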
Co-authored-by: Mike Innes <mike.j.innes@gmail.com>