Commit Graph

153 Commits

Author SHA1 Message Date
cossio feb72d400a NaN 2020-05-07 12:44:32 +02:00
cossio 8314200c51 generic 2020-05-05 19:23:05 +02:00
cossio 06c1e20372 add tests 2020-05-05 19:05:04 +02:00
cossio 480473a81b xlogy 2020-05-05 18:33:50 +02:00
Dhairya Gandhi 5086c0f4f0 merge conflicts 2020-04-29 16:11:39 +05:30
DrChainsaw 1544f84bb9 Fix merge conflicts 2020-04-24 21:56:26 +02:00
Garben Tanghe 746e3310f1 removed Flatten struct; updated documentation 2020-03-08 14:22:03 +01:00
Garben Tanghe 82e16a5b29 split up Flatten layer to use the flatten function 2020-03-08 14:21:59 +01:00
Garben Tanghe 3e14bd878c added GlobalMaxPool, GlobalMeanPool, and Flatten layers 2020-03-08 14:18:48 +01:00
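
For context, a minimal sketch of the `flatten` function the three commits above refactor around, assuming the batch dimension is last; the actual Flux definition may differ in detail:

```julia
# Sketch: collapse every dimension except the trailing batch dimension.
flatten(x) = reshape(x, :, size(x)[end])

x = rand(10, 10, 3, 2)    # width × height × channels × batch
size(flatten(x))          # (300, 2)
```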
bors[bot] af23a5756c Merge #1053
1053: Added Some Loss functions with some doc improvements r=CarloLucibello a=AdarshKumar712

Added the following loss functions with tests:
1. mae
2. mean squared logarithmic error
3. huber loss
4. squared hinge loss
5. dice coeff loss
6. tversky loss 

Also added some documentation improvements for a few other functions.

Co-authored-by: Adarsh Kumar <45385384+AdarshKumar712@users.noreply.github.com>
2020-03-03 23:56:21 +00:00
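
For reference, a minimal sketch of two of the losses listed in this PR, written against plain arrays; the merged Flux implementations may differ in argument order, smoothing terms, and default values:

```julia
# Hedged sketch of dice-coefficient loss with a smoothing constant.
dice_coeff_loss(ŷ, y; smooth = 1) =
    1 - (2 * sum(y .* ŷ) + smooth) / (sum(y .^ 2) + sum(ŷ .^ 2) + smooth)

# Hedged sketch of Tversky loss; β weights false positives against
# false negatives (β = 0.7 favours recall).
function tversky_loss(ŷ, y; β = 0.7)
    num = sum(y .* ŷ)
    den = num + β * sum((1 .- y) .* ŷ) + (1 - β) * sum(y .* (1 .- ŷ))
    return 1 - num / den
end
```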
Adarsh Kumar 92e09e204d Test argument consistency with ŷ and y 2020-03-02 20:33:12 +05:30
Adarsh Kumar 5565250c28 Updated test for tversky 2020-03-02 13:46:33 +05:30
Kyle Daruwalla 4cebf36361 Merge branch 'master' into feature/istraining 2020-03-01 12:32:15 -06:00
Kyle Daruwalla c001d0f3c5 Added trainmode! and updated docs with warning 2020-03-01 12:30:41 -06:00
Adarsh Kumar 57c1b67d08 Merge branch 'master' into patch-1 2020-03-01 11:49:33 +05:30
Kyle Daruwalla 568ecb1c97 Removed trainmode from tests 2020-02-29 16:25:18 -06:00
Kyle Daruwalla 5cbd2cecf2 Changed testmode! to return model 2020-02-29 16:09:59 -06:00
Adarsh Kumar 8afed01345 Apply suggestions from code review
Co-Authored-By: David Lung <lungd@users.noreply.github.com>
2020-02-27 23:23:53 +05:30
Adarsh Kumar 3d8965230f Added tests for dice and Tversky loss 2020-02-27 02:29:39 +05:30
Dhairya Gandhi cd931793ef more docs and constructors 2020-02-26 22:29:14 +05:30
bors[bot] 55616afc11 Merge #960
960: Added utility function outdims to compute output dimensions of a layer r=dhairyagandhi96 a=darsnack

Based on Slack chatter, I added a utility function, `outdims`, that computes the output dimensions for given input dimensions.

Example
```julia
layer = Conv((3, 3), 3 => 16)
outdims(layer, (10, 10)) # returns (8, 8)
```

Co-authored-by: Kyle Daruwalla <daruwalla@wisc.edu>
2020-02-25 17:40:05 +00:00
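
As a hedged follow-up to the example in this PR, `pad` and `stride` feed into the usual output-size formula, floor((n + 2·pad − k) / stride) + 1 per spatial dimension, assuming the `Conv` keyword API of the time:

```julia
using Flux

# Kernel 3, pad 1, stride 2 on a 10×10 input:
# floor((10 + 2 - 3) / 2) + 1 == 5 in each spatial dimension.
layer = Conv((3, 3), 3 => 16; pad = 1, stride = 2)
Flux.outdims(layer, (10, 10)) # (5, 5)
```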
Kyle Daruwalla 7c12af065a Added testmode! functionality back to normalization layers. 2020-02-21 14:35:10 -06:00
Adarsh Kumar 659ba074d1 Updated test for msle 2020-02-06 01:21:51 +05:30
Adarsh Kumar 44a977b7a4 Added tests for new loss functions 2020-02-05 23:20:06 +05:30
bors[bot] d1edd9b16d Merge #680
680: Added new loss functions. r=thebhatman a=thebhatman

I have added the KL divergence, Poisson, log-cosh, and hinge loss functions.

Co-authored-by: Manjunath Bhat <manjunathbhat9920@gmail.com>
Co-authored-by: thebhatman <manjunathbhat9920@gmail.com>
2020-01-13 15:46:25 +00:00
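
For reference, minimal sketches of two of the losses this PR adds; the merged Flux versions may differ in reduction (sum vs. mean) and numerical-stability tricks:

```julia
using Statistics: mean

# Hedged sketch of log-cosh loss.
logcosh_loss(ŷ, y) = mean(log.(cosh.(ŷ .- y)))

# Hedged sketch of hinge loss, assuming targets y ∈ {-1, +1}.
hinge_loss(ŷ, y) = mean(max.(0, 1 .- ŷ .* y))
```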
Manjunath Bhat 747e01ea02 Test to check for spurious promotions 2020-01-13 18:33:30 +05:30
Elliot Saba 0fdcc00923 Give `NNPACK` a bit of numerical leeway 2019-12-23 01:31:26 -08:00
Kyle Daruwalla 0cdd11c0dc Added tests for varying padding, stride, and dilation with outdims. 2019-12-07 14:05:50 -06:00
Kyle Daruwalla 6265b1fa39 Added tests for outdims 2019-12-05 22:54:25 -06:00
DrChainsaw 755536bf5e Merge remote-tracking branch 'upstream/master' into samepad 2019-12-04 23:45:03 +01:00
Dhairya Gandhi ec872bb579 test that bias has no grads with Zeros 2019-11-27 19:45:04 +05:30
Dhairya Gandhi c031ae1a94 correct channel value 2019-11-24 13:31:31 +05:30
Dhairya Gandhi 5f21238d1a no grad dims helper 2019-11-24 13:25:02 +05:30
dsweber2 dea29532ef Merge branch 'master' into activations 2019-11-15 17:19:43 -08:00
Mike Innes e24215ca98 guard test on 1.0 2019-11-15 15:59:42 +00:00
dsweber2 58c794702d simpler test 2019-11-14 14:05:53 -08:00
dsweber2 db92b0e3ce super simple test 2019-11-14 13:40:52 -08:00
DrChainsaw 411ce5dbd8 Add SamePad for pooling layers 2019-10-20 13:43:39 +02:00
DrChainsaw fc123d6279 Add SamePad for conv layers 2019-10-20 13:43:23 +02:00
Dhairya Gandhi 49ea43e711 ZeroType => Zeros 2019-10-08 20:02:04 +05:30
Dhairya Gandhi b596faaffa tests bias switch 2019-10-08 17:18:39 +05:30
Dhairya Gandhi 55ef7c1aba add weight and bias kwargs 2019-10-06 04:25:23 +05:30
Dhairya Gandhi dced8c04e5 use ZeroType 2019-10-01 21:25:07 +05:30
Manjunath Bhat 2b30319a55 Merge branch 'master' into patch-6 2019-09-30 21:05:02 +05:30
thebhatman 6e289ef939 Merge branch 'patch-6' of https://github.com/thebhatman/Flux.jl into patch-6 2019-09-30 20:55:44 +05:30
Dhairya Gandhi 5ea6a33f44 make bias optional 2019-09-27 11:48:12 +05:30
bors[bot] acb6a89245 Merge #865
865: Functor r=MikeInnes a=MikeInnes

This refactors our current `@treelike` infrastructure. It somewhat formalises what we're doing around the idea of a Flux model as a functor, i.e. something that can be mapped over.

This is much more flexible than what we had before, and avoids some issues. It allows layers to have state that isn't mappable; it allows for dispatch when walking the tree, which means layers like `BatchNorm` can have non-trainable parameters; and it also allows for zipped mapping like `fmap(+, xs, ys)`, which isn't implemented yet but will be useful for the new optimisers work.

The main downside is that the term `functor` has been previously used in the Julia community as a malapropism for "thing that behaves like a function"; but hopefully this can start to reduce that usage.

Co-authored-by: Mike Innes <mike.j.innes@gmail.com>
2019-09-24 16:36:10 +00:00
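
To illustrate the interface this PR introduces, a minimal sketch of a custom layer opting into the functor machinery via `@functor` and `fmap`; the `Affine` layer here is hypothetical, used purely for illustration:

```julia
using Flux
using Flux: @functor, fmap

# Hypothetical layer, for illustration only.
struct Affine
    W
    b
end
@functor Affine                    # declares W and b as mappable children

(a::Affine)(x) = a.W * x .+ a.b

m = Affine(randn(3, 2), randn(3))
m32 = fmap(x -> Float32.(x), m)    # walks the tree, mapping over the arrays
```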
Mike Innes b60df53ba1 pkg up 2019-09-19 18:33:33 +01:00
Mike Innes b951377426 fix normalisation layer params 2019-09-19 15:33:24 +01:00
Mike Innes f8d5d3b5fc broken normalisation layer params 2019-09-19 14:12:11 +01:00