Commit Graph

579 Commits

Author SHA1 Message Date
Fredrik Bagge Carlson
304b433daa Add RADAM to tests 2019-08-19 13:01:14 +08:00
thebhatman
a128a7718d gradients test updated in cudnn 2019-07-16 17:27:35 +05:30
Manjunath Bhat
4ef5ec0005 brackets corrected 2019-07-12 21:03:57 +05:30
thebhatman
8d6028e27a tests with gradients 2019-07-12 20:47:43 +05:30
Mike Innes
e2bf46b7fd gpu test fixes 2019-07-12 14:52:01 +01:00
Manjunath Bhat
2b379d0ec0 Allow scalar indexing or onehotbatch tests will fail 2019-07-12 17:56:47 +05:30
DrChainsaw
9b96a3d69b Change to array due to "type definition not allowed inside a local scope" 2019-07-09 01:15:55 +02:00
DrChainsaw
16d5f2bc24 Add x to seen in prefor to avoid infinite recursion if passed something self-referential 2019-07-08 23:11:35 +02:00
thebhatman
8292cfd81f Decay checking test added back 2019-07-03 00:30:16 +05:30
thebhatman
517219ba23 Renamed gradients test file 2019-07-02 16:13:42 +05:30
thebhatman
9f6793d63a Project.toml and Manifest updated 2019-07-02 12:16:24 +05:30
thebhatman
618f8a03c8 Hopefully the tests pass 2019-06-20 00:46:11 +05:30
thebhatman
f1bf39977b nograd defined for sleep 2019-06-20 00:38:24 +05:30
thebhatman
e6d5846e49 Temporary removal of Float16 test 2019-06-14 23:24:31 +05:30
thebhatman
ce6a1bf84f Modifying tests in curnn.jl 2019-06-13 18:45:37 +05:30
thebhatman
80c680c598 Updated tests in cudnn.jl 2019-06-13 18:44:46 +05:30
thebhatman
25f74d1b4a Modified tests in cuda.jl 2019-06-13 18:44:17 +05:30
thebhatman
1ff4e3188e back on mse failing for Float16 2019-06-13 16:41:25 +05:30
thebhatman
c7c0ee2cbc Resolving Merge Conflicts 2019-06-12 21:34:42 +05:30
thebhatman
a56cfb73c3 BatchNorm test corrected 2019-06-11 20:34:48 +05:30
thebhatman
f465665c73 Corrected test for asymmetric padding 2019-06-11 20:20:00 +05:30
thebhatman
94a2d1987d Updated tests of normalisation layers. 2019-06-11 20:05:07 +05:30
thebhatman
a782524a0e Temporarily removed tests of cudnn and curnn. 2019-06-10 18:29:55 +05:30
thebhatman
0ddb5f0265 Tests for Optimisers supporting Zygote 2019-06-06 04:09:17 +05:30
Mike J Innes
b98075817c Merge branch 'master' into DenseBlock 2019-06-05 14:27:47 +01:00
ayush-1506
98a027a505 typo 2019-05-14 02:56:12 -07:00
ayush-1506
bfc5bb0079 rebase 2019-05-14 02:53:48 -07:00
ayush-1506
0a2e288c3f another small test 2019-05-14 02:53:06 -07:00
ayush-1506
2161163a82 added crosscor 2019-05-14 02:52:28 -07:00
ayush-1506
7c28f7f883 Merge branch 'crosscor' of https://github.com/ayush-1506/Flux.jl into crosscor 2019-05-14 02:47:28 -07:00
Bruno Hebling Vieira
c5fc2fb9a3 Added tests 2019-05-13 16:32:00 -03:00
bors[bot]
68ba6e4e2f Merge #563
563: noise shape for dropout r=MikeInnes a=chengchingwen

I add the noise shape for dropout, similar to the `noise_shape` argument in [`tf.nn.dropout`](https://www.tensorflow.org/api_docs/python/tf/nn/dropout)

Co-authored-by: chengchingwen <adgjl5645@hotmail.com>
Co-authored-by: Peter <adgjl5645@hotmail.com>
2019-05-13 17:16:10 +00:00
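The `noise_shape` idea merged in #563 above can be sketched in a few lines of NumPy (an illustrative sketch, not Flux's implementation): the keep/drop mask is sampled with a reduced shape and broadcast against the input, so every axis of size 1 in the noise shape shares one mask value, as with `tf.nn.dropout`'s `noise_shape` argument.

```python
import numpy as np

def dropout(x, p, noise_shape=None, rng=None):
    """Inverted dropout with an optional noise shape.

    The mask is sampled with `noise_shape` (default: x.shape) and
    broadcast against `x`, so axes of size 1 in `noise_shape` drop
    entire slices together instead of independent elements.
    """
    rng = np.random.default_rng() if rng is None else rng
    shape = x.shape if noise_shape is None else noise_shape
    # Boolean keep-mask, rescaled by 1/(1-p) so expectations match at test time.
    mask = (rng.random(shape) > p) / (1.0 - p)
    return x * mask

x = np.ones((4, 3))
y = dropout(x, 0.5, noise_shape=(4, 1))  # drops whole rows together
```

With `noise_shape=(4, 1)` every row of `y` is uniformly kept (all `2.0`) or dropped (all `0.0`), which is the behaviour the PR adds for dropout over, e.g., whole feature maps.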
chengchingwen
2fc2a5282c Merge remote-tracking branch 'upstream/master' into drop_shape 2019-05-14 00:50:59 +08:00
Elliot Saba
48fcc66094 Remove vestigial testing println() 2019-05-12 11:20:24 -07:00
Elliot Saba
2e6561bb6a Change DepthwiseConv() to use in=>out instead of in=>mult.
This is an API change, but I think it makes more sense, and is more
consistent with our `Conv()` api.
2019-05-12 11:20:24 -07:00
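The `in=>out` versus `in=>mult` distinction in the commit above comes from how depthwise convolution counts channels: each input channel is convolved independently with `mult` filters, so the output has `in * mult` channels and an `in => out` spec implies `mult = out ÷ in`. A naive NumPy sketch (an assumption for illustration, not Flux's kernel) makes the relationship concrete:

```python
import numpy as np

def depthwise_conv1d(x, w):
    """Naive 1-D depthwise convolution (valid padding, stride 1).

    x: (length, in_channels); w: (kernel, multiplier, in_channels).
    Output channels = in_channels * multiplier, so specifying the layer
    as `in => out` (the new-style API) fixes multiplier = out // in.
    """
    k, mult, cin = w.shape
    length = x.shape[0] - k + 1
    out = np.zeros((length, mult * cin))
    for c in range(cin):              # each input channel handled independently
        for m in range(mult):         # `mult` filters per input channel
            for i in range(length):
                out[i, c * mult + m] = np.dot(x[i:i + k, c], w[:, m, c])
    return out

x = np.random.rand(10, 3)   # 3 input channels
w = np.random.rand(2, 4, 3) # kernel 2, multiplier 4
y = depthwise_conv1d(x, w)  # 12 output channels: a `3 => 12` layer in the new API
```

Writing `3 => 12` instead of `3 => 4` says the same thing as `Conv()`'s channel pair, which is the consistency argument the commit makes.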
chengchingwen
5c5140683c make dims as field of Dropout 2019-05-10 23:45:50 +08:00
ayush-1506
99d07e67db another small test 2019-05-09 16:43:28 +05:30
ayush-1506
9a3aa18c17 conflicts 2019-05-08 11:56:46 +05:30
Jan Weidner
e96a9d7eaf Switch broken #700 test to pass 2019-05-03 22:36:32 +02:00
Jan Weidner
73c5d9f25c fix 2019-05-03 22:22:52 +02:00
Jan Weidner
27a9a7b9cf add broken test for #700 2019-05-03 22:22:52 +02:00
Mike J Innes
5b79453773 passing tests... ish 2019-05-02 18:54:01 -07:00
Mike J Innes
0c265f305a fix most tests 2019-05-02 18:52:09 -07:00
Mike J Innes
f9d8ea81fb move jacobian test to Tracker 2019-05-02 18:52:09 -07:00
ayush-1506
20b79e0bdf added crosscor 2019-05-01 22:29:00 +05:30
Dhairya Gandhi
221670a2b1 Merge pull request #733 from thebhatman/expdecay-fix
Fixed ExpDecay
2019-05-01 18:58:37 +05:30
thebhatman
5ffc3b2d40 Comparing decay steps with expected true decay steps 2019-05-02 00:12:14 +05:30
thebhatman
5e06d8bb76 Test for decay_step 2019-05-01 23:10:00 +05:30
Dhairya Gandhi
9bbbd17e4b Merge branch 'master' into onecold 2019-04-30 19:09:36 +05:30
Elliot Saba
6e22cd4931 Add asymmetric padding to convolutional layers 2019-04-25 09:55:23 -07:00
Elliot Saba
113ddc8760 Update Flux code for new NNlib branch 2019-04-25 09:55:23 -07:00
thebhatman
e459551336 weights updated in tests 2019-04-11 21:59:50 +05:30
thebhatman
fb3001b8b2 Added test for ExpDecay 2019-04-11 21:53:36 +05:30
Mike J Innes
54d9229be9 Merge pull request #710 from johnnychen94/master
naive implementation of activations
2019-04-05 15:33:31 +01:00
JohnnyChen
4626f7568c rewrite one test case 2019-04-05 18:50:15 +08:00
JohnnyChen
de7a5f4024 correct the function behavior; support Any type 2019-04-05 18:16:44 +08:00
thebhatman
b84ab7ac95 Removed logcosh 2019-04-05 03:16:54 +05:30
bors[bot]
bd9d73a941 Merge #655
655: Added support for Float64 for DepthwiseConv r=dhairyagandhi96 a=thebhatman

DepthwiseConv was giving errors for Float64. This fixes the issue.

Co-authored-by: Manjunath Bhat <manjunathbhat9920@gmail.com>
2019-04-04 17:25:52 +00:00
chengchingwen
261235311c change dims as unbroadcasted dims and keyword argument 2019-04-05 01:19:20 +08:00
Dhairya Gandhi
4f1336905f fix colon indexing 2019-04-04 19:16:14 +05:30
bors[bot]
25097c4322 Merge #712
712: Enable GPU CI r=dhairyagandhi96 a=dhairyagandhi96

Looking for feedback on this policy for doing GPU CI.

Co-authored-by: Dhairya Gandhi <dhairya@juliacopmuting.com>
2019-04-03 12:54:18 +00:00
Dhairya Gandhi
f4f8ba32fe fix variable name 2019-04-03 16:01:27 +05:30
Dhairya Gandhi
cff1dfd258 conditionally execute RNN tests 2019-04-01 19:56:49 +05:30
Dhairya Gandhi
bc33108e66 disable rnn tests 2019-03-31 00:29:10 +05:30
Dhairya Gandhi
ac467cfe77 fixes 2019-03-30 18:17:57 +05:30
Dhairya Gandhi
492a3ca707 disable GRU tests 2019-03-30 18:15:42 +05:30
JohnnyChen
82595648e2 change 4-spaces tab to 2-spaces tab 2019-03-28 22:40:24 +08:00
JohnnyChen
13c58494ec add x into results 2019-03-28 19:28:59 +08:00
Johnny Chen
c4ebd199db move test cases to "basic" testset 2019-03-28 17:58:02 +08:00
Johnny Chen
47728b1899 fix test case error 2019-03-28 17:45:12 +08:00
JohnnyChen
5c2a071713 add support for 0-element Chain 2019-03-28 17:20:41 +08:00
JohnnyChen
ccfe0f8720 naive implementation of activations 2019-03-28 17:07:04 +08:00
Shreyas
c810fd4818 Corrected Group Size In Batch Norm Test For Group Norm 2019-03-28 01:35:38 +05:30
Shreyas
61c1fbd013 Made Requested Changes 2019-03-28 01:33:04 +05:30
Shreyas
671aed963e Made a few fixes. Added tests 2019-03-28 00:51:50 +05:30
thebhatman
4efcc69ba5 logcosh averaged 2019-03-26 23:23:02 +05:30
thebhatman
c4d12e57fe Loss function names in lowercase 2019-03-26 03:09:48 +05:30
Lyndon White
f0cc4a328d make Maxout trainable 2019-03-25 16:02:46 +00:00
Mike J Innes
b637311642 Merge pull request #647 from oxinabox/ox/maxout
Add MaxOut layer
2019-03-22 12:18:53 +00:00
Lyndon White
ca68bf9bec correct casing 2019-03-18 12:20:46 +00:00
Lyndon White
e23c8ddd13 take zero-arge closure 2019-03-18 12:20:46 +00:00
Lyndon White
fcc3ec471a Add MaxOut layer 2019-03-18 12:19:44 +00:00
chengchingwen
59da68b4d9 update test 2019-03-14 21:55:37 +08:00
Manjunath Bhat
57a52e3375 Error of recurrent decimals fixed. 2019-03-12 02:58:32 +05:30
Manjunath Bhat
61386c04f8 Tests added. 2019-03-12 02:36:37 +05:30
Joshua Whittemore
0cac373539 add tests for Data.Iris module 2019-03-09 13:02:59 -08:00
Manjunath Bhat
d4a1d33a31 Added Float64 tests for DepthwiseConv 2019-03-09 20:17:22 +05:30
Mike J Innes
b348e31f07 Merge pull request #667 from FluxML/donottrack
rm Tracker
2019-03-08 11:38:37 +00:00
David Pollack
83b4b3a714 changes based on PR comments 2019-03-07 09:46:44 +01:00
David Pollack
129a708b6f instance normalization 2019-03-07 09:46:44 +01:00
Mike J Innes
b5a148fa37 rm Tracker 2019-03-07 01:33:02 +00:00
Mike Innes
4cf43c0c41 simpler/nicer training loop 2019-02-28 14:58:42 +00:00
Dhairya Gandhi
eb9da4084f remove spurious line change 2019-02-15 20:33:21 +05:30
Dhairya Gandhi
c50ad6cdb5 Merge branch 'master' into tiny_stack_bugfix 2019-02-15 20:20:01 +05:30
Dhairya Gandhi
2ec35861b5 removing non-allocating functions and tests 2019-02-11 21:22:32 +05:30
Dhairya Gandhi
d16ef75b1c remove duplicate allowscalar call 2019-02-11 20:32:23 +05:30
Dhairya Gandhi
1ada9afe81 assert no scalar indexing for onecold 2019-02-09 22:38:49 +05:30
Dhairya Gandhi
35cd9761a8 adding tests 2019-02-09 22:32:02 +05:30
pshashk
ae10421bfe fix normalise test for dims kwarg 2019-02-08 16:02:03 +03:00
pshashk
37385e0dbd test normalise 2019-02-08 15:43:50 +03:00
pshashk
4f6432d133 test 2019-02-08 15:28:07 +03:00
Mike J Innes
601e2d8ae0 Merge pull request #586 from KristofferC/kc/batchnorm
work around extreme slowdown in BatchNorm due to julia performance bug in broadcast fusion
2019-02-08 11:00:33 +00:00
Mike J Innes
fe712bf338 Merge pull request #596 from IvanYashchuk/ivan/topic-issue-542
Fixed issue #542.
2019-02-08 10:38:23 +00:00
Ivan Yashchuk
6471790819 Pass symmetric matrix to logdet gradtest 2019-02-08 12:22:08 +02:00
Ivan Yashchuk
e00ac88016 Added tracking of logdet and logabsdet. Added gradtests. 2019-02-08 09:55:33 +02:00
KristofferC
9914c531f6 work around extreme slowdown due julia performance bug 2019-02-06 16:19:29 +01:00
Mike J Innes
ecc55ec9e1 Revert "Fix OneHotVector/Matrix performance on GPU" 2019-02-06 14:31:15 +00:00
Mike J Innes
e8b2ec6f67 Merge pull request #311 from tejank10/conv_transpose
2D Conv transpose support
2019-02-06 14:14:14 +00:00
Mike J Innes
7fc920240d Merge pull request #591 from dhairyagandhi96/onehot
Fix OneHotVector/Matrix performance on GPU
2019-02-04 13:53:55 +00:00
Dhairya Gandhi
2f916f9763 better tests 2019-02-04 18:43:25 +05:30
Dhairya Gandhi
6654ebfc90 added onecold broadcast test 2019-02-04 17:57:34 +05:30
Mike J Innes
cfe6859186 auto-collect in forward 2019-02-04 10:37:02 +00:00
Mike J Innes
838070968e vcat with scalars 2019-02-04 00:05:16 +00:00
Tejan Karmali
e54df2de06
Merge branch 'master' into conv_transpose 2019-02-02 10:20:45 +05:30
Mike J Innes
0469394715 Merge pull request #576 from mcabbott/patch-1
PermutedDimsArray
2019-01-29 14:55:55 +00:00
Anand Bisen
3670fabbe6 add tests for stack and unstack 2019-01-29 01:41:15 -08:00
Michael Abbott
55a7359f67 PermutedDimsArray test 2019-01-28 18:19:06 +01:00
Mike J Innes
0f2975d905 update -> apply 2019-01-28 13:59:23 +00:00
Mike J Innes
013b421b08 Merge pull request #570 from avik-pal/ap/batchnorm_fixes
Patches for default initializers
2019-01-28 10:40:55 +00:00
Mike J Innes
58ac415f6b forward mode 2019-01-25 16:14:24 +00:00
Mike J Innes
791939709b numeric precision utilities 2019-01-25 10:06:37 +00:00
Avik Pal
2f3ad56166 Add test for Depthwise Conv 2019-01-24 18:53:04 +05:30
Mike Innes
0142d89943 test onecold-of-tracked-gpu-vector
see #556
2019-01-24 10:40:52 +00:00
chengchingwen
06003b72c7 noise shape for dropout 2019-01-22 23:51:38 +08:00
Mike J Innes
152ce4a164 conversions for dual numbers 2019-01-22 10:07:42 +00:00
Mike J Innes
f6397e7358 Merge pull request #517 from FluxML/fix_adamw
Fix decay argument in ADAMW
2019-01-18 10:06:23 +00:00
Mike J Innes
4d79f499bf fixes #549 2019-01-15 15:49:37 +00:00
Mike J Innes
a3e0de1ee5 fixes #516 2019-01-15 15:49:18 +00:00
Mike J Innes
67d9016319 Merge pull request #538 from KristofferC/kc/promote
fix promotion by avoiding integer division in mse and crossentropy
2019-01-15 13:20:46 +00:00
Kristoffer Carlsson
c74aa67c5d fix promotion by avoiding integer division in mse and crossentropy
oops

add tests
2019-01-15 14:15:05 +01:00
Mike J Innes
735b970c12 fix update for scalars 2019-01-10 10:19:05 +00:00
kolia
9b897fc601 Tiny bugfix: stack was still calling julia 0.6 cat
Also added tiny test for good measure.
2018-12-20 10:03:21 -05:00
Mike J Innes
6b11c552f3 better h/vcat, fixes #378 2018-12-19 11:19:01 +00:00
Dhairya Gandhi
e48268ff06 fix argument name in ADAMW 2018-12-12 16:47:42 +05:30
Tejan Karmali
1648414a5d fixes for layer and test 2018-12-04 11:08:40 -05:00
Tejan Karmali
95e490a2c5 merge conflict resolved 2018-11-28 11:10:22 -05:00
Tejan Karmali
a71ee386d0 1.0 fix for conv transpose 2018-11-28 10:55:21 -05:00
Mike J Innes
1c36504768 fixup 2018-11-27 18:44:07 -05:00
Avik Pal
dfd680646c Fix conflict 2018-11-14 22:18:57 +05:30
Mike J Innes
b3331205d1 faster default gradient performance 2018-11-12 23:39:25 +00:00
Mike J Innes
903db70673 float32 param initialisers 2018-11-12 20:10:47 +00:00
Avik Pal
9f12e8ec68 Make the test more reliable 2018-11-10 14:00:25 +05:30
Avik Pal
4df9e10516 Add test for 2D inputs 2018-11-10 11:52:23 +05:30
Avik Pal
564518e448 Merge branch 'master' of https://github.com/FluxML/Flux.jl into cudnn_batchnorm 2018-11-08 19:13:34 +05:30
Mike J Innes
30486f9c03 Merge pull request #441 from Paethon/rm_initn
Removes initn initialization
2018-11-08 13:25:02 +00:00
Mike J Innes
5e572df557 Merge pull request #485 from dhairyagandhi96/master
Add call back
2018-11-08 13:18:17 +00:00
Dhairya Gandhi
392c3c942b re-add removed call function 2018-11-08 18:44:57 +05:30
Mike J Innes
d0e4fbb1e0 Merge branch 'master' into ed/diagm-pair 2018-11-05 11:51:29 +00:00
Mike J Innes
43c5f90d93 Merge pull request #379 from dhairyagandhi96/master
New optimisers interface
2018-10-31 16:38:40 +00:00
Mike J Innes
bffaceee02 tweaks 2018-10-31 14:58:55 +00:00