Commit Graph

153 Commits

Author SHA1 Message Date
Mike Innes 250aef5a5a normalise test fixes 2019-09-10 16:19:55 +01:00
Mike Innes 2f7ad895aa test cleanups 2019-08-19 15:22:50 +01:00
Mike Innes 9590aa63e3 rm last uses of param/data 2019-08-19 15:14:42 +01:00
thebhatman 8d6028e27a tests with gradients 2019-07-12 20:47:43 +05:30
Mike Innes e2bf46b7fd gpu test fixes 2019-07-12 14:52:01 +01:00
thebhatman e6d5846e49 Temporary removal of Float16 test 2019-06-14 23:24:31 +05:30
thebhatman 1ff4e3188e back on mse failing for Float16 2019-06-13 16:41:25 +05:30
thebhatman c7c0ee2cbc Resolving Merge Conflicts 2019-06-12 21:34:42 +05:30
thebhatman a56cfb73c3 BatchNorm test corrected 2019-06-11 20:34:48 +05:30
thebhatman f465665c73 Corrected test for asymmetric padding 2019-06-11 20:20:00 +05:30
thebhatman 94a2d1987d Updated tests of normalisation layers. 2019-06-11 20:05:07 +05:30
Mike J Innes b98075817c Merge branch 'master' into DenseBlock 2019-06-05 14:27:47 +01:00
ayush-1506 98a027a505 typo 2019-05-14 02:56:12 -07:00
ayush-1506 bfc5bb0079 rebase 2019-05-14 02:53:48 -07:00
ayush-1506 0a2e288c3f another small test 2019-05-14 02:53:06 -07:00
ayush-1506 2161163a82 added crosscor 2019-05-14 02:52:28 -07:00
ayush-1506 7c28f7f883 Merge branch 'crosscor' of https://github.com/ayush-1506/Flux.jl into crosscor 2019-05-14 02:47:28 -07:00
Bruno Hebling Vieira c5fc2fb9a3 Added tests 2019-05-13 16:32:00 -03:00
bors[bot] 68ba6e4e2f Merge #563
563: noise shape for dropout r=MikeInnes a=chengchingwen

I add the noise shape for dropout, similar to the `noise_shape` argument in [`tf.nn.dropout`](https://www.tensorflow.org/api_docs/python/tf/nn/dropout)

Co-authored-by: chengchingwen <adgjl5645@hotmail.com>
Co-authored-by: Peter <adgjl5645@hotmail.com>
2019-05-13 17:16:10 +00:00
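The PR body above only names the feature; here is a minimal sketch of the idea, assuming the current Flux API where `Dropout` accepts a `dims` keyword and the mask is shared across the remaining dimensions:

```julia
using Flux

x = rand(Float32, 5, 10)      # 5 features × 10 samples
d = Dropout(0.5; dims = 1)    # mask varies along dim 1, shared along dim 2
Flux.trainmode!(d)            # force dropout to be active outside of training

y = d(x)
# Every column has the same rows zeroed, analogous to passing a reduced
# `noise_shape` to tf.nn.dropout.
findall(iszero, y[:, 1]) == findall(iszero, y[:, 2])   # true
```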
chengchingwen 2fc2a5282c Merge remote-tracking branch 'upstream/master' into drop_shape 2019-05-14 00:50:59 +08:00
Elliot Saba 48fcc66094 Remove vestigial testing `println()` 2019-05-12 11:20:24 -07:00
Elliot Saba 2e6561bb6a Change `DepthwiseConv()` to use `in=>out` instead of `in=>mult`.
This is an API change, but I think it makes more sense, and is more consistent with our `Conv()` api.
2019-05-12 11:20:24 -07:00
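For readers unfamiliar with the change, a rough illustration of the `in=>out` form, assuming the post-change constructor where the output channel count must be a multiple of the input count:

```julia
using Flux

layer = DepthwiseConv((3, 3), 8 => 16)   # 8 input channels, 16 output channels
x = rand(Float32, 32, 32, 8, 1)          # W × H × channels × batch
size(layer(x))                           # (30, 30, 16, 1) with the default pad = 0
```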
chengchingwen 5c5140683c make dims as field of Dropout 2019-05-10 23:45:50 +08:00
ayush-1506 99d07e67db another small test 2019-05-09 16:43:28 +05:30
ayush-1506 9a3aa18c17 conflicts 2019-05-08 11:56:46 +05:30
Jan Weidner e96a9d7eaf Switch broken #700 test to pass 2019-05-03 22:36:32 +02:00
Jan Weidner 73c5d9f25c fix 2019-05-03 22:22:52 +02:00
Jan Weidner 27a9a7b9cf add broken test for #700 2019-05-03 22:22:52 +02:00
Mike J Innes 5b79453773 passing tests... ish 2019-05-02 18:54:01 -07:00
Mike J Innes 0c265f305a fix most tests 2019-05-02 18:52:09 -07:00
ayush-1506 20b79e0bdf added crosscor 2019-05-01 22:29:00 +05:30
Elliot Saba 6e22cd4931 Add asymmetric padding to convolutional layers 2019-04-25 09:55:23 -07:00
Elliot Saba 113ddc8760 Update `Flux` code for new NNlib branch 2019-04-25 09:55:23 -07:00
Mike J Innes 54d9229be9 Merge pull request #710 from johnnychen94/master
naive implementation of activations
2019-04-05 15:33:31 +01:00
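A small usage sketch of the activations helper merged in #710, assuming the `Flux.activations(c::Chain, x)` form it eventually took: it returns the output of every layer in the chain, not just the final one.

```julia
using Flux

m = Chain(Dense(10 => 5, relu), Dense(5 => 2), softmax)
x = rand(Float32, 10)

acts = Flux.activations(m, x)
length(acts)      # 3, one entry per layer
size(acts[1])     # (5,)
size(acts[end])   # (2,), same as m(x)
```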
JohnnyChen 4626f7568c rewrite one test case 2019-04-05 18:50:15 +08:00
JohnnyChen de7a5f4024 correct the function behavior; support Any type 2019-04-05 18:16:44 +08:00
thebhatman b84ab7ac95 Removed logcosh 2019-04-05 03:16:54 +05:30
bors[bot] bd9d73a941 Merge #655
655: Added support for Float64 for DepthwiseConv r=dhairyagandhi96 a=thebhatman

DepthwiseConv was giving errors for Float64. This fixes the issue.

Co-authored-by: Manjunath Bhat <manjunathbhat9920@gmail.com>
2019-04-04 17:25:52 +00:00
chengchingwen 261235311c change `dims` as unbroadcasted dims and keyword argument 2019-04-05 01:19:20 +08:00
JohnnyChen 82595648e2 change 4-spaces tab to 2-spaces tab 2019-03-28 22:40:24 +08:00
JohnnyChen 13c58494ec add x into results 2019-03-28 19:28:59 +08:00
Johnny Chen c4ebd199db move test cases to "basic" testset 2019-03-28 17:58:02 +08:00
Johnny Chen 47728b1899 fix test case error 2019-03-28 17:45:12 +08:00
JohnnyChen 5c2a071713 add support for 0-element Chain 2019-03-28 17:20:41 +08:00
JohnnyChen ccfe0f8720 naive implementation of activations 2019-03-28 17:07:04 +08:00
Shreyas c810fd4818 Corrected Group Size In Batch Norm Test For Group Norm 2019-03-28 01:35:38 +05:30
Shreyas 61c1fbd013 Made Requested Changes 2019-03-28 01:33:04 +05:30
Shreyas 671aed963e Made a few fixes. Added tests 2019-03-28 00:51:50 +05:30
thebhatman 4efcc69ba5 logcosh averaged 2019-03-26 23:23:02 +05:30
thebhatman c4d12e57fe Loss function names in lowercase 2019-03-26 03:09:48 +05:30
Lyndon White f0cc4a328d make Maxout trainable 2019-03-25 16:02:46 +00:00
Lyndon White ca68bf9bec correct casing 2019-03-18 12:20:46 +00:00
Lyndon White e23c8ddd13 take zero-arg closure 2019-03-18 12:20:46 +00:00
Lyndon White fcc3ec471a Add MaxOut layer 2019-03-18 12:19:44 +00:00
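The messages above hint that the Maxout constructor takes a zero-argument closure; a hedged sketch, assuming the `Maxout(f, n)` form in current Flux, which builds `n` copies of the inner layer and takes the element-wise maximum of their outputs:

```julia
using Flux

m = Maxout(() -> Dense(10 => 5), 3)   # 3 Dense(10 => 5) alternatives
x = rand(Float32, 10)
size(m(x))                            # (5,), element-wise max over the 3 outputs
```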
chengchingwen 59da68b4d9 update test 2019-03-14 21:55:37 +08:00
Manjunath Bhat 57a52e3375 Error of recurrent decimals fixed. 2019-03-12 02:58:32 +05:30
Manjunath Bhat 61386c04f8 Tests added. 2019-03-12 02:36:37 +05:30
Manjunath Bhat d4a1d33a31 Added Float64 tests for DepthwiseConv 2019-03-09 20:17:22 +05:30
David Pollack 83b4b3a714 changes based on PR comments 2019-03-07 09:46:44 +01:00
David Pollack 129a708b6f instance normalization 2019-03-07 09:46:44 +01:00
KristofferC 9914c531f6 work around extreme slowdown due to julia performance bug 2019-02-06 16:19:29 +01:00
Avik Pal 2f3ad56166 Add test for Depthwise Conv 2019-01-24 18:53:04 +05:30
chengchingwen 06003b72c7 noise shape for dropout 2019-01-22 23:51:38 +08:00
Kristoffer Carlsson c74aa67c5d fix promotion by avoiding integer division in mse and crossentropy
oops

add tests
2019-01-15 14:15:05 +01:00
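A hedged sketch of the pattern this commit describes for `mse` (illustrative, not necessarily the exact current definition): multiplying by the Rational `1 // length(y)` instead of dividing, so the result keeps the inputs' element type.

```julia
mse(ŷ, y) = sum((ŷ .- y) .^ 2) * 1 // length(y)

ŷ, y = rand(Float32, 4), rand(Float32, 4)
typeof(mse(ŷ, y))   # Float32, the element type is preserved
```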
Avik Pal dfd680646c Fix conflict 2018-11-14 22:18:57 +05:30
Mike J Innes 903db70673 float32 param initialisers 2018-11-12 20:10:47 +00:00
Avik Pal 2559e7b4e6 Fix merge conflicts 2018-10-23 21:53:29 +05:30
JohnnyChen 3bf18347e0 Fix dimensional error in test 2018-09-26 22:03:38 +08:00
JohnnyChen b20ae0546b rebase to pass the test 2018-09-26 20:30:13 +08:00
Avik Pal cc812a8f89 Fix tests 2018-09-11 17:30:54 +05:30
Avik Pal dd2fa77681 Fix tests 2018-09-11 17:06:18 +05:30
Avik Pal 7e7a501efd Fix tests 2018-09-11 16:32:14 +05:30
Avik Pal 8bea60d980 Merge branch 'master' into cudnn_batchnorm 2018-09-11 15:34:25 +05:30
Johnny Chen 44049ce00c Merge branch 'master' into issue-#354 2018-09-06 09:39:31 -05:00
Mike J Innes 1e90226077 actually run tests 2018-09-04 14:35:20 +01:00
Mike J Innes 1e0fd07b09 use `expand` 2018-09-04 14:30:02 +01:00
Mike J Innes e6be639436 Merge branch 'master' into HEAD 2018-09-04 14:03:46 +01:00
Johnny Chen b35664c59f Update testsets 2018-08-25 16:30:46 +08:00
Yueh-Hua Tu 634d34686e Add new constructors and test 2018-08-24 10:31:13 +08:00
Johnny Chen 4baf85bbe2 update Testset of basic.jl 2018-08-23 22:29:03 +08:00
Johnny Chen 81e5f7c991 Update test/layers/basic.jl 2018-08-23 21:59:41 +08:00
Johnny Chen 6743d52d08 Fix issue #354 2018-08-23 21:34:11 +08:00
Avik Pal d3c78a80be Fix layers errors 2018-08-11 17:20:27 +05:30
Avik Pal 3b448ce1ac Merge branch 'master' into cudnn_batchnorm 2018-08-11 15:02:55 +05:30
Simon Mandlik 0471c489e6 depwarns 2018-08-03 15:14:25 +01:00
pevnak 3510c837a8 zeros replaced by zero 2018-08-03 15:14:25 +01:00
Avik Pal da7fe93b31 Fix test 2018-07-17 09:47:45 +05:30
Avik Pal 646db81f94 Pull BatchNorm CPU updates 2018-07-17 09:24:38 +05:30
CarloLucibello 185e9148b6 fix cpu batchnorm 2018-07-16 07:11:33 +02:00
Matthew Kelley 0e95be3326 Call Flux.Tracker.data() on ŷ for bce 2018-06-26 14:48:51 -06:00
Matthew Kelley ed032cdb1e Change epsilon value to eps(ŷ) 2018-06-26 12:29:06 -06:00
Matthew Kelley e08fd7a6d2 Added epsilon term to binarycrossentropy 2018-06-26 11:43:16 -06:00
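A hedged sketch of the ϵ-stabilised form the three commits above converge on (illustrative only, not necessarily Flux's exact definition): a small `eps(ŷ)` term keeps the logarithms finite at ŷ = 0 or ŷ = 1.

```julia
binarycrossentropy(ŷ, y; ϵ = eps(ŷ)) =
    -y * log(ŷ + ϵ) - (1 - y) * log(1 - ŷ + ϵ)

binarycrossentropy(0.0f0, 1.0f0)   # large but finite, thanks to ϵ
```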
Avik Pal a4e35e9e91 Adjust atol in tests 2018-06-20 16:22:25 +05:30
Mike J Innes 5fd240f525 interface tweaks 2018-04-15 20:04:42 +01:00
Brad Safnuk 07b0f95d61 Tests for batch norm with 2D and 3D convolutions. 2018-03-15 22:52:09 -04:00
Brad Safnuk 6653ec86d9 Allow multidimensional inputs to batchnorm.
Can be used in conjunction with convolutional layers, in addition to dense layers, with the same api.
2018-03-15 21:48:59 -04:00
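A brief sketch of the usage the message describes, assuming the familiar Flux API where `BatchNorm(channels)` normalises over the channel dimension of a 4-D convolutional activation:

```julia
using Flux

m = Chain(Conv((3, 3), 3 => 16), BatchNorm(16, relu))
x = rand(Float32, 28, 28, 3, 4)   # W × H × channels × batch
size(m(x))                        # (26, 26, 16, 4)
```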
boathit 6e65789828 Register back! for logsigmoid and implement (logit)binarycrossentropy 2018-02-06 19:32:46 +08:00
Mike J Innes e3a688e706 use kwarg 2017-12-13 15:27:15 +00:00
Mike J Innes 128725cefd Merge branch 'master' into sf/weighted_crossentropy 2017-12-13 15:14:47 +00:00
Elliot Saba 41446d547f Add `weighted_crossentropy` for imbalanced classification problems 2017-12-05 17:09:05 -08:00
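A hedged sketch of the idea behind the weighted cross-entropy commits above (the name and signature here are illustrative; historically this surfaced as a `weight` keyword on Flux's `crossentropy`): per-class weights let a rare class contribute more to the loss.

```julia
wcrossentropy(ŷ, y; weight = 1) = -sum(y .* log.(ŷ) .* weight) / size(y, 2)

ŷ = Float32[0.8 0.3; 0.2 0.7]   # 2 classes × 2 samples (softmax outputs)
y = Float32[1 0; 0 1]           # one-hot targets
wcrossentropy(ŷ, y; weight = Float32[1, 5])   # errors on class 2 count 5×
```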