Mike Innes
b60df53ba1
pkg up
2019-09-19 18:33:33 +01:00
Mike Innes
250aef5a5a
normalise test fixes
2019-09-10 16:19:55 +01:00
Mike Innes
2f7ad895aa
test cleanups
2019-08-19 15:22:50 +01:00
Mike Innes
9590aa63e3
rm last uses of param/data
2019-08-19 15:14:42 +01:00
thebhatman
8d6028e27a
tests with gradients
2019-07-12 20:47:43 +05:30
Mike Innes
e2bf46b7fd
gpu test fixes
2019-07-12 14:52:01 +01:00
thebhatman
e6d5846e49
Temporary removal of Float16 test
2019-06-14 23:24:31 +05:30
thebhatman
1ff4e3188e
back on mse failing for Float16
2019-06-13 16:41:25 +05:30
thebhatman
c7c0ee2cbc
Resolving Merge Conflicts
2019-06-12 21:34:42 +05:30
thebhatman
a56cfb73c3
BatchNorm test corrected
2019-06-11 20:34:48 +05:30
thebhatman
f465665c73
Corrected test for asymmetric padding
2019-06-11 20:20:00 +05:30
thebhatman
94a2d1987d
Updated tests of normalisation layers.
2019-06-11 20:05:07 +05:30
Mike J Innes
b98075817c
Merge branch 'master' into DenseBlock
2019-06-05 14:27:47 +01:00
ayush-1506
98a027a505
typo
2019-05-14 02:56:12 -07:00
ayush-1506
bfc5bb0079
rebase
2019-05-14 02:53:48 -07:00
ayush-1506
0a2e288c3f
another small test
2019-05-14 02:53:06 -07:00
ayush-1506
2161163a82
added crosscor
2019-05-14 02:52:28 -07:00
ayush-1506
7c28f7f883
Merge branch 'crosscor' of https://github.com/ayush-1506/Flux.jl into crosscor
2019-05-14 02:47:28 -07:00
Bruno Hebling Vieira
c5fc2fb9a3
Added tests
2019-05-13 16:32:00 -03:00
bors[bot]
68ba6e4e2f
Merge #563
...
563: noise shape for dropout r=MikeInnes a=chengchingwen
I add the noise shape for dropout, similar to the `noise_shape` argument in [`tf.nn.dropout`](https://www.tensorflow.org/api_docs/python/tf/nn/dropout)
Co-authored-by: chengchingwen <adgjl5645@hotmail.com>
Co-authored-by: Peter <adgjl5645@hotmail.com>
2019-05-13 17:16:10 +00:00
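For context, a minimal sketch of the `dims`-aware dropout that #563 adds, assuming a recent Flux API (the probability and array sizes below are made up for illustration):

```julia
using Flux

# `dims` selects the dimensions that get independent mask entries; the mask is
# broadcast along the remaining dimensions, much like tf.nn.dropout's noise_shape.
d = Dropout(0.5; dims=1)   # one mask entry per row, shared across all columns
Flux.trainmode!(d)         # ensure the mask is applied outside a training loop
x = rand(Float32, 3, 4)
y = d(x)                   # whole rows are either zeroed or rescaled by 1/(1 - p)
```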
chengchingwen
2fc2a5282c
Merge remote-tracking branch 'upstream/master' into drop_shape
2019-05-14 00:50:59 +08:00
Elliot Saba
48fcc66094
Remove vestigial testing println()
2019-05-12 11:20:24 -07:00
Elliot Saba
2e6561bb6a
Change DepthwiseConv() to use in=>out instead of in=>mult.
...
This is an API change, but I think it makes more sense, and is more
consistent with our `Conv()` API.
2019-05-12 11:20:24 -07:00
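A minimal sketch of the revised constructor, assuming a recent Flux release (kernel size and channel counts are arbitrary):

```julia
using Flux

# DepthwiseConv now takes an in => out channel pair, like Conv;
# out must be an integer multiple of in (here the multiplier is 2).
layer = DepthwiseConv((3, 3), 3 => 6)
x = rand(Float32, 28, 28, 3, 1)   # WHCN input with 3 channels
size(layer(x))                    # (26, 26, 6, 1) with no padding
```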
chengchingwen
5c5140683c
make dims as field of Dropout
2019-05-10 23:45:50 +08:00
ayush-1506
99d07e67db
another small test
2019-05-09 16:43:28 +05:30
ayush-1506
9a3aa18c17
conflicts
2019-05-08 11:56:46 +05:30
Jan Weidner
e96a9d7eaf
Switch broken #700 test to pass
2019-05-03 22:36:32 +02:00
Jan Weidner
73c5d9f25c
fix
2019-05-03 22:22:52 +02:00
Jan Weidner
27a9a7b9cf
add broken test for #700
2019-05-03 22:22:52 +02:00
Mike J Innes
5b79453773
passing tests... ish
2019-05-02 18:54:01 -07:00
Mike J Innes
0c265f305a
fix most tests
2019-05-02 18:52:09 -07:00
ayush-1506
20b79e0bdf
added crosscor
2019-05-01 22:29:00 +05:30
Elliot Saba
6e22cd4931
Add asymmetric padding to convolutional layers
2019-04-25 09:55:23 -07:00
Elliot Saba
113ddc8760
Update Flux code for new NNlib branch
2019-04-25 09:55:23 -07:00
Mike J Innes
54d9229be9
Merge pull request #710 from johnnychen94/master
...
naive implementation of activations
2019-04-05 15:33:31 +01:00
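A minimal sketch of the `activations` helper from #710, assuming a recent Flux API (the model below is arbitrary):

```julia
using Flux

# `activations` runs a Chain and collects the output of every layer,
# which is handy for inspecting intermediate values.
m = Chain(Dense(10 => 5, relu), Dense(5 => 2), softmax)
x = rand(Float32, 10)
Flux.activations(m, x)   # a tuple with one entry per layer of the Chain
```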
JohnnyChen
4626f7568c
rewrite one test case
2019-04-05 18:50:15 +08:00
JohnnyChen
de7a5f4024
correct the function behavior; support Any type
2019-04-05 18:16:44 +08:00
bors[bot]
bd9d73a941
Merge #655
...
655: Added support for Float64 for DepthwiseConv r=dhairyagandhi96 a=thebhatman
DepthwiseConv was giving errors for Float64. This fixes the issue.
Co-authored-by: Manjunath Bhat <manjunathbhat9920@gmail.com>
2019-04-04 17:25:52 +00:00
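A minimal sketch of what #655 enables, assuming a recent Flux release (`f64` is used only to put the layer's parameters in double precision for the illustration):

```julia
using Flux

# A DepthwiseConv applied to Float64 data; f64 converts the layer parameters to Float64.
layer = Flux.f64(DepthwiseConv((3, 3), 2 => 2))
x = rand(Float64, 8, 8, 2, 1)
size(layer(x))   # (6, 6, 2, 1)
```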
chengchingwen
261235311c
change dims as unbroadcasted dims and keyword argument
2019-04-05 01:19:20 +08:00
JohnnyChen
82595648e2
change 4-spaces tab to 2-spaces tab
2019-03-28 22:40:24 +08:00
JohnnyChen
13c58494ec
add x into results
2019-03-28 19:28:59 +08:00
Johnny Chen
c4ebd199db
move test cases to "basic" testset
2019-03-28 17:58:02 +08:00
Johnny Chen
47728b1899
fix test case error
2019-03-28 17:45:12 +08:00
JohnnyChen
5c2a071713
add support for 0-element Chain
2019-03-28 17:20:41 +08:00
JohnnyChen
ccfe0f8720
naive implementation of activations
2019-03-28 17:07:04 +08:00
Shreyas
c810fd4818
Corrected Group Size In Batch Norm Test For Group Norm
2019-03-28 01:35:38 +05:30
Shreyas
61c1fbd013
Made Requested Changes
2019-03-28 01:33:04 +05:30
Shreyas
671aed963e
Made a few fixes. Added tests
2019-03-28 00:51:50 +05:30
Lyndon White
f0cc4a328d
make Maxout trainable
2019-03-25 16:02:46 +00:00
Lyndon White
ca68bf9bec
correct casing
2019-03-18 12:20:46 +00:00