Mike J Innes | 7103a0ed7d | tweaks | 2018-08-03 15:19:10 +01:00
pevnak | 926411a449 | removed most errors; the only one in the Fallbacks test persists | 2018-08-03 15:14:25 +01:00
pevnak | c657d4e47f | fixed the sum as suggested by mike | 2018-08-03 15:14:25 +01:00
pevnak | 3510c837a8 | zeros replaced by zero | 2018-08-03 15:14:25 +01:00
pevnak | 8ab209126d | removed zeros fix | 2018-08-03 15:14:25 +01:00
pevnak | e98538673a | updated sum to be compliant with the latest beta; removed some depwarns | 2018-08-03 15:14:25 +01:00
Mike J Innes | 70718e7a64 | update treelike | 2018-08-03 13:02:47 +01:00
Mike J Innes | aa209ee137 | no longer needed | 2018-08-03 13:02:47 +01:00
Mike J Innes | 88a265154c | deprecations | 2018-08-03 12:54:31 +01:00
Yueh-Hua Tu | 5b37319289 | Add Maxpool and Meanpool | 2018-08-01 00:10:53 +08:00
Avik Pal | 7dd5ec16c9 | Fix | 2018-07-17 11:22:12 +05:30
Avik Pal | 646db81f94 | Pull BatchNorm CPU updates | 2018-07-17 09:24:38 +05:30
CarloLucibello | 071dcdda87 | update docs | 2018-07-16 07:32:13 +02:00
CarloLucibello | 185e9148b6 | fix cpu batchnorm | 2018-07-16 07:11:33 +02:00
Avik Pal | 0aabf9d86b | Merge branch 'master' into depthwiseconv | 2018-07-13 14:04:19 +05:30
Avik Pal | 681d8c4dfc | Remove cache | 2018-06-28 12:11:32 +05:30
Avik Pal | 8f43258ab7 | Get the batchnorm working without cache | 2018-06-28 12:04:25 +05:30
Avik Pal | 4916c8e6da | Add treelike for now | 2018-06-27 14:54:49 +05:30
Matthew Kelley | 864d72eef5 | Overload Base.eps() for TrackedReal | 2018-06-26 23:55:43 -06:00
Matthew Kelley | 0e95be3326 | Call Flux.Tracker.data() on ŷ for bce | 2018-06-26 14:48:51 -06:00
Matthew Kelley | ed032cdb1e | Change epsilon value to eps(ŷ) | 2018-06-26 12:29:06 -06:00
Matthew Kelley | e08fd7a6d2 | Added epsilon term to binarycrossentropy | 2018-06-26 11:43:16 -06:00
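The three commits above add an epsilon term so `binarycrossentropy` never evaluates `log(0)` on a saturated prediction. A minimal sketch of the idea in Python (not Flux's actual Julia code; the function name, clamping scheme, and default epsilon are illustrative):

```python
import math

def binarycrossentropy(y_hat, y, eps=1e-7):
    # Clamp the prediction away from 0 and 1 so log never sees 0 --
    # the numerical-stability fix the commits above describe.
    y_hat = min(max(y_hat, eps), 1 - eps)
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))
```

Without the clamp, a prediction of exactly 0 or 1 would make the loss (and its gradient) infinite.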
Mike J Innes | 0a04e3ba61 | Chain activations | 2018-06-26 14:30:46 +01:00
Mike J Innes | 7726a5b605 | inferrable | 2018-06-26 14:12:57 +01:00
Mike J Innes | 3b575930ca | Merge branch 'master' into scalar_pad_stride | 2018-06-26 14:05:07 +01:00
Avik Pal | 3339ad5181 | Integrate cudnn BatchNorm with Flux | 2018-06-20 15:50:30 +05:30
Avik Pal | 714ca23aba | Change default value of epsilon to prevent CuDNN BatchNorm warnings | 2018-06-20 12:11:22 +05:30
Avik Pal | 65f2c33991 | Merge pull request #2 from FluxML/master | 2018-06-11 15:40:57 +05:30
    rebase
Avik Pal | 7f3d11cae0 | Merge branch 'master' into depthwiseconv | 2018-06-09 11:06:07 +05:30
Avik Pal | 1d93fb8e59 | Add new constructor and fix a typo in display | 2018-06-09 11:02:15 +05:30
Tejan Karmali | d20771d6be | Default value of dilation | 2018-06-09 02:29:46 +05:30
    dilation should be 1 by default
Mike J Innes | 4915b0c8dd | Merge pull request #268 from staticfloat/patch-2 | 2018-06-07 13:49:02 +01:00
    Add `dilation` kwarg to `Conv`
Avik Pal | cd6a0856d5 | Adds support for Depthwise Convolutions | 2018-05-30 15:53:57 +05:30
Elliot Saba | e6efca4bf4 | Add dilation kwarg to Conv | 2018-05-21 13:44:13 -07:00
    Now that we have dilated convolution support in `NNlib`, this enables support in Flux's `Conv` layer.
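The dilation commits above (and the follow-up setting its default to 1) expose a standard convolution parameter: dilation spaces out the kernel taps, inflating the receptive field without adding weights. A small hypothetical helper shows the output-length arithmetic, where the effective kernel size is `k + (k - 1) * (dilation - 1)`:

```python
def conv_out_length(n, k, stride=1, pad=0, dilation=1):
    # Effective kernel size grows with dilation: k_eff = k + (k-1)*(d-1).
    # With dilation=1 (the default the commits settle on), k_eff == k.
    k_eff = k + (k - 1) * (dilation - 1)
    return (n + 2 * pad - k_eff) // stride + 1
```

For a length-10 input, a 3-tap kernel yields 8 outputs undilated but only 6 with `dilation=2`, since the kernel then spans 5 positions.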
James Bradbury | af12f006f2 | Use broadcast for dropout | 2018-05-20 04:04:33 -07:00
    Should be fast enough on GPU now that it's not going to be an optimization target again for a while. Hopefully it isn't meaningfully slower on CPU.
Mike J Innes | 7d7d89569c | rm this deprecation for 0.6 | 2018-05-01 12:20:36 +01:00
Mike J Innes | baff20514d | gpu broadcast fix | 2018-04-17 18:05:58 +01:00
Mike J Innes | 8f73dc6e14 | fix gpu cross entropy | 2018-04-17 17:56:47 +01:00
tejank10 | 2ef25775c6 | removed extra expand and fixed bug | 2018-04-16 01:18:26 +05:30
Mike Innes | d12fb98f2a | nicer batchnorm shape error | 2018-04-15 20:29:25 +01:00
tejank10 | 2f5473d435 | added expand in conv constructor | 2018-04-16 00:59:11 +05:30
Mike J Innes | 5fd240f525 | interface tweaks | 2018-04-15 20:04:42 +01:00
Mike J Innes | 73a0be3e04 | Merge branch 'master' into pull-request/07b0f95d | 2018-04-15 17:10:29 +01:00
tejank10 | b080f5c82e | Scalar pad and stride | 2018-04-15 20:32:40 +05:30
Mike J Innes | cb3ae8df6a | rename normalise.jl | 2018-04-15 15:45:46 +01:00
tejank10 | f6097d58d6 | Scalar pad/stride for Conv constructor | 2018-04-15 12:15:41 +05:30
Brad Safnuk | b9a66c679d | Fix error in initialization of σ. | 2018-03-22 22:20:21 -04:00
Brad Safnuk | 35299d4621 | Fix type instability when loading onto a gpu. | 2018-03-22 21:32:32 -04:00
    Also fixes Issue #216.
Brad Safnuk | db2d9efb72 | Update BatchNorm documentation | 2018-03-15 21:59:38 -04:00
Brad Safnuk | 6653ec86d9 | Allow multidimensional inputs to batchnorm. | 2018-03-15 21:48:59 -04:00
    Can be used in conjunction with convolutional layers, in addition to dense layers, with the same API.
Mike J Innes | 8019f789f8 | use normal log | 2018-03-01 16:35:49 +00:00
Mike J Innes | ac57fc3c26 | use @fix in a few places | 2018-03-01 16:31:20 +00:00
Mike J Innes | c2fea2acf6 | revert this | 2018-02-28 23:06:53 +00:00
Mike J Innes | 7606b1a399 | single-batch convolution | 2018-02-28 14:25:32 +00:00
Mike J Innes | 15d1d3256b | conv api updates | 2018-02-26 22:43:07 +00:00
Mike J Innes | 491785a681 | ignore state in mapleaves | 2018-02-22 00:22:51 +00:00
Mike J Innes | ec65e2cec7 | fix printing | 2018-02-22 00:21:48 +00:00
Mike J Innes | 989adcdc7d | gpu fix | 2018-02-17 12:41:53 +00:00
Mike J Innes | 7aa6854c64 | more correct | 2018-02-16 00:06:15 +00:00
Mike J Innes | 63862c2324 | easier initialisation with weights | 2018-02-15 20:52:29 +00:00
Mike J Innes | 01c31e7fcc | conv bias | 2018-02-15 20:15:41 +00:00
Mike J Innes | 2f29733888 | Merge branch 'master' into HEAD | 2018-02-13 14:45:37 +00:00
Mike J Innes | 8432d8db06 | batchnorm fix | 2018-02-13 14:02:35 +00:00
Mike J Innes | fcbdc49d6b | fix reserve usage | 2018-02-08 10:27:26 +00:00
boathit | 6e65789828 | Register back! for logsigmoid and implement (logit)binarycrossentropy | 2018-02-06 19:32:46 +08:00
Mike J Innes | 9a6fcf057b | hook up interface | 2018-02-02 16:42:18 +00:00
Mike J Innes | 0f1e7b5578 | update rnn structure | 2018-02-01 20:57:39 +00:00
Mike J Innes | 4bfb603da6 | gru forward | 2018-01-31 13:46:55 +00:00
Mike J Innes | 4207fb98f2 | basic GPU tests | 2018-01-16 17:58:14 +00:00
Mike J Innes | 8f8589a7f4 | fix initialisation | 2018-01-10 14:11:52 +00:00
Mike J Innes | b44237468e | Merge branch 'master' into gru | 2018-01-10 13:59:33 +00:00
Mehul Tikekar | 2fef799109 | fix typo in conv.jl (fixes #133) | 2018-01-08 16:46:58 -05:00
Mike J Innes | e3577d759c | conv docs | 2017-12-18 18:05:48 +00:00
Mike J Innes | 269d8f36b9 | conv padding | 2017-12-18 18:05:38 +00:00
Mike J Innes | 51f93d9f0e | conv polish | 2017-12-15 16:24:45 +00:00
Mike J Innes | 9d0dd9fb7e | layer wip | 2017-12-15 13:22:57 +00:00
Mike J Innes | 27d896943e | Merge pull request #120 from staticfloat/sf/dense_initialization | 2017-12-13 16:18:02 +00:00
    Better default initialization for Dense layers
Mike J Innes | e3a688e706 | use kwarg | 2017-12-13 15:27:15 +00:00
Mike J Innes | 128725cefd | Merge branch 'master' into sf/weighted_crossentropy | 2017-12-13 15:14:47 +00:00
Mike J Innes | 403cc26327 | Merge branch 'master' into gru | 2017-12-12 16:54:00 +00:00
Mike J Innes | 86097e76fd | tweak batchnorm example | 2017-12-08 19:34:34 +00:00
Mike J Innes | 6f997e798a | Merge branch 'master' into batchnorm | 2017-12-08 19:31:50 +00:00
Mike J Innes | 1d916c81b5 | Merge branch 'master' into HEAD | 2017-12-08 18:31:55 +00:00
Elliot Saba | 41446d547f | Add weighted_crossentropy for imbalanced classification problems | 2017-12-05 17:09:05 -08:00
Elliot Saba | c59b820bed | Add glorot (Xavier) initialization | 2017-12-05 14:24:48 -08:00
    Set default `Dense` and `RNN` inits to `glorot_uniform()` for `W`, `zeros` for `b`.
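The commit above makes glorot (Xavier) uniform initialization the default for `Dense` and `RNN` weight matrices. The heuristic draws weights from a uniform distribution whose bound depends on the layer's fan-in and fan-out; a hedged Python sketch (the list-of-lists representation and argument order are illustrative, not Flux's API):

```python
import math
import random

def glorot_uniform(fan_out, fan_in):
    # Glorot/Xavier uniform: sample from U(-limit, limit) with
    # limit = sqrt(6 / (fan_in + fan_out)), which keeps activation
    # variance roughly constant across layers at initialization.
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return [[random.uniform(-limit, limit) for _ in range(fan_in)]
            for _ in range(fan_out)]
```

Biases stay at zero, matching the `zeros` default for `b` mentioned in the commit body.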
baggepinnen | fa718c7475 | Implement Gated Recurrent Unit | 2017-11-24 14:33:06 +01:00
Mike J Innes | dc1f08a709 | Merge pull request #98 from FluxML/log | 2017-11-23 17:17:39 +00:00
    GPU-ready log function
Mike J Innes | 351d3d4771 | std derivative | 2017-11-21 17:04:04 +01:00
Mike J Innes | b06884b912 | LayerNorm tweaks | 2017-11-21 16:32:36 +01:00
skariel | 11d53781b2 | adding layer normalization | 2017-11-21 16:30:24 +01:00
Mike J Innes | e0657d93ec | mv numeric.jl to nnlib | 2017-11-09 15:06:29 +00:00
Mike J Innes | 2cb94981a0 | gpu-ready log | 2017-11-09 15:04:01 +00:00
Iblis Lin | 6c7613e02b | batchnorm: leverage TrackedArray mean | 2017-11-02 14:20:34 +08:00
Iblis Lin | 88bd8a8fbd | batchnorm: make CuArrays happy | 2017-11-02 14:02:41 +08:00
Iblis Lin | 477da75428 | batchnorm: fix mapchildren | 2017-11-02 13:32:12 +08:00
Iblis Lin | 5253841acc | batchnorm: update docs | 2017-11-02 13:32:12 +08:00
Iblis Lin | b3356cc6bb | batchnorm: batch σ correct coefficient | 2017-11-02 13:32:12 +08:00
Iblis Lin | e0201be770 | batchnorm: parameterize momentum and epsilon | 2017-11-02 13:32:12 +08:00
Iblis Lin | 669273b008 | layer: implement BatchNorm layer | 2017-11-02 13:32:12 +08:00
    See [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](https://arxiv.org/pdf/1502.03167.pdf)
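The BatchNorm commits above implement the transform from the linked Ioffe and Szegedy paper: normalize each batch to zero mean and unit variance, then apply a learned scale and shift. A minimal Python sketch of that core computation (scalar features only; the real layer also tracks running statistics with the `momentum` parameter the commits mention):

```python
import math

def batchnorm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize the batch to zero mean / unit variance, then scale by
    # gamma and shift by beta; eps guards the division when the batch
    # variance is near zero (the epsilon the commits parameterize).
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return [gamma * (x - mu) / math.sqrt(var + eps) + beta for x in xs]
```

With `gamma=1, beta=0` the output of any batch sums to (numerically) zero, which is an easy sanity check on an implementation.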
Mike J Innes | e943a39ee7 | combine special cases | 2017-10-31 16:37:33 +00:00