Commit Graph

305 Commits

Author SHA1 Message Date
Manjunath Bhat
1d310d4532 Removed {typeof(p)} 2019-03-07 21:55:26 +05:30
thebhatman
f4543b7adf Value of alpha updated and dot operations changed 2019-03-08 03:21:26 +05:30
thebhatman
8e5965ac41 Indentation fixed 2019-03-05 16:28:05 +05:30
thebhatman
d6608682fc Suggested changes made 2019-03-05 16:18:50 +05:30
Manjunath Bhat
29b853e0bb Made sure Gradients are not lost. 2019-03-04 22:17:19 +05:30
Manjunath Bhat
97f874abcf Added AlphaDropout which is used in SNNs. 2019-03-04 01:05:46 +05:30
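AlphaDropout, added in the commit above for self-normalizing networks (SNNs), differs from ordinary dropout: dropped units are set to the SELU saturation value α′ = −λα rather than 0, and an affine correction keeps the output near zero mean and unit variance. A minimal NumPy sketch of the idea (SELU constants from the literature; this illustrates the math, not Flux's exact Julia code):

```python
import numpy as np

def alpha_dropout(x, p, rng=np.random.default_rng(0)):
    """Alpha dropout: drop to alpha' = -lambda*alpha instead of 0,
    then apply an affine correction so mean/variance are preserved."""
    lam, alpha = 1.0507009873554805, 1.6732632423543772  # SELU constants
    alpha_p = -lam * alpha
    keep = 1.0 - p
    mask = rng.random(x.shape) < keep  # True = keep the activation
    # a, b derived so E[out] ~ E[x] and Var[out] ~ Var[x]
    a = (keep + alpha_p**2 * keep * (1 - keep)) ** -0.5
    b = -a * alpha_p * (1 - keep)
    return a * np.where(mask, x, alpha_p) + b

x = np.random.default_rng(1).standard_normal(100_000)
y = alpha_dropout(x, p=0.1)
print(y.mean(), y.std())  # both stay close to 0 and 1
```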
Rohith Pentaparthy
1b1dff1266 Added an example of Conv to Flux.jl/src/layers/conv.jl, and clarified what WHCN means 2019-02-23 14:31:27 -06:00
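The WHCN convention clarified above means Flux's `Conv` expects a 4-D array ordered (width, height, channels, batch), so a single grayscale image needs singleton channel and batch dimensions. A NumPy sketch of the shape bookkeeping (illustrative only; Flux itself is Julia):

```python
import numpy as np

# a single 28x28 grayscale image
img = np.zeros((28, 28))

# WHCN layout: (width, height, channels, batch).
# A lone image becomes a one-channel batch of one:
whcn = img.reshape(28, 28, 1, 1)
print(whcn.shape)  # (28, 28, 1, 1)
```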
pshashk
b0a5844afb Remove dims=1 from normalise (#619)
* remove `dims=1`

* add dims arg

* fix test

* remove dims=1 only from deprecated version
2019-02-11 16:11:47 +00:00
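The `dims` keyword introduced in #619 lets `normalise` standardize along a caller-chosen dimension instead of a hard-coded `dims=1`. A hedged NumPy analogue of the operation (with `axis` playing the role of `dims`, and the uncorrected standard deviation):

```python
import numpy as np

def normalise(x, axis):
    """Zero-mean, unit-variance standardization along `axis`
    (ddof=0, i.e. the uncorrected standard deviation)."""
    mu = x.mean(axis=axis, keepdims=True)
    sigma = x.std(axis=axis, keepdims=True)
    return (x - mu) / sigma

x = np.arange(12, dtype=float).reshape(3, 4)
y = normalise(x, axis=0)
print(y.mean(axis=0))  # ~[0, 0, 0, 0]
```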
pshashk
b074b2491a fix docstring 2019-02-08 21:49:53 +03:00
pshashk
c3e04392d8 drop dims type restriction 2019-02-08 16:15:37 +03:00
pshashk
911c901294 dims kwarg 2019-02-08 16:00:32 +03:00
Mike J Innes
601e2d8ae0 Merge pull request #586 from KristofferC/kc/batchnorm
work around extreme slowdown in BatchNorm due to julia performance bug in broadcast fusion
2019-02-08 11:00:33 +00:00
KristofferC
9914c531f6 work around extreme slowdown due to julia performance bug 2019-02-06 16:19:29 +01:00
Mike J Innes
e8b2ec6f67 Merge pull request #311 from tejank10/conv_transpose
2D Conv transpose support
2019-02-06 14:14:14 +00:00
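The 2-D transposed convolution merged in #311 upsamples rather than downsamples, so its output size follows the inverse of the ordinary conv formula. A small sketch of that arithmetic (the standard textbook formula, not Flux's exact code):

```python
def conv_transpose_size(in_size, kernel, stride=1, pad=0):
    """Output length of a transposed convolution along one dimension:
    the inverse of floor((in + 2p - k) / s) + 1 for an ordinary conv."""
    return (in_size - 1) * stride - 2 * pad + kernel

# upsampling a 14x14 feature map back to 28x28 with k=2, s=2
print(conv_transpose_size(14, kernel=2, stride=2))  # 28
```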
Moksh Jain
046f7b4eae fix std arguments in normalise 2019-02-05 18:36:04 +05:30
Moksh Jain
c6409d7686 add support for n-dimensional input to normalise layer 2019-02-05 17:09:22 +05:30
Mike J Innes
940b1e6dbf Merge pull request #587 from KristofferC/patch-2
use uncorrected standard deviation in normalise
2019-02-04 14:35:25 +00:00
Mike J Innes
17f33b4a6a Merge pull request #583 from KristofferC/kc/small_fixes
clarify docs on single batch image to conv
2019-02-04 12:33:34 +00:00
Lyndon White
26550dacda Default to zeroed initial state 2019-02-02 20:01:28 +00:00
Tejan Karmali
84eabcd2ae fixed DepthwiseConv dilation 2019-02-02 12:19:35 +05:30
Tejan Karmali
e54df2de06 Merge branch 'master' into conv_transpose 2019-02-02 10:20:45 +05:30
Kristoffer Carlsson
fd0f1c7a82 use uncorrected standard deviation in normalise
fixes https://github.com/FluxML/Flux.jl/issues/529
2019-01-30 17:42:19 +01:00
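The fix above switches `normalise` from the Bessel-corrected sample standard deviation (dividing by n−1, Julia's `std` default) to the uncorrected population one (dividing by n). The difference, shown via NumPy's `ddof` parameter:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

corrected = x.std(ddof=1)    # divides by n-1 (sample / Bessel-corrected)
uncorrected = x.std(ddof=0)  # divides by n   (population / uncorrected)

print(corrected, uncorrected)  # the uncorrected value is smaller
```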
Mike J Innes
013b421b08 Merge pull request #570 from avik-pal/ap/batchnorm_fixes
Patches for default initializers
2019-01-28 10:40:55 +00:00
Mike J Innes
1cf37ab9eb rm some old deprecations 2019-01-25 09:54:32 +00:00
Avik Pal
733879681e Change initializer to glorot_uniform 2019-01-24 18:48:30 +05:30
Avik Pal
bb72c528e1 Change initializers to Float32 2019-01-24 18:43:39 +05:30
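The two commits above make `glorot_uniform` the default initializer and move initializers to Float32. Glorot (Xavier) uniform draws weights from U(−b, b) with b = sqrt(6 / (fan_in + fan_out)); a hedged NumPy sketch of that recipe (illustrative, not Flux's Julia implementation):

```python
import numpy as np

def glorot_uniform(fan_out, fan_in, rng=np.random.default_rng(0)):
    """Xavier/Glorot uniform init: U(-b, b) with
    b = sqrt(6 / (fan_in + fan_out)), emitted as Float32."""
    bound = np.sqrt(6.0 / (fan_in + fan_out))
    w = rng.uniform(-bound, bound, size=(fan_out, fan_in))
    return w.astype(np.float32)

W = glorot_uniform(64, 128)
print(W.dtype, W.shape)  # float32 (64, 128)
```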
Kristoffer Carlsson
325e3a4f70 clarify docs on single batch image to conv
fixes #309
2019-01-24 11:24:10 +01:00
Mike J Innes
496dbfabd2 make chain collectable 2019-01-22 00:31:55 +00:00
Kristoffer Carlsson
c74aa67c5d fix promotion by avoiding integer division in mse and crossentropy
oops

add tests
2019-01-15 14:15:05 +01:00
Tejan Karmali
ed835f26fe printing ConvTranspose layer 2018-12-09 12:50:09 -05:00
Tejan Karmali
1648414a5d fixes for layer and test 2018-12-04 11:08:40 -05:00
Tejan Karmali
519c3db5c0 clean code 2018-11-28 11:48:53 -05:00
Tejan Karmali
95e490a2c5 merge conflict resolved 2018-11-28 11:10:22 -05:00
Tejan Karmali
89f2709b61 resolved conflicts 2018-11-28 11:07:43 -05:00
Tejan Karmali
a71ee386d0 1.0 fix for conv transpose 2018-11-28 10:55:21 -05:00
Mike J Innes
dd154ca049 Merge pull request #294 from avik-pal/cudnn_batchnorm
Wrapper for CuDNN BatchNorm
2018-11-27 23:51:32 +00:00
Mike J Innes
3d41dca338 immutable chain 2018-11-16 12:22:15 +00:00
Avik Pal
dfd680646c Fix conflict 2018-11-14 22:18:57 +05:30
Mike J Innes
3ef6bfc0ac Merge pull request #473 from avik-pal/patch-2
Update CUDNN function calls
2018-11-14 16:07:02 +00:00
Mike J Innes
a57f66e58a adapt updates 2018-11-14 15:36:18 +00:00
Mike J Innes
75ecc0b6ba downconversion for conv 2018-11-12 20:21:27 +00:00
Mike J Innes
903db70673 float32 param initialisers 2018-11-12 20:10:47 +00:00
Avik Pal
564518e448 Merge branch 'master' of https://github.com/FluxML/Flux.jl into cudnn_batchnorm 2018-11-08 19:13:34 +05:30
Avik Pal
02efc264e7 Fix unintentional change to spaces 2018-11-08 19:12:38 +05:30
Mike J Innes
30486f9c03 Merge pull request #441 from Paethon/rm_initn
Removes initn initialization
2018-11-08 13:25:02 +00:00
Tejan Karmali
f540a0daf7 merge with upstream 2018-10-23 13:40:06 -04:00
Avik Pal
2559e7b4e6 Fix merge conflicts 2018-10-23 21:53:29 +05:30
Mike J Innes
bbccdb3eec Merge pull request #279 from avik-pal/depthwiseconv
Adds support for Depthwise Convolutions
2018-10-23 17:22:15 +01:00
Tejan Karmali
e9bf86dbff Merge branch 'master' of https://github.com/FluxML/Flux.jl into conv_transpose 2018-10-19 02:08:25 -04:00
Sebastian Stabinger
94e5e9f993 Removes initn initialization
It is replaced with glorot_uniform for Conv, following Keras
2018-10-17 17:11:16 +02:00