Commit Graph

1626 Commits

Author SHA1 Message Date
Mike J Innes
5839e166f6
Merge pull request #860 from dsweber2/activations
Activations
2019-11-19 16:44:25 +00:00
Tim Besard
2fa3e5673e
Merge pull request #924 from FluxML/tb/cuda_init
CUDA package initialization improvements
2019-11-19 16:48:45 +01:00
Tim Besard
c45cec4cba Simplify warning. 2019-11-19 16:05:41 +01:00
Tim Besard
69bf84278f Remove wrong warning. 2019-11-19 15:53:43 +01:00
Mike J Innes
4f73e434a4
Merge pull request #935 from baggepinnen/patch-4
Fix AMSGrad on GPU
2019-11-19 12:58:37 +00:00
Troels Arnfred Bojesen
2b80573248 Fix Glorot initialization, add He initialization
Should fix #442.
Adds He weight initialization as a bonus :-)
2019-11-19 18:16:29 +09:00
Fredrik Bagge Carlson
2da22f31f0
Avoid unnecessary conversion
This initialization works on both CPU and GPU
2019-11-19 16:31:04 +08:00
Fredrik Bagge Carlson
df7ffb0ef8
Fix AMSGrad on GPU
The previous initialization created a CPU array. Now, the same type of array as `x` is created.
2019-11-19 16:27:44 +08:00
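The two AMSGrad commits above describe the fix pattern: allocate the optimiser's state arrays with the same array type as the parameter, instead of always allocating CPU arrays. A minimal sketch of that pattern (illustrative only, not the exact Flux source; the state layout and the `1e-8` constant here are assumptions):
```
# Before: always allocates plain Arrays, even when x is a CuArray.
amsgrad_state_cpu(x) = (zeros(size(x)), zeros(size(x)), fill(1e-8, size(x)))

# After: zero(x) and fill! preserve the array type of x, so a GPU parameter
# gets GPU state without any conversion.
amsgrad_state(x) = (zero(x), zero(x), fill!(zero(x), 1e-8))

x = rand(Float32, 4)
amsgrad_state(x)   # three Vector{Float32}; with CuArrays loaded, a CuArray x yields CuArrays
```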
Troels Arnfred Bojesen
4530ac65c7 Fix Glorot initialization, add He initialization
Should fix the issue reported at https://github.com/FluxML/Flux.jl/issues/442.
Adds He weight initialization as a bonus :-)
2019-11-19 16:50:40 +09:00
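For reference, the two schemes named in the Glorot/He commits above, sketched for a 2-D weight matrix of size (fan_out, fan_in). This is a hedged illustration: the function names and `Float32` element type are assumptions, and convolutional kernels need the receptive field folded into the fan sizes, which is part of what the referenced fix addresses.
```
# Glorot/Xavier uniform: U(-√(6/(fan_in+fan_out)), +√(6/(fan_in+fan_out))).
glorot_uniform(fan_out, fan_in) =
    (rand(Float32, fan_out, fan_in) .- 0.5f0) .* sqrt(24f0 / (fan_in + fan_out))

# He/Kaiming normal: N(0, √(2/fan_in)), suited to ReLU-style activations.
he_normal(fan_out, fan_in) = randn(Float32, fan_out, fan_in) .* sqrt(2f0 / fan_in)

W = glorot_uniform(128, 64)
V = he_normal(128, 64)
```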
dsweber2
dea29532ef Merge branch 'master' into activations 2019-11-15 17:19:43 -08:00
dsweber2
20eb840882 keeping activations separate 2019-11-15 12:03:08 -08:00
dsweber2
58c794702d simpler test 2019-11-14 14:05:53 -08:00
dsweber2
0fe3ac4e77 bring activations into function call 2019-11-14 13:40:52 -08:00
dsweber2
6475f6a43e recursive way of doing activations 2019-11-14 13:40:52 -08:00
dsweber2
99679f7e16 deal with empty Chain 2019-11-14 13:40:52 -08:00
dsweber2
d0202a2945 adding the extra commits broke the accumulate version 2019-11-14 13:40:52 -08:00
dsweber2
cdaaca8cfa make activations zygote friendly 2019-11-14 13:40:29 -08:00
DrChainsaw
453ecd1f24 Merge remote-tracking branch 'upstream/master' into samepad 2019-11-08 18:49:47 +01:00
janEbert
3dceef427f Fix binarycrossentropy on CuArrays 2019-11-08 16:48:11 +01:00
Tim Besard
a82b76cf24 Conditionally include the CUDNN glue code. 2019-11-04 15:27:11 +01:00
Tim Besard
39ab740fb7 Check for CUDA availability at run time. 2019-11-02 11:18:06 +01:00
janEbert
7b41bc4ab5 Change gate function to view instead of copy
This applies only to vector input, since copying a matrix may be more efficient due to
caching: a matrix is sliced per row, so the resulting view would not be
aligned.
2019-10-24 12:45:22 +02:00
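A sketch of the change, based on the RNN `gate` helper the message refers to (simplified; the exact Flux source may differ): vector input returns a `view`, while matrix input keeps copying, because a row-range slice of a column-major matrix is not contiguous in memory.
```
gate(h::Int, i::Int) = (1:h) .+ h * (i - 1)
gate(x::AbstractVector, h, i) = @view x[gate(h, i)]   # was a copy: x[gate(h, i)]
gate(x::AbstractMatrix, h, i) = x[gate(h, i), :]      # copy kept: cache-friendlier than a strided view

v = rand(Float32, 12)
gate(v, 4, 2)   # a view of elements 5:8, no new buffer allocated
```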
bors[bot]
645aa04464
Merge #898
898: Fix problem in crossentropy breaking GPU compilation r=MikeInnes a=kshyatt

Trying to run this simple example
```
using Flux, CuArrays
using Flux: crossentropy
model = Chain(
        Dense(728, 128, σ),
        LSTM(128, 256),
        LSTM(256, 128),
        Dense(128, 10),
        softmax) |> gpu
data = [rand(728) for i in 1:100];
out  = [rand(10) for i in 1:100];
loss(x, y) = crossentropy(model(x), y);
Flux.train!(loss, params(model), zip(gpu.(data), gpu.(out)), ADAM())
```
Old version of `crossentropy`:
```
ERROR: GPU compilation of #23(CuArrays.CuKernelState, CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global}, Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(*),Tuple{Base.Broadcast.Extruded{Array{Float32,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(conj),Tuple{Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}}}}}}) failed
KernelError: passing and using non-bitstype argument

Argument 4 to your kernel function is of type Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(*),Tuple{Base.Broadcast.Extruded{Array{Float32,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(conj),Tuple{Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}}}}}}.
That type is not isbits, and such arguments are only allowed when they are unused by the kernel.  .args is of type Tuple{Base.Broadcast.Extruded{Array{Float32,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(conj),Tuple{Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}}}}} which is not isbits.
    .1 is of type Base.Broadcast.Extruded{Array{Float32,1},Tuple{Bool},Tuple{Int64}} which is not isbits.
      .x is of type Array{Float32,1} which is not isbits.


Stacktrace:
 [1] check_invocation(::CUDAnative.CompilerJob, ::LLVM.Function) at /mnt/home/khyatt/.julia/dev/CUDAnative/src/compiler/validation.jl:70
 [2] macro expansion at /mnt/home/khyatt/.julia/dev/CUDAnative/src/compiler/driver.jl:187 [inlined]
 [3] macro expansion at /mnt/home/khyatt/.julia/packages/TimerOutputs/7zSea/src/TimerOutput.jl:216 [inlined]
 [4] #codegen#136(::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::typeof(CUDAnative.codegen), ::Symbol, ::CUDAnative.CompilerJob) at /mnt/home/khyatt/.julia/dev/CUDAnative/src/compiler/driver.jl:186
 [5] #codegen at ./none:0 [inlined]
 [6] #compile#135(::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::typeof(CUDAnative.compile), ::Symbol, ::CUDAnative.CompilerJob) at /mnt/home/khyatt/.julia/dev/CUDAnative/src/compiler/driver.jl:47
 [7] #compile#134 at ./none:0 [inlined]
 [8] #compile at ./none:0 [inlined] (repeats 2 times)
 [9] macro expansion at /mnt/home/khyatt/.julia/dev/CUDAnative/src/execution.jl:389 [inlined]
 [10] #cufunction#176(::Nothing, ::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::typeof(CUDAnative.cufunction), ::GPUArrays.var"#23#24", ::Type{Tuple{CuArrays.CuKernelState,CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(*),Tuple{Base.Broadcast.Extruded{Array{Float32,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(conj),Tuple{Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}}}}}}}}) at /mnt/home/khyatt/.julia/dev/CUDAnative/src/execution.jl:357
 [11] cufunction(::Function, ::Type) at /mnt/home/khyatt/.julia/dev/CUDAnative/src/execution.jl:357
 [12] macro expansion at /mnt/home/khyatt/.julia/dev/CUDAnative/src/execution.jl:174 [inlined]
 [13] macro expansion at ./gcutils.jl:91 [inlined]
 [14] macro expansion at /mnt/home/khyatt/.julia/dev/CUDAnative/src/execution.jl:171 [inlined]
 [15] _gpu_call(::CuArrays.CuArrayBackend, ::Function, ::CuArray{Float32,1}, ::Tuple{CuArray{Float32,1},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(*),Tuple{Base.Broadcast.Extruded{Array{Float32,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(conj),Tuple{Base.Broadcast.Extruded{CuArray{Float32,1},Tuple{Bool},Tuple{Int64}}}}}}}, ::Tuple{Tuple{Int64},Tuple{Int64}}) at /mnt/home/khyatt/.julia/dev/CuArrays/src/gpuarray_interface.jl:60
 [16] gpu_call at /mnt/home/khyatt/.julia/dev/GPUArrays/src/abstract_gpu_interface.jl:151 [inlined]
 [17] gpu_call at /mnt/home/khyatt/.julia/dev/GPUArrays/src/abstract_gpu_interface.jl:128 [inlined]
 [18] copyto! at /mnt/home/khyatt/.julia/dev/GPUArrays/src/broadcast.jl:48 [inlined]
 [19] copyto! at ./broadcast.jl:863 [inlined]
 [20] copy at ./broadcast.jl:839 [inlined]
 [21] materialize at ./broadcast.jl:819 [inlined]
 [22] (::Zygote.var"#1310#1311"{CuArray{Float32,1},CuArray{Float32,1}})(::Array{Float32,1}) at /mnt/home/khyatt/.julia/dev/Zygote/src/lib/broadcast.jl:68
```
New version:
```
julia> Flux.train!(loss, params(model), zip(gpu.(data), gpu.(out)), ADAM())

julia> # everyone finished happily and went on with their lives
```

Co-authored-by: Katharine Hyatt <khyatt@flatironinstitute.org>
2019-10-23 14:31:53 +00:00
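The kernel error above boils down to a plain CPU `Array` being captured inside a broadcast over `CuArray`s, which makes one kernel argument non-isbits. A hedged illustration of that failure mode (requires a GPU and the CuArrays version of this era; it shows the general pattern, not the exact change made to `crossentropy`):
```
using CuArrays

ŷ = cu(rand(Float32, 10))
w = rand(Float32, 10)      # plain Array on the CPU
# ŷ .* w                   # fails to compile: the fused kernel captures a non-isbits Array
ŷ .* cu(w)                 # keeping every broadcast operand on the device works
```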
Katharine Hyatt
e0c1c0e057 Fix problem in crossentropy breaking GPU compilation 2019-10-22 14:00:57 -04:00
bors[bot]
fa5737fb5c
Merge #904
904: Documenting Optimiser Interface r=MikeInnes a=MikeInnes

I needed to add one extra commit to #875 before merging.

Co-authored-by: Dhairya Gandhi <dhairya@juliacopmuting.com>
Co-authored-by: Dhairya Gandhi <dhairya@juliacomputing.com>
Co-authored-by: Mike Innes <mike.j.innes@gmail.com>
2019-10-22 12:38:19 +00:00
Mike Innes
7ead2d6c7b typo 2019-10-22 13:36:39 +01:00
Katharine Hyatt
b8b4bc48b9 Backticks and examples for normalise 2019-10-21 10:31:44 -04:00
DrChainsaw
530d4edb67 Fix for reading comprehension error (dim is not always 2 * (N-2)); fix for ambiguous method sig 2019-10-20 16:03:01 +02:00
DrChainsaw
411ce5dbd8 Add SamePad for pooling layers 2019-10-20 13:43:39 +02:00
DrChainsaw
fc123d6279 Add SamePad for conv layers 2019-10-20 13:43:23 +02:00
Dhairya Gandhi
4477dd8d54 reviews 2019-10-10 20:27:11 +05:30
Dhairya Gandhi
f19066ee29 more docstrings 2019-10-10 16:48:12 +05:30
Dhairya Gandhi
fe52689cfe in depth docstrings 2019-10-09 16:16:11 +05:30
thebhatman
96a23c295c Changes to docs 2019-10-09 14:53:03 +05:30
bors[bot]
af0dcb2c63
Merge #882
882: Check if CUDA availability changed during init. r=MikeInnes a=maleadt

With this PR, Flux uses CUDAapi to check whether CUDA is available during initialization, and forces recompilation if that disagrees with what was decided during precompilation. This avoids the scenario where Flux was precompiled without GPU support and the GPU remains unusable even after the user fixes their CUDA/GPU set-up, since fixing the set-up alone does not force recompilation (and we can't add precompilation dependencies on packages that don't exist).

However, we can't do the same for the case where a GPU/CUDA is present but CuArrays fails to import (checking whether it imports during `__init__` would be much too expensive, if even possible), so this PR removes support for having CUDA/a GPU while CuArrays is broken. That's a little risky now that Flux depends on CuArrays, but the package is pretty mature and I haven't seen many bug reports about failing to load it recently.

Fixes https://github.com/FluxML/Flux.jl/pull/852#issuecomment-538028314

cc @MikeInnes @xukai92

Co-authored-by: Tim Besard <tim.besard@gmail.com>
2019-10-08 13:24:49 +00:00
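A sketch of the initialisation check the PR describes (illustrative, not the exact Flux source; the real code also invalidates the precompilation cache so the next session recompiles): record whether CUDA was found at precompile time, then compare against a fresh `CUDAapi.has_cuda()` query in `__init__`.
```
using CUDAapi

# Evaluated during precompilation, so the result is baked into the cache file.
const precompiled_with_cuda = has_cuda()

function __init__()
    if has_cuda() != precompiled_with_cuda
        @warn "CUDA availability changed since this package was precompiled; " *
              "restart Julia so it can be recompiled with the correct GPU support."
    end
end
```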
Dhairya Gandhi
b503741651 expanded docstrings 2019-10-04 14:46:03 +05:30
Tim Besard
8aea15e6e0 Demote to const variables. 2019-10-03 21:28:55 +02:00
Tim Besard
2369b2b3fd Add an environment variable to disable CUDA usage. 2019-10-03 21:27:54 +02:00
Tim Besard
63d196aa37 Check if CUDA availability changed during init. 2019-10-03 20:05:32 +02:00
thebhatman
ec886c8ce8 Added docstring for hinge loss 2019-10-03 21:13:09 +05:30
Manjunath Bhat
2b30319a55
Merge branch 'master' into patch-6 2019-09-30 21:05:02 +05:30
thebhatman
6e289ef939 Merge branch 'patch-6' of https://github.com/thebhatman/Flux.jl into patch-6 2019-09-30 20:55:44 +05:30
Filippo Vicentini
606fe58854
Use <:Number 2019-09-29 12:33:02 +02:00
Filippo Vicentini
14e94c291e
Make it actually work 2019-09-29 12:28:01 +02:00
Filippo Vicentini
d91677f651
Fix params! to work with complex numbers 2019-09-29 12:23:41 +02:00
Dhairya Gandhi
8013c728b1 clearer optimiser docstrings 2019-09-28 16:09:00 +05:30
Dhairya Gandhi
0175485a80 fixup 2019-09-27 22:08:25 +05:30
Dhairya Gandhi
8bb0db7d0c opt docstrings 2019-09-27 22:04:53 +05:30
Mike Innes
b90b02872f Merge branch 'master' into tb/cuarrays_dnn 2019-09-27 14:58:32 +01:00
Mike Innes
46bc8e5e64 move pullbacks to CuArrays 2019-09-26 17:14:18 +01:00
Michael Abbott
806e0c5c57 line 2019-09-25 15:20:13 +02:00
Michael Abbott
4245d9acad eg 2019-09-25 15:18:40 +02:00
Michael Abbott
2de84ce79f simplify 2019-09-25 13:59:32 +02:00
Michael Abbott
1a1a96571a +Chain 2019-09-25 13:47:29 +02:00
Michael Abbott
19830c71b1 fix printing of SkipConnection 2019-09-25 13:37:01 +02:00
bors[bot]
acb6a89245
Merge #865
865: Functor r=MikeInnes a=MikeInnes

This refactors our current `@treelike` infrastructure. It somewhat formalises what we're doing around the idea of a Flux model as a functor, i.e. something that can be mapped over.

This is much more flexible than what we had before, and avoids some issues. It allows layers to have state that isn't mappable; it allows for dispatch when walking the tree, which means layers like `BatchNorm` can have non-trainable parameters; and it also allows for zipped mapping like `fmap(+, xs, ys)`, which isn't implemented yet but will be useful for the new optimisers work.

The main downside is that the term `functor` has been previously used in the Julia community as a malapropism for "thing that behaves like a function"; but hopefully this can start to reduce that usage.

Co-authored-by: Mike Innes <mike.j.innes@gmail.com>
2019-09-24 16:36:10 +00:00
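A minimal sketch of the functor idea the PR describes, assuming a toy `Affine` layer (illustrative; Flux's real implementation also handles caching, non-mappable state, and dispatch while walking the tree): a layer declares its children plus a reconstructor, and `fmap` maps a function over the leaves and rebuilds the structure.
```
functor(x) = ((), _ -> x)                        # default: everything is a leaf

struct Affine{W,B}
    W::W
    b::B
end
functor(a::Affine) = ((a.W, a.b), ws -> Affine(ws...))

function fmap(f, x)
    children, rebuild = functor(x)
    isempty(children) ? f(x) : rebuild(map(c -> fmap(f, c), children))
end

a = Affine(rand(2, 2), rand(2))
fmap(x -> 2 .* x, a)                             # a new Affine with doubled parameters
```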
Dhairya Gandhi
822288d63d merge conflicts 2019-09-24 00:31:44 +05:30
Dhairya Gandhi
6846551f57 fix cuda init 2019-09-22 22:02:05 +05:30
Mike Innes
b60df53ba1 pkg up 2019-09-19 18:33:33 +01:00
Mike Innes
cabb81e30b internal rename 2019-09-19 15:53:31 +01:00
Mike Innes
b951377426 fix normalisation layer params 2019-09-19 15:33:24 +01:00
Mike Innes
6529dbcbe6 functor refactor 2019-09-19 15:22:11 +01:00
Mike Innes
2c71fc282b rename functor.jl 2019-09-19 14:15:28 +01:00
Mike Innes
c5e56b7e04 move setweights and copy_transpose 2019-09-17 17:22:35 +01:00
Mike Innes
5baebf48f4 Merge branch 'master' into tb/cuarrays_dnn 2019-09-17 16:17:09 +01:00
Mike Innes
368b1f53b4 tuple support 2019-09-17 15:49:39 +01:00
Mike Innes
b348b20452 cudnn rnns + implicit gradients 2019-09-17 15:41:42 +01:00
Mike Innes
fe57215b7e test fillarray gradients 2019-09-17 15:21:03 +01:00
Tim Besard
4942d7fcfd Move functionality over to CuArrays. 2019-09-13 08:21:45 +02:00
Tim Besard
1e7ff4f65d Query the worksize. 2019-09-13 08:04:05 +02:00
Tim Besard
04fce70019 Move low-level CUDNN wrappers to CuArrays. 2019-09-13 08:04:05 +02:00
Mike Innes
de2049450b docs mostly fixed 2019-09-10 15:17:07 +01:00
Mike Innes
c8d460ff84 doctests passing 2019-09-10 15:02:43 +01:00
Mike J Innes
67c38b3099 Merge branch 'master' into zygote 2019-09-06 15:18:58 +01:00
thebhatman
ecc9ce9d64 Gradient on AlphaDropout now working 2019-09-06 16:34:19 +05:30
Mike J Innes
3c1ac84676
Merge pull request #842 from baggepinnen/patch-4
Add RADAM optimizer
2019-09-02 14:36:40 +01:00
Manjunath Bhat
c3cc4bf966
Remove double docstring 2019-08-31 01:35:40 +05:30
thebhatman
2f1a187665 Update AlphaDropout 2019-08-31 01:28:58 +05:30
Fredrik Bagge Carlson
cb3bfd72f3
Export RADAM from Optimise 2019-08-29 07:46:45 +08:00
Mike J Innes
9cd97f06f7 define has_cuarrays when no cuda 2019-08-27 15:06:04 +01:00
Tim Besard
4fef9d8508 Don't depend on unreleased CuArrays. 2019-08-27 09:40:22 +02:00
Tim Besard
6ad3cdd138 Replace Requires with direct CuArrays dependency. 2019-08-27 09:33:15 +02:00
janEbert
dec1b37e8e Merge remote-tracking branch 'origin/master' into HEAD 2019-08-24 12:23:10 +02:00
janEbert
978d7bf195 Fix CuArrays.libcudnn imports 2019-08-24 02:21:54 +02:00
Mike Innes
487000ac31 fix cuda code and tests 2019-08-19 16:56:48 +01:00
Mike Innes
6c67404398 update cleanup 2019-08-19 15:44:51 +01:00
Mike Innes
447fd9d604 conv docstring formatting 2019-08-19 15:30:59 +01:00
Mike Innes
2f7ad895aa test cleanups 2019-08-19 15:22:50 +01:00
Mike Innes
9590aa63e3 rm last uses of param/data 2019-08-19 15:14:42 +01:00
thebhatman
a76e4d128b Remove param from crosscor 2019-08-19 19:19:53 +05:30
Manjunath Bhat
8456b7ba45
Remove param from groupnorm 2019-08-19 19:16:21 +05:30
Mike Innes
3ecca436e4 formatting fix 2019-08-19 14:42:07 +01:00
Mike Innes
49044dff7c avoid adjoint on abstract type 2019-08-19 14:39:09 +01:00
Mike Innes
b8fabad337 deprecate param/data 2019-08-19 14:35:48 +01:00
Fredrik Bagge Carlson
3287cf23db
Add RADAM export 2019-08-19 13:07:39 +08:00
Fredrik Bagge Carlson
ebbad0d135
Add RADAM optimizer 2019-08-19 12:22:32 +08:00
Miguel Madrid Mencía
14affbc91b
Use CuArrays.ones instead cuones which is deprecated 2019-08-11 13:38:44 +02:00
Mike J Innes
7c111e7cde fixes #645
fixes #831
2019-08-09 13:53:11 +01:00
Moelf
4d00957b36
Fix CuArray zeros deprecation 2019-08-06 22:23:21 +02:00
Christopher Rackauckas
ed12d4e7c0
Momentum doesn't need params 2019-07-31 17:56:51 -04:00
Mike J Innes
f3551da5a2 dropout printing 2019-07-24 11:20:39 -04:00
thebhatman
faac0ff08b Updated InstanceNorm and GroupNorm to avoid mutation 2019-07-18 16:13:58 +05:30
Manjunath Bhat
b779d43aca
replaced trunc Int with div 2019-07-16 17:52:55 +05:30
thebhatman
2816fbb9b2 Fix for getindex error in BatchNorm 2019-07-12 22:19:41 +05:30
Mike Innes
a140c31f72 fix batchnorm 2019-07-12 16:09:42 +01:00
Mike Innes
1fc584102d fix dropout 2019-07-12 15:38:28 +01:00
Mike Innes
e2bf46b7fd gpu test fixes 2019-07-12 14:52:01 +01:00
Mike Innes
33c8d84a60 cuparam -> cuarray 2019-07-11 14:14:56 +01:00
Manjunath Bhat
11c9a8450c
Remove active from GroupNorm 2019-07-11 18:40:48 +05:30
Mike Innes
c2cd7dab91 re-export gradient 2019-07-11 13:55:12 +01:00
DrChainsaw
16d5f2bc24 Add x to seen in prefor to avoid infinite recursion if passed something self-referential 2019-07-08 23:11:35 +02:00
thebhatman
cf5bc801d3 Check for nothing in update step 2019-07-08 19:22:23 +05:30
thebhatman
8d78b437ff Merge branch 'sf/zygote_updated' of https://github.com/thebhatman/Flux.jl 2019-07-08 18:47:17 +05:30
thebhatman
812541f8d6 zeros replaced by fill to avoid nothing grad 2019-07-06 19:41:03 +05:30
thebhatman
3ee2a76f61 Removed .data from LSTMCell 2019-07-02 17:38:30 +05:30
thebhatman
b194e7e3a8 Callback being called now 2019-06-20 00:37:54 +05:30
Dhairya Gandhi
dd9cdbef14 remove unnecessary call to beta 2019-06-16 19:09:50 +05:30
Dhairya Gandhi
67f18663d9 pick beta from state in NADAM 2019-06-16 19:06:59 +05:30
thebhatman
7ab9d8ed3d Minor update 2019-06-13 18:59:03 +05:30
thebhatman
ce11804dc1 CrossCor test passing, hopefully. 2019-06-13 01:21:58 +05:30
thebhatman
48ed93cdaa Silly error in Dropout corrected. 2019-06-12 23:16:15 +05:30
thebhatman
e9797408ec DepthwiseConv corrected again. 2019-06-12 23:01:51 +05:30
thebhatman
00a4f4c26d Correcting Dropout 2019-06-12 22:39:30 +05:30
thebhatman
bd7e3b1f41 Dropout with dims test passing. 2019-06-12 22:16:11 +05:30
thebhatman
c7c0ee2cbc Resolving Merge Conflicts 2019-06-12 21:34:42 +05:30
thebhatman
dfd2965e85 GroupNorm tests corrected 2019-06-11 22:32:54 +05:30
thebhatman
11073dcd25 GroupNorm made to use istraining() 2019-06-11 22:04:33 +05:30
thebhatman
ef63f80644 No ops defined for param and data 2019-06-10 18:24:18 +05:30
Mike J Innes
b98075817c
Merge branch 'master' into DenseBlock 2019-06-05 14:27:47 +01:00
ayush-1506
2161163a82 added crosscor 2019-05-14 02:52:28 -07:00
Bruno Hebling Vieira
796a2957c9 Added news and removed type annotation from SkipConnection structure 2019-05-13 16:33:31 -03:00
Bruno Hebling Vieira
e7d76b8423 Added the SkipConnection layer and constructor
Added missing export

Corrected channel placement

Dimension 4 cannot be assumed to always be the Channel dimension

Deprecation of `treelike`

Code now makes use of the `@treelike` macro instead of the deprecated `treelike` function (it worked on my end because I'm on Julia 0.7, while Julia 1.0 deprecated it)

Update basic.jl

Renaming to SkipConnection

* Update Flux.jl

* Update basic.jl

Updated `SkipConnection` with a `connection` field

I'm pretty sure I broke something now, but this PR should follow along these lines: `cat` needs special treatment (the user can declare their own `concatenate` connection, but I foresee it's going to be used often, so we can simply define special treatment)

Forgot to remove some rebasing text

Forgot to remove some more rebasing text

Removed local copy and default cat method from the function calls

Adjusted some more types for inference, could improve on this as well

Re-placed some left-over spaces
2019-05-13 16:32:00 -03:00
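A minimal sketch of the layer this commit describes (simplified relative to the PR): a `SkipConnection` wraps some layers together with a `connection` function that combines the wrapped output with the original input, for example `+` or a user-supplied concatenation.
```
struct SkipConnection{T,F}
    layers::T
    connection::F
end

(s::SkipConnection)(x) = s.connection(s.layers(x), x)

s = SkipConnection(x -> 2 .* x, +)
s([1.0, 2.0])            # 2 .* x .+ x == [3.0, 6.0]
```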
bors[bot]
68ba6e4e2f Merge #563
563: noise shape for dropout r=MikeInnes a=chengchingwen

I added the noise shape for dropout, similar to the `noise_shape` argument in [`tf.nn.dropout`](https://www.tensorflow.org/api_docs/python/tf/nn/dropout).

Co-authored-by: chengchingwen <adgjl5645@hotmail.com>
Co-authored-by: Peter <adgjl5645@hotmail.com>
2019-05-13 17:16:10 +00:00
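A hedged sketch of the noise-shape idea from #563 (not the exact Flux code; the helper names here are made up): the dropout mask is sampled only along the requested `dims` and broadcast across the rest, so `dims = 1` keeps or drops an entire row at once.
```
function dropout_mask(x, p; dims = :)
    sz = dims isa Colon ? size(x) : ntuple(i -> i in dims ? size(x, i) : 1, ndims(x))
    rand(Float32, sz) .> p
end

dropout(x, p; dims = :) = x .* dropout_mask(x, p; dims = dims) ./ (1 - p)

x = rand(Float32, 4, 3)
dropout(x, 0.5; dims = 1)   # each of the 4 rows is dropped or kept across all 3 columns
```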
chengchingwen
2fc2a5282c Merge remote-tracking branch 'upstream/master' into drop_shape 2019-05-14 00:50:59 +08:00
Elliot Saba
2e6561bb6a Change DepthwiseConv() to use in=>out instead of in=>mult.
This is an API change, but I think it makes more sense and is more
consistent with our `Conv()` API.
2019-05-12 11:20:24 -07:00
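Illustrative usage of the new convention (hedged: exact constructor signatures depend on the Flux version at this commit): the second argument is now `in => out` channels, matching `Conv`, rather than `in => channel multiplier`, with `out` expected to be a multiple of `in`.
```
using Flux

layer = DepthwiseConv((3, 3), 8 => 16, relu)    # 8 input channels, 16 output channels
size(layer(rand(Float32, 28, 28, 8, 1)))        # (26, 26, 16, 1) with no padding
```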
chengchingwen
5c5140683c make dims a field of Dropout 2019-05-10 23:45:50 +08:00
Mike J Innes
92ddc618f8 update for arrays 2019-05-02 18:57:52 -07:00
Mike J Innes
c70276ddfe rm more deprecations 2019-05-02 18:57:52 -07:00
Mike J Innes
256695262c rm optimiser deprecations 2019-05-02 18:54:01 -07:00
Mike J Innes
82ee61f5be implement #643 2019-05-02 18:52:09 -07:00
Mike J Innes
c313be8e95 rm data/param 2019-05-02 18:52:09 -07:00
Mike J Innes
aa4d221f8c break all the things 2019-05-02 18:50:52 -07:00
Avik Pal
a0be6fa837
Add missing activation function for batchnorm 2019-05-01 19:47:54 +05:30
Dhairya Gandhi
221670a2b1
Merge pull request #733 from thebhatman/expdecay-fix
Fixed ExpDecay
2019-05-01 18:58:37 +05:30
Dhairya Gandhi
9bbbd17e4b
Merge branch 'master' into onecold 2019-04-30 19:09:36 +05:30
Roger-luo
d63338c242 fix doctest 2019-04-26 18:12:14 +08:00
Mike J Innes
6c3a939133
Update src/onehot.jl
Co-Authored-By: Roger-luo <hiroger@qq.com>
2019-04-26 18:09:14 +08:00
Roger-luo
fabcd05ff2 add examples 2019-04-26 18:05:03 +08:00
Elliot Saba
732f97fe16 Split out conv_transpose_dims() so that Zygote can ignore it 2019-04-25 10:24:19 -07:00
Elliot Saba
6e22cd4931 Add asymmetric padding to convolutional layers 2019-04-25 09:55:23 -07:00