Commit Graph

1713 Commits

Author SHA1 Message Date
Kyle Daruwalla
c001d0f3c5 Added trainmode! and updated docs with warning 2020-03-01 12:30:41 -06:00
Martijn Visser
d67a2e40b3 remove stray code block start from docstring 2020-03-01 15:20:40 +01:00
Martijn Visser
f4365dab94 fix docstring example indentation as well 2020-03-01 15:19:22 +01:00
Martijn Visser
32e0aa9fcb docstrings: ensure signature code formatting
by using a four-space indent instead of two
2020-03-01 15:15:39 +01:00
Martijn Visser
6076847a45 fix a few typos in docstrings 2020-03-01 15:07:12 +01:00
Adarsh Kumar
08dabce57e
Updated loss function docs 2020-03-01 12:00:11 +05:30
Adarsh Kumar
57c1b67d08
Merge branch 'master' into patch-1 2020-03-01 11:49:33 +05:30
Kyle Daruwalla
5cbd2cecf2 Changed testmode! to return model 2020-02-29 16:09:59 -06:00
CarloLucibello
a72258ea2a fix rebase 2020-02-29 18:55:49 +01:00
CarloLucibello
97141e8c98 improve docstring 2020-02-29 18:51:00 +01:00
CarloLucibello
487002878e restrict train! special casing 2020-02-29 18:51:00 +01:00
CarloLucibello
b6c79b38b4 add DataLoader
special case train! for the unsupervised data iterator
2020-02-29 18:50:59 +01:00
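A minimal usage sketch of the `DataLoader` added here, assuming it takes data arrays positionally plus `batchsize`/`shuffle` keywords (the exact signature may differ):

```julia
using Flux
using Flux.Data: DataLoader

X = rand(Float32, 10, 100)   # 100 feature vectors of length 10
Y = rand(Float32, 1, 100)    # matching targets

# iterate over (x, y) mini-batches of at most 16 observations each
loader = DataLoader(X, Y, batchsize=16, shuffle=true)
for (x, y) in loader
    @assert size(x) == (10, size(y, 2))
end
```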
bors[bot]
37af9fb15c
Merge #1023
1023: Feature: Added Boston Housing Dataset r=CarloLucibello a=pranjaldatta

[Boston Housing Dataset](https://archive.ics.uci.edu/ml/machine-learning-databases/housing/) is one of the most common datasets used by beginners. It is as popular as other datasets such as Iris, so it feels only natural for this dataset to be part of Flux.

Added src/data/housing.jl: code for downloading and loading the dataset
Edited src/data/Data.jl: to include and export housing.jl
Edited test/data.jl: added a test for the dataset.

*All tests in test/data.jl are passing*

Co-authored-by: pranjaldatta <pranjaldatta99@gmail.com>
Co-authored-by: Pranjal  Datta <pranjaldatta99@gmail.com>
2020-02-29 15:54:34 +00:00
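A hedged sketch of loading the new dataset; the `Housing` module name and the `features()`/`targets()` accessors are assumptions based on the files listed above:

```julia
using Flux
using Flux.Data: Housing   # module added by src/data/housing.jl (name assumed)

x = Housing.features()   # assumed accessor: 506×13 matrix of predictors
y = Housing.targets()    # assumed accessor: 506×1 matrix of median home values
size(x), size(y)
```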
Carlo Lucibello
425fcdbe69 NNlib docs + misc docs improvements 2020-02-29 11:14:48 +01:00
Adarsh Kumar
8afed01345
Apply suggestions from code review
Co-Authored-By: David Lung <lungd@users.noreply.github.com>
2020-02-27 23:23:53 +05:30
Adarsh Kumar
9dce623214
Updated MSLE loss 2020-02-27 16:26:17 +05:30
Adarsh Kumar
980ce72914
Added tversky and dice loss 2020-02-27 02:00:28 +05:30
CarloLucibello
759fe9df2f update docs and export update! 2020-02-26 20:27:39 +01:00
Dhairya Gandhi
20e78e274e docs fix 2020-02-26 22:41:45 +05:30
Dhairya Gandhi
cf82393ae8 type signatures 2020-02-26 22:36:25 +05:30
Dhairya Gandhi
cd931793ef more docs and constructors 2020-02-26 22:29:14 +05:30
Dhairya Gandhi
58211e31bd docs improve 2020-02-26 22:22:11 +05:30
Dhairya Gandhi
f889d0c4d4 add kwarg constructors 2020-02-26 22:19:17 +05:30
pranjaldatta
569021a9f1 added newlines at end of file 2020-02-26 15:05:23 +05:30
bors[bot]
55616afc11
Merge #960
960: Added utility function outdims to compute output dimensions of a layer r=dhairyagandhi96 a=darsnack

Based on Slack chatter, I added a utility function, `outdims`, that computes the output dimensions for given input dimensions.

Example
```julia
layer = Conv((3, 3), 3 => 16)
outdims(layer, (10, 10)) # returns (8, 8)
```

Co-authored-by: Kyle Daruwalla <daruwalla@wisc.edu>
2020-02-25 17:40:05 +00:00
Tim Besard
4ed7d984db Adapt to CuArrays ArrayStyle changes. 2020-02-25 14:09:03 +01:00
Bulat Suleymanov
db4eaf254b
Edit description of convolutional layer 2020-02-24 13:16:51 +05:00
Kyle Daruwalla
924b8f49ec Updated to place function definitions in the appropriate places. 2020-02-21 15:10:28 -06:00
Kyle Daruwalla
7c12af065a Added testmode! functionality back to normalization layers. 2020-02-21 14:35:10 -06:00
Dhairya Gandhi
88b0c65d72
Merge pull request #1035 from matsueushi/remove_get_macro
Remove get! macro
2020-02-20 12:58:16 +05:30
bors[bot]
e4a84c120f
Merge #1021
1021: nograd for onecold, onehot, onehotbatch r=MikeInnes a=CarloLucibello

fixes #1020 

Co-authored-by: CarloLucibello <carlo.lucibello@gmail.com>
2020-02-17 14:12:48 +00:00
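For context, the one-hot utilities this PR marks as non-differentiable are standard Flux API; a quick usage reminder (the nograd change itself is not shown here):

```julia
using Flux: onehot, onehotbatch, onecold

labels = [:cat, :dog, :mouse]
onehot(:dog, labels)                    # one-hot vector, true only at :dog
ys = onehotbatch([:dog, :cat], labels)  # 3×2 one-hot matrix
onecold(ys, labels)                     # recovers [:dog, :cat]
```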
matsueushi
6ea7b95384 Remove unused using 2020-02-15 20:06:15 -05:00
Marco
ae0455517a Remove outdated reference to truncate! 2020-02-10 00:03:11 -08:00
pranjaldatta
197a1a70c0 added BostonHousing dataset and testing 2020-02-07 03:47:19 +05:30
CarloLucibello
6499344af3 nograd for onecold, onehot, onehotbatch 2020-02-06 15:41:46 +01:00
Adarsh Kumar
7710bb0b4b
Removed spurious promotions 2020-02-06 01:06:41 +05:30
Adarsh Kumar
b5184553d4
Error correction in mae 2020-02-05 23:32:55 +05:30
Adarsh Kumar
643086c8db
Updated squared_hinge 2020-02-05 22:40:07 +05:30
Adarsh Kumar
7ac647a7ac
Added loss functions 2020-02-05 22:29:15 +05:30
Dhairya Gandhi
bc20103ea6 no-op copy 2020-01-31 13:23:33 +05:30
Dhairya Gandhi
b9fbee1ff0 ::typeof(op) -> op 2020-01-31 12:24:36 +05:30
Tim Besard
d88f63adb4 Remove unused imports. 2020-01-29 12:15:41 +01:00
bors[bot]
d1edd9b16d
Merge #680
680: Added new loss functions. r=thebhatman a=thebhatman

I have added the KL divergence loss, Poisson loss, log-cosh loss, and hinge loss functions.

Co-authored-by: Manjunath Bhat <manjunathbhat9920@gmail.com>
Co-authored-by: thebhatman <manjunathbhat9920@gmail.com>
2020-01-13 15:46:25 +00:00
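Since this entry only names the new losses, here is a hedged, self-contained sketch of the usual formulations they follow (Flux's actual definitions and normalisation may differ):

```julia
# Hedged textbook-style definitions; not the exact Flux source.
poisson(ŷ, y)      = sum(ŷ .- y .* log.(ŷ)) / size(y, 2)     # Poisson loss
logcosh(ŷ, y)      = sum(log.(cosh.(ŷ .- y))) / size(y, 2)   # log-cosh loss
hinge(ŷ, y)        = sum(max.(0, 1 .- ŷ .* y)) / size(y, 2)  # hinge loss, y ∈ {-1, 1}
kldivergence(ŷ, y) = sum(y .* log.(y ./ ŷ)) / size(y, 2)     # KL divergence

hinge([0.8 -0.3], [1 -1])   # 0.45
```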
Mike J Innes
17732e7023 restructure; closes #747 2020-01-06 11:53:47 +00:00
Dhairya Gandhi
a72ca2b05d fix args 2019-12-09 23:18:01 +05:30
Dhairya Gandhi
894c075b6d rm Zeros setindex 2019-12-09 21:40:58 +05:30
Dhairya Gandhi
f39e184814 rm Zeros warning 2019-12-09 21:07:30 +05:30
Kyle Daruwalla
0cdd11c0dc Added tests for varying padding, stride, and dilation with outdims. 2019-12-07 14:05:50 -06:00
Kyle Daruwalla
a64378b112 Switched to using NNlib for conv.jl outdims. 2019-12-07 13:21:26 -06:00
Kyle Daruwalla
6265b1fa39 Added tests for outdims 2019-12-05 22:54:25 -06:00
Kyle Daruwalla
31dda0ce6c Updated with all basic and conv layers outdims 2019-12-05 21:57:10 -06:00
DrChainsaw
755536bf5e Merge remote-tracking branch 'upstream/master' into samepad 2019-12-04 23:45:03 +01:00
Kyle Daruwalla
b4ed16ad9c Added outdims for some basic layers 2019-12-03 22:48:48 -06:00
Fredrik Bagge Carlson
e67f09c06d Correct some comments in decay docs 2019-12-03 15:32:23 +08:00
Fredrik Bagge Carlson
6e94e59afd Improve docs for decay optimisers 2019-12-03 15:27:44 +08:00
Dhairya Gandhi
245563077b cleaner API 2019-11-27 19:40:58 +05:30
bors[bot]
90a38a3201
Merge #937
937: Fix Glorot initialization, add He initialization r=MikeInnes a=Sleort

Should fix #442.
Adds He weight initialization as a bonus :-)

Co-authored-by: Troels Arnfred Bojesen <tr-ab@online.no>
2019-11-26 16:17:06 +00:00
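A hedged illustration of the initializers involved: `glorot_uniform` is existing Flux API, while the He initializers added by this PR are assumed here to be named `kaiming_uniform`/`kaiming_normal`, and the `initW` keyword reflects the `Dense` constructor of this era:

```julia
using Flux

# Glorot (Xavier) uniform: bound ≈ sqrt(6 / (fan_in + fan_out))
W1 = Flux.glorot_uniform(64, 32)

# He (Kaiming) uniform, added by this PR (name assumed): bound ≈ sqrt(6 / fan_in)
W2 = Flux.kaiming_uniform(64, 32)

# pass an initializer to a layer constructor
layer = Dense(32, 64, relu; initW = Flux.kaiming_uniform)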
bors[bot]
fb4a48f970
Merge #943
943: Fixes #900 r=MikeInnes a=dhairyagandhi96

Thoughts on the test?

cc @MikeInnes

Co-authored-by: Dhairya Gandhi <dhairya@juliacopmuting.com>
2019-11-26 15:09:27 +00:00
Dhairya Gandhi
59bb0d81b0 add TODO 2019-11-26 16:23:09 +05:30
Mike J Innes
4c69b44a7c
Merge pull request #940 from matsueushi/feature/cuda-logitbc
Fix logitbinarycrossentropy on CuArrays
2019-11-26 10:18:07 +00:00
Tim Besard
fbb377a7b4
Merge pull request #941 from FluxML/tb/include_during_precompile
Don't include the CUDA module during precompilation.
2019-11-24 08:55:43 +01:00
Dhairya Gandhi
5f21238d1a no grad dims helper 2019-11-24 13:25:02 +05:30
Tim Besard
4ece13c649 Don't include the CUDA module during precompilation.
If we do, we could end up replacing it at runtime.
2019-11-22 18:03:51 +01:00
matsueushi
a0314ce682 Fix logitbinarycrossentropy on CuArrays 2019-11-22 05:23:24 +00:00
Troels Arnfred Bojesen
af96a197c1 Fix Glorot initialization
Should fix #442
2019-11-20 13:20:42 +09:00
Mike J Innes
5839e166f6
Merge pull request #860 from dsweber2/activations
Activations
2019-11-19 16:44:25 +00:00
Tim Besard
2fa3e5673e
Merge pull request #924 from FluxML/tb/cuda_init
CUDA package initialization improvements
2019-11-19 16:48:45 +01:00
Tim Besard
c45cec4cba Simplify warning. 2019-11-19 16:05:41 +01:00
Tim Besard
69bf84278f Remove wrong warning. 2019-11-19 15:53:43 +01:00
Mike J Innes
4f73e434a4
Merge pull request #935 from baggepinnen/patch-4
Fix AMSGrad on GPU
2019-11-19 12:58:37 +00:00
Troels Arnfred Bojesen
2b80573248 Fix Glorot initialization, add He initialization
Should fix #442.
Adds He weight initialization as a bonus :-)
2019-11-19 18:16:29 +09:00
Fredrik Bagge Carlson
2da22f31f0
Avoid unnecessary conversion
This initialization works for both cpu and gpu
2019-11-19 16:31:04 +08:00
Fredrik Bagge Carlson
df7ffb0ef8
Fix AMSGrad on GPU
The previous initialization created a CPU array. Now, the same type of array as `x` is created.
2019-11-19 16:27:44 +08:00
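Sketching the idea behind the fix (hypothetical helper names, not the actual Flux optimiser code): allocate optimiser state with `zero(x)` so it matches the parameter's array type instead of always building a CPU `Array`:

```julia
# Before (hypothetical): always allocates CPU Arrays, even when `x` is a CuArray
init_state_cpu(x) = (zeros(size(x)...), zeros(size(x)...))

# After (hypothetical): `zero(x)` produces an array of the same type and device as `x`
init_state(x) = (zero(x), zero(x))
```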
Dhairya Gandhi
eb41715d26 define manual rules 2019-11-19 13:30:33 +05:30
Troels Arnfred Bojesen
4530ac65c7 Fix Glorot initialization, add He initialization
Should fix the issue reported at https://github.com/FluxML/Flux.jl/issues/442.
Adds He weight initialization as a bonus :-)
2019-11-19 16:50:40 +09:00
dsweber2
dea29532ef Merge branch 'master' into activations 2019-11-15 17:19:43 -08:00
dsweber2
20eb840882 keeping activations separate 2019-11-15 12:03:08 -08:00
dsweber2
58c794702d simpler test 2019-11-14 14:05:53 -08:00
dsweber2
0fe3ac4e77 bring activations into function call 2019-11-14 13:40:52 -08:00
dsweber2
6475f6a43e recursive way of doing activations 2019-11-14 13:40:52 -08:00
dsweber2
99679f7e16 deal with empty Chain 2019-11-14 13:40:52 -08:00
dsweber2
d0202a2945 adding the extra commits broke the accumulate version 2019-11-14 13:40:52 -08:00
dsweber2
cdaaca8cfa make activations zygote friendly 2019-11-14 13:40:29 -08:00
Dhairya Gandhi
e89b8eba77 fixes 2019-11-13 01:12:26 +05:30
DrChainsaw
453ecd1f24 Merge remote-tracking branch 'upstream/master' into samepad 2019-11-08 18:49:47 +01:00
janEbert
3dceef427f Fix binarycrossentropy on CuArrays 2019-11-08 16:48:11 +01:00
Dhairya Gandhi
a4a987f0b0 hook into bcasting 2019-11-07 16:53:41 +05:30
Tim Besard
a82b76cf24 Conditionally include the CUDNN glue code. 2019-11-04 15:27:11 +01:00
Tim Besard
39ab740fb7 Check for CUDA availability at run time. 2019-11-02 11:18:06 +01:00
janEbert
7b41bc4ab5 Change gate function to view instead of copy
Only for vector input as copying a matrix may be more efficient due to
caching. A matrix is sliced per row, meaning the view will not be
aligned.
2019-10-24 12:45:22 +02:00
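A hedged sketch of the change described: for RNN-cell outputs stored in one long vector, slicing a gate with `view` avoids a copy, while the matrix method keeps copying because a per-row slice of a column-major matrix is not contiguous (the `gate` helper name mirrors Flux internals and is an assumption here):

```julia
# index range of the n-th gate of width h inside a stacked gate vector
gate(h::Integer, n::Integer) = (1:h) .+ h*(n-1)

# vector input: return a view, avoiding a copy
gate(x::AbstractVector, h, n) = @view x[gate(h, n)]

# matrix input: keep copying; the row slice is strided, so a view would not be aligned
gate(x::AbstractMatrix, h, n) = x[gate(h, n), :]
```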
Dhairya Gandhi
7c90fb469d use array to define Zeros 2019-10-23 20:02:15 +05:30
bors[bot]
645aa04464
Merge #898
898: Fix problem in crossentropy breaking GPU compilation r=MikeInnes a=kshyatt

Trying to run this simple example
```
using Flux, CuArrays
using Flux: crossentropy
model = Chain(
        Dense(728, 128, σ),
        LSTM(128, 256),
        LSTM(256, 128),
        Dense(128, 10),
        softmax) |> gpu
data = [rand(728) for i in 1:100];
out  = [rand(10) for i in 1:100];
loss(x, y) = crossentropy(model(x), y);
Flux.train!(loss, params(model), zip(gpu.(data), gpu.(out)), ADAM())
```
Old version of `crossentropy`:
```
ERROR: GPU compilation of #23(CuArrays.CuKernelState, CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global}, Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(*),Tuple{Base.Broadcast.Extruded{Array{Float32,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(conj),Tuple{Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}}}}}}) failed
KernelError: passing and using non-bitstype argument

Argument 4 to your kernel function is of type Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(*),Tuple{Base.Broadcast.Extruded{Array{Float32,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(conj),Tuple{Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}}}}}}.
That type is not isbits, and such arguments are only allowed when they are unused by the kernel.  .args is of type Tuple{Base.Broadcast.Extruded{Array{Float32,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(conj),Tuple{Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}}}}} which is not isbits.
    .1 is of type Base.Broadcast.Extruded{Array{Float32,1},Tuple{Bool},Tuple{Int64}} which is not isbits.
      .x is of type Array{Float32,1} which is not isbits.


Stacktrace:
 [1] check_invocation(::CUDAnative.CompilerJob, ::LLVM.Function) at /mnt/home/khyatt/.julia/dev/CUDAnative/src/compiler/validation.jl:70
 [2] macro expansion at /mnt/home/khyatt/.julia/dev/CUDAnative/src/compiler/driver.jl:187 [inlined]
 [3] macro expansion at /mnt/home/khyatt/.julia/packages/TimerOutputs/7zSea/src/TimerOutput.jl:216 [inlined]
 [4] #codegen#136(::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::typeof(CUDAnative.codegen), ::Symbol, ::CUDAnative.CompilerJob) at /mnt/home/khyatt/.julia/dev/CUDAnative/src/compiler/driver.jl:186
 [5] #codegen at ./none:0 [inlined]
 [6] #compile#135(::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::typeof(CUDAnative.compile), ::Symbol, ::CUDAnative.CompilerJob) at /mnt/home/khyatt/.julia/dev/CUDAnative/src/compiler/driver.jl:47
 [7] #compile#134 at ./none:0 [inlined]
 [8] #compile at ./none:0 [inlined] (repeats 2 times)
 [9] macro expansion at /mnt/home/khyatt/.julia/dev/CUDAnative/src/execution.jl:389 [inlined]
 [10] #cufunction#176(::Nothing, ::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::typeof(CUDAnative.cufunction), ::GPUArrays.var"#23#24", ::Type{Tuple{CuArrays.CuKernelState,CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(*),Tuple{Base.Broadcast.Extruded{Array{Float32,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(conj),Tuple{Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}}}}}}}}) at /mnt/home/khyatt/.julia/dev/CUDAnative/src/execution.jl:357
 [11] cufunction(::Function, ::Type) at /mnt/home/khyatt/.julia/dev/CUDAnative/src/execution.jl:357
 [12] macro expansion at /mnt/home/khyatt/.julia/dev/CUDAnative/src/execution.jl:174 [inlined]
 [13] macro expansion at ./gcutils.jl:91 [inlined]
 [14] macro expansion at /mnt/home/khyatt/.julia/dev/CUDAnative/src/execution.jl:171 [inlined]
 [15] _gpu_call(::CuArrays.CuArrayBackend, ::Function, ::CuArray{Float32,1}, ::Tuple{CuArray{Float32,1},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(*),Tuple{Base.Broadcast.Extruded{Array{Float32,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(conj),Tuple{Base.Broadcast.Extruded{CuArray{Float32,1},Tuple{Bool},Tuple{Int64}}}}}}}, ::Tuple{Tuple{Int64},Tuple{Int64}}) at /mnt/home/khyatt/.julia/dev/CuArrays/src/gpuarray_interface.jl:60
 [16] gpu_call at /mnt/home/khyatt/.julia/dev/GPUArrays/src/abstract_gpu_interface.jl:151 [inlined]
 [17] gpu_call at /mnt/home/khyatt/.julia/dev/GPUArrays/src/abstract_gpu_interface.jl:128 [inlined]
 [18] copyto! at /mnt/home/khyatt/.julia/dev/GPUArrays/src/broadcast.jl:48 [inlined]
 [19] copyto! at ./broadcast.jl:863 [inlined]
 [20] copy at ./broadcast.jl:839 [inlined]
 [21] materialize at ./broadcast.jl:819 [inlined]
 [22] (::Zygote.var"#1310#1311"{CuArray{Float32,1},CuArray{Float32,1}})(::Array{Float32,1}) at /mnt/home/khyatt/.julia/dev/Zygote/src/lib/broadcast.jl:68
```
New version:
```
julia> Flux.train!(loss, params(model), zip(gpu.(data), gpu.(out)), ADAM())

julia> # everyone finished happily and went on with their lives
```

Co-authored-by: Katharine Hyatt <khyatt@flatironinstitute.org>
2019-10-23 14:31:53 +00:00
Katharine Hyatt
e0c1c0e057 Fix problem in crossentropy breaking GPU compilation 2019-10-22 14:00:57 -04:00
bors[bot]
fa5737fb5c
Merge #904
904: Documenting Optimiser Interface r=MikeInnes a=MikeInnes

I needed to add one extra commit to #875 before merging.

Co-authored-by: Dhairya Gandhi <dhairya@juliacopmuting.com>
Co-authored-by: Dhairya Gandhi <dhairya@juliacomputing.com>
Co-authored-by: Mike Innes <mike.j.innes@gmail.com>
2019-10-22 12:38:19 +00:00
Mike Innes
7ead2d6c7b typo 2019-10-22 13:36:39 +01:00
Dhairya Gandhi
4a183aeaf0 make Zeros a dimensionless number 2019-10-22 16:11:27 +05:30
Katharine Hyatt
b8b4bc48b9 Backticks and examples for normalise 2019-10-21 10:31:44 -04:00
DrChainsaw
530d4edb67 Fix for reading comprehension error (dim is not always 2 * (N-2)); fix for ambiguous method sig 2019-10-20 16:03:01 +02:00
DrChainsaw
411ce5dbd8 Add SamePad for pooling layers 2019-10-20 13:43:39 +02:00
DrChainsaw
fc123d6279 Add SamePad for conv layers 2019-10-20 13:43:23 +02:00
Dhairya Gandhi
4477dd8d54 reviews 2019-10-10 20:27:11 +05:30
Dhairya Gandhi
f19066ee29 more docstrings 2019-10-10 16:48:12 +05:30
Dhairya Gandhi
fe52689cfe in depth docstrings 2019-10-09 16:16:11 +05:30
thebhatman
96a23c295c Changes to docs 2019-10-09 14:53:03 +05:30
Dhairya Gandhi
c85bad4427 replace weight with filter 2019-10-08 20:26:09 +05:30
Dhairya Gandhi
49ea43e711 ZeroType => Zeros 2019-10-08 20:02:04 +05:30
bors[bot]
af0dcb2c63
Merge #882
882: Check if CUDA availability changed during init. r=MikeInnes a=maleadt

With this PR, Flux uses CUDAapi during initialization to check whether CUDA is available, and forces recompilation if that does not agree with what was decided during precompilation. This avoids the scenario where Flux was precompiled without GPU support and then keeps blocking GPU use even after the user fixes their CUDA/GPU set-up, because nothing would otherwise force recompilation (and we can't add precompilation dependencies on packages that don't exist).

However, we can't do the same for the case where a GPU/CUDA is present but CuArrays fails to import (checking whether it imports during `__init__` would be much too expensive, if even possible), so this PR removes support for having CUDA/a GPU while CuArrays is broken. That's a little risky now that Flux depends on CuArrays, but the package is pretty mature and I haven't seen many recent bug reports about it failing to load.

Fixes https://github.com/FluxML/Flux.jl/pull/852#issuecomment-538028314

cc @MikeInnes @xukai92

Co-authored-by: Tim Besard <tim.besard@gmail.com>
2019-10-08 13:24:49 +00:00
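A rough sketch of the mechanism, assuming CUDAapi's `has_cuda()`; the real implementation forces recompilation, while this simplified version only warns:

```julia
using CUDAapi

# decided once, when Flux is precompiled
const PRECOMPILED_WITH_CUDA = has_cuda()

function __init__()
    # re-check at run time; if availability changed, the user needs a fresh compile
    if has_cuda() != PRECOMPILED_WITH_CUDA
        @warn "CUDA availability changed since Flux was precompiled; please restart Julia."
    end
end
```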
Dhairya Gandhi
95c5845e99 document bias switch 2019-10-08 17:54:01 +05:30
Dhairya Gandhi
040697fb2b add bias and weight kwarg 2019-10-08 17:18:19 +05:30
Dhairya Gandhi
f3904b4e04 add ZeroType back 2019-10-08 17:17:36 +05:30
Dhairya Gandhi
a1e826b888 fixes 2019-10-06 05:10:56 +05:30
Dhairya Gandhi
214f71f492 add N 2019-10-06 04:55:33 +05:30
Dhairya Gandhi
2ae3ad3b31 doc fixes 2019-10-06 04:46:13 +05:30
Dhairya Gandhi
d00f833c17 rm ZeroType 2019-10-06 04:44:50 +05:30
Dhairya Gandhi
e97d61f257 fixes 2019-10-06 04:42:26 +05:30
Dhairya Gandhi
48a305bd21 ditto remaining layers 2019-10-06 04:41:06 +05:30
Dhairya Gandhi
55ef7c1aba add weight and bias kwargs 2019-10-06 04:25:23 +05:30
Dhairya Gandhi
b503741651 expanded docstrings 2019-10-04 14:46:03 +05:30
Tim Besard
8aea15e6e0 Demote to const variables. 2019-10-03 21:28:55 +02:00
Tim Besard
2369b2b3fd Add an environment variable to disable CUDA usage. 2019-10-03 21:27:54 +02:00
Tim Besard
63d196aa37 Check if CUDA availability changed during init. 2019-10-03 20:05:32 +02:00
thebhatman
ec886c8ce8 Added docstring for hinge loss 2019-10-03 21:13:09 +05:30
Dhairya Gandhi
1fe321781b add to docs 2019-10-01 21:29:18 +05:30
Dhairya Gandhi
dced8c04e5 use ZeroType 2019-10-01 21:25:07 +05:30
Manjunath Bhat
2b30319a55
Merge branch 'master' into patch-6 2019-09-30 21:05:02 +05:30
thebhatman
6e289ef939 Merge branch 'patch-6' of https://github.com/thebhatman/Flux.jl into patch-6 2019-09-30 20:55:44 +05:30
Filippo Vicentini
606fe58854
Use <:Number 2019-09-29 12:33:02 +02:00
Filippo Vicentini
14e94c291e
Make it actually work 2019-09-29 12:28:01 +02:00
Filippo Vicentini
d91677f651
Fix params! to work with complex numbers 2019-09-29 12:23:41 +02:00
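The three commits above widen parameter collection from real to general numeric element types. A hedged sketch of the effect (the layer type below is made up for illustration; `@functor` and `params` are existing Flux API):

```julia
using Flux

# Hypothetical layer holding complex-valued weights
struct ComplexDense
    W::Matrix{ComplexF64}
end
Flux.@functor ComplexDense

m = ComplexDense(randn(ComplexF64, 4, 4))

# After widening the dispatch to AbstractArray{<:Number}, `params` collects W too
Flux.params(m)
```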
Dhairya Gandhi
8013c728b1 clearer optimiser docstrings 2019-09-28 16:09:00 +05:30
Dhairya Gandhi
0175485a80 fixup 2019-09-27 22:08:25 +05:30
Dhairya Gandhi
8bb0db7d0c opt docstrings 2019-09-27 22:04:53 +05:30
Mike Innes
b90b02872f Merge branch 'master' into tb/cuarrays_dnn 2019-09-27 14:58:32 +01:00
Dhairya Gandhi
a801fcb9e7 docstrings 2019-09-27 12:07:55 +05:30
Dhairya Gandhi
9f2ac8fdef ditto remaining conv layers 2019-09-27 12:04:27 +05:30
Dhairya Gandhi
5ea6a33f44 make bias optional 2019-09-27 11:48:12 +05:30
Mike Innes
46bc8e5e64 move pullbacks to CuArrays 2019-09-26 17:14:18 +01:00
Michael Abbott
806e0c5c57 line 2019-09-25 15:20:13 +02:00
Michael Abbott
4245d9acad eg 2019-09-25 15:18:40 +02:00
Michael Abbott
2de84ce79f simplify 2019-09-25 13:59:32 +02:00
Michael Abbott
1a1a96571a +Chain 2019-09-25 13:47:29 +02:00
Michael Abbott
19830c71b1 fix printing of SkipConnection 2019-09-25 13:37:01 +02:00
bors[bot]
acb6a89245
Merge #865
865: Functor r=MikeInnes a=MikeInnes

This refactors our current `@treelike` infrastructure. It somewhat formalises what we're doing around the idea of a Flux model as a functor, i.e. something that can be mapped over.

This is much more flexible than what we had before, and avoids some issues. It allows layers to have state that isn't mappable; it allows for dispatch when walking the tree, which means layers like `BatchNorm` can have non-trainable parameters; and it also allows for zipped mapping like `fmap(+, xs, ys)`, which isn't implemented yet but will be useful for the new optimisers work.

The main downside is that the term `functor` has been previously used in the Julia community as a malapropism for "thing that behaves like a function"; but hopefully this can start to reduce that usage.

Co-authored-by: Mike Innes <mike.j.innes@gmail.com>
2019-09-24 16:36:10 +00:00
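For context, a small illustration of the functor idea described above: `Flux.fmap` walks a model and applies a function to every array leaf, which is how device transfer and element-type conversion are expressed (the Float64 conversion below is just an illustrative choice):

```julia
using Flux

m = Chain(Dense(10, 5, relu), Dense(5, 2))

# treat the model as a functor: map a function over every array leaf
m64 = Flux.fmap(x -> x isa AbstractArray ? Float64.(x) : x, m)

# `gpu`/`cpu` are built the same way, mapping CuArrays' `cu` (or `Array`) over the model
```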
Dhairya Gandhi
822288d63d merge conflicts 2019-09-24 00:31:44 +05:30
Dhairya Gandhi
6846551f57 fix cuda init 2019-09-22 22:02:05 +05:30
Mike Innes
b60df53ba1 pkg up 2019-09-19 18:33:33 +01:00
Mike Innes
cabb81e30b internal rename 2019-09-19 15:53:31 +01:00
Mike Innes
b951377426 fix normalisation layer params 2019-09-19 15:33:24 +01:00
Mike Innes
6529dbcbe6 functor refactor 2019-09-19 15:22:11 +01:00
Mike Innes
2c71fc282b rename functor.jl 2019-09-19 14:15:28 +01:00