1023: Feature: Added Boston Housing Dataset r=CarloLucibello a=pranjaldatta
The [Boston Housing Dataset](https://archive.ics.uci.edu/ml/machine-learning-databases/housing/) is one of the most common datasets used by beginners, as popular as datasets like Iris. Hence, it feels only natural for this dataset to be a part of Flux.
- Added src/data/housing.jl: code for downloading and loading the dataset
- Edited src/data/Data.jl: to include and export housing.jl
- Edited test/data.jl: added a test for the dataset
*All tests in test/data.jl are passing*
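A minimal usage sketch (hedged: it assumes the new module is exposed as `Flux.Data.Housing` with `features()`/`targets()` accessors, mirroring the other `Flux.Data` modules; the merged names may differ):
```julia
using Flux

# Hypothetical accessors, following the convention of the other dataset modules.
X = Flux.Data.Housing.features()   # 506×13 matrix of input features
y = Flux.Data.Housing.targets()    # 506×1 matrix of median home values
size(X), size(y)
```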
Co-authored-by: pranjaldatta <pranjaldatta99@gmail.com>
Co-authored-by: Pranjal Datta <pranjaldatta99@gmail.com>
960: Added utility function outdims to compute output dimensions of a layer r=dhairyagandhi96 a=darsnack
Based on Slack chatter, I added a utility function, `outdims`, that computes the output dimensions for given input dimensions.
Example
```julia
layer = Conv((3, 3), 3 => 16)
outdims(layer, (10, 10)) # returns (8, 8)
```
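For reference, the `(8, 8)` above follows from the usual convolution size arithmetic; a small sketch of that calculation (assuming stride 1, no padding, no dilation, which are the `Conv` defaults):
```julia
# Per spatial dimension: out = (in + 2*pad - kernel) ÷ stride + 1
conv_outdim(insize, k; stride = 1, pad = 0) = (insize + 2pad - k) ÷ stride + 1

conv_outdim(10, 3)  # 8, matching outdims(Conv((3, 3), 3 => 16), (10, 10))
```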
Co-authored-by: Kyle Daruwalla <daruwalla@wisc.edu>
680: Added new loss functions. r=thebhatman a=thebhatman
I have added the KL divergence, Poisson, log-cosh, and hinge loss functions.
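For illustration, rough sketches of two of these losses (the merged Flux definitions may differ in reductions, argument order, and numerical safeguards):
```julia
using Statistics: mean

# Hinge loss for targets y ∈ {-1, +1} and raw scores ŷ.
hinge_sketch(ŷ, y) = mean(max.(0, 1 .- ŷ .* y))

# Log-cosh loss, a smooth alternative to the absolute error.
logcosh_sketch(ŷ, y) = mean(log.(cosh.(ŷ .- y)))
```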
Co-authored-by: Manjunath Bhat <manjunathbhat9920@gmail.com>
Co-authored-by: thebhatman <manjunathbhat9920@gmail.com>
937: Fix Glorot initialization, add He initialization r=MikeInnes a=Sleort
Should fix #442.
Adds He weight initialization as a bonus :-)
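For reference, the textbook forms of the two initializations for a weight matrix of size `fan_out × fan_in` (a sketch only; the merged code computes fan-in/fan-out for general tensor shapes and may differ in details):
```julia
# Glorot (Xavier) uniform: U(-√(6/(fan_in + fan_out)), +√(6/(fan_in + fan_out)))
glorot_uniform_sketch(fan_out, fan_in) =
    (rand(Float32, fan_out, fan_in) .- 0.5f0) .* sqrt(24f0 / (fan_in + fan_out))

# He (Kaiming) normal: N(0, 2 / fan_in)
he_normal_sketch(fan_out, fan_in) =
    randn(Float32, fan_out, fan_in) .* sqrt(2f0 / fan_in)
```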
Co-authored-by: Troels Arnfred Bojesen <tr-ab@online.no>
898: Fix problem in crossentropy breaking GPU compilation r=MikeInnes a=kshyatt
Trying to run this simple example:
```julia
using Flux, CuArrays
using Flux: crossentropy

model = Chain(
    Dense(728, 128, σ),
    LSTM(128, 256),
    LSTM(256, 128),
    Dense(128, 10),
    softmax) |> gpu

data = [rand(728) for i in 1:100];
out = [rand(10) for i in 1:100];
loss(x, y) = crossentropy(model(x), y);
Flux.train!(loss, params(model), zip(gpu.(data), gpu.(out)), ADAM())
```
Old version of `crossentropy`:
```
ERROR: GPU compilation of #23(CuArrays.CuKernelState, CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global}, Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(*),Tuple{Base.Broadcast.Extruded{Array{Float32,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(conj),Tuple{Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}}}}}}) failed
KernelError: passing and using non-bitstype argument
Argument 4 to your kernel function is of type Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(*),Tuple{Base.Broadcast.Extruded{Array{Float32,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(conj),Tuple{Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}}}}}}.
That type is not isbits, and such arguments are only allowed when they are unused by the kernel. .args is of type Tuple{Base.Broadcast.Extruded{Array{Float32,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(conj),Tuple{Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}}}}} which is not isbits.
.1 is of type Base.Broadcast.Extruded{Array{Float32,1},Tuple{Bool},Tuple{Int64}} which is not isbits.
.x is of type Array{Float32,1} which is not isbits.
Stacktrace:
[1] check_invocation(::CUDAnative.CompilerJob, ::LLVM.Function) at /mnt/home/khyatt/.julia/dev/CUDAnative/src/compiler/validation.jl:70
[2] macro expansion at /mnt/home/khyatt/.julia/dev/CUDAnative/src/compiler/driver.jl:187 [inlined]
[3] macro expansion at /mnt/home/khyatt/.julia/packages/TimerOutputs/7zSea/src/TimerOutput.jl:216 [inlined]
[4] #codegen#136(::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::typeof(CUDAnative.codegen), ::Symbol, ::CUDAnative.CompilerJob) at /mnt/home/khyatt/.julia/dev/CUDAnative/src/compiler/driver.jl:186
[5] #codegen at ./none:0 [inlined]
[6] #compile#135(::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::typeof(CUDAnative.compile), ::Symbol, ::CUDAnative.CompilerJob) at /mnt/home/khyatt/.julia/dev/CUDAnative/src/compiler/driver.jl:47
[7] #compile#134 at ./none:0 [inlined]
[8] #compile at ./none:0 [inlined] (repeats 2 times)
[9] macro expansion at /mnt/home/khyatt/.julia/dev/CUDAnative/src/execution.jl:389 [inlined]
[10] #cufunction#176(::Nothing, ::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::typeof(CUDAnative.cufunction), ::GPUArrays.var"#23#24", ::Type{Tuple{CuArrays.CuKernelState,CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(*),Tuple{Base.Broadcast.Extruded{Array{Float32,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(conj),Tuple{Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}}}}}}}}) at /mnt/home/khyatt/.julia/dev/CUDAnative/src/execution.jl:357
[11] cufunction(::Function, ::Type) at /mnt/home/khyatt/.julia/dev/CUDAnative/src/execution.jl:357
[12] macro expansion at /mnt/home/khyatt/.julia/dev/CUDAnative/src/execution.jl:174 [inlined]
[13] macro expansion at ./gcutils.jl:91 [inlined]
[14] macro expansion at /mnt/home/khyatt/.julia/dev/CUDAnative/src/execution.jl:171 [inlined]
[15] _gpu_call(::CuArrays.CuArrayBackend, ::Function, ::CuArray{Float32,1}, ::Tuple{CuArray{Float32,1},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(*),Tuple{Base.Broadcast.Extruded{Array{Float32,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(conj),Tuple{Base.Broadcast.Extruded{CuArray{Float32,1},Tuple{Bool},Tuple{Int64}}}}}}}, ::Tuple{Tuple{Int64},Tuple{Int64}}) at /mnt/home/khyatt/.julia/dev/CuArrays/src/gpuarray_interface.jl:60
[16] gpu_call at /mnt/home/khyatt/.julia/dev/GPUArrays/src/abstract_gpu_interface.jl:151 [inlined]
[17] gpu_call at /mnt/home/khyatt/.julia/dev/GPUArrays/src/abstract_gpu_interface.jl:128 [inlined]
[18] copyto! at /mnt/home/khyatt/.julia/dev/GPUArrays/src/broadcast.jl:48 [inlined]
[19] copyto! at ./broadcast.jl:863 [inlined]
[20] copy at ./broadcast.jl:839 [inlined]
[21] materialize at ./broadcast.jl:819 [inlined]
[22] (::Zygote.var"#1310#1311"{CuArray{Float32,1},CuArray{Float32,1}})(::Array{Float32,1}) at /mnt/home/khyatt/.julia/dev/Zygote/src/lib/broadcast.jl:68
```
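The failure boils down to a broadcast that mixes a host `Array` into a GPU kernel: note the non-isbits `Extruded{Array{Float32,1}, …}` argument in the error above. A minimal, hypothetical reproduction of that pattern outside Flux (on the CuArrays of this era) would be something like:
```julia
using CuArrays

w = rand(Float32, 10)              # plain host Array
x = CuArrays.cu(rand(Float32, 10)) # device CuArray

# The broadcast kernel captures `w`, which is not isbits, triggering the
# "passing and using non-bitstype argument" KernelError shown above.
x .* w
```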
New version:
```
julia> Flux.train!(loss, params(model), zip(gpu.(data), gpu.(out)), ADAM())
julia> # everyone finished happily and went on with their lives
```
Co-authored-by: Katharine Hyatt <khyatt@flatironinstitute.org>
904: Documenting Optimiser Interface r=MikeInnes a=MikeInnes
I needed to add one extra commit to #875 before merging.
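For context, the interface being documented centers on `apply!`; a hedged sketch of a custom optimiser under that interface (assuming the `Flux.Optimise.apply!(o, x, Δ)` convention of this Flux version, where `Δ` is rewritten in place to the update that gets subtracted from `x`):
```julia
using Flux

# A hypothetical scaled-descent optimiser, for illustration only.
mutable struct MyDescent
    eta::Float64
end

Flux.Optimise.apply!(o::MyDescent, x, Δ) = Δ .*= o.eta
```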
Co-authored-by: Dhairya Gandhi <dhairya@juliacopmuting.com>
Co-authored-by: Dhairya Gandhi <dhairya@juliacomputing.com>
Co-authored-by: Mike Innes <mike.j.innes@gmail.com>
882: Check if CUDA availability changed during init. r=MikeInnes a=maleadt
With this PR, Flux uses CUDAapi to check whether CUDA is available during initialization, and forces recompilation if that does not agree with what was decided during precompilation. This avoids the scenario where Flux was precompiled without GPU support and therefore cannot use the GPU even after the user fixes their CUDA/GPU set-up, since fixing the set-up alone does not force recompilation (and we can't add precompilation dependencies on packages that don't exist).
However, we can't do the same for the case where CUDA/a GPU is present but CuArrays fails to import (checking whether it imports during `__init__` would be much too expensive, if even possible), so this PR removes support for having CUDA/a GPU while CuArrays is broken. That's a little risky now that Flux depends on CuArrays, but the package is pretty mature and I haven't seen many recent bug reports about it failing to load.
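A rough sketch of the mechanism described above (not the actual Flux code; `has_cuda` comes from CUDAapi, and the exact way the PR forces recompilation may differ):
```julia
using CUDAapi

# Recorded when the package image is built (precompilation).
const CUDA_AT_PRECOMPILE = has_cuda()

function __init__()
    if has_cuda() != CUDA_AT_PRECOMPILE
        # CUDA availability changed since precompilation, so GPU support in the
        # cached image is stale; the PR forces Flux to be recompiled here.
        @warn "CUDA availability changed since Flux was precompiled; forcing recompilation."
    end
end
```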
Fixes https://github.com/FluxML/Flux.jl/pull/852#issuecomment-538028314
cc @MikeInnes @xukai92
Co-authored-by: Tim Besard <tim.besard@gmail.com>