1208: Fixing output format for `onehot` r=dhairyagandhi96 a=natema
Currently the docs display `Flux.OneHotVector` as a binary vector (0/1) rather than a boolean one (true/false), even though the subsequent examples on the same page use the boolean form. I fixed the `onehot(:b, [:a, :b, :c])` and `onehot(:c, [:a, :b, :c])` outputs in the page's first example accordingly, to match what Julia 1.4.2 actually prints.
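For reference, here is roughly what the corrected first example should print (output from Julia 1.4.2; alignment may differ slightly across versions):
```julia
julia> Flux.onehot(:b, [:a, :b, :c])
3-element Flux.OneHotVector:
 false
  true
 false
```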
Co-authored-by: natema <natema@users.noreply.github.com>
1206: Fixing ambiguous remark in Preserve inputs' types r=dhairyagandhi96 a=natema
This PR is based on the [forum discussion](https://discourse.julialang.org/t/not-clear-what-0-01f0x-is-in-the-flux-docs/40553?u=mathematics) about the ambiguity of `0.01f0x` in the line
> While one could change the activation function (e.g. to use `0.01f0x`)
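For anyone hitting the same confusion: `0.01f0x` is Julia's implicit numeric-literal multiplication, i.e. the `Float32` literal `0.01f0` times `x`. A minimal sketch of an activation written this way (NNlib already ships `leakyrelu`; this definition is only illustrative):
```julia
# 0.01f0x parses as 0.01f0 * x: a Float32 literal coefficient times x,
# so Float32 inputs are not promoted to Float64
myleakyrelu(x) = max(0.01f0x, x)
```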
Co-authored-by: natema <natema@users.noreply.github.com>
1191: Pull Request Template r=MikeInnes a=MikeInnes
Hopefully this makes the requirements a little clearer, which should lead to easier reviews and encourage things like NEWS.md updates, which we want to keep better in sync.
cc @dhairyagandhi96 and @CarloLucibello for thoughts.
Co-authored-by: Mike J Innes <mike.j.innes@gmail.com>
1190: Correcting advanced.md r=dhairyagandhi96 a=Sleort
To make the example consistent, it should be
```julia
julia> Flux.trainable(a::Affine) = (a.W,)
```
not
```julia
julia> Flux.trainable(a::Affine) = (a.W, a.b)
```
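For context, here is a minimal sketch of the `Affine` example that page builds on (the constructor is illustrative):
```julia
using Flux

struct Affine
  W
  b
end

Affine(in::Integer, out::Integer) = Affine(randn(out, in), randn(out))

(m::Affine)(x) = m.W * x .+ m.b

# The example deliberately freezes the bias, so only W is trainable:
Flux.trainable(a::Affine) = (a.W,)
```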
Co-authored-by: Troels Arnfred Bojesen <tr-ab@online.no>
1185: Add some news r=dhairyagandhi96 a=dhairyagandhi96
cc @CarloLucibello, please add to this list as well.
Co-authored-by: Dhairya Gandhi <dhairya@juliacopmuting.com>
957: Add some gradient checking tests on GPUs r=dhairyagandhi96 a=dhairyagandhi96
It would be good to have generic tests for tracking gradients through the various layers on the GPU.
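A minimal sketch of the kind of test meant here (the layer and assertions are illustrative, not the PR's actual test code):
```julia
using Flux, Test

m = Dense(3, 2) |> gpu
x = gpu(rand(Float32, 3, 4))

# Check that gradients flow through the layer on the GPU without erroring
gs = gradient(() -> sum(m(x)), Flux.params(m))
@test all(p -> gs[p] !== nothing, Flux.params(m))
```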
Co-authored-by: Dhairya Gandhi <dhairya@juliacopmuting.com>
Co-authored-by: Dhairya Gandhi <dhairya@juliacomputing.com>
1165: Fix docstring of logitcrossentropy r=dhairyagandhi96 a=cossio
Since `ŷ` is a logit, there should be no `log` in the docstring (see the diff).
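For reference, the corrected equivalence reads along these lines (up to numerical stability; the real implementation uses `logsoftmax`):
```julia
using Flux

ŷ = randn(Float32, 3, 5)                  # raw logits
y = Flux.onehotbatch(rand(1:3, 5), 1:3)   # one-hot targets
# ŷ is already a logit, so no log is applied before softmax:
Flux.logitcrossentropy(ŷ, y) ≈ Flux.crossentropy(softmax(ŷ), y)  # true
```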
Co-authored-by: cossio <cossio@users.noreply.github.com>
1175: xlogy broadcast adjoint r=MikeInnes a=MikeInnes
This is helpful for performance, since it avoids having to differentiate `xlogy` itself inside a `map`.
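A rough sketch of the idea, assuming same-shaped array arguments (the actual implementation also handles unbroadcasting and the `x = 0` limit):
```julia
using Zygote
import Base.Broadcast: broadcasted

xlogy(x, y) = x * log(y)

# Custom adjoint for the broadcasted call: supply the analytic derivatives
# (∂/∂x = log(y), ∂/∂y = x / y) as fused broadcasts, instead of letting
# Zygote differentiate xlogy element by element.
Zygote.@adjoint function broadcasted(::typeof(xlogy), x, y)
  xlogy.(x, y), Δ -> (nothing, Δ .* log.(y), Δ .* x ./ y)
end
```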
Co-authored-by: Mike J Innes <mike.j.innes@gmail.com>
1174: Functors r=MikeInnes a=MikeInnes
This just splits the implementation out into the [Functors](https://github.com/FluxML/Functors.jl) package, so the same traits can be used elsewhere (e.g. Optimisers.jl) without depending on all of Flux.
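For illustration, a minimal sketch of the traits in question, via the exported `@functor` and `fmap`:
```julia
using Functors

struct Affine
  W
  b
end
@functor Affine  # lets fmap recurse into Affine's fields

m = Affine(rand(2, 3), rand(2))
fmap(x -> 2 .* x, m)  # a new Affine with both arrays doubled
```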
Co-authored-by: Mike J Innes <mike.j.innes@gmail.com>
1166: Fix crossentropy when some probabilities are zero r=dhairyagandhi96 a=cossio
Use a function `xlogy(x, y) = x * log(y)` with the correct limiting value (zero) at `x = 0`, so `0 * log(0)` no longer produces `NaN`.
Before this PR:
```julia
julia> Flux.crossentropy([0.1,0.0,0.9], [0.1,0.0,0.9])
NaN
```
After this PR:
```julia
julia> Flux.crossentropy([0.1,0.0,0.9], [0.1,0.0,0.9])
0.3250829733914482
```
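A sketch of such a helper, close to what the PR adds (minus details like the custom adjoint):
```julia
# x * log(y), with the limiting value 0 at x == 0, so zero-probability
# terms don't turn the sum into NaN via 0 * log(0)
function xlogy(x, y)
  result = x * log(y)
  ifelse(iszero(x), zero(result), result)
end
```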
Co-authored-by: cossio <j.cossio.diaz@gmail.com>
1160: Build docs on Julia 1.3 r=dhairyagandhi96 a=dhairyagandhi96
Without this, CI goes red.
Co-authored-by: Dhairya Gandhi <dhairya@juliacopmuting.com>
1156: Add correct overload for apply! in docs r=dhairyagandhi96 a=dhairyagandhi96
Maybe we should consider adding a `const` name that is better than `apply!` (or renaming `apply!`) and exporting it, so folks can just overload `descriptive_apply_my_optimiser_rule!` rather than having to reach into the sub-project `Flux.Optimise`?
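For context, a minimal sketch of a custom rule against the `Flux.Optimise.apply!` interface (the optimiser type here is hypothetical):
```julia
using Flux

mutable struct MyDescent
  eta::Float64
end

# apply! receives a parameter x and its gradient Δ, and returns the
# update that the training loop subtracts from x
function Flux.Optimise.apply!(o::MyDescent, x, Δ)
  Δ .*= o.eta
  return Δ
end
```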
Co-authored-by: Dhairya Gandhi <dhairya@juliacopmuting.com>