gpu support docs
parent 7c8dba0b85
commit 8e63ac766e

docs/make.jl:

```diff
@@ -14,8 +14,8 @@ makedocs(modules=[Flux],
           "Training Models" =>
             ["Optimisers" => "training/optimisers.md",
              "Training" => "training/training.md"],
-          "Data Munging" =>
-            ["One-Hot Encoding" => "data/onehot.md"],
+          "One-Hot Encoding" => "data/onehot.md",
+          "GPU Support" => "gpu.md",
           "Contributing & Help" => "contributing.md"])
 
 deploydocs(
```
docs/src/gpu.md (new file, @@ -0,0 +1,33 @@):

# GPU Support

Support for array operations on other hardware backends, like GPUs, is provided by external packages like [CuArrays](https://github.com/JuliaGPU/CuArrays.jl) and [CLArrays](https://github.com/JuliaGPU/CLArrays.jl). Flux doesn't care what array type you use, so we can just plug these in without any other changes.

For example, we can use `CuArrays` (with the `cu` array converter) to run our [basic example](models/basics.md) on an NVIDIA GPU.
```julia
using CuArrays

W = cu(rand(2, 5))
b = cu(rand(2))

predict(x) = W*x .+ b
loss(x, y) = sum((predict(x) .- y).^2)

x, y = cu(rand(5)), cu(rand(2)) # Dummy data
loss(x, y) # ~ 3
```

Note that we convert both the parameters (`W`, `b`) and the data (`x`, `y`) to CUDA arrays. Taking derivatives and training works exactly as before, as the sketch below illustrates.
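
Here is a minimal sketch of taking gradients on the GPU, assuming the Tracker-era API (`param`, `back!`, and the `.grad` field) from the [basics](models/basics.md) page; exact names may differ between Flux versions:

```julia
using CuArrays, Flux.Tracker

# Wrap the GPU arrays in `param` so Flux tracks their gradients.
W = param(cu(rand(2, 5)))
b = param(cu(rand(2)))

predict(x) = W*x .+ b
loss(x, y) = sum((predict(x) .- y).^2)

x, y = cu(rand(5)), cu(rand(2))

back!(loss(x, y)) # Accumulates gradients into W.grad and b.grad
W.grad            # A CuArray, living on the GPU like W itself
```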

If you define a structured model, like a `Dense` layer or `Chain`, you just need to convert the internal parameters. Flux provides `mapparams`, which allows you to alter all parameters of a model at once.
```julia
d = Dense(10, 5, σ)
d = mapparams(cu, d)
d.W # Tracked CuArray
d(cu(rand(10))) # CuArray output

m = Chain(Dense(10, 5, σ), Dense(5, 2), softmax)
m = mapparams(cu, m)
m(cu(rand(10)))
```
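
When you need results back on the CPU, say for plotting or saving, ordinary array conversion works; a small sketch, assuming CuArrays' standard `Array` conversion and the `predict` function defined above:

```julia
y = predict(x) # A CuArray result living on the GPU
Array(y)       # Copy it back into an ordinary CPU Array
```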