</script><link href="https://cdnjs.cloudflare.com/ajax/libs/normalize/4.2.0/normalize.min.css" rel="stylesheet" type="text/css"/><link href="https://fonts.googleapis.com/css?family=Lato|Roboto+Mono" rel="stylesheet" type="text/css"/><link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.6.3/css/font-awesome.min.css" rel="stylesheet" type="text/css"/><link href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/default.min.css" rel="stylesheet" type="text/css"/><script>documenterBaseURL=".."</script><script src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.2.0/require.min.js" data-main="../assets/documenter.js"></script><script src="../siteinfo.js"></script><script src="../../versions.js"></script><link href="../assets/documenter.css" rel="stylesheet" type="text/css"/><link href="../assets/flux.css" rel="stylesheet" type="text/css"/></head><body><nav class="toc"><h1>Flux</h1><select id="version-selector" onChange="window.location.href=this.value" style="visibility: hidden"></select><form class="search" id="search-form" action="../search/"><input id="search-query" name="q" type="text" placeholder="Search docs"/></form><ul><li><a class="toctext" href="../">Home</a></li><li><span class="toctext">Building Models</span><ul><li><a class="toctext" href="../models/basics/">Basics</a></li><li><a class="toctext" href="../models/recurrence/">Recurrence</a></li><li><a class="toctext" href="../models/regularisation/">Regularisation</a></li><li><a class="toctext" href="../models/layers/">Model Reference</a></li></ul></li><li><span class="toctext">Training Models</span><ul><li><a class="toctext" href="../training/optimisers/">Optimisers</a></li><li><a class="toctext" href="../training/training/">Training</a></li></ul></li><li><a class="toctext" href="../data/onehot/">One-Hot Encoding</a></li><li class="current"><a class="toctext" href>GPU Support</a><ul class="internal"></ul></li><li><a class="toctext" href="../saving/">Saving & Loading</a></li><li><span class="toctext">Internals</span><ul><li><a class="toctext" href="../internals/tracker/">Backpropagation</a></li></ul></li><li><a class="toctext" href="../community/">Community</a></li></ul></nav><article id="docs"><header><nav><ul><li><a href>GPU Support</a></li></ul><a class="edit-page" href="https://github.com/FluxML/Flux.jl/blob/master/docs/src/gpu.md"><span class="fa"></span> Edit on GitHub</a></nav><hr/><div id="topbar"><span>GPU Support</span><a class="fa fa-bars" href="#"></a></div></header><h1><a class="nav-anchor" id="GPU-Support-1" href="#GPU-Support-1">GPU Support</a></h1><p>Support for array operations on other hardware backends, like GPUs, is provided by external packages like <a href="https://github.com/JuliaGPU/CuArrays.jl">CuArrays</a>. Flux is agnostic to array types, so we simply need to move model weights and data to the GPU and Flux will handle it.</p><p>For example, we can use <code>CuArrays</code> (with the <code>cu</code> converter) to run our <a href="../models/basics/">basic example</a> on an NVIDIA GPU.</p><p>(Note that you need to have CUDA available to use CuArrays – please see the <a href="https://github.com/JuliaGPU/CuArrays.jl">CuArrays.jl</a> instructions for more details.)</p><pre><code class="language-julia">using CuArrays
```julia
using CuArrays

W = cu(rand(2, 5)) # a 2×5 CuArray
b = cu(rand(2))

predict(x) = W*x .+ b
loss(x, y) = sum((predict(x) .- y).^2)

x, y = cu(rand(5)), cu(rand(2)) # Dummy data
loss(x, y) # ~ 3
```

Note that we convert both the parameters (`W`, `b`) and the data (`x`, `y`) to CUDA arrays. Taking derivatives and training works exactly as before.
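For instance, gradients for the GPU-resident parameters can be taken just as on the CPU. Below is a minimal sketch, assuming the Tracker-based `param`/`back!` API described in the Basics and Backpropagation sections; the dummy data mirrors the example above.

```julia
using Flux, CuArrays

# Wrap the CuArrays in `param` so Flux's Tracker records operations on them.
W = param(cu(rand(2, 5)))
b = param(cu(rand(2)))

predict(x) = W*x .+ b
loss(x, y) = sum((predict(x) .- y).^2)

x, y = cu(rand(5)), cu(rand(2)) # dummy data, as above
Flux.Tracker.back!(loss(x, y))  # backpropagate through the loss

W.grad                          # 2×5 CuArray holding ∂loss/∂W
b.grad                          # 2-element CuArray holding ∂loss/∂b
```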
If you define a structured model, like a `Dense` layer or `Chain`, you just need to convert the internal parameters. Flux provides `mapleaves`, which allows you to alter all parameters of a model at once.

```julia
d = Dense(10, 5, σ)
d = mapleaves(cu, d)
d.W # Tracked CuArray
d(cu(rand(10))) # CuArray output

m = Chain(Dense(10, 5, σ), Dense(5, 2), softmax)
m = mapleaves(cu, m)
m(cu(rand(10))) # CuArray output
```
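Since the converted model's parameters are now tracked `CuArray`s, training also proceeds exactly as on the CPU. As a rough, hypothetical sketch (again assuming the Tracker API, and using a hand-written gradient-descent update in place of the optimisers covered in the Training section):

```julia
using Flux, CuArrays

m = mapleaves(cu, Chain(Dense(10, 5, σ), Dense(5, 2), softmax))
x, y = cu(rand(10)), cu(rand(2))  # dummy input and target on the GPU

l = sum((m(x) .- y).^2)           # any scalar tracked loss will do
Flux.Tracker.back!(l)             # fills in each parameter's `.grad`

for p in params(m)
  p.data .-= 0.1 .* p.grad        # plain gradient-descent step
  p.grad .= 0                     # reset the accumulated gradients
end
```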
As a convenience, Flux provides the `gpu` function to convert models and data to the GPU if one is available. By default it'll do nothing, but loading `CuArrays` will cause it to move data to the GPU instead.

```julia
julia> using Flux, CuArrays

julia> m = Dense(10,5) |> gpu
Dense(10, 5)

julia> x = rand(10) |> gpu
10-element CuArray{Float32,1}:
 0.800225
 ⋮
 0.511655

julia> m(x)
Tracked 5-element CuArray{Float32,1}:
 -0.30535
  ⋮
 -0.618002
```

The analogue `cpu` is also available for moving models and data back off of the GPU.

```julia
julia> x = rand(10) |> gpu
10-element CuArray{Float32,1}:
 0.235164
 ⋮
 0.192538

julia> x |> cpu
10-element Array{Float32,1}:
 0.235164
 ⋮
 0.192538
```
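Putting `gpu` and `cpu` together, a full round trip might look like the hypothetical sketch below (here `Flux.Tracker.data` is used to strip the tracking from the model's output before copying it back):

```julia
using Flux, CuArrays

m = Chain(Dense(10, 5, σ), Dense(5, 2), softmax) |> gpu  # model weights on the GPU
x = rand(10) |> gpu                                      # input on the GPU

y = m(x)                             # forward pass runs on the GPU
y_cpu = Flux.Tracker.data(y) |> cpu  # back to an ordinary Array on the CPU
```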