# NNlib

Flux re-exports all of the functions exported by the [NNlib](https://github.com/FluxML/NNlib.jl) package.

## Activation Functions

Non-linearities that go between layers of your model. Note that, unless otherwise stated, activation functions operate on scalars. To apply them to an array you can call `σ.(xs)`, `relu.(xs)` and so on.
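For instance, a minimal sketch (not taken from the NNlib docstrings) of broadcasting an activation over an array:

```julia
using Flux   # Flux re-exports NNlib's activation functions

xs = [-2.0, 0.0, 3.0]

relu.(xs)   # element-wise: [0.0, 0.0, 3.0]
σ.(xs)      # element-wise sigmoid of each entry
```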
**`NNlib.celu`** (Function)

```julia
celu(x, α=1) = (x ≥ 0 ? x : α * (exp(x/α) - 1))
```

Continuously Differentiable Exponential Linear Units. See [Continuously Differentiable Exponential Linear Units](https://arxiv.org/pdf/1704.07483.pdf).
**`NNlib.elu`** (Function)

```julia
elu(x, α=1) = x > 0 ? x : α * (exp(x) - 1)
```

Exponential Linear Unit activation function. See [Fast and Accurate Deep Network Learning by Exponential Linear Units](https://arxiv.org/abs/1511.07289). You can also specify the coefficient explicitly, e.g. `elu(x, 1)`.

**`NNlib.gelu`** (Function)

```julia
gelu(x) = 0.5x * (1 + tanh(√(2/π) * (x + 0.044715x^3)))
```

[Gaussian Error Linear Unit](https://arxiv.org/pdf/1606.08415.pdf) activation function.

**`NNlib.hardsigmoid`** (Function)

```julia
hardσ(x, a=0.2) = max(0, min(1.0, a * x + 0.5))
```

Segment-wise linear approximation of sigmoid. See [BinaryConnect: Training Deep Neural Networks with binary weights during propagations](https://arxiv.org/pdf/1511.00363.pdf).

**`NNlib.hardtanh`** (Function)

```julia
hardtanh(x) = max(-1, min(1, x))
```

Segment-wise linear approximation of tanh; a cheaper and more computationally efficient version of tanh. See <http://ronan.collobert.org/pub/matos/2004_phdthesis_lip6.pdf>.

**`NNlib.leakyrelu`** (Function)

```julia
leakyrelu(x, a=0.01) = max(a*x, x)
```

Leaky [Rectified Linear Unit](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) activation function. You can also specify the coefficient explicitly, e.g. `leakyrelu(x, 0.01)`.
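For illustration (not part of the docstring), the effect of the leak coefficient on a negative input:

```julia
leakyrelu(-1.0)        # -0.01, using the default coefficient a = 0.01
leakyrelu(-1.0, 0.2)   # -0.2
leakyrelu(2.0)         # 2.0, positive inputs pass through unchanged
```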
**`NNlib.lisht`** (Function)

```julia
lisht(x) = x * tanh(x)
```

Non-Parametric Linearly Scaled Hyperbolic Tangent activation function. See [LiSHT](https://arxiv.org/abs/1901.05894).

**`NNlib.logcosh`** (Function)

```julia
logcosh(x)
```

Return `log(cosh(x))` which is computed in a numerically stable way.

**`NNlib.logsigmoid`** (Function)

```julia
logσ(x)
```

Return `log(σ(x))` which is computed in a numerically stable way.

```julia-repl
julia> logσ(0)
-0.6931471805599453

julia> logσ.([-100, -10, 100])
3-element Array{Float64,1}:
 -100.0
  -10.000045398899218
   -3.720075976020836e-44
```

**`NNlib.mish`** (Function)

```julia
mish(x) = x * tanh(softplus(x))
```

Self Regularized Non-Monotonic Neural Activation Function. See [Mish: A Self Regularized Non-Monotonic Neural Activation Function](https://arxiv.org/abs/1908.08681).

**`NNlib.relu`** (Function)

```julia
relu(x) = max(0, x)
```

[Rectified Linear Unit](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) activation function.

**`NNlib.relu6`** (Function)

```julia
relu6(x) = min(max(0, x), 6)
```

[Rectified Linear Unit](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) activation function capped at 6. See [Convolutional Deep Belief Networks on CIFAR-10](http://www.cs.utoronto.ca/%7Ekriz/conv-cifar10-aug2010.pdf).
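A quick illustration (not from the docstring) of the cap at 6:

```julia
relu6.([-2.0, 3.0, 10.0])   # [0.0, 3.0, 6.0]
```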
**`NNlib.rrelu`** (Function)

```julia
rrelu(x, l=1/8, u=1/3) = max(a*x, x)

a = randomly sampled from uniform distribution U(l, u)
```

Randomized Leaky [Rectified Linear Unit](https://arxiv.org/pdf/1505.00853.pdf) activation function. You can also specify the bounds explicitly, e.g. `rrelu(x, 0.0, 1.0)`.
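As a sketch of the randomness (not part of the docstring): `a` is resampled on every call, so negative inputs give a different output each time, within the bounds:

```julia
rrelu(-1.0)   # some value in (-1/3, -1/8); a new `a` is drawn on each call
rrelu(-1.0)   # generally a different value
rrelu(3.0)    # 3.0, positive inputs are unchanged
```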
**`NNlib.selu`** (Function)

```julia
selu(x) = λ * (x ≥ 0 ? x : α * (exp(x) - 1))

λ ≈ 1.0507
α ≈ 1.6733
```

Scaled exponential linear units. See [Self-Normalizing Neural Networks](https://arxiv.org/pdf/1706.02515.pdf).

**`NNlib.sigmoid`** (Function)

```julia
σ(x) = 1 / (1 + exp(-x))
```

Classic [sigmoid](https://en.wikipedia.org/wiki/Sigmoid_function) activation function.

**`NNlib.softplus`** (Function)

```julia
softplus(x) = log(exp(x) + 1)
```

See [Deep Sparse Rectifier Neural Networks](http://proceedings.mlr.press/v15/glorot11a/glorot11a.pdf).
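For illustration (not from the docstrings), a couple of reference values:

```julia
σ(0.0)         # 0.5
softplus(0.0)  # log(2) ≈ 0.6931471805599453
```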
**`NNlib.softshrink`** (Function)

```julia
softshrink(x, λ=0.5) = (x ≥ λ ? x - λ : (-λ ≥ x ? x + λ : 0))
```

See [Softshrink Activation Function](https://www.gabormelli.com/RKB/Softshrink_Activation_Function).

**`NNlib.softsign`** (Function)

```julia
softsign(x) = x / (1 + |x|)
```

See [Quadratic Polynomials Learn Better Image Features](http://www.iro.umontreal.ca/~lisa/publications2/index.php/attachments/single/205).

**`NNlib.swish`** (Function)

```julia
swish(x) = x * σ(x)
```

Self-gated activation function. See [Swish: a Self-Gated Activation Function](https://arxiv.org/pdf/1710.05941.pdf).

**`NNlib.tanhshrink`** (Function)

```julia
tanhshrink(x) = x - tanh(x)
```

See [Tanhshrink Activation Function](https://www.gabormelli.com/RKB/Tanhshrink_Activation_Function).

**`NNlib.trelu`** (Function)

```julia
trelu(x, theta = 1.0) = x > theta ? x : 0
```

Threshold Gated Rectified Linear activation function. See [ThresholdRelu](https://arxiv.org/pdf/1402.3337.pdf).
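In a Flux model these activations are most often passed to a layer constructor rather than applied by hand; a minimal sketch (the layer sizes are made up for illustration):

```julia
using Flux

# The activation is applied element-wise to each layer's output.
model = Chain(Dense(10, 5, relu), Dense(5, 2), softmax)
```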
## Softmax

**`NNlib.softmax`** (Function)

```julia
softmax(x; dims=1)
```

[Softmax](https://en.wikipedia.org/wiki/Softmax_function) turns input array `x` into probability distributions that sum to 1 along the dimensions specified by `dims`. It is semantically equivalent to the following:

```julia
softmax(x; dims=1) = exp.(x) ./ sum(exp.(x), dims=dims)
```

with additional manipulations enhancing numerical stability.

For a matrix input `x` it will by default (`dims=1`) treat it as a batch of vectors, with each column independent. Keyword `dims=2` will instead treat rows independently, and so on.

```julia-repl
julia> softmax([1, 2, 3])
3-element Array{Float64,1}:
 0.0900306
 0.244728
 0.665241
```

See also `logsoftmax`.
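To illustrate the `dims` keyword on a matrix (the example values are arbitrary, not from the docstring):

```julia
x = [1.0 2.0; 3.0 4.0]

sum(softmax(x), dims=1)          # each column sums to 1
sum(softmax(x; dims=2), dims=2)  # each row sums to 1
```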
**`NNlib.logsoftmax`** (Function)

```julia
logsoftmax(x; dims=1)
```

Computes the log of softmax in a more numerically stable way than directly taking `log.(softmax(xs))`. Commonly used in computing cross entropy loss.

It is semantically equivalent to the following:

```julia
logsoftmax(x; dims=1) = x .- log.(sum(exp.(x), dims=dims))
```

See also `softmax`.
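For example (illustrative, not part of the docstring), the two formulations agree on well-behaved inputs:

```julia
x = [1.0, 2.0, 3.0]

logsoftmax(x) ≈ log.(softmax(x))   # true; logsoftmax is safer for extreme values
```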
## Pooling

**`NNlib.maxpool`** (Function)

```julia
maxpool(x, k::NTuple; pad=0, stride=k)
```

Perform max pool operation with window size `k` on input tensor `x`.

**`NNlib.meanpool`** (Function)

```julia
meanpool(x, k::NTuple; pad=0, stride=k)
```

Perform mean pool operation with window size `k` on input tensor `x`.
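A small sketch of the expected array shapes (sizes chosen arbitrarily for illustration); pooling acts on the spatial dimensions and leaves channels and batch untouched:

```julia
x = rand(Float32, 8, 8, 3, 1)             # width × height × channels × batch

size(maxpool(x, (2, 2)))                  # (4, 4, 3, 1); stride defaults to the window size
size(meanpool(x, (2, 2); stride=(1, 1)))  # (7, 7, 3, 1)
```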
## Convolution

**`NNlib.conv`** (Function)

```julia
conv(x, w; stride=1, pad=0, dilation=1, flipped=false)
```

Apply convolution filter `w` to input `x`. `x` and `w` are 3d/4d/5d tensors in 1d/2d/3d convolutions respectively.

**`NNlib.depthwiseconv`** (Function)

```julia
depthwiseconv(x, w; stride=1, pad=0, dilation=1, flipped=false)
```

Depthwise convolution operation with filter `w` on input `x`. `x` and `w` are 3d/4d/5d tensors in 1d/2d/3d convolutions respectively.
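As a sketch of the 2d case (array sizes are illustrative, not from the docstring), a 4d input convolved with a 4d filter:

```julia
x = rand(Float32, 28, 28, 1, 16)   # width × height × input channels × batch
w = rand(Float32, 5, 5, 1, 8)      # kernel width × kernel height × input channels × output channels

size(conv(x, w))          # (24, 24, 8, 16)
size(conv(x, w; pad=2))   # (28, 28, 8, 16); padding preserves the spatial size here
```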
## Batched Operations

**`NNlib.batched_mul`** (Function)

```julia
batched_mul(A, B) -> C
```

Batched matrix multiplication. Result has `C[:,:,k] == A[:,:,k] * B[:,:,k]` for all `k`.

**`NNlib.batched_mul!`** (Function)

```julia
batched_mul!(C, A, B) -> C
```

In-place batched matrix multiplication, equivalent to `mul!(C[:,:,k], A[:,:,k], B[:,:,k])` for all `k`.
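For illustration (sizes are arbitrary), each third-dimension slice is multiplied independently:

```julia
A = rand(2, 3, 4)
B = rand(3, 5, 4)

C = batched_mul(A, B)            # size(C) == (2, 5, 4)
C[:,:,1] ≈ A[:,:,1] * B[:,:,1]   # true
```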
**`NNlib.batched_adjoint`**, **`NNlib.batched_transpose`** (Functions)

```julia
batched_transpose(A::AbstractArray{T,3})
batched_adjoint(A)
```

Equivalent to applying `transpose` or `adjoint` to each matrix `A[:,:,k]`. These exist to control how `batched_mul` behaves, as it operates on such matrix slices of an array with `ndims(A)==3`.

```julia
BatchedTranspose{T, N, S} <: AbstractBatchedMatrix{T, N}
BatchedAdjoint{T, N, S}
```

Lazy wrappers analogous to `Transpose` and `Adjoint`, returned by `batched_transpose` and `batched_adjoint`.
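A brief sketch (sizes arbitrary) showing the lazy wrapper used with `batched_mul`:

```julia
A = rand(3, 2, 4)

# Each slice of batched_transpose(A) behaves like transpose(A[:,:,k]),
# without copying the data.
size(batched_mul(batched_transpose(A), A))   # (2, 2, 4)
```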