1062: docstring ensure signature code formatting r=CarloLucibello a=visr

by using a four-space indent instead of two.

Fixes issues seen here:

![image](https://user-images.githubusercontent.com/4471859/75627427-54aa6600-5bd0-11ea-93d3-92901d44db59.png)

There, the type signature is not rendered as code, and a stray code block is opened that throws off the rest of the formatting.
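For context, Julia renders a docstring's signature as a code block only when it is indented by at least four spaces (standard Markdown indented-code rules). A minimal sketch, with a made-up function `f`:

```julia
"""
    f(x; flag=true)

Return `x` unchanged. The four-space indent above makes the signature
render as a code block; a two-space indent renders as plain text.
"""
f(x; flag=true) = x
```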

Co-authored-by: Martijn Visser <mgvisser@gmail.com>
bors[bot] 2020-03-01 22:28:10 +00:00, committed by GitHub
commit 3cf131b8de
5 changed files with 11 additions and 15 deletions


@@ -11,7 +11,7 @@ struct DataLoader
 end

 """
-  DataLoader(data...; batchsize=1, shuffle=false, partial=true)
+    DataLoader(data...; batchsize=1, shuffle=false, partial=true)

 An object that iterates over mini-batches of `data`, each mini-batch containing `batchsize` observations
 (except possibly the last one).
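For reference, a usage sketch of the `DataLoader` documented above, assuming the Flux version this PR targets (observations along the last dimension):

```julia
using Flux.Data: DataLoader

X = rand(10, 100)  # 100 observations with 10 features each
loader = DataLoader(X; batchsize=16, shuffle=true)
for x in loader
    # with partial=true (the default), the last mini-batch keeps the
    # remaining 100 - 6*16 = 4 observations
    @assert size(x, 2) in (16, 4)
end
```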


@@ -28,7 +28,6 @@ function load()
 end

 """
-
     labels()

 Get the labels of the iris dataset, a 150 element array of strings listing the
@@ -53,7 +52,6 @@ function labels()
 end

 """
-
     features()

 Get the features of the iris dataset. This is a 4x150 matrix of Float64
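For reference, a sketch of calling these accessors, assuming they live in the `Flux.Data.Iris` module as in this version of Flux:

```julia
using Flux

y = Flux.Data.Iris.labels()    # 150-element array of species names (strings)
X = Flux.Data.Iris.features()  # 4×150 Matrix{Float64} of measurements
```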


@@ -6,7 +6,7 @@ const ϵ = 1e-8
 # TODO: should use weak refs

 """
-  Descent(η)
+    Descent(η)

 Classic gradient descent optimiser with learning rate `η`.
 For each parameter `p` and its gradient `δp`, this runs `p -= η*δp`
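The update rule in the `Descent` docstring, written out as a small runnable sketch:

```julia
using Flux.Optimise: Descent, update!

p  = rand(3)         # a parameter
δp = rand(3)         # its gradient
opt = Descent(0.1)   # η = 0.1
update!(opt, p, δp)  # performs p .-= η .* δp in place
```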
@@ -441,17 +441,16 @@ function apply!(o::Optimiser, x, Δ)
 end

 """
-  InvDecay(γ)
+    InvDecay(γ)

 Applies inverse time decay to an optimiser, i.e., the effective step size at iteration `n` is `eta / (1 + γ * n)` where `eta` is the initial step size. The wrapped optimiser's step size is not modified.
-```

 ## Parameters
 - gamma (γ): Defaults to `0.001`

 ## Example
 ```julia
-  Optimiser(InvDecay(..), Opt(..))
+Optimiser(InvDecay(..), Opt(..))
 ```
 """
 mutable struct InvDecay
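A sketch of the schedule described above: with `γ = 0.001` and a wrapped `Descent(0.1)`, the effective step size at iteration `n` is `0.1 / (1 + 0.001n)`:

```julia
using Flux.Optimise: Optimiser, InvDecay, Descent

# InvDecay scales each gradient by 1 / (1 + γ * n) before Descent applies η
opt = Optimiser(InvDecay(0.001), Descent(0.1))
```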
@@ -470,7 +469,7 @@ function apply!(o::InvDecay, x, Δ)
 end

 """
-  ExpDecay(eta, decay, decay_step, clip)
+    ExpDecay(eta, decay, decay_step, clip)

 Discount the learning rate `eta` by a multiplicative factor `decay` every `decay_step` till a minimum of `clip`.
@@ -483,9 +482,8 @@ Discount the learning rate `eta` by a multiplicative factor `decay` every `decay
 ## Example
 To apply exponential decay to an optimiser:
 ```julia
-  Optimiser(ExpDecay(..), Opt(..))
-
-  opt = Optimiser(ExpDecay(), ADAM())
+Optimiser(ExpDecay(..), Opt(..))
+opt = Optimiser(ExpDecay(), ADAM())
 ```
 """
 mutable struct ExpDecay
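A concrete version of the docstring's example, filling in the constructor arguments named in the signature (the values are illustrative only):

```julia
using Flux.Optimise: Optimiser, ExpDecay, ADAM

# start at eta = 0.001; multiply by decay = 0.1 every decay_step = 1000
# iterations, never dropping below clip = 1e-4
opt = Optimiser(ExpDecay(0.001, 0.1, 1000, 1e-4), ADAM())
```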
@@ -509,7 +507,7 @@ function apply!(o::ExpDecay, x, Δ)
 end

 """
-  WeightDecay(wd)
+    WeightDecay(wd)

 Decays the weight by `wd`
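The one-line description is terse; in this version of Flux, `WeightDecay(wd)` adds `wd .* p` to the gradient of each parameter `p` before the wrapped optimiser runs, i.e. an L2 penalty. A sketch:

```julia
using Flux.Optimise: Optimiser, WeightDecay, Descent

opt = Optimiser(WeightDecay(1e-4), Descent(0.1))  # L2 coefficient 1e-4, then plain SGD
```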


@@ -3,8 +3,8 @@ import Zygote: Params, gradient

 """
-  update!(opt, p, g)
-  update!(opt, ps::Params, gs)
+    update!(opt, p, g)
+    update!(opt, ps::Params, gs)

 Perform an update step of the parameters `ps` (or the single parameter `p`)
 according to optimizer `opt` and the gradients `gs` (the gradient `g`).
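For reference, a sketch exercising both methods documented above, using the implicit-parameter style that the `import Zygote: Params, gradient` line supports:

```julia
using Flux
using Flux.Optimise: Descent, update!

W, b = rand(2, 3), rand(2)
ps = Flux.params(W, b)
gs = gradient(() -> sum(W * ones(3) .+ b), ps)  # gradients for every parameter in ps
update!(Descent(0.1), ps, gs)                   # Params method: updates W and b in place

g = rand(2, 3)
update!(Descent(0.1), W, g)                     # single-parameter method
```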


@@ -60,7 +60,7 @@ head(x::Tuple) = reverse(Base.tail(reverse(x)))
 squeezebatch(x) = reshape(x, head(size(x)))

 """
-  batch(xs)
+    batch(xs)

 Batch the arrays in `xs` into a single array.
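For reference, a quick sketch of `batch`, which stacks the given arrays along a new trailing dimension:

```julia
using Flux: batch

batch([[1, 2, 3], [4, 5, 6]])  # 3×2 matrix: each input vector becomes a column
```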