Fix and improve docs

Add missing docstrings, improve existing ones, fix links to functions
or files.
janEbert 2020-04-04 23:00:34 +02:00
parent 2ce5f6d9bf
commit 73d631f5cd
8 changed files with 35 additions and 8 deletions

View File

@@ -3,4 +3,4 @@ Flux provides the `DataLoader` type in the `Flux.Data` module to handle iteration
```@docs
Flux.Data.DataLoader
```
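The hunk above only touches the `@docs` block, but since this page documents `DataLoader`, a brief usage sketch may help; it assumes the `DataLoader(X, y; batchsize, shuffle)` form described in the docstring of this Flux version, with made-up data.

```julia
using Flux.Data: DataLoader

# Toy dataset: 10 features × 100 observations, plus integer labels.
X = rand(Float32, 10, 100)
y = rand(1:5, 100)

# Iterate over shuffled mini-batches of 20 observations each.
loader = DataLoader(X, y, batchsize=20, shuffle=true)
for (xb, yb) in loader
    # xb is 10×20, yb has length 20 (except possibly a final, partial batch)
end
```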

View File

@@ -31,6 +31,11 @@ julia> onecold([0.3, 0.2, 0.5], [:a, :b, :c])
:c
```
+```@docs
+Flux.onehot
+Flux.onecold
+```
## Batches
`onehotbatch` creates a batch (matrix) of one-hot vectors, and `onecold` treats matrices as batches.
@@ -52,3 +57,7 @@ julia> onecold(ans, [:a, :b, :c])
```
Note that these operations returned `OneHotVector` and `OneHotMatrix` rather than `Array`s. `OneHotVector`s behave like normal vectors but avoid any unnecessary cost compared to using an integer index directly. For example, multiplying a matrix with a one-hot vector simply slices out the relevant row of the matrix under the hood.
+```@docs
+Flux.onehotbatch
+```
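To complement the new `@docs` entries, a small sketch of the behaviour described above; the labels and matrix here are made up for illustration.

```julia
using Flux: onehot, onehotbatch, onecold

# A batch of one-hot columns, one per label:
labels = onehotbatch([:b, :a, :b], [:a, :b, :c])   # 3×3 OneHotMatrix
onecold(labels, [:a, :b, :c])                      # [:b, :a, :b]

# Multiplying with a one-hot vector amounts to an indexing operation:
W = rand(Float32, 2, 3)
v = onehot(:b, [:a, :b, :c])
W * v == W[:, 2]                                   # true
```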

View File

@@ -220,7 +220,7 @@ Flux.@functor Affine
This enables a useful extra set of functionality for our `Affine` layer, such as [collecting its parameters](../training/optimisers.md) or [moving it to the GPU](../gpu.md).
-For some more helpful tricks, including parameter freezing, please checkout the [advanced usage guide](advacned.md).
+For some more helpful tricks, including parameter freezing, please checkout the [advanced usage guide](advanced.md).
## Utility functions
@@ -240,5 +240,5 @@ Currently limited to the following layers:
- `MeanPool`
```@docs
-outdims
+Flux.outdims
```
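A short sketch of how `outdims` is used, assuming the `outdims(layer, input_size)` form documented in this Flux version; the layer sizes are illustrative.

```julia
using Flux

Flux.outdims(Dense(10, 5), (10,))              # (5,) — no forward pass is run
Flux.outdims(Conv((3, 3), 3 => 16), (10, 10))  # (8, 8) for a 10×10 spatial input
```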

View File

@@ -32,6 +32,7 @@ RNN
LSTM
GRU
Flux.Recur
+Flux.reset!
```
## Other General Purpose Layers
@@ -49,20 +50,22 @@ SkipConnection
These layers don't affect the structure of the network but may improve training times or reduce overfitting.
```@docs
+Flux.normalise
BatchNorm
-Dropout
Flux.dropout
+Dropout
AlphaDropout
LayerNorm
+InstanceNorm
GroupNorm
```
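As a concrete illustration of how the regularisation layers listed above are typically used; the architecture below is made up, not part of the diff.

```julia
using Flux

# Dropout and BatchNorm sit between ordinary layers; they change training
# behaviour but not the shape of the data flowing through the model.
model = Chain(
    Dense(28^2, 128, relu),
    BatchNorm(128),
    Dropout(0.5),
    Dense(128, 10),
    softmax)
```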
### Testmode
-Many normalisation layers behave differently under training and inference (testing). By default, Flux will automatically determine when a layer evaluation is part of training or inference. Still, depending on your use case, it may be helpful to manually specify when these layers should be treated as being trained or not. For this, Flux provides `testmode!`. When called on a model (e.g. a layer or chain of layers), this function will place the model into the mode specified.
+Many normalisation layers behave differently under training and inference (testing). By default, Flux will automatically determine when a layer evaluation is part of training or inference. Still, depending on your use case, it may be helpful to manually specify when these layers should be treated as being trained or not. For this, Flux provides `Flux.testmode!`. When called on a model (e.g. a layer or chain of layers), this function will place the model into the mode specified.
```@docs
-testmode!
+Flux.testmode!
trainmode!
```
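A minimal sketch of the manual mode switching described above; the model is a placeholder.

```julia
using Flux

m = Chain(Dense(10, 5), Dropout(0.3), Dense(5, 2))

Flux.testmode!(m)    # force inference behaviour, e.g. Dropout becomes a no-op
Flux.trainmode!(m)   # force training behaviour again
```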

View File

@@ -64,3 +64,7 @@ julia> activations(c, rand(10))
julia> sum(norm, ans)
2.1166067f0
```
+```@docs
+Flux.activations
+```

View File

@@ -52,6 +52,7 @@ Momentum
Nesterov
RMSProp
ADAM
+RADAM
AdaMax
ADAGrad
ADADelta

View File

@@ -32,7 +32,7 @@ Flux.train!(loss, ps, data, opt)
```
The objective will almost always be defined in terms of some *cost function* that measures the distance of the prediction `m(x)` from the target `y`. Flux has several of these built in, like `mse` for mean squared error or `crossentropy` for cross entropy loss, but you can calculate it however you want.
-For a list of all built-in loss functions, check out the [reference](loss_functions.md).
+For a list of all built-in loss functions, check out the [layer reference](../models/layers.md).
At first glance it may seem strange that the model that we want to train is not part of the input arguments of `Flux.train!` too. However the target of the optimizer is not the model itself, but the objective function that represents the departure between modelled and observed data. In other words, the model is implicitly defined in the objective function, and there is no need to give it explicitly. Passing the objective function instead of the model and a cost function separately provides more flexibility, and the possibility of optimizing the calculations.
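To make the two paragraphs above concrete, a minimal sketch of an objective built from `mse` and handed to `Flux.train!`; the model, data and optimiser are placeholders, not part of the diff.

```julia
using Flux
using Flux: mse

m = Dense(10, 1)                  # the model only appears inside the closure below
loss(x, y) = mse(m(x), y)         # cost of the prediction m(x) against the target y

ps   = Flux.params(m)
data = [(rand(Float32, 10, 5), rand(Float32, 1, 5))]
opt  = Descent(0.1)

Flux.train!(loss, ps, data, opt)  # the model enters training only through `loss`
```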
@@ -95,6 +95,10 @@ julia> @epochs 2 Flux.train!(...)
# Train for two epochs
```
+```@docs
+Flux.@epochs
+```
## Callbacks
`train!` takes an additional argument, `cb`, that's used for callbacks so that you can observe the training process. For example:
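The docs' own example follows this sentence but is cut off by the hunk; a plausible callback of the kind described might look like this, reusing the placeholder `loss`, `ps`, `data` and `opt` from the sketch above.

```julia
# Print a message each time a batch has been processed.
Flux.train!(loss, ps, data, opt, cb = () -> println("training"))
```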

View File

@@ -35,9 +35,15 @@ Flux.glorot_uniform
Flux.glorot_normal
```
+## Model Abstraction
+```@docs
+Flux.destructure
+```
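A brief sketch of what the newly documented `Flux.destructure` does; the model here is a placeholder.

```julia
using Flux

m = Chain(Dense(10, 5, relu), Dense(5, 2))

# Flatten all parameters into a single vector and get a function that rebuilds
# a model of the same structure from such a vector.
θ, re = Flux.destructure(m)
m2 = re(θ)    # same architecture and parameters as m
```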
## Callback Helpers
```@docs
Flux.throttle
+Flux.stop
```
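A short sketch of the two callback helpers documented above; the stopping condition is a placeholder.

```julia
using Flux

# Run the wrapped function at most once every 5 seconds, however often it is called.
log_cb = Flux.throttle(() -> println("still training..."), 5)

# Flux.stop() signals Flux.train! to end training early; it is meant to be
# called from inside a callback once some condition is met.
early_stop_cb() = should_stop() && Flux.stop()   # `should_stop` is a placeholder
```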