From 73d631f5cdf8d64c563d857455854fbe78aba29a Mon Sep 17 00:00:00 2001
From: janEbert
Date: Sat, 4 Apr 2020 23:00:34 +0200
Subject: [PATCH] Fix and improve docs

Add missing docstrings, improve existing ones, fix links to functions or
files.
---
 docs/src/data/dataloader.md       | 2 +-
 docs/src/data/onehot.md           | 9 +++++++++
 docs/src/models/basics.md         | 4 ++--
 docs/src/models/layers.md         | 9 ++++++---
 docs/src/models/regularisation.md | 4 ++++
 docs/src/training/optimisers.md   | 1 +
 docs/src/training/training.md     | 6 +++++-
 docs/src/utilities.md             | 8 +++++++-
 8 files changed, 35 insertions(+), 8 deletions(-)

diff --git a/docs/src/data/dataloader.md b/docs/src/data/dataloader.md
index 70a883c9..f6edc709 100644
--- a/docs/src/data/dataloader.md
+++ b/docs/src/data/dataloader.md
@@ -3,4 +3,4 @@ Flux provides the `DataLoader` type in the `Flux.Data` module to handle iteratio
 
 ```@docs
 Flux.Data.DataLoader
-```
\ No newline at end of file
+```
diff --git a/docs/src/data/onehot.md b/docs/src/data/onehot.md
index 0bc3531b..23d6f196 100644
--- a/docs/src/data/onehot.md
+++ b/docs/src/data/onehot.md
@@ -31,6 +31,11 @@ julia> onecold([0.3, 0.2, 0.5], [:a, :b, :c])
 :c
 ```
 
+```@docs
+Flux.onehot
+Flux.onecold
+```
+
 ## Batches
 
 `onehotbatch` creates a batch (matrix) of one-hot vectors, and `onecold` treats matrices as batches.
@@ -52,3 +57,7 @@ julia> onecold(ans, [:a, :b, :c])
 ```
 
 Note that these operations returned `OneHotVector` and `OneHotMatrix` rather than `Array`s. `OneHotVector`s behave like normal vectors but avoid any unnecessary cost compared to using an integer index directly. For example, multiplying a matrix with a one-hot vector simply slices out the relevant row of the matrix under the hood.
+
+```@docs
+Flux.onehotbatch
+```
diff --git a/docs/src/models/basics.md b/docs/src/models/basics.md
index 24230ab1..06901d99 100644
--- a/docs/src/models/basics.md
+++ b/docs/src/models/basics.md
@@ -220,7 +220,7 @@ Flux.@functor Affine
 
 This enables a useful extra set of functionality for our `Affine` layer, such as [collecting its parameters](../training/optimisers.md) or [moving it to the GPU](../gpu.md).
 
-For some more helpful tricks, including parameter freezing, please checkout the [advanced usage guide](advacned.md).
+For some more helpful tricks, including parameter freezing, please check out the [advanced usage guide](advanced.md).
 
 ## Utility functions
@@ -240,5 +240,5 @@ Currently limited to the following layers:
 - `MeanPool`
 
 ```@docs
-outdims
+Flux.outdims
 ```
diff --git a/docs/src/models/layers.md b/docs/src/models/layers.md
index 2b5c1591..54ce5791 100644
--- a/docs/src/models/layers.md
+++ b/docs/src/models/layers.md
@@ -32,6 +32,7 @@ RNN
 LSTM
 GRU
 Flux.Recur
+Flux.reset!
 ```
 
 ## Other General Purpose Layers
@@ -49,20 +50,22 @@ SkipConnection
 These layers don't affect the structure of the network but may improve training times or reduce overfitting.
 
 ```@docs
+Flux.normalise
 BatchNorm
-Dropout
 Flux.dropout
+Dropout
 AlphaDropout
 LayerNorm
+InstanceNorm
 GroupNorm
 ```
 
 ### Testmode
 
-Many normalisation layers behave differently under training and inference (testing). By default, Flux will automatically determine when a layer evaluation is part of training or inference. Still, depending on your use case, it may be helpful to manually specify when these layers should be treated as being trained or not. For this, Flux provides `testmode!`. When called on a model (e.g. a layer or chain of layers), this function will place the model into the mode specified. 
+Many normalisation layers behave differently under training and inference (testing). By default, Flux will automatically determine when a layer evaluation is part of training or inference. Still, depending on your use case, it may be helpful to manually specify when these layers should be treated as being trained or not. For this, Flux provides `Flux.testmode!`. When called on a model (e.g. a layer or chain of layers), this function will place the model into the mode specified.
 
 ```@docs
-testmode!
+Flux.testmode!
 trainmode!
 ```
 
diff --git a/docs/src/models/regularisation.md b/docs/src/models/regularisation.md
index 02aa3da8..535dd096 100644
--- a/docs/src/models/regularisation.md
+++ b/docs/src/models/regularisation.md
@@ -64,3 +64,7 @@ julia> activations(c, rand(10))
 julia> sum(norm, ans)
 2.1166067f0
 ```
+
+```@docs
+Flux.activations
+```
diff --git a/docs/src/training/optimisers.md b/docs/src/training/optimisers.md
index 1ee526b3..5ed083ee 100644
--- a/docs/src/training/optimisers.md
+++ b/docs/src/training/optimisers.md
@@ -52,6 +52,7 @@ Momentum
 Nesterov
 RMSProp
 ADAM
+RADAM
 AdaMax
 ADAGrad
 ADADelta
diff --git a/docs/src/training/training.md b/docs/src/training/training.md
index 1fe10783..48b7b42d 100644
--- a/docs/src/training/training.md
+++ b/docs/src/training/training.md
@@ -32,7 +32,7 @@ Flux.train!(loss, ps, data, opt)
 ```
 
 The objective will almost always be defined in terms of some *cost function* that measures the distance of the prediction `m(x)` from the target `y`. Flux has several of these built in, like `mse` for mean squared error or `crossentropy` for cross entropy loss, but you can calculate it however you want.
-For a list of all built-in loss functions, check out the [reference](loss_functions.md).
+For a list of all built-in loss functions, check out the [layer reference](../models/layers.md).
 
 At first glance it may seem strange that the model that we want to train is not part of the input arguments of `Flux.train!` too. However the target of the optimizer is not the model itself, but the objective function that represents the departure between modelled and observed data. In other words, the model is implicitly defined in the objective function, and there is no need to give it explicitly. Passing the objective function instead of the model and a cost function separately provides more flexibility, and the possibility of optimizing the calculations.
 
@@ -95,6 +95,10 @@ julia> @epochs 2 Flux.train!(...)
 # Train for two epochs
 ```
 
+```@docs
+Flux.@epochs
+```
+
 ## Callbacks
 
 `train!` takes an additional argument, `cb`, that's used for callbacks so that you can observe the training process. For example:
diff --git a/docs/src/utilities.md b/docs/src/utilities.md
index d788e69f..7986ec23 100644
--- a/docs/src/utilities.md
+++ b/docs/src/utilities.md
@@ -35,9 +35,15 @@ Flux.glorot_uniform
 Flux.glorot_normal
 ```
 
+## Model Abstraction
+
+```@docs
+Flux.destructure
+```
+
 ## Callback Helpers
 
 ```@docs
 Flux.throttle
+Flux.stop
 ```
-
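
For reviewers, a minimal sketch of the `Flux.testmode!` / `trainmode!` behaviour the updated layers.md paragraph describes. It assumes a Flux version contemporary with this commit (~v0.10), where `testmode!(m, mode)` accepts `true`, `false`, or `:auto`; the toy model `m` and input `x` are invented for illustration:

```julia
using Flux

# A toy model with a Dropout layer, whose behaviour differs between
# training and inference.
m = Chain(Dense(10, 5, relu), Dropout(0.5), Dense(5, 2))
x = rand(Float32, 10)

Flux.testmode!(m)         # force inference mode: Dropout passes x through unchanged
@assert m(x) == m(x)      # deterministic while in test mode

Flux.trainmode!(m)        # force training mode: Dropout drops activations again
Flux.testmode!(m, :auto)  # restore automatic train/test detection
```

In automatic mode, Dropout is only active while gradients are being computed, which is why the explicit override is useful in custom evaluation loops.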
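Likewise, a hedged sketch of how the newly documented callback helpers compose with `Flux.train!`: `Flux.throttle` rate-limits a callback and `Flux.stop` halts training early. The model, dummy data, and thresholds here are placeholders, not part of the patch:

```julia
using Flux
using Flux: throttle, stop

m = Dense(10, 1)
loss(x, y) = Flux.mse(m(x), y)
ps = Flux.params(m)
data = [(rand(Float32, 10, 16), rand(Float32, 1, 16))]  # a single dummy batch
opt = Descent(0.1)

# Evaluate at most once every 10 seconds, and halt training early
# once the loss drops below an (arbitrary) threshold.
evalcb = throttle(10) do
    l = loss(data[1]...)
    @show l
    l < 1f-3 && stop()
end

Flux.train!(loss, ps, data, opt, cb = evalcb)
```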