From d1ad8db625d0a870ef7bd6bcbcbcb7342953e54c Mon Sep 17 00:00:00 2001
From: Yao Lu
Date: Sat, 9 May 2020 16:40:26 +0800
Subject: [PATCH] add to docs

---
 docs/src/training/optimisers.md | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/docs/src/training/optimisers.md b/docs/src/training/optimisers.md
index 5ed083ee..25cfd3ce 100644
--- a/docs/src/training/optimisers.md
+++ b/docs/src/training/optimisers.md
@@ -140,3 +140,16 @@ ExpDecay
 InvDecay
 WeightDecay
 ```
+
+## Gradient Clipping
+
+Gradient clipping is useful for training recurrent neural networks, which are prone to the exploding gradient problem. An example usage is:
+
+```julia
+opt = Optimiser(ClipValue(1e-3), ADAM(1e-3))
+```
+
+```@docs
+ClipValue
+ClipNorm
+```
\ No newline at end of file
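In the composed optimiser from the patch above, clipping runs before ADAM, so every gradient entry is clamped to the threshold before the ADAM update is computed. A minimal sketch of that behaviour, assuming `Optimiser`, `ClipValue`, `ADAM`, and `apply!` are available from `Flux.Optimise` as in the Flux version this patch targets (the parameter and gradient values here are made up for illustration):

```julia
using Flux.Optimise: Optimiser, ClipValue, ADAM, apply!

# Clipping runs first in the composition, so each gradient entry
# is clamped to [-1e-3, 1e-3] before ADAM computes its update.
opt = Optimiser(ClipValue(1e-3), ADAM(1e-3))

x = rand(3)                     # a hypothetical parameter array
g = [10.0, -10.0, 0.0]          # an exaggerated raw gradient
step = apply!(opt, x, copy(g))  # the update that training would subtract from x
```

In ordinary use none of this is called by hand: passing `opt` to `Flux.train!` applies the clipping implicitly on every gradient before the ADAM step.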