optimiser clarity

Mike J Innes 2017-10-18 12:22:45 +01:00
parent 7426faf37d
commit c4166fd725
1 changed file with 3 additions and 1 deletion


@@ -48,13 +48,15 @@ For the update step, there's nothing whatsoever wrong with writing the loop above
```julia
opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1
opt()
opt() # Carry out the update, modifying `W` and `b`.
```
An optimiser takes a parameter list and returns a function that does the same thing as `update` above. We can pass either `opt` or `update` to our [training loop](training.md), which will then run the optimiser after every mini-batch of data.
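To make the idea concrete, here is a minimal, self-contained sketch (not Flux's implementation: the `make_sgd` helper, the `params_and_grads` list, and the hand-made `ΔW`/`Δb` arrays are purely illustrative, with plain arrays named `W` and `b` standing in for the tracked parameters above):

```julia
W  = rand(2, 5);  b  = rand(2)    # plain-array stand-ins for the parameters
ΔW = rand(2, 5);  Δb = rand(2)    # pretend these hold gradients from backprop

# Build a closure that applies one gradient-descent step each time it is called,
# playing the same role as the hand-written `update` function above.
function make_sgd(params_and_grads, η)
  () -> for (p, Δ) in params_and_grads
    p .-= η .* Δ                  # update each parameter in place
  end
end

opt = make_sgd([(W, ΔW), (b, Δb)], 0.1)
opt()                             # one update step, modifying W and b
```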
## Optimiser Reference
All optimisers return a function that, when called, will update the parameters passed to it.
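As a hedged illustration of that shared interface (the `Momentum` argument list below is a guess, so check its docstring in the reference; `W` and `b` are the parameters defined earlier on this page):

```julia
opt1 = SGD([W, b], 0.1)        # as above
opt2 = Momentum([W, b], 0.1)   # hypothetical arguments; see the docstring below

for o in (opt1, opt2)
  o()  # each optimiser is just a zero-argument function that updates W and b
end
```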
```@docs
SGD
Momentum