optimiser clarity
@@ -48,13 +48,15 @@ For the update step, there's nothing whatsoever wrong with writing the loop above
```julia
opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1

opt() # Carry out the update, modifying `W` and `b`.
```
An optimiser takes a parameter list and returns a function that does the same thing as `update` above. We can pass either `opt` or `update` to our [training loop](training.md), which will then run the optimiser after every mini-batch of data.
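To make that concrete, here is a minimal, self-contained sketch of the same pattern in plain Julia. The `Param` struct, the `sgd` function and the gradient values below are illustrative stand-ins, not Flux's actual implementation:

```julia
# Illustrative only: a tiny stand-in for a tracked parameter.
struct Param
  data::Array{Float64}
  grad::Array{Float64}
end

# Close over a parameter list and return a zero-argument update function,
# mirroring the way `SGD([W, b], 0.1)` behaves.
function sgd(ps, η)
  () -> for p in ps
    p.data .-= η .* p.grad  # gradient-descent step, in place
    p.grad .= 0             # clear the accumulated gradient
  end
end

W = Param(randn(3, 3), zeros(3, 3))
b = Param(randn(3), zeros(3))
opt = sgd([W, b], 0.1)

W.grad .= 1.0  # pretend backpropagation filled these in
opt()          # updates `W.data` and `b.data` in place
```

Passing this `opt` (or the real `SGD`) to the training loop then amounts to calling it once per mini-batch.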
## Optimiser Reference
All optimisers return a function that, when called, will update the parameters passed to it.
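Because of that shared convention, swapping optimisers only changes the constructor call. The snippet below is a sketch: it assumes `Momentum` accepts a parameter list and a learning rate in the same way `SGD` does, and `data` stands for a hypothetical mini-batch iterator.

```julia
opt = Momentum([W, b], 0.01)  # assumed constructor shape, mirroring SGD

for batch in data
  # ... compute the loss for `batch` and backpropagate ...
  opt()  # the update call is identical no matter which optimiser was built
end
```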
```@docs
SGD
Momentum
|
||||
|
|
Loading…
Reference in New Issue