Mike J Innes 2017-09-09 21:12:39 -04:00
parent 17e40b1f76
commit 726a8acefe
1 changed file with 1 addition and 1 deletion

@@ -51,4 +51,4 @@ opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1
opt()
```
-An optimiser takes a parameter list and returns a function that does the same thing as `update` above. We can pass either `opt` or `update` to our [training loop](training.html), which will then run the optimiser after every mini-batch of data.
+An optimiser takes a parameter list and returns a function that does the same thing as `update` above. We can pass either `opt` or `update` to our [training loop](./training.html), which will then run the optimiser after every mini-batch of data.
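
For context, a minimal self-contained sketch of the pattern this paragraph describes might look as follows. The `Param` struct and `sgd` helper are illustrative stand-ins, not Flux's actual parameter type or optimiser API: the point is only that the constructor takes a parameter list and returns a zero-argument update function.

```julia
# Hypothetical parameter container: a value and its accumulated gradient.
struct Param
    value::Vector{Float64}
    grad::Vector{Float64}
end

# Takes a parameter list and a learning rate; returns a function that
# applies one gradient-descent step when called, mirroring `SGD` above.
function sgd(params, η)
    function update!()
        for p in params
            p.value .-= η .* p.grad  # descend along the gradient, in place
            p.grad .= 0.0            # clear the accumulated gradient
        end
    end
    return update!
end

W = Param(randn(3), zeros(3))
b = Param(randn(3), zeros(3))

opt = sgd([W, b], 0.1)  # same shape as `SGD([W, b], 0.1)` in the docs
opt()                   # one update step, as the training loop would call it
```

Because `opt` is just a closure over the parameter list, a training loop can call it after every mini-batch without knowing which optimisation rule it implements.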