1052: update docs and export update! r=dhairyagandhi96 a=CarloLucibello

Fix #951 

Co-authored-by: CarloLucibello <carlo.lucibello@gmail.com>
bors[bot] 2020-02-26 19:33:53 +00:00 committed by GitHub
commit 531d3d4d8b
3 changed files with 18 additions and 4 deletions


@@ -21,7 +21,7 @@ grads = gradient(() -> loss(x, y), θ)
We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:
```julia
-using Flux: update!
+using Flux.Optimise: update!
η = 0.1 # Learning Rate
for p in (W, b)
@@ -46,6 +46,7 @@ An optimiser `update!` accepts a parameter and a gradient, and updates the param
All optimisers return an object that, when passed to `train!`, will update the parameters passed to it.
```@docs
+Flux.Optimise.update!
Descent
Momentum
Nesterov

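The documentation hunk above points readers at `Flux.Optimise.update!` for manual training steps. As a rough sketch of what that call does in the setting the docs describe, assuming toy parameters `W`, `b`, a `loss`, and data that are not part of this diff:

```julia
using Flux
using Flux.Optimise: update!, Descent

# Toy model and data, assumed purely for illustration.
W, b = rand(2, 5), rand(2)
predict(x) = W * x .+ b
loss(x, y) = sum((predict(x) .- y) .^ 2)
x, y = rand(5), rand(2)

θ = Flux.params(W, b)
grads = Flux.gradient(() -> loss(x, y), θ)

opt = Descent(0.1)             # plain gradient descent with η = 0.1
for p in (W, b)
    update!(opt, p, grads[p])  # mutates p in place using the optimiser's rule
end
```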

@@ -1,6 +1,6 @@
module Optimise
-export train!,
+export train!, update!,
SGD, Descent, ADAM, Momentum, Nesterov, RMSProp,
ADAGrad, AdaMax, ADADelta, AMSGrad, NADAM, ADAMW,RADAM,
InvDecay, ExpDecay, WeightDecay, stop, Optimiser

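With `update!` added to the export list, a plain `using Flux.Optimise` now brings the name into scope instead of requiring an explicit `using Flux.Optimise: update!`; a quick sketch of what the change means in practice:

```julia
using Flux.Optimise   # loads Flux and brings Optimise's exports into scope

# The newly exported binding is the same function as the fully qualified one,
# so existing qualified calls keep working.
update! === Flux.Optimise.update!   # true
```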

@@ -1,9 +1,22 @@
using Juno
import Zygote: Params, gradient
+"""
+    update!(opt, p, g)
+    update!(opt, ps::Params, gs)
+
+Perform an update step of the parameters `ps` (or the single parameter `p`)
+according to optimizer `opt` and the gradients `gs` (the gradient `g`).
+
+As a result, the parameters are mutated and the optimizer's internal state may change.
+
+    update!(x, x̄)
+
+Update the array `x` according to `x .-= x̄`.
+"""
function update!(x::AbstractArray, x̄)
-  x .+= x̄
-  return x
+  x .-= x̄
end
function update!(opt, x, x̄)
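The hunk above both documents `update!` and flips the bare-array method from `x .+= x̄` to `x .-= x̄`, so the step is a descent rather than an ascent. A short usage sketch of the two forms, with numbers invented for illustration:

```julia
using Flux.Optimise: update!, Descent

x  = [1.0, 2.0, 3.0]
x̄  = [0.1, 0.1, 0.1]

# Bare-array form: after this commit it subtracts the step, i.e. x .-= x̄.
update!(x, x̄)
# x == [0.9, 1.9, 2.9]

# Optimiser form: the gradient is rescaled by the optimiser before being subtracted.
opt = Descent(0.5)
g   = [0.2, 0.2, 0.2]
update!(opt, x, g)   # with Descent this subtracts 0.5 .* g, giving [0.8, 1.8, 2.8]
```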