update docs and export update!
parent 55616afc11
commit 759fe9df2f
@@ -21,7 +21,7 @@ grads = gradient(() -> loss(x, y), θ)
 We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:
 
 ```julia
-using Flux: update!
+using Flux.Optimise: update!
 
 η = 0.1 # Learning Rate
 for p in (W, b)
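The hunk cuts the example off at the `for` header, so the loop body is not visible here. A plausible completion under the `update!(x, x̄)` semantics documented later in this commit (`x .-= x̄`); `W`, `b`, and `grads` come from the surrounding docs page, and the loop body itself is an assumption:

```julia
using Flux.Optimise: update!

η = 0.1 # Learning Rate
for p in (W, b)
  # Assumed body: step each parameter against its scaled gradient.
  # `update!(p, x̄)` mutates `p` in place via `p .-= x̄`.
  update!(p, η .* grads[p])
end
```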
@@ -46,6 +46,7 @@ An optimiser `update!` accepts a parameter and a gradient, and updates the param
 All optimisers return an object that, when passed to `train!`, will update the parameters passed to it.
 
 ```@docs
+Flux.Optimise.update!
 Descent
 Momentum
 Nesterov
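To put the `Flux.Optimise.update!` entry added to the `@docs` block above in context, here is a minimal end-to-end sketch; the model, loss, and data are illustrative stand-ins rather than content of the diff:

```julia
using Flux
using Flux.Optimise: Descent, update!

W, b = rand(2, 5), rand(2)
predict(x) = W * x .+ b
loss(x, y) = sum((predict(x) .- y) .^ 2)

x, y = rand(5), rand(2)
θ = Flux.params(W, b)
grads = gradient(() -> loss(x, y), θ)

opt = Descent(0.1)      # optimiser object carrying the learning rate
update!(opt, θ, grads)  # one step: every parameter in θ is mutated in place
```

Passing the same `opt` to `train!` repeats this step for every batch of data.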
@@ -1,6 +1,6 @@
 module Optimise
 
-export train!,
+export train!, update!,
   SGD, Descent, ADAM, Momentum, Nesterov, RMSProp,
   ADAGrad, AdaMax, ADADelta, AMSGrad, NADAM, ADAMW,RADAM,
   InvDecay, ExpDecay, WeightDecay, stop, Optimiser
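With `update!` now on the export list, the single-parameter method `update!(opt, p, g)` described in the docstring below is reachable through a plain `using Flux.Optimise`. A small illustrative sketch; the arrays are made up:

```julia
using Flux.Optimise  # `update!` and the optimisers are exported from here

opt = Descent(0.1)
W  = rand(2, 5)
gW = rand(2, 5)      # stand-in gradient for W

update!(opt, W, gW)  # mutates W in place: W .-= 0.1 .* gW
```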
@@ -1,9 +1,22 @@
 using Juno
 import Zygote: Params, gradient
 
+
+"""
+    update!(opt, p, g)
+    update!(opt, ps::Params, gs)
+
+Perform an update step of the parameters `ps` (or the single parameter `p`)
+according to optimizer `opt` and the gradients `gs` (the gradient `g`).
+
+As a result, the parameters are mutated and the optimizer's internal state may change.
+
+    update!(x, x̄)
+
+Update the array `x` according to `x .-= x̄`.
+"""
 function update!(x::AbstractArray, x̄)
-  x .+= x̄
-  return x
+  x .-= x̄
 end
 
 function update!(opt, x, x̄)
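The hunk ends at the signature of `update!(opt, x, x̄)` without showing its body. As a rough sketch of how this method conventionally fits Flux's optimiser design, assuming it delegates to the existing `apply!` hook; the body below is an assumption, not part of the diff:

```julia
# Assumed sketch, not shown in the diff: the optimiser transforms the raw
# gradient via `apply!` (Descent scales it by η, Momentum folds in velocity,
# and so on), and the result is subtracted from `x` in place.
function update!(opt, x, x̄)
  x .-= apply!(opt, x, x̄)
end
```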