Merge #1052
1052: update docs and export update!

r=dhairyagandhi96 a=CarloLucibello

Fix #951

Co-authored-by: CarloLucibello <carlo.lucibello@gmail.com>
This commit is contained in:
commit 531d3d4d8b

@@ -21,7 +21,7 @@ grads = gradient(() -> loss(x, y), θ)
 We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:
 
 ```julia
-using Flux: update!
+using Flux.Optimise: update!
 
 η = 0.1 # Learning Rate
 for p in (W, b)
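
The doc hunk above cuts off at the `for` loop. As a rough, self-contained sketch of the manual step it describes (the toy `W`, `b`, `loss`, `x`, and `y` below are illustrative stand-ins, not taken from the diff), the loop body would hand a scaled gradient to `update!`, which under the new semantics subtracts its second argument in place:

```julia
using Flux
using Flux.Optimise: update!

# Toy model and loss, only so the snippet runs on its own.
W = rand(2, 5)
b = rand(2)
loss(x, y) = sum((W*x .+ b .- y).^2)

x, y = rand(5), rand(2)
θ = Flux.params(W, b)                     # implicit parameter collection
grads = gradient(() -> loss(x, y), θ)     # gradients keyed by parameter

η = 0.1 # Learning Rate
for p in (W, b)
  # update!(p, x̄) mutates p via p .-= x̄, so a positive step descends.
  update!(p, η * grads[p])
end
```

Running the loop once nudges `W` and `b` downhill on `loss`; repeating it is plain gradient descent.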

@@ -46,6 +46,7 @@ An optimiser `update!` accepts a parameter and a gradient, and updates the param
 All optimisers return an object that, when passed to `train!`, will update the parameters passed to it.
 
 ```@docs
+Flux.Optimise.update!
 Descent
 Momentum
 Nesterov
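
To make the two sentences touched by this hunk concrete, here is a hedged sketch of the optimiser-driven form: `update!(opt, ps, gs)` applies the optimiser's rule to every parameter, and the same optimiser object can be passed to `train!`. The model, data, and learning rate are invented for illustration.

```julia
using Flux
using Flux.Optimise: update!, Descent

# Illustrative model and data (not taken from the diff).
W = rand(2, 5)
b = rand(2)
loss(x, y) = sum((W*x .+ b .- y).^2)
x, y = rand(5), rand(2)

opt = Descent(0.1)                      # optimiser object carrying its own rule/state
θ = Flux.params(W, b)
grads = gradient(() -> loss(x, y), θ)

# One explicit step: applies the optimiser to each parameter in θ.
update!(opt, θ, grads)

# The same object drives train!, which calls update! internally per batch.
Flux.train!(loss, θ, [(x, y)], opt)
```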

@@ -1,6 +1,6 @@
 module Optimise
 
-export train!,
+export train!, update!,
   SGD, Descent, ADAM, Momentum, Nesterov, RMSProp,
   ADAGrad, AdaMax, ADADelta, AMSGrad, NADAM, ADAMW,RADAM,
   InvDecay, ExpDecay, WeightDecay, stop, Optimiser

@@ -1,9 +1,22 @@
 using Juno
 import Zygote: Params, gradient
 
+"""
+    update!(opt, p, g)
+    update!(opt, ps::Params, gs)
+
+Perform an update step of the parameters `ps` (or the single parameter `p`)
+according to optimizer `opt` and the gradients `gs` (the gradient `g`).
+
+As a result, the parameters are mutated and the optimizer's internal state may change.
+
+    update!(x, x̄)
+
+Update the array `x` according to `x .-= x̄`.
+"""
 function update!(x::AbstractArray, x̄)
-  x .+= x̄
-  return x
+  x .-= x̄
 end
 
 function update!(opt, x, x̄)
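
The new docstring, together with the `.+=` → `.-=` fix, pins down the sign convention: `update!(x, x̄)` subtracts `x̄` in place, while `update!(opt, x, x̄)` first lets the optimiser rescale the step. A small sketch of both forms, with values chosen arbitrarily:

```julia
using Flux.Optimise: update!, Descent

x = [1.0, 2.0, 3.0]
x̄ = [0.1, 0.2, 0.3]

# Bare form: after the sign fix, update!(x, x̄) does x .-= x̄.
update!(x, x̄)        # x is now [0.9, 1.8, 2.7]

# Optimiser form: the gradient is first rescaled by the optimiser's rule.
opt = Descent(0.5)
g = [2.0, 2.0, 2.0]
update!(opt, x, g)   # for Descent this is roughly x .-= 0.5 .* g
```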