v0.11
- Change to `DataLoader`'s constructor [https://github.com/FluxML/Flux.jl/pull/1152].
- Use `DataLoader` with `NamedTuple`s, so that tensors can be accessed by name [https://github.com/FluxML/Flux.jl/pull/1221] (a usage sketch follows this list).
- Error if `Dense` layers' weights and biases are not arrays [https://github.com/FluxML/Flux.jl/pull/1218].
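A minimal sketch of the new constructor and named access, assuming the Flux 0.11 `DataLoader` API; the field names `images`/`labels` and array sizes are illustrative only:

```julia
using Flux
using Flux.Data: DataLoader

X = rand(Float32, 28, 28, 1, 100)   # illustrative image tensor
Y = rand(0:9, 100)                  # illustrative labels

# New constructor: the data is passed as a single tuple or NamedTuple argument.
loader = DataLoader((images = X, labels = Y), batchsize = 16, shuffle = true)

for batch in loader
    # Each batch is a NamedTuple, so tensors can be accessed by name.
    x, y = batch.images, batch.labels
end
```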
v0.10.5
- Add option for same padding to conv and pooling layers by setting `pad=SamePad()` (see the sketch after this list).
- Added the option of setting `bias` to `Flux.Zeros` to exclude the bias from being trained.
- Added `GlobalMaxPool` and `GlobalMeanPool` layers for performing global pooling operations.
- Added `ClipValue` and `ClipNorm` to `Flux.Optimise` to provide a cleaner API for gradient clipping.
- Added new kwarg-only constructors for the various convolutional layers.
- Documented the `weight` and `bias` keyword arguments accepted by the convolutional layer constructors for supplying custom arrays for those fields.
- The testing suite now tests gradients of all layers, along with GPU support.
- Functors have now moved to Functors.jl to allow for their use outside of Flux.
- Added helper functions `Flux.convfilter` and `Flux.depthwiseconvfilter` to construct weight arrays for convolutions outside of layer constructors, so that custom implementations do not have to depend on the default layers.
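A short sketch of a few of these additions, assuming the Flux 0.10.5 constructors (`Conv`, `MaxPool`, `GlobalMeanPool`, `Optimiser`, `ADAM`); the layer sizes and clipping thresholds are illustrative:

```julia
using Flux
using Flux.Optimise: Optimiser, ClipValue

# "Same" padding keeps the spatial size unchanged for stride 1.
conv = Conv((3, 3), 3 => 16, relu; pad = SamePad())
pool = MaxPool((2, 2); pad = SamePad())

# Global pooling reduces each feature map to a single value.
gpool = GlobalMeanPool()

# Gradient clipping composed with an optimiser.
opt = Optimiser(ClipValue(1e-3), ADAM(1e-3))
```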
v0.10.0
- The default AD engine has switched from Tracker to Zygote.jl.
- The dependency on Tracker.jl has been removed.
- This means Flux no longer depends on a specialised `TrackedArray` type and can be used with normal `Array` implementations directly.
- Tracker compatibility is maintained in most common cases, but Zygote will be the preferred AD backend for Flux from now on (see the gradient sketch after this list).
- The CUDNN wrappers have been moved from Flux into CuArrays, to better support the CUDA backend, improve the user experience, and keep Flux lean.
- `*crossentropy` functions now work as expected with CuArrays (see the PR for `binarycrossentropy`).
- Added clearer docs around training and the Optimiser interface.
- Layer initialisations have been improved, with a clearer API for extending them for other purposes.
- Better messaging around CUDA availability, with hooks to initialize the GPU as default where possible.
- `@treelike` has been formalised as a functor, with an effective deprecation.
- `testmode!` is deprecated in favour of `istraining`.
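A minimal sketch of taking a gradient through the Zygote backend with plain arrays, assuming the Flux 0.10 API; the model, data, and the era's `W` field name are illustrative:

```julia
using Flux

m = Dense(3, 2)              # parameters are ordinary Arrays, no TrackedArray
x = rand(Float32, 3)

# Zygote-based gradients via the implicit-parameter API.
gs = gradient(() -> sum(m(x)), Flux.params(m))

gs[m.W]                      # gradient with respect to the weight matrix
```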
v0.9.0
- Depthwise convolutional layer API changes from `in => mult` channel specification to `in => out` channel specification, and deprecates the implicit `out` constructor (see the sketch after this list).
- New SkipConnection, which can be used to train residual neural network architectures.
- New RADAM optimiser.
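A brief sketch of the new channel specification and SkipConnection, assuming the Flux 0.9 constructors; the sizes are illustrative, and the output channel count of `DepthwiseConv` must be a multiple of the input channels:

```julia
using Flux

# in => out channel specification (out must be a multiple of in).
dw = DepthwiseConv((3, 3), 3 => 6, relu)

# A residual block: the connection combines the layer output with its input.
block = SkipConnection(Chain(Dense(10, 10, relu), Dense(10, 10)), +)

block(rand(Float32, 10))   # applies the chain and adds the input back
```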
v0.8.0
- Dropout now has a `dims` argument for specifying the unbroadcast dimensions (see the sketch after this list).
- New ConvTranspose layer.
- New Maxout layer
- Datasets are now hash verified on download to avoid corruption.
- We now zero the initial state for RNNs.
- Normalisation can now work on arbitrary `dims`.
- Many docs and bugfixes thanks to @KristofferC and others.
- NamedTuples now work like Tuples when doing `mapleaves`.
- New "performance tips" section of the docs.
- The training loop is now more readable and better shows how to use the lower-level APIs.
- New AlphaDropout.
- Data.Iris makes Fisher's Iris dataset available with `Iris.labels` and `Iris.features`.
- New InstanceNorm, as popularized by Instance Normalization: The Missing Ingredient for Fast Stylization.
- New GroupNorm, as described in Group Normalization.
- New CrossCor.
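A small sketch of a few of the layers added here, assuming the Flux 0.8 constructors; the channel counts and the keyword form of the Dropout `dims` argument are illustrative assumptions:

```julia
using Flux

# Dropout restricted to broadcasting over a chosen dimension (keyword form assumed).
drop = Dropout(0.5; dims = 3)

# Maxout takes the element-wise maximum over several inner layers.
mo = Maxout(() -> Dense(10, 5), 4)

# Instance and group normalisation over channel dimensions.
inorm = InstanceNorm(16)
gnorm = GroupNorm(16, 4)    # 16 channels split into 4 groups
```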
AD Changes:
- `det`, `logdet` and `logabsdet` now have adjoints (see the sketch after this list).
- Support for PermuteDimsArray.
- Flux.Tracker is now its own package, in preparation for replacing it with Zygote.
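A minimal sketch of the new adjoints through the Tracker AD of this era, using `Flux.Tracker.gradient` as the explicit-gradient entry point; the matrix is illustrative:

```julia
using Flux, LinearAlgebra

A = rand(3, 3) + 3I                        # illustrative well-conditioned matrix

# Gradient of logdet via Tracker; mathematically this is inv(A)'.
g = Flux.Tracker.gradient(X -> logdet(X), A)[1]
```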
v0.7.0
Despite the heroic efforts of scholars and archeologists, pre-0.7 history is lost to the sands of time.