# Флукс

*Relax! Flux is the ML library that doesn't make you tensor.*

[Join the chat at https://gitter.im/MikeInnes/Flux.jl](https://gitter.im/MikeInnes/Flux.jl)

Flux is a high-level API for machine learning, implemented in Julia.

Flux aims to provide a concise and expressive syntax for architectures that are hard to express in other frameworks. The notation stays close to what you'd find in a paper or a description of the model.

The current focus is on artificial neural networks, with TensorFlow or MXNet as backends. Flux is at a very early working-prototype stage, but you can see what works so far in the examples folder.
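Concretely, the idea is that a model is described once in Flux and then handed to a backend to run. A minimal sketch, assuming an `mxnet(model, input_shape)` conversion function along the lines of the backend usage docs (the exact signature here is an assumption):

```julia
using Flux

model = Chain(Input(784), Affine(10), softmax)

# Hand the abstract model to a backend for execution.
# (`mxnet(model, input_shape)` is an assumed signature; see the
# backend usage docs for the real API.)
mxmodel = mxnet(model, (1, 784))
mxmodel(rand(1, 784))  # forward pass executed by MXNet
```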

## Brief Examples

A simple multi-layer perceptron for MNIST:

```julia
using Flux

mlp = Chain(
  Input(784),            # a flattened 28×28 MNIST image
  Affine(128), relu,
  Affine( 64), relu,
  Affine( 10), softmax)  # probabilities over the ten digit classes
```
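As a usage sketch, assuming this early API lets you call a model directly on a plain Julia array, with the hidden shapes inferred from `Input(784)`:

```julia
x = rand(784)  # stand-in for a flattened 28×28 MNIST image
y = mlp(x)     # 10-element vector of class probabilities
```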

An LSTM layer, defined from scratch via `@net`:

```julia
@net type LSTM
  Wxf; Wyf; bf
  Wxi; Wyi; bi
  Wxo; Wyo; bo
  Wxc; Wyc; bc
  y; state
  function (x)
    # Gates
    forget = σ( x * Wxf + y{-1} * Wyf + bf )
    input  = σ( x * Wxi + y{-1} * Wyi + bi )
    output = σ( x * Wxo + y{-1} * Wyo + bo )
    # Candidate state, then state update and output
    state′ = tanh( x * Wxc + y{-1} * Wyc + bc )
    state  = forget .* state{-1} + input .* state′
    y = output .* tanh(state)
  end
end
```
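Here `y{-1}` and `state{-1}` refer to the values `y` and `state` took on the previous time step, which is what makes the definition recurrent. As a minimal sketch of the same idiom (`Accum` is a hypothetical type for illustration, not part of Flux):

```julia
# A toy recurrence in @net notation: a running sum of its inputs.
@net type Accum
  total
  function (x)
    total = total{-1} + x
  end
end
```

Stacked with ordinary layers, the LSTM defined above composes like any built-in component: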

```julia
Chain(
  Input(N),
  LSTM(N, 256),
  LSTM(256, 256),
  Affine(256, N),
  softmax)
```
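To make that concrete, a usage sketch (the alphabet size `N = 26`, the `model` binding, and feeding the network one symbol per call are illustrative assumptions; the recurrent layers are taken to carry their `y` and `state` between calls):

```julia
N = 26  # e.g. a one-hot alphabet of lowercase letters
model = Chain(Input(N), LSTM(N, 256), LSTM(256, 256), Affine(256, N), softmax)

x = zeros(N); x[1] = 1  # one-hot encoding of the current symbol
probs = model(x)        # predicted distribution over the next symbol
```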