[Build Status](https://travis-ci.org/MikeInnes/Flux.jl) · [Documentation](https://mikeinnes.github.io/Flux.jl/stable) · [Gitter](https://gitter.im/MikeInnes/Flux.jl?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
After adding the `@net` annotation we gain access to the optimisations, parallelism, and GPU support that TensorFlow provides. Unlike a TensorFlow graph, `f` continues to behave like ordinary Julia code; you still get good stack traces, can step through it in the debugger, and so on.
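As a minimal sketch of how this fits together (assuming the `@net` macro shown above and a hypothetical `tf` conversion function for the TensorFlow backend, which is not spelled out in this section):

```julia
using Flux

# A plain Julia function, annotated so Flux can trace it into a graph.
@net f(x) = x .* x

f([1, 2, 3]) == [1, 4, 9]   # `f` still runs as ordinary Julia code

# Hypothetical conversion to the TensorFlow backend; the resulting
# function runs the same computation through TensorFlow.
f_tf = tf(f)
f_tf([1, 2, 3])
```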
On top of this foundation we build a set of flexible machine learning abstractions and utilities that interoperate well with other approaches such as [Knet](https://github.com/denizyuret/Knet.jl). You can work at a high level or stay close to the mathematics, write custom GPU kernels, build your own abstractions, and mix and match approaches.
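For a flavour of the higher-level side, here is a hedged sketch of a small model built from layer abstractions; the layer names (`Chain`, `Affine`) and activation functions are assumptions about this release's API, not taken from this section:

```julia
using Flux

# A small feed-forward classifier composed from layer abstractions.
# `Affine` is assumed to be a fully connected layer; `Chain` composes
# layers so the output of each feeds into the next.
model = Chain(
  Affine(784, 128), relu,
  Affine(128, 10), softmax)

x = rand(784)   # dummy input vector
y = model(x)    # forward pass through the chain
```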
Check out the [docs](https://mikeinnes.github.io/Flux.jl/stable/) to get started. Flux is in alpha, so **please open issues liberally**; we would love to help you get up and running.