writing gooder

This commit is contained in:
Mike J Innes 2017-05-03 18:32:58 +01:00
parent 7eea918f99
commit 50245a16b0
1 changed file with 1 addition and 1 deletion


@@ -13,7 +13,7 @@ Currently, Flux's pure-Julia backend has no optimisations. This means that calli
model(rand(10)) #> [0.0650, 0.0655, ...]
```
-directly won't have great performance. In order to run a computationally intensive training process, we rely on a backend like MXNet or TensorFlow.
+directly won't have great performance. In order to run a computationally intensive training process, we need to use a backend like MXNet or TensorFlow.
This is easy to do. Just call either `mxnet` or `tf` on a model to convert it to a model of that kind:
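A minimal sketch of the conversion step described above, assuming a simple `Chain` model like the one used earlier in these docs (the exact model definition here is illustrative, not taken from the source):

```julia
using Flux  # the mxnet/tf conversions assume the corresponding backend is installed

# An illustrative model; layer sizes are assumptions for this example
model = Chain(Dense(10, 5), softmax)

# Convert to an MXNet-backed model; `tf(model)` would work the same way
mxmodel = mxnet(model)

# The converted model is called just like the original
mxmodel(rand(10))
```

The converted model keeps the same calling convention as the pure-Julia one, so the rest of a training script should not need to change.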