link fixes

Mike J Innes 2017-09-12 11:34:04 +01:00
parent 1042d490a6
commit 519f4c3c32
5 changed files with 7 additions and 7 deletions

View File

@ -11,4 +11,4 @@ Pkg.add("Flux")
Pkg.test("Flux") # Check things installed correctly
```
-Start with the [basics](./models/basics.html). The [model zoo](https://github.com/FluxML/model-zoo/) is also a good starting point for many common kinds of models.
+Start with the [basics](models/basics.md). The [model zoo](https://github.com/FluxML/model-zoo/) is also a good starting point for many common kinds of models.

View File

@@ -38,7 +38,7 @@ W.data .-= 0.1grad(W)
loss(x, y) # ~ 2.5
```
-The loss has decreased a little, meaning that our prediction `x` is closer to the target `y`. If we have some data we can already try [training the model](../training/training.html).
+The loss has decreased a little, meaning that our prediction `x` is closer to the target `y`. If we have some data we can already try [training the model](../training/training.md).
All deep learning in Flux, however complex, is a simple generalisation of this example. Of course, models can *look* very different; they might have millions of parameters or complex control flow, and there are ways to manage this complexity. Let's see what that looks like.
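For reference, here is the surrounding example in one runnable piece: a minimal sketch assuming the 2017-era tracked-parameter API (`param`, `back!`, `grad`) that the snippet above uses.

```julia
using Flux
using Flux.Tracker

W = param(rand(2, 5)) # tracked parameters that record gradients
b = param(rand(2))

predict(x) = W*x .+ b
loss(x, y) = sum((predict(x) .- y).^2)

x, y = rand(5), rand(2) # dummy input and target

loss(x, y)            # initial loss
back!(loss(x, y))     # backpropagate to fill in the gradients
W.data .-= 0.1grad(W) # the update step shown in the hunk above
loss(x, y)            # a little smaller now
```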

View File

@@ -45,7 +45,7 @@ h, y = rnn(h, x)
If you run the last line a few times, you'll notice the output `y` changing slightly even though the input `x` is the same.
-We sometimes refer to functions like `rnn` above, which explicitly manage state, as recurrent *cells*. There are various recurrent cells available, which are documented in the [layer reference](layers.html). The hand-written example above can be replaced with:
+We sometimes refer to functions like `rnn` above, which explicitly manage state, as recurrent *cells*. There are various recurrent cells available, which are documented in the [layer reference](layers.md). The hand-written example above can be replaced with:
```julia
using Flux
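
# A hedged sketch of the built-in cell replacing the hand-written `rnn`
# above; the sizes (10 inputs, 5 hidden units) are illustrative assumptions:
rnn2 = Flux.RNNCell(10, 5)

x = rand(10) # dummy input data
h = rand(5)  # initial hidden state

h, y = rnn2(h, x) # the cell updates the state and returns an output
```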

View File

@@ -1,6 +1,6 @@
# Optimisers
-Consider a [simple linear regression](../models/basics.html). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.
+Consider a [simple linear regression](../models/basics.md). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.
```julia
W = param(rand(2, 5))
@@ -51,4 +51,4 @@ opt = SGD([W, b], 0.1) # Gradient descent with learning rate 0.1
opt()
```
-An optimiser takes a parameter list and returns a function that does the same thing as `update` above. We can pass either `opt` or `update` to our [training loop](./training.html), which will then run the optimiser after every mini-batch of data.
+An optimiser takes a parameter list and returns a function that does the same thing as `update` above. We can pass either `opt` or `update` to our [training loop](training.md), which will then run the optimiser after every mini-batch of data.
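As a usage note, here is a minimal sketch of the optimiser replacing the hand-written update, again assuming the era's tracked-parameter API (`param`, `back!`, `grad`):

```julia
using Flux
using Flux.Tracker

W = param(rand(2, 5))
b = param(rand(2))

predict(x) = W*x .+ b
loss(x, y) = sum((predict(x) .- y).^2)

opt = SGD([W, b], 0.1)  # a zero-argument function, like `update` above

x, y = rand(5), rand(2) # dummy input and target
back!(loss(x, y))       # populate the gradients of W and b
opt()                   # apply one gradient-descent step
```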

View File

@@ -4,7 +4,7 @@ To actually train a model we need three things:
* A *loss function*, that evaluates how well a model is doing given some input data.
* A collection of data points that will be provided to the loss function.
-* An [optimiser](./optimisers.html) that will update the model parameters appropriately.
+* An [optimiser](optimisers.md) that will update the model parameters appropriately.
With these we can call `Flux.train!`:
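For illustration, a self-contained sketch of such a call, assuming the three-argument `train!(loss, data, opt)` form of this era; the regression model and dummy data are assumptions for the example:

```julia
using Flux
using Flux.Tracker

W = param(rand(2, 5))
b = param(rand(2))

predict(x) = W*x .+ b
loss(x, y) = sum((predict(x) .- y).^2)     # the loss function

opt = SGD([W, b], 0.1)                     # the optimiser
data = [(rand(5), rand(2)) for i = 1:100]  # dummy (input, target) pairs

Flux.train!(loss, data, opt) # one pass over the data, updating W and b
```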
@@ -16,7 +16,7 @@ There are plenty of examples in the [model zoo](https://github.com/FluxML/model-
## Loss Functions
-The `loss` that we defined in [basics](../models/basics.html) is completely valid for training. We can also define a loss in terms of some model:
+The `loss` that we defined in [basics](../models/basics.md) is completely valid for training. We can also define a loss in terms of some model:
```julia
m = Chain(
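  # a hedged completion of the truncated example; the layer sizes
  # and the mse loss are illustrative assumptions, not from the diff
  Dense(10, 5, σ),
  Dense(5, 2),
  softmax)

loss(x, y) = Flux.mse(m(x), y) # a loss defined in terms of the model `m`
```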