build based on 4c1b1eb

autodocs 2017-10-26 11:10:31 +00:00
parent 460beaca56
commit f5f19f1795
2 changed files with 20 additions and 4 deletions

File diff suppressed because one or more lines are too long


@@ -224,6 +224,22 @@ var documenterSearchIndex = {"docs": [
"text": "Non-linearities that go between layers of your model. Most of these functions are defined in NNlib but are available by default in Flux.Note that, unless otherwise stated, activation functions operate on scalars. To apply them to an array you can call σ.(xs), relu.(xs) and so on.σ\nrelu\nleakyrelu\nelu\nswish" "text": "Non-linearities that go between layers of your model. Most of these functions are defined in NNlib but are available by default in Flux.Note that, unless otherwise stated, activation functions operate on scalars. To apply them to an array you can call σ.(xs), relu.(xs) and so on.σ\nrelu\nleakyrelu\nelu\nswish"
}, },
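
The activation entry above notes that these functions operate on scalars and are applied to arrays via broadcasting. A minimal sketch of that usage (the input xs is an illustrative placeholder):

using Flux  # σ, relu, leakyrelu, elu and swish come from NNlib, re-exported by Flux

xs = randn(3)   # illustrative input array

# Each activation works on a scalar; broadcast with `.` to apply it element-wise.
σ.(xs)
relu.(xs)
leakyrelu.(xs)
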
{
"location": "models/layers.html#Flux.Dropout",
"page": "Model Reference",
"title": "Flux.Dropout",
"category": "Type",
"text": "Dropout(p)\n\nA Dropout layer. For each input, either sets that input to 0 (with probability p) or scales it by 1/(1-p). This is used as a regularisation, i.e. it reduces overfitting during training.\n\nDoes nothing to the input once in testmode!.\n\n\n\n"
},
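
A minimal usage sketch of the Dropout docstring above, assuming the testmode! switch it mentions (the values of p and x are illustrative):

using Flux

d = Dropout(0.5)   # each input is zeroed with probability p = 0.5;
                   # surviving inputs are scaled by 1/(1 - 0.5) = 2
x = ones(5)
d(x)               # stochastic output while training

Flux.testmode!(d)  # the testmode! from the docstring: Dropout now does nothing to the input
d(x) == x          # true once in test mode
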
{
"location": "models/layers.html#Normalisation-and-Regularisation-1",
"page": "Model Reference",
"title": "Normalisation & Regularisation",
"category": "section",
"text": "These layers don't affect the structure of the network but may improve training times or reduce overfitting.Dropout"
},
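
As a sketch of the point that these layers don't affect the structure of the network, a Dropout layer can simply be slotted between the layers of a Chain (layer sizes here are illustrative):

using Flux

m = Chain(
  Dense(10, 5, relu),
  Dropout(0.5),        # regularisation between layers; identity once in testmode!
  Dense(5, 2))

m(randn(10))           # forward pass with dropout applied
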
{
"location": "training/optimisers.html#",
"page": "Optimisers",