Commit Graph

1824 Commits

Author       | SHA1       | Message                                   | Date
Mike J Innes | d42130b8cd | fix for matmul                            | 2016-10-28 15:02:48 +01:00
Mike J Innes | 1a726033f4 | do this properly                          | 2016-10-26 15:49:35 +01:00
Mike J Innes | d5d7242c53 | export unroll                             | 2016-10-26 15:37:30 +01:00
Mike J Innes | 0ad569596b | tf reorg                                  | 2016-10-26 14:25:10 +01:00
Mike J Innes | 82d69757c7 | BatchSeq convenience alias                | 2016-10-26 12:37:48 +01:00
Mike J Innes | 823792bc19 | unrolled type                             | 2016-10-26 11:57:03 +01:00
Mike J Innes | 2a58b23085 | more interesting recurrent model          | 2016-10-26 11:34:17 +01:00
Mike J Innes | 652c26728e | better loop lifting semantics             | 2016-10-26 11:19:45 +01:00
Mike J Innes | 42b50c976a | fix unrolling                             | 2016-10-26 00:49:32 +01:00
Mike J Innes | ba60c4596b | graph op fixes                            | 2016-10-26 00:39:16 +01:00
Mike J Innes | eb78f67a93 | refactor input model                      | 2016-10-25 23:10:35 +01:00
Mike J Innes | 10761a4bee | update for flow                           | 2016-10-25 22:28:30 +01:00
Mike J Innes | 734b77e5eb | test on 0.5 only                          | 2016-10-25 21:37:43 +01:00
Mike J Innes | 83c4aa41a2 | fix travis.yml                            | 2016-10-25 21:35:02 +01:00
Mike J Innes | 91a62a04bc | update for flow exports                   | 2016-10-25 21:32:51 +01:00
Mike J Innes | 14e4117837 | basic unrolling                           | 2016-10-25 21:10:04 +01:00
Mike J Innes | 1fde7b4615 | preserve default values for hidden states | 2016-10-25 19:10:26 +01:00
Mike J Innes | dea85df8b7 | use param object rather than named input  | 2016-10-25 17:57:20 +01:00
Mike J Innes | ee0c5ae14e | remove mxnet for now                      | 2016-10-25 17:37:37 +01:00
Mike J Innes | 18502158f0 | some form of testing                      | 2016-10-25 16:43:59 +01:00
Mike J Innes | d442dd8c5b | use Float32 here                          | 2016-10-25 16:23:04 +01:00
Mike J Innes | a06145a145 | use new batching approach in TensorFlow   | 2016-10-25 16:21:17 +01:00
Mike J Innes | 46550e4863 | suspiciously similar seq data structure   | 2016-10-25 14:10:49 +01:00
Mike J Innes | 95b955246d | export rawbatch                           | 2016-10-25 14:10:32 +01:00
Mike J Innes | 7438ee6108 | move convert method                       | 2016-10-25 14:10:27 +01:00
Mike J Innes | 1847809e99 | batching refactor, nested batches         | 2016-10-25 13:48:30 +01:00
Mike J Innes | 183c3b0680 | batch tweaks                              | 2016-10-15 18:16:04 +01:00
Mike J Innes | 6d53b7af47 | batch data structure                      | 2016-10-12 22:49:08 +01:00
Mike J Innes | b4390e6a23 | tighter                                   | 2016-10-12 17:09:57 +01:00
Mike J Innes | af8001bdb8 | update notebook                           | 2016-10-12 17:07:56 +01:00
Mike J Innes | c9f9665e4e | move batching logic                       | 2016-10-12 17:07:22 +01:00
Mike J Innes | 69551caadb | tweak                                     | 2016-10-12 16:30:45 +01:00
Mike J Innes | b115d8ce3f | model -> net                              | 2016-10-12 16:28:16 +01:00
Mike J Innes | 066ecafd71 | readme tweaks                             | 2016-10-12 16:28:07 +01:00
Mike J Innes | bfb8d961e2 | working mnist-conv example                | 2016-10-10 23:48:25 +01:00
Mike J Innes | a56af5d16e | reshape layer                             | 2016-10-10 23:48:16 +01:00
Mike J Innes | 438dc9d40a | fix conv2d shape inference                | 2016-10-10 23:20:40 +01:00
Mike J Innes | 4961bf72af | updates                                   | 2016-10-10 23:04:26 +01:00
Mike J Innes | 409c44d362 | comments                                  | 2016-10-04 22:50:49 +01:00
Mike J Innes | 2a375c4eb2 | non-working example                       | 2016-10-04 22:50:42 +01:00
Mike J Innes | 45d30312b6 | tf flatten                                | 2016-10-04 22:50:20 +01:00
Mike J Innes | bf04b70ad1 | Float32 by default                        | 2016-10-04 22:36:56 +01:00
Mike J Innes | c646ba4483 | initial conv example changes for TF       | 2016-10-04 22:23:53 +01:00
Mike J Innes | 9e9c57d49b | more TF support                           | 2016-10-04 22:23:37 +01:00
Mike J Innes | cc1ca4c3c2 | Conv2D tweaks                             | 2016-10-04 22:23:26 +01:00
Mike J Innes | c709041d73 | forward useful size method                | 2016-10-04 22:23:10 +01:00
Mike J Innes | 503265c964 | example predictions                       | 2016-10-04 21:11:03 +01:00
Mike J Innes | 2609d47ce9 | work more nicely with TF batching         | 2016-10-04 21:10:50 +01:00
Mike J Innes | 8335ab8134 | sort-of working mnist example             | 2016-09-29 21:28:53 +01:00
Mike J Innes | a2aade718d | get basic training working                | 2016-09-29 20:50:43 +01:00