Learning Structured Sparsity in Deep Neural Networks

Wei Wen (University of Pittsburgh, wew57@pitt.edu)
Chunpeng Wu (University of Pittsburgh, chw127@pitt.edu)
Yandan Wang (University of Pittsburgh, yaw46@pitt.edu)
Yiran Chen (University of Pittsburgh, yic52@pitt.edu)
Hai Li (University of Pittsburgh, hal66@pitt.edu)

arXiv:1608.03665v4 [cs.NE] 18 Oct 2016
Abstract

High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource-constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN's evaluation. Experimental results show that SSL achieves on average 5.1×
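
To make the idea in the abstract concrete: SSL penalizes whole groups of weights so that entire structures (e.g., complete filters) are driven to zero during training, rather than scattered individual weights. The sketch below is illustrative only, not the authors' implementation; it assumes PyTorch as the framework and shows a filter-wise group penalty (a group-Lasso-style sum of per-filter L2 norms) added to the task loss. The name lambda_s and both helper functions are hypothetical.

    # Illustrative sketch only (assumes PyTorch; not the authors' code):
    # a filter-wise group penalty of the kind SSL uses to zero out whole filters.
    import torch
    import torch.nn as nn

    def filter_group_penalty(conv_weight: torch.Tensor) -> torch.Tensor:
        """Sum of L2 norms over output filters.

        conv_weight has shape (out_channels, in_channels, kH, kW);
        each output filter is one group, so the penalty is
        sum_n ||W_n||_2, which pushes whole filters toward zero.
        """
        return conv_weight.flatten(start_dim=1).norm(p=2, dim=1).sum()

    def structured_regularizer(model: nn.Module) -> torch.Tensor:
        """Accumulate the filter-wise penalty over all conv layers."""
        return sum(filter_group_penalty(m.weight)
                   for m in model.modules()
                   if isinstance(m, nn.Conv2d))

    # Hypothetical usage in a training step, with lambda_s the
    # structured-sparsity strength (a tunable hyperparameter):
    #   loss = task_loss + lambda_s * structured_regularizer(model)
    #   loss.backward()

Because the penalty acts on a whole filter at once, a zeroed group removes an entire output channel, which is the "hardware-friendly" structured sparsity the abstract refers to: the resulting layer is simply a smaller dense layer, directly usable by off-the-shelf libraries.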