pith. machine review for the scientific record.

arxiv: 1706.06197 · v5 · submitted 2017-06-19 · 💻 cs.LG · cs.AI · cs.CL · cs.CV

Recognition: unknown

meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.AI · cs.CL · cs.CV
keywords only · propagation · back · computed · gradient · learning · meprop · result
abstract

We propose a simple yet effective technique for neural network learning. The forward propagation is computed as usual. In back propagation, only a small subset of the full gradient is computed to update the model parameters. The gradient vectors are sparsified in such a way that only the top-$k$ elements (in terms of magnitude) are kept. As a result, only $k$ rows or columns (depending on the layout) of the weight matrix are modified, leading to a linear reduction ($k$ divided by the vector dimension) in the computational cost. Surprisingly, experimental results demonstrate that we can update only 1-4% of the weights at each back propagation pass. This does not result in a larger number of training iterations. More interestingly, the accuracy of the resulting models is actually improved rather than degraded, and a detailed analysis is given. The code is available at https://github.com/lancopku/meProp
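The top-$k$ sparsification the abstract describes is straightforward to sketch. Below is a minimal NumPy illustration of the sparsified backward pass for a single linear layer y = x @ W. This is not the authors' implementation (see the linked repository for that); the function name meprop_backward and its signature are hypothetical, chosen only to make the idea concrete.

import numpy as np

def meprop_backward(x, W, grad_y, k):
    # Sketch of a meProp-style sparsified backward pass for y = x @ W.
    #   x:      (batch, d_in)  layer input saved from the forward pass
    #   W:      (d_in, d_out)  weight matrix
    #   grad_y: (batch, d_out) full gradient of the loss w.r.t. y
    #   k:      number of top-magnitude components of grad_y to keep per example

    # Indices of the k largest-magnitude entries of each gradient vector.
    idx = np.argpartition(np.abs(grad_y), -k, axis=1)[:, -k:]

    # Zero out everything except those top-k entries.
    sparse_grad_y = np.zeros_like(grad_y)
    rows = np.arange(grad_y.shape[0])[:, None]
    sparse_grad_y[rows, idx] = grad_y[rows, idx]

    # Backward through the linear layer using the sparsified gradient:
    # per example, only k columns of grad_W are nonzero, and grad_x
    # touches only k columns of W.
    grad_W = x.T @ sparse_grad_y
    grad_x = sparse_grad_y @ W.T
    return grad_W, grad_x

# Example: keep 20 of 500 output gradients per example (4%, matching the
# 1-4% range reported in the abstract; the sizes here are arbitrary).
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 500))
W = rng.standard_normal((500, 500))
grad_y = rng.standard_normal((32, 500))
grad_W, grad_x = meprop_backward(x, W, grad_y, k=20)

Note that this dense sketch only zeroes entries, which demonstrates the selection rule but not the speedup; the linear cost reduction ($k$ divided by the vector dimension) comes from restricting the two matrix products to the $k$ selected indices rather than materializing the zeroed gradient.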

This paper has not been read by Pith yet.

discussion (0)
