pith. machine review for the scientific record.

arxiv: 1709.06709 · v2 · submitted 2017-09-20 · 💻 cs.LG


Online Learning of a Memory for Learning Rates

classification: 💻 cs.LG
keywords: learning, memory, observed, online, tasks, algorithm, gradient, meta-learner
original abstract

The promise of learning to learn for robotics rests on the hope that by extracting some information about the learning process itself we can speed up subsequent similar learning tasks. Here, we introduce a computationally efficient online meta-learning algorithm that builds and optimizes a memory model of the optimal learning rate landscape from previously observed gradient behaviors. While performing task-specific optimization, this memory of learning rates predicts how to scale currently observed gradients. After applying the gradient scaling, our meta-learner updates its internal memory based on the observed effect of its prediction. Our meta-learner can be combined with any gradient-based optimizer, learns on the fly, and can be transferred to new optimization tasks. In our evaluations we show that our meta-learning algorithm speeds up learning of MNIST classification and a variety of learning control tasks, in both batch and online learning settings.
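The abstract's loop (predict a learning-rate scale from the current gradient, take the scaled step, then update the memory from the observed effect) can be sketched as follows. This is a minimal illustration, not the paper's actual model: the feature (gradient log-magnitude buckets), the table-based memory, and the sign-agreement update rule are all hypothetical stand-ins for the memory model the authors describe.

```python
import numpy as np

class LearningRateMemory:
    """Hypothetical sketch: a table mapping a gradient feature (log-magnitude
    bucket) to a multiplicative learning-rate scale, adjusted online from the
    observed effect of each scaled step."""

    def __init__(self, n_buckets=20, meta_lr=0.05):
        self.scales = np.ones(n_buckets)  # memory of learning-rate scales
        self.n_buckets = n_buckets
        self.meta_lr = meta_lr

    def _bucket(self, grad_norm):
        # Map gradient magnitude to a memory slot (hypothetical featurization).
        return int(np.clip(np.log10(grad_norm + 1e-12) + 10, 0,
                           self.n_buckets - 1))

    def predict_scale(self, grad):
        # Predict how to scale the currently observed gradient.
        return self.scales[self._bucket(np.linalg.norm(grad))]

    def update(self, grad, prev_grad):
        # Observed effect of the last scaled step: if consecutive gradients
        # agree, the step was conservative, so grow the scale; if they
        # disagree (overshoot), shrink it. Update the slot that was used.
        b = self._bucket(np.linalg.norm(prev_grad))
        agreement = np.sign(np.dot(grad, prev_grad))
        self.scales[b] *= (1.0 + self.meta_lr * agreement)

def train(loss_grad, x0, base_lr=0.1, steps=100):
    """Plain gradient descent whose step size is scaled by the memory."""
    memory = LearningRateMemory()
    x = np.asarray(x0, dtype=float)
    prev_grad = None
    for _ in range(steps):
        g = loss_grad(x)
        if prev_grad is not None:
            memory.update(g, prev_grad)  # learn from the previous step's effect
        x = x - base_lr * memory.predict_scale(g) * g
        prev_grad = g
    return x, memory

# Usage: minimize the quadratic f(x) = ||x||^2 / 2, whose gradient is x.
x_final, mem = train(lambda x: x, x0=[5.0, -3.0])
```

Because the memory lives outside any particular objective, it can be carried over to a new task, which is the transfer property the abstract claims; the sign-agreement rule used here is just one simple way to score "the observed effect its prediction had."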

This paper has not been read by Pith yet.
