pith. machine review for the scientific record.

arxiv: 1703.04782 · v3 · submitted 2017-03-14 · 💻 cs.LG · stat.ML

Recognition: unknown

Online Learning Rate Adaptation with Hypergradient Descent

Authors on Pith: no claims yet
classification: 💻 cs.LG · stat.ML
keywords: rate · gradient · learning · descent · method · hypergradient · optimization · stochastic
abstract

We introduce a general method for improving the convergence rate of gradient-based optimizers that is easy to implement and works well in practice. We demonstrate the effectiveness of the method in a range of optimization problems by applying it to stochastic gradient descent, stochastic gradient descent with Nesterov momentum, and Adam, showing that it significantly reduces the need for the manual tuning of the initial learning rate for these commonly used algorithms. Our method works by dynamically updating the learning rate during optimization using the gradient with respect to the learning rate of the update rule itself. Computing this "hypergradient" needs little additional computation, requires only one extra copy of the original gradient to be stored in memory, and relies upon nothing more than what is provided by reverse-mode automatic differentiation.

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Causal Stability Selection

    stat.ME · 2026-05 · unverdicted · novelty 6.0

    Causal stability selection identifies treatment effect modifiers with a non-asymptotic bound on expected false positives by integrating cross-fitted CATE estimation and stability selection.