pith. machine review for the scientific record.

arxiv: 1902.09843 · v1 · submitted 2019-02-26 · 💻 cs.LG · stat.ML

Recognition: unknown

Adaptive Gradient Methods with Dynamic Bound of Learning Rate

Authors on Pith: no claims yet
classification: 💻 cs.LG · stat.ML
keywords: learning, methods, adaptive, rates, achieve, they, adabound, adam
Original abstract

Adaptive optimization methods such as AdaGrad, RMSprop and Adam have been proposed to achieve a rapid training process with an element-wise scaling term on learning rates. Though prevailing, they are observed to generalize poorly compared with SGD or even fail to converge due to unstable and extreme learning rates. Recent work has put forward some algorithms such as AMSGrad to tackle this issue but they failed to achieve considerable improvement over existing methods. In our paper, we demonstrate that extreme learning rates can lead to poor performance. We provide new variants of Adam and AMSGrad, called AdaBound and AMSBound respectively, which employ dynamic bounds on learning rates to achieve a gradual and smooth transition from adaptive methods to SGD and give a theoretical proof of convergence. We further conduct experiments on various popular tasks and models, which is often insufficient in previous work. Experimental results show that new variants can eliminate the generalization gap between adaptive methods and SGD and maintain higher learning speed early in training at the same time. Moreover, they can bring significant improvement over their prototypes, especially on complex deep networks. The implementation of the algorithm can be found at https://github.com/Luolc/AdaBound .
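
The core mechanism is simple to sketch. The following is a minimal NumPy illustration of one AdaBound-style update, assuming the bound schedules and hyperparameter names (final_lr, gamma) used in the public reference implementation; it is a sketch of the idea, not a faithful reproduction of the authors' algorithm.

    import numpy as np

    def adabound_step(theta, grad, m, v, t,
                      alpha=1e-3, final_lr=0.1, gamma=1e-3,
                      beta1=0.9, beta2=0.999, eps=1e-8):
        # Adam-style first and second moment estimates.
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        # Dynamic bounds: wide early on (adaptive regime), converging to the
        # single SGD rate final_lr as the step count t grows.
        lower = final_lr * (1.0 - 1.0 / (gamma * t + 1.0))
        upper = final_lr * (1.0 + 1.0 / (gamma * t))
        # Clip the element-wise learning rate, then apply the momentum buffer.
        eta = np.clip(alpha / (np.sqrt(v) + eps), lower, upper)
        theta = theta - eta * m
        return theta, m, v

In a training loop, m and v start as zero arrays shaped like theta and t counts steps from 1; AMSBound would additionally track the running maximum of v before computing eta, as AMSGrad does.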

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 6 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. VISTA: Decentralized Machine Learning in Adversary Dominated Environments

    cs.LG 2026-05 unverdicted novelty 6.0

    VISTA adaptively tunes consistency thresholds in decentralized SGD so that the system converges asymptotically like standard SGD even when adversaries dominate the worker pool.

  2. Anon: Extrapolating Adaptivity Beyond SGD and Adam

    cs.AI 2026-05 unverdicted novelty 6.0

    The Anon optimizer uses tunable adaptivity and an incremental delay update to achieve convergence guarantees and outperform existing methods on image classification, diffusion, and language modeling tasks.

  3. Delve into the Applicability of Advanced Optimizers for Multi-Task Learning

    cs.LG 2026-04 unverdicted novelty 6.0

    APT augments multi-task learning by adapting advanced optimizers via momentum balancing and light direction preservation, delivering performance gains on four standard MTL datasets.

  4. AdaMeZO: Adam-style Zeroth-Order Optimizer for LLM Fine-tuning Without Maintaining the Moments

    cs.LG 2026-05 unverdicted novelty 5.0

    AdaMeZO adapts Adam moment estimates to zeroth-order LLM fine-tuning without extra memory storage, outperforming MeZO with up to 70% fewer forward passes.

  5. PR3DICTR: A modular AI framework for medical 3D image-based detection and outcome prediction

    cs.CV 2026-04 unverdicted novelty 4.0

    PR3DICTR is a new open-access modular framework for 3D medical image classification and outcome prediction that can be used with as few as two lines of code.

  6. Neural Network Optimization Reimagined: Decoupled Techniques for Scratch and Fine-Tuning

    cs.CV 2026-04 unverdicted novelty 3.0

    DualOpt decouples optimization, using real-time layer-wise weight decay for training from scratch and weight rollback for fine-tuning, to improve convergence and generalization and reduce knowledge forgetting.