pith. machine review for the scientific record.

arxiv: 2602.07425 · v2 · submitted 2026-02-07 · 💻 cs.LG · cs.CL · math.OC

Recognition: unknown

Sign-Based Optimizers Are Effective Under Heavy-Tailed Noise

Authors on Pith: no claims yet
classification: 💻 cs.LG · cs.CL · math.OC
keywords: noise · heavy-tailed · sign-based · theoretical · under · analysis · empirical · generalized
original abstract

While adaptive gradient methods are the workhorse of modern machine learning, sign-based optimization algorithms such as Lion and Muon have recently demonstrated superior empirical performance over AdamW in training large language models (LLMs). However, a theoretical understanding of why sign-based updates outperform variance-adapted methods remains elusive. In this paper, we aim to bridge the gap between theory and practice through the lens of heavy-tailed gradient noise, a phenomenon frequently observed in language modeling tasks. Theoretically, we introduce a novel generalized heavy-tailed noise condition that captures the behavior of LLMs more accurately than standard finite variance assumptions. Under this noise model, we establish sharp convergence rates of SignSGD and Lion for generalized smooth function classes, matching or surpassing previous best-known bounds. Furthermore, we extend our analysis to Muon and Muonlight, providing what is, to our knowledge, the first rigorous analysis of matrix optimization under heavy-tailed stochasticity. These results offer a strong theoretical justification for the empirical superiority of sign-based optimizers, showing that they are naturally suited to handle the noisy gradients associated with heavy tails. Empirically, LLM pretraining experiments validate our theoretical insights and confirm that our proposed noise models are well-aligned with practice.
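As a point of reference for the update rules the abstract discusses: a common formalization of heavy-tailed noise in this literature, which the paper says it generalizes, is a bounded α-th moment condition E[‖g − ∇f(x)‖^α] ≤ σ^α for some α ∈ (1, 2], in place of finite variance (α = 2). Below is a minimal PyTorch-style sketch of the SignSGD and Lion steps; this is illustrative, not the authors' code, and the learning rates and betas are placeholder defaults.

    import torch

    def signsgd_step(param, grad, lr=1e-3):
        # SignSGD: step along only the sign of the stochastic gradient.
        # Dropping gradient magnitudes caps every coordinate's step at lr,
        # the usual intuition for robustness to heavy-tailed noise.
        param.add_(grad.sign(), alpha=-lr)

    def lion_step(param, grad, momentum, lr=1e-4,
                  beta1=0.9, beta2=0.99, wd=0.0):
        # Lion: take the sign of an interpolation between the momentum
        # buffer and the fresh gradient, apply decoupled weight decay,
        # then update the momentum buffer.
        update = (beta1 * momentum + (1 - beta1) * grad).sign()
        param.mul_(1 - lr * wd).add_(update, alpha=-lr)
        momentum.mul_(beta2).add_(grad, alpha=1 - beta2)

These sketches operate on plain tensors; wrap the calls in torch.no_grad() when updating real model parameters. Muon can be read as the matrix analogue of this idea, roughly replacing the elementwise sign with an approximate orthogonalization of the momentum matrix, which is what makes the paper's extension to matrix optimization under heavy-tailed stochasticity nontrivial.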

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. StoSignSGD: Unbiased Structural Stochasticity Fixes SignSGD for Training Large Language Models

    cs.LG · 2026-04 · unverdicted · novelty 6.0

    StoSignSGD resolves SignSGD divergence on non-smooth objectives via structural stochasticity, matching optimal convex rates and improving non-convex bounds while delivering 1.44-2.14x speedups in FP8 LLM pretraining.

  2. CLion: Efficient Cautious Lion Optimizer with Enhanced Generalization

    cs.LG · 2026-04 · unverdicted · novelty 6.0

    CLion achieves O(1/N) generalization error and O(√d / T^{1/4}) convergence for nonconvex stochastic optimization, improving on Lion's O(1/(N τ^T)) bound.