Adding Gradient Noise Improves Learning for Very Deep Networks
Deep feedforward and recurrent networks have achieved impressive results in many perception and language processing applications. This success is partially attributed to architectural innovations such as convolutional and long short-term memory networks. The main motivation for these architectural innovations is that they capture better domain knowledge, and importantly are easier to optimize than more basic architectures. Recently, more complex architectures such as Neural Turing Machines and Memory Networks have been proposed for tasks including question answering and general computation, creating a new set of optimization challenges. In this paper, we discuss a low-overhead and easy-to-implement technique of adding gradient noise which we find to be surprisingly effective when training these very deep architectures. The technique not only helps to avoid overfitting, but also can result in lower training loss. This method alone allows a fully-connected 20-layer deep network to be trained with standard gradient descent, even starting from a poor initialization. We see consistent improvements for many complex models, including a 72% relative reduction in error rate over a carefully-tuned baseline on a challenging question-answering task, and a doubling of the number of accurate binary multiplication models learned across 7,000 random restarts. We encourage further application of this technique to additional complex modern architectures.
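For reference, here is a minimal sketch (not the authors' code) of the annealed Gaussian gradient-noise technique the abstract describes: at each step t, noise drawn from N(0, σ_t²) with σ_t² = η / (1 + t)^γ is added to every gradient before the parameter update, where the paper reports γ = 0.55 and η chosen from {0.01, 0.3, 1.0}. The function and variable names below are illustrative, not from the paper.

```python
# Sketch of annealed Gaussian gradient noise, assuming the schedule
# sigma_t^2 = eta / (1 + t)^gamma reported in the paper (gamma = 0.55).
import torch

def add_gradient_noise(parameters, step, eta=0.3, gamma=0.55):
    """Add zero-mean Gaussian noise to each parameter's gradient in place."""
    # Standard deviation derived from the decaying variance schedule.
    sigma = (eta / (1.0 + step) ** gamma) ** 0.5
    for p in parameters:
        if p.grad is not None:
            p.grad.add_(torch.randn_like(p.grad) * sigma)

# Typical placement in a training loop (model/optimizer names are hypothetical):
#   loss.backward()
#   add_gradient_noise(model.parameters(), step=t)
#   optimizer.step()
```

Because the variance decays with the step count, early updates are perturbed strongly, which helps escape poor initializations, while later updates approach plain stochastic gradient descent.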
Forward citations
Cited by 8 Pith papers
- Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets
  Neural networks exhibit grokking on small algorithmic datasets, achieving perfect generalization well after overfitting.
- Weight-Decay Turns Transformer Loss Landscapes Villani: Functional-Analytic Foundations for Optimization and Generalization
  The regularized Transformer loss satisfies Villani's coercive energy criteria, yielding log-Sobolev constants C_LS ≤ λ^{-1} + d/λ² and finite-time convergence bounds for noisy SGD.
- Language Models (Mostly) Know What They Know
  Language models show good calibration when asked to estimate the probability that their own answers are correct, with performance improving as models get larger.
- ST-MoE: Designing Stable and Transferable Sparse Expert Models
  ST-MoE introduces stability techniques for sparse expert models, allowing a 269B-parameter model to achieve state-of-the-art transfer learning results across reasoning, summarization, and QA tasks at the compute cost ...
- A General Language Assistant as a Laboratory for Alignment
  Ranked preference modeling outperforms imitation learning for language model alignment and scales more favorably with model size.
- Enhancing SignSGD: Small-Batch Convergence Analysis and a Hybrid Switching Strategy
  SignSGD with pre-sign dithering and a calibrated hybrid switch to SGD achieves 92.18% accuracy on CIFAR-10 with ResNet-18, outperforming pure SGD and SignSGD, and yielding better results than Adam on CIFAR-100.
- Endogenous Regime Switching Driven by Scalar-Irreducible Learning Dynamics
  Scalar-irreducible dynamics enable internally generated regime transitions in learning systems via feedback between fast dynamical variables and slow structural adaptation.
- Preventing overfitting in deep learning using differential privacy
  Differential privacy techniques can help prevent overfitting and improve generalization in deep neural networks.