pith. machine review for the scientific record.

arxiv: 1410.7455 · v8 · submitted 2014-10-27 · 💻 cs.NE · cs.LG · stat.ML

Recognition: unknown

Parallel training of DNNs with Natural Gradient and Parameter Averaging

Authors on Pith: no claims yet
classification 💻 cs.NE · cs.LG · stat.ML
keywords: training, method, gradient, machines, well, data, dnns, machine
read the original abstract

We describe the neural-network training framework used in the Kaldi speech recognition toolkit, which is geared towards training DNNs with large amounts of training data using multiple GPU-equipped or multi-core machines. In order to be as hardware-agnostic as possible, we needed a way to use multiple machines without generating excessive network traffic. Our method is to average the neural network parameters periodically (typically every minute or two), and redistribute the averaged parameters to the machines for further training. Each machine sees different data. By itself, this method does not work very well. However, we have another method, an approximate and efficient implementation of Natural Gradient for Stochastic Gradient Descent (NG-SGD), which seems to allow our periodic-averaging method to work well, as well as substantially improving the convergence of SGD on a single machine.
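The periodic parameter-averaging scheme described in the abstract is simple to sketch. The toy example below is a hypothetical numpy illustration, not the Kaldi implementation: the linear model, learning rate, data shards, and sync interval are all illustrative assumptions, and the NG-SGD preconditioning that the paper pairs with averaging is omitted. Each worker runs plain SGD on its own shard, and every `sync_every` steps the parameters are averaged and redistributed.

```python
# Minimal sketch of periodic parameter averaging across workers.
# Hypothetical toy setup (linear regression, numpy), not the Kaldi code.
import numpy as np

rng = np.random.default_rng(0)

dim, n_workers, sync_every, lr = 10, 4, 100, 0.05
true_w = rng.normal(size=dim)

# Each worker sees different data: give each one its own shard.
shards = []
for _ in range(n_workers):
    x = rng.normal(size=(2000, dim))
    y = x @ true_w + 0.1 * rng.normal(size=len(x))
    shards.append((x, y))

# All workers start from the same parameters.
workers = [np.zeros(dim) for _ in range(n_workers)]

def sgd_step(w, x, y, lr):
    """One SGD step on a single example with squared loss."""
    grad = (w @ x - y) * x
    return w - lr * grad

for step in range(1, 1001):
    # Each worker takes a local SGD step on its own data.
    for k in range(n_workers):
        x, y = shards[k]
        i = step % len(x)
        workers[k] = sgd_step(workers[k], x[i], y[i], lr)

    # Periodically average the parameters and redistribute them,
    # which is the only communication between workers.
    if step % sync_every == 0:
        avg = np.mean(workers, axis=0)
        workers = [avg.copy() for _ in range(n_workers)]

avg = np.mean(workers, axis=0)
print("distance of averaged model from true weights:", np.linalg.norm(avg - true_w))
```

In the paper's setting the averaging interval corresponds to a minute or two of wall-clock training rather than a fixed step count, and the abstract notes that averaging alone does not work very well with plain SGD; the approximate natural-gradient method (NG-SGD) is what makes the combination effective.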

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

    cs.LG 2015-02 conditional novelty 8.0

    Batch Normalization normalizes layer inputs per mini-batch to reduce internal covariate shift, allowing higher learning rates, less careful initialization, and faster convergence in deep networks.

  2. Rescaled Asynchronous SGD: Optimal Distributed Optimization under Data and System Heterogeneity

    cs.LG 2026-05 unverdicted novelty 6.0

    Rescaled ASGD recovers convergence to the true global objective by rescaling worker stepsizes proportional to computation times, matching the known time lower bound in the leading term under non-convex smoothness and ...

  3. Stabilized Proximal Point Method via Trust Region Control

    math.OC 2026-04 unverdicted novelty 6.0

    A trust-region stabilized proximal point method enforces a displacement condition to achieve linear descent for general nonsmooth convex problems.