pith. machine review for the scientific record.

arxiv: 1502.05767 · v4 · submitted 2015-02-20 · 💻 cs.SC · cs.LG · stat.ML

Recognition: unknown

Automatic differentiation in machine learning: a survey

Authors on Pith: no claims yet
classification: 💻 cs.SC · cs.LG · stat.ML
keywords: differentiation, learning, machine, automatic, techniques, applications, autodiff

Original abstract

Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "autodiff", is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other's results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names "dynamic computational graphs" and "differentiable programming". We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings.
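The abstract distinguishes AD from symbolic and numerical differentiation: AD evaluates exact derivatives alongside a program's normal execution rather than manipulating expressions or approximating with finite differences. A minimal sketch of forward-mode AD using dual numbers illustrates the idea; the names here (`Dual`, `f`) are illustrative, not from the paper.

```python
class Dual:
    """A dual number a + b*eps with eps^2 = 0; `dot` carries the derivative."""

    def __init__(self, val, dot=0.0):
        self.val = val  # primal (function) value
        self.dot = dot  # tangent (derivative) value

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__


def f(x):
    # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2
    return x * x * x + 2 * x


# Seed x = 2 with tangent 1 to obtain df/dx at x = 2 in a single pass.
result = f(Dual(2.0, 1.0))
print(result.val)  # 12.0  (2^3 + 2*2)
print(result.dot)  # 14.0  (3*2^2 + 2)
```

Reverse mode (the generalization of backpropagation that the survey covers) instead records the computation and propagates adjoints backward, which is more efficient when a function has many inputs and few outputs.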

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 7 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. ADELIA: Automatic Differentiation for Efficient Laplace Inference Approximations

    cs.DC 2026-05 conditional novelty 7.0

    ADELIA is the first AD-enabled INLA system that computes exact hyperparameter gradients via a structure-exploiting multi-GPU backward pass, delivering 4.2-7.9x per-gradient speedups and 5-8x better energy efficiency t...

  2. Exploring the Boundaries of Differentiable Radiation Transport and Detector Simulation

    physics.ins-det 2026-05 unverdicted novelty 6.0

    Targeted halting of gradient flow at unstable material boundaries enables stable derivatives for optimizing detector designs in radiation transport simulations.

  3. Large-eddy simulation nets (LESnets) based on physics-informed neural operator for wall-bounded turbulence

    physics.flu-dyn 2026-04 unverdicted novelty 6.0

    LESnets integrates LES equations and the law of the wall into F-FNO to enable data-free, stable long-term predictions of wall-bounded turbulence at Re_tau up to 1000 on coarse grids, matching traditional LES accuracy ...

  4. Efficient optimisation of multi-parameter quantum control protocols for strongly-coupled systems

    quant-ph 2026-04 unverdicted novelty 6.0

    Gradient-based optimization of SUPER and FTPE pulse protocols via auto-differentiation and uniTEMPO yields higher preparation fidelities than resonant pi-pulses or standard two-photon excitation, with the advantage in...

  5. Heterogeneous Variational Inference for Markov Degradation Hazard Models: Discretized Mixture with Interpretable Clusters

    cs.LG 2026-04 unverdicted novelty 5.0

    A discretized finite mixture model with ADVI identifies interpretable low- and high-risk clusters in Markov degradation hazard models for 280 industrial pumps, achieving 84x speedup over NUTS while enforcing stability...

  6. Physics-Informed Neural Networks for Solving Two-Flavor Neutrino Oscillations in Vacuum and Matter Environments for Atmospheric and Reactor Neutrinos

    hep-ph 2026-04 unverdicted novelty 5.0

    Physics-informed neural networks solve two-flavor neutrino oscillation equations in vacuum and matter with mean squared errors of order 10^{-3} to 10^{-4}, matching analytical results.
