pith. machine review for the scientific record.

arxiv: 1804.11271 · v2 · submitted 2018-04-30 · 📊 stat.ML · cs.LG

Recognition: unknown

Gaussian Process Behaviour in Wide Deep Neural Networks

Authors on Pith: no claims yet
classification: 📊 stat.ML · cs.LG
keywords: gaussian · networks · deep · process · wide · behaviour · literature · neural
0 comments
read the original abstract

Whilst deep neural networks have shown great empirical success, there is still much work to be done to understand their theoretical properties. In this paper, we study the relationship between random, wide, fully connected, feedforward networks with more than one hidden layer and Gaussian processes with a recursive kernel definition. We show that, under broad conditions, as we make the architecture increasingly wide, the implied random function converges in distribution to a Gaussian process, formalising and extending existing results by Neal (1996) to deep networks. To evaluate convergence rates empirically, we use maximum mean discrepancy. We then compare finite Bayesian deep networks from the literature to Gaussian processes in terms of the key predictive quantities of interest, finding that in some cases the agreement can be very close. We discuss the desirability of Gaussian process behaviour and review non-Gaussian alternative models from the literature.
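
The recursive kernel the abstract refers to can be written down concretely. The sketch below is illustrative rather than the paper's exact construction: it assumes a ReLU nonlinearity, specific weight/bias variances (sigma_w^2 = 2, sigma_b^2 = 0), and an RBF kernel for the MMD estimate, whereas the paper's result covers broad classes of nonlinearities. It computes the standard arc-cosine NNGP recursion for a fully connected network, draws prior samples from finite-width random networks and from the limiting Gaussian process, and compares the two sample sets with an unbiased MMD^2 estimate.

import numpy as np

def nngp_kernel(X1, X2, depth=3, sigma_w2=2.0, sigma_b2=0.0):
    """Recursive kernel of the infinite-width limit of a ReLU network with
    `depth` hidden layers (the standard arc-cosine recursion)."""
    d_in = X1.shape[1]
    # Layer-1 pre-activation covariance: K^1(x, x') = sigma_b^2 + sigma_w^2 <x, x'> / d_in
    K12 = sigma_b2 + sigma_w2 * (X1 @ X2.T) / d_in
    K11 = sigma_b2 + sigma_w2 * np.sum(X1 * X1, axis=1) / d_in   # diagonal for X1
    K22 = sigma_b2 + sigma_w2 * np.sum(X2 * X2, axis=1) / d_in   # diagonal for X2
    for _ in range(depth):
        # ReLU recursion: K^{l+1}(x,x') = sigma_b^2 + sigma_w^2/(2*pi)
        #   * sqrt(K^l(x,x) K^l(x',x')) * (sin(theta) + (pi - theta) * cos(theta)),
        # with theta the angle between the layer-l pre-activations at x and x'.
        norms = np.sqrt(np.outer(K11, K22))
        cos_t = np.clip(K12 / norms, -1.0, 1.0)
        theta = np.arccos(cos_t)
        K12 = sigma_b2 + sigma_w2 / (2 * np.pi) * norms * (np.sin(theta) + (np.pi - theta) * cos_t)
        # On the diagonal theta = 0, so the bracket collapses to pi.
        K11 = sigma_b2 + 0.5 * sigma_w2 * K11
        K22 = sigma_b2 + 0.5 * sigma_w2 * K22
    return K12

def finite_net_outputs(X, width=512, depth=3, sigma_w2=2.0, sigma_b2=0.0, n_draws=200, seed=None):
    """Outputs of independently sampled finite-width ReLU networks at the rows of X."""
    rng = np.random.default_rng(seed)
    outs = []
    for _ in range(n_draws):
        h, fan_in = X, X.shape[1]
        for _ in range(depth):
            W = rng.normal(0.0, np.sqrt(sigma_w2 / fan_in), size=(fan_in, width))
            b = rng.normal(0.0, np.sqrt(sigma_b2), size=width)
            h, fan_in = np.maximum(h @ W + b, 0.0), width
        w_out = rng.normal(0.0, np.sqrt(sigma_w2 / fan_in), size=(fan_in, 1))
        b_out = rng.normal(0.0, np.sqrt(sigma_b2))
        outs.append((h @ w_out).ravel() + b_out)
    return np.stack(outs)                       # shape (n_draws, n_inputs)

def mmd2_unbiased(A, B, bandwidth=1.0):
    """Unbiased MMD^2 between two sample sets (rows are samples) with an RBF kernel."""
    def k(P, Q):
        d2 = np.sum(P**2, 1)[:, None] + np.sum(Q**2, 1)[None, :] - 2.0 * P @ Q.T
        return np.exp(-d2 / (2.0 * bandwidth**2))
    Ka, Kb, Kab = k(A, A), k(B, B), k(A, B)
    m, n = len(A), len(B)
    return ((Ka.sum() - np.trace(Ka)) / (m * (m - 1))
            + (Kb.sum() - np.trace(Kb)) / (n * (n - 1))
            - 2.0 * Kab.mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))                                    # five test inputs in four dimensions
K = nngp_kernel(X, X) + 1e-8 * np.eye(len(X))                  # jitter for numerical stability
gp_draws = rng.multivariate_normal(np.zeros(len(X)), K, size=200)
net_draws = finite_net_outputs(X, width=512, n_draws=200, seed=1)
print("unbiased MMD^2, finite net vs. GP:", mmd2_unbiased(net_draws, gp_draws))

With these settings the printed MMD^2 should be close to zero and shrink as `width` grows, which is the kind of empirical convergence check the abstract describes; the variances, depth, and bandwidth here are arbitrary illustrative choices.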

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. How Long Does Infinite Width Last? Signal Propagation in Long-Range Linear Recurrences

    cs.LG 2026-05 unverdicted novelty 7.0

    In linear recurrent models, infinite-width signal propagation remains accurate only for depths t much smaller than sqrt(width n), with a critical regime at t ~ c sqrt(n) where finite-width effects emerge and dominate ...

  2. Stochastic Scaling Limits and Synchronization by Noise in Deep Transformer Models

    math.PR 2026-04 unverdicted novelty 7.0

    Transformers converge pathwise to a stochastic particle system and SPDE in the scaling limit, exhibiting synchronization by noise and exponential energy dissipation when common noise is coercive relative to self-atten...

  3. On the (In-)Security of the Shuffling Defense in Transformer Secure Inference

    cs.CR 2026-05 conditional novelty 6.0

    An attack aligns differently shuffled intermediate activations from secure Transformer inference queries to recover model weights with low error, at a query cost of roughly one dollar.

  4. Optimal Architecture and Fundamental Bounds in Neural Network Field Theory

    hep-th 2026-04 unverdicted novelty 6.0

    The α=0 architecture in NNFT minimizes finite-width variance, removes IR corrections, and sets a fundamental SNR bound for correlation functions in scalar field theory.