pith. machine review for the scientific record.

Implicit Regularization in Deep Matrix Factorization, October 2019

2 Pith papers cite this work. Polarity classification is still in progress.

2 Pith papers citing it

citations by year: 2026 (2)

representative citing papers

A Theory of Saddle Escape in Deep Nonlinear Networks

cs.LG · 2026-05-02 · conditional · novelty 7.0 · 2 refs

An exact norm-imbalance identity classifies activations into four classes and reduces deep nonlinear training flow to a scalar ODE that predicts saddle escape time scaling as ε^{−(r−2)} for r bottleneck layers.

citing papers explorer

Showing 2 of 2 citing papers.

  • Estimating Implicit Regularization in Deep Learning stat.ML · 2026-05-06 · unverdicted · none · ref 4

    Gradient matching empirically recovers implicit regularization effects, such as the ℓ2-type penalties induced by early stopping and dropout in neural networks.

  • A Theory of Saddle Escape in Deep Nonlinear Networks cs.LG · 2026-05-02 · conditional · none · ref 6 · 2 links

    An exact norm-imbalance identity classifies activations into four classes and reduces deep nonlinear training flow to a scalar ODE that predicts saddle escape time scaling as ε^{−(r−2)} for r bottleneck layers.