pith. machine review for the scientific record.

arxiv: 1611.01491 · v6 · submitted 2016-11-04 · 💻 cs.LG · cond-mat.dis-nn · cs.AI · cs.CC · stat.ML

Recognition: unknown

Understanding Deep Neural Networks with Rectified Linear Units

Authors on Pith: no claims yet
classification 💻 cs.LG · cond-mat.dis-nn · cs.AI · cs.CC · stat.ML
keywords relu · deep · exponential · family · functions · hidden · size · construction
original abstract

In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU). We give an algorithm to train a ReLU DNN with one hidden layer to *global optimality* with runtime polynomial in the data size, albeit exponential in the input dimension. Further, we improve on the known lower bounds on size (from exponential to super exponential) for approximating a ReLU deep net function by a shallower ReLU net. Our gap theorems hold for smoothly parametrized families of "hard" functions, in contrast to the countable, discrete families known in the literature. An example consequence of our gap theorems is the following: for every natural number $k$ there exists a function representable by a ReLU DNN with $k^2$ hidden layers and total size $k^3$, such that any ReLU DNN with at most $k$ hidden layers will require at least $\frac{1}{2}k^{k+1}-1$ total nodes. Finally, for the family of $\mathbb{R}^n\to \mathbb{R}$ DNNs with ReLU activations, we show a new lower bound on the number of affine pieces, which is larger than previous constructions in certain regimes of the network architecture; most distinctively, our lower bound is demonstrated by an explicit construction of a *smoothly parameterized* family of functions attaining this scaling. Our construction utilizes the theory of zonotopes from polyhedral theory.
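To get a feel for the depth-width tradeoff the abstract describes, note the arithmetic of the example consequence at $k=3$: the deep net uses $9$ hidden layers and only $27$ total nodes, while any net with at most $3$ hidden layers needs at least $\frac{1}{2}\cdot 3^{4}-1 = 39.5$, hence $40$, total nodes. Below is a minimal Python sketch of the classic "hard" function in this literature: a $k$-fold composition of the tent map, where each stage is a width-2 ReLU layer, so depth $k$ yields $2^k$ affine pieces from only $2k$ units. This illustrates the flavor of the gap theorems, not the paper's own zonotope-based construction; the tent map and the piece-counting grid here are assumptions of this sketch.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent(x):
    # Tent map on [0, 1] written as one width-2 ReLU layer:
    #   t(x) = 2*relu(x) - 4*relu(x - 1/2)
    # which equals 2x on [0, 1/2] and 2 - 2x on (1/2, 1].
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def hard_function(x, k):
    # k-fold composition: a depth-k, width-2 ReLU net whose graph
    # is a sawtooth with 2**k affine pieces on [0, 1].
    for _ in range(k):
        x = tent(x)
    return x

# Count affine pieces by detecting slope changes on a dyadic grid
# (the sawtooth's breakpoints land exactly on grid points).
k = 4
xs = np.linspace(0.0, 1.0, 2**16 + 1)
ys = hard_function(xs, k)
slopes = np.diff(ys) / np.diff(xs)
pieces = 1 + np.count_nonzero(np.abs(np.diff(slopes)) > 1e-6)
print(f"k={k}: {pieces} affine pieces (expected {2**k})")
```

Running the sketch prints 16 pieces for k=4; a one-hidden-layer ReLU net, by contrast, can only produce about as many pieces as it has units, which is the shape of the depth-vs-size gap the paper makes precise.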

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Forward citations

Cited by 6 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Reconstructing the Stripping History of the Sagittarius Stream with Neural Networks

    astro-ph.GA 2026-05 unverdicted novelty 7.0

    A neural network trained on simulations infers stripping times for Sagittarius stream stars from phase-space data, measuring a 0.3 dex/Gyr metallicity gradient and estimating ages for globular clusters such as Pal 12 ...

  2. Non-Uniqueness of Solutions in Neural Variational Methods

    math.NA 2026-05 unverdicted novelty 7.0

    Finite linear measurements in variational neural discretizations cause ill-posed discrete problems with non-unique minimizers, independent of the underlying continuous variational problem's well-posedness.

  3. Non-Uniqueness of Solutions in Neural Variational Methods

    math.NA 2026-05 unverdicted novelty 6.0

    Variational neural discretizations are structurally ill-posed with non-unique minimizers due to finite linear measurements, independent of the continuous variational problem's well-posedness.

  4. Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models

    cs.LG 2024-01 unverdicted novelty 6.0

    SPIN lets weak LLMs become strong by self-generating training data from previous model versions and training to prefer human-annotated responses over its own outputs, outperforming DPO even with extra GPT-4 data on be...

  5. Low-Cost Stereo Vision for Robust 3D Positioning of Thin Radiata Pine Branches in Autonomous Drone Pruning

    cs.CV 2026-05 unverdicted novelty 5.0

    A drone-mounted stereo camera pipeline with YOLO segmentation, deep stereo depth, centroid triangulation, and MAD outlier rejection achieves robust 3D positioning of thin pine branches at 1-2 m distances.

  6. Positioning radiata pine branches requiring pruning by drone stereo vision

    cs.CV 2026-04 unverdicted novelty 3.0

    Drone stereo vision pipeline segments pine branches with YOLO variants and estimates depth with deep stereo networks, yielding more coherent maps than SGBM at 1-2 m distances.