pith. machine review for the scientific record.

arxiv: 1903.11257 · v2 · submitted 2019-03-27 · 💻 cs.LG · stat.ML

Recognition: unknown

How Can We Be So Dense? The Benefits of Using Highly Sparse Representations

Authors on Pith: no claims yet
classification 💻 cs.LG stat.ML
keywords sparse, networks, representations, dense, accuracy, benefits, dimensionality, noise
read the original abstract

Most artificial networks today rely on dense representations, whereas biological networks rely on sparse representations. In this paper we show how sparse representations can be more robust to noise and interference, as long as the underlying dimensionality is sufficiently high. A key intuition that we develop is that the ratio of the operable volume around a sparse vector to the volume of the representational space decreases exponentially with dimensionality. We then analyze computationally efficient sparse networks containing both sparse weights and activations. Simulations on MNIST and the Google Speech Command Dataset show that such networks demonstrate significantly improved robustness and stability compared to dense networks, while maintaining competitive accuracy. We discuss the potential benefits of sparsity on accuracy, noise robustness, hyperparameter tuning, learning speed, computational efficiency, and power requirements.
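The volume-ratio intuition can be illustrated with the standard sparse-binary false-match calculation: count how many vectors with b active bits overlap a fixed vector with a active bits in at least theta positions, and divide by the total number of such vectors. The sketch below is illustrative, not the authors' code; the dimensionalities, sparsity levels, and threshold are example values chosen for demonstration, not the paper's experimental settings.

```python
# Sketch (not the authors' code): shows how the fraction of the space that can
# be confused with a given sparse binary vector shrinks rapidly as the
# dimensionality n grows, with the number of active bits held fixed.
# Parameters a (active bits in the stored vector), b (active bits in the
# random candidate), and theta (overlap threshold) are assumed example values.
from math import comb

def false_match_probability(n: int, a: int, b: int, theta: int) -> float:
    """Probability that a random n-dim binary vector with b active bits
    overlaps a fixed vector with a active bits in at least theta positions."""
    matching = sum(comb(a, t) * comb(n - a, b - t)
                   for t in range(theta, min(a, b) + 1))
    return matching / comb(n, b)

# With sparsity fixed, the match probability drops by orders of magnitude
# as dimensionality increases.
for n in (64, 128, 256, 512, 1024):
    p = false_match_probability(n, a=16, b=16, theta=8)
    print(f"n={n:5d}  false-match probability ~ {p:.3e}")
```

Running this shows the false-match probability falling steeply with n, which mirrors the abstract's claim that the operable-volume ratio around a sparse vector decreases exponentially with dimensionality.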

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Selectivity and Shape in the Design of Forward-Forward Goodness Functions

    cs.LG · 2026-03 · unverdicted · novelty 7.0

    Shape- and peak-sensitive goodness functions for Forward-Forward deliver up to 72pp gains over sum-of-squares, reaching 98.2% on MNIST and 89% on Fashion-MNIST.