pith. machine review for the scientific record.

arxiv: 1006.0448 · v1 · submitted 2010-06-02 · 💻 cs.NE

Recognition: unknown

Emergence of Complex-Like Cells in a Temporal Product Network with Local Receptive Fields

Authors on Pith: no claims yet
Classification: 💻 cs.NE
Keywords: cells, units, complex, feature, features, local, receptive, architecture
Original abstract

We introduce a new neural architecture and an unsupervised algorithm for learning invariant representations from temporal sequences of images. The system uses two groups of complex cells whose outputs are combined multiplicatively: one that represents the content of the image, constrained to be constant over several consecutive frames, and one that represents the precise location of features, which is allowed to vary over time but is constrained to be sparse. The architecture uses an encoder to extract features and a decoder to reconstruct the input from the features. The method was applied to patches extracted from consecutive movie frames and produces orientation- and frequency-selective units analogous to the complex cells in V1. An extension of the method is proposed to train a network composed of units with local receptive fields spread over a large image of arbitrary size. A layer of complex cells, subject to sparsity constraints, pools feature units over overlapping local neighborhoods, which causes the feature units to organize themselves into pinwheel patterns of orientation-selective receptive fields, similar to those observed in the mammalian visual cortex. A feed-forward encoder efficiently computes the feature representation of full images.
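
To make the factorization concrete, here is a minimal sketch of the multiplicative "what × where" idea in PyTorch. It assumes single-layer linear encoders and decoder, replaces the paper's hard constant-over-frames constraint on the content code with a quadratic slowness penalty, and uses an L1 penalty for the sparsity constraint on the location code; every name, layer size, and penalty weight below is an illustrative assumption, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class TemporalProductSketch(nn.Module):
    """Hypothetical sketch: a "what" (content) code and a "where" (location)
    code are combined multiplicatively before a linear decoder reconstructs
    the input frame. Layer sizes are arbitrary assumptions."""

    def __init__(self, n_pixels=256, n_units=64):
        super().__init__()
        self.enc_what = nn.Linear(n_pixels, n_units)    # content encoder
        self.enc_where = nn.Linear(n_pixels, n_units)   # location encoder
        self.decoder = nn.Linear(n_units, n_pixels, bias=False)

    def forward(self, frames):                      # frames: (T, n_pixels)
        what = torch.relu(self.enc_what(frames))    # content code
        where = torch.relu(self.enc_where(frames))  # location code
        recon = self.decoder(what * where)          # multiplicative combination
        return recon, what, where

def objective(model, frames, lam_sparse=0.1, lam_slow=1.0):
    """Reconstruction + sparsity on 'where' + slowness on 'what'."""
    recon, what, where = model(frames)
    reconstruction = ((recon - frames) ** 2).mean()
    sparsity = where.abs().mean()                    # "where" varies but is sparse
    slowness = ((what[1:] - what[:-1]) ** 2).mean()  # "what" ~constant over frames
    return reconstruction + lam_sparse * sparsity + lam_slow * slowness

# Illustrative usage: one gradient step on T = 8 consecutive frame patches.
model = TemporalProductSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.randn(8, 256)  # stand-in for real movie patches
opt.zero_grad()
loss = objective(model, frames)
loss.backward()
opt.step()
```

For the local-receptive-field extension, the abstract describes a layer of complex cells that pools feature units over overlapping local neighborhoods. A hedged sketch of one such pooling rule follows, written as overlapping L2 pooling over a 3×3 window; the window size and the L2 form are assumptions, and per the abstract it is the sparsity constraint on these pooled outputs that drives the pinwheel organization of the underlying feature units.

```python
import torch
import torch.nn.functional as F

def complex_pool(features, k=3):
    """Pool feature units over overlapping local neighborhoods (assumed
    L2 pooling, stride 1, k x k window). features: (1, C, H, W) map from
    units with local receptive fields tiling a large image."""
    return torch.sqrt(F.avg_pool2d(features ** 2, k, stride=1, padding=k // 2))
```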

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work, sorted by Pith novelty score.

  1. What-Where Transformer: A Slot-Centric Visual Backbone for Concurrent Representation and Localization

    cs.CV · 2026-05 · unverdicted · novelty 7.0

    The What-Where Transformer achieves explicit what-where separation in a ViT-style backbone via concurrent token and attention-map streams, yielding emergent object discovery from attention maps and better weakly-super...