Steerable CNNs
Original abstract
It has long been recognized that the invariance and equivariance properties of a representation are critically important for the success of many vision tasks. In this paper we present Steerable Convolutional Neural Networks, an efficient and flexible class of equivariant convolutional networks. We show that steerable CNNs achieve state of the art results on the CIFAR image classification benchmark. The mathematical theory of steerable representations reveals a type system in which any steerable representation is a composition of elementary feature types, each one associated with a particular kind of symmetry. We show how the parameter cost of a steerable filter bank depends on the types of the input and output features, and show how to use this knowledge to construct CNNs that utilize parameters effectively.
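To make the equivariance property concrete: a feature map Φ is equivariant when transforming the input by a group element g and then applying Φ gives the same result as applying Φ and then transforming the output, Φ(π(g)x) = π'(g)Φ(x). Below is a minimal numerical check of this condition for the simplest relevant group, the four 90-degree rotations C4, using a plain lifting correlation. The helper lift_c4, the random filter, and the wrap-around boundary are illustrative choices of this sketch, not the paper's steerable construction, which instead parameterizes filters by feature type.

# Minimal numerical check of rotation equivariance for a C4 "lifting"
# correlation, in the spirit of group-equivariant/steerable CNNs.
# Illustrative sketch only: lift_c4 is a hypothetical helper, not the
# paper's implementation.
import numpy as np
from scipy.ndimage import correlate

rng = np.random.default_rng(0)
f = rng.standard_normal((16, 16))    # input image
psi = rng.standard_normal((5, 5))    # one (odd-sized) filter

def lift_c4(image, filt):
    # Correlate the image with all four 90-degree rotations of the
    # filter, giving one feature map per element of the group C4.
    return np.stack([correlate(image, np.rot90(filt, k), mode='wrap')
                     for k in range(4)])

out = lift_c4(f, psi)                # shape (4, 16, 16)
out_rot = lift_c4(np.rot90(f), psi)  # response to the rotated image

# Equivariance: rotating the input rotates each feature map and
# cyclically permutes the group axis -- the output is not invariant,
# but transforms under a definite representation of C4.
expected = np.stack([np.rot90(out[(k - 1) % 4]) for k in range(4)])
print("C4 equivariance holds:", np.allclose(out_rot, expected))

With periodic (wrap) boundaries the identity holds exactly; with zero padding it would hold only away from the image border. The point mirrors the abstract: the output transforms under a representation of the group, here a cyclic permutation of the four orientation channels combined with a spatial rotation.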
This paper has not been read by Pith yet.
Forward citations
Cited by 4 Pith papers
- Rotation Equivariant Mamba for Vision Tasks
  EQ-VMamba adds rotation-equivariant cross-scan and group Mamba blocks to enforce end-to-end rotation equivariance, yielding better rotation robustness, competitive accuracy, and roughly 50% fewer parameters than non-equivariant baselines.
- Graph Neural Networks in the Wilson Loop Representation of Abelian Lattice Gauge Theories
  A gauge-invariant GNN using Wilson loops as inputs accurately predicts observables and simulates dynamics in Z2 and U(1) lattice gauge models.
- Rotation Equivariant Convolutions in Deformable Registration of Brain MRI
  Rotation-equivariant convolutions in deformable brain MRI registration networks deliver higher accuracy with fewer parameters, greater robustness to rotations, and better performance on limited training data.
- Leveraging Kernel Symmetry for Joint Compression and Error Mitigation in Edge Model Transfer
  A DoF codec exploiting kernel symmetries compresses neural models for noisy channels and projects received weights onto the symmetry subspace to mitigate errors, outperforming pruning on MNIST and CIFAR-10.