pith. machine review for the scientific record.

arxiv: 1403.1840 · v3 · submitted 2014-03-07 · 💻 cs.CV

Recognition: unknown

Multi-scale Orderless Pooling of Deep Convolutional Activation Features

Authors on Pith: no claims yet
classification 💻 cs.CV
keywords activations, classification, orderless, pooling, convolutional, datasets, deep, global
original abstract

Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets.
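
The abstract describes a three-level pipeline: a global CNN activation for the whole image, plus densely sampled patches at two finer scales whose activations are VLAD-pooled per level and concatenated. Below is a minimal sketch of that pipeline in Python/NumPy. It is not the authors' implementation: `extract_cnn_activation` is a hypothetical stand-in for a pretrained CNN forward pass, and the patch strides, codebook size, and omission of any dimensionality reduction before pooling are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_cnn_activation(patch, dim=512):
    # Hypothetical stand-in for a pretrained CNN forward pass on one patch.
    # In the paper this would be a fully-connected-layer activation of an
    # ImageNet-pretrained network; a deterministic random projection is used
    # here only so the sketch runs without model weights.
    rng = np.random.default_rng(abs(hash(patch.tobytes())) % (2**32))
    return rng.standard_normal(dim).astype(np.float32)

def extract_patches(image, patch_size, stride):
    # Densely sample square patches at one scale level.
    h, w = image.shape[:2]
    return [image[y:y + patch_size, x:x + patch_size]
            for y in range(0, h - patch_size + 1, stride)
            for x in range(0, w - patch_size + 1, stride)]

def vlad_encode(features, kmeans):
    # Orderless VLAD pooling: sum residuals to the nearest codeword,
    # then power- and L2-normalize the flattened result.
    k, d = kmeans.cluster_centers_.shape
    assignments = kmeans.predict(features)
    vlad = np.zeros((k, d), dtype=np.float32)
    for f, a in zip(features, assignments):
        vlad[a] += f - kmeans.cluster_centers_[a]
    vlad = vlad.ravel()
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))   # power normalization
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad

def mop_cnn(image, codebooks, scales=((256, 256), (128, 64), (64, 32))):
    # Concatenate one global activation (level 1) with VLAD-pooled patch
    # activations at the finer scale levels, as outlined in the abstract.
    levels = []
    for (patch_size, stride), codebook in zip(scales, codebooks):
        patches = extract_patches(image, patch_size, stride)
        feats = np.stack([extract_cnn_activation(p) for p in patches])
        if codebook is None:                       # level 1: whole image
            levels.append(feats[0] / np.linalg.norm(feats[0]))
        else:                                      # finer levels: VLAD pooling
            levels.append(vlad_encode(feats, codebook))
    return np.concatenate(levels)

# Usage sketch: fit per-scale codebooks on patch activations, then encode.
image = np.random.rand(256, 256, 3).astype(np.float32)
codebooks = [None]                                 # no codebook for the global level
for patch_size, stride in ((128, 64), (64, 32)):
    feats = np.stack([extract_cnn_activation(p)
                      for p in extract_patches(image, patch_size, stride)])
    codebooks.append(KMeans(n_clusters=8, n_init=10).fit(feats))
feature = mop_cnn(image, codebooks)
print(feature.shape)
```

The resulting vector can then be fed to any off-the-shelf classifier or used directly for retrieval, which is the sense in which the abstract calls MOP-CNN a generic feature requiring no joint training for the target dataset.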

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Parameter-Efficient Architectural Modifications for Translation-Invariant CNNs

    cs.CV · 2026-04 · unverdicted · novelty 5.0

    Strategic insertion of Global Average Pooling layers in VGG-16 reduces trainable parameters by 98%, maintains 66.4% ImageNet Top-1 accuracy, doubles translation robustness, and yields superior Spearman correlations in...