pith. machine review for the scientific record

arxiv: 1704.02895 · v1 · submitted 2017-04-10 · 💻 cs.CV

Recognition: unknown

ActionVLAD: Learning spatio-temporal aggregation for action classification

Authors on Pith: no claims yet
classification: 💻 cs.CV
keywords: classification · across · spatio-temporal · video · action · aggregation · architecture · base
original abstract

In this work, we introduce a new video representation for action classification that aggregates local convolutional features across the entire spatio-temporal extent of the video. We do so by integrating state-of-the-art two-stream networks with learnable spatio-temporal feature aggregation. The resulting architecture is end-to-end trainable for whole-video classification. We investigate different strategies for pooling across space and time and combining signals from the different streams. We find that: (i) it is important to pool jointly across space and time, but (ii) appearance and motion streams are best aggregated into their own separate representations. Finally, we show that our representation outperforms the two-stream base architecture by a large margin (13% relative) as well as outperforms other baselines with comparable base architectures on HMDB51, UCF101, and Charades video classification benchmarks.
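The learnable aggregation the abstract refers to is a NetVLAD-style layer applied jointly over all frames and spatial positions of a stream. A minimal NumPy sketch of that idea follows; the function name, the shapes, and the distance-based form of the soft assignment are assumptions for illustration, not the authors' code:

```python
import numpy as np

def action_vlad_pool(features, anchors, alpha=100.0):
    """NetVLAD-style soft-assignment pooling over all spatio-temporal positions.

    features : (N, D) local conv features from every frame and spatial
               location of the video (N = T * H * W).
    anchors  : (K, D) learnable cluster anchors ("action words").
    Returns a (K * D,) video-level descriptor.
    """
    # Soft-assign each local feature to the K anchors: a softmax over
    # negative squared distances, sharpened by alpha.
    d2 = ((features[:, None, :] - anchors[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    logits = -alpha * d2
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    assign = np.exp(logits)
    assign /= assign.sum(axis=1, keepdims=True)      # (N, K)

    # Accumulate assignment-weighted residuals (feature minus anchor),
    # summing jointly over space and time -- finding (i) in the abstract.
    residuals = features[:, None, :] - anchors[None, :, :]    # (N, K, D)
    vlad = (assign[:, :, None] * residuals).sum(axis=0)       # (K, D)

    # Intra-normalize each cluster row, then L2-normalize the flat descriptor.
    vlad /= np.linalg.norm(vlad, axis=1, keepdims=True) + 1e-12
    flat = vlad.ravel()
    return flat / (np.linalg.norm(flat) + 1e-12)

# Hypothetical usage: 10 frames of a 7x7 conv map with 512-d features and
# 64 anchors. Per finding (ii), the motion stream would get its own anchors
# and descriptor, concatenated with this one before the classifier.
rgb_feats = np.random.randn(10 * 7 * 7, 512).astype(np.float32)
rgb_anchors = np.random.randn(64, 512).astype(np.float32)
rgb_desc = action_vlad_pool(rgb_feats, rgb_anchors)  # shape (64 * 512,)
```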

This paper has not been read by Pith yet.

discussion (0)
