pith. machine review for the scientific record.

arXiv:1512.00818 · v2 · submitted 2015-12-02 · cs.CV · cs.CL · cs.LG


Zero-Shot Event Detection by Multimodal Distributional Semantic Embedding of Videos

keywords: event, videos, distributional, semantic, detection, query, embedding, free
abstract

We propose a new zero-shot event detection method based on multimodal distributional semantic embedding of videos. Our model embeds object and action concepts, as well as other available modalities from videos, into a distributional semantic space. To our knowledge, this is the first zero-shot event detection model built on top of distributional semantics, and it extends that framework in three directions: (a) semantic embedding of multimodal information in videos (with a focus on the visual modalities), (b) automatically determining the relevance of concepts/attributes to a free-text query, which could be useful for other applications, and (c) retrieving videos by their content given a free-text event query (e.g., "changing a vehicle tire"). We embed videos into a distributional semantic space and then measure the similarity between each video and the event query in free-text form. We validated our method on the large TRECVID MED (Multimedia Event Detection) challenge. Using only the event title as a query, our method outperformed the state of the art, which uses full event descriptions: MAP improved from 12.6% to 13.5%, and ROC-AUC improved from 0.73 to 0.83. It is also an order of magnitude faster.
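The core retrieval idea in the abstract can be sketched in a few lines: embed the detected concepts of a video and the words of a free-text query into a shared word-vector space, then rank videos by cosine similarity to the query. The toy vectors, concept names, and pooling choices below are illustrative assumptions, not the paper's actual model (which uses learned distributional embeddings such as word2vec and a relevance-weighting scheme):

```python
import numpy as np

# Toy 4-d "distributional" embeddings for a handful of concept words.
# In practice these would come from a pretrained word-embedding model;
# the vectors here are hand-picked placeholders for illustration.
EMB = {
    "changing": np.array([0.9, 0.1, 0.0, 0.2]),
    "vehicle":  np.array([0.1, 0.8, 0.3, 0.0]),
    "tire":     np.array([0.2, 0.7, 0.4, 0.1]),
    "wheel":    np.array([0.2, 0.6, 0.5, 0.1]),
    "car":      np.array([0.1, 0.9, 0.2, 0.0]),
    "cooking":  np.array([0.0, 0.1, 0.1, 0.9]),
}

def embed_query(text):
    """Embed a free-text query as the mean of its known word vectors."""
    words = [w for w in text.lower().split() if w in EMB]
    return np.mean([EMB[w] for w in words], axis=0)

def embed_video(concept_scores):
    """Embed a video as the detector-confidence-weighted sum of the
    embeddings of the object/action concepts detected in it."""
    v = sum(score * EMB[c] for c, score in concept_scores.items())
    return v / np.linalg.norm(v)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical detector outputs (concept -> confidence) for two videos.
tire_video = {"wheel": 0.9, "car": 0.8, "tire": 0.7}
cook_video = {"cooking": 0.95}

q = embed_query("changing a vehicle tire")
# The tire-change video ranks above the cooking video for this query.
print(cosine(embed_video(tire_video), q) > cosine(embed_video(cook_video), q))
```

Zero-shot detection falls out of this setup: no example videos of the event are needed, only concept detectors and word embeddings shared between videos and queries.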

This paper has not been read by Pith yet.

discussion (0)
