
arxiv: 1811.00347 · v2 · submitted 2018-11-01 · 💻 cs.CL

Recognition: unknown

How2: A Large-scale Dataset for Multimodal Language Understanding

Authors on Pith: no claims yet
classification 💻 cs.CL
keywords language, multimodal, how2, translation, understanding, automatic, available, baselines
Abstract

In this paper, we introduce How2, a multimodal collection of instructional videos with English subtitles and crowdsourced Portuguese translations. We also present integrated sequence-to-sequence baselines for machine translation, automatic speech recognition, spoken language translation, and multimodal summarization. By making available data and code for several multimodal natural language tasks, we hope to stimulate more research on these and similar challenges, to obtain a deeper understanding of multimodality in language processing.
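
The baselines described in the abstract pair subtitle text with video features in a single sequence-to-sequence model. The sketch below is not the authors' released code; every module name and dimension is an illustrative assumption. It shows one plausible wiring of such a multimodal baseline in PyTorch: a GRU encoder over source tokens, a linear projection of pooled video features, and a decoder conditioned on the fused state.

```python
# A minimal sketch (not the How2 authors' code) of a multimodal
# sequence-to-sequence baseline: encode source subtitles, fuse a pooled
# video feature vector into the encoder state, and decode target tokens.
import torch
import torch.nn as nn

class MultimodalSeq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, d_model=256, d_video=2048):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, d_model)
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)
        # Project pooled video features (e.g., 2048-d CNN activations,
        # a placeholder size) into the model space.
        self.video_proj = nn.Linear(d_video, d_model)
        self.tgt_emb = nn.Embedding(tgt_vocab, d_model)
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, tgt_vocab)

    def forward(self, src_tokens, video_feats, tgt_tokens):
        # Encode the source subtitle tokens.
        enc_out, enc_state = self.encoder(self.src_emb(src_tokens))
        # Fuse modalities by adding the projected video vector to the
        # final encoder state; real baselines often use attention instead.
        fused = enc_state + self.video_proj(video_feats).unsqueeze(0)
        dec_out, _ = self.decoder(self.tgt_emb(tgt_tokens), fused)
        return self.out(dec_out)  # per-step logits over the target vocabulary

# Toy usage: batch of 2, source length 7, target length 5,
# one pooled 2048-d video vector per example.
model = MultimodalSeq2Seq(src_vocab=1000, tgt_vocab=1200)
logits = model(torch.randint(0, 1000, (2, 7)),
               torch.randn(2, 2048),
               torch.randint(0, 1200, (2, 5)))
print(logits.shape)  # torch.Size([2, 5, 1200])
```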

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation

    cs.CV 2023-07 unverdicted novelty 6.0

    InternVid supplies 7M videos with LLM-generated captions to train ViCLIP, which achieves leading zero-shot action recognition and competitive retrieval performance.

  2. Video-guided Machine Translation with Global Video Context

    cs.CV 2026-04 unverdicted novelty 4.0

    A globally video-guided multimodal translation framework retrieves semantically related video segments from a vector database and applies attention over them to improve subtitle translation accuracy in long videos (a minimal retrieval sketch follows this list).
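
The retrieval step in the second citation can be illustrated with a small stand-in, under assumptions: a flat NumPy array plays the role of the vector database, and cosine similarity selects the top-k semantically related segments to hand to the attention module. The paper's actual system would use a real vector database and learned cross-modal encoders; the embeddings here are random placeholders.

```python
# A minimal sketch (assumptions, not the paper's implementation) of
# vector-database-style retrieval: L2-normalize segment embeddings once,
# then rank them against a query embedding by cosine similarity.
import numpy as np

def build_index(segment_embeddings: np.ndarray) -> np.ndarray:
    # Normalize rows so dot products equal cosine similarities.
    norms = np.linalg.norm(segment_embeddings, axis=1, keepdims=True)
    return segment_embeddings / np.clip(norms, 1e-12, None)

def retrieve(index: np.ndarray, query: np.ndarray, k: int = 5) -> np.ndarray:
    q = query / max(np.linalg.norm(query), 1e-12)
    scores = index @ q                 # cosine similarity to every segment
    return np.argsort(-scores)[:k]     # indices of the k most similar segments

# Toy usage: 10,000 video segments with 512-d embeddings, one subtitle query.
rng = np.random.default_rng(0)
index = build_index(rng.standard_normal((10_000, 512)))
top_segments = retrieve(index, rng.standard_normal(512), k=5)
print(top_segments)  # segment ids that would feed the attention module
```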