pith. machine review for the scientific record.

arxiv: 1702.01101 · v1 · submitted 2017-02-03 · 💻 cs.CL

Recognition: unknown

Multilingual Multi-modal Embeddings for Natural Language Processing

Authors on Pith: no claims yet
classification 💻 cs.CL
keywords embeddings · multilingual · discriminative · improvements · model · multi-modal · additional · advantage
original abstract

We propose a novel discriminative model that learns embeddings from multilingual and multi-modal data, meaning that our model can take advantage of images and descriptions in multiple languages to improve embedding quality. To that end, we introduce a modification of a pairwise contrastive estimation optimisation function as our training objective. We evaluate our embeddings on an image-sentence ranking (ISR), a semantic textual similarity (STS), and a neural machine translation (NMT) task. We find that the additional multilingual signals lead to improvements on both the ISR and STS tasks, and the discriminative cost can also be used in re-ranking $n$-best lists produced by NMT models, yielding strong improvements.
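The abstract's "pairwise contrastive estimation" objective is in the family of margin-based ranking losses commonly used for image-sentence embedding: matching image-sentence pairs should score higher than mismatched (contrastive) pairs by at least a margin. The sketch below is a minimal, hypothetical illustration of that family in plain NumPy, not the paper's actual objective; the function name, the cosine-similarity choice, and the margin value are assumptions, and the paper's modification for multiple languages (summing such terms over each language's descriptions) is not shown.

```python
import numpy as np

def contrastive_margin_loss(img, sents, margin=0.2):
    """Bidirectional max-margin ranking loss (hypothetical sketch).

    img:   (n, d) image embeddings
    sents: (n, d) sentence embeddings, row i describes image i
    """
    def l2norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    # Cosine similarity between every image and every sentence.
    S = l2norm(img) @ l2norm(sents).T      # (n, n)
    pos = np.diag(S)                       # scores of the matching pairs

    # Hinge on mismatched pairs in both retrieval directions:
    # each image vs. wrong sentences, each sentence vs. wrong images.
    cost_s = np.maximum(0.0, margin + S - pos[:, None])
    cost_i = np.maximum(0.0, margin + S - pos[None, :])
    np.fill_diagonal(cost_s, 0.0)          # don't penalise the true pair
    np.fill_diagonal(cost_i, 0.0)
    return cost_s.sum() + cost_i.sum()
```

With well-separated embeddings (matching pairs identical, mismatched pairs orthogonal) the loss is zero; when all embeddings collapse to one point, every contrastive pair violates the margin and the loss is positive, which is what drives the embeddings apart during training.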

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.