pith. machine review for the scientific record.

arxiv: 1312.6173 · v4 · submitted 2013-12-20 · 💻 cs.CL

Recognition: unknown

Multilingual Distributed Representations without Word Alignment

Authors on Pith: no claims yet
classification 💻 cs.CL
keywords: representations · distributed · semantic · across · aligned · data · languages · learning
Original abstract

Distributed representations of meaning are a natural way to encode covariance relationships between words and phrases in NLP. By overcoming data sparsity problems, as well as providing information about semantic relatedness which is not available in discrete representations, distributed representations have proven useful in many NLP tasks. Recent work has shown how compositional semantic representations can successfully be applied to a number of monolingual applications such as sentiment analysis. At the same time, there has been some initial success in work on learning shared word-level representations across languages. We combine these two approaches by proposing a method for learning distributed representations in a multilingual setup. Our model learns to assign similar embeddings to aligned sentences and dissimilar ones to sentences that are not aligned, while not requiring word alignments. We show that our representations are semantically informative and apply them to a cross-lingual document classification task where we outperform the previous state of the art. Further, by employing parallel corpora of multiple language pairs we find that our model learns representations that capture semantic relationships across languages for which no parallel data was used.

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain

    cs.CR · 2017-08 · unverdicted · novelty 7.0

    Adversaries can create backdoored neural networks during outsourced training that maintain high accuracy on normal data but misbehave on attacker-chosen triggers.