pith. machine review for the scientific record.

arxiv: 1809.04505 · v1 · submitted 2018-09-12 · 💻 cs.CL

Recognition: unknown

Emo2Vec: Learning Generalized Emotion Representation by Multi-task Training

Authors on Pith: no claims yet
classification 💻 cs.CL
keywords: emo2vec, classification, detection, emotion, learning, multi-task, tasks, training
read the original abstract

In this paper, we propose Emo2Vec, which encodes emotional semantics into vectors. We train Emo2Vec by multi-task learning on six different emotion-related tasks: emotion/sentiment analysis, sarcasm classification, stress detection, abusive language classification, insult detection, and personality recognition. Our evaluation of Emo2Vec shows that it outperforms existing affect-related representations, such as Sentiment-Specific Word Embedding and DeepMoji embeddings, while using much smaller training corpora. When concatenated with GloVe, Emo2Vec achieves performance competitive with state-of-the-art results on several tasks using a simple logistic regression classifier.
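The abstract describes concatenating Emo2Vec with GloVe vectors before feeding them to a linear classifier. A minimal sketch of that concatenation step, using made-up vocabulary and randomly initialized stand-in embeddings (the real GloVe and Emo2Vec lookups and dimensions are assumptions here, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-trained lookups: 300-d GloVe-like and 100-d Emo2Vec-like
# word vectors, stubbed with random values for illustration only.
glove = {w: rng.normal(size=300) for w in ["happy", "sad", "angry"]}
emo2vec = {w: rng.normal(size=100) for w in ["happy", "sad", "angry"]}

def combined_vector(word):
    """Concatenate the two embeddings into a single 400-d feature vector."""
    return np.concatenate([glove[word], emo2vec[word]])

# A sentence representation could then be e.g. the average of its word
# vectors, passed to a logistic regression classifier as in the paper.
features = combined_vector("happy")
print(features.shape)  # (400,)
```

The classifier itself is just ordinary logistic regression over these concatenated features; the heavy lifting is in the pre-trained representations.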

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Efficient Emotion-Aware Iconic Gesture Prediction for Robot Co-Speech

    cs.RO 2026-04 unverdicted novelty 5.0

    Lightweight transformer predicts iconic gesture placement and intensity from text and emotion for robot co-speech, outperforming GPT-4o on BEAT2 without audio input.

  2. Efficient Emotion-Aware Iconic Gesture Prediction for Robot Co-Speech

    cs.RO 2026-04 unverdicted novelty 4.0

    A compact transformer predicts iconic gesture placement and intensity from text and emotion alone, outperforming GPT-4o on the BEAT2 dataset for robot co-speech use.