Emo2Vec: Learning Generalized Emotion Representation by Multi-task Training
In this paper, we propose Emo2Vec, which encodes emotional semantics into vectors. We train Emo2Vec with multi-task learning on six different emotion-related tasks: emotion/sentiment analysis, sarcasm classification, stress detection, abusive language classification, insult detection, and personality recognition. Our evaluation shows that Emo2Vec outperforms existing affect-related representations, such as Sentiment-Specific Word Embeddings and DeepMoji embeddings, despite being trained on much smaller corpora. When concatenated with GloVe, Emo2Vec achieves performance competitive with state-of-the-art results on several tasks using a simple logistic regression classifier.
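The core idea of the abstract can be sketched as follows: a shared embedding table receives gradients from several task-specific classifiers, so the signal from every emotion-related task accumulates in the same word vectors. This is a minimal NumPy sketch, not the authors' implementation; the dimensions, learning rate, and toy tasks are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, DIM, TASKS = 50, 8, 3
emb = rng.normal(scale=0.1, size=(VOCAB, DIM))                    # shared embedding table
heads = [rng.normal(scale=0.1, size=DIM) for _ in range(TASKS)]   # one logistic head per task

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(word_ids, label, task, lr=0.1):
    """One SGD step: average word vectors, score with a task head,
    then update both the head and the shared embeddings."""
    x = emb[word_ids].mean(axis=0)                 # bag-of-words sentence vector
    w = heads[task]
    g = sigmoid(w @ x) - label                     # d(log-loss)/d(logit)
    heads[task] = w - lr * g * x                   # task-specific update
    emb[word_ids] -= lr * g * w / len(word_ids)    # shared update: every task shapes `emb`

# Toy data (an assumption for the demo): in every task, sentences
# containing word 0 are positive, all others negative.
for _ in range(300):
    for task in range(TASKS):
        train_step([0, int(rng.integers(1, VOCAB))], 1.0, task)
        train_step(list(rng.integers(1, VOCAB, size=2)), 0.0, task)
```

After training, a sentence containing word 0 scores higher than one without it under every head, because all tasks pushed that word's shared vector in the same direction. The sketch omits the final step described in the abstract, where the learned vectors are concatenated with GloVe and fed to a logistic regression classifier.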
This paper has not been read by Pith yet.
Forward citations
Cited by 2 Pith papers
-
Efficient Emotion-Aware Iconic Gesture Prediction for Robot Co-Speech
Lightweight transformer predicts iconic gesture placement and intensity from text and emotion for robot co-speech, outperforming GPT-4o on BEAT2 without audio input.