MOSI: Multimodal Corpus of Sentiment Intensity and Subjectivity Analysis in Online Opinion Videos
Abstract
People share their opinions, stories, and reviews through online video-sharing websites every day. Studying sentiment and subjectivity in these opinion videos is attracting growing attention from academia and industry. While sentiment analysis has been successful for text, it remains an understudied research question for videos and multimedia content. The biggest setbacks for studies in this direction are the lack of a proper dataset, methodology, baselines, and statistical analysis of how information from different modality sources relates to each other. This paper introduces to the scientific community the first opinion-level annotated corpus for sentiment and subjectivity analysis in online videos, the Multimodal Opinion-level Sentiment Intensity dataset (MOSI). The dataset is rigorously annotated with labels for subjectivity, sentiment intensity, per-frame and per-opinion annotated visual features, and per-millisecond annotated audio features. Furthermore, we present baselines for future studies in this direction as well as a new multimodal fusion approach that jointly models spoken words and visual gestures.
This paper has not been read by Pith yet.
Forward citations
Cited by 7 Pith papers
- McNdroid: A Longitudinal Multimodal Benchmark for Robust Drift Detection in Android Malware
  McNdroid is a new longitudinal multimodal benchmark showing that Android malware detectors degrade over time but multimodal approaches maintain better performance across long temporal gaps.
- Are We Making Progress in Multimodal Domain Generalization? A Comprehensive Benchmark Study
  A large-scale benchmark finds that recent multimodal domain generalization methods give only marginal gains over a plain ERM baseline, with no method winning consistently and all degrading sharply under corruption or ...
- EmoTrans: A Benchmark for Understanding, Reasoning, and Predicting Emotion Transitions in Multimodal LLMs
  EmoTrans is a new video benchmark with four progressive tasks that measures how well current multimodal LLMs handle dynamic emotion transitions rather than static recognition.
- Enhance-then-Balance Modality Collaboration for Robust Multimodal Sentiment Analysis
  The EBMC framework enhances weaker modalities via semantic disentanglement and cross-modal boosting, then balances them with energy-guided coordination and instance-aware trust distillation for improved MSA performance an...
- Nano-EmoX: Unifying Multimodal Emotional Intelligence from Perception to Empathy
  Nano-EmoX is a compact 2.2B multimodal model that unifies six core affective tasks across perception, understanding, and interaction levels via a curriculum framework, achieving competitive benchmark performance.
- Mitigating Multimodal Inconsistency via Cognitive Dual-Pathway Reasoning for Intent Recognition
  CDPR uses an intuition pathway for cross-modal consensus and a reasoning pathway for quantifying and mitigating inconsistencies to improve multimodal intent recognition.
- Modality-Aware Contrastive and Uncertainty-Regularized Emotion Recognition
  MCUR improves multimodal emotion recognition across heterogeneous modality setups by combining modality-combination contrastive learning with sample-wise uncertainty regularization, yielding F1 gains of 2.2-4.37% on M...