Efficient Low-rank Multimodal Fusion with Modality-Specific Factors
Multimodal research is an emerging field of artificial intelligence, and one of its central problems is multimodal fusion: the process of integrating multiple unimodal representations into one compact multimodal representation. Previous research in this area has exploited the expressiveness of tensors for multimodal representation. However, these methods often suffer from an exponential increase in dimensionality and computational complexity introduced by transforming the input into a tensor. In this paper, we propose the Low-rank Multimodal Fusion (LMF) method, which performs multimodal fusion using low-rank tensors to improve efficiency. We evaluate our model on three different tasks: multimodal sentiment analysis, speaker trait analysis, and emotion recognition. Our model achieves competitive results on all three tasks while drastically reducing computational complexity. Additional experiments show that our model performs robustly across a wide range of low-rank settings and is indeed much more efficient in both training and inference than other methods that rely on tensor representations.
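The core trick the abstract describes is decomposing the fusion weight tensor into a sum of r rank-1, modality-specific factors, so the fused output can be computed as h = sum over i of the elementwise product of per-modality projections (z_m projected by the i-th factor of modality m), without ever materializing the full outer-product tensor. Below is a minimal PyTorch sketch of this idea under that reading; the class and parameter names (LowRankFusion, in_dims, rank) are illustrative, not the authors' released code.

```python
import torch
import torch.nn as nn

class LowRankFusion(nn.Module):
    """Minimal LMF-style fusion sketch: the (d_1+1) x ... x (d_M+1) fusion
    tensor is never materialized; it is replaced by r rank-1 factors per
    modality, so cost grows linearly in the number of modalities."""

    def __init__(self, in_dims, out_dim, rank):
        super().__init__()
        self.rank, self.out_dim = rank, out_dim
        # One low-rank factor per modality; the +1 row handles the constant 1
        # appended to each input, which preserves uni- and bi-modal terms.
        self.factors = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(rank, d + 1, out_dim)) for d in in_dims]
        )
        self.fusion_weights = nn.Parameter(0.1 * torch.randn(rank))
        self.fusion_bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, *inputs):  # each input: (batch, d_m)
        ones = inputs[0].new_ones(inputs[0].size(0), 1)
        fused = None
        for z, w in zip(inputs, self.factors):
            z1 = torch.cat([ones, z], dim=1)            # (batch, d_m + 1)
            proj = torch.einsum('bd,rdo->bro', z1, w)   # (batch, rank, out_dim)
            fused = proj if fused is None else fused * proj  # elementwise product
        # Summing the r rank-1 terms recovers the fused output.
        return torch.einsum('r,bro->bo', self.fusion_weights, fused) + self.fusion_bias

# Illustrative usage with three modalities (e.g., audio, visual, text):
fusion = LowRankFusion(in_dims=[32, 64, 300], out_dim=128, rank=4)
h = fusion(torch.randn(8, 32), torch.randn(8, 64), torch.randn(8, 300))
print(h.shape)  # torch.Size([8, 128])
```

Note that the per-sample cost here is O(r * sum_m d_m * d_h), versus O(prod_m d_m * d_h) for a fully materialized tensor fusion, which is the efficiency gain the abstract claims.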
Forward citations
Cited by 5 Pith papers
- Group Cognition Learning: Making Everything Better Through Governed Two-Stage Agents Collaboration
  GCL uses a two-stage protocol with Routing, Auditing, Public-Factor, and Aggregation Agents to mitigate modality dominance and spurious coupling in multimodal learning, achieving state-of-the-art results on CMU-MOSI, ...
- Attention-Based Multimodal Survival Prediction with Cross-Modal Bilinear Fusion
  A multimodal survival model using attention-based histology features, RNA-seq encoders, and low-rank bilinear fusion shows improved performance over concatenation baselines on the CHIMERA dataset for HR-NMIBC.
- Simultaneous Long-tailed Recognition and Multi-modal Fusion for Highly Imbalanced Multi-modal Data
  A multi-modal extension of multi-expert architectures uses confidence-guided fusion from modality-specific networks to handle long-tailed class imbalance across heterogeneous inputs.
- Multimodal Deep Generative Model for Semi-Supervised Learning under Class Imbalance
  A multimodal generative model replaces Gaussians with t-distributions and uses gamma-power divergence to improve semi-supervised classification performance on imbalanced partially labeled data.
- Group Cognition Learning: Making Everything Better Through Governed Two-Stage Agents Collaboration
  Group Cognition Learning uses governed two-stage agents after separate modality encoding to mitigate dominance and spurious coupling, reporting state-of-the-art results on CMU-MOSI, CMU-MOSEI, and MIntRec for regressi...