1 Pith paper cites this work (cs.GR, 2026). Polarity classification is still indexing; verdict pending.
PersonaGest: Personalized Co-Speech Gesture Generation with Semantic-Guided Hierarchical Motion Representation
PersonaGest uses a semantic-guided RVQ-VAE with a Semantic-Aware Motion Codebook and contrastive learning in stage one, followed by a Masked Generative Transformer and Style Residual Transformers in stage two, to achieve state-of-the-art co-speech gesture generation with semantic coherence and style personalization.
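The stage-one RVQ-VAE rests on residual vector quantization: each codebook level quantizes the reconstruction residual left by the previous level, so coarse motion structure and finer detail land in separate code streams. The sketch below illustrates only that generic RVQ encoding step, not PersonaGest's actual model; the `rvq_encode` helper, codebook sizes, and random data are all illustrative assumptions.

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Residual vector quantization: each level quantizes the
    residual left by the previous level and the chosen codewords
    are summed to form the reconstruction."""
    residual = x.astype(float)
    indices = []
    quantized = np.zeros_like(residual)
    for cb in codebooks:
        # pick the codeword nearest to the current residual
        dists = np.linalg.norm(cb - residual, axis=1)
        idx = int(np.argmin(dists))
        indices.append(idx)
        quantized += cb[idx]
        residual = residual - cb[idx]
    return indices, quantized

# toy example: two levels, 8 codewords each, 4-dim features;
# the second codebook is smaller-scale, matching its role of
# refining the first level's residual
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(8, 4)), 0.1 * rng.normal(size=(8, 4))]
x = rng.normal(size=4)
indices, x_quantized = rvq_encode(x, codebooks)
```

In a trained model the codebooks are learned (here the Semantic-Aware Motion Codebook would play that role), and the per-level index streams are what the stage-two transformers predict from speech.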