SentiAvatar: Towards Expressive and Interactive Digital Humans
SentiAvatar generates expressive, interactive 3D avatars in real time by combining a 37-hour mocap dialogue dataset with a pre-trained motion foundation model and an audio-aware plan-then-infill architecture that separates semantic planning from prosody-driven frame interpolation.
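The plan-then-infill split described above can be sketched as a two-stage pipeline: a planner maps a semantic embedding to sparse keyframe poses, and an infiller interpolates dense frames between them, modulated by an audio prosody envelope. The sketch below is a hypothetical illustration of that separation of concerns, not the paper's actual model; all function names, shapes, and the linear-interpolation infiller are assumptions.

```python
import numpy as np

def plan_keyframes(semantic_embedding: np.ndarray, num_keyframes: int,
                   pose_dim: int, seed: int = 0) -> np.ndarray:
    """Stand-in planner (assumption): project a semantic embedding
    to a small set of keyframe poses via a fixed random projection."""
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((semantic_embedding.size,
                                num_keyframes * pose_dim))
    return (semantic_embedding @ proj).reshape(num_keyframes, pose_dim)

def infill_frames(keyframes: np.ndarray, frames_between: int,
                  prosody: np.ndarray) -> np.ndarray:
    """Stand-in infiller (assumption): linearly interpolate between
    consecutive keyframes, scaling motion by a prosody envelope so
    louder audio produces larger deviations from the mean pose."""
    total = (len(keyframes) - 1) * frames_between
    assert prosody.size == total, "one prosody value per infilled frame"
    segments = []
    for i in range(len(keyframes) - 1):
        t = np.linspace(0.0, 1.0, frames_between, endpoint=False)[:, None]
        base = (1 - t) * keyframes[i] + t * keyframes[i + 1]
        amp = prosody[i * frames_between:(i + 1) * frames_between][:, None]
        segments.append(base * (1.0 + 0.1 * (amp - amp.mean())))
    return np.vstack(segments)

# Toy inputs: a 16-d semantic embedding and a sine-shaped loudness envelope.
semantic = np.ones(16)
keys = plan_keyframes(semantic, num_keyframes=4, pose_dim=8)
prosody = np.abs(np.sin(np.linspace(0.0, 6.28, 3 * 10)))
motion = infill_frames(keys, frames_between=10, prosody=prosody)
print(motion.shape)  # (30, 8): 3 segments x 10 infilled frames, 8-d poses
```

The point of the split is that the planner runs at keyframe rate on semantics only, while the cheap infiller runs at frame rate conditioned on audio, which is what makes real-time prosody-driven motion feasible.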