pith. machine review for the scientific record.

arxiv: 1808.01410 · v1 · submitted 2018-08-04 · 💻 cs.CL · cs.LG · cs.SD · eess.AS · stat.ML


Predicting Expressive Speaking Style From Text In End-To-End Speech Synthesis

Authors on Pith: no claims yet
keywords: style · speaking · expressive · speech · tp-gst · audio · demonstrate · end-to-end
original abstract

Global Style Tokens (GSTs) are a recently-proposed method to learn latent disentangled representations of high-dimensional data. GSTs can be used within Tacotron, a state-of-the-art end-to-end text-to-speech synthesis system, to uncover expressive factors of variation in speaking style. In this work, we introduce the Text-Predicted Global Style Token (TP-GST) architecture, which treats GST combination weights or style embeddings as "virtual" speaking style labels within Tacotron. TP-GST learns to predict stylistic renderings from text alone, requiring neither explicit labels during training nor auxiliary inputs for inference. We show that, when trained on a dataset of expressive speech, our system generates audio with more pitch and energy variation than two state-of-the-art baseline models. We further demonstrate that TP-GSTs can synthesize speech with background noise removed, and corroborate these analyses with positive results on human-rated listener preference audiobook tasks. Finally, we demonstrate that multi-speaker TP-GST models successfully factorize speaker identity and speaking style. We provide a website with audio samples for each of our findings.
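The core idea in the abstract — predicting GST combination weights from text alone so that no style label or reference audio is needed at inference — can be sketched in a few lines. The dimensions, weight matrix `W`, and the single-vector text summary below are illustrative assumptions, not values from the paper; in the real system the token bank and prediction head are trained jointly inside Tacotron.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): a bank of 10 style tokens of
# dimension 256, and a 128-dim summary vector from the text encoder.
NUM_TOKENS, TOKEN_DIM, TEXT_DIM = 10, 256, 128

style_tokens = rng.normal(size=(NUM_TOKENS, TOKEN_DIM))  # learned GST bank
W = 0.01 * rng.normal(size=(NUM_TOKENS, TEXT_DIM))       # text->weights head

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict_style_embedding(text_summary):
    """TP-GST 'combination weights' path: predict a softmax distribution
    over the token bank from text alone, then mix the tokens into a single
    style embedding that conditions the synthesizer."""
    weights = softmax(W @ text_summary)        # (NUM_TOKENS,), sums to 1
    embedding = weights @ style_tokens         # (TOKEN_DIM,)
    return embedding, weights

# Stand-in for a text-encoder output; no audio or style label is consumed.
text_summary = rng.normal(size=TEXT_DIM)
embedding, weights = predict_style_embedding(text_summary)
```

The abstract also mentions a second variant that regresses the style embedding directly from text; the sketch above shows only the interpolation-weights variant, where the softmax keeps the result inside the convex hull of the learned tokens.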

This paper has not been read by Pith yet.
