pith. machine review for the scientific record.

arxiv: 1803.09017 · v1 · submitted 2018-03-23 · 💻 cs.CL · cs.LG · cs.SD · eess.AS

Recognition: unknown

Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis

Authors on Pith: no claims yet
classification 💻 cs.CL · cs.LG · cs.SD · eess.AS
keywords style · synthesis · gsts · speech · trained · control · embeddings · end-to-end
original abstract

In this work, we propose "global style tokens" (GSTs), a bank of embeddings that are jointly trained within Tacotron, a state-of-the-art end-to-end speech synthesis system. The embeddings are trained with no explicit labels, yet learn to model a large range of acoustic expressiveness. GSTs lead to a rich set of significant results. The soft interpretable "labels" they generate can be used to control synthesis in novel ways, such as varying speed and speaking style - independently of the text content. They can also be used for style transfer, replicating the speaking style of a single audio clip across an entire long-form text corpus. When trained on noisy, unlabeled found data, GSTs learn to factorize noise and speaker identity, providing a path towards highly scalable but robust speech synthesis.
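The mechanism the abstract describes is compact: a reference embedding attends over a small bank of jointly trained tokens, and the attention output becomes the style embedding that conditions the synthesizer. The sketch below is an illustrative PyTorch rendering of that idea, not the paper's implementation; the class and parameter names (StyleTokenLayer, num_tokens, query_dim) and the use of nn.MultiheadAttention are assumptions for the example.

```python
# Minimal sketch of a global-style-token (GST) layer, assuming PyTorch.
# Illustrative only; not the authors' code or exact hyperparameters.
import torch
import torch.nn as nn


class StyleTokenLayer(nn.Module):
    """Attends over a learned bank of style tokens to produce one style embedding."""

    def __init__(self, num_tokens=10, token_dim=256, query_dim=128, num_heads=4):
        super().__init__()
        # Bank of randomly initialized embeddings, trained jointly with the TTS model
        # and never given explicit style labels.
        self.tokens = nn.Parameter(torch.randn(num_tokens, token_dim) * 0.3)
        self.query_proj = nn.Linear(query_dim, token_dim)
        self.attn = nn.MultiheadAttention(
            embed_dim=token_dim, num_heads=num_heads,
            kdim=token_dim, vdim=token_dim, batch_first=True)

    def forward(self, ref_embedding):
        # ref_embedding: (batch, query_dim), e.g. the output of a reference-audio encoder.
        query = self.query_proj(ref_embedding).unsqueeze(1)          # (batch, 1, token_dim)
        keys = torch.tanh(self.tokens).unsqueeze(0).expand(
            ref_embedding.size(0), -1, -1)                           # (batch, num_tokens, token_dim)
        style_embedding, weights = self.attn(query, keys, keys)
        # weights are the soft, interpretable "labels"; style_embedding conditions the synthesizer.
        return style_embedding.squeeze(1), weights.squeeze(1)


if __name__ == "__main__":
    layer = StyleTokenLayer()
    ref = torch.randn(2, 128)            # stand-in for reference-encoder output
    style, weights = layer(ref)
    print(style.shape, weights.shape)    # torch.Size([2, 256]) torch.Size([2, 10])
```

At inference time the attention weights can also be set directly (selecting or mixing tokens) rather than computed from reference audio, which corresponds to the label-free style control the abstract describes.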

This paper has not been read by Pith yet.

discussion (0)
