Problem-agnostic speech embeddings for multi-speaker text-to-speech with SampleRNN


Universitat Politècnica de Catalunya. Doctorat en Arquitectura de Computadors

Universitat Politècnica de Catalunya. Departament de Teoria del Senyal i Comunicacions

Universitat Politècnica de Catalunya. IDEAI-UPC - Intelligent Data sciEnce and Artificial Intelligence Research Group

Publication date

2019



Abstract

Text-to-speech (TTS) acoustic models map linguistic features into an acoustic representation from which an audible waveform is generated. The latest and most natural TTS systems build a direct mapping between the linguistic and waveform domains, as in SampleRNN. This avoids possible losses in signal naturalness, since intermediate acoustic representations are discarded. Another important dimension of study, apart from naturalness, is a system's adaptability to generate voices for new speakers unseen during training. In this paper we first propose the use of problem-agnostic speech embeddings in a multi-speaker acoustic model for TTS based on SampleRNN. This way, we feed the acoustic model with acoustically dependent speaker representations that enrich waveform generation more than embeddings unrelated to these factors. Our first results suggest that the proposed embeddings lead to better-quality voices than those obtained with one-hot embeddings. Furthermore, since any speech segment can serve as the encoded representation during inference, the model can generalize to new speaker identities without retraining the network. We finally show that a small increase in the speech duration fed to the embedding extractor dramatically reduces the spectral distortion, closing the gap towards the target identities.
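The contrast the abstract draws between one-hot speaker codes and speech-derived embeddings can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the `speech_derived_embedding` function below is a hypothetical stand-in (a fixed random projection with mean pooling) for the actual problem-agnostic speech encoder, and `condition` shows the generic idea of tiling a speaker vector onto every conditioning frame of an acoustic model.

```python
import numpy as np

# Hypothetical stand-in for a learned encoder: a fixed random projection.
PROJ = np.random.default_rng(0).standard_normal((80, 16))

def one_hot_embedding(speaker_id, n_speakers):
    """Closed-set speaker code: only defined for speakers seen in training."""
    e = np.zeros(n_speakers)
    e[speaker_id] = 1.0
    return e

def speech_derived_embedding(waveform):
    """Toy 'problem-agnostic' embedding: any speech segment, from any
    speaker, maps to a fixed-size vector, so unseen identities are handled
    without retraining. Longer segments average over more frames."""
    n = (len(waveform) // 80) * 80          # drop the ragged tail
    frames = waveform[:n].reshape(-1, 80)   # fake 80-sample frames
    return np.tanh(frames @ PROJ).mean(axis=0)

def condition(linguistic_feats, speaker_emb):
    """Tile the speaker embedding onto every conditioning frame, the
    generic way a frame-level acoustic model is made speaker-aware."""
    T = linguistic_feats.shape[0]
    tiled = np.tile(speaker_emb, (T, 1))
    return np.concatenate([linguistic_feats, tiled], axis=1)
```

With a one-hot code the conditioning vector is only meaningful for training speakers, whereas the speech-derived embedding accepts an arbitrary waveform, which is what allows inference-time generalization to new identities.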


This research was supported by the project TEC2015-69266-P (MINECO/FEDER, UE).


Peer Reviewed


Postprint (published version)

Document Type

Conference report

Language

English

Publisher

International Speech Communication Association (ISCA)

Related items

https://www.isca-archive.org/ssw_2019/alvarez19_ssw.html

info:eu-repo/grantAgreement/MINECO//TEC2015-69266-P/ES/TECNOLOGIAS DE APRENDIZAJE PROFUNDO APLICADAS AL PROCESADO DE VOZ Y AUDIO/


Rights

Open Access
