Meta-StyleSpeech : Multi-Speaker Adaptive Text-to-Speech Generation
Authors: Dongchan Min, Dong Bok Lee, Eunho Yang, Sung Ju Hwang
ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results show that our models generate high-quality speech which accurately follows the speaker's voice with a single short-duration (1-3 sec) speech audio, significantly outperforming baselines. |
| Researcher Affiliation | Collaboration | ¹Graduate School of AI, Korea Advanced Institute of Science and Technology (KAIST), Seoul, South Korea; ²AITRICS, Seoul, South Korea. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The audio samples are available at https://stylespeech.github.io/. This link only provides audio samples, not the source code for the methodology. No other explicit statement about code release is found. |
| Open Datasets | Yes | We train StyleSpeech and Meta-StyleSpeech on the LibriTTS dataset (Zen et al., 2019), which is a multi-speaker English corpus derived from LibriSpeech (Panayotov et al., 2015). For evaluation of the models' performance on unseen speaker adaptation tasks, we use the VCTK dataset (Yamagishi et al., 2019), which contains audio from 108 speakers. |
| Dataset Splits | No | We split the dataset into a training and a validation (test) set, and use the validation set for the evaluation on the trained speakers. This statement does not provide specific percentages or sample counts for the splits, which are needed for full reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or memory used for running experiments. |
| Software Dependencies | No | The paper mentions using "Librosa (McFee et al., 2015)", "an open-source grapheme-to-phoneme tool" (https://github.com/Kyubyong/g2p), and "MelGAN (Kumar et al., 2019) as the vocoder," but does not provide specific version numbers for these software dependencies (a preprocessing sketch assuming these tools follows the table). |
| Experiment Setup | Yes | The generator in StyleSpeech uses 4 FFT blocks in both the phoneme encoder and the mel-spectrogram decoder, following FastSpeech 2. ... For the mel-style encoder, the dimensionality of all latent hidden vectors is set to 128... We use Mish (Misra, 2020) activation for both the generator and the mel-style encoder. ... We train StyleSpeech for 100k steps. For Meta-StyleSpeech, we start from a pretrained StyleSpeech trained for 60k steps, and then meta-train the model for an additional 40k steps... We train our models with a minibatch size of 48 for StyleSpeech and 20 for Meta-StyleSpeech using the Adam optimizer. The parameters we use for the Adam optimizer are β1 = 0.9, β2 = 0.98, ϵ = 10⁻⁹. The learning rate of the generator and mel-style encoder follows Vaswani et al. (2017), while the learning rate of the discriminator is fixed at 0.0002. We set α = 10 in our experiments (an optimizer sketch based on these values follows the table). |
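
The preprocessing tools named under Software Dependencies could be exercised roughly as below. This is a minimal sketch, assuming the `g2p_en` package from the cited repository and a recent Librosa release; the FFT size, hop length, mel-band count, and file path are illustrative assumptions, as the section above does not report them.

```python
import librosa
from g2p_en import G2p  # https://github.com/Kyubyong/g2p, installed as `g2p_en`

# Grapheme-to-phoneme conversion of the input text (example sentence is arbitrary).
g2p = G2p()
phonemes = g2p("Meta-StyleSpeech adapts to new speakers.")

# Mel-spectrogram extraction with Librosa. The sample rate, FFT/hop settings,
# and 80 mel bands are assumed values, not reported in this section.
wav, sr = librosa.load("reference.wav", sr=22050)
mel = librosa.feature.melspectrogram(y=wav, sr=sr, n_fft=1024,
                                     hop_length=256, n_mels=80)
log_mel = librosa.power_to_db(mel)

print(phonemes[:10], log_mel.shape)
```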
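
The optimizer configuration quoted under Experiment Setup can be expressed as the following sketch. PyTorch is an assumption (the framework is not stated in this section), and the placeholder modules, `d_model`, and warmup steps are hypothetical; only the Adam hyperparameters, the fixed discriminator learning rate, and the use of the Vaswani et al. (2017) schedule come from the quoted text.

```python
import torch
from torch import nn

# Hypothetical placeholder modules standing in for the paper's generator,
# mel-style encoder, and discriminator (real architectures are in the paper).
generator = nn.Linear(80, 80)
mel_style_encoder = nn.Linear(80, 128)
discriminator = nn.Linear(80, 1)

# Adam with beta1 = 0.9, beta2 = 0.98, eps = 1e-9, as quoted above.
gen_params = list(generator.parameters()) + list(mel_style_encoder.parameters())
gen_opt = torch.optim.Adam(gen_params, lr=1.0, betas=(0.9, 0.98), eps=1e-9)

# Discriminator learning rate is fixed at 0.0002.
disc_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4,
                            betas=(0.9, 0.98), eps=1e-9)

def noam_lr(step, d_model=256, warmup=4000):
    """Transformer schedule from Vaswani et al. (2017); d_model and warmup
    are assumed values, not reported in the section above."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

# With base lr = 1.0, LambdaLR makes the effective rate equal to noam_lr(step).
gen_sched = torch.optim.lr_scheduler.LambdaLR(gen_opt, lr_lambda=noam_lr)

for step in range(1, 6):  # toy loop; the paper trains for 100k steps
    gen_opt.step()
    gen_sched.step()
    print(step, gen_sched.get_last_lr())
```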