RobuTrans: A Robust Transformer-Based Text-to-Speech Model
Authors: Naihan Li, Yanqing Liu, Yu Wu, Shujie Liu, Sheng Zhao, Ming Liu (pp. 8228-8235)
AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on two test sets, including a general set and a bad-case set. Our model doesn't make any mistake for the samples in the bad-case set; at the same time, it achieves on-parity MOS (4.36) with Transformer TTS (4.37) and Tacotron2 (4.37) on the general set. |
| Researcher Affiliation | Collaboration | Naihan Li (1,4,5), Yanqing Liu (2), Yu Wu (3), Shujie Liu (3), Sheng Zhao (2), Ming Liu (1,4,5); 1: School of Computer Science and Engineering, University of Electronic Science and Technology of China; 2: Microsoft STC Asia; 3: Microsoft Research Asia; 4: CETC Big Data Research Institute Co., Ltd, Guiyang; 5: Big Data Application on Improving Government Governance Capabilities National Engineering Laboratory, Guiyang |
| Pseudocode | No | The paper includes architectural diagrams (Figure 1, Figure 3, Figure 4) but no explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states, 'Generated audio samples are accessible in the supplementary materials', but does not mention the availability of source code for the described methodology. |
| Open Datasets | No | We use an internal US English dataset, which contains 20 hours of speech from a single professional speaker. |
| Dataset Splits | No | The paper mentions an 'internal US English dataset' for training and two test sets ('general set' and 'bad-case set'), but does not explicitly describe a separate validation split or how data was partitioned for validation purposes. |
| Hardware Specification | Yes | We use 4 Nvidia Tesla P100 GPUs to train our model. Since the lengths of training samples vary greatly, a fixed batch size will either run out of memory when the batch size is large, or make the training procedure inefficient and unstable if the batch is small. Therefore, a dynamic batch size is adopted. Each GPU has a memory of 16GB, which can hold 6000 frames (total length of 10-40 samples), and thus the batch size is 40-160. (See the batching sketch after the table.) |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., specific deep learning frameworks like PyTorch or TensorFlow, or other libraries with their versions). |
| Experiment Setup | Yes | Since the lengths of training samples vary greatly, a fixed batch size will either run out of memory when the batch size is large, or make the training procedure inefficient and unstable if the batch is small. Therefore, a dynamic batch size is adopted. Each GPU has a memory of 16GB, which can hold 6000 frames (total length of 10-40 samples), and thus the batch size is 40-160. For the training set, we use an internal US English dataset, which contains 20 hours of speech from a single professional speaker. An 80-channel mel-scaled spectrogram is extracted from 16 kHz normalized waveforms, and all the training texts are also normalized. A single training step takes 0.55 seconds, and it takes 150,000 steps (about 23 hours) to converge. Mean squared error (MSE) is employed as the loss function. (See the feature-and-loss sketch after the table.) |
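
The dynamic batching described in the Hardware Specification and Experiment Setup rows (a per-GPU budget of roughly 6000 mel frames, yielding about 40-160 utterances per batch across 4 GPUs) can be illustrated with a short Python sketch. This is a minimal illustration assuming a simple length-sorted, frame-budget grouping; the function and parameter names are hypothetical and not taken from the authors' (unreleased) code.

```python
# Sketch of dynamic batching by total frame count: each batch is capped by a
# frame budget rather than a fixed number of utterances, so batch size varies
# with utterance length. Names here are illustrative, not from the paper.

from typing import List, Sequence


def make_frame_budget_batches(utterance_lengths: Sequence[int],
                              max_frames_per_batch: int = 6000) -> List[List[int]]:
    """Group utterance indices so each batch stays under a total-frame budget."""
    # Sort by length so batches contain similarly sized utterances (less padding).
    order = sorted(range(len(utterance_lengths)), key=lambda i: utterance_lengths[i])

    batches: List[List[int]] = []
    current: List[int] = []
    current_frames = 0
    for idx in order:
        n_frames = utterance_lengths[idx]
        if current and current_frames + n_frames > max_frames_per_batch:
            batches.append(current)
            current, current_frames = [], 0
        current.append(idx)
        current_frames += n_frames
    if current:
        batches.append(current)
    return batches


# Example: with utterance lengths between ~40 and ~600 frames, the number of
# samples per batch varies instead of staying fixed.
if __name__ == "__main__":
    import random
    random.seed(0)
    lengths = [random.randint(40, 600) for _ in range(500)]
    batches = make_frame_budget_batches(lengths, max_frames_per_batch=6000)
    print(min(len(b) for b in batches), max(len(b) for b in batches))
```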
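
Similarly, the feature and loss setup quoted above (an 80-channel mel-scaled spectrogram from 16 kHz normalized audio, trained with MSE) might look like the following sketch. The STFT frame/hop sizes and the use of librosa and PyTorch are assumptions for illustration; the paper does not specify its toolchain or analysis parameters.

```python
# Hedged sketch of the feature/loss setup: 80-band log-mel features from 16 kHz
# audio, with mean squared error as the training loss. STFT parameters below
# (n_fft, hop_length) are assumed values, not stated in the paper.

import numpy as np
import librosa
import torch
import torch.nn as nn


def extract_mel(wav_path: str) -> np.ndarray:
    """Load audio at 16 kHz and compute an 80-band log-mel spectrogram."""
    wav, sr = librosa.load(wav_path, sr=16000)               # resample to 16 kHz
    wav = wav / (np.abs(wav).max() + 1e-8)                    # peak-normalize the waveform
    mel = librosa.feature.melspectrogram(
        y=wav, sr=sr, n_fft=1024, hop_length=256, n_mels=80)  # assumed STFT params
    return np.log(np.clip(mel, a_min=1e-5, a_max=None))       # log compression


# MSE between predicted and reference mel frames, as named in the paper.
mse_loss = nn.MSELoss()
predicted = torch.randn(16, 500, 80)   # (batch, frames, mel bins), dummy tensors
reference = torch.randn(16, 500, 80)
loss = mse_loss(predicted, reference)
```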