Zero-Shot Face-Based Voice Conversion: Bottleneck-Free Speech Disentanglement in the Real-World Scenario

Authors: Shao-En Weng, Hong-Han Shuai, Wen-Huang Cheng

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Quantitative experiments show that our method outperforms previous work.
Researcher Affiliation | Academia | Shao-En Weng, Hong-Han Shuai, Wen-Huang Cheng; National Yang Ming Chiao Tung University; anita4213.ee09@nycu.edu.tw, hhshuai@nycu.edu.tw, whcheng@nycu.edu.tw
Pseudocode | No | The paper describes the model architecture and training strategy in text and diagrams, but it does not provide pseudocode or a clearly labeled algorithm block.
Open Source Code | No | The paper provides a link to a demo website with audio samples (https://sites.google.com/view/spfacevc-demo/) but does not state that the source code for the methodology is openly available, nor does it link to a code repository.
Open Datasets | Yes | LRS3 (Afouras, Chung, and Zisserman 2018) dataset is collected from TED and TEDx videos downloaded from YouTube.
Dataset Splits | No | The paper mentions using training speakers (100, 200, 400) and unseen utterances for evaluation, but it does not specify explicit training/validation/test splits as percentages or sample counts, so the data partitioning cannot be reproduced.
Hardware Specification | No | The paper does not provide hardware details such as the GPU or CPU models used to run the experiments.
Software Dependencies | No | The paper names software components such as WaveGlow, Adam, and the Parselmouth library, but it does not give the version numbers needed for a reproducible environment.
Experiment Setup | Yes | The learning rate for the generator is set to 0.0001. For the discriminator, it is set to 0.0004 and with β1 = 0.9, β2 = 0.999. [...] Empirically, we set α = 1, β = 0.1, γ = 100, and δ = 0.1. [...] Here, we set the batch size to 2. (A hedged configuration sketch follows the table.)
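
The quoted hyperparameters translate directly into a training configuration. The PyTorch sketch below wires them together as a minimal illustration; since the paper releases no code, the module definitions are placeholders, the betas for the generator optimizer are an assumption (the paper states them only for the discriminator), and the mapping of the weights α, β, γ, δ to specific loss terms is hypothetical because the defining text is elided ("[...]") in the quote above.

```python
import torch
import torch.nn as nn

# Stand-in modules: the real generator/discriminator architectures are not
# released, so simple linear layers serve purely as placeholders.
generator = nn.Linear(256, 256)
discriminator = nn.Linear(256, 1)

# Learning rates as quoted: 0.0001 (generator), 0.0004 (discriminator).
# The paper gives β1 = 0.9, β2 = 0.999 for the discriminator; reusing the
# same betas for the generator is an assumption.
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.9, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=4e-4, betas=(0.9, 0.999))

# Loss weights as quoted: α = 1, β = 0.1, γ = 100, δ = 0.1.
ALPHA, BETA, GAMMA, DELTA = 1.0, 0.1, 100.0, 0.1
BATCH_SIZE = 2  # as quoted in the paper

def generator_loss(l_adv, l_term_b, l_term_g, l_term_d):
    # Hypothetical weighted sum; which loss term each weight scales is not
    # recoverable from the quoted excerpt, so the argument names are generic.
    return ALPHA * l_adv + BETA * l_term_b + GAMMA * l_term_g + DELTA * l_term_d
```

The large γ = 100 relative to the other weights suggests one loss term dominates the objective, but without the elided text the identity of that term remains an open question for reproduction.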