AniFaceGAN: Animatable 3D-Aware Face Image Generation for Video Avatars

Authors: Yue Wu, Yu Deng, Jiaolong Yang, Fangyun Wei, Qifeng Chen, Xin Tong

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate our superior performance over prior works. Project page: https://yuewuhkust.github.io/AniFaceGAN/"
Researcher Affiliation | Collaboration | HKUST, Tsinghua University, Microsoft Research
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Project page: https://yuewuhkust.github.io/AniFaceGAN/
Open Datasets | Yes | "We train our method on the FFHQ [26] dataset which contains 70K face images. ... FFHQ is released under the Creative Commons BY-NC-SA 4.0 license; the human face images therein were published on Flickr by their authors under licenses that all allow free use for non-commercial purposes."
Dataset Splits | No | The paper mentions training on FFHQ but does not specify explicit train/validation/test splits with percentages or sample counts for reproduction.
Hardware Specification | Yes | "We train our models on 8 Nvidia Tesla V100 GPUs with a batch size of 32 at the resolution of 128 × 128."
Software Dependencies | No | "In our experiments, the Adam optimizer [29] with β1 = 0 and β2 = 0.9 is applied for training our model." The paper mentions software components but does not provide specific version numbers for the libraries or frameworks used.
Experiment Setup | Yes | "We set the learning rate to 2e-5 for the deformation network and the generative radiance manifolds, and 2e-4 for the discriminator. We train our models on 8 Nvidia Tesla V100 GPUs with a batch size of 32 at the resolution of 128 × 128."
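The last two rows carry the only optimization hyperparameters the paper reports. Below is a minimal sketch of that configuration; since the paper names no framework or versions, PyTorch is an assumption here, and the three placeholder modules (deformation_net, radiance_manifolds, discriminator) are hypothetical stand-ins for the paper's actual architectures, which this report does not detail.

```python
import torch
import torch.nn as nn

# Placeholder modules (assumed): the paper's deformation network, generative
# radiance manifolds, and discriminator architectures are not reproduced here.
deformation_net = nn.Sequential(nn.Linear(3, 256), nn.ReLU(), nn.Linear(256, 3))
radiance_manifolds = nn.Sequential(nn.Linear(3, 256), nn.ReLU(), nn.Linear(256, 4))
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.LazyLinear(1),
)

# As reported: Adam with beta1 = 0 and beta2 = 0.9; learning rate 2e-5 for the
# deformation network and radiance manifolds, 2e-4 for the discriminator.
opt_g = torch.optim.Adam(
    list(deformation_net.parameters()) + list(radiance_manifolds.parameters()),
    lr=2e-5,
    betas=(0.0, 0.9),
)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.0, 0.9))

# Stated training scale: batch size 32 at 128 x 128 resolution on 8 V100 GPUs.
batch_size, resolution = 32, 128
```

Splitting the generator-side parameters (deformation network plus radiance manifolds) from the discriminator into two optimizers mirrors the paper's two learning rates; everything beyond those numbers is illustrative.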