GDFace: Gated Deformation for Multi-View Face Image Synthesis

Authors: Xuemiao Xu, Keke Li, Cheng Xu, Shengfeng He

AAAI 2020, pp. 12532-12540

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results on five widely-used benchmarks show that our approach performs favorably against the state-of-the-arts on multi-view face synthesis, especially for large pose changes.
Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, South China University of Technology, China; (2) State Key Laboratory of Subtropical Building Science; (3) Guangdong Provincial Key Lab of Computational Intelligence and Cyberspace Information
Pseudocode | No | The paper describes the model architecture and mathematical formulations but does not include pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide a statement or link to open-source code for the described methodology.
Open Datasets | Yes | Multi-PIE (Gross et al. 2010) is the largest multi-view face recognition benchmark... CelebA (Liu et al. 2015) is a large-scale face dataset... IJB-A (Klare et al. 2015) is a challenging face dataset... CFP (Sengupta et al. 2016) is a widely-used dataset... LFW (Huang et al. 2008) is the most commonly used database for face recognition...
Dataset Splits | No | The paper describes training and testing splits for the datasets (e.g., 'The first setting only uses faces of the first 150 subjects for training and the rest 100 subjects for testing.') but does not explicitly specify a separate validation split (see the split sketch below the table).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments.
Software Dependencies | No | The paper mentions PyTorch and Light CNN but does not specify software versions for reproducibility.
Experiment Setup | Yes | The image size of source and target images for training is 128 × 128. Our network is implemented using PyTorch, the batch size is set to 16 and learning rate is 0.0001. We empirically set λ1 = 5, λ2 = 10, λ3 = 0.01, λ4 = 10, λ5 = 0.0001. ... To balance the performance and computing costs, we set the number of gated deformable convolution blocks Nb = 11. We adopt 5-point facial landmarks as pose conditions... (see the configuration sketch below the table)
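
The Multi-PIE protocol quoted in the Dataset Splits row (faces of the first 150 subjects for training, the remaining 100 for testing) amounts to a subject-level partition. The sketch below is an illustration only: the file-naming scheme, the directory path, and the helper name `split_by_subject` are assumptions, not details taken from the paper.

```python
from pathlib import Path


def split_by_subject(image_dir: str, num_train_subjects: int = 150):
    """Partition image paths into train/test sets by subject ID."""
    paths = sorted(Path(image_dir).glob("*.png"))

    def subject_of(p: Path) -> str:
        # Assumed filename scheme: "<subject_id>_<session>_<pose>_....png"
        return p.name.split("_")[0]

    subject_ids = sorted({subject_of(p) for p in paths})
    train_ids = set(subject_ids[:num_train_subjects])
    train = [p for p in paths if subject_of(p) in train_ids]
    test = [p for p in paths if subject_of(p) not in train_ids]
    return train, test


# Hypothetical usage:
# train_paths, test_paths = split_by_subject("data/multipie/images")
```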
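
The Experiment Setup row pins down concrete values (128 × 128 inputs, batch size 16, learning rate 0.0001, loss weights λ1 through λ5, Nb = 11 gated deformable convolution blocks, 5-point landmark pose conditions). The PyTorch sketch below shows one plausible reading of a gated deformable convolution block alongside those reported hyperparameters; the block structure (an offset branch plus a sigmoid gate modulating a torchvision DeformConv2d), the residual connection, the optimizer choice, and all module and variable names are assumptions, while the numeric values come from the quoted text.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class GatedDeformableBlock(nn.Module):
    """Sketch of one gated deformable convolution block.

    A plain 3x3 convolution predicts sampling offsets, a second branch
    predicts a sigmoid gate, and the gate rescales the deformable
    convolution output. The paper's exact gating formulation may differ.
    """

    def __init__(self, channels: int = 64, kernel_size: int = 3):
        super().__init__()
        offset_channels = 2 * kernel_size * kernel_size  # (dx, dy) per kernel tap
        self.offset = nn.Conv2d(channels, offset_channels, kernel_size, padding=1)
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size, padding=1),
            nn.Sigmoid(),
        )
        self.deform = DeformConv2d(channels, channels, kernel_size, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset(x)
        out = self.deform(x, offsets)
        # Gated output with a residual path (residual connection is an assumption).
        return self.act(self.gate(x) * out) + x


# Values reported in the quoted setup; the constant names are ours.
IMAGE_SIZE = 128          # 128 x 128 source/target images
BATCH_SIZE = 16
LEARNING_RATE = 1e-4
NUM_BLOCKS = 11           # Nb gated deformable convolution blocks
NUM_LANDMARKS = 5         # 5-point facial landmarks as pose condition
LOSS_WEIGHTS = dict(l1=5, l2=10, l3=0.01, l4=10, l5=1e-4)  # lambda_1..lambda_5

blocks = nn.Sequential(*[GatedDeformableBlock(64) for _ in range(NUM_BLOCKS)])
# Optimizer choice is an assumption; the quoted text only gives the learning rate.
optimizer = torch.optim.Adam(blocks.parameters(), lr=LEARNING_RATE)
```

The gate here simply rescales the deformable convolution's response at each position, which is one common way to realize "gated" deformation; the full GDFace generator and its loss terms weighted by λ1 through λ5 are not reconstructed here.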