FaceController: Controllable Attribute Editing for Face in the Wild

Authors: Zhiliang Xu, Xiyu Yu, Zhibin Hong, Zhen Zhu, Junyu Han, Jingtuo Liu, Errui Ding, Xiang Bai

AAAI 2021, pp. 3083–3091 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Comprehensive quantitative and qualitative evaluations have been conducted. In a single framework, our method achieves the best or competitive scores on a variety of face applications." From the Experiments section, under "Implementation details": "The training images for FaceController are collected from CelebA-HQ (Karras et al. 2017), FFHQ (Karras, Laine, and Aila 2019), and VGGFace (Parkhi, Vedaldi, and Zisserman 2015) datasets."
Researcher Affiliation | Collaboration | Huazhong University of Science and Technology; Baidu Inc.
Pseudocode | No | Not found.
Open Source Code | No | Not found. The paper links only to code for a comparison method (FaceSwap), not to its own.
Open Datasets | Yes | "The training images for FaceController are collected from CelebA-HQ (Karras et al. 2017), FFHQ (Karras, Laine, and Aila 2019), and VGGFace (Parkhi, Vedaldi, and Zisserman 2015) datasets." (A hedged data-loading sketch appears after the table.)
Dataset Splits | No | The paper uses 20% of the training data for face reconstruction but does not specify overall train/validation/test splits for model training, nor how the named datasets (CelebA-HQ, FFHQ, VGGFace) are divided for training and validation.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models) are mentioned.
Software Dependencies | No | The paper mentions software such as PyTorch, a BiSeNet model, and a VGG network, but does not specify version numbers.
Experiment Setup | Yes | "The generator and discriminator are trained around 500K steps, respectively. More details can be found in the Supplementary Materials." The total objective (Eq. 9) is $\mathcal{L} = \mathcal{L}_{adv} + \lambda_{id}\mathcal{L}_{id} + \lambda_{lm}\mathcal{L}_{lm} + \lambda_{hm}\mathcal{L}_{hm} + \lambda_{per}\mathcal{L}_{per}$, where $\mathcal{L}_{adv}$ denotes the GAN loss, with $\lambda_{id} = 10$, $\lambda_{lm} = 10000$, $\lambda_{hm} = 100$, and $\lambda_{per} = 100$. (A hedged sketch of this loss appears after the table.)
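
The Open Datasets row names three public sources, but the paper's data pipeline is not described. Below is a minimal sketch of pooling the three datasets for training, assuming PyTorch/torchvision; the local directory names and the 256x256 resolution are assumptions, not details from the paper.

```python
import torch
from torchvision import datasets, transforms

# Shared preprocessing; the 256x256 resolution is an assumption,
# not something stated in the excerpts quoted above.
transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])

# Hypothetical local copies of the three datasets named in the paper.
# ImageFolder expects each root to contain subdirectories of images.
roots = ["data/celebahq", "data/ffhq", "data/vggface"]
pool = torch.utils.data.ConcatDataset(
    [datasets.ImageFolder(root, transform=transform) for root in roots]
)
loader = torch.utils.data.DataLoader(pool, batch_size=8, shuffle=True, num_workers=4)
```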
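
The Experiment Setup row quotes the paper's total objective (Eq. 9) and its weights. Below is a minimal sketch of the weighted combination, assuming PyTorch; the five component losses are placeholders to be computed by the caller (the paper's adversarial, identity, landmark, histogram, and VGG-based perceptual terms are not reproduced here), and the function name total_loss is hypothetical.

```python
import torch

# Loss weights as reported in the paper for Eq. 9.
LAMBDA_ID = 10.0
LAMBDA_LM = 10000.0
LAMBDA_HM = 100.0
LAMBDA_PER = 100.0

def total_loss(l_adv: torch.Tensor, l_id: torch.Tensor, l_lm: torch.Tensor,
               l_hm: torch.Tensor, l_per: torch.Tensor) -> torch.Tensor:
    """Weighted sum of Eq. 9:
    L = L_adv + lambda_id*L_id + lambda_lm*L_lm + lambda_hm*L_hm + lambda_per*L_per.

    l_adv is the GAN loss; the identity, landmark, histogram, and
    perceptual terms must be computed elsewhere and passed in.
    """
    return (l_adv
            + LAMBDA_ID * l_id
            + LAMBDA_LM * l_lm
            + LAMBDA_HM * l_hm
            + LAMBDA_PER * l_per)

# Example: combine five scalar loss tensors into the training objective.
loss = total_loss(*(torch.rand(()) for _ in range(5)))
```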