Reference Guided Face Component Editing

Authors: Qiyao Deng, Jie Cao, Yunfan Liu, Zhenhua Chai, Qi Li, Zhenan Sun

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental (4 Experiments) | Both qualitative and quantitative results demonstrate that our model is superior to existing literature. Following most face portrait methods [He et al., 2019; Wu et al., 2019], we leverage Fréchet Inception Distance (FID, lower value indicates better quality) [Heusel et al., 2017] and Multi-Scale Structural SIMilarity (MS-SSIM, higher value indicates better quality) [Wang et al., 2003] to evaluate the performance of our model.
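The FID metric quoted above is the Fréchet distance between Gaussian fits of Inception features from real and generated images. As an illustrative sketch (not the authors' evaluation code), the closed form for 1-D feature distributions shows the structure of the full metric:

```python
# Sketch only: FID between two 1-D Gaussians N(mu1, sigma1^2) and
# N(mu2, sigma2^2). The real metric uses multivariate Inception-feature
# statistics, where the cross term becomes 2*Tr((Sigma1 Sigma2)^(1/2)).
def fid_1d(mu1, sigma1, mu2, sigma2):
    # Mean-shift term plus variance mismatch term.
    return (mu1 - mu2) ** 2 + sigma1 ** 2 + sigma2 ** 2 - 2 * sigma1 * sigma2

# Identical feature distributions give FID = 0 (lower is better).
print(fid_1d(0.0, 1.0, 0.0, 1.0))  # 0.0
print(fid_1d(0.0, 1.0, 1.0, 2.0))  # 2.0
```

In practice FID is computed over features of a pretrained Inception network, which is why it correlates with perceived image quality rather than pixel-level similarity.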
Researcher Affiliation | Collaboration | Qiyao Deng (1,4), Jie Cao (1,4), Yunfan Liu (1,4), Zhenhua Chai (5), Qi Li (1,2,4), Zhenan Sun (1,3,4). 1: Center for Research on Intelligent Perception and Computing, NLPR, CASIA, Beijing, China; 2: Artificial Intelligence Research, CAS, Qingdao, China; 3: Center for Excellence in Brain Science and Intelligence Technology, CAS, Beijing, China; 4: School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; 5: Vision Intelligence Center, AI Platform, Meituan-Dianping Group. {qiyao.deng, jie.cao, yunfan.liu}@cripac.ia.ac.cn, {qli, znsun}@nlpr.ia.ac.cn, chaizhenhua@meituan.com
Pseudocode | No | The paper describes the method using prose, equations, and diagrams, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not explicitly state that source code for the described methodology is available, nor does it provide any links to a code repository.
Open Datasets | Yes | The face attribute dataset CelebAMask-HQ [Lee et al., 2019] contains 30,000 aligned facial images with the size of 1024 × 1024 and corresponding 30,000 semantic segmentation labels with the size of 512 × 512.
Dataset Splits | No | We take 2,000 images as the test set for performance evaluation, using the remaining images to train our model. (No explicit mention of a separate validation split.)
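A minimal sketch of the split protocol described above: 30,000 CelebAMask-HQ images with 2,000 held out for testing. The function name, shuffling, and seed are hypothetical, since the paper does not specify how the test images were selected:

```python
import random

def split_dataset(image_ids, test_size=2000, seed=0):
    """Deterministically shuffle ids, then hold out `test_size` for testing."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)  # fixed seed for reproducibility
    return ids[test_size:], ids[:test_size]  # (train, test)

train, test = split_dataset(range(30000))
print(len(train), len(test))  # 28000 2000
```

Note that no images are shared between the two partitions, which is the property a reproduction would need to verify.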
Hardware Specification | Yes | Our end-to-end network is trained on four GeForce GTX TITAN X GPUs with 12 GB of memory.
Software Dependencies | No | The paper mentions the Adam optimizer and the VGG-19 network but does not specify software dependencies with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x).
Experiment Setup | Yes | The Adam optimizer is used in experiments with β1 = 0.0 and β2 = 0.9. The hyperparameters λ1 through λ6 are assigned as 0.1, 250, 1, 0.5, 0.1, and 0.01, respectively. For each source image, we remove two or three target face components for training.
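The reported optimizer settings can be illustrated with a single-parameter Adam update (a sketch, not the authors' code; the learning rate below is an assumption, as it is not quoted in this report). With β1 = 0.0, the first-moment estimate reduces to the raw gradient, a choice common in GAN training:

```python
def adam_step(param, grad, m, v, t, lr=1e-4, beta1=0.0, beta2=0.9, eps=1e-8):
    """One Adam update for a scalar parameter at timestep t (t >= 1)."""
    m = beta1 * m + (1 - beta1) * grad       # first moment (== grad when beta1 = 0)
    v = beta2 * v + (1 - beta2) * grad ** 2  # second moment (squared-gradient EMA)
    m_hat = m / (1 - beta1 ** t)             # bias correction (no-op when beta1 = 0)
    v_hat = v / (1 - beta2 ** t)
    param -= lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

p, m, v = adam_step(1.0, 0.5, 0.0, 0.0, t=1)
print(round(p, 6))  # 0.9999
```

The λ1..λ6 weights would simply scale the individual loss terms before the gradient above is computed; their relative magnitudes (e.g., 250 on one term versus 0.01 on another) indicate which objectives dominate training.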