Anti-Makeup: Learning A Bi-Level Adversarial Network for Makeup-Invariant Face Verification

Authors: Yi Li, Lingxiao Song, Xiang Wu, Ran He, Tieniu Tan

AAAI 2018

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results on three benchmark makeup face datasets demonstrate that our method achieves state-of-the-art verification accuracy across makeup status and can produce photo-realistic non-makeup face images. We evaluate our proposed BLAN on three makeup datasets. Both visualized results of synthesized non-makeup images and quantitative verification performance are presented in this section. Furthermore, we explore the effects of all losses and report them in the ablation studies.
Researcher Affiliation Academia Yi Li, Lingxiao Song, Xiang Wu, Ran He, Tieniu Tan: National Laboratory of Pattern Recognition, CASIA; Center for Research on Intelligent Perception and Computing, CASIA; Center for Excellence in Brain Science and Intelligence Technology, CAS; University of Chinese Academy of Sciences, Beijing 100190, China
Pseudocode No The paper describes the network architecture and components using text and mathematical formulas, but does not provide any explicitly labeled pseudocode or algorithm blocks.
Open Source Code No The paper mentions using PyTorch ('We accomplish our network on PyTorch (Paszke, Gross, and Chintala 2017)'), but does not provide any explicit statement or link for the source code of the methodology described in the paper itself.
Open Datasets Yes Dataset 1: This dataset is collected in (Guo, Wen, and Yan 2014) and contains 1002 face images of 501 female individuals. [...] Dataset 2: Assembled in (Sun et al. 2017), there are 203 pairs of images with and without makeup, each pair corresponding to a female individual. [...] Dataset 3 (FAM) (Hu et al. 2013): Different from the other two datasets, FAM involves 222 males and 297 females, with 1038 images belonging to 519 subjects in total.
Dataset Splits Yes Following the settings in (Guo, Wen, and Yan 2014; Sun et al. 2017; Hu et al. 2013), we adopt five-fold cross validation in our experiments. In each round, we use about 4/5 of the paired data for training and the remaining 1/5 for testing, with no overlap between the training and testing sets.
Hardware Specification No The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies No We accomplish our network on PyTorch (Paszke, Gross, and Chintala 2017). The Light CNN used for feature extracting is pretrained on MS-Celeb-1M (Guo et al. 2016)...
Experiment Setup Yes It takes about 3 hours to train BLAN on Dataset 1, with a learning rate of 10^-4. Data augmentation of mirroring images is also adopted in the training phase. As for the loss weights, we empirically set λ1 = 3×10^-3, λ2 = 0.02 and λ3 = 3×10^-3. In particular, we also set a weight of 0.1 to the edge loss and 0.3 to the symmetry loss inside L_cons^p.
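The loss weighting described in the experiment setup can be sketched as a plain weighted sum. This is a minimal illustration using the reported values (λ1 = 3×10^-3, λ2 = 0.02, λ3 = 3×10^-3, edge weight 0.1, symmetry weight 0.3); the function and term names below are placeholders, not the paper's actual notation or code.

```python
# Hypothetical sketch of BLAN's loss aggregation, using only the weights
# reported in the paper. All loss terms here are assumed placeholder names.
LAMBDA1 = 3e-3   # λ1, reported in the paper
LAMBDA2 = 0.02   # λ2, reported in the paper
LAMBDA3 = 3e-3   # λ3, reported in the paper
W_EDGE = 0.1     # edge-loss weight inside the pixel consistency loss L_cons^p
W_SYM = 0.3      # symmetry-loss weight inside L_cons^p

def pixel_consistency_loss(l_pixel, l_edge, l_sym):
    """Pixel-level consistency loss with internal edge/symmetry weights."""
    return l_pixel + W_EDGE * l_edge + W_SYM * l_sym

def total_loss(l_base, l_1, l_2, l_3, l_pixel, l_edge, l_sym):
    """Weighted sum of loss terms; term-to-λ mapping is illustrative only."""
    l_cons_p = pixel_consistency_loss(l_pixel, l_edge, l_sym)
    return l_base + LAMBDA1 * l_1 + LAMBDA2 * l_2 + LAMBDA3 * l_3 + l_cons_p
```

In practice each argument would be a scalar tensor produced by the corresponding network head; the sketch only shows how the reported weights combine.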