Makeup Like a Superstar: Deep Localized Makeup Transfer Network

Authors: Si Liu, Xinyu Ou, Ruihe Qian, Wei Wang, Xiaochun Cao

IJCAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Qualitative and quantitative experiments show that our network performs much better than the methods of [Guo and Sim, 2009] and two variants of Neural Style [Gatys et al., 2015a]."
Researcher Affiliation | Academia | (1) State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences; (2) School of Computer Science and Technology, Huazhong University of Science and Technology; (3) YNGBZX, Yunnan Open University; (4) University of Electronic Science and Technology of China, Yingcai Experimental School
Pseudocode | No | The paper includes mathematical formulations and a flowchart (Figure 2) but no structured pseudocode or algorithm blocks.
Open Source Code | No | The paper points to existing open-source code for a baseline (footnote 5: "we use the code https://github.com/jcjohnson/neural-style") but provides no statement or link for the code of its own proposed method.
Open Datasets | No | "We collect a new dataset with 1000 before-makeup faces and 1000 reference faces. Some before-makeup faces are nude makeup or very light makeup. Among the 2000 faces, 100 before-makeup faces and 500 reference faces are randomly selected as test set. The remaining 1300 faces and 100 faces are used as training and validation set." (A sketch of this split appears after the table.)
Dataset Splits | Yes | "The remaining 1300 faces and 100 faces are used as training and validation set. Given one before-makeup test face, the most similar ones among the 500 reference test faces are chosen to transfer the makeup."
Hardware Specification | Yes | "The proposed model can transfer the makeup in 6 seconds for a 224×224 image pair using TITAN X GPU."
Software Dependencies | No | The paper mentions using a Fully Convolutional Network (FCN) and the VGG-Face model based on the VGG-Very Deep-16 CNN architecture, and adapting the neural-style code for baselines (footnote 5), but gives no version numbers for any software dependency. (A feature-extractor sketch appears after the table.)
Experiment Setup | Yes | "The weights [λ_s, λ_e, λ_l, λ_f] are set as [10, 40, 500, 100]. The weights of different labels in the weighted FCN are set as [1.4, 1.2, 1] for {eyebrows, eyes, eye shadows}, {lip, inner mouth} and {face, background}, respectively." (A loss-weighting sketch appears after the table.)
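
To make the quoted dataset numbers concrete, here is a minimal Python sketch of the described split: 100 before-makeup and 500 reference faces as test, and the remaining 1400 faces divided into 1300 training and 100 validation. The file names and random seed are hypothetical, and the paper does not specify how the 1300/100 partition was drawn.

```python
import random

# A minimal sketch of the split described in the paper; file names,
# the seed, and the exact train/validation partition are assumptions.
random.seed(0)

before_makeup = [f"before_{i:04d}.jpg" for i in range(1000)]  # hypothetical names
reference = [f"ref_{i:04d}.jpg" for i in range(1000)]

# 100 before-makeup faces and 500 reference faces form the test set.
test_before = random.sample(before_makeup, 100)
test_reference = random.sample(reference, 500)

# The remaining 1400 faces are split into 1300 training and 100 validation.
held_out = set(test_before) | set(test_reference)
remaining = [f for f in before_makeup + reference if f not in held_out]
random.shuffle(remaining)
train_set, val_set = remaining[:1300], remaining[1300:]

assert len(train_set) == 1300 and len(val_set) == 100
```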
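The dependency row above names the VGG-Very Deep-16 architecture without versions. The sketch below shows one plausible way to obtain a VGG-16 convolutional backbone in modern PyTorch; torchvision and its ImageNet weights are stand-ins for the authors' Caffe-era VGG-Face model, not their actual setup.

```python
import torch
import torchvision.models as models

# A minimal sketch of a VGG-16 feature extractor; the paper used the
# VGG-Face model, so the ImageNet weights here are an assumption.
vgg16 = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg16.eval()

# Keep only the convolutional backbone, as is typical when such a network
# is repurposed for dense prediction (e.g., an FCN-style face parser).
features = vgg16.features

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)  # the 224x224 input size quoted above
    feats = features(x)              # shape: (1, 512, 7, 7)
print(feats.shape)
```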
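For the experiment setup row, the following sketch shows how the quoted weights could combine per-term losses and per-label FCN weights. The individual loss functions are hypothetical placeholders; the paper defines its objective mathematically and releases no code.

```python
import torch

# A minimal sketch of the weighted objective with the values quoted above;
# the per-term losses themselves are hypothetical placeholders.
weights = {"s": 10.0, "e": 40.0, "l": 500.0, "f": 100.0}

def total_loss(losses: dict[str, torch.Tensor]) -> torch.Tensor:
    """Weighted sum: L = λ_s·L_s + λ_e·L_e + λ_l·L_l + λ_f·L_f."""
    return sum(weights[k] * losses[k] for k in weights)

# Per-label weights in the weighted FCN: 1.4 for {eyebrows, eyes,
# eye shadows}, 1.2 for {lip, inner mouth}, 1.0 for {face, background}.
fcn_label_weights = {
    "eyebrows": 1.4, "eyes": 1.4, "eye_shadows": 1.4,
    "lip": 1.2, "inner_mouth": 1.2,
    "face": 1.0, "background": 1.0,
}

# Example: if every loss term equals 0.1, the total is
# 0.1 * (10 + 40 + 500 + 100) = 65.0.
example = {k: torch.tensor(0.1) for k in weights}
print(total_loss(example))
```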