Real-World Automatic Makeup via Identity Preservation Makeup Net
Authors: Zhikun Huang, Zhedong Zheng, Chenggang Yan, Hongtao Xie, Yaoqi Sun, Jianzhong Wang, Jiyong Zhang
IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In the experiment, we show the proposed method achieves not only better accuracy in both realism (FID) and diversity (LPIPS) in the test set, but also works well on the real-world images collected from the Internet. (A sketch of how these two metrics are commonly computed appears after the table.) |
| Researcher Affiliation | Collaboration | Zhikun Huang¹, Zhedong Zheng²,⁴, Chenggang Yan¹, Hongtao Xie³, Yaoqi Sun¹, Jianzhong Wang¹, Jiyong Zhang¹ (¹Hangzhou Dianzi University, ²University of Technology Sydney, ³University of Science and Technology of China, ⁴Baidu Research) |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper states that IPM-Net is implemented in PyTorch and PaddlePaddle, but it does not provide any link to, or explicit statement about, a release of the source code for the described methodology. |
| Open Datasets | Yes | We train and test our model on the widely-used Makeup Transfer dataset [Li et al., 2018]. |
| Dataset Splits | No | The paper mentions training and testing on the Makeup Transfer dataset, but it does not provide specific percentages or counts for training, validation, and test splits. It only describes image preprocessing and augmentation during the training phase. |
| Hardware Specification | Yes | All our experiments are conducted on one NVIDIA GTX 2080Ti GPU. |
| Software Dependencies | No | The paper states that IPM-Net is implemented in PyTorch and PaddlePaddle and uses the Adam optimizer, but it does not specify version numbers for any of these software components. |
| Experiment Setup | Yes | During the training phase, each image is resized to 321 × 321 and then random-cropped to 256 × 256. Random horizontal flipping is applied as simple data augmentation. We apply Adam [Kingma and Ba, 2014] to optimize the whole IPM-Net with λ1 = 0.5, λ2 = 0.999 and set the learning rate to 0.0001. We train our model for 1,000,000 iterations, and the batch size is set to 3. (A minimal PyTorch sketch of this configuration follows the table.) |
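
Since the authors release no code, the quoted setup can be read as the following minimal PyTorch sketch. The one-layer model is a stand-in for the unpublished IPM-Net generator, and interpreting the paper's λ1, λ2 as Adam's momentum terms (β1, β2) is an assumption on our part.

```python
import torch
from torch import nn
from torchvision import transforms

# Preprocessing as described in the paper: resize to 321x321,
# random-crop to 256x256, random horizontal flip as the only augmentation.
train_transform = transforms.Compose([
    transforms.Resize((321, 321)),
    transforms.RandomCrop(256),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Stand-in network: the real IPM-Net architecture is not released.
model = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1))

# Adam with the reported hyper-parameters: learning rate 1e-4 and,
# assuming lambda_1/lambda_2 denote Adam's betas, momenta (0.5, 0.999).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.5, 0.999))

# Reported training schedule.
NUM_ITERATIONS = 1_000_000
BATCH_SIZE = 3
```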
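
For the realism (FID) and diversity (LPIPS) numbers cited in the Research Type row, the paper does not name the implementations it used. Below is a hedged sketch using two common off-the-shelf libraries (torchmetrics for FID, which additionally depends on the torch-fidelity package, and the lpips package), with random tensors as stand-in data.

```python
import torch
import lpips
from torchmetrics.image.fid import FrechetInceptionDistance

# FID expects uint8 image batches (N, 3, H, W) by default; the random
# tensors stand in for real test images and generated results.
fid = FrechetInceptionDistance(feature=2048)
real = torch.randint(0, 256, (8, 3, 256, 256), dtype=torch.uint8)
fake = torch.randint(0, 256, (8, 3, 256, 256), dtype=torch.uint8)
fid.update(real, real=True)
fid.update(fake, real=False)
print("FID:", fid.compute().item())

# LPIPS measures perceptual distance between image pairs and expects
# float tensors scaled to [-1, 1]; diversity is usually reported as the
# average pairwise LPIPS over generated outputs.
lpips_fn = lpips.LPIPS(net='alex')
a = torch.rand(1, 3, 256, 256) * 2 - 1
b = torch.rand(1, 3, 256, 256) * 2 - 1
print("LPIPS:", lpips_fn(a, b).item())
```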