Face Photo-Sketch Synthesis via Knowledge Transfer
Authors: Mingrui Zhu, Nannan Wang, Xinbo Gao, Jie Li, Zhifeng Li
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the effectiveness of our method across several datasets. Quantitative and qualitative evaluations illustrate that our model outperforms other state-of-the-art methods in generating face sketches (or photos) with high visual quality and recognition ability. |
| Researcher Affiliation | Collaboration | (1) State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, China; (2) School of Electronic Engineering, Xidian University, Xi'an, China; (3) School of Telecommunications Engineering, Xidian University, Xi'an, China; (4) Tencent AI Lab, Shenzhen, China |
| Pseudocode | No | The paper describes the network architecture in detail but does not provide pseudocode or a clearly labeled algorithm block. |
| Open Source Code | No | The paper states 'All results are obtained from the source codes provided by the authors except the results of FCN,' but this refers to the compared methods; no explicit statement or link is provided for the source code of the method proposed in this paper. |
| Open Datasets | Yes | We conducted experiments on two public datasets: the CUFS dataset [Tang and Wang, 2009] and the CUFSF dataset [Zhang et al., 2011b]. |
| Dataset Splits | Yes | For the CUHK student database, we randomly choose 88 face photo-sketch pairs for training and use the rest for testing. For the AR database, 80 face photo-sketch pairs are randomly chosen for training and the remaining 43 pairs are used for testing. For the XM2VTS database, we randomly choose 100 pairs for training and the remaining 195 pairs are used for testing. For the CUFSF dataset, 250 face photo-sketch pairs are chosen for training and the rest are used for testing. |
| Hardware Specification | Yes | Our model was trained on a NVIDIA Titan X GPU. |
| Software Dependencies | No | The paper mentions 'Adam [Kingma and Ba, 2015] with β1 = 0.5 was used for optimization' but does not specify software versions for libraries, frameworks, or languages like Python, TensorFlow, or PyTorch. |
| Experiment Setup | Yes | Adam [Kingma and Ba, 2015] with β1 = 0.5 was used for optimization. The learning rate was set to 0.0002 and the number of iterations was set to 200. Weights were initialized from a Gaussian distribution with mean 0 and standard deviation 0.02. We scaled the input images to 256 × 256 and normalized the pixel values to the interval [−1, 1] before feeding them to the model. The number of input and output channels was set to 3. We updated Stu-PtoS and Stu-StoP alternately at every iteration. The batch size was set to 1. |
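The dataset splits reported above can be reproduced schematically. The sketch below is an assumption-laden illustration, not the authors' code: the pair totals for CUHK, AR, and XM2VTS are inferred from the train + test counts quoted in the table, the CUFSF total of 1194 pairs is the commonly cited dataset size, and the identifier scheme and random seed are hypothetical.

```python
import random

# Pair counts per database; totals for CUHK/AR/XM2VTS follow from the
# quoted train + test counts. CUFSF total (1194) is an assumption based
# on the dataset's commonly cited size.
SPLITS = {
    "CUHK":   {"total": 188,  "train": 88},   # 88 train / 100 test
    "AR":     {"total": 123,  "train": 80},   # 80 train / 43 test
    "XM2VTS": {"total": 295,  "train": 100},  # 100 train / 195 test
    "CUFSF":  {"total": 1194, "train": 250},  # 250 train / rest test
}

def split_pairs(name, seed=0):
    """Randomly partition pair indices into (train, test) lists."""
    cfg = SPLITS[name]
    ids = list(range(cfg["total"]))
    random.Random(seed).shuffle(ids)  # fixed seed only for repeatability here
    return ids[:cfg["train"]], ids[cfg["train"]:]

train_ids, test_ids = split_pairs("AR")
```

The paper does not state a seed or a fixed split, so exact test-set membership cannot be recovered; only the counts are verifiable.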
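The preprocessing and initialization values in the Experiment Setup row translate directly into code. The following is a minimal framework-agnostic sketch of those steps only (pixel normalization to [−1, 1] and Gaussian weight initialization); the network architecture, the Adam update itself, and the function names are not from the paper.

```python
import numpy as np

IMG_SIZE = 256   # inputs scaled to 256 x 256 (paper value)
LR = 2e-4        # Adam learning rate (paper value)
BETA1 = 0.5      # Adam beta1 (paper value)

def normalize(img_uint8):
    """Map uint8 pixel values from [0, 255] to [-1, 1], as described."""
    return img_uint8.astype(np.float32) / 127.5 - 1.0

def init_weights(shape, rng=np.random.default_rng(0)):
    """Gaussian init with mean 0 and std 0.02, per the reported setup."""
    return rng.normal(loc=0.0, scale=0.02, size=shape).astype(np.float32)

# A white dummy image maps to all ones after normalization.
img = np.full((IMG_SIZE, IMG_SIZE, 3), 255, dtype=np.uint8)
x = normalize(img)
w = init_weights((3, 3, 3, 64))  # hypothetical conv-kernel shape
```

In a PyTorch-style setup these values would correspond to `Adam(params, lr=2e-4, betas=(0.5, 0.999))` and an init of `N(0, 0.02)` on the layer weights, matching the conventions of image-to-image translation models.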