Dual Conditional GANs for Face Aging and Rejuvenation
Authors: Jingkuan Song, Jingqiu Zhang, Lianli Gao, Xianglong Liu, Heng Tao Shen
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on two publicly available datasets demonstrate the appealing performance of the proposed framework in comparison with state-of-the-art methods. |
| Researcher Affiliation | Academia | Jingkuan Song1, Jingqiu Zhang1, Lianli Gao1, Xianglong Liu2, Heng Tao Shen1 1Center for Future Media and School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China 2Beihang University, Beijing, 100083, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a direct link to the source code for the methodology described, nor does it explicitly state that the code will be made open source. It only provides a link to a third-party tool used for evaluation: the Seeta Face Identification part of the SeetaFace Engine (https://github.com/seetaface/SeetaFaceEngine). |
| Open Datasets | Yes | We use UTKFace [Zhang et al., 2017] for training... To show the generalizability of Dual cGANs, we not only test on the same dataset as the training dataset (i.e., UTKFace), but also test on the face images from CACD [Chen et al., 2014], FGNET [Lanitis et al., 2002], Morph [Ricanek and Tesafaye, 2006] and the IMDB-Wiki dataset [Rothe et al., 2015]. |
| Dataset Splits | No | The paper mentions using UTKFace for training and other datasets for testing, but does not provide specific dataset split information (percentages, sample counts, or explicit validation set details) needed to reproduce the data partitioning for training, validation, and testing. |
| Hardware Specification | Yes | The training time for 80 epochs on NVIDIA TITAN X (12GB) is about 160 hours. |
| Software Dependencies | No | The paper mentions using 'RMSprop optimizer' but does not provide specific version numbers for any software dependencies like deep learning frameworks, libraries, or programming languages. |
| Experiment Setup | Yes | During the training process, we set α=10.0 and β=10.0 for a balance between keeping personality and changing features. Each input needs to be trained nine times with nine images from different age groups. Our batch size is one, but effectively it is nine, as we update the parameters after nine iterations. With four blocks, Gs, Gt, Ds and Dt, we train the discriminators for one step, then the generators for two steps, as described in DualGAN [Yi et al., 2017]. The training time for 80 epochs on an NVIDIA TITAN X (12GB) is about 160 hours. All the models are trained with the RMSprop optimizer with a weight decay of 0.9. The learning rate is initialized as 0.00005. |
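The training cadence in the setup above (one discriminator step followed by two generator steps, with an effective batch of nine age-group targets per input) can be sketched as follows. This is a hypothetical illustration, not the authors' released code: the block names `Gs`/`Gt`/`Ds`/`Dt` come from the paper, but the `step_discriminators`/`step_generators` callables are stand-ins for the actual gradient updates.

```python
# Sketch of the Dual cGANs training schedule (assumed structure, not official code).
ALPHA, BETA = 10.0, 10.0       # loss weights alpha and beta from the paper
LR, WEIGHT_DECAY = 5e-5, 0.9   # RMSprop settings reported in the paper
AGE_GROUPS = 9                 # each input is paired with nine age-group targets

def train_epoch(inputs, step_discriminators, step_generators):
    """For each input, build an effective batch of nine (input, age) pairs,
    then take 1 step on the discriminators (Ds, Dt) and 2 steps on the
    generators (Gs, Gt), following the DualGAN update schedule."""
    d_steps = g_steps = 0
    for x in inputs:
        batch = [(x, age) for age in range(AGE_GROUPS)]  # effective batch of 9
        step_discriminators(batch)                       # 1 discriminator step
        d_steps += 1
        for _ in range(2):                               # 2 generator steps
            step_generators(batch)
            g_steps += 1
    return d_steps, g_steps
```

With stub update functions, an epoch over N inputs performs N discriminator steps and 2N generator steps, matching the 1:2 ratio the paper describes.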