Age Progression and Regression with Spatial Attention Modules

Authors: Qi Li, Yunfan Liu, Zhenan Sun

AAAI 2020, pp. 11378-11385 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on multiple datasets demonstrate the ability of our model in synthesizing lifelike face images at desired ages with personalized features well preserved, and keeping age-irrelevant regions unchanged. Extensive experiments on three age databases are conducted to comprehensively evaluate the proposed method."
Researcher Affiliation | Academia | (1) Center for Research on Intelligent Perception and Computing, CASIA; (2) National Laboratory of Pattern Recognition, CASIA; (3) Artificial Intelligence Research, CAS, Jiaozhou, Qingdao, China; (4) Center for Excellence in Brain Science and Intelligence Technology, CAS; (5) University of Chinese Academy of Sciences
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper neither states that source code for the method is open-sourced nor provides a link to it. It mentions the Face++ Research Toolkit (http://www.faceplusplus.com), but this is a third-party tool used for evaluation.
Open Datasets | Yes | "Three publicly available face aging datasets, Morph (Ricanek and Tesafaye 2006), CACD (Chen, Chen, and Hsu 2015), and UTKFace (Zhang, Song, and Qi 2017) are used in our experiments."
Dataset Splits | No | The paper specifies a training/testing split but no validation split: "For each dataset, we randomly select 80% images for training and the rest for testing, and ensure that these two sets do not share images of the same subject." (A split sketch is given after the table.)
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU or CPU models) used to run its experiments.
Software Dependencies | No | The paper mentions the Adam optimizer and the Face++ Research Toolkit but provides no version numbers for any software dependency.
Experiment Setup | Yes | "We train our model for 30 epochs with a batch size of 24, using the Adam optimizer with learning rate set to 1e-4. Optimization over generators is performed every 5 iterations of discriminators. As for the balancing hyperparameters λrecon, λactv, and λreg, we first initialize them to make all losses of the same order of magnitude as the adversarial loss LGAN, then divide them by 10 except for λreg to emphasize the importance of accurate age simulation." (A training-loop sketch is given after the table.)
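
The subject-disjoint split quoted in the Dataset Splits row can be reproduced by partitioning at the subject level rather than the image level. A minimal sketch in Python follows; the `samples` list of (image_path, subject_id) pairs is an assumed input format, not something the paper specifies:

```python
import random
from collections import defaultdict

def subject_disjoint_split(samples, train_frac=0.8, seed=0):
    """Split (image_path, subject_id) pairs so that no subject
    appears in both the training and the test set."""
    by_subject = defaultdict(list)
    for path, subject_id in samples:
        by_subject[subject_id].append(path)

    subjects = sorted(by_subject)
    random.Random(seed).shuffle(subjects)

    # Taking ~80% of subjects approximates the paper's "80% of images";
    # the exact image-level ratio depends on how many images each
    # subject has.
    n_train = round(train_frac * len(subjects))
    train = [p for s in subjects[:n_train] for p in by_subject[s]]
    test = [p for s in subjects[n_train:] for p in by_subject[s]]
    return train, test
```

Splitting by subject instead of by image is what guarantees the two sets share no identities; a plain image-level 80/20 shuffle would almost certainly leak subjects across the split.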
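
The Experiment Setup row pins down the optimization schedule but not the losses or architectures. The sketch below shows only that schedule in PyTorch: `loader` and the loss helpers `recon_loss`, `actv_loss`, and `age_reg_loss` are hypothetical placeholders, the networks are trivial stand-ins, and the initial λ values are illustrative rather than the paper's:

```python
import torch
import torch.nn as nn

# Trivial stand-in networks; the paper's generator/discriminator with
# spatial attention modules are not reproduced here.
G = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(3, 1, 4, stride=2, padding=1))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)  # "learning rate set to 1e-4"
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)

# Balancing weights: initialized so each term is of the same order of
# magnitude as the adversarial loss (values below are illustrative),
# then divided by 10 except for lambda_reg.
lambda_recon, lambda_actv, lambda_reg = 10.0, 1.0, 10.0
lambda_recon, lambda_actv = lambda_recon / 10, lambda_actv / 10

for epoch in range(30):                     # "30 epochs"
    for i, x in enumerate(loader):          # assumed DataLoader, batch size 24
        fake = G(x)

        # Least-squares adversarial losses, used as illustrative
        # stand-ins for the paper's GAN objective.
        d_loss = (D(x) - 1).pow(2).mean() + D(fake.detach()).pow(2).mean()
        opt_D.zero_grad(); d_loss.backward(); opt_D.step()

        # "Optimization over generators is performed every 5 iterations
        # of discriminators."
        if (i + 1) % 5 == 0:
            g_loss = ((D(fake) - 1).pow(2).mean()
                      + lambda_recon * recon_loss(fake, x)   # hypothetical
                      + lambda_actv * actv_loss(fake)        # hypothetical
                      + lambda_reg * age_reg_loss(fake))     # hypothetical
            opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

The 5:1 update ratio means the generator receives one optimization step for every five discriminator steps, matching the quoted schedule; everything else here is an assumption standing in for details the paper leaves unspecified.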