AgeFlow: Conditional Age Progression and Regression with Normalizing Flows

Authors: Zhizhong Huang, Shouzhen Chen, Junping Zhang, Hongming Shan

IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate superior performance over existing GANs-based methods on two benchmarked datasets.
Researcher Affiliation | Academia | (1) Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai 200433, China; (2) Institute of Science and Technology for Brain-inspired Intelligence and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200431, China; (3) Shanghai Center for Brain Science and Brain-inspired Technology, Shanghai 200031, China
Pseudocode | No | The paper describes the network architecture and loss functions in detail but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The source code is available at https://github.com/Hzzone/AgeFlow.
Open Datasets | Yes | We conducted experiments on two benchmark age datasets: MORPH [Ricanek and Tesafaye, 2006] and CACD [Chen et al., 2015]. We also adopted FG-NET and CelebA [Liu et al., 2015] as external testing sets...
Dataset Splits | No | We randomly divided the dataset into two parts without identities overlapping: 80% for training and the remaining for testing. (An illustrative identity-disjoint split sketch is given after this table.)
Hardware Specification | Yes | We trained all models with a batch size of 16 on 4 NVIDIA V100 GPUs.
Software Dependencies | No | The paper mentions 'PyTorch' and the 'Adam optimizer' but does not specify their version numbers.
Experiment Setup | Yes | All models are implemented in PyTorch and trained with the Adam optimizer at a fixed learning rate of 10^-5 for the GLOW model and ICTM, and 10^-4 for the discriminator. In addition, due to the limited GPU memory, we trained all models with a batch size of 16 on 4 NVIDIA V100 GPUs, and the parameters are updated every 4 iterations, equal to a batch size of 64. We first trained the GLOW model on CelebA [Liu et al., 2015], thanks to its diversity and huge amount of face images, for 1M iterations and then finetuned the models on each dataset with only 50K iterations. ICTM contains m = 32 flows. The hyperparameters in the final loss are empirically set as follows: λ_al = 1, λ_cl = 0.01, λ_acl = 1, λ^D_acl = 0.1, and λ_akl = 1. The s in the knowledge distilling loss is set as 1.4 for MORPH and 1.8 for CACD.
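To make the quoted experiment setup concrete, below is a minimal PyTorch sketch of the reported training configuration: Adam with learning rates 1e-5 (GLOW/ICTM) and 1e-4 (discriminator), a batch size of 16 with gradients accumulated over 4 iterations for an effective batch size of 64, and the reported loss weights. The stand-in modules, placeholder loss terms, and random inputs are assumptions for illustration only; the authors' actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the GLOW backbone, the ICTM (m = 32 flows) and the
# discriminator described in the paper; the real modules are far larger.
glow = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
ictm = nn.Linear(128, 128)
discriminator = nn.Linear(128, 1)

# Reported optimizer settings: Adam, lr 1e-5 for GLOW/ICTM, 1e-4 for the discriminator.
opt_g = torch.optim.Adam(list(glow.parameters()) + list(ictm.parameters()), lr=1e-5)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)  # adversarial update omitted below

# Reported loss weights (lambda_al, lambda_cl, lambda_acl, lambda_acl^D, lambda_akl).
w = {"al": 1.0, "cl": 0.01, "acl": 1.0, "acl_D": 0.1, "akl": 1.0}

BATCH_SIZE = 16
ACCUM_STEPS = 4  # gradients accumulated over 4 iterations -> effective batch size 64

for step in range(2 * ACCUM_STEPS):                 # toy loop over random images
    images = torch.randn(BATCH_SIZE, 3, 64, 64)
    z = ictm(glow(images))

    # Placeholder terms standing in for the paper's age/identity/adversarial losses.
    terms = {k: z.pow(2).mean() for k in ("al", "cl", "acl", "akl")}
    total = sum(w[k] * v for k, v in terms.items())

    (total / ACCUM_STEPS).backward()                # scale so accumulation averages gradients
    if (step + 1) % ACCUM_STEPS == 0:
        opt_g.step()
        opt_g.zero_grad()
```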
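The Dataset Splits row quotes an identity-disjoint 80/20 partition. The following is a minimal sketch of such a split, assuming a list of record dicts carrying an "identity" field; the schema and function name are illustrative, not the authors' preprocessing code.

```python
import random
from collections import defaultdict

def split_by_identity(records, train_frac=0.8, seed=0):
    """Partition records so that no identity appears in both splits.

    `records` is assumed to be a list of dicts such as
    {"path": ..., "identity": ..., "age": ...}; this schema is an
    assumption for illustration, not taken from the paper.
    """
    by_id = defaultdict(list)
    for rec in records:
        by_id[rec["identity"]].append(rec)

    identities = sorted(by_id)
    random.Random(seed).shuffle(identities)

    n_train = int(len(identities) * train_frac)   # 80% of identities for training
    train_ids = set(identities[:n_train])

    train = [r for i in identities if i in train_ids for r in by_id[i]]
    test = [r for i in identities if i not in train_ids for r in by_id[i]]
    return train, test
```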