Multi-Attribute Transfer via Disentangled Representation

Authors: Jianfu Zhang, Yuanyuan Huang, Yaoyi Li, Weijie Zhao, Liqing Zhang

AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We train and evaluate our model on the face dataset CelebA. Furthermore, evaluation on another facial expression dataset, RaFD, demonstrates the generalizability of our proposed model. The experiments provide qualitative results on the selected datasets, with high-quality synthesized images and disentangled representations, as well as quantitative comparisons between our model and the others.
Researcher Affiliation | Collaboration | Jianfu Zhang¹, Yuanyuan Huang¹, Yaoyi Li¹, Weijie Zhao², Liqing Zhang¹ (¹Shanghai Jiao Tong University, ²Versa)
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper does not contain any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | The CelebFaces Attributes dataset (CelebA) (Liu et al. 2015) contains 202,599 face images of celebrities from around the world. The Radboud Faces Database (RaFD) (Langner et al. 2010) contains 4,824 images collected from 67 different persons.
Dataset Splits | No | The paper states: 'Following the setting of Choi et al. (2018), we randomly select 2000 images from the whole dataset as the test set and the rest of the images are used for training.' A separate validation set or split for hyperparameter tuning is not explicitly mentioned. (A minimal sketch of the reported split appears after the table.)
Hardware Specification | Yes | The whole training process takes about one and a half days on a single NVIDIA Tesla P100 GPU.
Software Dependencies | No | The paper mentions optimizers and normalization layers (e.g., 'Adam optimizer', 'Spectral Normalization Layers', 'Batch Normalization Layers') but does not name specific software libraries or version numbers (e.g., a PyTorch or TensorFlow version) needed for reproducibility.
Experiment Setup | Yes | We use λcls = λver = 0.1 and λrec = 10 in our experiments. We use the Adam optimizer (Kingma and Ba 2014) with β1 = 0.5 and β2 = 0.999. Random horizontal flips are applied for data augmentation. We set the batch size to 16 and train our model for 200,000 iterations with a learning rate of 0.0001. We assign 16 channels of the feature map to each latent unit and 160 channels to the attribute-irrelevant part. (These settings are collected in the configuration sketch after the table.)
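
The paper reports only a random 2,000-image test hold-out, with no random seed and no validation set. Below is a minimal sketch of that split; the function name split_celeba and the fixed seed are illustrative assumptions, not details from the paper.

```python
import random

def split_celeba(image_paths, test_size=2000, seed=0):
    """Randomly hold out `test_size` images as the test set.

    Sketch of the split described in the paper (following the setting of
    Choi et al. 2018); the fixed seed is an assumption, since the paper
    does not report one.
    """
    rng = random.Random(seed)
    paths = list(image_paths)
    rng.shuffle(paths)
    return paths[test_size:], paths[:test_size]  # (train_set, test_set)
```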
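
To gather the reported hyperparameters in one place, here is a minimal PyTorch-style sketch of the training configuration. The framework choice, the placeholder networks G and D, and the exact placement of Spectral Normalization are assumptions; the paper names no libraries, and this is not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import transforms

# Loss weights as reported: lambda_cls = lambda_ver = 0.1, lambda_rec = 10.
lambda_cls, lambda_ver, lambda_rec = 0.1, 0.1, 10.0

# Feature-map layout as reported: 16 channels per attribute latent unit,
# plus 160 channels for the attribute-irrelevant part.
channels_per_unit, irrelevant_channels = 16, 160

# Placeholder networks (assumption): the paper's actual generator and
# discriminator architectures are not reproduced here.
G = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU())
D = nn.Sequential(
    # The paper mentions Spectral Normalization Layers; applying them to
    # the discriminator convolutions via nn.utils.spectral_norm is an
    # assumption about placement.
    nn.utils.spectral_norm(nn.Conv2d(3, 64, 4, 2, 1)),
    nn.LeakyReLU(0.01),
)

# Adam optimizer with beta1 = 0.5, beta2 = 0.999, and learning rate 1e-4,
# as reported.
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.5, 0.999))

# Reported schedule and the only reported augmentation.
batch_size, num_iterations = 16, 200_000
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),  # random horizontal flip
    transforms.ToTensor(),
])
```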