FACT: Fused Attention for Clothing Transfer with Generative Adversarial Networks

Authors: Yicheng Zhang, Lei Li, Li Song, Rong Xie, Wenjun Zhang

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments. In this section, we conduct quantitative and qualitative evaluations on the Deep Fashion dataset to validate the effectiveness of our FACT model. We provide both qualitative and quantitative results on the clothing transfer task and demonstrate its superiority over the state-of-the-art method on the Deep Fashion dataset.
Researcher Affiliation | Collaboration | (1) Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, {ironic, song_li, xierong, zhangwenjun}@sjtu.edu.cn; (2) SenseTime, Shanghai, China, lilei@sensetime.com; (3) MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing the source code, nor a link to a code repository.
Open Datasets | Yes | Dataset. The Deep Fashion dataset is composed of a training set with 70,000 images and a test set with 8,979 images. All the evaluations are conducted on the test set. (A hedged loading sketch follows the table.)
Dataset Splits | No | The paper states: 'The Deep Fashion dataset is composed of a training set with 70,000 images and a test set with 8,979 images.' However, it does not describe a validation split or how the training data is further partitioned for validation.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for the experiments.
Software Dependencies | No | The paper mentions techniques such as spectral normalization, the label smoothing trick, and the Adam optimizer, but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow). (A sketch of the named stabilization tricks follows the table.)
Experiment Setup | Yes | Training Details. The model is trained using Adam (Kingma and Ba 2014) with a learning rate of 0.0002 for both generators and discriminators. The batch size is set to 32 for the first stage and 16 for the second stage. The hyperparameters are λ_bg = λ¹_rec = λ²_rec = 10. We first train D1 and G1 iteratively for 15 epochs while fixing the second-stage GAN, linearly decaying the rate to zero over the last 5 epochs. We then train D2 and G2 for 20 epochs while fixing the first-stage GAN, again linearly decaying the rate to zero over the last 5 epochs. (A training-schedule sketch follows the table.)
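
The paper reports only the split sizes (70,000 train / 8,979 test), not a data layout or loading code. As a rough aid for reproduction, here is a minimal loading sketch, assuming PyTorch; the `train/` and `test/` directory names and the `DeepFashionSplit` class are hypothetical, not the authors' code.

```python
# Minimal sketch of indexing the DeepFashion splits (layout is an assumption;
# the paper does not describe how the images are stored on disk).
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

class DeepFashionSplit(Dataset):
    def __init__(self, root, split="test", transform=None):
        # Hypothetical layout: <root>/train/*.jpg and <root>/test/*.jpg
        self.paths = sorted(Path(root, split).rglob("*.jpg"))
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert("RGB")
        return self.transform(img) if self.transform else img

# All evaluations in the paper run on the test split:
# test_set = DeepFashionSplit("/data/deepfashion", split="test")
# assert len(test_set) == 8_979  # test-set size reported in the paper
```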
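Since the paper names spectral normalization and the label smoothing trick without further detail, the following sketch shows one standard way to apply both in a GAN discriminator, assuming PyTorch; the `PatchDiscriminator` architecture and the 0.9 smoothing target are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class PatchDiscriminator(nn.Module):
    """Toy discriminator; every conv is wrapped in spectral normalization."""
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Conv2d(in_ch, base, 4, stride=2, padding=1)),
            nn.LeakyReLU(0.2, inplace=True),
            spectral_norm(nn.Conv2d(base, base * 2, 4, stride=2, padding=1)),
            nn.LeakyReLU(0.2, inplace=True),
            spectral_norm(nn.Conv2d(base * 2, 1, 4, padding=1)),  # patch logits
        )

    def forward(self, x):
        return self.net(x)

def d_loss(d, real, fake, smooth=0.9):
    """BCE discriminator loss with one-sided label smoothing on real targets.

    The 0.9 target is a common choice, not a value stated in the paper.
    """
    bce = nn.BCEWithLogitsLoss()
    real_logits = d(real)
    fake_logits = d(fake.detach())
    loss_real = bce(real_logits, torch.full_like(real_logits, smooth))
    loss_fake = bce(fake_logits, torch.zeros_like(fake_logits))
    return loss_real + loss_fake
```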
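The quoted training details translate directly into an optimizer-and-scheduler setup. Below is a minimal sketch, again assuming PyTorch; the `G1`/`D1`/`G2`/`D2` names, the Adam betas (0.5, 0.999), and the `train_stage` helper are assumptions, since the paper specifies only the learning rate, batch sizes, epoch counts, and linear decay over each stage's last 5 epochs.

```python
import torch

def linear_decay(total_epochs, decay_epochs):
    """Constant lr, then linear decay toward 0 over the final `decay_epochs`."""
    keep = total_epochs - decay_epochs
    def fn(epoch):
        if epoch < keep:
            return 1.0
        # e.g. total=15, decay=5: factors 1.0, 0.8, 0.6, 0.4, 0.2 over
        # epochs 10..14, reaching zero at the end of training.
        return max(0.0, (total_epochs - epoch) / decay_epochs)
    return fn

def train_stage(G, D, loader, epochs, decay_epochs=5, lr=2e-4):
    # betas=(0.5, 0.999) is a common GAN default, not stated in the paper.
    opt_g = torch.optim.Adam(G.parameters(), lr=lr, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(D.parameters(), lr=lr, betas=(0.5, 0.999))
    sched_g = torch.optim.lr_scheduler.LambdaLR(opt_g, linear_decay(epochs, decay_epochs))
    sched_d = torch.optim.lr_scheduler.LambdaLR(opt_d, linear_decay(epochs, decay_epochs))
    for epoch in range(epochs):
        for batch in loader:
            ...  # alternate D and G updates on this batch (losses omitted)
        sched_g.step()
        sched_d.step()

# Stage 1: train D1/G1 for 15 epochs (batch size 32), second-stage GAN fixed.
# train_stage(G1, D1, loader_bs32, epochs=15)
# Stage 2: train D2/G2 for 20 epochs (batch size 16), first-stage GAN fixed.
# train_stage(G2, D2, loader_bs16, epochs=20)
```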