Fashion Style Generator

Authors: Shuhui Jiang, Yun Fu

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that our method outperforms the state-of-the-arts. Our training dataset contains two parts: A Fashion 144k dataset as full image inputs [Simo-Serra and Ishikawa, 2016] and 300 online shopping images as patch inputs, which are randomly selected from the Online Shopping dataset [Hadi Kiapour et al., 2015]. Our testing data are 100 images randomly collected from online shopping websites.
Researcher Affiliation | Academia | Shuhui Jiang (1) and Yun Fu (1,2); (1) Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115, USA; (2) College of Computer and Information Science, Northeastern University, Boston, MA 02115, USA
Pseudocode | Yes | Algorithm 1: Alternating Patch-Global Back-propagation
Open Source Code | No | The paper states, 'For the comparison methods, we run the code released by the authors,' referring to other works, but it provides no statement or link regarding the release of the authors' own source code.
Open Datasets | Yes | Our training dataset contains two parts: A Fashion 144k dataset as full image inputs [Simo-Serra and Ishikawa, 2016] and 300 online shopping images as patch inputs, which are randomly selected from the Online Shopping dataset [Hadi Kiapour et al., 2015].
Dataset Splits | No | The paper mentions training on Fashion 144k and 300 online shopping images, and testing on 100 images. It does not explicitly define training, validation, and test splits with percentages or exact counts for all parts of the dataset.
Hardware Specification | Yes | Each style training takes around 7 hours on a single GTX Titan X GPU.
Software Dependencies | No | The training is implemented using Torch [Collobert et al., 2011] and cuDNN [Chetlur et al., 2014]. Specific version numbers for Torch and cuDNN are not provided.
Experiment Setup | Yes | For global stage back-propagation, maximum iteration is set to be 40000, and a batch size of 4 is applied. ... The optimization is based on Adam [Kingma and Ba, 2014] with a learning rate of 1 × 10^-3. No weight decay or dropout is used. ... In Ours, we set T = 1 and τ(1) = τ(2) = 3000. ... The initial learning rate η(1) in patch optimization is 0.02. We fix η(1) and tune η(2) of global optimization as e^-5 to e^-9.
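
The Experiment Setup row above describes an alternating patch-global optimization schedule: T alternations, each consisting of τ(1) patch-stage back-propagation steps followed by τ(2) global-stage steps. The snippet below is a minimal sketch of that loop structure only, written in PyTorch purely for illustration; the paper itself uses Torch/Lua with cuDNN, and the generator network and both loss functions here are hypothetical placeholders, not the paper's model or objectives. Only the loop shape and the quoted hyperparameters (T = 1, τ(1) = τ(2) = 3000, batch size 4, Adam, η(1) = 0.02, η(2) tuned per style in the e^-5 to e^-9 range) come from the row above.

```python
import torch
import torch.nn as nn

T = 1                 # number of patch/global alternations (paper: T = 1)
TAU_PATCH = 3000      # patch-stage iterations per alternation, tau^(1)
TAU_GLOBAL = 3000     # global-stage iterations per alternation, tau^(2)
ETA_PATCH = 0.02      # patch-stage learning rate eta^(1), fixed
ETA_GLOBAL = 1e-5     # global-stage learning rate eta^(2); the paper tunes this per style
BATCH_SIZE = 4

# Placeholder generator; the paper's actual network architecture is not reproduced here.
generator = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)

def patch_loss(y):
    # Placeholder for the patch-stage (local texture / style) objective.
    return y.pow(2).mean()

def global_loss(y):
    # Placeholder for the global-stage (structure-preserving) objective.
    return y.abs().mean()

for t in range(T):
    # Patch stage: tau^(1) steps at eta^(1) on patch inputs.
    # Assumption: Adam is used in both stages; the quote only says the
    # optimization is based on Adam without attributing it to a stage.
    opt = torch.optim.Adam(generator.parameters(), lr=ETA_PATCH)
    for _ in range(TAU_PATCH):
        patches = torch.randn(BATCH_SIZE, 3, 64, 64)    # stand-in for shopping-image patches
        opt.zero_grad()
        patch_loss(generator(patches)).backward()
        opt.step()

    # Global stage: tau^(2) steps at eta^(2) on full-image inputs.
    opt = torch.optim.Adam(generator.parameters(), lr=ETA_GLOBAL)
    for _ in range(TAU_GLOBAL):
        images = torch.randn(BATCH_SIZE, 3, 256, 256)   # stand-in for Fashion 144k full images
        opt.zero_grad()
        global_loss(generator(images)).backward()
        opt.step()
```

Re-creating the optimizer at each stage switch is just one way to apply the two different learning rates; the paper does not specify how optimizer state is handled across the patch and global stages.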