Compatibility Family Learning for Item Recommendation and Generation

Authors: Yong-Siang Shih, Kai-Yueh Chang, Hsuan-Tien Lin, Min Sun

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our system on a toy dataset, two Amazon product datasets, and Polyvore outfit dataset. Our method consistently achieves state-of-the-art performance. Finally, we show that we can visualize the candidate compatible prototypes using a Metric-regularized Conditional Generative Adversarial Network (MrCGAN), where the input is a projected prototype and the output is a generated image of a compatible item. ... We evaluate our framework on Fashion-MNIST dataset, two Amazon product datasets, and Polyvore outfit dataset. Our method consistently achieves state-of-the-art performance for compatible item recommendation." (A hedged sketch of the described generator interface appears after the table.)
Researcher Affiliation | Collaboration | Yong-Siang Shih (1), Kai-Yueh Chang (1), Hsuan-Tien Lin (1, 2), Min Sun (3). Affiliations: (1) Appier Inc., Taipei, Taiwan; (2) National Taiwan University, Taipei, Taiwan; (3) National Tsing Hua University, Hsinchu, Taiwan. Emails: {yongsiang.shih,kychang}@appier.com, htlin@csie.ntu.edu.tw, sunmin@ee.nthu.edu.tw
Pseudocode | No | The paper describes its algorithms and models using mathematical formulas and text, but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to the source code for the described methodology (e.g., a specific repository link or an explicit statement about code release).
Open Datasets | Yes | "We evaluate our system on a toy dataset, two Amazon product datasets, and Polyvore outfit dataset." ... Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017) ... Amazon dataset (McAuley et al. 2015) ... Polyvore.com
Dataset Splits | Yes | "Among the 60,000 training samples, 16,500 and 1,500 pairs are non-overlapped and randomly selected to form the training and validation sets, respectively. ... we increase the validation set via randomly selecting an additional 9,996 pairs from the original training set since its original size is too small, and accordingly decrease the training set by removing the related pairs for the non-overlapping requirement. ... Items of source and target categories are non-overlapped split according to the ratios 60 : 20 : 20 for training, validation, and test sets." (A sketch of such a non-overlapping split appears after the table.)
Hardware Specification | No | The paper does not state the hardware used for its experiments (e.g., GPU or CPU models, or memory amounts). It mentions using deep learning models but gives no explicit hardware details.
Software Dependencies | No | The paper mentions various software components and architectures, such as the Adam optimizer, Siamese CNNs, DCGAN, SRResNet, Inception-V1/V3, and DRAGAN, but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | "We set λ_m to 0 and 0.5 respectively for recommendation and generation experiments and the batch size to 100, and use Adam optimizer with (λ_lr, β_1, β_2) = (0.001, 0.9, 0.999). ... Each model is trained for 50 epochs. ... We train 200 epochs for each model. ... The last layer is trained for 5 epochs in each model. ... For the generation experiments ... (λ_lr, β_1, β_2) = (0.0002, 0.5, 0.999). ... we set both λ_gp and λ_dra to 0.5 ... The dimension of z and the number of K are respectively set to 20 and 2 in all generation experiments. Besides, the rest of the parameters are taken as follows: (1) MNIST+1+2: (N, m_enc, m_prj) = (20, 0.1, 0.5), (2) Amazon co-purchase: (N, m_enc, m_prj) = (64, 0.05, 0.2), (3) Polyvore: (N, m_enc, m_prj) = (20, 0.05, 0.3)." (The optimizer settings are restated as a code sketch after the table.)
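
To make the MrCGAN input/output contract quoted in the Research Type row concrete: the generator conditions on a projected prototype and a noise vector z (dimension 20, per the reported settings) and outputs an image. The following is a minimal sketch only; the class name, prototype dimension, layer sizes, and output resolution are assumptions for illustration, as the paper does not release its architecture.

import torch
import torch.nn as nn

class GeneratorSketch(nn.Module):
    """Hypothetical conditional generator: (z, prototype) -> image."""

    def __init__(self, z_dim=20, proto_dim=64, img_pixels=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + proto_dim, 256),
            nn.ReLU(),
            nn.Linear(256, img_pixels),
            nn.Tanh(),  # scale outputs to [-1, 1], a common GAN convention
        )

    def forward(self, z, prototype):
        # Concatenate noise and the projected prototype as conditioning input.
        return self.net(torch.cat([z, prototype], dim=1))

g = GeneratorSketch()
fake = g(torch.randn(4, 20), torch.randn(4, 64))  # batch of 4 generated images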
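
For the 60:20:20 split quoted in the Dataset Splits row, a minimal sketch of a non-overlapping random item split might look like the following. The function name and seed are hypothetical, not from the paper; the only property the sketch demonstrates is that no item lands in more than one set.

import random

def split_items(items, seed=0):
    """Shuffle items and split them 60/20/20 into train/val/test,
    so that the three sets are disjoint."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(0.6 * n)
    n_val = int(0.2 * n)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

train, val, test = split_items(range(100))
assert not (set(train) & set(val)) and not (set(val) & set(test))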
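
The quoted Experiment Setup settings translate directly into optimizer configuration. A minimal sketch, assuming PyTorch (the paper does not specify its framework) and using a stand-in model in place of the unreleased architecture:

import torch

model = torch.nn.Linear(128, 64)  # placeholder for the paper's network

# Recommendation experiments: (lr, beta1, beta2) = (0.001, 0.9, 0.999)
rec_opt = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))

# Generation experiments: (lr, beta1, beta2) = (0.0002, 0.5, 0.999)
gen_opt = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.5, 0.999))

BATCH_SIZE = 100  # reported batch size
LAMBDA_M = {"recommendation": 0.0, "generation": 0.5}  # reported λ_m values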