Explainable Fashion Recommendation: A Semantic Attribute Region Guided Approach

Authors: Min Hou, Le Wu, Enhong Chen, Zhi Li, Vincent W. Zheng, Qi Liu

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we conduct experiments on a real-world dataset to verify the feasibility of our proposed framework. We first introduce the experimental setup, followed by the experiment results.
Researcher Affiliation | Collaboration | Min Hou (1,2), Le Wu (3), Enhong Chen (1,2), Zhi Li (1,2), Vincent W. Zheng (4) and Qi Liu (1,2); 1) Anhui Province Key Lab. of Big Data Analysis and Application, University of S&T of China; 2) School of Data Science, University of S&T of China; 3) Hefei University of Technology; 4) WeBank
Pseudocode | No | No explicit pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the described methodology.
Open Datasets | Yes | We evaluate our methods on a real-world e-commerce dataset, i.e., Amazon Fashion. This dataset was introduced in [He and McAuley, 2016b; Kang et al., 2017] and consists of reviews of clothing items crawled from Amazon.com. ... We combine the UT-Zap50K shoes dataset (http://vision.cs.utexas.edu/projects/finegrained/utzap50k/) and the Tianchi Apparel dataset (https://tianchi.aliyun.com/competition/entrance/231671/information), which contain 50,025 shoe and over 180,000 apparel image-level attribute annotations respectively.
Dataset Splits | Yes | For each user, we randomly select one record for validation and another one for test. (A per-user split sketch follows the table.)
Hardware Specification | Yes | All experiments are trained with NVIDIA K80 graphics card and implemented by Tensorflow.
Software Dependencies | No | The paper mentions "Tensorflow" but does not provide specific version numbers for software dependencies.
Experiment Setup | Yes | For all the models (except Random and Pop Rank), we tune hyper-parameters via grid search on the validation set, with a regularizer selected from [0, 0.0001, 0.001, 0.01, 0.1, 1], learning rate selected from [0.0001, 0.001, 0.01], and the latent feature dimension of [10, 30, 50, 100]. We use a mini-batch size of 256 to train all the models until they converge. (A grid-search sketch follows the table.)
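
The per-user leave-one-out protocol quoted under Dataset Splits can be reproduced in a few lines of pandas. This is a minimal sketch, not the authors' released code: the `interactions` DataFrame, the `user_id` column name, and the handling of users with fewer than three records are all assumptions on our part.

```python
import numpy as np
import pandas as pd

def leave_one_out_split(interactions: pd.DataFrame,
                        user_col: str = "user_id",
                        seed: int = 42):
    """For each user, hold out one random record for validation and one for
    test; everything else goes to training. Users with fewer than three
    records stay entirely in training (a choice the paper does not specify)."""
    rng = np.random.default_rng(seed)
    train_parts, val_parts, test_parts = [], [], []
    for _, group in interactions.groupby(user_col):
        idx = rng.permutation(group.index.to_numpy())
        if len(idx) < 3:
            train_parts.append(group)
            continue
        val_parts.append(group.loc[[idx[0]]])    # one record for validation
        test_parts.append(group.loc[[idx[1]]])   # another one for test
        train_parts.append(group.loc[idx[2:]])   # the rest for training
    return pd.concat(train_parts), pd.concat(val_parts), pd.concat(test_parts)
```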
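
The hyper-parameter search quoted under Experiment Setup is a plain grid search over three value lists with a fixed mini-batch size of 256. Below is a minimal sketch under the assumption that `build_and_train` and `evaluate_on_validation` wrap the model-specific training and validation routines, which the paper does not release; both names are hypothetical placeholders.

```python
import itertools

# Grid values as reported in the paper.
REGULARIZERS = [0, 0.0001, 0.001, 0.01, 0.1, 1]
LEARNING_RATES = [0.0001, 0.001, 0.01]
LATENT_DIMS = [10, 30, 50, 100]
BATCH_SIZE = 256

def grid_search(build_and_train, evaluate_on_validation):
    """Try every (regularizer, learning rate, latent dimension) combination
    and keep the configuration with the best validation score."""
    best_score, best_config = float("-inf"), None
    for reg, lr, dim in itertools.product(REGULARIZERS, LEARNING_RATES, LATENT_DIMS):
        model = build_and_train(reg=reg, lr=lr, latent_dim=dim, batch_size=BATCH_SIZE)
        score = evaluate_on_validation(model)
        if score > best_score:
            best_score = score
            best_config = {"reg": reg, "lr": lr, "latent_dim": dim}
    return best_config, best_score
```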