Learning to Match on Graph for Fashion Compatibility Modeling
Authors: Xun Yang, Xiaoyu Du, Meng Wang
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct extensive experiments to justify the effectiveness of our proposed DREP on compatibility learning, answering three research questions: 1) RQ1: Can DREP achieve competitive performance by exploiting the extra connectivities? 2) RQ2: Can DREP effectively model the compatibility relationship? 3) RQ3: How do hyper-parameters affect the performance of DREP? |
| Researcher Affiliation | Academia | (1) School of Computing, National University of Singapore; (2) School of Information and Software Engineering, University of Electronic Science and Technology; (3) Department of Computer Science, Hefei University of Technology |
| Pseudocode | No | No pseudocode or algorithm blocks are present in the paper. |
| Open Source Code | No | The paper does not provide an explicit statement about the open-source availability of the code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | In this work, we employ the widely-used Amazon (Men and Women) (Veit et al. 2015) dataset to justify the effectiveness of DREP on modeling compatibility relationship. |
| Dataset Splits | Yes | We randomly sample 80% items for training, 10% items for validation, and 10% items for testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper states: 'We implement DREP using Tensorflow.' but does not provide specific version numbers for TensorFlow or any other software dependencies. |
| Experiment Setup | Yes | The number of embedding propagation layers in DREP is set to 2 by the performance on validation set. The embedding size of each layer is set to 64 for simplicity, resulting in 128-D item embeddings (as shown in Eq. (6)) from the output of DREP. We optimize all models with the Adagrad optimizer. The learning rate and regularization term are both fixed at 0.01 and 1e-5 by grid searching on validation set. |
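The dataset-split and experiment-setup rows above can be condensed into a small sketch. The paper's training code is not public, so the configuration dictionary and the `split_items` helper below are illustrative assumptions that only mirror the reported protocol (80%/10%/10% random item split; 2 propagation layers of size 64 yielding 128-D embeddings; Adagrad with learning rate 0.01 and regularization 1e-5). The random seed and exact assignment of items are not specified in the paper.

```python
import numpy as np

# Hyper-parameters as reported in the paper; names in this dict are our own,
# not identifiers from the (unreleased) DREP implementation.
CONFIG = {
    "propagation_layers": 2,    # embedding propagation layers, chosen on validation set
    "embedding_size": 64,       # per-layer size; 2 layers concatenated -> 128-D items
    "optimizer": "Adagrad",
    "learning_rate": 0.01,      # fixed by grid search on validation set
    "l2_regularization": 1e-5,  # fixed by grid search on validation set
}

def split_items(num_items, seed=0):
    """Randomly split item indices 80%/10%/10% into train/val/test.

    Mirrors the paper's protocol; the seed and split mechanics are assumptions.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_items)
    n_train = int(0.8 * num_items)
    n_val = int(0.1 * num_items)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_ids, val_ids, test_ids = split_items(1000)
```

Because the split is a permutation of the index range, the three subsets are disjoint by construction and together cover every item exactly once.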