Inferring Substitutable Products with Deep Network Embedding
Authors: Shijie Zhang, Hongzhi Yin, Qinyong Wang, Tong Chen, Hongxu Chen, Quoc Viet Hung Nguyen
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments on seven real-world datasets are conducted, and the results verify that our model outperforms state-of-the-art baselines." |
| Researcher Affiliation | Academia | ¹School of Information Technology and Electrical Engineering, The University of Queensland, Australia; ²School of Information and Communication Technology, Griffith University, Australia |
| Pseudocode | No | The paper describes the model mathematically and textually but does not contain a structured pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide any explicit statement about making its source code open, nor does it provide a link to a code repository for the described methodology. |
| Open Datasets | Yes | "We use seven publicly available Amazon product review datasets collected by [McAuley et al., 2015]" |
| Dataset Splits | No | The paper states "We randomly choose 85% of the labelled substitutable product pairs as the training set, and use the remaining pairs as the test set." It does not explicitly mention a validation set. A minimal split sketch is given below the table. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library or solver names). |
| Experiment Setup | Yes | "The hyper-parameters of α, β and γ are also listed in Table 2. For θ and µ in L_ss, the optimal values are θ = 0.2 and µ = 0.6, following [Li et al., 2003]. To train SPEM, we set τ, batch size and learning rate as 5, 16 and 0.01 following [Wang et al., 2016]." A config sketch collecting these values follows the table. |
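
To make the quoted split concrete, here is a minimal sketch of the 85%/15% random split of labelled substitutable product pairs. The function name, the placeholder pairs, and the fixed seed are our assumptions; the paper reports neither a seed nor a validation split.

```python
import random

def split_pairs(pairs, train_frac=0.85, seed=42):
    """Randomly split labelled substitutable product pairs into a
    training set (85%) and a test set (15%), as quoted above.
    The seed is an assumption; the paper does not report one."""
    pairs = list(pairs)
    rng = random.Random(seed)
    rng.shuffle(pairs)
    cut = int(len(pairs) * train_frac)
    return pairs[:cut], pairs[cut:]

# Placeholder product-ID pairs standing in for the Amazon review data.
labelled_pairs = [(f"item_{i}", f"item_{i + 1}") for i in range(100)]
train_pairs, test_pairs = split_pairs(labelled_pairs)
print(len(train_pairs), len(test_pairs))  # 85 15
```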
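
Similarly, the hyper-parameter values quoted in the Experiment Setup row can be collected in one place. This is a sketch, not the authors' code: the variable names are ours, and α, β and γ vary per dataset (the paper's Table 2, which this report does not reproduce), so they are left as placeholders.

```python
# Hyper-parameters quoted from the paper; the dict keys are our own.
spem_config = {
    "theta": 0.2,           # θ in L_ss, following [Li et al., 2003]
    "mu": 0.6,              # µ in L_ss, following [Li et al., 2003]
    "tau": 5,               # following [Wang et al., 2016]
    "batch_size": 16,       # following [Wang et al., 2016]
    "learning_rate": 0.01,  # following [Wang et al., 2016]
    # "alpha", "beta", "gamma": per-dataset values listed in the
    # paper's Table 2; not reproduced here.
}
```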