Does Every Data Instance Matter? Enhancing Sequential Recommendation by Eliminating Unreliable Data
Authors: Yatong Sun, Bin Wang, Zhu Sun, Xiaochun Yang
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on four real-world datasets demonstrate the superiority of our proposed BERD. Additionally, detailed ablation study further confirms the effectiveness of each module of BERD. |
| Researcher Affiliation | Academia | Northeastern University, China; Macquarie University, Australia |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link for open-source code. |
| Open Datasets | Yes | Datasets. We adopt four datasets varying w.r.t. domain, size, sparsity level, and ratio of unreliable data, as shown in Table 2. ML-1M [Harper and Konstan, 2015] is a popular dataset for movie recommendation. Steam [Kang and McAuley, 2018] is a game recommendation benchmark collected from Steam. CD and Elect are product review datasets crawled from Amazon [McAuley and Leskovec, 2013] for CDs and electronics, respectively. |
| Dataset Splits | Yes | To be more specific, for each user, we split the last two interactions (instances) into validation and test sets, respectively, while the rest are used for training. |
| Hardware Specification | No | The paper does not provide any specific hardware details used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like 'Adam optimizer' but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | Parameter Settings. We adopt Xavier [Glorot and Bengio, 2010] initializer and Adam [Kingma and Ba, 2015] optimizer with d = 50; the learning rate η = 0.01 with batch size of 8192; the weight of sampled loss λ = 0.01 and uncertainty margin γ = 1; the number of propagation layers K = 2; the input length L = 5; the sample size Z = 4; the filter ratio α = 0.05 for Steam and CD, and α = 0.1 for ML-1M and Elect; the head number of self-attention is set to 2. |
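
For readers attempting reproduction, the per-user leave-one-out protocol quoted in the Dataset Splits row can be expressed as a minimal sketch. The function name and the data layout (a mapping from user to a chronologically ordered item list) are assumptions for illustration, not taken from the paper:

```python
# Per-user leave-one-out split as described in the paper:
# last interaction -> test, second-to-last -> validation, rest -> training.
# Data layout (user -> chronologically ordered item list) is an assumption.
def leave_one_out_split(user_sequences):
    train, valid, test = {}, {}, {}
    for user, items in user_sequences.items():
        if len(items) < 3:
            # Too few interactions to split; keep everything for training.
            train[user] = items
            continue
        train[user] = items[:-2]   # all but the last two interactions
        valid[user] = items[-2]    # second-to-last interaction
        test[user] = items[-1]     # last interaction
    return train, valid, test


# Example: train -> [10, 42, 7], valid -> 3, test -> 99
splits = leave_one_out_split({"u1": [10, 42, 7, 3, 99]})
```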
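Similarly, the hyperparameters quoted in the Experiment Setup row can be gathered into a single configuration sketch. The dictionary keys and the per-dataset handling of the filter ratio are illustrative assumptions; the values are those reported in the paper:

```python
# Hyperparameters reported for BERD, collected for reference.
# Keys and structure are assumptions; values come from the quoted paper text.
BERD_CONFIG = {
    "initializer": "xavier",
    "optimizer": "adam",
    "embedding_dim": 50,          # d
    "learning_rate": 0.01,        # eta
    "batch_size": 8192,
    "sampled_loss_weight": 0.01,  # lambda
    "uncertainty_margin": 1.0,    # gamma
    "propagation_layers": 2,      # K
    "input_length": 5,            # L
    "sample_size": 4,             # Z
    "attention_heads": 2,
    "filter_ratio": {             # alpha, set per dataset
        "Steam": 0.05,
        "CD": 0.05,
        "ML-1M": 0.1,
        "Elect": 0.1,
    },
}
```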