Evidential Stochastic Differential Equations for Time-Aware Sequential Recommendation
Authors: Krishna Neupane, Ervine Zheng, Qi Yu
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on real-world data demonstrate the effectiveness of the proposed method compared to the SOTA methods. To assess the feasibility of our method in comparison with existing state-of-the-art methods, we perform a wide range of experiments on publicly available real-world datasets. We also conduct ablation and case studies to analyze the effectiveness and interpretability of our method. |
| Researcher Affiliation | Academia | Krishna Prasad Neupane, Ervine Zheng, Qi Yu Rochester Institute of Technology {kpn3569,mxz5733,qi.yu}@rit.edu |
| Pseudocode | Yes | Algorithm 1 E-NSDE Training |
| Open Source Code | Yes | For the source code, please click this link: https://github.com/ritmininglab/ENSDE |
| Open Datasets | Yes | Movielens-100K: This dataset contains 100,000 explicit ratings on a scale of (1-5) from 943 users on 1,682 movies. Each user at least rated 20 movies from September 19, 1997 through April 22, 1998. Movielens-1M: This dataset includes 1M explicit feedback (i.e. ratings) made by 6,040 anonymous users on 3,900 distinct movies from 04/2000 to 02/2003. |
| Dataset Splits | No | The paper states, 'We first split users by 70% into train and 30% in test,' but does not specify a separate validation split or its percentage. |
| Hardware Specification | No | The paper states 'We provide the computing details in the Appendix' (NeurIPS Checklist point 8), but the Appendix does not contain specific hardware details such as GPU models, CPU types, or memory specifications used for experiments. |
| Software Dependencies | No | The paper mentions using the 'Adam optimizer' and 'SDE adjoint method [25]' but does not provide specific version numbers for these or other software dependencies like programming languages or libraries (e.g., Python, PyTorch version). |
| Experiment Setup | Yes | One of the key hyperparameters of the E-NSDE model is the regularizer constant (λ) for the evidential learning. We cross-validated this parameter with empirical results of the model for the different λ values in two datasets as shown in Table 7. From the table, our model achieves the best performance in both datasets with λ = 0.001. ... We perform a grid search for the embedding dimension (d) of the user and item representation in E-NSDE model as shown in Figure 4a. From the plot, it shows that E-NSDE has the best performance with d = 64. ... We leverage grid search on uncertainty-aware ranking factor η, and WBPR loss balancing factor ζ on three datasets as shown in Figure 4b and Figure 4c, respectively. The figure shows a clear advantage with η = 0.01 ... Similarly, for ζ balancing factor integrated overall loss has the best performance when it is equal to 0.001. |
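The paper's only stated split protocol is "split users by 70% into train and 30% in test" (no validation split is reported). A minimal sketch of such a user-level split, assuming a random shuffle with a fixed seed (the paper does not specify the shuffling or seeding procedure):

```python
import random

def split_users(user_ids, train_frac=0.7, seed=0):
    """Split users (not individual interactions) into train/test pools,
    following the paper's stated 70%/30% user split."""
    rng = random.Random(seed)
    users = list(user_ids)
    rng.shuffle(users)
    cut = int(len(users) * train_frac)
    return set(users[:cut]), set(users[cut:])

# MovieLens-100K has 943 users, so this yields 660 train / 283 test users.
train_users, test_users = split_users(range(943))
```

Note that splitting by user (rather than by interaction) means test users are entirely unseen during training, which matches the quoted protocol but is stricter than a per-user temporal holdout.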
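The reported hyperparameter selection (λ = 0.001, d = 64, η = 0.01, ζ = 0.001, found via cross-validation and grid search) can be reproduced with an exhaustive grid search. The sketch below is illustrative only: the grid values and the `evaluate` callback are assumptions, standing in for training and validating E-NSDE on one configuration.

```python
from itertools import product

# Hypothetical grid mirroring the hyperparameters the paper searches over:
# evidential regularizer (lam), embedding dim (d), uncertainty-aware ranking
# factor (eta), and WBPR loss-balancing factor (zeta).
grid = {
    "lam": [0.0001, 0.001, 0.01],
    "d": [16, 32, 64, 128],
    "eta": [0.001, 0.01, 0.1],
    "zeta": [0.0001, 0.001, 0.01],
}

def grid_search(evaluate, grid):
    """Exhaustively evaluate every configuration and return the best one.

    `evaluate(cfg) -> float` is a placeholder for training E-NSDE with
    config `cfg` and returning a validation metric (higher is better).
    """
    best_cfg, best_score = None, float("-inf")
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

With the grid above, the search visits 3 × 4 × 3 × 3 = 108 configurations; for an expensive model this would typically be run per-dataset, as Table 7 and Figure 4 of the paper do for each hyperparameter.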