Denoised Self-Augmented Learning for Social Recommendation
Authors: Tianle Wang, Lianghao Xia, Chao Huang
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results on various recommendation benchmarks confirm the superiority of our DSL over state-of-the-art methods. We conduct extensive experiments to evaluate the effectiveness of our DSL by answering the following research questions: RQ1: Does DSL outperform state-of-the-art recommender systems? RQ2: How do different components affect the performance of DSL? RQ3: Is DSL robust enough to handle noisy and sparse data in social recommendation? RQ4: How efficient is DSL compared to alternative methods? Dataset. We conduct experiments on three benchmark datasets collected from the Ciao, Epinions, and Yelp online platforms, where social connections can be established among users in addition to their observed implicit feedback (e.g., rating, click) over different items. Table 1 lists the detailed statistical information of the experimented datasets. Metrics. We use Hit Ratio (HR)@N and Normalized Discounted Cumulative Gain (NDCG)@N as evaluation metrics, where N is set to 10 by default. |
| Researcher Affiliation | Academia | University of Hong Kong, Hong Kong |
| Pseudocode | No | The paper provides mathematical equations and describes procedures in text, but it does not contain any explicitly labeled "Pseudocode" or "Algorithm" blocks. |
| Open Source Code | Yes | We release our model implementation at: https://github.com/HKUDS/DSL. |
| Open Datasets | Yes | We conduct experiments on three benchmark datasets collected from the Ciao, Epinions, and Yelp online platforms, where social connections can be established among users in addition to their observed implicit feedback (e.g., rating, click) over different items. Table 1 lists the detailed statistical information of the experimented datasets. |
| Dataset Splits | Yes | We adopt a leave-one-out strategy, following similar settings as in [Long et al., 2021]. |
| Hardware Specification | Yes | We measure the computational costs (running time) of different methods on an NVIDIA GeForce RTX 3090 and present the training time for each model in Table 5. |
| Software Dependencies | No | We implement our DSL using PyTorch and optimize parameter inference with Adam. (Mentions PyTorch but no version number.) |
| Experiment Setup | Yes | During training, we use a learning rate range of [5e-4, 1e-3, 5e-3] and a decay ratio of 0.96 per epoch. The batch size is selected from [1024, 2048, 4096, 8192] and the hidden dimensionality is tuned from [64, 128, 256, 512]. We search for the optimal number of information propagation layers in our graph neural architecture from [1, 2, 3, 4]. The regularization weights λ1 and λ2 are selected from [1e-3, 1e-2, 1e-1, 1e0, 1e1] and [1e-6, 1e-5, 1e-4, 1e-3], respectively. The weight for weight-decay regularization λ3 is tuned from [1e-7, 1e-6, 1e-5, 1e-4]. |
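The evaluation metrics quoted above, HR@N and NDCG@N with N = 10, can be sketched as follows. This is a minimal illustration for the leave-one-out setting (one held-out item per user); the ranking list and item IDs are invented for the example, not taken from the paper.

```python
import math

def hit_ratio_at_n(ranked_items, target_item, n=10):
    # HR@N: 1 if the held-out item appears in the top-N ranked list, else 0.
    return 1.0 if target_item in ranked_items[:n] else 0.0

def ndcg_at_n(ranked_items, target_item, n=10):
    # NDCG@N with a single relevant item: DCG = 1/log2(rank + 2)
    # (0-indexed rank), and the ideal DCG is 1, so no normalization term.
    for rank, item in enumerate(ranked_items[:n]):
        if item == target_item:
            return 1.0 / math.log2(rank + 2)
    return 0.0

# Example: the held-out item (13) is ranked 3rd in the top-10 list.
ranking = [7, 42, 13, 5, 9, 1, 88, 3, 21, 60]
print(hit_ratio_at_n(ranking, 13))  # → 1.0
print(ndcg_at_n(ranking, 13))       # → 1/log2(4) = 0.5
```

Averaging these per-user scores over all test users yields the reported HR@10 and NDCG@10.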
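The leave-one-out strategy cited under Dataset Splits is commonly implemented by holding out each user's last interaction for testing and training on the rest. The paper does not spell out its exact procedure, so this is an assumed sketch over a simple per-user interaction dict with chronologically ordered items.

```python
def leave_one_out_split(user_interactions):
    """Hold out each user's last interaction for testing; the rest
    form the training set. Assumes interactions are listed in
    chronological order per user (an assumption, not from the paper)."""
    train, test = {}, {}
    for user, items in user_interactions.items():
        if len(items) < 2:
            train[user] = items  # too few interactions to hold one out
            continue
        train[user] = items[:-1]
        test[user] = items[-1]
    return train, test

# Toy data: user -> list of interacted item IDs.
interactions = {"u1": [3, 8, 2, 5], "u2": [7, 1]}
train, test = leave_one_out_split(interactions)
print(train)  # → {'u1': [3, 8, 2], 'u2': [7]}
print(test)   # → {'u1': 5, 'u2': 1}
```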
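The search ranges listed under Experiment Setup can be collected into a single grid. The value lists below follow the quoted setup; the key names and the full-grid enumeration are illustrative (the paper does not state whether it used exhaustive grid search).

```python
from itertools import product

# Hyperparameter ranges as reported in the setup (key names are ours).
search_space = {
    "learning_rate": [5e-4, 1e-3, 5e-3],
    "batch_size": [1024, 2048, 4096, 8192],
    "hidden_dim": [64, 128, 256, 512],
    "num_gnn_layers": [1, 2, 3, 4],
    "lambda1": [1e-3, 1e-2, 1e-1, 1e0, 1e1],   # SSL regularization weight
    "lambda2": [1e-6, 1e-5, 1e-4, 1e-3],       # second regularization weight
    "lambda3": [1e-7, 1e-6, 1e-5, 1e-4],       # weight-decay strength
}

# Enumerate every configuration: 3 * 4 * 4 * 4 * 5 * 4 * 4 = 15360.
keys = list(search_space)
configs = [dict(zip(keys, vals)) for vals in product(*search_space.values())]
print(len(configs))  # → 15360
```

The size of this grid (15,360 configurations) is one reason reported tuning ranges matter for reproducibility: without the selected values per dataset, re-running the full search is costly.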