Multiple Robust Learning for Recommendation
Authors: Haoxuan Li, Quanyu Dai, Yuru Li, Yan Lyu, Zhenhua Dong, Xiao-Hua Zhou, Peng Wu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on real-world and semi-synthetic datasets, which demonstrate the superiority of the proposed approach over the state-of-the-art methods. |
| Researcher Affiliation | Collaboration | ¹Peking University, ²Huawei Noah's Ark Lab, ³Beijing Technology and Business University |
| Pseudocode | Yes | Algorithm 1: Alternating Multiple Robust Learning with Stabilization |
| Open Source Code | Yes | The proposed MR2 and most existing debiasing methods are model-agnostic, which can be integrated into existing recommendation models for unbiased learning based on biased data. https://gitee.com/mindspore/models/tree/master/research/recommend/multi_robust |
| Open Datasets | Yes | We conduct experiments on both real-world datasets and semi-synthetic datasets to evaluate the effectiveness of our proposed method. Datasets. We consider two benchmark real-world datasets containing MNAR and MAR ratings, i.e., Coat (Schnabel et al. 2016) and Yahoo (Marlin and Zemel 2009), as existing work (Schnabel et al. 2016; Wang et al. 2019). ... We conduct experiments on semi-synthetic datasets constructed from MovieLens 100K (ML-100K). |
| Dataset Splits | Yes | All experiments are implemented on PyTorch (Paszke et al. 2019) with Adam optimizer (Kingma and Ba 2015), and grid search is used to choose the optimal set of hyper-parameters based on a validation set split from the training set. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for experiments. It mentions support from 'CANN (Compute Architecture for Neural Networks) and Ascend AI Processor' but this is a general statement about the supporting technology, not the specific hardware used for these experiments. |
| Software Dependencies | No | All experiments are implemented on PyTorch (Paszke et al. 2019) with Adam optimizer (Kingma and Ba 2015)... No specific version numbers for PyTorch or Adam are provided, only the citation years. |
| Experiment Setup | Yes | All experiments are implemented on PyTorch (Paszke et al. 2019) with Adam optimizer (Kingma and Ba 2015), and grid search is used to choose the optimal set of hyper-parameters based on a validation set split from the training set. ... where I is an identity matrix, and λ is a hyper-parameter for stabilization. |
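
The Dataset Splits and Experiment Setup rows report only that training used PyTorch with the Adam optimizer and that hyper-parameters were chosen by grid search on a validation set split from the training set. The sketch below illustrates that loop under stated assumptions: the model, loss, data, and search grid are placeholders, not the authors' configuration (the quoted text notes the proposed method is model-agnostic, so the backbone is interchangeable).

```python
# Minimal sketch (not the authors' code): PyTorch + Adam, with grid search
# over a validation split carved out of the training set.
import itertools
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in data (assumption): feature vectors with binary ratings.
X = torch.randn(1000, 16)
y = (torch.rand(1000) > 0.5).float()

# Validation split taken from the training set, as the paper describes.
n_val = 200
X_train, y_train = X[:-n_val], y[:-n_val]
X_val, y_val = X[-n_val:], y[-n_val:]

def train_and_eval(lr, weight_decay, epochs=50):
    """Train one candidate configuration; return its validation loss."""
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X_train).squeeze(-1), y_train)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return loss_fn(model(X_val).squeeze(-1), y_val).item()

# Hypothetical grid; the paper excerpt does not list the actual search ranges.
grid = {"lr": [1e-3, 1e-2], "weight_decay": [0.0, 1e-5]}
best = min(
    (dict(zip(grid, vals)) for vals in itertools.product(*grid.values())),
    key=lambda hp: train_and_eval(**hp),
)
print("selected hyper-parameters:", best)
```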
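The Experiment Setup row also quotes a stabilization term, "where I is an identity matrix, and λ is a hyper-parameter for stabilization." The excerpt does not show where this term enters Algorithm 1, but the phrasing matches a standard Tikhonov-style stabilization of a matrix inverse; the least-squares context below is an assumption for illustration only.

```python
# Hedged illustration of adding lambda * I before a matrix solve.
# The surrounding least-squares problem is an assumed context, not MR2 itself.
import torch

torch.manual_seed(0)
A = torch.randn(100, 8)
b = torch.randn(100)

lam = 1e-2                       # the stabilization hyper-parameter lambda
gram = A.T @ A                   # may be ill-conditioned or singular
stabilized = gram + lam * torch.eye(gram.shape[0])   # gram + lambda * I
w = torch.linalg.solve(stabilized, A.T @ b)          # now safely invertible
print(w)
```

Adding λ·I bounds the smallest eigenvalue of the matrix away from zero, which is the usual reason such a term is described as stabilizing an alternating optimization.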