Addressing Hidden Confounding with Heterogeneous Observational Datasets for Recommendation

Authors: Yanghao Xiao, Haoxuan Li, Yongqiang Tang, Wensheng Zhang

NeurIPS 2024

Reproducibility variables, each with the assessed result and the supporting LLM response:
Research Type: Experimental
Evidence: "Extensive experiments on three public datasets validate our method achieves state-of-the-art performance in the presence of hidden confounding, regardless of RCT data availability." (Section 4: Experiments)
Researcher Affiliation: Academia
Evidence: Yanghao Xiao (1,3), Haoxuan Li (2), Yongqiang Tang (3), Wensheng Zhang (4). Affiliations: (1) University of Chinese Academy of Sciences; (2) Peking University; (3) Institute of Automation, Chinese Academy of Sciences; (4) Guangzhou University.
Pseudocode: Yes
Evidence: "Algorithm 1: The Proposed Meta Debias Learning Algorithm"
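For context, a minimal sketch of the generic bilevel meta-learning debiasing pattern that the algorithm's name suggests (a virtual inner update on biased data, an outer update driven by unbiased data, in the style of learning-to-reweight). All names, sizes, and update rules here are assumptions; the paper's actual Algorithm 1 may differ.

```python
# Hedged sketch of a bilevel meta-debias loop; NOT the paper's Algorithm 1.
import torch

torch.manual_seed(0)

# Toy data: a biased training set and a small unbiased (RCT-like) set.
X_bias, y_bias = torch.randn(256, 8), torch.randn(256, 1)
X_unbias, y_unbias = torch.randn(32, 8), torch.randn(32, 1)

W = torch.zeros(8, 1, requires_grad=True)       # base-model parameters
opt = torch.optim.Adam([W], lr=1e-3)            # Adam, as in the paper
mse = torch.nn.functional.mse_loss

for step in range(100):
    # Inner step: per-example weights on the biased data (initialized to zero).
    eps = torch.zeros(X_bias.size(0), requires_grad=True)
    loss_inner = (eps * (X_bias @ W - y_bias).pow(2).squeeze()).mean()
    grad_W, = torch.autograd.grad(loss_inner, W, create_graph=True)
    W_virtual = W - 1e-2 * grad_W               # one virtual SGD step

    # Outer step: the meta loss on unbiased data determines the weights.
    loss_meta = mse(X_unbias @ W_virtual, y_unbias)
    grad_eps, = torch.autograd.grad(loss_meta, eps)
    weights = torch.clamp(-grad_eps, min=0)     # keep helpful examples
    weights = weights / (weights.sum() + 1e-8)  # normalize the weights

    # Final step: update the base model with the learned example weights.
    opt.zero_grad()
    loss = (weights.detach() * (X_bias @ W - y_bias).pow(2).squeeze()).sum()
    loss.backward()
    opt.step()
```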
Open Source Code: No
Evidence: The NeurIPS checklist claims "The paper provides open access to the data and code", but neither the main paper nor the appendix gives a concrete link or access instructions that a reader can act on.
Open Datasets: Yes
Evidence: "we conduct extensive experiments on three public datasets, COAT, YAHOO! R3, and KUAIREC [11]." Dataset links: COAT: https://www.cs.cornell.edu/~schnabts/mnar/; YAHOO! R3: https://webscope.sandbox.yahoo.com; KUAIREC: https://github.com/chongminggao/KuaiRec
Dataset Splits: Yes
Evidence: "Following previous works [3, 28, 29, 34], we randomly split 5% unbiased data from the test set as validation set, and for all methods requiring RCT data, we employ observational data without hidden confounding to pretend RCT data."
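A minimal sketch of the stated split, holding out a random 5% of the unbiased test interactions as the validation set. The file name, array layout, and seed are assumptions; the authors' exact protocol may differ.

```python
# Hedged sketch: hold out 5% of the unbiased test set for validation.
import numpy as np

rng = np.random.default_rng(seed=2024)          # seed value is an assumption

test_ratings = np.loadtxt("coat_test.ascii")    # hypothetical file name
idx = rng.permutation(len(test_ratings))
n_val = int(0.05 * len(test_ratings))           # 5% for validation

val_set = test_ratings[idx[:n_val]]
test_set = test_ratings[idx[n_val:]]
```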
Hardware Specification: Yes
Evidence: "All the methods are implemented on PyTorch with Adam as the optimizer and NVIDIA A40 as the computing resource, and we tune learning rate in {0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05} and weight decay in [1e-7, 10]."
Software Dependencies: No
Evidence: The paper names PyTorch ("All the methods are implemented on PyTorch with Adam as the optimizer...") but specifies no version numbers for PyTorch or any other dependency.
Experiment Setup: Yes
Evidence: "Two-layer multi-layer perceptron are used as the base model, and we compare proposed methods with both RCT data-free and RCT data-based methods." (...) "All the methods are implemented on PyTorch with Adam as the optimizer and NVIDIA A40 as the computing resource, and we tune learning rate in {0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05} and weight decay in [1e-7, 10]."
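A minimal sketch of the reported setup: a two-layer MLP base model trained with Adam, sweeping the stated learning-rate grid and the weight-decay range. The layer width, feature size, and the specific weight-decay grid points are assumptions; the paper gives only the interval [1e-7, 10].

```python
# Hedged sketch of the reported experiment setup; details are assumptions.
import itertools
import torch
import torch.nn as nn

def make_base_model(num_features: int) -> nn.Module:
    # Two-layer multi-layer perceptron, as stated in the paper.
    return nn.Sequential(
        nn.Linear(num_features, 64),  # hidden width is an assumption
        nn.ReLU(),
        nn.Linear(64, 1),
    )

learning_rates = [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05]
weight_decays = [1e-7, 1e-5, 1e-3, 1e-1, 10]  # sampled from [1e-7, 10]

for lr, wd in itertools.product(learning_rates, weight_decays):
    model = make_base_model(num_features=32)  # feature size is an assumption
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=wd)
    # ... train on observational data, then select by the validation metric ...
```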