Be Aware of the Neighborhood Effect: Modeling Selection Bias under Interference
Authors: Haoxuan Li, Chunyuan Zheng, Sihao Ding, Peng Wu, Zhi Geng, Fuli Feng, Xiangnan He
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive semi-synthetic and real-world experiments are conducted to demonstrate the effectiveness of the proposed methods. |
| Researcher Affiliation | Academia | ¹Peking University, ²University of Science and Technology of China, ³Beijing Technology and Business University. hxli@stu.pku.edu.cn, dsihao@mail.ustc.edu.cn, {zhengchunyuan99, fulifeng93, xiangnanhe}@gmail.com, {pengwu, zhigeng}@btbu.edu.cn |
| Pseudocode | Yes | Algorithm 1: The Proposed Propensity Learning Algorithm, Algorithm 2: The Proposed N-IPS Learning Algorithm, Algorithm 3: The Proposed N-DR-JL Learning Algorithm, Algorithm 4: The Proposed N-MRDR-JL Learning Algorithm (a generic IPS sketch follows the table) |
| Open Source Code | Yes | Our codes and datasets are available at https://github.com/haoxuanli-pku/ICLR24-Interference. |
| Open Datasets | Yes | We conduct semi-synthetic experiments using the MovieLens 100K (ML-100K) dataset... Coat contains 6,960 MNAR ratings and 4,640 missing-at-random (MAR) ratings. Yahoo! R3 contains 311,704 MNAR ratings and 54,000 MAR ratings. KuaiRec (Gao et al., 2022) is a public large-scale industrial dataset, which contains 4,676,570 video watching ratio records from 1,411 users for 3,327 videos. |
| Dataset Splits | No | The paper mentions 'training the prediction model' and 'test data' but does not explicitly provide specific training/validation/test dataset splits (e.g., percentages or counts) or reference standard predefined splits for its experimental setup. |
| Hardware Specification | Yes | For all experiments, we use NVIDIA GeForce RTX 3090 as the computing resource. |
| Software Dependencies | No | The paper states that all experiments are implemented on PyTorch with Adam as the optimizer, but does not specify library or framework versions. |
| Experiment Setup | Yes | We tune the learning rate in {0.005, 0.01, 0.05, 0.1} and weight decay in [1e-6, 1e-2]. We tune bandwidth value in {40, 45, 50, 55, 60} for Coat, {1000, 1500, 2000, 2500, 3000} for Yahoo! R3, and {100, 150, 200, 250, 300} for KuaiRec. (A minimal tuning sketch follows the table.) |
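
For context on the estimators named in the Pseudocode row, the sketch below shows the standard inverse-propensity-scoring (IPS) loss that N-IPS, N-DR-JL, and N-MRDR-JL build on. This is a generic baseline sketch, not the paper's neighborhood-aware N-IPS (which additionally models interference when learning the propensities); all tensor shapes and names here are illustrative assumptions.

```python
import torch

def ips_loss(preds, ratings, observed, propensities):
    """Plain inverse-propensity-scoring (IPS) loss for MNAR ratings.

    `observed` is a 0/1 exposure indicator and `propensities` estimates
    P(observed = 1) for each (user, item) pair. This is the standard IPS
    estimator, not the paper's N-IPS, which further accounts for
    neighborhood interference when modeling the propensities.
    """
    per_pair = (preds - ratings) ** 2                      # squared error per pair
    weighted = observed * per_pair / propensities.clamp(min=1e-6)
    return weighted.mean()                                 # average over all pairs

# Tiny usage example on random tensors (illustrative only).
preds = torch.rand(4, 5) * 5
ratings = torch.randint(1, 6, (4, 5)).float()
observed = (torch.rand(4, 5) < 0.3).float()
propensities = torch.full((4, 5), 0.3)
print(ips_loss(preds, ratings, observed, propensities))
```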
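The tuning grids quoted in the Experiment Setup row can be searched with a simple loop. The sketch below is a minimal, self-contained illustration using Adam, with a toy matrix-factorization model and random data standing in for the paper's actual models and datasets; the discretization of the weight-decay interval [1e-6, 1e-2] is our own assumption.

```python
import itertools
import torch

# Toy stand-in data (hypothetical); the paper uses ML-100K, Coat,
# Yahoo! R3, and KuaiRec.
torch.manual_seed(0)
n_users, n_items, dim = 50, 40, 8
ratings = torch.randint(1, 6, (n_users, n_items)).float()
observed = (torch.rand(n_users, n_items) < 0.1).float()

# Learning-rate grid as quoted; the weight-decay interval is discretized
# here for illustration.
learning_rates = [0.005, 0.01, 0.05, 0.1]
weight_decays = [1e-6, 1e-4, 1e-2]

def fit(lr, wd, steps=200):
    """Fit a toy matrix-factorization model with Adam; return final loss."""
    U = torch.randn(n_users, dim, requires_grad=True)
    V = torch.randn(n_items, dim, requires_grad=True)
    opt = torch.optim.Adam([U, V], lr=lr, weight_decay=wd)
    for _ in range(steps):
        opt.zero_grad()
        loss = (observed * (U @ V.T - ratings) ** 2).sum() / observed.sum()
        loss.backward()
        opt.step()
    return loss.item()

# Real tuning would select on a validation metric rather than training loss.
best = min((fit(lr, wd), lr, wd)
           for lr, wd in itertools.product(learning_rates, weight_decays))
print("best (loss, lr, weight_decay):", best)
```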