Learning Fair Representations for Recommendation via Information Bottleneck Principle
Authors: Junsong Xie, Yonghui Yang, Zihan Wang, Le Wu
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical studies on two real-world datasets demonstrate the effectiveness of the proposed Fair IB, which significantly improves fairness while maintaining competitive recommendation accuracy, either in single or multiple sensitive scenarios. [...] In this section, we first introduce our experimental settings. Then, we conduct extensive comparisons with SOTA methods to verify the effectiveness of our proposed Fair IB. Finally, we give a detailed analysis of our method, including ablation studies and parameter sensitivities. |
| Researcher Affiliation | Academia | 1Hefei University of Technology 2Institute of Dataspace, Hefei Comprehensive National Science Center |
| Pseudocode | No | No. The paper describes the proposed method and its optimization through text and mathematical equations, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/jsxie9/IJCAI_Fair_IB. |
| Open Datasets | Yes | To evaluate the effectiveness of our proposed method, we select two real-world recommendation datasets: Movielens-1M [Harper and Konstan, 2015; Wu et al., 2020] and Last FM [Celma Herrada and others, 2009]. |
| Dataset Splits | Yes | Following the previous works [Wu et al., 2021a; Zhao et al., 2023], we split all interactions into training, validation, and test data. |
| Hardware Specification | Yes | We conduct experiments on an NVIDIA A40 GPU with Pytorch-2.1.2. |
| Software Dependencies | Yes | We conduct experiments on an NVIDIA A40 GPU with Pytorch-2.1.2. |
| Experiment Setup | Yes | For model training, we set the latent embedding size as D = 64, the batch size is set to 2048 for Movielens-1M and 4096 for Last FM. The regularization parameter α is set to 0.001. We adopt the Adam optimizer with a learning rate of 0.001. We repeat experiments 10 times and report the average results. |
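
For reference, below is a minimal sketch of how the reported training configuration might be wired up in PyTorch. Only the hyperparameter values (embedding size D = 64, batch size 2048/4096, Adam with learning rate 0.001, regularization α = 0.001) come from the paper; the matrix-factorization backbone, the BPR-style loss, the placeholder user/item counts, and the use of `weight_decay` for the regularization term are illustrative assumptions and do not reproduce the FairIB objective itself.

```python
import torch
import torch.nn as nn

# Hyperparameters reported in the paper's experiment setup.
EMBED_DIM = 64        # latent embedding size D
BATCH_SIZE = 2048     # 2048 for Movielens-1M (4096 for LastFM)
LEARNING_RATE = 1e-3  # Adam learning rate
ALPHA = 1e-3          # regularization parameter alpha

# NOTE: this backbone is a generic matrix-factorization recommender used
# purely for illustration; the paper's FairIB model and its information
# bottleneck objective are not reproduced here.
class MFRecommender(nn.Module):
    def __init__(self, num_users: int, num_items: int, dim: int = EMBED_DIM):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)

    def forward(self, users: torch.Tensor, items: torch.Tensor) -> torch.Tensor:
        # Preference score = inner product of user and item embeddings.
        return (self.user_emb(users) * self.item_emb(items)).sum(dim=-1)

# Placeholder user/item counts (not taken from the paper).
NUM_USERS, NUM_ITEMS = 6000, 3700
model = MFRecommender(NUM_USERS, NUM_ITEMS)

# weight_decay stands in for the regularization parameter alpha = 0.001;
# whether the paper applies it this way is an assumption.
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE, weight_decay=ALPHA)

# One training step on a random mini-batch (synthetic data, not the real dataset),
# using a BPR-style pairwise loss as an assumed stand-in for the actual objective.
users = torch.randint(0, NUM_USERS, (BATCH_SIZE,))
pos_items = torch.randint(0, NUM_ITEMS, (BATCH_SIZE,))
neg_items = torch.randint(0, NUM_ITEMS, (BATCH_SIZE,))
loss = -torch.log(torch.sigmoid(model(users, pos_items) - model(users, neg_items))).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```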