Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Bridging Jensen Gap for Max-Min Group Fairness Optimization in Recommendation

Authors: Chen Xu, Yuxin Li, Wenjie Wang, Liang Pang, Jun Xu, Tat-Seng Chua

ICLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments conducted using six large-scale RS backbone models on three publicly available datasets demonstrate that FairDual outperforms all baselines in terms of both accuracy and fairness. Our data and codes are shared at https://github.com/XuChen0427/FairDual.
Researcher Affiliation Academia 1 Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China 2 School of Information Science and Technology, University of Science and Technology of China, Hefei, China 3 Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China 4 NExT++ Research Center, National University of Singapore, Singapore
Pseudocode Yes Algorithm 1: FairDual
Open Source Code Yes Extensive experiments conducted using six large-scale RS backbone models on three publicly available datasets demonstrate that FairDual outperforms all baselines in terms of both accuracy and fairness. Our data and codes are shared at https://github.com/XuChen0427/FairDual.
Open Datasets Yes Datasets. The experiments are conducted on three widely used and publicly available recommendation datasets: MIND (Wu et al., 2020)1, Amazon-Book, and Amazon Electronic (He and McAuley, 2016)2. Their detailed statistical information is in Appendix I. 1https://microsoftnews.msn.com 2http://jmcauley.ucsd.edu/data/amazon/
Dataset Splits Yes Evaluation. We arrange all interactions in the dataset chronologically by their timestamps and employ the first 80% interactions as training data. The remaining 20% of interactions are divided equally, with each 10% segment used for validation and testing, respectively, during evaluation.
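The chronological 80/10/10 split described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the record layout `(user, item, timestamp)` and the function name are assumptions.

```python
# Sketch of the chronological split: sort interactions by timestamp,
# take the first 80% as training data, and divide the remaining 20%
# evenly into validation and test segments.

def chronological_split(interactions):
    """interactions: iterable of (user, item, timestamp) tuples."""
    ordered = sorted(interactions, key=lambda rec: rec[2])
    n = len(ordered)
    train_end = int(n * 0.8)   # first 80% -> training
    valid_end = int(n * 0.9)   # next 10% -> validation
    return ordered[:train_end], ordered[train_end:valid_end], ordered[valid_end:]

# Toy example with 10 interactions; timestamps are just their order.
interactions = [(u, i, t) for t, (u, i) in enumerate(
    [(0, 1), (1, 2), (0, 3), (2, 1), (1, 4),
     (2, 2), (0, 4), (1, 1), (2, 3), (0, 2)])]
train, valid, test = chronological_split(interactions)
```

With 10 interactions this yields 8 training, 1 validation, and 1 test record, mirroring the 80/10/10 protocol.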
Hardware Specification Yes Environment: our experiments were implemented using Python 3.9 and PyTorch 2.0.1+cu117 (Paszke et al., 2017). All experiments were conducted on a server with an NVIDIA A5000 running Ubuntu 18.04.
Software Dependencies Yes Environment: our experiments were implemented using Python 3.9 and PyTorch 2.0.1+cu117 (Paszke et al., 2017). All experiments were conducted on a server with an NVIDIA A5000 running Ubuntu 18.04. We implement FairDual with cvxpy (Diamond and Boyd, 2016) for optimization.
Experiment Setup Yes Hyper-parameter settings: the learning rate η ∈ [1e-2, 1e-4] (results shown in Figure 5), and trade-off factor λ ∈ [0, 10] (results shown in Figure 3). We set m_g as the group size m_g = |I_g|. We also tune the sample number Q ∈ [50, 400] (results shown in Table 5), historical length H ∈ [3, 7] (results shown in Table 4), and freeze parameter updating gap β ∈ [128, 3840] (results shown in Figure 4).
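A sweep over ranges like those above can be sketched with a simple grid search. This is a generic illustration, not the authors' tuning code; the grid values and the `evaluate` stub (which stands in for training and validating one configuration) are assumptions.

```python
# Illustrative grid search over the kinds of hyper-parameter ranges
# reported above: learning rate, trade-off factor, and sample number.
from itertools import product

learning_rates = [1e-2, 1e-3, 1e-4]
lambdas = [0, 1, 5, 10]            # accuracy/fairness trade-off factor
sample_numbers = [50, 100, 200, 400]

def evaluate(lr, lam, q):
    """Placeholder validation score for one configuration.
    In practice this would train the model and return a held-out metric."""
    return -abs(lr - 1e-3) - abs(lam - 1) - abs(q - 100) / 1000

# Pick the configuration with the best validation score.
best = max(product(learning_rates, lambdas, sample_numbers),
           key=lambda cfg: evaluate(*cfg))
```

With the placeholder score, the search selects the configuration closest to its fixed optimum; swapping in a real validation metric turns this into a standard exhaustive sweep.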