Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Learning Heterogeneous Performance-Fairness Trade-offs in Federated Learning
Authors: Rongguang Ye, Ming Tang
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on four datasets show that HetPFL significantly outperforms seven baselines in terms of the quality of learned local and global Pareto fronts. |
| Researcher Affiliation | Academia | Department of Computer Science and Engineering and the Research Institute of Trustworthy Autonomous Systems at Southern University of Science and Technology, Shenzhen, China |
| Pseudocode | No | The paper describes the 'HetPFL algorithm' in Section 3.4 by explaining the steps in paragraph text (e.g., 'In round t, the communicated model on client k is updated...', 'Then, we proceed to optimize the hypernet...') rather than presenting it in a structured pseudocode or algorithm block. |
| Open Source Code | Yes | Our implementation is available at https://github.com/rG223/HetPFL. |
| Open Datasets | Yes | Four widely-used datasets are employed to evaluate the performance of HetPFL, including SYNTHETIC [Zeng et al., 2021], COMPAS [Barenstein, 2019], BANK [Moro et al., 2014], and ADULT [Dua et al., 2017]. |
| Dataset Splits | No | The paper mentions using several datasets (SYNTHETIC, COMPAS, BANK, ADULT) but does not provide specific details regarding how these datasets were split into training, validation, or test sets (e.g., percentages, sample counts, or methodology for splitting). |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. It only discusses the experimental settings and datasets without mentioning the underlying computational hardware. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or frameworks used in the implementation of HetPFL. |
| Experiment Setup | Yes | Since PraFFL and HetPFL have the ability to generate any number of models during inference, we set them to generate 1,000 preference-specific models each for evaluation. |