Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
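The notice above describes validating automated LLM labels against a manually labeled dataset. As a hypothetical sketch (the function, data layout, and names here are illustrative assumptions, not the actual pipeline from [1]), per-variable agreement could be computed like this:

```python
# Illustrative sketch only: per-variable agreement between LLM-assigned
# labels and a manually labeled validation set. Data and names are
# hypothetical, not taken from the actual classification pipeline.
from collections import defaultdict

def per_variable_accuracy(llm_labels, manual_labels):
    """Fraction of papers where the LLM label matches the manual label,
    reported separately for each reproducibility variable."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for paper_id, manual in manual_labels.items():
        for variable, gold in manual.items():
            totals[variable] += 1
            if llm_labels.get(paper_id, {}).get(variable) == gold:
                hits[variable] += 1
    return {v: hits[v] / totals[v] for v in totals}

# Toy example with two papers and two reproducibility variables.
manual = {
    "p1": {"Open Source Code": "Yes", "Pseudocode": "Yes"},
    "p2": {"Open Source Code": "No", "Pseudocode": "Yes"},
}
llm = {
    "p1": {"Open Source Code": "Yes", "Pseudocode": "No"},
    "p2": {"Open Source Code": "No", "Pseudocode": "Yes"},
}
print(per_variable_accuracy(llm, manual))
# → {'Open Source Code': 1.0, 'Pseudocode': 0.5}
```

This kind of per-variable breakdown matters because agreement can be high for easy variables (e.g. Research Type) while being much lower for subjective ones, so a single aggregate accuracy number would be misleading.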
FairTrade: Achieving Pareto-Optimal Trade-Offs between Balanced Accuracy and Fairness in Federated Learning
Authors: Maryam Badar, Sandipan Sikdar, Wolfgang Nejdl, Marco Fisichella
AAAI 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide empirical evidence of our framework's efficacy through extensive experiments on five real-world datasets and comparisons with six baselines. |
| Researcher Affiliation | Academia | L3S Research Center, Leibniz University, Hannover, Germany |
| Pseudocode | Yes | Algorithm 1: FairTrade server-side algorithm |
| Open Source Code | Yes | For reproducibility, all resources associated with our research, including code and data, are publicly accessible at the provided repository link: https://github.com/badarm/FairTrade |
| Open Datasets | Yes | We evaluate FairTrade with five real-world datasets: (1) Bank (Bache and Lichman 2013), (2) Default (Bache and Lichman 2013), (3) Adult (Bache and Lichman 2013), (4) Law (Wightman 1998), and (5) KDD (Bache and Lichman 2013). |
| Dataset Splits | No | The paper describes how the dataset is distributed among clients (randomly or attribute-based) but does not provide specific training, validation, and test dataset split percentages or sample counts for model training. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like BoTorch and Gaussian Process, but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | No | The paper describes the model architecture and notes that some parameters (learning rate, regularization parameter) are optimized, but it does not provide the specific hyperparameter values (e.g., initial learning rate, batch size, number of epochs, or the values of 'no' and 'nc' from Algorithm 1) used for the reported experiments. |