Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning

Authors: Xinyi Xu, Lingjuan Lyu, Xingjun Ma, Chenglin Miao, Chuan Sheng Foo, Bryan Kian Hsiang Low

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically demonstrate the effectiveness of our fair gradient reward mechanism on multiple benchmark datasets in terms of fairness, predictive performance, and time overhead.
Researcher Affiliation | Collaboration | Department of Computer Science, National University of Singapore, Republic of Singapore; Sony AI; School of Computer Science, Fudan University, People's Republic of China; Department of Computer Science, University of Georgia, USA; Institute for Infocomm Research, A*STAR, Republic of Singapore
Pseudocode | Yes | We provide the pseudocodes performed by the server and agent i ∈ N in each iteration t below.
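For context, below is a minimal sketch of the per-iteration protocol that quote refers to, assuming the mechanism described in the paper: each agent uploads its local gradient, the server normalizes the gradients (coefficient Γ in (1)), updates each agent's reputation r_{i,t} as a moving average (weight α in (4)) of the cosine similarity between the agent's gradient and the aggregate, and returns a sparsified aggregate whose size grows with tanh(β · r_{i,t}) (altruism β in (5)). The function name, the top-k sparsification details, and the epsilon guards are illustrative, not the authors' implementation.

```python
import numpy as np

def server_iteration(grads, r_prev, alpha=0.95, beta=1.0, Gamma=0.5):
    """One server iteration (sketch): aggregate normalized agent gradients,
    update each agent's reputation via cosine similarity, and return
    sparsified reward gradients.

    grads  : list of flattened gradient vectors, one per agent
    r_prev : previous-round reputations r_{i,t-1}, one per agent
    """
    # (1) Normalize each uploaded gradient to a common norm Gamma.
    norm_grads = [Gamma * g / (np.linalg.norm(g) + 1e-12) for g in grads]
    agg = np.sum(norm_grads, axis=0)  # aggregated model update

    # (4) Reputation update: moving average of cosine similarity to the
    # aggregate, r_{i,t} = alpha * r_{i,t-1} + (1 - alpha) * cos(g_i, agg).
    cos = [g @ agg / (np.linalg.norm(g) * np.linalg.norm(agg) + 1e-12)
           for g in norm_grads]
    r = [alpha * rp + (1 - alpha) * c for rp, c in zip(r_prev, cos)]

    # (5) Reward: agent i receives the top-q_i fraction of the aggregate's
    # coordinates, with q_i increasing in tanh(beta * r_i) (beta = altruism).
    r_max = max(np.tanh(beta * rj) for rj in r) + 1e-12
    rewards = []
    for ri in r:
        qi = np.tanh(beta * ri) / r_max
        k = max(1, int(qi * agg.size))
        mask = np.zeros_like(agg)
        mask[np.argsort(np.abs(agg))[-k:]] = 1.0  # keep largest entries
        rewards.append(agg * mask)
    return r, rewards
```

On the agent side, each agent i ∈ N would compute its gradient on local data, upload it, and apply the sparsified reward gradient it receives; agents with higher reputation download a denser, more informative update.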
Open Source Code | No | The paper does not provide any explicit statements about making the source code available or include a link to a code repository.
Open Datasets | Yes | We perform extensive experiments on image classification datasets like MNIST [26] and CIFAR-10 [21] and text classification datasets like movie review (MR) [44] and Stanford sentiment treebank (SST) [20].
Dataset Splits | Yes | We perform extensive experiments on image classification datasets like MNIST [26] and CIFAR-10 [21] and text classification datasets like movie review (MR) [44] and Stanford sentiment treebank (SST) [20]. We use a 2-layer convolutional neural network (CNN) for MNIST [25], a 3-layer CNN for CIFAR-10 [22], and a text embedding CNN for MR and SST [20].
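As a point of reference for the architectures quoted above, here is an illustrative PyTorch sketch of a 2-layer CNN for MNIST; the quote does not give filter counts or kernel sizes, so the widths below are assumptions rather than the paper's reported architecture.

```python
import torch.nn as nn

class MnistCNN(nn.Module):
    """Illustrative 2-conv-layer CNN for 28x28 MNIST inputs; the exact
    layer widths and kernel sizes are assumptions, not the paper's."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),  # 28x28 -> 14x14
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),  # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(64 * 7 * 7, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```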
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU specifications, or memory.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup | Yes | Hyperparameters. We find that α ∈ [0.8, 1) (i.e., the relative weight on r_{i,t-1} in (4)), β ∈ [1, 2] (i.e., the degree of altruism in (5)), and Γ ∈ [0.1, 1] (i.e., the normalization coefficient in (1)) are effective in achieving competitive predictive performance and fairness. In our experiments, we set α = 0.95, β ∈ {1, 1.2, 1.5, 2}, and Γ = 0.5 for MNIST, Γ = 0.15 for CIFAR-10, and Γ = 1 for SST and MR.
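The quoted settings can be collected into a single configuration, e.g. as below; the dictionary keys are illustrative, and only the values come from the paper.

```python
# Hyperparameters quoted above, expressed as a config dict.
HPARAMS = {
    "alpha": 0.95,              # weight on r_{i,t-1} in the update (4)
    "betas": [1, 1.2, 1.5, 2],  # degrees of altruism tried in (5)
    "Gamma": {                  # gradient-normalization coefficient in (1)
        "MNIST": 0.5,
        "CIFAR-10": 0.15,
        "SST": 1.0,
        "MR": 1.0,
    },
}
```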