Task-Agnostic Privacy-Preserving Representation Learning for Federated Learning against Attribute Inference Attacks
Authors: Caridad Arroyo Arevalo, Sayedeh Leila Noorbakhsh, Yun Dong, Yuan Hong, Binghui Wang
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive results on multiple datasets and applications validate the effectiveness of TAPPFL to protect data privacy, maintain the FL utility, and be efficient as well. Experimental results also show that TAPPFL outperforms the existing defenses. |
| Researcher Affiliation | Academia | 1Illinois Institute of Technology, 2Benedictine University, 3University of Connecticut |
| Pseudocode | Yes | Algorithm 1 in the full version details the TAPPFL training process. |
| Open Source Code | Yes | The proofs are in the full version: https://github.com/TAPPFL. |
| Open Datasets | Yes | We evaluate our TAPPFL using three datasets from different applications. CIFAR-10 (Krizhevsky 2009) is an image dataset... For the Loans dataset (Hardt, Price, and Srebro 2016)... For the Adult income dataset (Becker and Kohavi 1996). |
| Dataset Splits | No | The paper mentions 'training/testing sets' but does not explicitly specify a validation set or its split percentage/count. |
| Hardware Specification | Yes | We use the Chameleon Cloud platform offered by the NSF (Keahey et al. 2020) (CentOS 7, CUDA 11, with an NVIDIA RTX 6000). |
| Software Dependencies | Yes | We use the Chameleon Cloud platform offered by the NSF (Keahey et al. 2020) (CentOS 7, CUDA 11, with an NVIDIA RTX 6000). ... The TAPPFL algorithm is implemented in PyTorch. |
| Experiment Setup | Yes | In each device, we train the three parameterized neural networks via the Stochastic Gradient Descent (SGD) algorithm, where we set the local batch size to be 10 and use 10 local epochs, and the learning rate in SGD is 0.01. ... The number of global rounds is set to be 20. (See the illustrative sketch after this table.) |
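
To make the reported setup concrete, the following is a minimal FedAvg-style sketch in PyTorch using the hyperparameters quoted above (local batch size 10, 10 local epochs, SGD learning rate 0.01, 20 global rounds). It is not the authors' TAPPFL code: the model architecture, synthetic data, number of devices, and plain parameter averaging are assumptions for illustration, and the privacy-preserving objective of the three per-device networks is omitted.

```python
# Minimal FedAvg-style sketch of the reported setup (NOT the authors' TAPPFL code).
# Stated hyperparameters: local batch size 10, 10 local epochs, SGD lr 0.01,
# 20 global rounds. Model, synthetic data, device count, and aggregation are assumptions.
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_DEVICES, GLOBAL_ROUNDS = 5, 20
LOCAL_EPOCHS, BATCH_SIZE, LR = 10, 10, 0.01

def make_model():
    # Stand-in classifier; TAPPFL actually trains three parameterized networks per device.
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))

# Synthetic per-device data so the sketch runs end to end.
loaders = [
    DataLoader(TensorDataset(torch.randn(200, 32), torch.randint(0, 2, (200,))),
               batch_size=BATCH_SIZE, shuffle=True)
    for _ in range(NUM_DEVICES)
]

global_model = make_model()
loss_fn = nn.CrossEntropyLoss()

for rnd in range(GLOBAL_ROUNDS):
    local_states = []
    for loader in loaders:
        local = copy.deepcopy(global_model)           # each device starts from the global model
        opt = torch.optim.SGD(local.parameters(), lr=LR)
        for _ in range(LOCAL_EPOCHS):                 # 10 local epochs of SGD
            for x, y in loader:
                opt.zero_grad()
                loss_fn(local(x), y).backward()
                opt.step()
        local_states.append(local.state_dict())
    # FedAvg aggregation: element-wise mean of the local parameters.
    avg_state = {
        k: torch.stack([s[k].float() for s in local_states]).mean(dim=0)
        for k in local_states[0]
    }
    global_model.load_state_dict(avg_state)
    print(f"round {rnd + 1}/{GLOBAL_ROUNDS} aggregated")
```

The sketch only shows the federated training loop and communication pattern implied by the quoted settings; reproducing TAPPFL itself would additionally require the task-agnostic privacy objective and the three-network training described in Algorithm 1 of the full version.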