FedInverse: Evaluating Privacy Leakage in Federated Learning

Authors: Di Wu, Jun Bai, Yiliao Song, Junjun Chen, Wei Zhou, Yong Xiang, Atul Sajjanhar

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experiments show that FedInverse can effectively evaluate the data-leakage risk that attackers successfully obtain data belonging to other participants. The code of this work is available at https://github.com/Jun-B0518/FedInverse
Researcher Affiliation | Academia | School of Mathematics, Physics and Computing, University of Southern Queensland; School of Information Technology, Deakin University; The University of Adelaide; Computer Center, Peking University; School of Science, Computing and Engineering Technologies, Swinburne University of Technology
Pseudocode | Yes | Algorithm 1: FedInverse Algorithm. (The algorithm itself is not reproduced in this report; a generic model-inversion background sketch is given after the table.)
Open Source Code | Yes | The code of this work is available at https://github.com/Jun-B0518/FedInverse
Open Datasets | Yes | We use three typical datasets, the CelebFaces Attributes Dataset (CelebA) (Liu et al., 2015), the MNIST dataset (LeCun et al., 1998), and CIFAR-10 (Krizhevsky et al., 2009), to evaluate FedInverse attack performance on different classification tasks (more experimental results on other datasets appear in Appendix A; a hedged loading sketch follows the table).
Dataset Splits | No | The paper does not explicitly provide details about a validation dataset split used during training.
Hardware Specification | No | The paper does not explicitly describe the specific hardware used to run the experiments, such as particular GPU or CPU models.
Software Dependencies | No | The paper mentions techniques such as Wasserstein-GAN but does not provide version numbers for the key software components or libraries used in the experiments.
Experiment Setup | Yes | For CelebA, we choose 5 participants to join every training round. The local training batch size is 64 and the number of local training epochs is 50. ... For MNIST, 100 participants are chosen to join the FL; however, only 10 of the 100 participants join the training in each round. The local training batch size is 10, and the number of local training epochs is 5. (A FedAvg-style sketch of this configuration follows the table.)
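
As context for the Open Datasets row, here is a minimal sketch of loading the three reported datasets with torchvision. The root path, splits, and transform are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: loading the three datasets named in the paper via torchvision.
# The root directory, splits, and transform here are illustrative assumptions.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

# CelebA (Liu et al., 2015): face-attribute dataset used for the identity task.
celeba = datasets.CelebA(root="./data", split="train", download=True, transform=to_tensor)

# MNIST (LeCun et al., 1998): handwritten digits.
mnist = datasets.MNIST(root="./data", train=True, download=True, transform=to_tensor)

# CIFAR-10 (Krizhevsky et al., 2009): 10-class natural images.
cifar10 = datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)
```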
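
The Experiment Setup row quotes the MNIST federation settings; the FedAvg-style sketch below shows how that configuration (100 participants, 10 sampled per round, local batch size 10, 5 local epochs) could be wired up. The model, optimizer, learning rate, and data loaders are assumptions, not the authors' implementation.

```python
# FedAvg-style sketch of the quoted MNIST setup: 100 participants, 10 sampled
# per round, local batch size 10, 5 local epochs. Model, optimizer, and
# learning rate are illustrative assumptions, not the authors' code.
import copy
import random
import torch

NUM_CLIENTS, CLIENTS_PER_ROUND = 100, 10
LOCAL_EPOCHS = 5  # local batch size of 10 is set in each client's DataLoader

def local_update(global_model, loader, lr=0.01):
    """Train a copy of the global model on one participant's local data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(LOCAL_EPOCHS):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fedavg_round(global_model, client_loaders):
    """One communication round: sample 10 of 100 clients, average their weights."""
    chosen = random.sample(range(NUM_CLIENTS), CLIENTS_PER_ROUND)
    states = [local_update(global_model, client_loaders[c]) for c in chosen]
    avg = {k: torch.stack([s[k].float() for s in states]).mean(dim=0)
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```

For the CelebA setup quoted in the same row, the constants would change to 5 participants per round, a local batch size of 64, and 50 local epochs.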
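
As background for the Pseudocode row, the following is a generic GAN-based model-inversion step, not the paper's Algorithm 1, which this report does not reproduce. It shows the kind of attack the abstract describes: an attacker optimizing a latent code so a pretrained generator emits samples the shared global model assigns to a target identity, with a Wasserstein-style critic (consistent with the Wasserstein-GAN mention in the Software Dependencies row) keeping the samples realistic. All names and the loss weighting are assumptions.

```python
# Generic GAN-based model-inversion step (NOT the paper's Algorithm 1).
# The attacker searches the generator's latent space for samples that the
# attacked FL model classifies as the target class; the critic term keeps
# candidates on the learned image manifold. Names and weighting are assumptions.
import torch
import torch.nn.functional as F

def inversion_loss(generator, critic, global_model, target_class, z,
                   lambda_identity=100.0):
    """Return the attack loss for one optimization step on latent code z."""
    x = generator(z)                        # candidate reconstructions
    prior_loss = -critic(x).mean()          # realism term from the GAN critic
    target = torch.full((z.size(0),), target_class, dtype=torch.long)
    identity_loss = F.cross_entropy(global_model(x), target)
    return prior_loss + lambda_identity * identity_loss
```

In a full attack loop, z would be updated by gradient descent on this loss, and the best reconstructions compared against the victims' training data to quantify leakage.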