PPIDSG: A Privacy-Preserving Image Distribution Sharing Scheme with GAN in Federated Learning

Authors: Yuting Ma, Yuanzhi Yao, Xiaohua Xu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The extensive experimental results and security analyses demonstrate the superiority of our proposed scheme compared to other state-of-the-art defense methods."
Researcher Affiliation | Academia | Yuting Ma¹, Yuanzhi Yao²*, Xiaohua Xu¹*; ¹University of Science and Technology of China; ²Hefei University of Technology. ytma@mail.ustc.edu.cn, yaoyz@hfut.edu.cn, xiaohuaxu@ustc.edu.cn
Pseudocode | Yes | Algorithm 1: Block Scrambling-based Encryption. Input: original image x_i, random keys K_j, j ∈ {1, …, 5}. Output: encrypted image x̂_i. 1: IB(x_i): divide x_i with P_x × P_y pixels into n non-overlapping blocks B_l^(0) of B_x × B_y pixels, l ∈ [1, n]. 2: for each B_l^(0), l ∈ [1, n] do 3: B_l^(1) ← IR(B_l^(0), K_1); 4: B_l^(2) ← IA(B_l^(1), K_2); 5: B_l^(3) ← IF(B_l^(2), K_3); 6: B_l^(4) ← IC(B_l^(3), K_4); // optional 7: end for 8: IS(B_l^(4), K_5): shuffle and assemble all blocks B_l^(4) with K_5 to generate the encrypted image x̂_i. 9: return x̂_i. (A minimal sketch of this procedure appears after the table.)
Open Source Code | Yes | The code is available at https://github.com/ytingma/PPIDSG.
Open Datasets | Yes | We carry out experiments on four datasets: MNIST (Deng 2012), FMNIST (Xiao, Rasul, and Vollgraf 2017), CIFAR10 (Krizhevsky and Hinton 2009), and SVHN (Netzer et al. 2011).
Dataset Splits | No | The paper uses standard datasets (MNIST, FMNIST, CIFAR10, SVHN) and mentions using "test datasets as shadow datasets" for some attacks. It also describes a federated learning system with ten clients, each holding the same amount of training data drawn from the identical distribution (a minimal sketch of such a partition appears after the table). However, it does not explicitly provide train/validation/test split percentages or sample counts for the main model training, nor does it cite standard splits that would define them.
Hardware Specification | Yes | Our experiments are performed on the PyTorch platform using an NVIDIA GeForce 3090 Ti GPU.
Software Dependencies | No | The paper mentions that "Our experiments are performed on the PyTorch platform" but does not specify a version number for PyTorch or for any other software dependency, such as Python, CUDA, or other libraries used in the implementation.
Experiment Setup | Yes | We set the batch size to 64, the image pool size to 10, and the block sizes B_x and B_y in the encryption algorithm to 4. We apply an Adam optimizer with a learning rate of 0.0002 in G and D. For F and C, we use an SGD optimizer with a learning rate of 0.01 (weight decay 0.001). The initial learning rates are constant in the first 20 global iterations and then decrease linearly until they converge to 0. We set λ_sem = 1, λ_cls = 2 and run 50 global rounds. (A PyTorch sketch of this configuration appears after the table.)
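
For concreteness, here is a minimal NumPy sketch of the block scrambling pipeline in Algorithm 1. The table row only names the per-block operators, so their interpretations here are assumptions: IR as a keyed rotation, IA as a keyed negative-positive (pixel inversion) transform, IF as a keyed flip, IC as an optional keyed color-channel shuffle, and IS as a keyed permutation of blocks. The function name and key handling are illustrative, not the authors' implementation.

```python
import numpy as np

def block_scramble_encrypt(x, keys, bx=4, by=4):
    """Sketch of Algorithm 1 (block scrambling-based encryption).

    x    : H x W x C uint8 image with H % bx == 0 and W % by == 0.
    keys : five integer seeds standing in for K_1..K_5.
    """
    h, w, c = x.shape
    nb_h, nb_w = h // bx, w // by
    rngs = [np.random.default_rng(k) for k in keys]

    # IB: divide the image into n non-overlapping bx x by blocks.
    blocks = [x[i * bx:(i + 1) * bx, j * by:(j + 1) * by].copy()
              for i in range(nb_h) for j in range(nb_w)]

    out = []
    for b in blocks:
        b = np.rot90(b, k=int(rngs[0].integers(4)))       # IR: keyed rotation (K_1)
        if rngs[1].integers(2):                           # IA: keyed negative-positive transform (K_2)
            b = 255 - b
        if rngs[2].integers(2):                           # IF: keyed horizontal/vertical flip (K_3)
            b = np.flip(b, axis=int(rngs[2].integers(2)))
        if c == 3:                                        # IC: optional color-channel shuffle (K_4)
            b = b[:, :, rngs[3].permutation(3)]
        out.append(b)

    # IS: shuffle all transformed blocks with K_5 and reassemble the image.
    enc = np.zeros_like(x)
    for dst, src in enumerate(rngs[4].permutation(len(out))):
        i, j = divmod(dst, nb_w)
        enc[i * bx:(i + 1) * bx, j * by:(j + 1) * by] = out[src]
    return enc
```

With the B_x = B_y = 4 setting from the Experiment Setup row, a 32 × 32 CIFAR10 image is split into n = 64 blocks per image.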
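The Dataset Splits row describes ten clients holding equal amounts of identically distributed training data, without explicit split sizes. Below is a minimal sketch of such an IID partition, assuming torchvision's standard CIFAR10 train split; the seed and batch size are illustrative choices, not values confirmed by the paper beyond the batch size of 64.

```python
import torch
from torchvision import datasets, transforms

NUM_CLIENTS = 10

# Standard CIFAR10 train split; the paper does not state custom splits.
train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transforms.ToTensor())

# Equal-size IID shards: every client draws from the identical distribution.
shard = len(train_set) // NUM_CLIENTS
client_sets = torch.utils.data.random_split(
    train_set, [shard] * NUM_CLIENTS,
    generator=torch.Generator().manual_seed(0))

loaders = [torch.utils.data.DataLoader(s, batch_size=64, shuffle=True)
           for s in client_sets]
```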
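Finally, a minimal PyTorch sketch of the training configuration from the Experiment Setup row. The nn.Linear modules are placeholders standing in for the paper's generator G, discriminator D, feature extractor F, and classifier C (their real architectures are not reproduced here), and the LambdaLR schedule is one plausible reading of "constant for the first 20 global iterations, then linear decay to 0".

```python
import torch

# Placeholders for the paper's generator G, discriminator D,
# feature extractor F, and classifier C.
G, D = torch.nn.Linear(8, 8), torch.nn.Linear(8, 8)
F, C = torch.nn.Linear(8, 8), torch.nn.Linear(8, 8)

# Adam with lr 0.0002 for G and D; SGD with lr 0.01, weight decay 0.001 for F and C.
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_f = torch.optim.SGD(F.parameters(), lr=0.01, weight_decay=0.001)
opt_c = torch.optim.SGD(C.parameters(), lr=0.01, weight_decay=0.001)

GLOBAL_ROUNDS, WARMUP = 50, 20
lambda_sem, lambda_cls = 1.0, 2.0  # loss weights from the setup row

def lr_lambda(round_idx):
    """Constant for the first 20 global rounds, then linear decay toward 0."""
    if round_idx < WARMUP:
        return 1.0
    return max(0.0, 1.0 - (round_idx - WARMUP) / (GLOBAL_ROUNDS - WARMUP))

schedulers = [torch.optim.lr_scheduler.LambdaLR(o, lr_lambda)
              for o in (opt_g, opt_d, opt_f, opt_c)]

# In a training loop, each scheduler.step() is called once per global round.
```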