Protecting Split Learning by Potential Energy Loss
Authors: Fei Zheng, Chaochao Chen, Lingjuan Lyu, Xinyi Fu, Xing Fu, Weiqiang Wang, Xiaolin Zheng, Jianwei Yin
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on multiple datasets, showing that our method significantly reduces the attacker’s learning accuracy of both fine-tuning attacks and clustering attacks, and performs better than the existing distance correlation approach. (An illustrative sketch of such a potential energy loss follows the table.) |
| Researcher Affiliation | Collaboration | ¹College of Computer Science and Technology, Zhejiang University; ²Sony AI; ³Ant Group |
| Pseudocode | No | The paper includes mathematical formulations and descriptions of processes but does not present any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include an explicit statement about making their source code available or provide a link to a code repository. |
| Open Datasets | Yes | We conduct experiments on four different datasets, i.e., MNIST [LeCun et al., 1998], Fashion-MNIST [Xiao et al., 2017], CIFAR-10 [Krizhevsky et al., 2009], and DBpedia [Auer et al., 2007]. (A loading sketch for these datasets follows the table.) |
| Dataset Splits | No | The paper discusses training and testing datasets, but it does not explicitly mention the use of a separate validation dataset or specify validation splits in the main text. |
| Hardware Specification | Yes | We implement the experiment codes using the PyTorch and Scikit-Learn [Pedregosa et al., 2011] libraries, and run them on servers with NVIDIA RTX3090 GPUs. |
| Software Dependencies | No | The paper states: "We implement the experiment codes using the PyTorch and Scikit-Learn [Pedregosa et al., 2011] libraries", but it does not give version numbers for these dependencies, which reproducibility requires. (A version-recording sketch follows the table.) |
| Experiment Setup | Yes | For the selection of hyperparameters, we vary the loss coefficient (α) from 0.25 to 32 for PELoss, and starting at 1/32 for Dcor Loss; for Label DP, the ratio of randomly flipped labels varies from 0.01 to 0.16. In all experiments, the value doubles each time. Detailed experiment settings are provided in Appendix B. (The doubling grids are spelled out in a sketch after the table.) |
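
For readers unfamiliar with the method, here is a minimal sketch of what a "potential energy" regularizer for split learning could look like, assuming a Coulomb-style inverse-distance repulsion among same-class cut-layer embeddings. The function name `potential_energy_loss`, the 1/d kernel, and the way it is combined with the task loss are illustrative assumptions, not the paper's verbatim formulation.

```python
import torch

def potential_energy_loss(embeddings: torch.Tensor,
                          labels: torch.Tensor,
                          eps: float = 1e-8) -> torch.Tensor:
    """Hypothetical Coulomb-style repulsion among same-class embeddings.

    Embeddings of the same class repel each other (energy ~ 1/distance),
    which discourages the tight per-class clusters that label-inference
    attacks exploit. This is an illustrative sketch, not the paper's
    exact loss.
    """
    energy = embeddings.new_zeros(())
    for c in labels.unique():
        group = embeddings[labels == c]        # embeddings of one class
        if group.size(0) < 2:
            continue
        dists = torch.cdist(group, group)      # pairwise Euclidean distances
        # Sum 1/d over distinct pairs (upper triangle, diagonal excluded).
        iu, ju = torch.triu_indices(group.size(0), group.size(0), offset=1)
        energy = energy + (1.0 / (dists[iu, ju] + eps)).sum()
    return energy

# Usage (alpha is the loss coefficient swept in the last table row):
#   total_loss = task_loss + alpha * potential_energy_loss(z, y)
# where z are the cut-layer activations and y the private labels.
```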
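
All four datasets in the "Open Datasets" row are publicly downloadable. A minimal loading sketch, assuming torchvision for the three image datasets and the Hugging Face `datasets` hub name `dbpedia_14` for DBpedia (the paper cites Auer et al., 2007 but does not name a specific distribution):

```python
import torchvision.datasets as tvd
import torchvision.transforms as T
from datasets import load_dataset  # Hugging Face `datasets` package

to_tensor = T.ToTensor()

# Image datasets used in the paper (downloaded on first use).
mnist   = tvd.MNIST("data", train=True, download=True, transform=to_tensor)
fmnist  = tvd.FashionMNIST("data", train=True, download=True, transform=to_tensor)
cifar10 = tvd.CIFAR10("data", train=True, download=True, transform=to_tensor)

# DBpedia: the "dbpedia_14" hub name is an assumption about the source.
dbpedia = load_dataset("dbpedia_14", split="train")
```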
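
Because the "Software Dependencies" row flags missing version numbers, anyone re-running the experiments should record the environment they actually used. A minimal sketch:

```python
import sys
import sklearn
import torch

# Log the exact environment alongside results, since the paper omits
# version numbers for its dependencies.
print(f"python       {sys.version.split()[0]}")
print(f"torch        {torch.__version__} (CUDA {torch.version.cuda})")
print(f"scikit-learn {sklearn.__version__}")
```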
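
Finally, the hyperparameter sweep in the "Experiment Setup" row doubles each value, so the grids can be regenerated programmatically. The PELoss and Label DP endpoints come from the row itself; the `doubling_grid` helper and its float tolerance are illustrative, and the Dcor Loss upper endpoint is not recoverable from the quoted text.

```python
def doubling_grid(start: float, stop: float) -> list[float]:
    """Values from `start` up to `stop`, doubling each time."""
    values, v = [], start
    while v <= stop * 1.000001:  # small tolerance for float drift
        values.append(v)
        v *= 2
    return values

print(doubling_grid(0.25, 32))    # PELoss alpha: 0.25, 0.5, 1, ..., 32
print(doubling_grid(0.01, 0.16))  # Label DP flip ratio: 0.01, ..., 0.16
# Dcor Loss starts at 1/32; its upper endpoint is not given in the text.
```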