FedVS: Straggler-Resilient and Privacy-Preserving Vertical Federated Learning for Split Models
Authors: Songze Li, Duanyi Yao, Jin Liu
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on various types of VFL datasets (including tabular, CV, and multi-view) demonstrate the universal advantages of FedVS in straggler mitigation and privacy protection over baseline protocols. |
| Researcher Affiliation | Academia | The Hong Kong University of Science and Technology (Guangzhou); The Hong Kong University of Science and Technology. Correspondence to: Songze Li <songzeli@ust.hk>. |
| Pseudocode | Yes | The paper includes “Algorithm 1 The FedVS protocol” with a structured block of steps. |
| Open Source Code | No | The paper does not provide any statement about releasing open-source code or a link to a code repository. |
| Open Datasets | Yes | We carry out split VFL experiments on six real-world datasets spanning three types, including the Fashion-MNIST dataset (Xiao et al., 2017a), Parkinson (Sakar et al., 2019), Credit Card (Yeh & Lien, 2009), EMNIST (Cohen et al., 2017), Handwritten (Dua & Graff, 2017), and Caltech-7 (Li et al., 2022). |
| Dataset Splits | Yes | For Parkinson: “70% of the data is regarded as training data and the remaining part is test data.” For Handwritten: “The dataset is split to 60% as the train set and 40% as the test set.” For Caltech-7: “80% of the dataset is used for training and 20% for testing.” |
| Hardware Specification | Yes | All experiments are performed on a single machine using four NVIDIA GeForce RTX 3090 GPUs. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x) used for running its experiments. |
| Experiment Setup | Yes | We provide descriptions of the datasets, number of clients considered for each dataset, employed model architectures, and training parameters in Appendix C. For Parkinson: “The learning rate is set to 0.005. The batch size is 16.” For Credit Card: “The learning rate is set to 0.01 and the batch size is set to 32.” (See the illustrative configuration sketch below.) |
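
The split fractions and training hyperparameters quoted above can be collected into a small configuration sketch. This is an illustrative assumption only: the paper releases no code, so the helper names (`SPLITS`, `HPARAMS`, `split_dataset`) and the use of scikit-learn's `train_test_split` are hypothetical; only the numeric values come from the quoted text.

```python
# Hypothetical sketch of the reported splits and hyperparameters.
# The paper does not release code; names and structure here are assumptions.
from sklearn.model_selection import train_test_split

# Train fractions reported per dataset (remainder is the test set).
SPLITS = {
    "parkinson": 0.70,    # "70% of the data is regarded as training data"
    "handwritten": 0.60,  # "60% as the train set and 40% as the test set"
    "caltech7": 0.80,     # "80% of the dataset is used for training"
}

# Training hyperparameters reported per dataset.
HPARAMS = {
    "parkinson": {"learning_rate": 0.005, "batch_size": 16},
    "credit_card": {"learning_rate": 0.01, "batch_size": 32},
}

def split_dataset(features, labels, name):
    """Split a dataset using the train fraction reported for `name`."""
    return train_test_split(
        features, labels, train_size=SPLITS[name], random_state=0
    )
```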