Integer Is Enough: When Vertical Federated Learning Meets Rounding

Authors: Pengyu Qiu, Yuwen Pu, Yongchao Liu, Wenyan Liu, Yun Yue, Xiaowei Zhu, Lichun Li, Jinbao Li, Shouling Ji

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our theoretical analysis and empirical results demonstrate the benefits of the rounding layer in reducing computation and memory overhead, providing privacy protection, preserving model performance, and mitigating adversarial attacks. We hope this paper inspires further research into novel architectures to address practical issues in VFL.
Researcher Affiliation | Collaboration | Zhejiang University; Ant Group; Qilu University of Technology
Pseudocode | Yes | Algorithm 1: Rounding in Vertical Federated Learning (see the hedged rounding-layer sketch after the table).
Open Source Code | No | The paper does not provide an explicit statement about releasing its own source code or a direct link to a code repository for the methodology described.
Open Datasets | Yes | MNIST (Lecun et al. 1998) is a widely used benchmark consisting of handwritten digits, with a training set of 60,000 examples and a test set of 10,000 examples. Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017) is a newly proposed dataset containing images of different types of clothes, with a training set of 60,000 examples and a test set of 10,000 examples. CIFAR10 (Krizhevsky, Hinton et al. 2009) is also a well-known image dataset, consisting of 60,000 colour images.
Dataset Splits | No | The paper describes training and test sets (60,000 training and 10,000 test examples for MNIST and Fashion-MNIST; 60,000 images for CIFAR10 without detailed splits) but does not state a separate validation set size or split for its experiments.
Hardware Specification | No | The paper does not explicitly describe the specific hardware used to run its experiments, such as GPU or CPU models. It only discusses software and training parameters.
Software Dependencies | No | The paper mentions software like PyTorch, Captum, and SciPy with citations but does not provide specific version numbers for these software dependencies (e.g., PyTorch 1.x.x, SciPy x.y.z).
Experiment Setup | Yes | The training details are set as follows: training epochs of 30, batch size of 256, learning rate of 10^-3, and weight decay of 5 × 10^-4. Adam (Kingma and Ba 2017) is the optimizer used by default. (See the training-setup sketch after the table.)
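For readers who want a concrete picture of the rounding layer referenced in Algorithm 1, the following is a minimal PyTorch sketch, not the authors' implementation. It assumes the passive party rounds its intermediate embeddings to integers before sending them to the active party, and it uses a straight-through gradient in the backward pass; the module name RoundingLayer, the straight-through choice, and the feature dimensions are our assumptions.

```python
import torch
import torch.nn as nn


class RoundingLayer(nn.Module):
    """Round intermediate embeddings to integers before they leave a
    passive party in vertical federated learning.

    The straight-through gradient used below is an assumption for this
    sketch, not a detail confirmed by the paper excerpt.
    """

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The forward output equals round(x); the detached residual keeps
        # the identity gradient, since torch.round itself has zero gradient
        # almost everywhere.
        return x + (torch.round(x) - x).detach()


# Hypothetical bottom model for one passive party (392 = half of a
# flattened 28x28 MNIST image, assuming a two-party feature split).
bottom_model = nn.Sequential(
    nn.Linear(392, 64),
    nn.ReLU(),
    RoundingLayer(),
)

embeddings = bottom_model(torch.randn(256, 392))
assert torch.equal(embeddings, embeddings.round())  # integer-valued outputs
```

The straight-through trick keeps the rounded integers in the forward pass while letting gradients flow as if rounding were the identity, a common way to train through a non-differentiable quantization step.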
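The reported hyperparameters (30 epochs, batch size 256, learning rate 10^-3, weight decay 5 × 10^-4, Adam) translate into the following sketch. Only the optimizer settings come from the paper; the top model, loss, and data pipeline are placeholders of our own, since the excerpt does not describe them.

```python
import torch
import torch.nn as nn

# Hyperparameters reported in the paper's experiment setup.
EPOCHS = 30
BATCH_SIZE = 256
LEARNING_RATE = 1e-3
WEIGHT_DECAY = 5e-4

# Placeholder top model for the active party; the real architectures are
# not specified in this excerpt.
top_model = nn.Sequential(nn.Linear(128, 10))

# Adam is the default optimizer per the paper; lr and weight decay match
# the reported values.
optimizer = torch.optim.Adam(
    top_model.parameters(), lr=LEARNING_RATE, weight_decay=WEIGHT_DECAY
)
criterion = nn.CrossEntropyLoss()

for epoch in range(EPOCHS):
    # A real run would iterate over batches of size BATCH_SIZE that combine
    # the parties' (rounded) embeddings with the active party's labels; the
    # VFL data pipeline is omitted because the excerpt does not describe it.
    ...
```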