Effective passive membership inference attacks in federated learning against overparameterized models

Authors: Jiacheng Li, Ninghui Li, Bruno Ribeiro

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | An extensive empirical evaluation (described in Section 4) shows that, at later gradient update rounds t ≫ 1 of the optimization (in our experiments, t > 50 if trained from scratch and t ≥ 2 if fine-tuned) of medium-to-large neural networks, and at nearly any stage of the fine-tuning of large pre-trained models, gradient vectors of different training instances are orthogonal in the same way distinct samples of independent isotropic random vectors are orthogonal (such as two high-dimensional Gaussian random vectors with zero mean and a diagonal, i.e., isotropic, covariance matrix). (A minimal numerical sketch of this near-orthogonality appears after the table.)
Researcher Affiliation | Academia | Jiacheng Li, Ninghui Li & Bruno Ribeiro, Department of Computer Science, Purdue University, West Lafayette, IN 47903, USA. li2829@purdue.edu, {ninghui,ribeiro}@cs.purdue.edu
Pseudocode | No | The paper describes the methods textually and mathematically but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | No | The paper does not explicitly state that source code for its methodology is made available or provide a link to a code repository.
Open Datasets | Yes | The medical-MNIST dataset (apolanco3225, 2017) is a simple MNIST-style dataset of medical images at 64x64 resolution; the images were originally taken from other datasets and processed into this style.
Dataset Splits | Yes | We divide this dataset into 3 disjoint sets: 40,000 images for training, 5,000 images for validation, and 8,724 images for testing. (A sketch of this split appears after the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU models, or memory specifications used for running the experiments.
Software Dependencies | No | The paper does not explicitly mention specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, TensorFlow 2.x).
Experiment Setup | Yes | A detailed description of the training parameters is given in Table 6 in the Appendix.
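
The Research Type row quotes the paper's claim that per-instance gradients behave like independent isotropic random vectors, which are nearly orthogonal in high dimension. Below is a minimal sketch, not from the paper, that illustrates this property numerically: the cosine similarity of two independent isotropic Gaussian vectors concentrates around 0 as the dimension grows. The dimensions chosen are stand-ins for a model's parameter count, not values from the paper.

```python
# Sketch (not the authors' code): independent high-dimensional isotropic
# Gaussian vectors are nearly orthogonal, the behavior the paper reports
# for gradients of different training instances at later rounds.
import numpy as np

rng = np.random.default_rng(0)

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

for d in (10, 1_000, 100_000):  # dimension; stand-in for parameter count
    u, v = rng.standard_normal(d), rng.standard_normal(d)
    print(f"d={d:>7}: cos(u, v) = {cosine(u, v):+.4f}")

# |cos(u, v)| shrinks at rate O(1/sqrt(d)), so in the overparameterized
# regime the two vectors are almost orthogonal.
```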
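
The Dataset Splits row reports a 40,000 / 5,000 / 8,724 train/validation/test partition of medical-MNIST. The sketch below reproduces those sizes with a random permutation; the shuffling policy and index handling are assumptions for illustration, not the authors' released pipeline.

```python
# Hypothetical sketch of the quoted medical-MNIST split
# (40,000 train / 5,000 validation / 8,724 test).
import numpy as np

rng = np.random.default_rng(0)
n_total = 40_000 + 5_000 + 8_724    # 53,724 images in total
indices = rng.permutation(n_total)  # disjoint sets via one permutation

train_idx = indices[:40_000]
val_idx   = indices[40_000:45_000]
test_idx  = indices[45_000:]
assert len(test_idx) == 8_724
```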