Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Theoretically Unmasking Inference Attacks Against LDP-Protected Clients in Federated Vision Models
Authors: Quan Minh Nguyen, Minh N. Vu, Truc Nguyen, My T. Thai
ICML 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Practical evaluations on federated vision models confirm considerable privacy risks, revealing that the noise required to mitigate these attacks significantly degrades model utility. Our experimental results on real-world state-of-the-art vision models such as ViTs and ResNet (He et al., 2016) demonstrate that AMI attacks achieve notably high success rates even under stringent LDP protection (i.e., small privacy budgets ϵ) that considerably degrades the model's utility (Section 5). |
| Researcher Affiliation | Academia | 1 Department of CISE, University of Florida, USA. 2 Center for Nonlinear Studies, Los Alamos National Lab, USA. 3 Computational Science Center, National Renewable Energy Laboratory, USA. Correspondence to: My T. Thai <EMAIL>. |
| Pseudocode | Yes | Algorithm 1: AD FC ATTACK(T), exploiting the fully-connected layer in AMI; Algorithm 2: AD FC GUESS(T, θ), exploiting the fully-connected layer in AMI; Algorithm 3: AD Attn ATTACK(v), using self-attention in AMI; Algorithm 4: AD Attn GUESS(v, θ), using self-attention in AMI |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the described methodology, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | The synthetic datasets include one-hot encoded data and spherical data (points on the unit sphere). The real-world datasets, including CIFAR10, CIFAR100 (Krizhevsky et al., 2009), and ImageNet (Krizhevsky et al., 2012), are processed using pre-trained embedding modules to obtain data D for our threat models. |
| Dataset Splits | Yes | For AD FC, we use the native classification tasks and the original models published along with the datasets. To implement LDP, we add noise directly to ResNet's embedding. For ViTs, we add noise to the patch embeddings of the image. |
| Hardware Specification | Yes | Our experiments are implemented using Python 3.8 and executed on a single GPU-enabled compute node running a Linux 64-bit operating system. The node is allocated 36 CPU cores with 2 threads per core and 384GB of RAM. Additionally, the node is equipped with 2 RTX A6000 GPUs, each with 48GB of memory. |
| Software Dependencies | No | The paper mentions 'Python 3.8' but does not list any versioned libraries, frameworks, or solvers that would be necessary to replicate the experiments. |
| Experiment Setup | Yes | For Theorem 1, τ_D is set to X. The argument is made in Subsect. 4.1. For Theorem 3, β is chosen such that the condition of the theorem holds and γ is set to 2ε. The argument is made in Appx. D.4. ... We integrate these simulations into our implementations of AD FC and AD Attn to tune τ_D and γ before the security games. |
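The "Dataset Splits" excerpt above notes that LDP is implemented by adding noise directly to ResNet embeddings (or ViT patch embeddings). A minimal sketch of what such a mechanism can look like is below, using a Gaussian mechanism with L2 clipping; the function name, `clip_norm`, and `delta` calibration are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def ldp_gaussian_embed(embedding, epsilon, delta=1e-5, clip_norm=1.0, rng=None):
    """Illustrative LDP via the Gaussian mechanism: clip the embedding's
    L2 norm to bound sensitivity, then add calibrated Gaussian noise.
    Smaller epsilon (tighter privacy budget) means larger noise, which is
    the utility degradation the paper's experiments measure."""
    rng = rng if rng is not None else np.random.default_rng()
    emb = np.asarray(embedding, dtype=float)
    norm = np.linalg.norm(emb)
    if norm > clip_norm:
        emb = emb * (clip_norm / norm)  # project onto the clip_norm ball
    # Standard Gaussian-mechanism noise scale for sensitivity = clip_norm
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return emb + rng.normal(0.0, sigma, size=emb.shape)
```

In a federated-vision setting this would be applied client-side to each embedding before it leaves the device; the paper's finding is that even budgets ϵ small enough to visibly hurt accuracy still leave AMI attacks with high success rates.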