Towards Eliminating Hard Label Constraints in Gradient Inversion Attacks
Authors: Yanbo Wang, Jian Liang, Ran He
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments testify to the label recovery accuracy, as well as the benefits to the following image reconstruction. We believe soft labels in classification tasks are worth further attention in gradient inversion attacks. Extensive experiments on various datasets under multiple networks demonstrate the correctness of such a scalar. Extensive experiments have proved the recovery accuracy and quality, together with the benefits of following image reconstruction on both fully-connected networks and CNNs. |
| Researcher Affiliation | Academia | Yanbo Wang (1,2), Jian Liang (1,2), Ran He (1,2). (1) School of Artificial Intelligence, University of Chinese Academy of Sciences (UCAS); (2) CRIPAC & MAIS, Institute of Automation, Chinese Academy of Sciences (CASIA) |
| Pseudocode | Yes | Here we present a big picture for our label recovery algorithms with gradient-descent based optimizer and PSO optimizer, as shown in Algorithm 1 and Algorithm 2. Full details can be checked in our repository. (A hedged sketch of such a gradient-matching recovery loop is given in the first code block below the table.) |
| Open Source Code | Yes | Our code is publicly available at https://github.com/ybwang119/label_recovery. |
| Open Datasets | Yes | We first test the label recovery accuracy with CIFAR-100 (Krizhevsky, 2009), Flowers-17 (Nilsback & Zisserman, 2006) and ImageNet (Russakovsky et al., 2015) on ResNet50 (He et al., 2016) network. (The second code block below the table sketches how this dataset/model pairing could be instantiated.) |
| Dataset Splits | No | The paper mentions using 'testset' and 'validation dataset' (e.g., 'CIFAR-10 validation dataset') but does not specify the exact percentages or counts for training, validation, and test splits needed to reproduce data partitioning. |
| Hardware Specification | No | The paper mentions 'CUDA memory limitation' which implies the use of NVIDIA GPUs, but no specific GPU models, CPU models, or detailed hardware specifications are provided. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., PyTorch 1.x, Python 3.x). |
| Experiment Setup | Yes | For L-BFGS, we set the learning rate=0.5, bound=100, iteration=200, coefficient=4, and initial=1. For PSO, we set initial=1, pop=200, max_iter=30 by default. (The third code block below the table maps these hyperparameters onto plausible optimizer APIs.) |
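
To make the evaluated approach concrete, here is a minimal sketch of the kind of gradient-matching label recovery loop the Pseudocode row refers to. This is not the authors' Algorithm 1: the function name `recover_soft_label` and its arguments are illustrative, and only the general idea (optimize a soft label so the gradient it induces matches the gradient observed from the victim) follows the paper.

```python
import torch
import torch.nn.functional as F

def recover_soft_label(model, x_dummy, observed_grads, num_classes,
                       lr=0.5, iterations=200):
    """Illustrative soft-label recovery via gradient matching.

    `observed_grads` is the per-parameter gradient list shared by the victim;
    all names here are assumptions, not the paper's API.
    """
    # Optimize unconstrained logits; softmax keeps the candidate a distribution.
    label_logits = torch.zeros(num_classes, requires_grad=True)
    optimizer = torch.optim.LBFGS([label_logits], lr=lr, max_iter=iterations)
    params = [p for p in model.parameters() if p.requires_grad]

    def closure():
        optimizer.zero_grad()
        soft_label = F.softmax(label_logits, dim=0)
        log_probs = F.log_softmax(model(x_dummy), dim=1).squeeze(0)
        loss = -(soft_label * log_probs).sum()  # soft-label cross-entropy
        sim_grads = torch.autograd.grad(loss, params, create_graph=True)
        # L2 distance between simulated and observed gradients.
        match = sum(((g - og) ** 2).sum()
                    for g, og in zip(sim_grads, observed_grads))
        match.backward()
        return match

    optimizer.step(closure)
    return F.softmax(label_logits.detach(), dim=0)
```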
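
The datasets and network named in the Open Datasets row can be instantiated with standard torchvision calls. The torchvision APIs below are real, but pairing them this way, and using an untrained ResNet-50 to produce the "victim" gradients, is our assumption about the evaluation harness, not something the paper specifies.

```python
import torch
import torch.nn.functional as F
import torchvision
from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize(224),  # ResNet-50 expects ImageNet-sized inputs
    transforms.ToTensor(),
])
cifar100 = torchvision.datasets.CIFAR100(
    root="./data", train=False, download=True, transform=transform)
model = torchvision.models.resnet50(num_classes=100)  # 100-way head for CIFAR-100

# Simulate the gradients a victim would share for a single sample.
x, y = cifar100[0]
loss = F.cross_entropy(model(x.unsqueeze(0)), torch.tensor([y]))
observed_grads = torch.autograd.grad(loss, model.parameters())
```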
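
Finally, the reported hyperparameters map naturally onto common optimizer APIs. The L-BFGS arguments below come straight from `torch.optim.LBFGS`; the PSO side assumes scikit-opt, whose `pop` and `max_iter` argument names match the paper's wording (an inference, since the paper does not name its PSO library). The paper's `bound`, `coefficient`, and `initial` settings are L-BFGS-specific terms we only approximate here.

```python
import torch
from sko.PSO import PSO  # scikit-opt; an assumed dependency, not confirmed

NUM_CLASSES = 100

# L-BFGS side: lr=0.5 and iteration=200 as reported in the paper.
label_logits = torch.zeros(NUM_CLASSES, requires_grad=True)
lbfgs = torch.optim.LBFGS([label_logits], lr=0.5, max_iter=200)

def fitness(candidate):
    # Placeholder objective; in the attack this would be the gradient-matching
    # loss evaluated at the candidate label vector.
    return float((torch.as_tensor(candidate) ** 2).sum())

# PSO side: pop=200 and max_iter=30 as reported. The search box reuses the
# paper's L-BFGS "bound=100" as a plausible range, which is an assumption.
pso = PSO(func=fitness, n_dim=NUM_CLASSES, pop=200, max_iter=30,
          lb=[-100] * NUM_CLASSES, ub=[100] * NUM_CLASSES)
pso.run()
print(pso.gbest_x, pso.gbest_y)
```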