Foreseeing Reconstruction Quality of Gradient Inversion: An Optimization Perspective

Authors: Hyeong Gwon Hong, Yooshin Cho, Hanbyel Cho, Jaesung Ahn, Junmo Kim

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our empirical investigation shows that the vulnerability ranking varies with the loss function used. We demonstrate the effectiveness of LAVP on various architectures and datasets, showing its consistent superiority over the gradient norm in capturing sample vulnerabilities.
Researcher Affiliation | Academia | Hyeong Gwon Hong (1), Yooshin Cho (2), Hanbyel Cho (2), Jaesung Ahn (1), Junmo Kim (2). (1) Kim Jaechul Graduate School of AI, KAIST, Seoul, South Korea; (2) School of Electrical Engineering, KAIST, Daejeon, South Korea.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described.
Open Datasets | Yes | We conducted an evaluation by randomly selecting 100 validation images from CIFAR-10 (Krizhevsky 2009), CIFAR-100 (Krizhevsky 2009), Imagenette (Howard 2019), Imagewoof (Howard 2019), and ImageNet (Deng et al. 2009).
Dataset Splits | Yes | We conducted an evaluation by randomly selecting 100 validation images from CIFAR-10 (Krizhevsky 2009), CIFAR-100 (Krizhevsky 2009), Imagenette (Howard 2019), Imagewoof (Howard 2019), and ImageNet (Deng et al. 2009). ... We trained these models on a training set for 300 epochs... (see the sampling sketch after the table)
Hardware Specification | No | The paper does not provide specific hardware details used for running its experiments.
Software Dependencies | No | The paper mentions the 'Autograd package in PyTorch' and the 'Adam optimizer (Kingma and Ba 2015)' but does not provide specific version numbers for these software components.
Experiment Setup | Yes | We trained these models on a training set for 300 epochs, using the SGD optimizer with an initial learning rate of 0.1 and a learning rate decay of 0.1 at the 150th and 225th epochs. ... We use Adam optimizer (Kingma and Ba 2015) for gradient inversion. (see the configuration sketch after the table)
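
For the dataset rows above, here is a minimal sketch of the selection step the quoted text describes: randomly drawing 100 validation images from CIFAR-10. The use of torchvision, the choice of the test split as the stand-in validation set, the random seed, and the transform are assumptions, not details stated in the paper.

```python
# Minimal sketch: randomly sample 100 validation images for evaluation.
# torchvision usage, seed, and transform are assumptions, not paper details.
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

transform = transforms.ToTensor()
# CIFAR-10 ships only train/test splits; the test set is used here as the "validation" pool.
val_set = datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)

generator = torch.Generator().manual_seed(0)           # assumed seed for reproducibility
indices = torch.randperm(len(val_set), generator=generator)[:100]
eval_subset = Subset(val_set, indices.tolist())        # the 100 images fed to gradient inversion
```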
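
For the experiment-setup row, the following sketch wires up the quoted configuration in PyTorch: SGD with an initial learning rate of 0.1, decayed by 0.1 at epochs 150 and 225 over 300 epochs, plus an Adam optimizer on the reconstruction variable for gradient inversion. The architecture, momentum, weight decay, and inversion learning rate are assumptions not given in the quoted text.

```python
# Minimal sketch of the reported training schedule and inversion optimizer.
# Only the values marked "stated" come from the quoted setup; everything else is assumed.
import torch
from torch import nn, optim
from torchvision.models import resnet18

model = resnet18(num_classes=10)                        # assumed architecture
criterion = nn.CrossEntropyLoss()

# Stated: SGD, initial LR 0.1, decay by 0.1 at epochs 150 and 225, 300 epochs total.
optimizer = optim.SGD(model.parameters(), lr=0.1,
                      momentum=0.9, weight_decay=5e-4)  # momentum / weight decay assumed
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[150, 225], gamma=0.1)

for epoch in range(300):
    # ... one pass over the training loader would go here ...
    scheduler.step()

# Stated: Adam (Kingma and Ba 2015) is used for the gradient inversion itself.
dummy_input = torch.randn(1, 3, 32, 32, requires_grad=True)  # reconstruction variable
inversion_opt = optim.Adam([dummy_input], lr=0.1)             # inversion LR is an assumption
```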