Gradient Inversion with Generative Image Prior

Authors: Jinwoo Jeon, Jaechang Kim, Kangwook Lee, Sewoong Oh, Jungseul Ok

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section 5 (Experiments), Setup: "Unless stated otherwise, we consider the image classification task on the validation set of the ImageNet [22] dataset scaled down to 64×64 pixels (for computational tractability) and use a randomly initialized ResNet18 [10] for training. For deep generative models in GIAS, we use StyleGAN2 [13] trained on ImageNet. We use a batch size of B = 4 as default and use the negative cosine to measure the gradient dissimilarity d(·, ·). We present detailed setup in Appendix H. Our experiment code is available at https://github.com/ml-postech/gradient-inversion-generative-image-prior."
Researcher Affiliation | Academia | Jinwoo Jeon1, Jaechang Kim2, Kangwook Lee3, Sewoong Oh4, Jungseul Ok1,2 — 1 Department of Computer Science & Engineering, Pohang University of Science and Technology; 2 Graduate School of Artificial Intelligence, Pohang University of Science and Technology; 3 Department of Electrical and Computer Engineering, University of Wisconsin-Madison; 4 Paul G. Allen School of Computer Science & Engineering, University of Washington
Pseudocode | Yes | "To fully utilize such a pretrained generative model, we propose gradient inversion in alternative spaces (GIAS), of which pseudocode is presented in Appendix A, which performs latent space search over z and then parameter space search over w." (A hedged sketch of this two-stage search is given below.)
Open Source Code | Yes | "Our experiment code is available at https://github.com/ml-postech/gradient-inversion-generative-image-prior."
Open Datasets | Yes | "Unless stated otherwise, we consider the image classification task on the validation set of the ImageNet [22] dataset scaled down to 64×64 pixels (for computational tractability) and use a randomly initialized ResNet18 [10] for training. For deep generative models in GIAS, we use StyleGAN2 [13] trained on ImageNet. For computational tractability, we use DCGAN and images from FFHQ [12] resized to 32×32."
Dataset Splits | No | "Unless stated otherwise, we consider the image classification task on the validation set of the ImageNet [22] dataset scaled down to 64×64 pixels (for computational tractability) and use a randomly initialized ResNet18 [10] for training." The paper uses the standard ImageNet validation set but does not explicitly provide split percentages or absolute sample counts within the text.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions models such as ResNet18, StyleGAN2, and DCGAN and optimizers such as Adam, but does not specify software library dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | "We use a batch size of B = 4 as default and use the negative cosine to measure the gradient dissimilarity d(·, ·). We present detailed setup in Appendix H." (See the setup and gradient-dissimilarity sketches below.)
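
To make the quoted setup concrete, below is a minimal sketch of the gradient an attacker is assumed to observe: a randomly initialized ResNet18 evaluated on a batch of B = 4 images at 64×64 resolution. The random tensors standing in for ImageNet validation images, and the use of torchvision's ResNet18, are illustrative assumptions rather than the paper's exact pipeline.

```python
import torch
import torchvision

# Hypothetical victim step: a randomly initialized ResNet18 on a batch of
# B = 4 images at 64x64 resolution. The gradient of the training loss on
# this batch is what the gradient-inversion attacker observes.
model = torchvision.models.resnet18(num_classes=1000)   # random initialization
images = torch.randn(4, 3, 64, 64)                       # stand-in for 64x64 ImageNet images
labels = torch.randint(0, 1000, (4,))

loss = torch.nn.functional.cross_entropy(model(images), labels)
target_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]
```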
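The paper measures how well a candidate batch reproduces the observed gradient with a negative-cosine dissimilarity d(·, ·). The sketch below computes it over the full concatenated gradient; whether the paper takes the cosine globally or per layer is detailed in its appendix, so treat this as one reasonable reading rather than the reference implementation.

```python
import torch
import torch.nn.functional as F

def gradient_dissimilarity(grads_a, grads_b):
    """Negative cosine similarity between two lists of per-parameter gradients.

    Flattens and concatenates all gradient tensors, then returns -cos(g_a, g_b),
    which is minimized when the candidate batch reproduces the direction of the
    observed gradient.
    """
    flat_a = torch.cat([g.reshape(-1) for g in grads_a])
    flat_b = torch.cat([g.reshape(-1) for g in grads_b])
    return -F.cosine_similarity(flat_a, flat_b, dim=0)
```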
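Finally, a hedged sketch of the two-stage GIAS search quoted above: first optimize the GAN latent code z with the generator frozen, then fine-tune a copy of the generator parameters w around the recovered latent. It reuses `target_grads` and `gradient_dissimilarity` from the sketches above, assumes the labels are already known or recovered (as in prior gradient-inversion work), and uses a single shared generator copy where the paper uses per-image copies; `generator.latent_dim`, the learning rates, and the step counts are illustrative assumptions, not the paper's values.

```python
import copy
import torch

def gias_reconstruct(target_grads, model, generator, labels,
                     z_steps=1000, w_steps=1000, lr_z=0.1, lr_w=1e-3):
    """Two-stage GIAS sketch: latent-space search over z, then parameter-space
    search over a copy of the generator weights w."""
    loss_fn = torch.nn.CrossEntropyLoss()
    B = labels.shape[0]
    # `latent_dim` is an assumed attribute of the pretrained generator wrapper.
    z = torch.randn(B, generator.latent_dim, requires_grad=True)

    def grad_match_loss(images):
        # Recompute the training gradient for the candidate batch and compare
        # it to the observed gradient with the negative-cosine dissimilarity.
        task_loss = loss_fn(model(images), labels)
        grads = torch.autograd.grad(task_loss, model.parameters(), create_graph=True)
        return gradient_dissimilarity(grads, target_grads)

    # Stage 1: latent-space search over z with the generator frozen.
    opt_z = torch.optim.Adam([z], lr=lr_z)
    for _ in range(z_steps):
        opt_z.zero_grad()
        grad_match_loss(generator(z)).backward()
        opt_z.step()

    # Stage 2: parameter-space search -- fine-tune a copy of the generator
    # weights around the recovered latent (per-image copies in the paper;
    # a single shared copy here for brevity).
    gen_w = copy.deepcopy(generator)
    opt_w = torch.optim.Adam(gen_w.parameters(), lr=lr_w)
    z = z.detach()
    for _ in range(w_steps):
        opt_w.zero_grad()
        grad_match_loss(gen_w(z)).backward()
        opt_w.step()

    return gen_w(z).detach()
```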