Understanding Noise Injection in GANs

Authors: Ruili Feng, Deli Zhao, Zheng-Jun Zha

ICML 2021

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | Experiments on image generation and GAN inversion validate our theory in practice. We conduct experiments on benchmark datasets including FFHQ faces, LSUN objects, and CIFAR-10. The GAN models we use are the baseline DCGAN (Radford et al., 2015) (originally without noise injection) and the state-of-the-art StyleGAN2 (Karras et al., 2019b) (originally with Euclidean noise injection). For StyleGAN2, we use images of resolution 128×128 and config-e from the original paper, because config-e achieves the best performance with respect to the Perceptual Path Length (PPL) score. In addition, we apply the experimental settings from StyleGAN2.
Researcher Affiliation | Collaboration | 1University of Science and Technology of China, Hefei, China; 2Alibaba Group. Correspondence to: Ruili Feng <ruilifengustc@gmail.com>, Deli Zhao <zhaodeli@gmail.com>, Zheng-Jun Zha <zhazj@ustc.edu.cn>.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology.
Open Datasets | Yes | We conduct experiments on benchmark datasets including FFHQ faces, LSUN objects, and CIFAR-10.
Dataset Splits | No | The paper mentions using StyleGAN2 config-e, which is known to achieve the best PPL score, but it does not explicitly state specific train/validation/test splits (e.g., percentages or sample counts) for reproducibility.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, memory) used for its experiments. It only mentions the "extra GPU memory consumption of the path length regularizer in training".
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers needed to replicate the experiments (e.g., Python 3.x, PyTorch 1.x, CUDA 11.x).
Experiment Setup | No | The paper states "we apply the experimental settings from StyleGAN2" but does not explicitly list specific hyperparameter values (e.g., learning rate, batch size, number of epochs) in the main text for reproducibility.
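For context on the technique the paper analyzes: "Euclidean noise injection" in StyleGAN2 adds a single spatial Gaussian noise map to each feature map, scaled by a learned per-channel factor. The sketch below is a minimal NumPy illustration of that idea, not the authors' code; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def inject_noise(features, scale, rng=None):
    """Illustrative per-layer noise injection (StyleGAN2-style).

    features: feature map of shape (channels, height, width)
    scale:    learned per-channel scaling factor, shape (channels,)
    """
    rng = np.random.default_rng() if rng is None else rng
    # One spatial noise map is sampled and broadcast across all channels,
    # each channel weighting it by its own learned scale.
    noise = rng.standard_normal((1, features.shape[1], features.shape[2]))
    return features + scale[:, None, None] * noise

# With zero scale, injection is a no-op on the feature map.
feats = np.ones((3, 4, 4))
assert np.allclose(inject_noise(feats, np.zeros(3)), feats)
```

In a real generator the scale parameters are trained jointly with the convolutional weights; the theoretical question the paper addresses is why adding this noise in Euclidean space helps (or fails to help) the generator model the data manifold.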