SSD-GAN: Measuring the Realness in the Spatial and Spectral Domains

Authors: Yuanqi Chen, Ge Li, Cece Jin, Shan Liu, Thomas Li

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In the experiment, the effectiveness of the proposed method (SSD-GAN) is validated on various network architectures, objective functions, and datasets.
Researcher Affiliation | Collaboration | Yuanqi Chen (1,2), Ge Li (1)*, Cece Jin (1,2), Shan Liu (4), Thomas Li (1,3). 1 School of Electronic and Computer Engineering, Peking University; 2 Peng Cheng Laboratory; 3 Advanced Institute of Information Technology, Peking University; 4 Tencent America. cyq373@pku.edu.cn, geli@ece.pku.edu.cn, fordacre@pku.edu.cn, shanl@tencent.com, tli@aiit.org.cn
Pseudocode | No | The paper does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | Code is available at https://github.com/cyq373/SSD-GAN.
Open Datasets | Yes | We evaluate the effectiveness of our method on the FFHQ (Karras, Laine, and Aila 2019) dataset, which consists of 70,000 high-quality images at 1024×1024 resolution. We evaluate the proposed method on a range of datasets including CIFAR100 (Krizhevsky and Hinton 2009), STL10 (Coates, Ng, and Lee 2011), and LSUN-bedroom (Yu et al. 2015).
Dataset Splits | No | The paper describes the datasets used and evaluation metrics (FID, PPL) but does not explicitly specify the training, validation, and test dataset splits with percentages or counts for reproducibility.
Hardware Specification | Yes | The experiments are conducted on 4 Tesla V100 GPUs. All models are trained on a single Tesla V100 GPU.
Software Dependencies | No | The paper names specific optimizers (Adam) and frameworks (StyleGAN, SNGAN) with citations, but does not provide version numbers for software dependencies such as Python, PyTorch, or CUDA.
Experiment Setup | Yes | The training process follows a progressive growing manner (Karras et al. 2018), starting from 8×8 and going up to 1024×1024. We apply the non-saturating loss (Goodfellow et al. 2014) as our adversarial loss with R1 regularization (Mescheder, Geiger, and Nowozin 2018). We train all our models with the Adam optimizer (Kingma and Ba 2015), setting (β1, β2) = (0, 0.99). The total training length is 25M images. The hyperparameter λ is set to 0.5. The learning rate is set to 0.0002, and the minibatch size is 64.
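The quoted setup fixes the loss and optimizer choices but not the code. Below is a minimal PyTorch sketch of that configuration, assuming placeholder networks, shapes, and an R1 weight of our own choosing; the function names and stand-in models are illustrative and not the authors' implementation (see the linked repository for the real one).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in networks so the sketch runs; the paper's actual backbones are
# StyleGAN and SNGAN generators/discriminators.
generator = nn.Sequential(nn.Linear(128, 3 * 8 * 8), nn.Tanh())
discriminator = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 1))

def non_saturating_losses(d_real_logits, d_fake_logits):
    # Non-saturating adversarial loss (Goodfellow et al. 2014).
    d_loss = F.softplus(-d_real_logits).mean() + F.softplus(d_fake_logits).mean()
    g_loss = F.softplus(-d_fake_logits).mean()
    return d_loss, g_loss

def r1_penalty(d_real_logits, real_images):
    # R1 regularization (Mescheder, Geiger, and Nowozin 2018): squared gradient
    # norm of the discriminator output with respect to real samples.
    grads = torch.autograd.grad(
        outputs=d_real_logits.sum(), inputs=real_images, create_graph=True
    )[0]
    return grads.pow(2).flatten(1).sum(1).mean()

def ssd_realness(spatial_score, spectral_score, lam=0.5):
    # Weighted combination of spatial and spectral realness scores, with
    # lambda = 0.5 as quoted above; the exact form used by the authors may differ.
    return lam * spatial_score + (1.0 - lam) * spectral_score

# Optimizer settings quoted in the setup: Adam, (beta1, beta2) = (0, 0.99),
# learning rate 0.0002, minibatch size 64.
batch_size = 64
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.0, 0.99))
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.0, 0.99))

# One illustrative discriminator step on random data (shapes are placeholders).
real = torch.randn(batch_size, 3 * 8 * 8, requires_grad=True)
fake = generator(torch.randn(batch_size, 128)).detach()
real_logits = discriminator(real)
fake_logits = discriminator(fake)
d_loss, _ = non_saturating_losses(real_logits, fake_logits)
d_loss = d_loss + 10.0 * r1_penalty(real_logits, real)  # the R1 weight here is an assumption
d_opt.zero_grad()
d_loss.backward()
d_opt.step()
```

The ssd_realness helper only illustrates the λ-weighted combination of a spatial and a spectral realness score suggested by the paper's title and the quoted λ = 0.5; how the spectral score is actually computed is defined in the authors' repository, not here.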