Real or Not Real, that is the Question
Authors: Yuanbo Xiangli*, Yubin Deng*, Bo Dai*, Chen Change Loy, Dahua Lin
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we study RealnessGAN from multiple aspects. Specifically, 1) we first focus on RealnessGAN's mode coverage ability on a synthetic dataset. 2) Then we evaluate RealnessGAN on the CIFAR10 (32×32) (Krizhevsky, 2009) and CelebA (256×256) (Liu et al., 2015) datasets qualitatively and quantitatively. 3) Finally we explore RealnessGAN on the high-resolution image generation task, which is known to be challenging for unconditional non-progressive architectures. Surprisingly, on the FFHQ dataset (Karras et al., 2019), RealnessGAN managed to generate images at the 1024×1024 resolution based on a non-progressive architecture. We compare RealnessGAN to other popular objectives in generative adversarial learning, including the standard GAN (Std-GAN) (Radford et al., 2015), WGAN-GP (Arjovsky et al., 2017), HingeGAN (Zhao et al., 2017) and LSGAN (Mao et al., 2017). |
| Researcher Affiliation | Academia | Yuanbo Xiangli^1, Yubin Deng^1, Bo Dai^1, Chen Change Loy^2, Dahua Lin^1; ^1 The Chinese University of Hong Kong, ^2 Nanyang Technological University; {xy019,dy015,bdai,dhlin}@ie.cuhk.edu.hk, ccloy@ntu.edu.sg |
| Pseudocode | No | The paper contains mathematical equations and derivations, but no structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code will be available at https://github.com/kam1107/RealnessGAN |
| Open Datasets | Yes | Then we evaluate RealnessGAN on the CIFAR10 (32×32) (Krizhevsky, 2009) and CelebA (256×256) (Liu et al., 2015) datasets qualitatively and quantitatively. 3) Finally we explore RealnessGAN on the high-resolution image generation task... on the FFHQ dataset (Karras et al., 2019) |
| Dataset Splits | No | The paper mentions using CIFAR10, CelebA, and FFHQ datasets but does not explicitly provide details about training, validation, and test splits, or a cross-validation setup within the text. |
| Hardware Specification | No | The paper does not explicitly mention specific hardware components (e.g., GPU models, CPU types) used for running the experiments. It only states training details for the models. |
| Software Dependencies | No | The paper mentions algorithms and techniques like 'Adam', 'Batch normalization', and 'spectral normalization' but does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or frameworks with their specific versions). |
| Experiment Setup | Yes (see the sketches after this table) | For experiments on the synthetic dataset, we use a generator with four fully-connected hidden layers, each of which has 400 units, followed by batch normalization and ReLU activation. The discriminator has three fully-connected hidden layers, with 200 units each layer. Linear Maxout with 5 maxout pieces is adopted and no batch normalization is used in the discriminator. The latent input z is a 32-dimensional vector sampled from a Gaussian distribution N(0, I). All models are trained using Adam (Kingma & Ba, 2015) for 500 iterations. On real-world datasets, the network architecture is identical to the DCGAN architecture in Radford et al. (2015), with the prior pz(z) being a 128-dimensional Gaussian distribution N(0, I). Models are trained using Adam (Kingma & Ba, 2015) for 520k iterations. To guarantee training stability, we adopt settings that are proved to be effective for baseline methods. Batch normalization (Ioffe & Szegedy, 2015) is used in G, and spectral normalization (Miyato et al., 2018) is used in D. For WGAN-GP we use lr = 1e-4, β1 = 0.5, β2 = 0.9, updating D 5 times per G's update (Gulrajani et al., 2017); for the remaining models, we use lr = 2e-4, β1 = 0.5, β2 = 0.999, updating D once per G's update (Radford et al., 2015). |
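
The synthetic-data setup quoted in the "Experiment Setup" row maps directly onto a small pair of networks. Below is a minimal PyTorch sketch of that configuration, included only to make the quoted layer sizes concrete; the class names, the 2-D sample dimension, the number of realness outcomes, the optimizer hyperparameters for this toy setting, and the batch size are assumptions rather than details stated in the row.

```python
# Minimal sketch of the synthetic-data networks described above.
# Layer sizes follow the quoted "Experiment Setup" row; names are NOT from the authors' code.
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Four fully-connected hidden layers of 400 units, each with BatchNorm + ReLU (as quoted)."""

    def __init__(self, z_dim: int = 32, hidden: int = 400, out_dim: int = 2):
        super().__init__()
        layers, in_dim = [], z_dim
        for _ in range(4):
            layers += [nn.Linear(in_dim, hidden), nn.BatchNorm1d(hidden), nn.ReLU()]
            in_dim = hidden
        layers += [nn.Linear(in_dim, out_dim)]  # 2-D output for a toy 2-D dataset (assumed)
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)


class MaxoutDiscriminator(nn.Module):
    """Three fully-connected hidden layers of 200 units, linear Maxout with 5 pieces, no BatchNorm (as quoted)."""

    def __init__(self, in_dim: int = 2, hidden: int = 200, pieces: int = 5, num_outcomes: int = 51):
        super().__init__()
        self.pieces = pieces
        dims = [in_dim, hidden, hidden, hidden]
        self.layers = nn.ModuleList(nn.Linear(dims[i], dims[i + 1] * pieces) for i in range(3))
        # RealnessGAN's D outputs a distribution over "realness" outcomes;
        # the number of outcomes (51 here) is an assumption, not quoted in this section.
        self.head = nn.Linear(hidden, num_outcomes)

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
            x = x.view(x.size(0), -1, self.pieces).max(dim=-1).values  # maxout over the 5 pieces
        return self.head(x)  # logits over realness outcomes


# Latent prior as quoted: z ~ N(0, I) in 32 dimensions; both nets trained with Adam.
G, D = Generator(), MaxoutDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))  # lr/betas assumed
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
z = torch.randn(64, 32)  # batch of 64 latent vectors (batch size assumed)
fake_samples = G(z)
```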
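
For the real-world datasets, the same row quotes a DCGAN backbone with a 128-dimensional Gaussian prior, batch normalization in G, spectral normalization in D, and two Adam regimes. The sketch below records only those optimizer regimes; the dictionary layout and the helper name `make_optimizers` are hypothetical.

```python
# Hedged sketch of the per-objective optimizer settings quoted above.
# Only the numeric values come from the quoted text; the structure is mine.
import torch

TRAIN_CFG = {
    # WGAN-GP: lr = 1e-4, betas = (0.5, 0.9), 5 D updates per G update
    "wgan-gp": {"lr": 1e-4, "betas": (0.5, 0.9), "d_steps_per_g": 5},
    # Std-GAN, HingeGAN, LSGAN, RealnessGAN: lr = 2e-4, betas = (0.5, 0.999), 1 D update per G update
    "default": {"lr": 2e-4, "betas": (0.5, 0.999), "d_steps_per_g": 1},
}


def make_optimizers(G: torch.nn.Module, D: torch.nn.Module, objective: str):
    """Build Adam optimizers for G and D using the regime quoted for `objective`."""
    cfg = TRAIN_CFG.get(objective, TRAIN_CFG["default"])
    opt_g = torch.optim.Adam(G.parameters(), lr=cfg["lr"], betas=cfg["betas"])
    opt_d = torch.optim.Adam(D.parameters(), lr=cfg["lr"], betas=cfg["betas"])
    return opt_g, opt_d, cfg["d_steps_per_g"]
```

Under this reading of the quoted setup, WGAN-GP is the only objective trained with five discriminator steps per generator step; the remaining objectives, including RealnessGAN, use one.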