Smooth Deep Image Generator from Noises
Authors: Tianyu Guo, Chang Xu, Boxin Shi, Chao Xu, Dacheng Tao
AAAI 2019, pp. 3731-3738 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on real-world image datasets demonstrate the necessity of studying a smooth generator and the effectiveness of the proposed algorithm. In this section, we conduct comprehensive experiments on a toy dataset and three real-world image datasets, MNIST (LeCun et al. 1998), CIFAR-10 (Krizhevsky and Hinton 2009), and CelebA (Liu et al. 2015). |
| Researcher Affiliation | Academia | (1) Key Laboratory of Machine Perception (MOE), School of EECS, Peking University, China; (2) UBTECH Sydney AI Centre, School of Computer Science, FEIT, University of Sydney, Australia; (3) Cooperative Medianet Innovation Center, Peking University, China; (4) National Engineering Laboratory for Video Technology, School of EECS, Peking University, China |
| Pseudocode | Yes | Algorithm 1 Smooth GAN |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | In this section, we conduct comprehensive experiments on a toy dataset and three real-world image datasets, MNIST (LeCun et al. 1998), CIFAR-10 (Krizhevsky and Hinton 2009), and CelebA (Liu et al. 2015). |
| Dataset Splits | Yes | The whole MNIST dataset of 70,000 images is split into 60,000 training and 10,000 test images; in the MNIST experiments, the 10,000 test images serve as the valid set when calculating FID. The 60,000 images in the CIFAR-10 dataset are split into 50,000 training and 10,000 testing images, and FID is calculated with 3,000 images randomly selected from the test set. For the remaining dataset, 3,000 examples are randomly selected as the test set and the remaining samples form the training set. (These splits are sketched in code after the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers). |
| Experiment Setup | Yes | The common points are: i) no nonlinear activation is attached to the end of the discriminators; ii) the minibatch size used in training is 64 for both the generator and the discriminators; iii) the Adam optimizer is used with learning rate 0.0001 and momentum 0.5; iv) the noise dimension of the generator is 128; v) weights are initialized from a Gaussian N(0, 0.01). The reported hyperparameters include the number of critic iterations per generator iteration n_critic, the batch size m, the Adam hyperparameters α, β1, and β2, and the loss-balancing coefficients λ and γ. (This configuration is sketched in code after the table.) |
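
The splits quoted in the Dataset Splits row can be read as a concrete data-loading setup. The sketch below is a minimal illustration assuming torchvision's dataset wrappers; the data path, the random seed, and the library choice are assumptions, not details released by the authors.

```python
# Sketch of the splits described in the Dataset Splits row.
# Paths, the random seed, and the use of torchvision are assumptions.
import numpy as np
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

# MNIST: 60,000 training / 10,000 test images; the 10,000 test images
# serve as the valid set when computing FID.
mnist_train = datasets.MNIST("data", train=True, download=True, transform=to_tensor)
mnist_valid = datasets.MNIST("data", train=False, download=True, transform=to_tensor)

# CIFAR-10: 50,000 training / 10,000 test images; FID is computed against
# 3,000 images drawn at random from the test set.
cifar_train = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)
cifar_test = datasets.CIFAR10("data", train=False, download=True, transform=to_tensor)
rng = np.random.default_rng(0)  # seed is an assumption
fid_subset = rng.choice(len(cifar_test), size=3000, replace=False)
```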
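
The Experiment Setup row can likewise be turned into a concrete training configuration. The sketch below assumes PyTorch, uses throwaway fully connected networks as placeholders for the paper's generator and discriminator, and treats 0.01 as the standard deviation of the weight initializer and the PyTorch default as β2, since the excerpt does not state either.

```python
# Sketch of the shared configuration in the Experiment Setup row:
# batch size 64, Adam with lr 0.0001 and momentum (beta1) 0.5,
# 128-dimensional noise, Gaussian weight initialization.
import torch
import torch.nn as nn

NOISE_DIM = 128   # point (iv): noise dimension of the generator
BATCH_SIZE = 64   # point (ii): minibatch size for both networks

def init_weights(module):
    # Point (v): weights drawn from a Gaussian; 0.01 is treated here
    # as the standard deviation, which is an assumption.
    if isinstance(module, (nn.Linear, nn.Conv2d, nn.ConvTranspose2d)):
        nn.init.normal_(module.weight, mean=0.0, std=0.01)

# Placeholder architectures, not the ones used in the paper.
generator = nn.Sequential(nn.Linear(NOISE_DIM, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(784, 1))  # point (i): no final nonlinearity
generator.apply(init_weights)
discriminator.apply(init_weights)

# Point (iii): Adam with learning rate 0.0001 and momentum 0.5;
# beta2 is left at the PyTorch default since the excerpt does not give it.
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4, betas=(0.5, 0.999))

z = torch.randn(BATCH_SIZE, NOISE_DIM)  # one minibatch of generator noise
fake = generator(z)
scores = discriminator(fake)
```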