Improving Generalization and Stability of Generative Adversarial Networks
Authors: Hoang Thanh-Tung, Truyen Tran, Svetha Venkatesh
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on synthetic and large scale datasets verify our theoretical analysis. |
| Researcher Affiliation | Academia | Hoang Thanh-Tung (hoangtha@deakin.edu.au), Truyen Tran (truyen.tran@deakin.edu.au), Svetha Venkatesh (svetha.venkatesh@deakin.edu.au) |
| Pseudocode | Yes | Algorithm 1: Path finding algorithm |
| Open Source Code | Yes | The code is made available at https://github.com/htt210/GeneralizationAndStabilityInGANs. |
| Open Datasets | Yes | "MNIST DATASET" and "When trained on ImageNet (Deng et al., 2009)" |
| Dataset Splits | No | No explicit training/test/validation split percentages or counts are provided. The paper generally refers to 'training dataset' and 'held-out dataset' without specific split information. |
| Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, memory amounts) are mentioned for running the experiments. |
| Software Dependencies | No | The paper mentions 'Pytorch (Paszke et al., 2017)' and 'Adam optimizer (Kingma & Ba, 2014)' but does not provide specific version numbers for Pytorch or other ancillary software components needed for reproduction. |
| Experiment Setup | Yes | Hyperparameters for the synthetic and MNIST experiments: learning rate 0.003 for both G and D; with TTUR, 0.003 for G and 0.009 for D. The full ImageNet configuration is reproduced below the table. |
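
The ImageNet configuration quoted in the Experiment Setup row is a flattened YAML file. It is reconstructed here with indentation for readability; the field names and values are exactly as listed in the paper, while the nesting is inferred from the order in which the keys appear.

```yaml
generator:
  name: resnet2
  kwargs:
    nfilter: 32
    nfilter_max: 512
    embed_size: 128
discriminator:
  name: resnet2
  kwargs:
    nfilter: 32
    nfilter_max: 512
    embed_size: 128
z_dist:
  type: gauss
  dim: 128
training:
  out_dir: ../output/imagenet_wgangp5_TTUR
  gan_type: wgan
  reg_type: wgangp
  reg_param: 10.
  batch_size: 64
  nworkers: 32
  take_model_average: true
  model_average_beta: 0.999
  model_average_reinit: false
  monitoring: tensorboard
  sample_every: 1000
  sample_nlabels: 20
  inception_every: 10000
  save_every: 900
  backup_every: 100000
  restart_every: -1
  optimizer: adam
  lr_g: 0.0001
  lr_d: 0.0003
  lr_anneal: 1.
  lr_anneal_every: 150000
  d_steps: 5
  equalize_lr: false
```

As a minimal sketch of how the optimizer settings above would be wired up in PyTorch (this is not the authors' code; `G` and `D` are placeholder modules standing in for the ResNet models named in the config):

```python
import torch.nn as nn
import torch.optim as optim

# Placeholder networks; the config specifies resnet2 generator/discriminator.
G = nn.Linear(128, 784)
D = nn.Linear(784, 1)

# TTUR learning rates from the config above.
opt_g = optim.Adam(G.parameters(), lr=1e-4)  # lr_g: 0.0001
opt_d = optim.Adam(D.parameters(), lr=3e-4)  # lr_d: 0.0003
# d_steps: 5 -> five discriminator updates per generator update.
```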