Understanding and Stabilizing GANs’ Training Dynamics Using Control Theory
Authors: Kun Xu, Chongxuan Li, Jun Zhu, Bo Zhang
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results show that our method can effectively stabilize the training and obtain state-of-the-art performance on data generation tasks. |
| Researcher Affiliation | Collaboration | 1Dept. of Comp. Sci. & Tech., Institute for AI, BNRist Center, Tsinghua-Bosch ML Center, THBI Lab, Tsinghua University, Beijing, China. |
| Pseudocode | Yes | Algorithm 1: Closed-loop Control GAN |
| Open Source Code | No | Our code is provided HERE. (Note: 'HERE' is a placeholder and not an actual link to a repository) |
| Open Datasets | Yes | We now empirically verify our method on the widely-adopted CIFAR10 (Krizhevsky et al., 2009) and CelebA (Liu et al., 2015) datasets. |
| Dataset Splits | No | The paper mentions using the CIFAR10 and CelebA datasets but does not explicitly describe a validation split (e.g., percentages, counts, or references to predefined validation sets used in the experiments). |
| Hardware Specification | Yes | For instance, our method can conduct approximately 8 iterations per second of training on CelebA, whereas Reg-GAN can only conduct 4 iterations per second on a GeForce 1080 Ti. |
| Software Dependencies | No | The paper mentions architectures like ResNet and ReLU activation, but it does not provide specific version numbers for any software dependencies (e.g., Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | The batch size is 64, and the buffer size Nb is set to be 100 times the batch size for all settings. We manually select the coefficient λ among {1, 2, 5, 10, 15, 20} in Reg-GAN's setting and among {0.05, 0.1, 0.2, 0.5} in SN-GAN's setting. |
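The reported setup can be summarized in a short config sketch. This is a hedged illustration, not the authors' code (which is not publicly available); all variable and function names here are hypothetical.

```python
# Hedged sketch of the experiment setup quoted above.
# Names are illustrative; the authors' actual implementation is unavailable.
batch_size = 64
buffer_size = 100 * batch_size  # N_b is 100x the batch size for all settings

# The coefficient lambda is manually selected from a per-setting grid:
lambda_grid = {
    "Reg-GAN": [1, 2, 5, 10, 15, 20],
    "SN-GAN": [0.05, 0.1, 0.2, 0.5],
}

def select_lambda(setting, score_fn):
    """Pick the lambda value that maximizes score_fn for a given setting
    (a plain grid search, standing in for the paper's manual selection)."""
    return max(lambda_grid[setting], key=score_fn)
```

A grid search like `select_lambda` mirrors the manual selection described in the paper; in practice the score would come from a validation metric such as FID.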