Improving GAN Training via Binarized Representation Entropy (BRE) Regularization
Authors: Yanshuai Cao, Gavin Weiguang Ding, Kry Yik-Chau Lui, Ruitong Huang
ICLR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on both synthetic data and real datasets demonstrate improvements in stability and convergence speed of the GAN training, as well as higher sample quality. |
| Researcher Affiliation | Industry | Yanshuai Cao, Gavin Weiguang Ding, Kry Yik-Chau Lui, Ruitong Huang (Borealis AI, Canada) |
| Pseudocode | No | No structured pseudocode or algorithm blocks (e.g., clearly labeled algorithm sections or code-like formatted procedures) were found. |
| Open Source Code | No | The paper refers to using existing codebases (e.g., 'We used the same code and hyperparameters from Salimans et al. (2016)' with a link to `https://github.com/openai/improved-gan`), but does not explicitly state that the source code for their proposed method (BRE) is open-source or provide a link to it. |
| Open Datasets | Yes | Using a 2D synthetic dataset and CIFAR10 dataset (Krizhevsky, 2009), we show that our BRE improves unsupervised GAN training... We then demonstrate that BRE regularization improves semi-supervised classification accuracy on CIFAR10 and SVHN dataset (Netzer et al., 2011). |
| Dataset Splits | No | The paper mentions '1000 labeled training examples' for semi-supervised learning on CIFAR10 but does not explicitly provide specific dataset split information (e.g., percentages, sample counts for train/validation/test, or explicit cross-validation setup) needed to reproduce the data partitioning for all experiments. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or detailed computer specifications) used for running experiments were mentioned in the paper. |
| Software Dependencies | No | The paper mentions using specific optimizers like Adam and refers to other GAN architectures (DCGAN, WGAN-GP) and codebases ('the same code and hyperparameters from Salimans et al. (2016)'), but it does not provide specific version numbers for any software dependencies (e.g., libraries, frameworks, or programming language versions) that would enable replication. |
| Experiment Setup | Yes | For Fig. 4, Fig. 12, and Fig. 13, lr=.001 with adam(.0, .999) and BRE regularizer weight 1., applied on h2 and h3; both lr and BRE weight linearly decay over iterations to 1e-6 and 0, respectively. For Fig. 12 and Fig. 13, lr=.002 with adam(.5, .999) and BRE regularizer weight 1., applied on h2. [...] For Fig. 7, the default optimization setting (left column, i.e. (a) and (c)) is lr = 2e-4 with one D update per G update, lr for both D and G annealed to 1e-6 over 90K G updates; the aggressive setting (right column, i.e. (b) and (d)) is lr = 2e-3 with three D updates for every G update, lr for both D and G annealed to 1e-6 over 10K G updates. [...] On CIFAR10, we used a regularizer weight of .01, and on SVHN we used 0.1. BRE is applied on real, fake, and interpolated data. |
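
The schedule quoted in the experiment-setup row (a base learning rate and a BRE regularizer weight that both decay linearly, the former to 1e-6 and the latter to 0) can be sketched as follows. This is a minimal illustration assuming PyTorch; the stand-in discriminator, the total iteration count, and the commented-out BRE penalty are hypothetical placeholders, not the authors' unreleased implementation.

```python
import torch

def linear_decay(start, end, step, total_steps):
    """Linearly anneal a value from `start` to `end` over `total_steps` iterations."""
    frac = min(step / max(total_steps, 1), 1.0)
    return start + (end - start) * frac

# Setting quoted for Fig. 4: lr=.001 with adam(.0, .999), BRE weight 1.,
# with lr decaying to 1e-6 and the BRE weight decaying to 0 over training.
lr0, lr_end = 1e-3, 1e-6
bre_w0, bre_w_end = 1.0, 0.0
total_iters = 50_000                      # hypothetical; not stated in the quoted setup

D = torch.nn.Linear(2, 1)                 # stand-in discriminator (2D synthetic data)
opt_D = torch.optim.Adam(D.parameters(), lr=lr0, betas=(0.0, 0.999))

for step in range(3):                     # a few illustrative iterations
    lr = linear_decay(lr0, lr_end, step, total_iters)
    bre_weight = linear_decay(bre_w0, bre_w_end, step, total_iters)
    for group in opt_D.param_groups:      # apply the annealed learning rate
        group["lr"] = lr
    # The discriminator loss would be: gan_loss + bre_weight * bre_penalty,
    # with the BRE penalty computed on real, fake, and interpolated batches
    # (hidden layers h2/h3). The penalty itself is not reproduced here.
    print(f"step {step}: lr={lr:.6g}, bre_weight={bre_weight:.6g}")
```

Switching to the Fig. 7 settings quoted above amounts to changing the base learning rate (2e-4 default, 2e-3 aggressive), the anneal horizon (90K vs. 10K G updates), and the number of D updates per G update (one vs. three).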