On-line Adaptative Curriculum Learning for GANs
Authors: Thang Doan, João Monteiro, Isabela Albuquerque, Bogdan Mazoure, Audrey Durand, Joelle Pineau, R. Devon Hjelm
AAAI 2019, pp. 3470-3477
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, experimental results show that our approach improves sample quality and diversity over existing baselines by effectively learning a curriculum. These results also support the claim that weaker discriminators have higher entropy, improving mode coverage. [Section 4, Experiments:] In this section, we first give an understanding of how each discriminator provides informative feedback to the generator. We then compare our proposed approach (acGAN) against existing methods from the literature. |
| Researcher Affiliation | Collaboration | 1 Desautels Faculty of Management, McGill University; 2 INRS-EMT, Université du Québec; 3 Department of Mathematics & Statistics, McGill University; 4 School of Computer Science, McGill University; 5 Mila (Quebec Artificial Intelligence Institute); 6 Facebook AI Research; 7 Microsoft Research Montréal |
| Pseudocode | Yes | Algorithm 1: Generic acGAN algorithm (a hedged training-loop sketch follows this table). |
| Open Source Code | No | No explicit statement or link indicating that the source code for the methodology described in this paper is openly available was found. |
| Open Datasets | Yes | We conducted a sanity check on two mode-dropping datasets: synthetic data consisting of a mixture of 25 Gaussians and Stacked-MNIST with 1000 modes. We then tested it on CIFAR-10 and finally show generated samples on the CelebA dataset (see Supplementary Material). We use the Stacked-MNIST dataset (Srivastava et al. 2017b) to measure the mode coverage of our proposed approach. (A sampler sketch for the 25-Gaussian toy data follows this table.) |
| Dataset Splits | Yes | The squared FID was computed every epoch with 1,000 held-out samples at training time. (A minimal Fréchet-distance helper follows this table.) |
| Hardware Specification | No | No specific hardware details (like GPU/CPU models, memory, or processing units) used for running the experiments were provided in the paper. |
| Software Dependencies | No | No specific software dependencies, libraries, or frameworks with their version numbers were mentioned in the paper. |
| Experiment Setup | Yes | All parameters used to obtain the results can be found in the Supplementary Material. We conducted an in-depth study of acGAN's performance on CIFAR-10 by running experiments on 5 independent seeds for 50 epochs each. We pretrained the generator (with 3 dense layers of 400 units with ReLU activation layers except for the last layer) with one discriminator on only 2 of the original 8 modes. The discriminator had 3 dense layers of 400 units (ReLU hidden activation layers). (The stated toy architecture is sketched after this table.) |
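
The Pseudocode row quotes "Algorithm 1: Generic acGAN algorithm", but the paper's listing is not reproduced on this page. As a point of reference only, below is a minimal PyTorch sketch of a multi-discriminator GAN step in which an exponential-weights (Hedge/Exp3-style) rule adapts the mixture over discriminators. The network sizes, learning rates, reward definition, and update rule here are all assumptions, not the paper's exact Algorithm 1.

```python
import torch
import torch.nn as nn

def mlp(sizes):
    """Stack of dense layers with ReLU between them (no activation at the end)."""
    layers = []
    for i in range(len(sizes) - 2):
        layers += [nn.Linear(sizes[i], sizes[i + 1]), nn.ReLU()]
    layers.append(nn.Linear(sizes[-2], sizes[-1]))
    return nn.Sequential(*layers)

Z_DIM, X_DIM, N_DISC, ETA = 64, 2, 3, 0.1    # all assumed hyperparameters
G = mlp([Z_DIM, 128, 128, X_DIM])
Ds = [mlp([X_DIM, 128, 128, 1]) for _ in range(N_DISC)]
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_Ds = [torch.optim.Adam(D.parameters(), lr=1e-4) for D in Ds]
log_w = torch.zeros(N_DISC)   # bandit weights over the discriminator pool
bce = nn.BCEWithLogitsLoss()

def train_step(real):
    n = real.size(0)
    fake = G(torch.randn(n, Z_DIM))
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)
    # 1) Update every discriminator on real vs. generated samples.
    for D, opt in zip(Ds, opt_Ds):
        loss_d = bce(D(real), ones) + bce(D(fake.detach()), zeros)
        opt.zero_grad(); loss_d.backward(); opt.step()
    # 2) The generator trains against the bandit's current mixture.
    probs = torch.softmax(log_w, dim=0)
    g_losses = torch.stack([bce(D(fake), ones) for D in Ds])
    loss_g = (probs * g_losses).sum()
    opt_G.zero_grad(); loss_g.backward(); opt_G.step()
    # 3) Exponential-weights update: discriminators that challenge the
    #    generator more get more weight (this reward choice is an assumption).
    log_w.add_(ETA * g_losses.detach())
```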
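The mixture-of-25-Gaussians sanity check mentioned in the Open Datasets row is conventionally a 5x5 grid of narrow Gaussians. A minimal NumPy sampler, assuming the commonly used grid spacing of 2 and standard deviation 0.05 (the paper's exact parameters may differ):

```python
import numpy as np

def sample_25_gaussians(n, std=0.05, spacing=2.0, seed=None):
    """Draw n points from a 5x5 grid of isotropic Gaussians in 2-D."""
    rng = np.random.default_rng(seed)
    centers = np.array([(i, j) for i in range(-2, 3) for j in range(-2, 3)],
                       dtype=np.float64) * spacing        # 25 mode centers
    idx = rng.integers(0, len(centers), size=n)           # pick a mode per point
    return centers[idx] + rng.normal(scale=std, size=(n, 2))

x = sample_25_gaussians(1000, seed=0)   # (1000, 2) array of toy samples
```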
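The Dataset Splits row reports a squared FID computed every epoch on 1,000 held-out samples. Below is a minimal helper for the standard Fréchet distance between Gaussian fits of two feature sets; the feature extractor the authors used is not specified on this page and is omitted here.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_fake):
    """FID-style distance: ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):   # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2 - 2 * covmean))
```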
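Finally, the toy-task networks quoted in the Experiment Setup row translate directly into PyTorch. This sketch reads "3 dense layers of 400 units, ReLU except for the last layer" as two 400-unit ReLU layers followed by a linear output projection; that reading, the latent size of 64, and the 2-D output are all assumptions.

```python
import torch.nn as nn

Z_DIM, X_DIM = 64, 2   # assumed latent and data dimensions for the toy task

generator = nn.Sequential(
    nn.Linear(Z_DIM, 400), nn.ReLU(),
    nn.Linear(400, 400), nn.ReLU(),
    nn.Linear(400, X_DIM),             # last layer: no activation, as stated
)
discriminator = nn.Sequential(
    nn.Linear(X_DIM, 400), nn.ReLU(),  # ReLU on hidden layers, as stated
    nn.Linear(400, 400), nn.ReLU(),
    nn.Linear(400, 1),                 # logit output
)
```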