Discriminative Forests Improve Generative Diversity for Generative Adversarial Networks

Authors: Junjie Chen, Jiahao Li, Chen Song, Bin Li, Qingcai Chen, Hongchang Gao, Wendy Hui Wang, Zenglin Xu, Xinghua Shi

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conducted evaluation on simulated data and real-world images: STL10 (96×96, 105K images) (Coates, Ng, and Lee 2011) and LSUN-Cat (256×256, 200K images) (Yu et al. 2015). We first investigated the effectiveness of Forest-GAN and visualized the changes in diversity of generated samples on simulated data. The details of Auto GAN are shown in Appendix D. Keeping the same architecture for individual discriminators, we retained the backbone and training hyperparameters of the generator and discriminator in Auto GAN, and only increased the number of discriminators according to the discriminative-forest framework settings. All discriminators have the same architecture but are independently initialized and trained on their own bootstrapped datasets. We investigated the performance of Forest-GANs with K = 1, 2, 5, 10. The discriminator of the original Auto GAN has 1 million parameters; for K discriminators in a discriminator forest, the total parameter count becomes K times as large, since all discriminators share the same architecture. Thus, keeping the discriminator architecture fixed, we compared the results of Para = 1M (K=1), Para = 2M (K=2), Para = 5M (K=5), and Para = 10M (K=10) in Table 1.
Researcher Affiliation | Academia | 1) Harbin Institute of Technology, Shenzhen, Guangdong, China; 2) Temple University, Philadelphia, Pennsylvania, USA; 3) Stevens Institute of Technology, Hoboken, New Jersey, USA.
Pseudocode | Yes | Algorithm 1 shows the training process of Forest-GAN.
Open Source Code | Yes | Implementation details can be found at https://github.com/chen-bioinfo/Forest-GAN.
Open Datasets | Yes | We conducted evaluation on simulated data and real-world images: STL10 (96×96, 105K images) (Coates, Ng, and Lee 2011) and LSUN-Cat (256×256, 200K images) (Yu et al. 2015).
Dataset Splits | No | The paper does not explicitly provide training/test/validation dataset splits, specific percentages, or sample counts, nor does it cite predefined splits for validation sets. It mentions 'training' and 'testing' but lacks details on a distinct validation split.
Hardware Specification | No | The paper mentions 'Forest-GAN can be deployed with any parallel computing paradigm' but does not provide specific details on GPU models (e.g., NVIDIA A100), CPU models, or other hardware specifications used for running the experiments.
Software Dependencies | No | The paper mentions using Auto GAN and Style GAN2-ADA as basis models, but it does not specify any software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA 11.x), which are necessary for reproducible setup.
Experiment Setup | No | The paper refers to 'training hyperparameters of the generator and discriminator in Auto GAN' and states 'Implementation details are described in Appendix C' and 'The details of Auto GAN are shown in Appendix D'. However, specific numerical values for hyperparameters such as learning rate, batch size, number of epochs, or optimizer settings are not explicitly provided within the main text of the paper.
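The discriminative-forest setup described above (K same-architecture discriminators, each trained on its own bootstrap resample of the data, with total parameters scaling as Para = K × 1M) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the helper names `bootstrap` and `build_forest` are hypothetical, and the "dataset" is a stand-in list rather than real images.

```python
import random

def bootstrap(dataset, rng):
    """Sample len(dataset) items with replacement (one bootstrap replicate)."""
    return [rng.choice(dataset) for _ in range(len(dataset))]

def build_forest(dataset, K, params_per_disc=1_000_000, seed=0):
    """Build K discriminator 'slots': identical architecture (here just a
    parameter count), but each with an independent bootstrapped dataset."""
    rng = random.Random(seed)
    forest = []
    for _ in range(K):
        forest.append({
            "data": bootstrap(dataset, rng),   # each D_k sees its own resample
            "params": params_per_disc,         # same architecture for every D_k
        })
    return forest

forest = build_forest(list(range(1000)), K=5)
total_params = sum(d["params"] for d in forest)
print(total_params)  # 5_000_000, i.e. Para = K x 1M as in Table 1's scaling
```

Each discriminator would then be independently initialized and trained on its own `data` resample, while the single generator is trained against all K of them, which is the ensemble structure the quoted experiment description implies.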