GenCo: Generative Co-training for Generative Adversarial Networks with Limited Data
Authors: Kaiwen Cui, Jiaxing Huang, Zhipeng Luo, Gongjie Zhang, Fangneng Zhan, Shijian Lu
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments over multiple benchmarks show that GenCo achieves superior generation with limited training data. In addition, GenCo also complements the augmentation approach with consistent and clear performance gains when combined. In this section, we conduct extensive experiments to evaluate our proposed GenCo. |
| Researcher Affiliation | Academia | School of Computer Science and Engineering, Nanyang Technological University; S-Lab, Nanyang Technological University |
| Pseudocode | No | The paper describes the methods using text and equations and provides an architectural diagram, but does not include any structured pseudocode or algorithm blocks (an illustrative co-training sketch is given after this table). |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | We conduct experiments over multiple public datasets: CIFAR (Krizhevsky et al. 2009), 100-shot (Zhao et al. 2020), AFHQ (Si and Zhu 2011), FFHQ (Karras, Laine, and Aila 2019) and LSUN-Cat (Yu et al. 2015). |
| Dataset Splits | No | The paper mentions limited training set sizes such as '100-shot Obama', '100 (Obama, Grumpy Cat and Panda)', '20% data', and '10% data', but does not explicitly specify distinct training/validation/test splits for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details such as CPU or GPU models, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers, such as programming languages or library versions. |
| Experiment Setup | No | The paper mentions a hyper-parameter P set to 0.2 for DaCo, but does not provide a comprehensive list of other experimental setup details such as learning rate, batch size, number of epochs, or optimizer settings. |
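
Since the paper ships neither pseudocode nor source code, the following is a minimal, assumption-based sketch of what a GAN training step with two co-trained discriminators could look like in PyTorch. The function and argument names (`cotrain_step`, `aug_view`, `p_daco`), the flip-based second data view, and the non-saturating loss are illustrative placeholders, not the authors' implementation; only the general idea of training complementary discriminators on different views of the data follows the paper's description.

```python
# Assumption-based sketch: GenCo's official code/pseudocode is not available, so
# every name and design choice below is an illustrative placeholder.
import torch
import torch.nn.functional as F

def cotrain_step(G, D1, D2, real, opt_g, opt_d, z_dim=128, p_daco=0.2):
    """One step in which two discriminators see different views of the same data.

    opt_d is assumed to cover the parameters of both D1 and D2.
    """
    device = real.device
    batch = real.size(0)

    def aug_view(x):
        # Placeholder "second view": a random horizontal flip applied with
        # probability p_daco. The actual data-discrepancy transform used by
        # the paper is not reproduced here.
        return torch.flip(x, dims=[3]) if torch.rand(()).item() < p_daco else x

    # ---- Discriminator update: each discriminator trains on its own view ----
    with torch.no_grad():
        fake = G(torch.randn(batch, z_dim, device=device))
    d_loss = 0.0
    for D, view in ((D1, lambda x: x), (D2, aug_view)):
        d_loss = d_loss + F.softplus(-D(view(real))).mean() \
                        + F.softplus(D(view(fake))).mean()
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # ---- Generator update: fool both discriminators jointly ----
    fake = G(torch.randn(batch, z_dim, device=device))
    g_loss = sum(F.softplus(-D(fake)).mean() for D in (D1, D2))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return float(d_loss), float(g_loss)
```

The default `p_daco=0.2` mirrors the only hyper-parameter the paper reports for DaCo; everything else (learning rate, batch size, optimizer, number of discriminators and their exact views) would have to be recovered from the authors' training configuration, which the paper does not provide.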