Augmentation-Aware Self-Supervision for Data-Efficient GAN Training
Authors: Liang Hou, Qi Cao, Yige Yuan, Songtao Zhao, Chongyang Ma, Siyuan Pan, Pengfei Wan, Zhongyuan Wang, Huawei Shen, Xueqi Cheng
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate significant improvements of our method over SOTA methods in training data-efficient GANs. |
| Researcher Affiliation | Collaboration | 1CAS Key Laboratory of AI Safety and Security, Institute of Computing Technology, Chinese Academy of Sciences 2CAS Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences 3University of Chinese Academy of Sciences 4Kuaishou Technology |
| Pseudocode | No | The paper does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/liang-hou/augself-gan. |
| Open Datasets | Yes | We compare our method with state-of-the-art (SOTA) methods using the class-conditional BigGAN and unconditional StyleGAN2 architectures on data-limited CIFAR-10, CIFAR-100, FFHQ, LSUN-Cat, and five low-shot datasets. ... CIFAR-10 and CIFAR-100 [21]. ... FFHQ [17] and LSUN-Cat [45]. ... five low-shot datasets [36] (Obama, Grumpy cat, Panda, Animal Face cat, and Animal Face dog). |
| Dataset Splits | No | The paper varies the amount of training data (e.g., 100%, 20%, 10%) but does not explicitly state the percentages or counts for the training, validation, and test splits needed for reproduction. While standard datasets often have predefined splits, these are not stated in the text. |
| Hardware Specification | Yes | Each experiment in this work was conducted on a 32GB NVIDIA V100 GPU. |
| Software Dependencies | No | The paper mentions implementing the method based on DiffAugment but does not list specific software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow versions, CUDA). |
| Experiment Setup | Yes | where the hyper-parameters are set as λd = λg = 1 in experiments by default unless otherwise specified (see Figure 6 for empirical studies). ... The hyper-parameter is λg = 0.2. ... The hyper-parameters are λd = λg = 0.1 on Grumpy cat and Animal Face cat. |
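The λd and λg values quoted above weight a self-supervised augmentation-prediction term against the standard adversarial term in the discriminator and generator objectives. The sketch below illustrates only this weighting scheme; the function names and dummy scalar losses are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of lambda_d / lambda_g loss weighting.
# The helper names and dummy loss values are illustrative assumptions;
# see the authors' repository (github.com/liang-hou/augself-gan) for
# the actual implementation.

def total_discriminator_loss(adv_loss, aug_pred_loss, lambda_d=1.0):
    """Discriminator objective: adversarial term plus a
    lambda_d-weighted augmentation-prediction (self-supervised) term."""
    return adv_loss + lambda_d * aug_pred_loss


def total_generator_loss(adv_loss, aug_pred_loss, lambda_g=1.0):
    """Generator objective with the analogous lambda_g weighting."""
    return adv_loss + lambda_g * aug_pred_loss


# Dummy scalar losses; lambda_d = lambda_g = 1 is the paper's default,
# while lambda_g = 0.2 is the value reported for one setting above.
d_loss = total_discriminator_loss(adv_loss=0.7, aug_pred_loss=0.3)
g_loss = total_generator_loss(adv_loss=0.9, aug_pred_loss=0.3, lambda_g=0.2)
print(d_loss)  # 1.0
print(g_loss)  # 0.96
```

Setting λd = λg = 0 recovers the plain adversarial objectives, which is why the paper's ablations vary these two values.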