Self-Supervised GANs with Label Augmentation

Authors: Liang Hou, Huawei Shen, Qi Cao, Xueqi Cheng

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section 5 (Experiments): "Our code is available at https://github.com/houliangict/ssgan-la." Section 5.1 (SSGAN-LA Faithfully Learns the Real Data Distribution): "We experiment on a synthetic dataset to intuitively verify whether SSGAN, SSGAN-MS, and SSGAN-LA can accurately match the real data distribution." ... Section 5.3 (Comparison of Sample Quality): "We conduct experiments on three real-world datasets: CIFAR-10 [26], STL-10 [8], and Tiny ImageNet [27]."
Researcher Affiliation | Academia | Liang Hou (1,3), Huawei Shen (1,3), Qi Cao (1), Xueqi Cheng (2,3). (1) Data Intelligence System Research Center, Institute of Computing Technology, Chinese Academy of Sciences; (2) CAS Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences; (3) University of Chinese Academy of Sciences.
Pseudocode | No | The paper describes its methods through mathematical formulations and prose, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Our code is available at https://github.com/houliangict/ssgan-la."
Open Datasets | Yes | "We conduct experiments on three real-world datasets: CIFAR-10 [26], STL-10 [8], and Tiny ImageNet [27]." "Figure 3 shows images randomly generated by DAGAN and SSGAN-LA on the CelebA dataset [34]."
Dataset Splits | No | "The linear model is trained on the training set and tested on the validation set of the corresponding datasets." (The paper mentions training and validation sets but does not give specific split details, such as percentages or sample counts, for all datasets, which would be needed for reproduction.)
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU models, CPU types, or cloud computing instances with their specifications) used for running the experiments.
Software Dependencies | No | The paper mentions using BigGAN as a backbone but does not provide specific version numbers for any software dependencies (e.g., Python, PyTorch, TensorFlow, CUDA versions).
Experiment Setup | Yes | "Specifically, we train the classifier with a batch size of 128 for 50 epochs. The optimizer is Adam with a learning rate of 0.05 and decayed by 10 at both epoch 30 and epoch 40, following the practice of [6]."
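
The quoted setup is concrete enough to sketch the linear-evaluation training loop it describes. Below is a minimal PyTorch sketch, not the authors' released code: only the quoted hyperparameters (batch size 128, 50 epochs, Adam with learning rate 0.05, decayed by a factor of 10 at epochs 30 and 40) come from the paper; the feature dimension, class count, and dummy data are placeholder assumptions.

```python
# Minimal sketch of the reported linear-evaluation schedule (not the authors' code).
# Only the quoted hyperparameters come from the paper; feature dimension, class
# count, and the dummy dataset below are placeholder assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

FEATURE_DIM = 512   # assumption: depends on the discriminator backbone
NUM_CLASSES = 10    # assumption: e.g., CIFAR-10
BATCH_SIZE = 128    # quoted batch size
EPOCHS = 50         # quoted number of epochs

# Placeholder features/labels; in practice these would be frozen discriminator
# features extracted from the real training set.
features = torch.randn(1024, FEATURE_DIM)
labels = torch.randint(0, NUM_CLASSES, (1024,))
loader = DataLoader(TensorDataset(features, labels), batch_size=BATCH_SIZE, shuffle=True)

classifier = nn.Linear(FEATURE_DIM, NUM_CLASSES)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(classifier.parameters(), lr=0.05)
# "decayed by 10 at both epoch 30 and epoch 40": multiply the lr by 0.1 at those epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 40], gamma=0.1)

for epoch in range(EPOCHS):
    for x, y in loader:
        logits = classifier(x)
        loss = criterion(logits, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
```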