GANs Can Play Lottery Tickets Too

Authors: Xuxi Chen, Zhenyu Zhang, Yongduo Sui, Tianlong Chen

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To address this gap in the literature, we investigate the lottery ticket hypothesis in GANs. Extensive experimental results demonstrate that our found subnetworks substantially outperform previous state-of-the-art GAN compression approaches in both image generation (e.g. SNGAN) and image-to-image translation GANs (e.g. CycleGAN)."
Researcher Affiliation | Academia | University of Science and Technology of China; University of Texas at Austin
Pseudocode | Yes | Algorithm 1: Finding winning tickets by Iterative Magnitude Pruning; Algorithm 2: Finding winning tickets by Channel Pruning. Illustrative sketches of both algorithms follow the table.
Open Source Code | Yes | Code is available at https://github.com/VITA-Group/GAN-LTH.
Open Datasets | Yes | "For image-to-image experiments, we use a widely-used benchmark horse2zebra (Zhu et al., 2017) for model training. As for noise-to-image experiments, we use CIFAR-10 (Krizhevsky et al., 2009) as the benchmark. For the transfer study, the experiments are conducted on CIFAR-10 and STL-10 (Coates et al., 2011)."
Dataset Splits | No | The paper mentions training for "N iterations" and evaluating with metrics, but does not explicitly specify training, validation, or test splits (e.g., percentages, sample counts, or a cross-validation setup).
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or cloud resources used for the experiments.
Software Dependencies | No | The paper does not provide specific software dependency details, such as library names with version numbers (e.g., PyTorch version, CUDA version).
Experiment Setup | No | The paper mentions "hyper-parameters" for transfer learning but does not list specific values (e.g., learning rate, batch size, number of epochs) or other training configurations in the main text.
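
For context, Algorithm 1 follows the standard lottery-ticket recipe of train, prune, and rewind. The sketch below is a minimal PyTorch illustration of that loop for the generator only; `build_generator` and `train_gan` are hypothetical placeholders, and the pruning fraction and round count are illustrative, not the authors' settings.

```python
import copy
import torch

# Minimal sketch of Iterative Magnitude Pruning (Algorithm 1), generator only.
# `build_generator` and `train_gan` are hypothetical placeholders; the paper's
# full procedure also covers the discriminator and GAN-specific details.
def find_winning_ticket(build_generator, train_gan, rounds=5, prune_frac=0.2):
    g = build_generator()
    theta0 = copy.deepcopy(g.state_dict())  # initial weights, kept for rewinding

    # One binary mask per weight tensor (biases and other 1-D params are skipped).
    masks = {n: torch.ones_like(p)
             for n, p in g.named_parameters() if p.dim() > 1}

    for _ in range(rounds):
        # Train the masked network; train_gan must keep pruned weights at zero,
        # e.g. by multiplying each parameter by its mask after every update.
        train_gan(g, masks)

        # Global magnitude pruning: threshold computed over surviving weights.
        surviving = torch.cat([p[masks[n].bool()].abs().flatten()
                               for n, p in g.named_parameters() if n in masks])
        k = max(1, int(prune_frac * surviving.numel()))
        threshold = surviving.kthvalue(k).values

        with torch.no_grad():
            for n, p in g.named_parameters():
                if n in masks:
                    masks[n] *= (p.abs() > threshold).float()  # drop smallest weights
                    p.copy_(theta0[n] * masks[n])              # rewind survivors to theta_0

    return g, masks  # the surviving subnetwork is the candidate winning ticket
```

Rewinding the survivors to their initial values, rather than re-initializing them randomly, is what distinguishes a lottery-ticket search from ordinary iterative pruning. Algorithm 2 instead removes whole channels; the paper defines its exact criterion, but a common magnitude-based variant, assumed here purely for illustration, ranks output channels by L1 norm:

```python
import torch
import torch.nn as nn

# Illustrative channel pruning in the spirit of Algorithm 2 (assumption:
# L1-norm ranking of channels; the paper specifies its own criterion).
def channel_masks(generator: nn.Module, prune_frac: float = 0.3):
    masks = {}
    for name, m in generator.named_modules():
        if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
            w = m.weight.detach()
            # Output channels live on dim 0 for Conv2d, dim 1 for ConvTranspose2d.
            dim = 0 if isinstance(m, nn.Conv2d) else 1
            norms = w.abs().sum(dim=tuple(i for i in range(w.dim()) if i != dim))
            mask = torch.ones_like(norms)
            k = int(prune_frac * norms.numel())
            if k > 0:
                _, idx = norms.topk(k, largest=False)  # smallest-norm channels
                mask[idx] = 0.0
            masks[name] = mask  # 1 = keep channel, 0 = prune channel
    return masks
```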