Dynamically Grown Generative Adversarial Networks
Authors: Lanlan Liu, Yuting Zhang, Jia Deng, Stefano Soatto
AAAI 2021, pp. 8680-8687
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate a new state of the art in image generation. We evaluate DGGAN against the manually designed ProgGAN and other recent GAN models on CIFAR-10 and LSUN. |
| Researcher Affiliation | Collaboration | Lanlan Liu (1)*, Yuting Zhang (2), Jia Deng (3), Stefano Soatto (2). Affiliations: (1) University of Michigan, Ann Arbor; (2) Amazon Web Services; (3) Princeton University |
| Pseudocode | Yes | Algorithm 1: Top-K Greedy Pruning Algorithm (a hedged code sketch follows the table). |
| Open Source Code | No | The paper refers to using "the most popular PyTorch implementation¹ of ProgGAN" (footnote 1: https://github.com/facebookresearch/pytorchGANzoo) to implement their DGGAN, but it does not explicitly state that their own DGGAN source code is released or available at this or any other link. |
| Open Datasets | Yes | CIFAR-10 (Krizhevsky 2009) contains 50k 32×32 training images. LSUN (Yu et al. 2015) has over a million 256×256 bedroom images for training. |
| Dataset Splits | No | The paper mentions datasets used for training but does not specify training, validation, or test splits or percentages for reproducing the data partitioning. |
| Hardware Specification | No | The paper states: "The resulting computational cost is 580 GPU days for 2k+ CIFAR-10 models and 1720 GPU days for 1k+ LSUN models." This indicates the use of GPUs but does not specify exact models, manufacturers, or other hardware details. |
| Software Dependencies | No | The paper states: "We use the most popular PyTorch implementation¹ of ProgGAN to obtain comprehensive ProgGAN results and to implement our DGGAN." It mentions PyTorch but does not provide a version number for PyTorch or any other software dependency. |
| Experiment Setup | Yes | We train initial candidates for 100k iterations and train each new candidate for 100k iterations after weight inheritance. We gradually increase the resolution from d0 = 8 to 32. After reaching the final resolution, following (Karras et al. 2018), we further train the fixed architecture longer to achieve convergence. We follow the same training schedule as ProgGAN. (A hedged sketch of this schedule follows the table.) |
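
The paper's Algorithm 1 is only named in the table above; below is a minimal sketch of what a top-K greedy pruning step could look like, assuming each surviving parent is expanded by a set of growing actions and candidates are scored by a metric such as FID (lower is better). All names here (`Candidate`, `grow_actions`, `train`, `score`) are hypothetical placeholders, not the authors' API.

```python
# Hypothetical sketch of a top-K greedy pruning step; not the authors' code.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Candidate:
    arch: Dict                                    # architecture spec (e.g. depth, width, resolution)
    weights: Dict = field(default_factory=dict)   # parameters inherited from the parent

def top_k_greedy_prune(
    parents: List[Candidate],
    grow_actions: Callable[[Candidate], List[Candidate]],
    train: Callable[[Candidate], None],    # e.g. 100k iterations after weight inheritance
    score: Callable[[Candidate], float],   # e.g. FID; lower is better
    k: int,
) -> List[Candidate]:
    """Expand every surviving parent with all growing actions, briefly
    train each child, then greedily keep only the K best candidates."""
    children: List[Candidate] = []
    for parent in parents:
        for child in grow_actions(parent):
            train(child)              # short training with weights inherited from the parent
            children.append(child)
    children.sort(key=score)          # greedy pruning: rank all children by score
    return children[:k]               # keep only the top-K survivors
```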
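A loose sketch of how that step could slot into the growth schedule described in the "Experiment Setup" row (8×8 up to 32×32, 100k iterations per candidate, then longer training of the final fixed architecture as in ProgGAN). For simplicity it assumes one grow-and-prune round per resolution doubling; `train_100k`, `fid`, and the other arguments are the same hypothetical callables as above.

```python
# Hypothetical driver for the schedule in the "Experiment Setup" row;
# reuses top_k_greedy_prune from the sketch above.
def grow_gan(seed_candidates, grow_actions, train_100k, fid, k=2):
    survivors = list(seed_candidates)   # initial 8x8 candidate architectures
    for cand in survivors:              # train initial candidates for 100k iterations
        train_100k(cand)
    for _ in range(2):                  # two doublings: 8 -> 16 -> 32 (simplified)
        survivors = top_k_greedy_prune(
            survivors, grow_actions, train_100k, fid, k
        )
    best = min(survivors, key=fid)      # pick the best final architecture
    train_100k(best)                    # train the fixed architecture longer, as in ProgGAN
    return best
```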