InfinityGAN: Towards Infinite-Pixel Image Synthesis
Authors: Chieh Hubert Lin, Hsin-Ying Lee, Yen-Chi Cheng, Sergey Tulyakov, Ming-Hsuan Yang
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental evaluation validates that InfinityGAN generates images with superior realism compared to baselines and features parallelizable inference. |
| Researcher Affiliation | Collaboration | UC Merced, Snap Inc., Carnegie Mellon University, Yonsei University, Google Research |
| Pseudocode | Yes | Figure 33: Implementation of spatial style fusion. We present (left) the original StyleGAN2 forward function, and (right) a corresponding implementation for the spatial style fusion. We align the related code blocks on the left and right. (See the sketch after this table.) |
| Open Source Code | Yes | All code, datasets, and trained models are publicly available. Project page: https://hubert0527.github.io/infinityGAN/ |
| Open Datasets | Yes | All code, datasets, and trained models are publicly available. Project page: https://hubert0527.github.io/infinityGAN/ |
| Dataset Splits | Yes | For the image outpainting task, we split the data into 80%, 10%, and 10% for training, validation, and test. |
| Hardware Specification | Yes | Note that training and inference (of any size) are performed on a single GTX TITAN X GPU. ... We perform all the experiments on a workstation with an Intel Xeon CPU (E5-2650, 2.20 GHz) and 8 RTX 2080Ti GPUs. |
| Software Dependencies | Yes | We implement our framework with PyTorch 1.6, and execute in an environment with Nvidia driver version 440.44, cuDNN version 4.6.5, and CUDA version 10.2.89. |
| Experiment Setup | Yes | We use λar = 1, λdiv = 1, λR1 = 10, and λpath = 2 for all datasets. All models are trained with 101×101 patches cropped from 197×197 real images. ... We adopt the Adam (Kingma & Ba, 2015) optimizer with β1 = 0, β2 = 0.99, and a batch size of 16 for 800,000 iterations. |
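
The spatial style fusion quoted in the Pseudocode row replaces StyleGAN2's single per-image style vector with a per-location style map inside the modulated convolution. The sketch below is an illustrative reimplementation of that idea, not the authors' released code: the function name `spatial_style_modulated_conv2d`, the use of `F.conv2d`, and the demodulation-by-local-variance step are assumptions about how such a fused forward pass could be written.

```python
import torch
import torch.nn.functional as F

def spatial_style_modulated_conv2d(x, weight, style_map, eps=1e-8):
    """Modulated convolution with a per-pixel style map (hypothetical sketch).

    x:         (N, C_in, H, W) feature map
    weight:    (C_out, C_in, k, k) shared convolution kernel
    style_map: (N, C_in, H, W) per-location, per-channel style scales,
               e.g. obtained by spatially blending two affine-mapped latents
    """
    pad = weight.shape[-1] // 2

    # Modulate the input at every location (vanilla StyleGAN2 instead folds
    # a single per-image style vector into the kernel weights).
    out = F.conv2d(x * style_map, weight, padding=pad)

    # Demodulate: convolving the squared styles with the squared kernel gives
    # the expected per-location output magnitude; normalize by its sqrt.
    demod_sq = F.conv2d(style_map.pow(2), weight.pow(2), padding=pad)
    return out * torch.rsqrt(demod_sq + eps)


# Toy usage: blend two styles left-to-right across a 32x32 feature map.
x = torch.randn(2, 64, 32, 32)
w = torch.randn(128, 64, 3, 3) * 0.1
s_a, s_b = torch.rand(2, 64, 1, 1), torch.rand(2, 64, 1, 1)
alpha = torch.linspace(0, 1, 32).view(1, 1, 1, 32)
style_map = ((1 - alpha) * s_a + alpha * s_b).expand(2, 64, 32, 32)
y = spatial_style_modulated_conv2d(x, w, style_map)
print(y.shape)  # torch.Size([2, 128, 32, 32])
```

The demodulation mirrors StyleGAN2's per-channel normalization while letting neighboring pixels carry different, smoothly blended styles, which is what allows style transitions across an arbitrarily large canvas.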
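
The hyperparameters quoted in the Experiment Setup row translate directly into optimizer and data-pipeline configuration. The snippet below is a minimal sketch under assumptions: `generator` and `discriminator` are trivial placeholder modules standing in for the released architectures, and the learning rate is not reported in the quote above, so it is left as an explicit placeholder.

```python
import torch
import torch.nn as nn
from torchvision import transforms

# Placeholders for the real InfinityGAN networks (see the project page).
generator = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))
discriminator = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1))

# Loss weights and schedule as quoted above.
lambda_ar, lambda_div, lambda_r1, lambda_path = 1.0, 1.0, 10.0, 2.0
batch_size, total_iters = 16, 800_000

# 101x101 training patches cropped from 197x197 real images.
patch_transform = transforms.Compose([
    transforms.RandomCrop(101),
    transforms.ToTensor(),
])

# Adam with beta1 = 0, beta2 = 0.99; the learning rate is not stated in the
# quoted setup, so `lr` is a placeholder rather than a value from the paper.
lr = 2e-3  # placeholder
g_optim = torch.optim.Adam(generator.parameters(), lr=lr, betas=(0.0, 0.99))
d_optim = torch.optim.Adam(discriminator.parameters(), lr=lr, betas=(0.0, 0.99))
```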