E²GAN: Efficient Training of Efficient GANs for Image-to-Image Translation
Authors: Yifan Gong, Zheng Zhan, Qing Jin, Yanyu Li, Yerlan Idelbayev, Xian Liu, Andrey Zharkov, Kfir Aberman, Sergey Tulyakov, Yanzhi Wang, Jian Ren
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5. Experiments: In this section, we provide the detailed experimental settings and results of our proposed method. |
| Researcher Affiliation | Collaboration | Yifan Gong*¹², Zheng Zhan*², Qing Jin¹, Yanyu Li¹², Yerlan Idelbayev², Xian Liu¹, Andrey Zharkov¹, Kfir Aberman¹, Sergey Tulyakov¹, Yanzhi Wang², Jian Ren¹. *Equal contribution; work done during Yifan's internship at Snap Inc. ¹Snap Inc. ²Northeastern University. Correspondence to: Jian Ren <jren@snapchat.com>. |
| Pseudocode | Yes | The overall algorithm is described in Algorithm 1 in Sec. A in the Appendix. |
| Open Source Code | No | Project Page: https://yifanfanfanfan.github.io/e2gan/. The project page is mentioned, but there is no explicit statement that the source code for the methodology described in the paper is available there, nor is it a direct link to a code repository. |
| Open Datasets | Yes | We verify our method on 1,000 images from FFHQ dataset (Karras et al., 2019) and Flickr Scenery dataset (Cheng et al., 2022) with image resolution as 256×256. |
| Dataset Splits | Yes | To perform training and evaluation of GAN models, we divide the image pairs from each target concept into training/validation/test subsets with the ratio as 80%/10%/10%. (A split sketch is shown after the table.) |
| Hardware Specification | Yes | The training and training-time measurements are conducted on one NVIDIA H100 GPU with 80 GB of memory. |
| Software Dependencies | No | The paper mentions using the Adam solver, but does not provide specific version numbers for software libraries or environments like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | The training is conducted with an initial learning rate of 2e-4, using mini-batch SGD with the Adam solver (Kingma & Ba, 2014). The total number of training epochs is set to 100 for E2GAN, and 200 for pix2pix (Isola et al., 2017) and pix2pix-zero-distilled (Parmar et al., 2023) so that they converge well. (An optimizer configuration sketch is shown after the table.) |
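
The 80%/10%/10% division quoted in the Dataset Splits row maps onto a short helper. Below is a minimal sketch assuming a PyTorch `Dataset` of image pairs; the helper name `split_80_10_10`, the fixed seed, and the use of `torch.utils.data.random_split` are illustrative assumptions, since the paper does not publish its splitting code.

```python
import torch
from torch.utils.data import random_split

def split_80_10_10(dataset, seed=0):
    """Divide a paired-image dataset into train/val/test subsets with
    the 80%/10%/10% ratio quoted above. The helper name, the fixed
    seed, and the use of random_split are assumptions, not the
    authors' code."""
    n = len(dataset)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    n_test = n - n_train - n_val  # remainder absorbs rounding error
    generator = torch.Generator().manual_seed(seed)
    return random_split(dataset, [n_train, n_val, n_test], generator=generator)
```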
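
The optimizer settings quoted in the Experiment Setup row translate directly into a PyTorch Adam configuration. The sketch below is an assumption-laden illustration: the stand-in model and the beta values (common GAN practice) are not from the paper; only the 2e-4 learning rate and the epoch counts come from the quoted text.

```python
import torch

# Stand-in generator network; the real E2GAN architecture is not
# reproduced here.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

# Adam with the initial learning rate of 2e-4 quoted above. The beta
# values are an assumption (common GAN practice), not stated in the
# excerpt.
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4,
                             betas=(0.5, 0.999))

NUM_EPOCHS = 100  # 100 for E2GAN; 200 for the pix2pix baselines
```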