Generalized One-shot Domain Adaptation of Generative Adversarial Networks
Authors: Zicheng Zhang, Yinglu Liu, Congying Han, Tiande Guo, Ting Yao, Tao Mei
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Both quantitative and qualitative experiments demonstrate the effectiveness of our method in various scenarios. We conduct extensive experiments on various references with and without entities. The results show that our framework can fully exploit the cross-domain correspondence and achieve high transfer quality. |
| Researcher Affiliation | Collaboration | Zicheng Zhang¹, Yinglu Liu², Congying Han¹, Tiande Guo¹, Ting Yao², Tao Mei² (¹University of Chinese Academy of Sciences; ²JD AI Research) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/zhangzc21/Generalized-One-shot-GAN-Adaptation. |
| Open Datasets | Yes | We take StyleGANs pre-trained on the 512×512 AFHQ Dog and Cat datasets [8] and the 256×256 LSUN Church dataset [48] as exemplars. |
| Dataset Splits | No | The paper describes one-shot GAN adaptation, in which the model adapts from a single reference image rather than from a dataset with training, validation, and test partitions; no conventional dataset splits are provided. |
| Hardware Specification | Yes | Our method costs about 12 minutes for m_ref ≠ 0 and 3 minutes for m_ref = 0 on an NVIDIA RTX 3090. |
| Software Dependencies | No | The paper mentions that its 'implementation is based on the official code of StyleGAN' and refers to pre-trained networks (LPIPS, ArcFace, CLIP), but it does not provide version numbers for these components or for the underlying language and framework (e.g., Python, PyTorch). |
| Experiment Setup | Yes | If m_ref ≠ 0, the total number of epochs is 2000. We adopt the Adam optimizer with learning rate 1e-3, β1 = 0, β2 = 0.999. A cosine annealing schedule gradually reduces the learning rate to 1e-4. In Eq. (2), λ1 = 10, λ2 = 0.2, λ3 = 2, λ4 = 1, and in Eq. (4), λ5 = 100. We use Monte Carlo simulation, randomly sampling 256 vectors on the unit sphere, to compute the integral in Eq. (5) (see the sketch below the table). |
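For reference, here is a minimal PyTorch sketch of the optimization setup reported above, assuming the PyTorch stack of the official StyleGAN code the authors build on. The `params` list and the placeholder loss are hypothetical stand-ins for the adapted generator weights and the weighted objective of Eqs. (2) and (4); the unit-sphere sampler mirrors the Monte Carlo approximation of Eq. (5).

```python
import torch

# Hypothetical stand-in for the adapted generator's trainable weights; the
# real parameters come from the authors' StyleGAN-based repository.
params = [torch.nn.Parameter(torch.randn(4, 512))]

# Adam with the reported hyper-parameters: lr 1e-3, beta1 = 0, beta2 = 0.999.
optimizer = torch.optim.Adam(params, lr=1e-3, betas=(0.0, 0.999))

# Cosine annealing decays the learning rate from 1e-3 toward 1e-4 over the
# 2000 epochs reported for the m_ref != 0 setting.
total_epochs = 2000
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=total_epochs, eta_min=1e-4
)

def sample_unit_sphere(n: int, dim: int) -> torch.Tensor:
    """Draw n directions uniformly on the unit sphere in R^dim by
    normalizing standard Gaussian samples."""
    v = torch.randn(n, dim)
    return v / v.norm(dim=1, keepdim=True)

# 256 random unit vectors, matching the Monte Carlo approximation of the
# integral in Eq. (5).
directions = sample_unit_sphere(256, 512)

for epoch in range(total_epochs):
    optimizer.zero_grad()
    # Placeholder objective; the actual loss is the weighted sum of Eq. (2)
    # (lambda_1..lambda_4) and Eq. (4) (lambda_5).
    loss = params[0].pow(2).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()
```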