Domain Re-Modulation for Few-Shot Generative Domain Adaptation
Authors: Yi Wu, Ziqiang Li, Chaoyue Wang, Heliang Zheng, Shanshan Zhao, Bin Li, Dacheng Tao
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments, we demonstrate the superior performance of our DoRM and similarity-based structure loss in few-shot GDA, both quantitatively and qualitatively. |
| Researcher Affiliation | Collaboration | Yi Wu, Ziqiang Li, Bin Li (University of Science and Technology of China); Chaoyue Wang, Heliang Zheng, Shanshan Zhao, Dacheng Tao (JD Explore Academy) |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | Code will be available at https://github.com/wuyi2020/DoRM. |
| Open Datasets | Yes | Following previous literature [32; 59], we use FFHQ [18] with resolution 256×256 as the source domain. In 10-shot GDA, we evaluate our method on multiple target datasets, including Sketches [41], FFHQ-Babies [18], FFHQ-Sunglasses [18], Face-Caricatures, Face paintings by Amedeo Modigliani, Face paintings by Raphael, and Face paintings by Otto Dix [50]. |
| Dataset Splits | No | No explicit mention of specific training, validation, or test dataset splits (e.g., percentages, sample counts, or citations to predefined splits) was found for reproduction purposes. The paper mentions training on FFHQ and target datasets and evaluating on generated images, but not a dedicated validation split for model tuning. |
| Hardware Specification | No | No specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) were provided for running the experiments. The paper only states 'Our implementation is based on the official implementation of StyleGAN2-ADA'. |
| Software Dependencies | No | No specific ancillary software details with version numbers (e.g., library or solver names with version numbers like Python 3.8, PyTorch 1.9) were found. The paper mentions 'StyleGAN2' and 'StyleGAN2-ADA' but without specific version numbers. |
| Experiment Setup | Yes | We set the batch size to 4, and we terminate the training process after the discriminator has processed 100K real samples. In our experiments, we set λss = 10. The re-modulation weight α is set to a relatively small value. (Section A.1 analyzes values such as 0.5, 0.2, 0.05, 0.005, and 0.001.) |
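The hyperparameters reported in the Experiment Setup row can be gathered into a minimal configuration sketch. This is illustrative only: the key names, the `training_finished` helper, and the particular choice of α = 0.05 (one of the values ablated in the paper's Section A.1) are assumptions, not part of the paper's released code.

```python
# Hypothetical training configuration for the reported 10-shot GDA setup.
# Only the numeric values come from the paper; names are illustrative.
config = {
    "batch_size": 4,              # batch size reported in the paper
    "max_real_samples": 100_000,  # stop after the discriminator sees 100K real samples
    "lambda_ss": 10,              # weight of the similarity-based structure loss
    "alpha": 0.05,                # re-modulation weight (assumed pick from the ablated values)
}

def training_finished(real_samples_seen: int) -> bool:
    """Terminate once the discriminator has processed the reported 100K real samples."""
    return real_samples_seen >= config["max_real_samples"]
```

Under this sketch, a training loop would check `training_finished` after each discriminator step and stop at the 100K-sample budget.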