Bridging Data Gaps in Diffusion Models with Adversarial Noise-Based Transfer Learning
Authors: Xiyu Wang, Baijiong Lin, Daochang Liu, Ying-Cong Chen, Chang Xu
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments in the context of few-shot image generation tasks demonstrate that our method is efficient and excels in terms of image quality and diversity compared to existing GAN-based and DPM-based methods. |
| Researcher Affiliation | Academia | ¹School of Computer Science, Faculty of Engineering, The University of Sydney, Australia; ²The Hong Kong University of Science and Technology (Guangzhou), China. |
| Pseudocode | Yes | Algorithm 1 Training DPMs with ANT |
| Open Source Code | Yes | The code is available at https://github.com/ShinyGua/DPMs-ANT. |
| Open Datasets | Yes | Following (Ojha et al., 2021), we use FFHQ (Karras et al., 2020b) and LSUN Church (Yu et al., 2015) as source datasets. |
| Dataset Splits | No | The paper mentions using a "limited set of just 10 training images" for few-shot tasks, but it does not explicitly provide information on specific training, validation, or test dataset splits (e.g., percentages or counts for each split). |
| Hardware Specification | No | The acknowledgements mention the use of "National Computational Infrastructure (NCI)" and "Sydney Informatics Hub HPC Allocation Scheme," indicating high-performance computing resources were used, but specific hardware details such as GPU/CPU models or memory amounts are not provided. |
| Software Dependencies | No | The paper mentions DDPM, LDM, and the StyleGAN2 codebase but does not provide version numbers for any software dependencies. |
| Experiment Setup | Yes | We set c = 4 and d = 8 for DDPMs, while c = 2 and d = 8 for LDMs. ... For similarity-guided training, we set γ = 5. ... For adversarial noise selection, we set J = 10 and ω = 0.02. We employ a learning rate of 5e-5 for DDPMs and 1e-5 for LDMs to train with approximately 300 iterations and a batch size of 40. |
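The experiment-setup row above collects the paper's reported hyperparameters in prose. As a convenience for reproduction attempts, the sketch below gathers them into a single Python config object. The class and field names (`ANTConfig`, `gamma`, `omega`, etc.) are illustrative assumptions, not the authors' actual identifiers; consult the official repository (https://github.com/ShinyGua/DPMs-ANT) for the real configuration interface.

```python
from dataclasses import dataclass


@dataclass
class ANTConfig:
    """Hyperparameters as reported in the paper's experiment setup.

    Field names are hypothetical; only the values are taken from the paper.
    """
    c: int = 4              # c = 4 for DDPMs (c = 2 for LDMs)
    d: int = 8              # d = 8 for both DDPMs and LDMs
    gamma: float = 5.0      # similarity-guided training weight (γ = 5)
    J: int = 10             # adversarial noise selection: J = 10
    omega: float = 0.02     # adversarial noise selection: ω = 0.02
    lr: float = 5e-5        # learning rate: 5e-5 for DDPMs, 1e-5 for LDMs
    iterations: int = 300   # "approximately 300 iterations"
    batch_size: int = 40


# Per-model configurations, assuming the paper's two backbone choices.
DDPM_CFG = ANTConfig()                 # c = 4, lr = 5e-5
LDM_CFG = ANTConfig(c=2, lr=1e-5)      # c = 2, lr = 1e-5
```

Note that the paper itself does not define what `c` and `d` control in this excerpt, so a reproduction should map these two symbols to the corresponding options in the released code rather than guessing their semantics.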