Few-shot Image Generation via Adaptation-Aware Kernel Modulation
Authors: Yunqing Zhao, Keshigeyan Chandrasegaran, Milad Abdollahzadeh, Ngai-Man (Man) Cheung
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results show that the proposed method consistently achieves SOTA performance across source/target domains of different proximity, including challenging setups where source and target domains are further apart. |
| Researcher Affiliation | Academia | Yunqing Zhao (yunqing_zhao@mymail.sutd.edu.sg), Keshigeyan Chandrasegaran (keshigeyan@sutd.edu.sg), Milad Abdollahzadeh (milad_abdollahzadeh@sutd.edu.sg), Ngai-Man Cheung (ngaiman_cheung@sutd.edu.sg); Singapore University of Technology and Design (SUTD) |
| Pseudocode | Yes | Algorithm 1: Few-Shot Image Generation via Adaptation-Aware Kernel Modulation (AdAM). (A hedged kernel-modulation sketch follows the table.) |
| Open Source Code | Yes | Project Page: https://yunqing-me.github.io/AdAM/ and We are releasing code and pre-trained GAN models. The URL details are included in Supplementary. |
| Open Datasets | Yes | We use StyleGAN-V2 [3] as the GAN architecture and FFHQ as the source domain. Our experiments include setups with different source-target proximity: Babies/Sunglasses [14], MetFaces [36] and Cat/Dog/Wild (AFHQ) [5] |
| Dataset Splits | No | The paper mentions a '10-shot target adaptation setup' and 'batch size 4' but does not explicitly specify train/validation/test dataset splits in the main text, noting only that training details are in the Supplementary. |
| Hardware Specification | Yes | Adaptation is performed with 256 x 256 resolution and batch size 4 on a single Tesla V100 GPU. |
| Software Dependencies | No | The paper mentions 'StyleGAN-V2 [3] as the GAN architecture' but does not provide specific version numbers for software dependencies or libraries used. |
| Experiment Setup | Yes | Adaptation is performed with 256 x 256 resolution and batch size 4 on a single Tesla V100 GPU. We apply importance probing and modulation on base kernels of both generator and discriminator. We focus on the 10-shot target adaptation setup in the main paper. (A hedged importance-probing sketch follows the table.) |
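The Pseudocode row refers to Algorithm 1 (AdAM), which adapts a pre-trained GAN by modulating its convolution kernels rather than fine-tuning them directly. As a rough illustration of that idea, the sketch below keeps a source-domain kernel frozen and learns only a small low-rank multiplicative factor on the few-shot target data. The class name, the rank-1 parametrisation, and the multiplicative form are assumptions made for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class KernelModulation(nn.Module):
    """Minimal sketch of kernel modulation for few-shot GAN adaptation.

    A base convolution kernel from the source-domain StyleGAN-V2 generator or
    discriminator is kept frozen; only a small low-rank modulation factor is
    learned on the 10-shot target data. Shapes and the multiplicative
    parametrisation are illustrative assumptions.
    """

    def __init__(self, base_weight: torch.Tensor, rank: int = 1):
        super().__init__()
        # Frozen base kernel of shape (C_out, C_in, k, k) from the source GAN.
        self.register_buffer("base_weight", base_weight.detach().clone())
        c_out, c_in, k, _ = base_weight.shape
        # Learnable low-rank modulation factors, zero-initialised so that the
        # adapted kernel starts exactly at the source-domain kernel.
        self.a = nn.Parameter(torch.zeros(c_out, rank))
        self.b = nn.Parameter(torch.zeros(rank, c_in * k * k))

    def forward(self) -> torch.Tensor:
        # Multiplicative modulation of the frozen kernel.
        mod = 1.0 + (self.a @ self.b).view_as(self.base_weight)
        return self.base_weight * mod
```

In use, the weight returned by `forward()` would stand in for the layer's stored kernel inside the StyleGAN-V2 convolution, so only `self.a` and `self.b` receive gradients during target-domain adaptation.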
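The Experiment Setup row states that importance probing and modulation are applied to base kernels of both the generator and the discriminator. The sketch below shows one plausible way to score kernels during a short probing run and retain modulation only for the highest-scoring ones, using accumulated squared gradients as a Fisher-information proxy. The function name, `probing_step` callback, `probe_steps`, `keep_ratio`, and the scoring rule are assumptions, not the paper's exact procedure.

```python
import torch


def probe_and_select(mod_params: dict[str, torch.nn.Parameter],
                     probing_step, probe_steps: int = 500,
                     keep_ratio: float = 0.5) -> set:
    """Score each kernel's modulation parameters during a short probing run
    and return the names of kernels whose modulation should be kept.

    mod_params:   {kernel_name: modulation parameter} for G and D.
    probing_step: assumed helper that computes the GAN adaptation loss on a
                  few-shot target batch and calls backward().
    """
    scores = {name: 0.0 for name in mod_params}
    for _ in range(probe_steps):
        probing_step()
        for name, p in mod_params.items():
            if p.grad is not None:
                # Accumulate squared gradients as a Fisher-information proxy.
                scores[name] += p.grad.detach().pow(2).sum().item()
                p.grad = None
    # Keep modulation for the top-scoring kernels; the rest stay frozen.
    ranked = sorted(scores, key=scores.get, reverse=True)
    return set(ranked[: int(len(ranked) * keep_ratio)])
```

After selection, adaptation would then proceed under the reported setup: 256 x 256 resolution, batch size 4, 10-shot target data, on a single Tesla V100 GPU.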