Data-Efficient Instance Generation from Instance Discrimination

Authors: Ceyuan Yang, Yujun Shen, Yinghao Xu, Bolei Zhou

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the effectiveness of our method on a variety of datasets and training settings. Notably, in the setting of 2K training images from the FFHQ dataset, we outperform the state-of-the-art approach with a 23.5% FID improvement. We evaluate the proposed InsGen method on multiple benchmarks. Sec. 4.1 presents the comparison to prior literature on both the FFHQ [23] and AFHQ [9] datasets. Sec. 4.2 provides a detailed ablation study to show the importance of each component.
Researcher Affiliation | Collaboration | Ceyuan Yang, Yujun Shen, Yinghao Xu, Bolei Zhou; The Chinese University of Hong Kong; ByteDance Inc.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://genforce.github.io/insgen/.
Open Datasets | Yes | We evaluate our InsGen with a number of other approaches on the FFHQ [23] and AFHQ [9] datasets.
Dataset Splits | No | No explicit mention of training/validation/test dataset splits. The paper notes that '2K, 10K, and 140K stand for the number of samples used for training' and describes taking 'a subset of training data by randomly sampling', but does not specify how this training data is further split for validation. (A sketch of such subsampling follows the table.)
Hardware Specification | No | The paper states 'All the experiments are conducted on a server with 8 GPUs.' but does not specify the GPU model, CPU, or other hardware details.
Software Dependencies | No | The paper mentions building on 'the official implementation of StyleGAN2-ADA' and using the 'Adam optimizer [28]', but does not provide version numbers for any software components or libraries.
Experiment Setup | Yes | Empirically, the loss weights λ_G, λ_fD and λ_rD are 0.1, 1.0 and 1.0 respectively. [...] ADA [24] adopts 1.0 for the original StyleGAN2 training while we use 0.8. We also found that a smaller loss weight on the gradient penalty is beneficial to our InsGen with less data, e.g., 0.3 and 0.5 for the 10K and 2K experiments respectively. [...] The temperature τ in Eq. (4) is set to 2. [...] The parameters are updated with a moving-average scheme: Θ̄_D ← αΘ̄_D + (1 - α)Θ_D. Here, α = 0.999 follows the same setting as in MoCo-v2 [8]. (A sketch of this update follows the table.)
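
The 2K and 10K settings are described only as random subsets of the full FFHQ training set. Below is a minimal sketch of how such a fixed-size subset could be drawn reproducibly; the seed, directory layout, and file extension are assumptions, since the paper does not specify how its subsets were sampled.

```python
import random
from pathlib import Path

def sample_subset(image_dir, n_samples, seed=0):
    """Draw a fixed-size random training subset (e.g., 2K or 10K images).

    The seed and the *.png layout are assumptions; the paper does not
    say how its FFHQ subsets were sampled.
    """
    paths = sorted(Path(image_dir).glob("*.png"))  # sort for determinism
    rng = random.Random(seed)                      # local RNG, fixed seed
    return rng.sample(paths, n_samples)

# Hypothetical usage for the 2K FFHQ setting:
# subset = sample_subset("ffhq/images", 2_000)
```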
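
The moving-average discriminator update and the temperature-scaled contrastive objective quoted above follow the MoCo-v2 recipe. The PyTorch sketch below shows one plausible wiring under those assumptions; `momentum_update`, `info_nce`, and all tensor shapes are illustrative names, not taken from the InsGen codebase.

```python
import torch
import torch.nn.functional as F

ALPHA = 0.999  # moving-average coefficient, as quoted above
TAU = 2.0      # temperature in the contrastive loss (Eq. (4))

@torch.no_grad()
def momentum_update(ema_disc, disc, alpha=ALPHA):
    """MoCo-v2-style update: theta_bar <- alpha*theta_bar + (1-alpha)*theta."""
    for p_ema, p in zip(ema_disc.parameters(), disc.parameters()):
        p_ema.mul_(alpha).add_(p, alpha=1.0 - alpha)

def info_nce(query, key, queue, tau=TAU):
    """Temperature-scaled InfoNCE loss against a queue of negatives.

    query: (N, C) embeddings from the online discriminator head
    key:   (N, C) embeddings from the moving-average discriminator
    queue: (K, C) stored negative embeddings
    """
    query = F.normalize(query, dim=1)
    key = F.normalize(key, dim=1)
    queue = F.normalize(queue, dim=1)
    l_pos = (query * key).sum(dim=1, keepdim=True)  # (N, 1) positive logits
    l_neg = query @ queue.t()                       # (N, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(query.size(0), dtype=torch.long, device=query.device)
    return F.cross_entropy(logits, labels)
```

The total objective would then combine these contrastive terms with the adversarial losses using the quoted weights (λ_G = 0.1, λ_fD = λ_rD = 1.0); the exact combination is not reproduced here.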