Few-shot Image Generation with Elastic Weight Consolidation

Authors: Yijun Li, Richard Zhang, Jingwan (Cynthia) Lu, Eli Shechtman

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we first discuss the experimental settings. We then present qualitative and quantitative comparisons between the proposed method and several competing methods. Finally, we analyze the performance of our method with respect to some important factors such as the number of examples."
Researcher Affiliation | Industry | Yijun Li, Richard Zhang, Jingwan Lu, Eli Shechtman (Adobe Research), {yijli, rizhang, jlu, elishe}@adobe.com
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper provides a link to a personal publication page (https://yijunmaverick.github.io/publications/ewc/), but this is not a direct link to a source-code repository, nor does the text explicitly state that the code is released at this URL.
Open Datasets | Yes | "We use the FFHQ dataset [16] as the source for real faces and several other face databases as the target: emoji faces from the Bitmoji API [11]; animal faces from the AFHQ dataset [3] and portrait paintings from the Artistic-Faces dataset [44]. We use 10 cat and dog images from the much larger AFHQ dataset. The Artistic-Faces dataset contains artistic portraits of 16 different artists, with only 10 images available per artist. For the landscape, we use the CLP dataset [29], which contains thousands of landscape photos, as the source and 10 pencil landscape drawings as the target."
Dataset Splits | No | No specific dataset splits (e.g., percentages for training, validation, and testing) are provided. The paper's "10-shot generation" and "1-shot adaptation" refer to the number of training examples available in the target domain, not to formal dataset splits.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, RAM) used for the experiments are mentioned in the paper.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions) are mentioned. The paper only refers to the StyleGAN [16] framework and the DCGAN [30] network in general terms.
Experiment Setup | No | The paper reports the regularization weight λ = 5 × 10^8 but does not provide other common experimental setup details such as the learning rate, batch size, optimizer, or number of training epochs.
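
For reference, the paper's adaptation objective augments the adversarial loss on the few target examples with a Fisher-weighted penalty that keeps the adapted generator close to the source-domain weights, using the reported λ = 5 × 10^8. Below is a minimal PyTorch sketch of that EWC-regularized setup; the `adv_loss_fn`, `source_batches`, and `target_batch` names are hypothetical placeholders, and the code is an illustration under those assumptions rather than the authors' implementation.

```python
# Minimal sketch of an EWC-regularized few-shot adaptation objective,
# assuming a PyTorch generator and a user-supplied adversarial loss
# (adv_loss_fn, source_batches, target_batch are hypothetical placeholders).
import torch

LAMBDA_EWC = 5e8  # regularization weight reported in the paper


def estimate_fisher(generator, adv_loss_fn, source_batches):
    """Diagonal Fisher estimate: average squared gradient of the
    adversarial loss w.r.t. the source generator's parameters."""
    fisher = {n: torch.zeros_like(p) for n, p in generator.named_parameters()}
    for batch in source_batches:
        generator.zero_grad()
        adv_loss_fn(generator, batch).backward()
        for n, p in generator.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(source_batches), 1) for n, f in fisher.items()}


def ewc_penalty(generator, source_params, fisher):
    """Fisher-weighted squared distance to the source-domain weights."""
    penalty = 0.0
    for n, p in generator.named_parameters():
        penalty = penalty + (fisher[n] * (p - source_params[n]) ** 2).sum()
    return penalty


def adaptation_loss(generator, target_batch, adv_loss_fn, source_params, fisher):
    """Few-shot objective: adversarial loss on the target examples plus
    the EWC term, L = L_adv + lambda * sum_i F_i * (theta_i - theta_S_i)^2."""
    return adv_loss_fn(generator, target_batch) + LAMBDA_EWC * ewc_penalty(
        generator, source_params, fisher
    )
```

In such a setup, the source weights would be snapshotted once before adaptation, e.g. `source_params = {n: p.detach().clone() for n, p in generator.named_parameters()}`, and `adaptation_loss` would then be backpropagated on each batch drawn from the 10 target examples.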