Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

BlendGAN: Implicitly GAN Blending for Arbitrary Stylized Face Generation

Authors: Mingcong Liu, Qiang Li, Zekui Qin, Guoxin Zhang, Pengfei Wan, Wen Zheng

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that BlendGAN outperforms state-of-the-art methods in terms of visual quality and style diversity for both latent-guided and reference-guided stylized face synthesis. Our project webpage is https://onionliu.github.io/BlendGAN/. We compare our model with several leading baselines on diverse image synthesis, including AdaIN [25], MUNIT [16], FUNIT [17], DRIT++ [18], and StarGANv2 [19]. To evaluate the quality of our results, we use the Fréchet inception distance (FID) metric [50] to measure the discrepancy between the generated images and the AAHQ dataset.
Researcher Affiliation | Industry | All six authors (Mingcong Liu, Qiang Li, Zekui Qin, Guoxin Zhang, Pengfei Wan, Wen Zheng) are affiliated with Y-tech, Kuaishou Technology.
Pseudocode | No | No pseudocode or algorithm blocks are present in the paper.
Open Source Code | Yes | Our project webpage is https://onionliu.github.io/BlendGAN/
Open Datasets | Yes | We use FFHQ [8] as the natural-face dataset, which includes 70,000 high-quality face images. In addition, we build a new dataset of artistic-face images, Artstation-Artistic-face-HQ (AAHQ), consisting of 33,245 high-quality artistic faces at 1024 × 1024 resolution (Figure 4).
Dataset Splits | No | The paper uses the FFHQ and AAHQ datasets for training and evaluation but does not specify explicit training, validation, or test splits or percentages.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used to run the experiments are provided in the paper.
Software Dependencies | No | The paper notes that the code is based on a PyTorch implementation of StyleGAN2, but no version numbers for PyTorch or other software dependencies are provided.
Experiment Setup | No | The paper describes the model architecture and training objectives but does not provide concrete experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs).
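For reference, the FID metric cited in the Research Type evidence compares the Gaussian statistics (mean and covariance) of Inception-network features extracted from generated and real images. A minimal sketch of the standard computation is below; the function names and the feature arrays are illustrative assumptions, not code from the paper or from this report's pipeline, and a real evaluation would first run both image sets through an Inception model to obtain the features.

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    # Fréchet distance between two Gaussians N(mu1, sigma1), N(mu2, sigma2):
    #   ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * (sigma1 @ sigma2)^(1/2))
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        # Discard tiny imaginary components from numerical error in sqrtm.
        covmean = covmean.real
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

def fid_from_features(feats_a, feats_b):
    # feats_a, feats_b: (N, D) arrays of Inception activations,
    # one row per image, for the two image sets being compared.
    mu_a, sigma_a = feats_a.mean(axis=0), np.cov(feats_a, rowvar=False)
    mu_b, sigma_b = feats_b.mean(axis=0), np.cov(feats_b, rowvar=False)
    return frechet_distance(mu_a, sigma_a, mu_b, sigma_b)
```

Identical feature sets yield an FID near zero, and the score grows as the two feature distributions drift apart, which is why lower FID indicates generated images closer to the reference dataset.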