BlendGAN: Implicitly GAN Blending for Arbitrary Stylized Face Generation

Authors: Mingcong Liu, Qiang Li, Zekui Qin, Guoxin Zhang, Pengfei Wan, Wen Zheng

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that BlendGAN outperforms state-of-the-art methods in terms of visual quality and style diversity for both latent-guided and reference-guided stylized face synthesis. Our project webpage is https://onionliu.github.io/BlendGAN/. We compare our model with several leading baselines on diverse image synthesis, including AdaIN [25], MUNIT [16], FUNIT [17], DRIT++ [18], and StarGAN v2 [19]. To evaluate the quality of our results, we use the Fréchet inception distance (FID) metric [50] to measure the discrepancy between the generated images and the AAHQ dataset. (See the FID sketch after this table.)
Researcher Affiliation | Industry | All six authors are affiliated with Y-tech, Kuaishou Technology: Mingcong Liu (liumingcong03@kuaishou.com), Qiang Li (liqiang03@kuaishou.com), Zekui Qin (qinzekui03@kuaishou.com), Guoxin Zhang (zhangguoxin@kuaishou.com), Pengfei Wan (wanpengfei@kuaishou.com), Wen Zheng (zhengwen@kuaishou.com).
Pseudocode | No | No pseudocode or algorithm blocks are present in the paper.
Open Source Code | Yes | Our project webpage is https://onionliu.github.io/BlendGAN/.
Open Datasets | Yes | We use FFHQ [8] as the natural-face dataset, which includes 70,000 high-quality face images. In addition, we build a new dataset of artistic-face images, Artstation-Artistic-face-HQ (AAHQ), consisting of 33,245 high-quality artistic faces at 1024×1024 resolution (Figure 4). (A loading sketch follows this table.)
Dataset Splits | No | The paper mentions using the FFHQ and AAHQ datasets for training and evaluation but does not specify explicit training, validation, or test splits or percentages. (A reproducible split sketch follows this table.)
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are provided in the paper.
Software Dependencies | No | The paper mentions that the code is based on a PyTorch implementation of StyleGAN2, but no specific version numbers for PyTorch or other software dependencies are provided.
Experiment Setup | No | The paper describes the model architecture and training objectives but does not provide specific experimental-setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs). (A placeholder configuration sketch follows this table.)
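
The FID comparison against AAHQ quoted above can be reproduced with any standard FID implementation. Below is a minimal sketch assuming the third-party pytorch-fid package and two hypothetical image folders, generated_samples/ and aahq_reference/; the paper does not say which FID implementation it used, so treat this purely as an illustration.

```python
# Hedged sketch: FID between generated images and an AAHQ reference set,
# assuming the third-party `pytorch-fid` package (pip install pytorch-fid).
# Folder names are hypothetical, not from the paper.
import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

device = "cuda" if torch.cuda.is_available() else "cpu"

fid = calculate_fid_given_paths(
    ["generated_samples/", "aahq_reference/"],  # two folders of images
    batch_size=50,   # images per InceptionV3 forward pass
    device=device,
    dims=2048,       # pool3 features of InceptionV3, the standard choice
)
print(f"FID: {fid:.2f}")
```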
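
Since FFHQ and AAHQ are distributed as flat folders of high-resolution face images, a minimal PyTorch loading sketch might look like the following. The root path AAHQ/, the PNG extension, and the [-1, 1] normalization are assumptions, not details stated in the paper.

```python
# Hedged sketch: load a flat folder of 1024x1024 face images with PyTorch.
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms

class FlatImageFolder(Dataset):
    """Minimal loader for a flat folder of face images (hypothetical paths)."""
    def __init__(self, root, size=1024):
        self.paths = sorted(Path(root).glob("*.png"))
        self.tf = transforms.Compose([
            transforms.Resize(size),
            transforms.CenterCrop(size),
            transforms.ToTensor(),
            transforms.Normalize([0.5] * 3, [0.5] * 3),  # scale to [-1, 1]
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        return self.tf(Image.open(self.paths[i]).convert("RGB"))

loader = DataLoader(FlatImageFolder("AAHQ/"), batch_size=4, shuffle=True)
```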
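
Because no train/validation/test splits are reported, anyone reproducing the evaluation must define their own. One reproducible option is a seeded shuffle of the file list, sketched below; the 90/5/5 ratios and the seed are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: a deterministic split of an image folder.
# Ratios and seed are assumptions, since the paper reports no splits.
import random
from pathlib import Path

files = sorted(Path("AAHQ/").glob("*.png"))  # hypothetical dataset root
rng = random.Random(42)                      # fixed seed for reproducibility
rng.shuffle(files)

n = len(files)
train = files[: int(0.90 * n)]
val = files[int(0.90 * n) : int(0.95 * n)]
test = files[int(0.95 * n) :]
print(len(train), len(val), len(test))
```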
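
Likewise, with no hyperparameters reported, a reproduction would have to fall back on defaults from the underlying StyleGAN2 codebase. The configuration sketch below uses commonly cited StyleGAN2 values purely as placeholders; none of these numbers come from the paper.

```python
# Hedged sketch: placeholder training configuration. Every value below
# is an assumption (typical StyleGAN2 defaults), not a reported setting.
from dataclasses import dataclass

@dataclass
class TrainConfig:
    resolution: int = 1024        # matches the 1024x1024 AAHQ images
    batch_size: int = 32          # assumption; not reported
    learning_rate: float = 0.002  # common StyleGAN2 Adam LR; assumption
    r1_gamma: float = 10.0        # R1 regularization weight; assumption
    total_kimg: int = 25000       # thousands of real images shown; assumption

cfg = TrainConfig()
print(cfg)
```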