SMaRt: Improving GANs with Score Matching Regularity

Authors: Mengfei Xia, Yujun Shen, Ceyuan Yang, Ran Yi, Wenping Wang, Yong-Jin Liu

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Regarding the empirical evidence, we first design a toy example to show that training GANs with the aid of a ground-truth score function can help reproduce the real data distribution more accurately, and then confirm that our approach can consistently boost the synthesis performance of various state-of-the-art GANs on real-world datasets with pre-trained diffusion models acting as the approximate score function. For instance, when training Aurora on the ImageNet 64×64 dataset, we manage to improve FID from 8.87 to 7.11.
Researcher Affiliation | Collaboration | 1 Tsinghua University, 2 BNRist, 3 Ant Group, 4 Shanghai AI Laboratory, 5 Shanghai Jiao Tong University, 6 Texas A&M University.
Pseudocode | No | The paper describes its methodology using mathematical equations but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/thuxmf/SMaRt.
Open Datasets | Yes | We train StyleGAN2 on CIFAR10 (Krizhevsky & Hinton, 2009), ImageNet 64×64 (Deng et al., 2009), and LSUN Bedroom 256×256 (Yu et al., 2015).
Dataset Splits | No | The paper lists the datasets used (CIFAR10, ImageNet, LSUN Bedroom) but does not explicitly provide train/validation/test splits, percentages, or references to predefined splits for its experiments.
Hardware Specification | Yes | We train SMaRt with NVIDIA A100 GPUs.
Software Dependencies | No | The paper references specific pre-trained diffusion models and GAN implementations (e.g., ADM, EDM, StyleGAN2) along with their corresponding research papers and GitHub links. However, it does not list version numbers for underlying software dependencies such as Python, PyTorch, TensorFlow, or other libraries.
Experiment Setup | Yes | Table 6: Empirical values of hyper-parameters for SMaRt used in our experiments.
  Dataset       | CIFAR10     | ImageNet 64 | ImageNet 128 | LSUN Bedroom
  Setting       | Conditional | Conditional | Conditional  | Unconditional
  Dataset Scale | 50K Images  | 1.3M Images | 1.3M Images  | 3M Images
  λ_score       | 0.01        | 0.1         | 0.1          | 0.1
  t             | [40, 60]    | [25, 35]    | [25, 35]     | [25, 35]
  Frequency     | 8           | 8           | 8            | 8
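For quick reference, the hyper-parameters reported in Table 6 can be collected into a small configuration mapping. The dictionary layout, key names, and the helper function below are illustrative assumptions, not taken from the SMaRt codebase:

```python
# Hyper-parameters as reported in Table 6 of the paper.
# NOTE: the dictionary structure, key names, and helper below are
# illustrative assumptions, not the official implementation.
SMART_HPARAMS = {
    "CIFAR10":      {"setting": "conditional",   "scale": "50K images",
                     "lambda_score": 0.01, "t_range": (40, 60), "frequency": 8},
    "ImageNet-64":  {"setting": "conditional",   "scale": "1.3M images",
                     "lambda_score": 0.1,  "t_range": (25, 35), "frequency": 8},
    "ImageNet-128": {"setting": "conditional",   "scale": "1.3M images",
                     "lambda_score": 0.1,  "t_range": (25, 35), "frequency": 8},
    "LSUN-Bedroom": {"setting": "unconditional", "scale": "3M images",
                     "lambda_score": 0.1,  "t_range": (25, 35), "frequency": 8},
}

def use_score_regularizer(step: int, dataset: str = "ImageNet-64") -> bool:
    """Apply the score-matching regularizer every `frequency` generator steps
    (hypothetical scheduling helper based on Table 6's Frequency row)."""
    return step % SMART_HPARAMS[dataset]["frequency"] == 0
```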
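The Pseudocode row above notes that the paper presents its method only through equations. As a rough orientation, the toy sketch below illustrates the general idea of a score-matching regularity term on generated samples, using a known data distribution N(0, 1) so the score is analytic. All names, the one-parameter "generator", and the exact penalty form are assumptions made for this sketch, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

LAMBDA_SCORE = 0.1  # regularizer weight (cf. Table 6 for ImageNet/LSUN)
SIGMA = 0.5         # single toy noise level standing in for the paper's t range

def generate(z, theta):
    """Toy 'generator': shift + scale of the latent. Purely illustrative."""
    return theta[0] + theta[1] * z

def gt_score_noised(x_t, sigma=SIGMA):
    """Ground-truth score of the noised marginal when the data is N(0, 1):
    if x ~ N(0, 1) and x_t = x + sigma * eps, then x_t ~ N(0, 1 + sigma^2)."""
    return -x_t / (1.0 + sigma**2)

def score_matching_regularity(x, sigma=SIGMA):
    """Diffusion-style penalty on generated samples: in expectation, the
    score of the noised data distribution should undo the injected noise,
    so the penalty grows as samples drift off the data manifold."""
    eps = rng.standard_normal(x.shape)
    x_t = x + sigma * eps
    return np.mean((sigma * gt_score_noised(x_t, sigma) + eps) ** 2)

# In a full training loop this term would be added to the adversarial loss,
#   g_loss = gan_loss + LAMBDA_SCORE * score_matching_regularity(fakes),
# and, per Table 6, evaluated only every `frequency` generator steps.
z = rng.standard_normal(4096)
off_manifold = score_matching_regularity(generate(z, np.array([5.0, 1.0])))
on_manifold = score_matching_regularity(generate(z, np.array([0.0, 1.0])))
# on_manifold < off_manifold: samples matching N(0, 1) incur a smaller penalty.
```

Used alone, such a term would pull samples toward high-density regions of the data distribution, which is why it acts as a regularizer alongside the adversarial loss rather than replacing it.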