Consistency-GAN: Training GANs with Consistency Model

Authors: Yunpeng Wang, Meng Pang, Shengbo Chen, Hong Rao

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our evaluations on various datasets indicate that our approach significantly accelerates sampling speeds compared to traditional diffusion models, while preserving sample quality and diversity. Furthermore, our approach also has better model coverage than traditional adversarial training methods. We conduct a series of experiments on datasets ranging from low to high resolutions, including CIFAR-10 (32×32) (Krizhevsky et al. 2009), STL-10 (64×64) (Coates, Ng, and Lee 2011) and CelebA (256×256) (Liu et al. 2015).
Researcher Affiliation | Academia | Yunpeng Wang¹, Meng Pang¹*, Shengbo Chen²*, Hong Rao¹* (¹Nanchang University, ²Henan University)
Pseudocode | Yes | Algorithm 1: Training of Consistency-GAN (reconstructed below; a PyTorch-style sketch of Step II follows the table)
    Input: Random noise z ~ p(z), original data x ~ p(x)
    Parameter: Initial training parameters θ and γ; sequence of time points t1, t2, ..., tk+1; inference step N
    Output: Generated samples xg = G(z), noise-mixture samples y = fγ(x, t)
    1: Step I: Consistency mapping training
    2: k ← 0
    3: repeat
    4:   Sample x ~ p(x)
    5:   Sample z ~ N(0, I)
    6:   Update γ using Equations (2) and (3)
    7:   k ← k + 1
    8: until convergence
    9: Step II: Adversarial training
    10: while i <= number of training iterations do
    11:   Sample noise samples: z ~ p(z)
    12:   Sample from original data: x ~ p(x)
    13:   Obtain generated samples: xg ← G(z)
    14:   Obtain pretrained consistency module fγ(·)
    15:   Noise injection:
    16:   for j = 1 to N do
    17:     y ← f(x, j)
    18:     yg ← f(xg, j)
    19:   end for
    20:   Update θ using Equation (4)
    21: end while
Open Source Code | No | The paper does not contain an unambiguous statement or a direct link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We conduct a series of experiments on datasets ranging from low to high resolutions, including CIFAR-10 (32×32) (Krizhevsky et al. 2009), STL-10 (64×64) (Coates, Ng, and Lee 2011) and CelebA (256×256) (Liu et al. 2015). (A torchvision loading sketch follows the table.)
Dataset Splits | No | The paper mentions training sets for CIFAR-10 (50,000 images) and CelebA (30,000 images), and describes data preprocessing for STL-10, but it does not specify explicit training/validation/test splits, exact percentages, or sample counts for a validation set.
Hardware Specification | Yes | The training is conducted using 4 NVIDIA A800 GPUs, with a batch size of 64 and a total of 20,000 training iterations. (A multi-GPU setup sketch follows the table.)
Software Dependencies | No | The paper states: 'We implement Consistency-GAN using PyTorch.' It mentions the software name but does not provide a specific version number for PyTorch or any other ancillary software.
Experiment Setup | Yes | The parameters for training the consistency mapping module are empirically set as follows: µ0 = 0.9 (the consistency regularization weight), and the distance metric d is the L2 metric. During adversarial training, the consistency mapping is performed for N = 5 steps, and Gaussian noise with a standard deviation of σ = 0.5 is added. The training is conducted using 4 NVIDIA A800 GPUs, with a batch size of 64 and a total of 20,000 training iterations. (These values are collected into a config sketch after the table.)
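
To make Algorithm 1 above more concrete, the following PyTorch-style sketch covers Step II (adversarial training) only. It is illustrative, not the authors' code: the modules G, D, the pretrained consistency mapping f_gamma with assumed signature f_gamma(x, t), and the optimizers are all hypothetical, and a standard non-saturating GAN loss stands in for Equation (4), which is not reproduced in this report.

    import torch
    import torch.nn.functional as F

    def adversarial_training_step(G, D, f_gamma, x, z, opt_G, opt_D, N=5):
        """One illustrative iteration of Step II of Algorithm 1 (not the paper's exact loss)."""
        xg = G(z)  # generated samples x_g = G(z)

        # Noise injection through the pretrained consistency mapping at
        # time steps j = 1..N (lines 16-19 of Algorithm 1).
        y_real = [f_gamma(x, j) for j in range(1, N + 1)]
        y_fake = [f_gamma(xg, j) for j in range(1, N + 1)]

        # Placeholder for Equation (4): a non-saturating GAN loss on the
        # noise-mixture samples.
        d_real = torch.stack([D(y) for y in y_real]).mean()
        d_fake = torch.stack([D(y.detach()) for y in y_fake]).mean()
        loss_D = F.softplus(-d_real) + F.softplus(d_fake)
        opt_D.zero_grad()
        loss_D.backward()
        opt_D.step()

        # Generator update on the same noise-mixture samples.
        loss_G = F.softplus(-torch.stack([D(y) for y in y_fake]).mean())
        opt_G.zero_grad()
        loss_G.backward()
        opt_G.step()
        return loss_D.item(), loss_G.item()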
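The three datasets cited in the Open Datasets row are all distributed through torchvision. The sketch below shows one way to load them at the reported resolutions; the resize/crop/normalization pipeline and the STL-10 split choice are assumptions, not details from the paper.

    from torchvision import datasets, transforms

    def make_transform(size):
        # Illustrative preprocessing: resize/crop to the evaluation resolution
        # and map pixel values to [-1, 1]; the paper's exact pipeline may differ.
        return transforms.Compose([
            transforms.Resize(size),
            transforms.CenterCrop(size),
            transforms.ToTensor(),
            transforms.Normalize([0.5] * 3, [0.5] * 3),
        ])

    cifar10 = datasets.CIFAR10("data", train=True, transform=make_transform(32), download=True)
    stl10 = datasets.STL10("data", split="unlabeled", transform=make_transform(64), download=True)
    celeba = datasets.CelebA("data", split="train", transform=make_transform(256), download=True)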
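For the hardware row, this is a minimal sketch of how a 4-GPU data-parallel run could be wired up with PyTorch DistributedDataParallel, launched for example with torchrun --nproc_per_node=4 train.py; the helper and the per-GPU batch split are assumptions, not the authors' setup.

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def setup_ddp(model):
        # One process per GPU; torchrun sets LOCAL_RANK for each process.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)
        # A global batch size of 64 across 4 GPUs gives 16 samples per process.
        return DDP(model.cuda(local_rank), device_ids=[local_rank]), local_rank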
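Finally, the hyperparameters reported in the Experiment Setup row, gathered into a single configuration sketch; the key names and the two helpers are illustrative stand-ins (the paper's definitions of the metric d and the noise schedule are not reproduced here).

    import torch

    # Hyperparameters as reported in the experiment setup (key names are ours).
    config = {
        "mu0": 0.9,                # consistency regularization weight
        "consistency_steps": 5,    # N, consistency mapping steps in adversarial training
        "noise_sigma": 0.5,        # std of the injected Gaussian noise
        "batch_size": 64,
        "train_iterations": 20_000,
        "num_gpus": 4,             # NVIDIA A800 GPUs in the reported setup
    }

    def l2_distance(a, b):
        # The distance metric d with the L2 norm, averaged over the batch.
        return torch.norm((a - b).flatten(start_dim=1), dim=1).mean()

    def add_gaussian_noise(x, sigma=config["noise_sigma"]):
        # Gaussian noise injection with the reported standard deviation.
        return x + sigma * torch.randn_like(x)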