Contrastive Sampling Chains in Diffusion Models

Authors: Junyu Zhang, Daochang Liu, Shichao Zhang, Chang Xu

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To validate the efficacy of our method, we conduct comprehensive experiments. In this section, we demonstrate the effectiveness of the contrastive sampling chain by experimental results, namely higher image quality, better log-likelihood, or slightly faster sampling speed. Comprehensive experiments are conducted on various datasets, including CIFAR-10, CelebA/FFHQ 64x64, and ImageNet 64x64. (A minimal FID evaluation sketch follows the table.)
Researcher Affiliation | Academia | Junyu Zhang (Central South University, zhangjunyu@csu.edu.cn); Daochang Liu (The University of Sydney, daochang.liu@sydney.edu.au); Shichao Zhang (Central South University, zhangsc@csu.edu.cn); Chang Xu (The University of Sydney, c.xu@sydney.edu.au)
Pseudocode | No | Not found. The paper describes its method verbally and mathematically but does not provide structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at Contrastive-Sampling.
Open Datasets | Yes | Comprehensive experiments are conducted on various datasets, including CIFAR-10, CelebA/FFHQ 64x64, and ImageNet 64x64.
Dataset Splits | No | Not found. The paper mentions evaluating on 50,000 samples for testing but does not specify training/validation/test splits (e.g., percentages or counts for each) or how data was partitioned for validation.
Hardware Specification | Yes | We utilized NVIDIA A100 GPUs for experiments on the CIFAR-10, CelebA, FFHQ, and ImageNet datasets. Additionally, we employed NVIDIA V100 GPUs for experiments involving the combination of fast sampling methods.
Software Dependencies | No | Not found. The paper does not specify software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9').
Experiment Setup | No | Not found. The paper states that it maintains 'all the training settings of the pre-trained DMs' but does not explicitly provide specific hyperparameters or system-level training settings for its own fine-tuning process.
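
As an illustration of the evaluation protocol quoted above (image quality measured over 50,000 generated samples), the following is a minimal sketch of an FID-50k computation on CIFAR-10 at 32x32. It uses torchmetrics for the FID metric and a hypothetical `sample_fn` as a stand-in for the paper's contrastive sampling chain; the batch sizes and library choice are assumptions, not details taken from the paper.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torchmetrics.image.fid import FrechetInceptionDistance

device = "cuda" if torch.cuda.is_available() else "cpu"
fid = FrechetInceptionDistance(feature=2048).to(device)

# Reference statistics from the CIFAR-10 training split (uint8 NCHW images).
ref = datasets.CIFAR10(root="./data", train=True, download=True,
                       transform=transforms.PILToTensor())
for imgs, _ in DataLoader(ref, batch_size=500, num_workers=4):
    fid.update(imgs.to(device), real=True)

def sample_fn(n: int) -> torch.Tensor:
    # Hypothetical stand-in sampler (random uint8 noise); replace with the
    # diffusion model's sampler, returning (n, 3, 32, 32) uint8 images.
    return torch.randint(0, 256, (n, 3, 32, 32), dtype=torch.uint8)

# Accumulate statistics over 50,000 generated samples, matching the
# 50,000-sample evaluation mentioned in the table above.
num_samples, batch = 50_000, 500
for _ in range(num_samples // batch):
    fid.update(sample_fn(batch).to(device), real=False)

print(f"FID-50k: {fid.compute().item():.2f}")
```

The other metrics mentioned in the Research Type row, log-likelihood (bits/dim) and sampling speed, depend on the specific model and sampler and are not sketched here.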