On Investigating the Conservative Property of Score-Based Generative Models

Authors: Chen-Hao Chao, Wei-Fang Sun, Bo-Wun Cheng, Chun-Yi Lee

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In addition, our experimental results on the CIFAR-10, CIFAR-100, ImageNet, and SVHN datasets validate the effectiveness of QCSBMs.
Researcher Affiliation | Collaboration | ¹Elsa Lab, Department of Computer Science, National Tsing Hua University, Taiwan. ²NVIDIA AI Technology Center, NVIDIA Corporation.
Pseudocode | Yes | Algorithm 1: Training Procedure of QCSBM
Open Source Code | Yes | The code implementation for the experiments is provided in the following repository: https://github.com/chen-hao-chao/qcsbm.
Open Datasets | Yes | CIFAR-10, CIFAR-100 (Krizhevsky & Hinton, 2009), ImageNet-32x32 (Van Oord et al., 2016), and SVHN (Netzer et al., 2011) datasets.
Dataset Splits | No | The paper specifies 'training and test sets' with their sizes but does not mention explicit validation splits or percentages.
Hardware Specification | Yes | The results are evaluated on a single NVIDIA V100 GPU with 32 GB memory, and the batch size is fixed at 32.
Software Dependencies | No | The paper mentions the 'PyTorch framework' and the 'scipy.integrate.solve_ivp' library but does not specify their version numbers.
Experiment Setup | Yes | The SBMs s_U and s_C are trained using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 7.5 × 10⁻⁴ and a batch size of 5,000. The balancing factor λ is fixed to 0.1. The maximal and minimal noise scales σ_max and σ_min are set to 3 and 0.1, respectively.
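
The hyperparameters in the Experiment Setup row are concrete enough to sketch how the training configuration might be wired together. The following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: `ToyScore`, the Gaussian toy data, the log-uniform sampling of σ, and the plain denoising-score-matching objective are all illustrative assumptions, and the QCSBM regularizer weighted by λ (Algorithm 1 in the paper) is only indicated in a comment, since its exact form lives in the paper and the linked repository.

```python
import torch
import torch.nn as nn
from torch.optim import Adam

# Hyperparameters as reported in the Experiment Setup row.
LR = 7.5e-4                       # Adam learning rate
BATCH_SIZE = 5_000                # training batch size
LAMBDA = 0.1                      # balancing factor for the QCSBM regularizer
SIGMA_MAX, SIGMA_MIN = 3.0, 0.1   # maximal / minimal noise scales

class ToyScore(nn.Module):
    """Hypothetical stand-in for the paper's score network s(x, sigma)."""
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 128), nn.SiLU(), nn.Linear(128, dim)
        )

    def forward(self, x, sigma):
        # Condition on the noise scale by concatenating it to the input.
        return self.net(torch.cat([x, sigma[:, None]], dim=1))

def dsm_loss(model, x, sigma):
    """Standard denoising score matching with sigma^2 weighting:
    the score of the perturbed data points toward -eps / sigma."""
    eps = torch.randn_like(x)
    x_t = x + sigma[:, None] * eps
    score = model(x_t, sigma)
    return ((sigma[:, None] * score + eps) ** 2).sum(dim=1).mean()

model = ToyScore()
optimizer = Adam(model.parameters(), lr=LR)

# Toy 2-D Gaussian data in place of the real datasets.
data = torch.randn(50_000, 2)

for step in range(100):
    idx = torch.randint(0, data.shape[0], (BATCH_SIZE,))
    x = data[idx]
    # Sample per-example noise scales log-uniformly in [SIGMA_MIN, SIGMA_MAX].
    u = torch.rand(x.shape[0])
    sigma = SIGMA_MIN * (SIGMA_MAX / SIGMA_MIN) ** u

    loss = dsm_loss(model, x, sigma)
    # The paper adds LAMBDA times a quasi-conservativeness regularizer here
    # (Algorithm 1); its exact form is omitted and can be found in the repository.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The log-uniform sampling of σ is one common convention for NCSN-style models; the paper may instead use a fixed geometric sequence of noise scales between σ_min and σ_max, so this detail should be checked against the repository.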