Quantized Compressed Sensing with Score-Based Generative Models

Authors: Xiangming Meng, Yoshiyuki Kabashima

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on a variety of baseline datasets demonstrate that the proposed QCS-SGM significantly outperforms existing state-of-the-art algorithms by a large margin for both in-distribution and out-of-distribution samples. Moreover, as a posterior sampling method, QCS-SGM can be easily used to obtain confidence intervals or uncertainty estimates of the reconstructed results. The code is available at https://github.com/mengxiangming/QCS-SGM.
Researcher Affiliation | Academia | Xiangming Meng and Yoshiyuki Kabashima, Institute for Physics of Intelligence and Department of Physics, The University of Tokyo, 7-3-1 Hongo, Tokyo 113-0033, Japan; {meng,kaba}@g.ecc.u-tokyo.ac.jp
Pseudocode | Yes | Algorithm 1: Quantized Compressed Sensing with SGM (QCS-SGM). [A hedged code sketch of this sampler is given after the table.]
Open Source Code | Yes | The code is available at https://github.com/mengxiangming/QCS-SGM.
Open Datasets | Yes | Datasets: Three popular datasets are considered: MNIST (LeCun & Cortes, 2010), Cifar-10 (Krizhevsky & Hinton, 2009), and CelebA (Liu et al., 2015), as well as the high-resolution Flickr Faces High Quality (FFHQ) dataset (Karras et al., 2018).
Dataset Splits | Yes | Results are averaged over a validation set of size 100.
Hardware Specification | No | The paper does not specify the hardware used for running the experiments (e.g., specific GPU models, CPU types, or memory amounts).
Software Dependencies | No | The paper mentions 'NCSNv2 (Song & Ermon, 2020)' and refers to external links for pre-trained models, but it does not specify version numbers for any software libraries, programming languages, or other dependencies used to implement or run the experiments.
Experiment Setup | Yes | When performing posterior sampling using the QCS-SGM in Algorithm 1, for simplicity, we set a constant value ϵ = 0.0002 for all quantized measurements (e.g., 1-bit, 2-bit, 3-bit) for MNIST, Cifar10, and CelebA. For the high-resolution FFHQ 256×256, we set ϵ = 0.00005 for the 1-bit case and ϵ = 0.00002 for the 2-bit and 3-bit cases, respectively. For all linear measurements for MNIST, Cifar10, and CelebA, we set ϵ = 0.00002. It is believed that some improvement can be achieved with further fine-tuning of ϵ for different scenarios. For MNIST and Cifar-10, we set β1 = 50, βT = 0.01, T = 232; for CelebA, we set β1 = 90, βT = 0.01, T = 500; for FFHQ, we set β1 = 348, βT = 0.01, T = 2311, which are the same as Song & Ermon (2020). The number of steps K in QCS-SGM for each noise scale is set to K = 5 in all experiments. For more details, please refer to the submitted code. In training NCSNv2 for MNIST, we used a training setup similar to that of Song & Ermon (2020) for Cifar10, as follows. Training: batch-size: 128; n-epochs: 500000; n-iters: 300001; snapshot-freq: 50000; snapshot-sampling: true; anneal-power: 2; log-all-sigmas: false. [These values are collected into an illustrative configuration block after the table.]
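For reference, Algorithm 1 (QCS-SGM) is essentially annealed Langevin dynamics in which the learned prior score is augmented with the closed-form score of a noise-perturbed pseudo-likelihood for the quantized measurements y = Q(Ax + n). The sketch below illustrates this structure only; the score_model interface, the quantizer bin edges lower/upper, and the simple form used for the effective noise level sigma_eff are assumptions for illustration, not the authors' exact implementation (the paper derives the precise perturbed-likelihood variance).

```python
# Hedged sketch of Algorithm 1 (QCS-SGM): annealed Langevin posterior sampling
# with a quantized-measurement likelihood score. Names and interfaces here are
# illustrative assumptions, not the authors' released code.
import math
import torch

def gaussian_cdf(t):
    return 0.5 * (1.0 + torch.erf(t / math.sqrt(2.0)))

def gaussian_pdf(t):
    return torch.exp(-0.5 * t ** 2) / math.sqrt(2.0 * math.pi)

def likelihood_score(x, A, lower, upper, sigma_eff):
    """Gradient of log p(y | x) when measurement i reports the quantization
    bin [lower_i, upper_i] that contains a_i^T x + noise."""
    z = A @ x                                   # noiseless linear measurements
    a = (lower - z) / sigma_eff
    b = (upper - z) / sigma_eff
    prob = (gaussian_cdf(b) - gaussian_cdf(a)).clamp_min(1e-12)
    grad_z = (gaussian_pdf(a) - gaussian_pdf(b)) / (sigma_eff * prob)
    return A.T @ grad_z                         # chain rule through z = A x

@torch.no_grad()
def qcs_sgm_sample(score_model, A, lower, upper, sigmas, noise_std, K=5, eps=2e-4):
    x = torch.rand(A.shape[1])                  # random initialization
    for sigma_t in sigmas:                      # noise scales, largest to smallest
        alpha = eps * (sigma_t / sigmas[-1]) ** 2   # NCSNv2-style step size
        # Effective std of the perturbed pseudo-likelihood; this simple
        # combination of noise_std and sigma_t is an assumption -- the exact
        # expression depends on A and is given in the paper.
        sigma_eff = (noise_std ** 2 + sigma_t ** 2) ** 0.5
        for _ in range(K):                      # K Langevin steps per noise scale
            prior = score_model(x, sigma_t)     # learned score of p_{sigma_t}(x)
            lik = likelihood_score(x, A, lower, upper, sigma_eff)
            x = x + alpha * (prior + lik) + math.sqrt(2 * alpha) * torch.randn_like(x)
    return x
```

Here sigmas would be the geometric sequence of NCSNv2 noise scales, and eps corresponds to the constant ϵ quoted in the Experiment Setup row above.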
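For concreteness, the hyperparameters quoted in the Experiment Setup row can be collected into a single configuration block. The values below are taken verbatim from the excerpt; the dictionary layout and key names are my own and do not reflect the authors' actual config schema.

```python
# Hyperparameters quoted in the Experiment Setup excerpt; grouping and key
# names are illustrative assumptions.
QCS_SGM_SETUP = {
    "epsilon": {                                   # constant Langevin step-size factor
        "mnist_cifar10_celeba_quantized": 2e-4,    # 1-, 2-, and 3-bit measurements
        "ffhq256_1bit": 5e-5,
        "ffhq256_2bit_3bit": 2e-5,
        "mnist_cifar10_celeba_linear": 2e-5,
    },
    "noise_schedule": {                            # (beta_1, beta_T, T), as in Song & Ermon (2020)
        "mnist_cifar10": (50, 0.01, 232),
        "celeba": (90, 0.01, 500),
        "ffhq256": (348, 0.01, 2311),
    },
    "steps_per_noise_scale_K": 5,
    "ncsnv2_mnist_training": {                     # setup mirrored from the Cifar10 recipe
        "batch_size": 128,
        "n_epochs": 500000,
        "n_iters": 300001,
        "snapshot_freq": 50000,
        "snapshot_sampling": True,
        "anneal_power": 2,
        "log_all_sigmas": False,
    },
}
```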