Uncertainty Modeling in Generative Compressed Sensing

Authors: Yilang Zhang, Mengchu Xu, Xiaojun Mao, Jian Wang

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show a consistent improvement of CS-BGM over the baselines. In this section, we will state the setup of the numerical experiments and evaluate the performance of our method. Comparisons among CS-BGM, LASSO, CSGM, PGD-GAN, and Sparse-Gen will be presented to empirically appraise the recovery capabilities.
Researcher Affiliation | Academia | 1) School of Data Science, Fudan University, Shanghai, China; 2) School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China.
Pseudocode | Yes | Algorithm 1: Alternate optimization for CS-BGM
Open Source Code | Yes | For convenient reproducibility, our code is available at https://github.com/347325285/CS_BGM.
Open Datasets | Yes | We consider two datasets: i) the MNIST handwritten digit dataset (LeCun & Cortes, 2010) and ii) the CelebFaces Attributes (CelebA) dataset (Liu et al., 2015).
Dataset Splits | No | The paper mentions using a 'test dataset' for MNIST and selecting a number of CelebA images for comparison, but it does not specify explicit training, validation, or test splits for the datasets, nor does it mention cross-validation. It uses 'pre-trained VAE models' but does not detail the splits used for that pre-training process.
Hardware Specification | Yes | All experiments are run using TensorFlow (Abadi et al., 2015) on one Intel(R) Xeon(R) Silver 4116 CPU and four GeForce RTX 2080 Ti GPUs.
Software Dependencies | No | The paper states 'All experiments are run using Tensorflow (Abadi et al., 2015)'. However, it does not specify the TensorFlow version or any other software dependencies, which would be needed for a fully reproducible description.
Experiment Setup | Yes | For both models, we use the Adam optimizer (Kingma & Ba, 2015) with learning rate 0.001 for v and 0.002 for θ, respectively. In the fast implementation, named CS-BGM (w/o MC), which infers z with MAP estimation like θ, the learning rate for z is set to 0.001. The regularization coefficient λθ is 0.1. The number of MC samples for inference is set to 20 on the MNIST dataset and 10 on the CelebA dataset. In our experiments, we perform the optimization of z for 2000 iterations and then θ for 500 iterations with only 1 alternation. In particular, we set J = 1, K = 2000, and L = 500 in Alg. 1. In the fast implementation, we set J = 1, K = 500, and L = 200 in Alg. 1. (A hedged sketch of this alternate optimization follows the table.)
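
The Experiment Setup row describes Algorithm 1's alternate optimization only in prose. Below is a minimal sketch of that loop under stated assumptions: it uses TensorFlow 2 with a Keras generator; the names cs_bgm_recover, G, A, and y are hypothetical placeholders; the l2-deviation form of the λθ regularizer is a guess; and the Monte Carlo inference of z used by the full CS-BGM is omitted, so the sketch is closer to the CS-BGM (w/o MC) variant. The authors' actual implementation is available at https://github.com/347325285/CS_BGM.

import tensorflow as tf

def cs_bgm_recover(G, A, y, latent_dim, J=1, K=2000, L=500,
                   lr_z=0.001, lr_theta=0.002, lam_theta=0.1):
    # G: pre-trained Keras generator (VAE decoder); A: measurement matrix [m, n];
    # y: measurements [m, 1]. Defaults mirror the standard run reported above.
    z = tf.Variable(tf.random.normal([1, latent_dim]))          # latent code
    theta_0 = [tf.identity(w) for w in G.trainable_variables]   # snapshot of pre-trained weights
    opt_z = tf.keras.optimizers.Adam(learning_rate=lr_z)
    opt_theta = tf.keras.optimizers.Adam(learning_rate=lr_theta)

    def data_fit():
        # Standard CSGM-style data-fit term ||A G(z) - y||^2.
        x_hat = tf.reshape(G(z), [-1, 1])
        return tf.reduce_sum(tf.square(tf.matmul(A, x_hat) - y))

    for _ in range(J):                                           # J alternations
        for _ in range(K):                                       # K steps on z
            with tf.GradientTape() as tape:
                loss = data_fit()
            opt_z.apply_gradients(zip(tape.gradient(loss, [z]), [z]))
        for _ in range(L):                                       # L steps on theta
            with tf.GradientTape() as tape:
                # Assumed regularizer: keep theta close to its pre-trained values,
                # weighted by lam_theta (reported as 0.1); the paper may use a different form.
                reg = tf.add_n([tf.nn.l2_loss(w - w0)
                                for w, w0 in zip(G.trainable_variables, theta_0)])
                loss = data_fit() + lam_theta * reg
            grads = tape.gradient(loss, G.trainable_variables)
            opt_theta.apply_gradients(zip(grads, G.trainable_variables))

    return G(z)                                                  # recovered image estimate

With the fast-implementation settings reported above, the same sketch would be called with K=500 and L=200 instead of the defaults.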