Instance-Optimal Compressed Sensing via Posterior Sampling

Authors: Ajil Jalal, Sushrut Karmalkar, Alex Dimakis, Eric Price

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we discuss our algorithm for posterior sampling, discuss why existing algorithms can fail, and show our empirical evaluation of posterior sampling versus baselines. 5.1. Datasets and Models We perform our experiments on the CelebA-HQ (Liu et al., 2018; Karras et al., 2017) and Flickr Faces-HQ (Karras et al., 2019) datasets. ... 5.4. Experimental Results
Researcher Affiliation | Academia | 1. University of Texas at Austin, Department of Electrical and Computer Engineering; 2. University of Texas at Austin, Department of Computer Science.
Pseudocode | No | The paper describes iterative procedures for Langevin dynamics (e.g., z_{t+1} ← z_t + (α_t/2) ∇_z log p(z_t | y) + √α_t ζ_t, with ζ_t ~ N(0, I)), but these are presented as mathematical equations rather than formal pseudocode blocks or algorithms.
Open Source Code | Yes | Code and models available at: https://github.com/ajiljalal/code-cs-fairness.
Open Datasets | Yes | We perform our experiments on the CelebA-HQ (Liu et al., 2018; Karras et al., 2017) and Flickr Faces-HQ (Karras et al., 2019) datasets.
Dataset Splits | No | The paper mentions 'validation images' and a 'validation set' in the context of hyperparameter selection, but does not provide specific details on the dataset splits (e.g., percentages, counts, or splitting methodology) for reproduction.
Hardware Specification | No | The paper mentions 'computing resources from TACC' in the acknowledgements, but does not specify any particular hardware components such as GPU models, CPU types, or detailed specifications of the computing environment used for experiments.
Software Dependencies | No | The paper mentions specific generative models (Glow, NCSNv2) and techniques (Langevin dynamics), but it does not provide specific version numbers for any software dependencies or libraries used for implementation (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup | Yes | For Langevin dynamics, the paper describes the annealing process for σ_t: 'By taking a decreasing sequence of σt that approach the true value of σ, we can anneal Langevin dynamics and sample from p(z|y). Please refer to Appendix C for more details about how σt varies.' It also mentions 'This model also requires annealing, and we follow the schedule prescribed by (Song & Ermon, 2020). Please see Appendix C for more details.' For baselines, it states: 'The MAP baseline in Figure 4 tries to maximize the posterior likelihood, and hence hyperparameters are selected so that the posterior is optimized. In contrast, what we term the modified-MAP algorithm was proposed by (Asim et al., 2019), and this algorithm picks hyperparameters that minimize reconstruction error on a holdout set of images.'
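The paper presents the annealed Langevin update only as an equation, not as pseudocode. For readers who want a concrete picture of the procedure, below is a minimal, hypothetical numpy sketch of annealed Langevin dynamics for posterior sampling in a linear measurement model y = Az + noise. The `score_fn` interface (returning ∇_z log p_σ(z) from a trained score model), the quadratic step-size schedule α_t ∝ σ_t², and the Gaussian likelihood-gradient term are illustrative assumptions in the spirit of Song & Ermon's schedule; the authors' actual implementation is in the linked repository.

```python
import numpy as np

def annealed_langevin_posterior_sample(score_fn, y, A, sigma_noise, sigmas,
                                       steps_per_level=100, step_base=1e-5,
                                       rng=None):
    """Illustrative sketch: sample z ~ p(z | y) for y = A z + noise.

    score_fn(z, sigma) should return grad_z log p_sigma(z), e.g. from a
    trained score network (hypothetical interface, not the paper's API).
    sigmas is a decreasing sequence of noise levels used for annealing.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[1]
    z = rng.standard_normal(n)  # initialize from a broad Gaussian
    for sigma_t in sigmas:
        # quadratic step-size schedule, as in Song & Ermon (assumed here)
        alpha = step_base * (sigma_t / sigmas[-1]) ** 2
        for _ in range(steps_per_level):
            grad_prior = score_fn(z, sigma_t)
            # gradient of log p(y | z) under Gaussian measurement noise,
            # with the annealing noise level folded into the variance
            grad_lik = A.T @ (y - A @ z) / (sigma_noise**2 + sigma_t**2)
            noise = rng.standard_normal(n)
            # Langevin update: z <- z + (alpha/2) grad log p(z|y) + sqrt(alpha) noise
            z = z + (alpha / 2.0) * (grad_prior + grad_lik) + np.sqrt(alpha) * noise
    return z
```

With a Gaussian prior, `score_fn` can be written in closed form (e.g. `lambda z, s: -z / (1 + s**2)`), which makes the sketch easy to sanity-check before plugging in a learned score model.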