Compressed Sensing using Generative Models
Authors: Ashish Bora, Ajil Jalal, Eric Price, Alexandros G. Dimakis
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare the performance of our algorithm with baselines. We show a plot of per pixel reconstruction error as we vary the number of measurements. The vertical bars indicate 95% confidence intervals. |
| Researcher Affiliation | Academia | ¹University of Texas at Austin, Department of Computer Science; ²University of Texas at Austin, Department of Electrical and Computer Engineering. |
| Pseudocode | No | The paper describes the algorithm in Section 2 'Our Algorithm' using prose, but it does not provide a formal pseudocode block or algorithm listing; a hedged sketch of the stated objective appears below the table. |
| Open Source Code | Yes | Code for experiments in the paper can be found at: https://github.com/AshishBora/csgm |
| Open Datasets | Yes | The MNIST dataset consists of about 60,000 images of handwritten digits, where each image is of size 28 × 28 (LeCun et al., 1998). ... CelebA is a dataset of more than 200,000 face images of celebrities (Liu et al., 2015). |
| Dataset Splits | No | The paper mentions using a 'held out test set' but does not specify explicit training/validation/test splits (e.g., percentages or sample counts) or cross-validation setup. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., CPU, GPU models, or memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using the Adam optimizer and implies a TensorFlow implementation through a reference (Kim, 2017), but it does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | We train the VAE using the Adam optimizer (Kingma & Ba, 2014) with a mini-batch size 100 and a learning rate of 0.001. We use λ = 0.1 in Eqn. (3). ... Each update used the Adam optimizer (Kingma & Ba, 2014) with minibatch size 64, learning rate 0.0002 and β1 = 0.5. We use λ = 0.001 in Eqn. (3). |
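
The reconstruction procedure that the paper describes in prose (minimizing the measurement error of the generator output plus an ℓ2 penalty on the latent code, Eqn. (3)) could be sketched roughly as follows. This is a minimal PyTorch sketch for illustration, not the authors' TensorFlow implementation; the function name `reconstruct`, the random initialization, step count, and learning rate are assumptions, while the λ default of 0.1 matches the MNIST/VAE setting quoted above.

```python
import torch

def reconstruct(y, A, G, latent_dim, lam=0.1, steps=1000, lr=1e-2):
    """Hypothetical sketch: minimize ||A G(z) - y||^2 + lam * ||z||^2 over z.

    y: measurement vector of shape (m,)
    A: measurement matrix of shape (m, n)
    G: trained, differentiable generator mapping a latent vector to a
       flattened image of shape (n,)
    """
    z = torch.randn(latent_dim, requires_grad=True)  # random restart point (assumed)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        residual = A @ G(z) - y                      # measurement error A G(z) - y
        loss = (residual ** 2).sum() + lam * (z ** 2).sum()
        loss.backward()
        opt.step()
    return G(z).detach()                             # reconstruction x_hat = G(z*)
```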
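
For reference, the optimizer settings quoted in the Experiment Setup row could be configured along these lines. This is a hypothetical PyTorch sketch (the released code is TensorFlow-based); the placeholder generator architectures are assumptions, only the Adam hyperparameters come from the paper, and β2 is left at its default since the paper does not state it.

```python
import torch
import torch.nn as nn

# Placeholder models; the real VAE decoder (MNIST) and DCGAN generator (CelebA)
# are defined in the authors' repository and are not reproduced here.
vae = nn.Sequential(nn.Linear(20, 500), nn.Softplus(), nn.Linear(500, 784))
dcgan_gen = nn.Sequential(nn.Linear(100, 64 * 64 * 3), nn.Tanh())

# Adam settings reported in the paper:
vae_opt = torch.optim.Adam(vae.parameters(), lr=0.001)        # VAE: lr = 0.001, mini-batch 100
gan_opt = torch.optim.Adam(dcgan_gen.parameters(),
                           lr=0.0002, betas=(0.5, 0.999))     # DCGAN: lr = 0.0002, beta1 = 0.5, mini-batch 64
```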