Deep Compressed Sensing
Authors: Yan Wu, Mihaela Rosca, Timothy Lillicrap
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We first evaluate the DCS model using the MNIST (LeCun et al., 1998) and CelebA (Liu et al., 2015) datasets. ... Tables 2 and 3 summarise the results from our models as well as the baseline model from Bora et al. (2017). ... We use the CIFAR dataset ... and evaluate them using the Inception Score (IS) (Salimans et al., 2016) and Fréchet Inception Distance (FID) (Heusel et al., 2017). |
| Researcher Affiliation | Industry | 1DeepMind, London, UK. Correspondence to: Yan Wu <yanwu@google.com>. |
| Pseudocode | Yes | Algorithm 1 Compressed Sensing with Meta Learning |
| Open Source Code | Yes | Our code will be available at https://github.com/deepmind/deep-compressed-sensing. |
| Open Datasets | Yes | We first evaluate the DCS model using the MNIST (LeCun et al., 1998) and CelebA (Liu et al., 2015) datasets. ... We use the CIFAR dataset which contains various categories of natural images, whose features from an Inception Network (Ioffe & Szegedy, 2015) are meaningful for evaluating the IS and FID. |
| Dataset Splits | No | Tables 2 and 3 summarise the results from our models as well as the baseline model from Bora et al. (2017). ... The reconstruction loss for the baseline model is estimated from Figure 1 in Bora et al. (2017). DCS performs significantly better than the baseline. In addition, while the baseline model used hundreds or thousands of gradient-descent steps with several re-starts, we only used 3 steps without any restarting, achieving orders of magnitude higher efficiency. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x) needed to replicate the experiment beyond general mentions of concepts or methods. |
| Experiment Setup | Yes | Unless otherwise specified, we use 3 gradient descent steps for latent optimisation. More details, including hyperparameter values, are reported in the Appendix. |
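The "3 gradient descent steps for latent optimisation" quoted above refers to the inner loop of Algorithm 1: reconstruction is performed by optimising a latent code so that the measured generator output matches the observed measurements. A minimal sketch of that inner loop is below, using a linear stand-in `G(z) = W @ z` for the deep generator and a hand-derived gradient; the function name, matrices, and step size are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def latent_optimisation(m, F, W, z0, steps=3, lr=0.1):
    """Toy version of the latent-optimisation inner loop (sketch).

    m  : observed measurements, m = F @ x for some signal x
    F  : measurement matrix
    W  : weights of a linear stand-in generator G(z) = W @ z
         (the paper uses a deep network and autodiff instead)
    z0 : initial latent code

    Minimises ||F @ G(z) - m||^2 with a few gradient steps,
    mirroring the paper's choice of only 3 steps.
    """
    z = z0.copy()
    for _ in range(steps):
        residual = F @ (W @ z) - m           # F G(z) - m
        grad = 2.0 * W.T @ (F.T @ residual)  # d/dz ||F W z - m||^2
        z = z - lr * grad
    return z
```

In the paper the measurement matrix itself is (meta-)learned and the generator is deep, so the gradient comes from automatic differentiation rather than the closed form used here; the point of the sketch is only the structure of the few-step inner loop.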