Robust compressed sensing using generative models

Authors: Ajil Jalal, Liu Liu, Alexandros G. Dimakis, Constantine Caramanis

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we study the empirical performance of our algorithm on generative models trained on real image datasets. We show that we can reconstruct images under heavy-tailed samples and arbitrary outliers. For additional experiments and experimental setup details, see Appendix F.
Researcher Affiliation | Academia | Ajil Jalal (ECE, UT Austin, ajiljalal@utexas.edu); Liu Liu (ECE, UT Austin, liuliu@utexas.edu); Alexandros G. Dimakis (ECE, UT Austin, dimakis@austin.utexas.edu); Constantine Caramanis (ECE, UT Austin, constantine@utexas.edu)
Pseudocode | Yes | Algorithm 1: Robust compressed sensing of generative models.
1: Input: data samples {y_j, a_j}_{j=1}^m.
2: Output: G(ẑ).
3: Parameters: number of batches M.
4: Initialize z and z′.
5: for t = 0 to T − 1 do
6: For each batch j ∈ [M], calculate (1/|B_j|)(ℓ_j(z) − ℓ_j(z′)) by eq. (1).
7: Pick the batch attaining the median loss median_{1 ≤ j ≤ M} (ℓ_j(z) − ℓ_j(z′)), and evaluate the gradients for z and z′ using backpropagation on that batch: (i) perform gradient descent on z; (ii) perform gradient ascent on z′.
8: end for
9: Output G(ẑ) = G(z).
Open Source Code | Yes | Link to our code: https://github.com/ajiljalal/csgm-robust-neurips
Open Datasets | Yes | We trained a DCGAN [80] with k = 100 and d = 5 layers to produce 64 × 64 MNIST images. For CelebA-HQ, we used a PG-GAN [51] with k = 512 to produce images of size 256 × 256 × 3 = 196,608.
Dataset Splits | No | The paper uses standard datasets (MNIST, CelebA-HQ) but does not explicitly provide train/validation/test splits (e.g., percentages or sample counts) in the main text for its experiments.
Hardware Specification | No | The paper mentions 'computing resources from TACC' but does not provide specific details such as GPU/CPU models, memory, or other hardware specifications used for the experiments.
Software Dependencies | No | The paper does not specify software dependencies with version numbers. It refers to general frameworks like DCGAN [80] and PG-GAN [51] and the Adam optimizer [52], but without explicit version details for libraries or environments.
Experiment Setup | Yes | For additional experiments and experimental setup details, see Appendix F. We fix k = 100 for the MNIST dataset and k = 512 for the CelebA-HQ dataset. We set the outliers of the measurement matrix A to be a random sign matrix, and the outliers of y are fixed to 1. We fix the number of measurements to m = 1000.
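The median-of-batches descent/ascent loop of Algorithm 1 can be sketched as follows. This is a minimal toy sketch, not the paper's implementation: it swaps the trained DCGAN/PG-GAN generator for a hypothetical linear stand-in `Gmat @ z` (so gradients are analytic rather than backpropagated), and all dimensions, the learning rate, and the outlier pattern are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: a linear "generator" G(z) = Gmat @ z instead of a trained GAN.
n, k, m, M = 50, 5, 200, 9          # ambient dim, latent dim, measurements, batches
Gmat = rng.normal(size=(n, k))
z_true = rng.normal(size=k)
x_true = Gmat @ z_true

A = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian measurement matrix
y = A @ x_true
y[:10] = 1.0                               # a few arbitrary outliers fixed to 1

batches = np.array_split(np.arange(m), M)

def batch_loss(v, idx):
    """Mean squared residual on batch B_j: (1/|B_j|) * ||y_Bj - A_Bj G(v)||^2."""
    r = y[idx] - A[idx] @ (Gmat @ v)
    return np.mean(r ** 2)

def batch_grad(v, idx):
    """Analytic gradient of batch_loss w.r.t. the latent code v."""
    r = y[idx] - A[idx] @ (Gmat @ v)
    return -2.0 * Gmat.T @ (A[idx].T @ r) / len(idx)

z = np.zeros(k)
zp = rng.normal(size=k)                    # the adversarial iterate z'
lr = 0.2
for t in range(500):
    diffs = np.array([batch_loss(z, b) - batch_loss(zp, b) for b in batches])
    j = np.argsort(diffs)[M // 2]          # batch attaining the median loss difference
    z -= lr * batch_grad(z, batches[j])    # gradient descent on z
    zp -= lr * batch_grad(zp, batches[j])  # ascent on the objective, i.e. descent on l(z')

x_hat = Gmat @ z                           # reconstruction G(z_hat)
```

Because the outliers are concentrated in a few batches, the batch attaining the median loss difference is typically uncorrupted, so the gradient step is computed on clean measurements; this is the mechanism that gives the algorithm its robustness.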