Fast and Provable ADMM for Learning with Generative Priors

Authors: Fabian Latorre, Armin Eftekhari, Volkan Cevher

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate this algorithm numerically in the context of denoising with GANs in the presence of adversarial or stochastic noise, as well as compressive sensing." "In this section we evaluate our algorithms for image recovery tasks with a generative prior."
Researcher Affiliation | Academia | "Fabian Latorre, Armin Eftekhari and Volkan Cevher, Laboratory for Information and Inference Systems (LIONS), EPFL, Lausanne, Switzerland, {firstname.lastname}@epfl.ch"
Pseudocode | Yes | "Algorithm 1: Linearized ADMM for solving problem (1)." The pseudocode for Algorithm 2 is given in Supplementary I. (A hedged sketch of one linearized-ADMM iteration appears after this table.)
Open Source Code | No | The paper does not provide a direct link to the source code or an explicit statement about its public availability. It mentions 'Supplementary I' for pseudocode, but not for runnable code.
Open Datasets | Yes | "The datasets we consider are the CelebA dataset of face images [Liu et al., 2015] and the MNIST dataset of handwritten digits [LeCun and Cortes, 2010]."
Dataset Splits | No | The paper mentions a 'test set' but does not specify exact train/validation/test splits, percentages, or a methodology for data partitioning.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments are mentioned in the paper.
Software Dependencies | No | The paper mentions frameworks and optimizers (e.g., the 'Wasserstein GAN framework' and the 'ADAM optimizer') but does not specify version numbers or any other software dependencies with versions.
Experiment Setup | Yes | "For the CelebA dataset we downsample the images to 64x64 pixels as in Gulrajani et al. [2017] and we use the same residual architecture [He et al., 2015] for the generator, with four residual blocks followed by a convolutional layer. For MNIST, we use the same architecture as the one in Gulrajani et al. [2017], which contains one fully connected layer followed by three deconvolutional layers. We compare the ADAM optimizer [Kingma and Ba, 2014], GD and ADMM (450 iterations for GD and ADAM, and 300 iterations for EADMM)." (A hedged sketch of the MNIST generator architecture appears after this table.)
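
As referenced in the Pseudocode row, the following is a minimal, hypothetical sketch of a linearized-ADMM loop for recovery with a generative prior, assuming the common splitting min_{x,z} f(x) subject to x = G(z). The splitting, step sizes, and update order here are illustrative assumptions, not the paper's exact Algorithm 1, and the Jacobian-vector-product helper `jac_G_T` is a hypothetical name.

```python
import numpy as np

def linearized_admm(grad_f, G, jac_G_T, x0, z0,
                    rho=1.0, alpha=1e-3, beta=1e-3, n_iter=300):
    """Sketch of a linearized-ADMM loop for
        min_{x, z} f(x)  subject to  x = G(z),
    where G is a differentiable generator.

    grad_f(x)      -- gradient of the data-fidelity term f
    G(z)           -- generator output for latent code z
    jac_G_T(z, v)  -- Jacobian-transpose-vector product J_G(z)^T v
                      (in practice computed via autodiff)

    Augmented Lagrangian (assumed form):
        L(x, z, lam) = f(x) + <lam, x - G(z)> + (rho/2) ||x - G(z)||^2.
    Each primal update is a single gradient step on L (the
    "linearization"); step sizes are illustrative only.
    """
    x, z, lam = x0.copy(), z0.copy(), np.zeros_like(x0)
    for _ in range(n_iter):
        # x-update: one gradient step on L with respect to x
        x = x - alpha * (grad_f(x) + lam + rho * (x - G(z)))
        # z-update: one gradient step on L with respect to z;
        # grad_z L = -J_G(z)^T (lam + rho * (x - G(z)))
        r = lam + rho * (x - G(z))
        z = z + beta * jac_G_T(z, r)
        # dual ascent on the consensus constraint x = G(z)
        lam = lam + rho * (x - G(z))
    return x, z
```

For the denoising task the paper evaluates, with a noisy observation y, one would take f(x) = (1/2)||x - y||^2, i.e. `grad_f = lambda x: x - y`.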
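
As referenced in the Experiment Setup row, here is a hypothetical PyTorch sketch of the MNIST generator described there: one fully connected layer followed by three deconvolutional layers, after Gulrajani et al. [2017]. The latent dimension, channel widths, kernel sizes, and the 7x7 crop follow the public WGAN-GP reference code and are assumptions, not details stated in the paper.

```python
import torch
import torch.nn as nn

class MNISTGenerator(nn.Module):
    """Sketch of the MNIST generator: one fully connected layer
    followed by three deconvolutional layers (after Gulrajani et
    al., 2017). Hyperparameters are assumed, not from the paper."""

    def __init__(self, latent_dim=128, dim=64):
        super().__init__()
        self.dim = dim
        self.fc = nn.Linear(latent_dim, 4 * 4 * 4 * dim)
        self.deconv1 = nn.ConvTranspose2d(4 * dim, 2 * dim, 5, stride=2,
                                          padding=2, output_padding=1)
        self.deconv2 = nn.ConvTranspose2d(2 * dim, dim, 5, stride=2,
                                          padding=2, output_padding=1)
        self.deconv3 = nn.ConvTranspose2d(dim, 1, 5, stride=2,
                                          padding=2, output_padding=1)

    def forward(self, z):
        h = torch.relu(self.fc(z)).view(-1, 4 * self.dim, 4, 4)
        h = torch.relu(self.deconv1(h))[:, :, :7, :7]  # 4x4 -> 8x8, crop to 7x7
        h = torch.relu(self.deconv2(h))                # 7x7 -> 14x14
        return torch.sigmoid(self.deconv3(h))          # 14x14 -> 28x28


# Usage: map a batch of latent codes to 28x28 images.
g = MNISTGenerator()
x = g(torch.randn(8, 128))  # shape (8, 1, 28, 28)
```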