Inverting Deep Generative Models, One Layer at a Time
Authors: Qi Lei, Ajil Jalal, Inderjit S. Dhillon, Alexandros G. Dimakis
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical validation demonstrates that we obtain better reconstructions when the latent dimension is large. In this section, we describe our experimental setup and report the performance comparisons of our algorithms with the gradient descent method [15, 12]. We conduct simulations in various aspects with Gaussian random weights, and a simple GAN architecture with the MNIST dataset to show that our approach can work in practice for the denoising problem. |
| Researcher Affiliation | Collaboration | Qi Lei, Ajil Jalal, Inderjit S. Dhillon, and Alexandros G. Dimakis (UT Austin, Amazon); {leiqi@oden., ajiljalal@, inderjit@cs., dimakis@austin.}utexas.edu |
| Pseudocode | Yes | Algorithm 1: Linear programming to invert a single layer with ℓ∞ error bound (ℓ∞ LP) and Algorithm 2: Linear programming to invert a single layer with ℓ1 error bound (ℓ1 LP). A hedged sketch of the ℓ∞ LP appears below the table. |
| Open Source Code | Yes | The code to reproduce our results could be found here: https://github.com/cecilialeiqi/Invert_GAN_LP |
| Open Datasets | Yes | We train the network using the original Generative Adversarial Network [8]. We conduct experiments on a real generative network with the MNIST dataset. |
| Dataset Splits | No | The paper does not explicitly state training, validation, or test dataset splits or proportions. It mentions using the MNIST dataset, which has standard splits, but these are not detailed in the paper itself. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory used for running the experiments. It only mentions general experimental setups. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. While it references common machine learning frameworks implicitly through citations (e.g., GANs suggest PyTorch/TensorFlow), no explicit software versions are listed. |
| Experiment Setup | Yes | For our methods, we choose the scaling factor α = 1.2. With gradient descent, we use a learning rate of 1 and up to 1,000 iterations or until the gradient norm is no more than 10⁻⁹. Under this setting, we choose the learning rate to be 10⁻³ and the number of iterations up to 10,000 (or until the gradient norm is below 10⁻⁹). A hedged sketch of this gradient-descent baseline appears below the table. |
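
The Pseudocode row cites Algorithm 1 (ℓ∞ LP), which inverts a single ReLU layer z = ReLU(Wx) by linear programming. Below is a minimal sketch of that idea as we understand it: match the "on" coordinates of z within an ℓ∞ slack and keep the "off" coordinates non-positive up to the same slack, minimizing the slack. This is not the authors' implementation (that lives in the linked repository); the SciPy solver choice and the toy Gaussian layer are our assumptions.

```python
import numpy as np
from scipy.optimize import linprog


def invert_relu_layer_linf(W, z):
    """Sketch of single-layer ReLU inversion via an l_inf linear program.

    Solves   min_{x, eps} eps
      s.t.   |w_i^T x - z_i| <= eps   for i with z_i > 0   ("on" neurons)
              w_i^T x        <= eps   for i with z_i == 0  ("off" neurons)
    """
    n, k = W.shape                      # layer maps R^k -> R^n, z = ReLU(W @ x)
    c = np.zeros(k + 1)                 # decision variable is [x, eps]
    c[-1] = 1.0                         # minimize eps

    A_ub, b_ub = [], []
    for i in range(n):
        if z[i] > 0:
            # w_i^T x - eps <= z_i  and  -w_i^T x - eps <= -z_i
            A_ub.append(np.concatenate([W[i], [-1.0]])); b_ub.append(z[i])
            A_ub.append(np.concatenate([-W[i], [-1.0]])); b_ub.append(-z[i])
        else:
            # w_i^T x - eps <= 0 : "off" neurons may only be slightly positive
            A_ub.append(np.concatenate([W[i], [-1.0]])); b_ub.append(0.0)

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * k + [(0.0, None)], method="highs")
    return res.x[:k], res.x[-1]         # recovered input, achieved l_inf slack


# Toy check with a random Gaussian layer, in the spirit of the paper's simulations.
rng = np.random.default_rng(0)
W = rng.normal(size=(60, 20)) / np.sqrt(20)
x_true = rng.normal(size=20)
z = np.maximum(W @ x_true, 0.0)
x_hat, eps = invert_relu_layer_linf(W, z)
print(np.max(np.abs(x_hat - x_true)), eps)  # both should be near zero
```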
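
The Experiment Setup row quotes a learning rate of 10⁻³, up to 10,000 iterations, and a 10⁻⁹ gradient-norm stopping rule for the gradient-descent baseline. A minimal sketch of such a baseline is below, assuming a PyTorch-style generator `G`; the framework, the zero initialization, and the plain SGD optimizer are our assumptions rather than details stated in the paper.

```python
import torch


def invert_by_gradient_descent(G, y, dim_x, lr=1e-3, max_iters=10_000, tol=1e-9):
    """Minimize ||G(x) - y||^2 over the latent x with plain gradient descent.

    The learning rate, iteration budget, and gradient-norm stopping rule follow
    the Experiment Setup row; the zero initialization and SGD optimizer are
    assumptions made for this sketch.
    """
    x = torch.zeros(dim_x, requires_grad=True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(max_iters):
        opt.zero_grad()
        loss = torch.sum((G(x) - y) ** 2)
        loss.backward()
        if x.grad.norm() <= tol:        # stop once the gradient is negligible
            break
        opt.step()
    return x.detach()
```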